diff --git a/.gitmodules b/.gitmodules index de7250166b8..f7dcf5f4ac1 100644 --- a/.gitmodules +++ b/.gitmodules @@ -133,7 +133,7 @@ url = https://github.com/unicode-org/icu.git [submodule "contrib/flatbuffers"] path = contrib/flatbuffers - url = https://github.com/google/flatbuffers.git + url = https://github.com/ClickHouse-Extras/flatbuffers.git [submodule "contrib/libc-headers"] path = contrib/libc-headers url = https://github.com/ClickHouse-Extras/libc-headers.git @@ -221,6 +221,9 @@ [submodule "contrib/NuRaft"] path = contrib/NuRaft url = https://github.com/ClickHouse-Extras/NuRaft.git +[submodule "contrib/nanodbc"] + path = contrib/nanodbc + url = https://github.com/ClickHouse-Extras/nanodbc.git [submodule "contrib/datasketches-cpp"] path = contrib/datasketches-cpp url = https://github.com/ClickHouse-Extras/datasketches-cpp.git diff --git a/CHANGELOG.md b/CHANGELOG.md index 43531b60267..cc1ec835a7b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,156 @@ +## ClickHouse release 21.4 + +### ClickHouse release 21.4.1 2021-04-12 + +#### Backward Incompatible Change + +* The `toStartOfInterval` function will align hour intervals to midnight (in previous versions they were aligned to the start of unix epoch). For example, `toStartOfInterval(x, INTERVAL 11 HOUR)` will split every day into three intervals: `00:00:00..10:59:59`, `11:00:00..21:59:59` and `22:00:00..23:59:59`. This behaviour is more suited for practical needs. This closes [#9510](https://github.com/ClickHouse/ClickHouse/issues/9510). [#22060](https://github.com/ClickHouse/ClickHouse/pull/22060) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* `Age` and `Precision` in graphite rollup configs should increase from retention to retention. Now it's checked and the wrong config raises an exception. [#21496](https://github.com/ClickHouse/ClickHouse/pull/21496) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Fix `cutToFirstSignificantSubdomainCustom()`/`firstSignificantSubdomainCustom()` returning wrong result for 3+ level domains present in custom top-level domain list. For input domains matching these custom top-level domains, the third-level domain was considered to be the first significant one. This is now fixed. This change may introduce incompatibility if the function is used in e.g. the sharding key. [#21946](https://github.com/ClickHouse/ClickHouse/pull/21946) ([Azat Khuzhin](https://github.com/azat)). +* Column `keys` in table `system.dictionaries` was replaced by columns `key.names` and `key.types`. Columns `key.names`, `key.types`, `attribute.names`, `attribute.types` from the `system.dictionaries` table do not require the dictionary to be loaded. [#21884](https://github.com/ClickHouse/ClickHouse/pull/21884) ([Maksim Kita](https://github.com/kitaisreal)). +* Now replicas that are processing the `ALTER TABLE ATTACH PART[ITION]` command search in their `detached/` folders before fetching the data from other replicas. As an implementation detail, a new command `ATTACH_PART` is introduced in the replicated log. Parts are searched and compared by their checksums. [#18978](https://github.com/ClickHouse/ClickHouse/pull/18978) ([Mike Kot](https://github.com/myrrc)). **Note**: + * `ATTACH PART[ITION]` queries may not work during cluster upgrade. + * It's not possible to roll back to an older ClickHouse version after executing an `ALTER ... ATTACH` query in the new version, as the old servers would fail to pass the `ATTACH_PART` entry in the replicated log.
+* In this version, an empty `remote_url_allow_hosts` element will block all access to remote hosts, while in previous versions it did nothing. If you want to keep the old behaviour and you have an empty `remote_url_allow_hosts` element in the configuration file, remove it. [#20058](https://github.com/ClickHouse/ClickHouse/pull/20058) ([Vladimir Chebotarev](https://github.com/excitoon)). + + +#### New Feature + +* Extended range of `DateTime64` to support dates from year 1925 to 2283. Improved support of `DateTime` around zero date (`1970-01-01`). [#9404](https://github.com/ClickHouse/ClickHouse/pull/9404) ([alexey-milovidov](https://github.com/alexey-milovidov), [Vasily Nemkov](https://github.com/Enmk)). Not all time and date functions work for the extended range of dates. +* Added support of Kerberos authentication for preconfigured users and HTTP requests (GSS-SPNEGO). [#14995](https://github.com/ClickHouse/ClickHouse/pull/14995) ([Denis Glazachev](https://github.com/traceon)). +* Add `prefer_column_name_to_alias` setting to use original column names instead of aliases. It is needed to be more compatible with common databases' aliasing rules. This is for [#9715](https://github.com/ClickHouse/ClickHouse/issues/9715) and [#9887](https://github.com/ClickHouse/ClickHouse/issues/9887). [#22044](https://github.com/ClickHouse/ClickHouse/pull/22044) ([Amos Bird](https://github.com/amosbird)). +* Added functions `dictGetChildren(dictionary, key)`, `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` returns all children as an array of indexes. It is an inverse transformation for `dictGetHierarchy`. Function `dictGetDescendants` returns all descendants as if `dictGetChildren` was applied `level` times recursively. Zero `level` value is equivalent to infinity. Closes [#14656](https://github.com/ClickHouse/ClickHouse/issues/14656). [#22096](https://github.com/ClickHouse/ClickHouse/pull/22096) ([Maksim Kita](https://github.com/kitaisreal)). +* Added `executable_pool` dictionary source. Closes [#14528](https://github.com/ClickHouse/ClickHouse/issues/14528). [#21321](https://github.com/ClickHouse/ClickHouse/pull/21321) ([Maksim Kita](https://github.com/kitaisreal)). +* Added table function `dictionary`. It works the same way as the `Dictionary` engine. Closes [#21560](https://github.com/ClickHouse/ClickHouse/issues/21560). [#21910](https://github.com/ClickHouse/ClickHouse/pull/21910) ([Maksim Kita](https://github.com/kitaisreal)). +* Support `Nullable` type for `PolygonDictionary` attribute. [#21890](https://github.com/ClickHouse/ClickHouse/pull/21890) ([Maksim Kita](https://github.com/kitaisreal)). +* Functions `dictGet`, `dictHas` use the current database name if it is not specified for dictionaries created with DDL. Closes [#21632](https://github.com/ClickHouse/ClickHouse/issues/21632). [#21859](https://github.com/ClickHouse/ClickHouse/pull/21859) ([Maksim Kita](https://github.com/kitaisreal)). +* Added function `dictGetOrNull`. It works like `dictGet`, but returns `Null` in case the key was not found in the dictionary. Closes [#22375](https://github.com/ClickHouse/ClickHouse/issues/22375). [#22413](https://github.com/ClickHouse/ClickHouse/pull/22413) ([Maksim Kita](https://github.com/kitaisreal)). +* Added async update in `ComplexKeyCache`, `SSDCache`, `SSDComplexKeyCache` dictionaries. Added support for `Nullable` type in `Cache`, `ComplexKeyCache`, `SSDCache`, `SSDComplexKeyCache` dictionaries. Added support for fetching multiple attributes with `dictGet`, `dictGetOrDefault` functions.
Fixes [#21517](https://github.com/ClickHouse/ClickHouse/issues/21517). [#20595](https://github.com/ClickHouse/ClickHouse/pull/20595) ([Maksim Kita](https://github.com/kitaisreal)). +* Support `dictHas` function for `RangeHashedDictionary`. Fixes [#6680](https://github.com/ClickHouse/ClickHouse/issues/6680). [#19816](https://github.com/ClickHouse/ClickHouse/pull/19816) ([Maksim Kita](https://github.com/kitaisreal)). +* Add function `timezoneOf` that returns the timezone name of `DateTime` or `DateTime64` data types. This does not close [#9959](https://github.com/ClickHouse/ClickHouse/issues/9959). Fix inconsistencies in function names: add aliases `timezone` and `timeZone` as well as `toTimezone` and `toTimeZone` and `timezoneOf` and `timeZoneOf`. [#22001](https://github.com/ClickHouse/ClickHouse/pull/22001) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Add new optional clause `GRANTEES` for `CREATE/ALTER USER` commands. It specifies users or roles which are allowed to receive grants from this user on condition this user has also all required access granted with grant option. By default `GRANTEES ANY` is used which means a user with grant option can grant to anyone. Syntax: `CREATE USER ... GRANTEES {user | role | ANY | NONE} [,...] [EXCEPT {user | role} [,...]]`. [#21641](https://github.com/ClickHouse/ClickHouse/pull/21641) ([Vitaly Baranov](https://github.com/vitlibar)). +* Add new column `slowdowns_count` to `system.clusters`. When using hedged requests, it shows how many times we switched to another replica because this replica was responding slowly. Also show actual value of `errors_count` in `system.clusters`. [#21480](https://github.com/ClickHouse/ClickHouse/pull/21480) ([Kruglov Pavel](https://github.com/Avogar)). +* Add `_partition_id` virtual column for `MergeTree*` engines. Allow to prune partitions by `_partition_id`. Add `partitionID()` function to calculate partition id string. [#21401](https://github.com/ClickHouse/ClickHouse/pull/21401) ([Amos Bird](https://github.com/amosbird)). +* Add function `isIPAddressInRange` to test if an IPv4 or IPv6 address is contained in a given CIDR network prefix. [#21329](https://github.com/ClickHouse/ClickHouse/pull/21329) ([PHO](https://github.com/depressed-pho)). +* Added new SQL command `ALTER TABLE 'table_name' UNFREEZE [PARTITION 'part_expr'] WITH NAME 'backup_name'`. This command is needed to properly remove 'freezed' partitions from all disks. [#21142](https://github.com/ClickHouse/ClickHouse/pull/21142) ([Pavel Kovalenko](https://github.com/Jokser)). +* Supports implicit key type conversion for JOIN. [#19885](https://github.com/ClickHouse/ClickHouse/pull/19885) ([Vladimir](https://github.com/vdimir)). + +#### Experimental Feature + +* Support `RANGE OFFSET` frame (for window functions) for floating point types. Implement `lagInFrame`/`leadInFrame` window functions, which are analogous to `lag`/`lead`, but respect the window frame. They are identical when the frame is `between unbounded preceding and unbounded following`. This closes [#5485](https://github.com/ClickHouse/ClickHouse/issues/5485). [#21895](https://github.com/ClickHouse/ClickHouse/pull/21895) ([Alexander Kuzmenkov](https://github.com/akuzm)). +* Zero-copy replication for `ReplicatedMergeTree` over S3 storage. [#16240](https://github.com/ClickHouse/ClickHouse/pull/16240) ([ianton-ru](https://github.com/ianton-ru)). +* Added possibility to migrate existing S3 disk to the schema with backup-restore capabilities. 
[#22070](https://github.com/ClickHouse/ClickHouse/pull/22070) ([Pavel Kovalenko](https://github.com/Jokser)). + +#### Performance Improvement + +* Supported parallel formatting in `clickhouse-local` and everywhere else. [#21630](https://github.com/ClickHouse/ClickHouse/pull/21630) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Support parallel parsing for `CSVWithNames` and `TSVWithNames` formats. This closes [#21085](https://github.com/ClickHouse/ClickHouse/issues/21085). [#21149](https://github.com/ClickHouse/ClickHouse/pull/21149) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Enable read with mmap IO for file ranges from 64 MiB (the setting `min_bytes_to_use_mmap_io`). It may lead to a moderate performance improvement. [#22326](https://github.com/ClickHouse/ClickHouse/pull/22326) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Add cache for files read with `min_bytes_to_use_mmap_io` setting. It gives a significant (2x and more) performance improvement when the value of the setting is small, by avoiding frequent mmap/munmap calls and the consequent page faults. Note that mmap IO has major drawbacks that make it less reliable in production (e.g. hung or SIGBUS on faulty disks; less controllable memory usage). Nevertheless it is good in benchmarks. [#22206](https://github.com/ClickHouse/ClickHouse/pull/22206) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Avoid unnecessary data copy when using codec `NONE`. Please note that codec `NONE` is mostly useless - it's recommended to always use compression (`LZ4` is the default). Despite the common belief, disabling compression may not improve performance (the opposite effect is possible). The `NONE` codec is useful in some cases: - when data is incompressible; - for synthetic benchmarks. [#22145](https://github.com/ClickHouse/ClickHouse/pull/22145) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Faster `GROUP BY` with small `max_rows_to_group_by` and `group_by_overflow_mode='any'`. [#21856](https://github.com/ClickHouse/ClickHouse/pull/21856) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Optimize performance of queries like `SELECT ... FINAL ... WHERE`. Now in queries with `FINAL` it's allowed to move columns that are in the sorting key to `PREWHERE`. [#21830](https://github.com/ClickHouse/ClickHouse/pull/21830) ([foolchi](https://github.com/foolchi)). +* Improved performance by replacing `memcpy` with another implementation. This closes [#18583](https://github.com/ClickHouse/ClickHouse/issues/18583). [#21520](https://github.com/ClickHouse/ClickHouse/pull/21520) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Improve performance of aggregation in order of sorting key (with the setting `optimize_aggregation_in_order` enabled). [#19401](https://github.com/ClickHouse/ClickHouse/pull/19401) ([Anton Popov](https://github.com/CurtizJ)). + +#### Improvement + +* Add connection pool for PostgreSQL table/database engine and dictionary source. Should fix [#21444](https://github.com/ClickHouse/ClickHouse/issues/21444). [#21839](https://github.com/ClickHouse/ClickHouse/pull/21839) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Support non-default table schema for postgres storage/table-function. Closes [#21701](https://github.com/ClickHouse/ClickHouse/issues/21701). [#21711](https://github.com/ClickHouse/ClickHouse/pull/21711) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Support replica priority for the postgres dictionary source.
[#21710](https://github.com/ClickHouse/ClickHouse/pull/21710) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Introduce a new merge tree setting `min_bytes_to_rebalance_partition_over_jbod` which allows assigning new parts to different disks of a JBOD volume in a balanced way. [#16481](https://github.com/ClickHouse/ClickHouse/pull/16481) ([Amos Bird](https://github.com/amosbird)). +* Added `Grant`, `Revoke` and `System` values of `query_kind` column for corresponding queries in `system.query_log`. [#21102](https://github.com/ClickHouse/ClickHouse/pull/21102) ([Vasily Nemkov](https://github.com/Enmk)). +* Allow customizing timeouts for HTTP connections used for replication independently from other HTTP timeouts. [#20088](https://github.com/ClickHouse/ClickHouse/pull/20088) ([nvartolomei](https://github.com/nvartolomei)). +* Better exception message in client in case of exception while server is writing blocks. In previous versions client may get misleading message like `Data compressed with different methods`. [#22427](https://github.com/ClickHouse/ClickHouse/pull/22427) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fix error `Directory tmp_fetch_XXX already exists` which could happen after failed fetch part. Delete temporary fetch directory if it already exists. Fixes [#14197](https://github.com/ClickHouse/ClickHouse/issues/14197). [#22411](https://github.com/ClickHouse/ClickHouse/pull/22411) ([nvartolomei](https://github.com/nvartolomei)). +* Fix MSan report for function `range` with `UInt256` argument (support for large integers is experimental). This closes [#22157](https://github.com/ClickHouse/ClickHouse/issues/22157). [#22387](https://github.com/ClickHouse/ClickHouse/pull/22387) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Add `current_database` column to `system.processes` table. It contains the current database of the query. [#22365](https://github.com/ClickHouse/ClickHouse/pull/22365) ([Alexander Kuzmenkov](https://github.com/akuzm)). +* Add case-insensitive history search/navigation and subword movement features to `clickhouse-client`. [#22105](https://github.com/ClickHouse/ClickHouse/pull/22105) ([Amos Bird](https://github.com/amosbird)). +* If tuple of NULLs, e.g. `(NULL, NULL)` is on the left hand side of `IN` operator with tuples of non-NULLs on the right hand side, e.g. `SELECT (NULL, NULL) IN ((0, 0), (3, 1))` return 0 instead of throwing an exception about incompatible types. The expression may also appear due to optimization of something like `SELECT (NULL, NULL) = (8, 0) OR (NULL, NULL) = (3, 2) OR (NULL, NULL) = (0, 0) OR (NULL, NULL) = (3, 1)`. This closes [#22017](https://github.com/ClickHouse/ClickHouse/issues/22017). [#22063](https://github.com/ClickHouse/ClickHouse/pull/22063) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Update used version of simdjson to 0.9.1. This fixes [#21984](https://github.com/ClickHouse/ClickHouse/issues/21984). [#22057](https://github.com/ClickHouse/ClickHouse/pull/22057) ([Vitaly Baranov](https://github.com/vitlibar)). +* Added case insensitive aliases for `CONNECTION_ID()` and `VERSION()` functions. This fixes [#22028](https://github.com/ClickHouse/ClickHouse/issues/22028). [#22042](https://github.com/ClickHouse/ClickHouse/pull/22042) ([Eugene Klimov](https://github.com/Slach)). +* Add option `strict_increase` to `windowFunnel` function to calculate each event once (resolve [#21835](https://github.com/ClickHouse/ClickHouse/issues/21835)). 
[#22025](https://github.com/ClickHouse/ClickHouse/pull/22025) ([Vladimir](https://github.com/vdimir)). +* If partition key of a `MergeTree` table does not include `Date` or `DateTime` columns but includes exactly one `DateTime64` column, expose its values in the `min_time` and `max_time` columns in `system.parts` and `system.parts_columns` tables. Add `min_time` and `max_time` columns to `system.parts_columns` table (previously this was inconsistent with the `system.parts` table). This closes [#18244](https://github.com/ClickHouse/ClickHouse/issues/18244). [#22011](https://github.com/ClickHouse/ClickHouse/pull/22011) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Supported `replication_alter_partitions_sync=1` setting in `clickhouse-copier` for moving partitions from the helper table to the destination. Decreased default timeouts. Fixes [#21911](https://github.com/ClickHouse/ClickHouse/issues/21911). [#21912](https://github.com/ClickHouse/ClickHouse/pull/21912) ([turbo jason](https://github.com/songenjie)). +* Show path to data directory of `EmbeddedRocksDB` tables in system tables. [#21903](https://github.com/ClickHouse/ClickHouse/pull/21903) ([tavplubix](https://github.com/tavplubix)). +* Add profile event `HedgedRequestsChangeReplica`, change read data timeout from seconds to milliseconds. [#21886](https://github.com/ClickHouse/ClickHouse/pull/21886) ([Kruglov Pavel](https://github.com/Avogar)). +* DiskS3 (experimental feature under development). Fixed a bug that made it impossible to move a directory if the destination is not empty and a cache disk is used. [#21837](https://github.com/ClickHouse/ClickHouse/pull/21837) ([Pavel Kovalenko](https://github.com/Jokser)). +* Better formatting for `Array` and `Map` data types in Web UI. [#21798](https://github.com/ClickHouse/ClickHouse/pull/21798) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Update clusters only if their configurations were updated. [#21685](https://github.com/ClickHouse/ClickHouse/pull/21685) ([Kruglov Pavel](https://github.com/Avogar)). +* Propagate query and session settings for distributed DDL queries. Set `distributed_ddl_entry_format_version` to 2 to enable this. Added `distributed_ddl_output_mode` setting. Supported modes: `none`, `throw` (default), `null_status_on_timeout` and `never_throw`. Miscellaneous fixes and improvements for `Replicated` database engine. [#21535](https://github.com/ClickHouse/ClickHouse/pull/21535) ([tavplubix](https://github.com/tavplubix)). +* If `PODArray` was instantiated with an element size that is neither a fraction nor a multiple of 16, buffer overflow was possible. No bugs in current releases exist. [#21533](https://github.com/ClickHouse/ClickHouse/pull/21533) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Add `last_error_time`/`last_error_message`/`last_error_stacktrace`/`remote` columns for `system.errors`. [#21529](https://github.com/ClickHouse/ClickHouse/pull/21529) ([Azat Khuzhin](https://github.com/azat)). +* Add aliases `simpleJSONExtract/simpleJSONHas` to `visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}`. Fixes #21383. [#21519](https://github.com/ClickHouse/ClickHouse/pull/21519) ([fastio](https://github.com/fastio)). +* Add setting `optimize_skip_unused_shards_limit` to limit the number of sharding key values for `optimize_skip_unused_shards`. [#21512](https://github.com/ClickHouse/ClickHouse/pull/21512) ([Azat Khuzhin](https://github.com/azat)).
+* Improve `clickhouse-format` to not throw an exception when there are extra spaces or a comment after the last query, and to throw an exception early with a readable message when formatting `ASTInsertQuery` with data. [#21311](https://github.com/ClickHouse/ClickHouse/pull/21311) ([flynn](https://github.com/ucasFL)). +* Improve support of integer keys in data type `Map`. [#21157](https://github.com/ClickHouse/ClickHouse/pull/21157) ([Anton Popov](https://github.com/CurtizJ)). +* MaterializeMySQL: attempt to reconnect to MySQL if the connection is lost. [#20961](https://github.com/ClickHouse/ClickHouse/pull/20961) ([Håvard Kvålen](https://github.com/havardk)). +* Support more cases to rewrite `CROSS JOIN` to `INNER JOIN`. [#20392](https://github.com/ClickHouse/ClickHouse/pull/20392) ([Vladimir](https://github.com/vdimir)). +* Do not create empty parts on INSERT when the `optimize_on_insert` setting is enabled. Fixes [#20304](https://github.com/ClickHouse/ClickHouse/issues/20304). [#20387](https://github.com/ClickHouse/ClickHouse/pull/20387) ([Kruglov Pavel](https://github.com/Avogar)). +* `MaterializeMySQL`: add minmax skipping index for `_version` column. [#20382](https://github.com/ClickHouse/ClickHouse/pull/20382) ([Stig Bakken](https://github.com/stigsb)). +* Add option `--backslash` for `clickhouse-format`, which can add a backslash at the end of each line of the formatted query. [#21494](https://github.com/ClickHouse/ClickHouse/pull/21494) ([flynn](https://github.com/ucasFL)). +* Now ClickHouse will not throw a `LOGICAL_ERROR` exception when we try to mutate a part that is already covered. Fixes [#22013](https://github.com/ClickHouse/ClickHouse/issues/22013). [#22291](https://github.com/ClickHouse/ClickHouse/pull/22291) ([alesapin](https://github.com/alesapin)). + +#### Bug Fix + +* Remove socket from epoll before cancelling packet receiver in `HedgedConnections` to prevent a possible race. Fixes [#22161](https://github.com/ClickHouse/ClickHouse/issues/22161). [#22443](https://github.com/ClickHouse/ClickHouse/pull/22443) ([Kruglov Pavel](https://github.com/Avogar)). +* Add (missing) memory accounting in parallel parsing routines. In previous versions OOM was possible when the result set contained very large blocks of data. This closes [#22008](https://github.com/ClickHouse/ClickHouse/issues/22008). [#22425](https://github.com/ClickHouse/ClickHouse/pull/22425) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fix an exception which may happen when a `SELECT` has a constant `WHERE` condition and the source table has columns whose names are digits. [#22270](https://github.com/ClickHouse/ClickHouse/pull/22270) ([LiuNeng](https://github.com/liuneng1994)). +* Fix query cancellation with `use_hedged_requests=0` and `async_socket_for_remote=1`. [#22183](https://github.com/ClickHouse/ClickHouse/pull/22183) ([Azat Khuzhin](https://github.com/azat)). +* Fix uncaught exception in `InterserverIOHTTPHandler`. [#22146](https://github.com/ClickHouse/ClickHouse/pull/22146) ([Azat Khuzhin](https://github.com/azat)). +* Fix docker entrypoint in case `http_port` is not in the config. [#22132](https://github.com/ClickHouse/ClickHouse/pull/22132) ([Ewout](https://github.com/devwout)). +* Fix error `Invalid number of rows in Chunk` in `JOIN` with `TOTALS` and `arrayJoin`. Closes [#19303](https://github.com/ClickHouse/ClickHouse/issues/19303). [#22129](https://github.com/ClickHouse/ClickHouse/pull/22129) ([Vladimir](https://github.com/vdimir)). +* Fix the name of the background thread pool used to poll messages from Kafka.
Previously, the Kafka engine with the broken thread pool would not consume messages from the message queue. [#22122](https://github.com/ClickHouse/ClickHouse/pull/22122) ([fastio](https://github.com/fastio)). +* Fix waiting for `OPTIMIZE` and `ALTER` queries for `ReplicatedMergeTree` table engines. Now the query will not hang when the table is detached or restarted. [#22118](https://github.com/ClickHouse/ClickHouse/pull/22118) ([alesapin](https://github.com/alesapin)). +* Disable `async_socket_for_remote`/`use_hedged_requests` for buggy Linux kernels. [#22109](https://github.com/ClickHouse/ClickHouse/pull/22109) ([Azat Khuzhin](https://github.com/azat)). +* Docker entrypoint: avoid chown of `.` in case when `LOG_PATH` is empty. Closes [#22100](https://github.com/ClickHouse/ClickHouse/issues/22100). [#22102](https://github.com/ClickHouse/ClickHouse/pull/22102) ([filimonov](https://github.com/filimonov)). +* The function `decrypt` was lacking a check for the minimal size of data encrypted in `AEAD` mode. This closes [#21897](https://github.com/ClickHouse/ClickHouse/issues/21897). [#22064](https://github.com/ClickHouse/ClickHouse/pull/22064) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* In a rare case, a merge for `CollapsingMergeTree` may create a granule with `index_granularity + 1` rows. Because of this, an internal check added in [#18928](https://github.com/ClickHouse/ClickHouse/issues/18928) (affects 21.2 and 21.3) may fail with the error `Incomplete granules are not allowed while blocks are granules size`. This error did not allow parts to merge. [#21976](https://github.com/ClickHouse/ClickHouse/pull/21976) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Reverted [#15454](https://github.com/ClickHouse/ClickHouse/issues/15454) that may cause a significant increase in memory usage while loading external dictionaries of hashed type. This closes [#21935](https://github.com/ClickHouse/ClickHouse/issues/21935). [#21948](https://github.com/ClickHouse/ClickHouse/pull/21948) ([Maksim Kita](https://github.com/kitaisreal)). +* Prevent hedged connections overlaps (`Unknown packet 9 from server` error). [#21941](https://github.com/ClickHouse/ClickHouse/pull/21941) ([Azat Khuzhin](https://github.com/azat)). +* Fix reading the HTTP POST request with "multipart/form-data" content type in some cases. [#21936](https://github.com/ClickHouse/ClickHouse/pull/21936) ([Ivan](https://github.com/abyss7)). +* Fix wrong `ORDER BY` results when a query contains window functions, and optimization for reading in primary key order is applied. Fixes [#21828](https://github.com/ClickHouse/ClickHouse/issues/21828). [#21915](https://github.com/ClickHouse/ClickHouse/pull/21915) ([Alexander Kuzmenkov](https://github.com/akuzm)). +* Fix deadlock in the first catboost model execution. Closes [#13832](https://github.com/ClickHouse/ClickHouse/issues/13832). [#21844](https://github.com/ClickHouse/ClickHouse/pull/21844) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix incorrect query result (and possible crash) which could happen when a `WHERE` or `HAVING` condition is pushed before `GROUP BY`. Fixes [#21773](https://github.com/ClickHouse/ClickHouse/issues/21773). [#21841](https://github.com/ClickHouse/ClickHouse/pull/21841) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Better error handling and logging in `WriteBufferFromS3`. [#21836](https://github.com/ClickHouse/ClickHouse/pull/21836) ([Pavel Kovalenko](https://github.com/Jokser)).
+* Fix possible crashes in aggregate functions with combinator `Distinct`, while using two-level aggregation. This is a follow-up fix of [#18365](https://github.com/ClickHouse/ClickHouse/pull/18365). It could only be reproduced in a production environment. [#21818](https://github.com/ClickHouse/ClickHouse/pull/21818) ([Amos Bird](https://github.com/amosbird)). +* Fix scalar subquery index analysis. This fixes [#21717](https://github.com/ClickHouse/ClickHouse/issues/21717), which was introduced in [#18896](https://github.com/ClickHouse/ClickHouse/pull/18896). [#21766](https://github.com/ClickHouse/ClickHouse/pull/21766) ([Amos Bird](https://github.com/amosbird)). +* Fix bug for `ReplicatedMerge` table engines when `ALTER MODIFY COLUMN` query doesn't change the type of `Decimal` column if its size (32 bit or 64 bit) doesn't change. [#21728](https://github.com/ClickHouse/ClickHouse/pull/21728) ([alesapin](https://github.com/alesapin)). +* Fix possible infinite waiting when concurrent `OPTIMIZE` and `DROP` are run for `ReplicatedMergeTree`. [#21716](https://github.com/ClickHouse/ClickHouse/pull/21716) ([Azat Khuzhin](https://github.com/azat)). +* Fix function `arrayElement` with type `Map` for constant integer arguments. [#21699](https://github.com/ClickHouse/ClickHouse/pull/21699) ([Anton Popov](https://github.com/CurtizJ)). +* Fix SIGSEGV on non-existing attributes from `ip_trie` with `access_to_key_from_attributes`. [#21692](https://github.com/ClickHouse/ClickHouse/pull/21692) ([Azat Khuzhin](https://github.com/azat)). +* The server now starts accepting connections only after `DDLWorker` and dictionary initialization. [#21676](https://github.com/ClickHouse/ClickHouse/pull/21676) ([Azat Khuzhin](https://github.com/azat)). +* Add type conversion for keys of tables of type `Join` (previously led to SIGSEGV). [#21646](https://github.com/ClickHouse/ClickHouse/pull/21646) ([Azat Khuzhin](https://github.com/azat)). +* Fix distributed requests cancellation (for example a simple select from multiple shards with limit, i.e. `select * from remote('127.{2,3}', system.numbers) limit 100`) with `async_socket_for_remote=1`. [#21643](https://github.com/ClickHouse/ClickHouse/pull/21643) ([Azat Khuzhin](https://github.com/azat)). +* Fix `fsync_part_directory` for horizontal merge. [#21642](https://github.com/ClickHouse/ClickHouse/pull/21642) ([Azat Khuzhin](https://github.com/azat)). +* Remove unknown columns from joined table in `WHERE` for queries to external database engines (MySQL, PostgreSQL). Closes [#14614](https://github.com/ClickHouse/ClickHouse/issues/14614), closes [#19288](https://github.com/ClickHouse/ClickHouse/issues/19288) (dup), closes [#19645](https://github.com/ClickHouse/ClickHouse/issues/19645) (dup). [#21640](https://github.com/ClickHouse/ClickHouse/pull/21640) ([Vladimir](https://github.com/vdimir)). +* `std::terminate` was called if there was an error writing data into S3. [#21624](https://github.com/ClickHouse/ClickHouse/pull/21624) ([Vladimir](https://github.com/vdimir)). +* Fix possible error `Cannot find column` when `optimize_skip_unused_shards` is enabled and zero shards are used. [#21579](https://github.com/ClickHouse/ClickHouse/pull/21579) ([Azat Khuzhin](https://github.com/azat)). +* If a query has a constant `WHERE` condition and the setting `optimize_skip_unused_shards` is enabled, all shards could be skipped and the query could return an incorrect empty result. [#21550](https://github.com/ClickHouse/ClickHouse/pull/21550) ([Amos Bird](https://github.com/amosbird)).
+* Fix table function `clusterAllReplicas` returns wrong `_shard_num`. close [#21481](https://github.com/ClickHouse/ClickHouse/issues/21481). [#21498](https://github.com/ClickHouse/ClickHouse/pull/21498) ([flynn](https://github.com/ucasFL)). +* Fix that S3 table holds old credentials after config update. [#21457](https://github.com/ClickHouse/ClickHouse/pull/21457) ([Grigory Pervakov](https://github.com/GrigoryPervakov)). +* Fixed race on SSL object inside `SecureSocket` in Poco. [#21456](https://github.com/ClickHouse/ClickHouse/pull/21456) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Fix `Avro` format parsing for `Kafka`. Fixes [#21437](https://github.com/ClickHouse/ClickHouse/issues/21437). [#21438](https://github.com/ClickHouse/ClickHouse/pull/21438) ([Ilya Golshtein](https://github.com/ilejn)). +* Fix receive and send timeouts and non-blocking read in secure socket. [#21429](https://github.com/ClickHouse/ClickHouse/pull/21429) ([Kruglov Pavel](https://github.com/Avogar)). +* `force_drop_table` flag didn't work for `MATERIALIZED VIEW`, it's fixed. Fixes [#18943](https://github.com/ClickHouse/ClickHouse/issues/18943). [#20626](https://github.com/ClickHouse/ClickHouse/pull/20626) ([tavplubix](https://github.com/tavplubix)). +* Fix name clashes in `PredicateRewriteVisitor`. It caused incorrect `WHERE` filtration after full join. Close [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)). + +#### Build/Testing/Packaging Improvement + +* Add [Jepsen](https://github.com/jepsen-io/jepsen) tests for ClickHouse Keeper. [#21677](https://github.com/ClickHouse/ClickHouse/pull/21677) ([alesapin](https://github.com/alesapin)). +* Run stateless tests in parallel in CI. Depends on [#22181](https://github.com/ClickHouse/ClickHouse/issues/22181). [#22300](https://github.com/ClickHouse/ClickHouse/pull/22300) ([alesapin](https://github.com/alesapin)). +* Enable status check for [SQLancer](https://github.com/sqlancer/sqlancer) CI run. [#22015](https://github.com/ClickHouse/ClickHouse/pull/22015) ([Ilya Yatsishin](https://github.com/qoega)). +* Multiple preparations for PowerPC builds: Enable the bundled openldap on `ppc64le`. [#22487](https://github.com/ClickHouse/ClickHouse/pull/22487) ([Kfir Itzhak](https://github.com/mastertheknife)). Enable compiling on `ppc64le` with Clang. [#22476](https://github.com/ClickHouse/ClickHouse/pull/22476) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix compiling boost on `ppc64le`. [#22474](https://github.com/ClickHouse/ClickHouse/pull/22474) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix CMake error about internal CMake variable `CMAKE_ASM_COMPILE_OBJECT` not set on `ppc64le`. [#22469](https://github.com/ClickHouse/ClickHouse/pull/22469) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix Fedora/RHEL/CentOS not finding `libclang_rt.builtins` on `ppc64le`. [#22458](https://github.com/ClickHouse/ClickHouse/pull/22458) ([Kfir Itzhak](https://github.com/mastertheknife)). Enable building with `jemalloc` on `ppc64le`. [#22447](https://github.com/ClickHouse/ClickHouse/pull/22447) ([Kfir Itzhak](https://github.com/mastertheknife)). Fix ClickHouse's config embedding and cctz's timezone embedding on `ppc64le`. [#22445](https://github.com/ClickHouse/ClickHouse/pull/22445) ([Kfir Itzhak](https://github.com/mastertheknife)). Fixed compiling on `ppc64le` and use the correct instruction pointer register on `ppc64le`. 
[#22430](https://github.com/ClickHouse/ClickHouse/pull/22430) ([Kfir Itzhak](https://github.com/mastertheknife)). +* Re-enable the S3 (AWS) library on `aarch64`. [#22484](https://github.com/ClickHouse/ClickHouse/pull/22484) ([Kfir Itzhak](https://github.com/mastertheknife)). +* Add `tzdata` to Docker containers because reading `ORC` formats requires it. This closes [#14156](https://github.com/ClickHouse/ClickHouse/issues/14156). [#22000](https://github.com/ClickHouse/ClickHouse/pull/22000) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Introduce 2 arguments for `clickhouse-server` image Dockerfile: `deb_location` & `single_binary_location`. [#21977](https://github.com/ClickHouse/ClickHouse/pull/21977) ([filimonov](https://github.com/filimonov)). +* Allow to use clang-tidy with release builds by enabling assertions if it is used. [#21914](https://github.com/ClickHouse/ClickHouse/pull/21914) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Add llvm-12 binaries name to search in cmake scripts. Implicit constants conversions to mute clang warnings. Updated submodules to build with CMake 3.19. Mute recursion in macro expansion in `readpassphrase` library. Deprecated `-fuse-ld` changed to `--ld-path` for clang. [#21597](https://github.com/ClickHouse/ClickHouse/pull/21597) ([Ilya Yatsishin](https://github.com/qoega)). +* Updating `docker/test/testflows/runner/dockerd-entrypoint.sh` to use Yandex dockerhub-proxy, because Docker Hub has enabled very restrictive rate limits [#21551](https://github.com/ClickHouse/ClickHouse/pull/21551) ([vzakaznikov](https://github.com/vzakaznikov)). +* Fix macOS shared lib build. [#20184](https://github.com/ClickHouse/ClickHouse/pull/20184) ([nvartolomei](https://github.com/nvartolomei)). +* Add `ctime` option to `zookeeper-dump-tree`. It allows to dump node creation time. [#21842](https://github.com/ClickHouse/ClickHouse/pull/21842) ([Ilya](https://github.com/HumanUser)). + + ## ClickHouse release 21.3 (LTS) ### ClickHouse release v21.3, 2021-03-12 @@ -26,7 +179,7 @@ #### Experimental feature * Add experimental `Replicated` database engine. It replicates DDL queries across multiple hosts. [#16193](https://github.com/ClickHouse/ClickHouse/pull/16193) ([tavplubix](https://github.com/tavplubix)). -* Introduce experimental support for window functions, enabled with `allow_experimental_functions = 1`. This is a preliminary, alpha-quality implementation that is not suitable for production use and will change in backward-incompatible ways in future releases. Please see [the documentation](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/sql-reference/window-functions/index.md#experimental-window-functions) for the list of supported features. [#20337](https://github.com/ClickHouse/ClickHouse/pull/20337) ([Alexander Kuzmenkov](https://github.com/akuzm)). +* Introduce experimental support for window functions, enabled with `allow_experimental_window_functions = 1`. This is a preliminary, alpha-quality implementation that is not suitable for production use and will change in backward-incompatible ways in future releases. Please see [the documentation](https://github.com/ClickHouse/ClickHouse/blob/master/docs/en/sql-reference/window-functions/index.md#experimental-window-functions) for the list of supported features. [#20337](https://github.com/ClickHouse/ClickHouse/pull/20337) ([Alexander Kuzmenkov](https://github.com/akuzm)). * Add the ability to backup/restore metadata files for DiskS3. 
[#18377](https://github.com/ClickHouse/ClickHouse/pull/18377) ([Pavel Kovalenko](https://github.com/Jokser)). #### Performance Improvement diff --git a/CMakeLists.txt b/CMakeLists.txt index 5d716985c46..1423f3a0bc2 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -68,17 +68,30 @@ endif () include (cmake/find/ccache.cmake) -option(ENABLE_CHECK_HEAVY_BUILDS "Don't allow C++ translation units to compile too long or to take too much memory while compiling" OFF) +# Take care to add prlimit in command line before ccache, or else ccache thinks that +# prlimit is compiler, and clang++ is its input file, and refuses to work with +# multiple inputs, e.g in ccache log: +# [2021-03-31T18:06:32.655327 36900] Command line: /usr/bin/ccache prlimit --as=10000000000 --data=5000000000 --cpu=600 /usr/bin/clang++-11 - ...... std=gnu++2a -MD -MT src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o -MF src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o.d -o src/CMakeFiles/dbms.dir/Storages/MergeTree/IMergeTreeDataPart.cpp.o -c ../src/Storages/MergeTree/IMergeTreeDataPart.cpp +# +# [2021-03-31T18:06:32.656704 36900] Multiple input files: /usr/bin/clang++-11 and ../src/Storages/MergeTree/IMergeTreeDataPart.cpp +# +# Another way would be to use --ccache-skip option before clang++-11 to make +# ccache ignore it. +option(ENABLE_CHECK_HEAVY_BUILDS "Don't allow C++ translation units to compile too long or to take too much memory while compiling." OFF) if (ENABLE_CHECK_HEAVY_BUILDS) # set DATA (since RSS does not work since 2.6.x+) to 2G set (RLIMIT_DATA 5000000000) # set VIRT (RLIMIT_AS) to 10G (DATA*10) set (RLIMIT_AS 10000000000) + # set CPU time limit to 600 seconds + set (RLIMIT_CPU 600) + # gcc10/gcc10/clang -fsanitize=memory is too heavy if (SANITIZE STREQUAL "memory" OR COMPILER_GCC) set (RLIMIT_DATA 10000000000) endif() - set (CMAKE_CXX_COMPILER_LAUNCHER prlimit --as=${RLIMIT_AS} --data=${RLIMIT_DATA} --cpu=600) + + set (CMAKE_CXX_COMPILER_LAUNCHER prlimit --as=${RLIMIT_AS} --data=${RLIMIT_DATA} --cpu=${RLIMIT_CPU} ${CMAKE_CXX_COMPILER_LAUNCHER}) endif () if (NOT CMAKE_BUILD_TYPE OR CMAKE_BUILD_TYPE STREQUAL "None") @@ -277,6 +290,12 @@ if (COMPILER_GCC OR COMPILER_CLANG) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsized-deallocation") endif () +# falign-functions=32 prevents from random performance regressions with the code change. Thus, providing more stable +# benchmarks. +if (COMPILER_GCC OR COMPILER_CLANG) + set(COMPILER_FLAGS "${COMPILER_FLAGS} -falign-functions=32") +endif () + # Compiler-specific coverage flags e.g. 
-fcoverage-mapping for gcc option(WITH_COVERAGE "Profile the resulting binary/binaries" OFF) @@ -464,6 +483,7 @@ find_contrib_lib(double-conversion) # Must be before parquet include (cmake/find/ssl.cmake) include (cmake/find/ldap.cmake) # after ssl include (cmake/find/icu.cmake) +include (cmake/find/xz.cmake) include (cmake/find/zlib.cmake) include (cmake/find/zstd.cmake) include (cmake/find/ltdl.cmake) # for odbc @@ -496,6 +516,7 @@ include (cmake/find/fast_float.cmake) include (cmake/find/rapidjson.cmake) include (cmake/find/fastops.cmake) include (cmake/find/odbc.cmake) +include (cmake/find/nanodbc.cmake) include (cmake/find/rocksdb.cmake) include (cmake/find/libpqxx.cmake) include (cmake/find/nuraft.cmake) diff --git a/base/CMakeLists.txt b/base/CMakeLists.txt index 46bd57eda12..023dcaaccae 100644 --- a/base/CMakeLists.txt +++ b/base/CMakeLists.txt @@ -8,6 +8,7 @@ add_subdirectory (loggers) add_subdirectory (pcg-random) add_subdirectory (widechar_width) add_subdirectory (readpassphrase) +add_subdirectory (bridge) if (USE_MYSQL) add_subdirectory (mysqlxx) diff --git a/base/bridge/CMakeLists.txt b/base/bridge/CMakeLists.txt new file mode 100644 index 00000000000..20b0b651677 --- /dev/null +++ b/base/bridge/CMakeLists.txt @@ -0,0 +1,7 @@ +add_library (bridge + IBridge.cpp +) + +target_include_directories (daemon PUBLIC ..) +target_link_libraries (bridge PRIVATE daemon dbms Poco::Data Poco::Data::ODBC) + diff --git a/base/bridge/IBridge.cpp b/base/bridge/IBridge.cpp new file mode 100644 index 00000000000..b1f71315fef --- /dev/null +++ b/base/bridge/IBridge.cpp @@ -0,0 +1,238 @@ +#include "IBridge.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#if USE_ODBC +# include +#endif + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int ARGUMENT_OUT_OF_BOUND; +} + +namespace +{ + Poco::Net::SocketAddress makeSocketAddress(const std::string & host, UInt16 port, Poco::Logger * log) + { + Poco::Net::SocketAddress socket_address; + try + { + socket_address = Poco::Net::SocketAddress(host, port); + } + catch (const Poco::Net::DNSException & e) + { + const auto code = e.code(); + if (code == EAI_FAMILY +#if defined(EAI_ADDRFAMILY) + || code == EAI_ADDRFAMILY +#endif + ) + { + LOG_ERROR(log, "Cannot resolve listen_host ({}), error {}: {}. If it is an IPv6 address and your host has disabled IPv6, then consider to specify IPv4 address to listen in element of configuration file. 
Example: 0.0.0.0", host, e.code(), e.message()); + } + + throw; + } + return socket_address; + } + + Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, Poco::Logger * log) + { + auto address = makeSocketAddress(host, port, log); +#if POCO_VERSION < 0x01080000 + socket.bind(address, /* reuseAddress = */ true); +#else + socket.bind(address, /* reuseAddress = */ true, /* reusePort = */ false); +#endif + + socket.listen(/* backlog = */ 64); + + return address; + } +} + + +void IBridge::handleHelp(const std::string &, const std::string &) +{ + Poco::Util::HelpFormatter help_formatter(options()); + help_formatter.setCommand(commandName()); + help_formatter.setHeader("HTTP-proxy for odbc requests"); + help_formatter.setUsage("--http-port "); + help_formatter.format(std::cerr); + + stopOptionsProcessing(); +} + + +void IBridge::defineOptions(Poco::Util::OptionSet & options) +{ + options.addOption( + Poco::Util::Option("http-port", "", "port to listen").argument("http-port", true) .binding("http-port")); + + options.addOption( + Poco::Util::Option("listen-host", "", "hostname or address to listen, default 127.0.0.1").argument("listen-host").binding("listen-host")); + + options.addOption( + Poco::Util::Option("http-timeout", "", "http timeout for socket, default 1800").argument("http-timeout").binding("http-timeout")); + + options.addOption( + Poco::Util::Option("max-server-connections", "", "max connections to server, default 1024").argument("max-server-connections").binding("max-server-connections")); + + options.addOption( + Poco::Util::Option("keep-alive-timeout", "", "keepalive timeout, default 10").argument("keep-alive-timeout").binding("keep-alive-timeout")); + + options.addOption( + Poco::Util::Option("log-level", "", "sets log level, default info") .argument("log-level").binding("logger.level")); + + options.addOption( + Poco::Util::Option("log-path", "", "log path for all logs, default console").argument("log-path").binding("logger.log")); + + options.addOption( + Poco::Util::Option("err-log-path", "", "err log path for all logs, default no").argument("err-log-path").binding("logger.errorlog")); + + options.addOption( + Poco::Util::Option("stdout-path", "", "stdout log path, default console").argument("stdout-path").binding("logger.stdout")); + + options.addOption( + Poco::Util::Option("stderr-path", "", "stderr log path, default console").argument("stderr-path").binding("logger.stderr")); + + using Me = std::decay_t; + + options.addOption( + Poco::Util::Option("help", "", "produce this help message").binding("help").callback(Poco::Util::OptionCallback(this, &Me::handleHelp))); + + ServerApplication::defineOptions(options); // NOLINT Don't need complex BaseDaemon's .xml config +} + + +void IBridge::initialize(Application & self) +{ + BaseDaemon::closeFDs(); + is_help = config().has("help"); + + if (is_help) + return; + + config().setString("logger", bridgeName()); + + /// Redirect stdout, stderr to specified files. + /// Some libraries and sanitizers write to stderr in case of errors. + const auto stdout_path = config().getString("logger.stdout", ""); + if (!stdout_path.empty()) + { + if (!freopen(stdout_path.c_str(), "a+", stdout)) + throw Poco::OpenFileException("Cannot attach stdout to " + stdout_path); + + /// Disable buffering for stdout. 
+ setbuf(stdout, nullptr); + } + const auto stderr_path = config().getString("logger.stderr", ""); + if (!stderr_path.empty()) + { + if (!freopen(stderr_path.c_str(), "a+", stderr)) + throw Poco::OpenFileException("Cannot attach stderr to " + stderr_path); + + /// Disable buffering for stderr. + setbuf(stderr, nullptr); + } + + buildLoggers(config(), logger(), self.commandName()); + + BaseDaemon::logRevision(); + + log = &logger(); + hostname = config().getString("listen-host", "127.0.0.1"); + port = config().getUInt("http-port"); + if (port > 0xFFFF) + throw Exception("Out of range 'http-port': " + std::to_string(port), ErrorCodes::ARGUMENT_OUT_OF_BOUND); + + http_timeout = config().getUInt("http-timeout", DEFAULT_HTTP_READ_BUFFER_TIMEOUT); + max_server_connections = config().getUInt("max-server-connections", 1024); + keep_alive_timeout = config().getUInt("keep-alive-timeout", 10); + + initializeTerminationAndSignalProcessing(); + +#if USE_ODBC + if (bridgeName() == "ODBCBridge") + Poco::Data::ODBC::Connector::registerConnector(); +#endif + + ServerApplication::initialize(self); // NOLINT +} + + +void IBridge::uninitialize() +{ + BaseDaemon::uninitialize(); +} + + +int IBridge::main(const std::vector & /*args*/) +{ + if (is_help) + return Application::EXIT_OK; + + registerFormats(); + LOG_INFO(log, "Starting up {} on host: {}, port: {}", bridgeName(), hostname, port); + + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, hostname, port, log); + socket.setReceiveTimeout(http_timeout); + socket.setSendTimeout(http_timeout); + + Poco::ThreadPool server_pool(3, max_server_connections); + + Poco::Net::HTTPServerParams::Ptr http_params = new Poco::Net::HTTPServerParams; + http_params->setTimeout(http_timeout); + http_params->setKeepAliveTimeout(keep_alive_timeout); + + auto shared_context = Context::createShared(); + auto context = Context::createGlobal(shared_context.get()); + context->makeGlobalContext(); + + if (config().has("query_masking_rules")) + SensitiveDataMasker::setInstance(std::make_unique(config(), "query_masking_rules")); + + auto server = HTTPServer( + context, + getHandlerFactoryPtr(context), + server_pool, + socket, + http_params); + + SCOPE_EXIT({ + LOG_DEBUG(log, "Received termination signal."); + LOG_DEBUG(log, "Waiting for current connections to close."); + + server.stop(); + + for (size_t count : ext::range(1, 6)) + { + if (server.currentConnections() == 0) + break; + LOG_DEBUG(log, "Waiting for {} connections, try {}", server.currentConnections(), count); + std::this_thread::sleep_for(std::chrono::milliseconds(1000)); + } + }); + + server.start(); + LOG_INFO(log, "Listening http://{}", address.toString()); + + waitForTerminationRequest(); + return Application::EXIT_OK; +} + +} diff --git a/base/bridge/IBridge.h b/base/bridge/IBridge.h new file mode 100644 index 00000000000..c64003d9959 --- /dev/null +++ b/base/bridge/IBridge.h @@ -0,0 +1,51 @@ +#pragma once + +#include +#include +#include + +#include +#include + + +namespace DB +{ + +/// Class represents base for clickhouse-odbc-bridge and clickhouse-library-bridge servers. +/// Listens to incoming HTTP POST and GET requests on specified port and host. +/// Has two handlers '/' for all incoming POST requests and /ping for GET request about service status. 
+class IBridge : public BaseDaemon +{ + +public: + /// Define command line arguments + void defineOptions(Poco::Util::OptionSet & options) override; + +protected: + using HandlerFactoryPtr = std::shared_ptr; + + void initialize(Application & self) override; + + void uninitialize() override; + + int main(const std::vector & args) override; + + virtual std::string bridgeName() const = 0; + + virtual HandlerFactoryPtr getHandlerFactoryPtr(ContextPtr context) const = 0; + + size_t keep_alive_timeout; + +private: + void handleHelp(const std::string &, const std::string &); + + bool is_help; + std::string hostname; + size_t port; + std::string log_level; + size_t max_server_connections; + size_t http_timeout; + + Poco::Logger * log; +}; +} diff --git a/src/Common/BorrowedObjectPool.h b/base/common/BorrowedObjectPool.h similarity index 99% rename from src/Common/BorrowedObjectPool.h rename to base/common/BorrowedObjectPool.h index d5263cf92a8..6a90a7e7122 100644 --- a/src/Common/BorrowedObjectPool.h +++ b/base/common/BorrowedObjectPool.h @@ -7,8 +7,7 @@ #include #include - -#include +#include /** Pool for limited size objects that cannot be used from different threads simultaneously. * The main use case is to have fixed size of objects that can be reused in difference threads during their lifetime diff --git a/base/common/DateLUTImpl.h b/base/common/DateLUTImpl.h index 363f281584e..9e60181e802 100644 --- a/base/common/DateLUTImpl.h +++ b/base/common/DateLUTImpl.h @@ -25,7 +25,7 @@ #if defined(__PPC__) -#if !__clang__ +#if !defined(__clang__) #pragma GCC diagnostic ignored "-Wmaybe-uninitialized" #endif #endif @@ -1266,7 +1266,7 @@ public: }; #if defined(__PPC__) -#if !__clang__ +#if !defined(__clang__) #pragma GCC diagnostic pop #endif #endif diff --git a/src/Common/MoveOrCopyIfThrow.h b/base/common/MoveOrCopyIfThrow.h similarity index 100% rename from src/Common/MoveOrCopyIfThrow.h rename to base/common/MoveOrCopyIfThrow.h diff --git a/base/common/wide_integer_impl.h b/base/common/wide_integer_impl.h index 5b981326e25..456c10a22e4 100644 --- a/base/common/wide_integer_impl.h +++ b/base/common/wide_integer_impl.h @@ -271,9 +271,13 @@ struct integer::_impl /// As to_Integral does a static_cast to int64_t, it may result in UB. /// The necessary check here is that long double has enough significant (mantissa) bits to store the /// int64_t max value precisely. + + //TODO Be compatible with Apple aarch64 +#if not (defined(__APPLE__) && defined(__aarch64__)) static_assert(LDBL_MANT_DIG >= 64, "On your system long double has less than 64 precision bits," "which may result in UB when initializing double from int64_t"); +#endif if ((rhs > 0 && rhs < static_cast(max_int)) || (rhs < 0 && rhs > static_cast(min_int))) { diff --git a/base/daemon/SentryWriter.cpp b/base/daemon/SentryWriter.cpp index 29430b65983..1b7d0064b99 100644 --- a/base/daemon/SentryWriter.cpp +++ b/base/daemon/SentryWriter.cpp @@ -9,6 +9,7 @@ #include #include +#include #include #include #include diff --git a/base/ext/scope_guard_safe.h b/base/ext/scope_guard_safe.h new file mode 100644 index 00000000000..55140213572 --- /dev/null +++ b/base/ext/scope_guard_safe.h @@ -0,0 +1,68 @@ +#pragma once + +#include +#include +#include + +/// Same as SCOPE_EXIT() but block the MEMORY_LIMIT_EXCEEDED errors. +/// +/// Typical example of SCOPE_EXIT_MEMORY() usage is when code under it may do +/// some tiny allocations, that may fail under high memory pressure or/and low +/// max_memory_usage (and related limits). 
+/// +/// NOTE: it should be used with caution. +#define SCOPE_EXIT_MEMORY(...) SCOPE_EXIT( \ + MemoryTracker::LockExceptionInThread \ + lock_memory_tracker(VariableContext::Global); \ + __VA_ARGS__; \ +) + +/// Same as SCOPE_EXIT() but try/catch/tryLogCurrentException any exceptions. +/// +/// SCOPE_EXIT_SAFE() should be used in case the exception during the code +/// under SCOPE_EXIT() is not "that fatal" and error message in log is enough. +/// +/// Good example is calling CurrentThread::detachQueryIfNotDetached(). +/// +/// Anti-pattern is calling WriteBuffer::finalize() under SCOPE_EXIT_SAFE() +/// (since finalize() can do final write and it is better to fail abnormally +/// instead of ignoring write error). +/// +/// NOTE: it should be used with double caution. +#define SCOPE_EXIT_SAFE(...) SCOPE_EXIT( \ + try \ + { \ + __VA_ARGS__; \ + } \ + catch (...) \ + { \ + tryLogCurrentException(__PRETTY_FUNCTION__); \ + } \ +) + +/// Same as SCOPE_EXIT() but: +/// - block the MEMORY_LIMIT_EXCEEDED errors, +/// - try/catch/tryLogCurrentException any exceptions. +/// +/// SCOPE_EXIT_MEMORY_SAFE() can be used when the error can be ignored, and in +/// addition to SCOPE_EXIT_SAFE() it will also lock MEMORY_LIMIT_EXCEEDED to +/// avoid such exceptions. +/// +/// It does exists as a separate helper, since you do not need to lock +/// MEMORY_LIMIT_EXCEEDED always (there are cases when code under SCOPE_EXIT does +/// not do any allocations, while LockExceptionInThread increment atomic +/// variable). +/// +/// NOTE: it should be used with triple caution. +#define SCOPE_EXIT_MEMORY_SAFE(...) SCOPE_EXIT( \ + try \ + { \ + MemoryTracker::LockExceptionInThread \ + lock_memory_tracker(VariableContext::Global); \ + __VA_ARGS__; \ + } \ + catch (...) \ + { \ + tryLogCurrentException(__PRETTY_FUNCTION__); \ + } \ +) diff --git a/base/mysqlxx/Pool.h b/base/mysqlxx/Pool.h index b6189663f55..530e2c78cf2 100644 --- a/base/mysqlxx/Pool.h +++ b/base/mysqlxx/Pool.h @@ -159,9 +159,9 @@ public: */ Pool(const std::string & db_, const std::string & server_, - const std::string & user_ = "", - const std::string & password_ = "", - unsigned port_ = 0, + const std::string & user_, + const std::string & password_, + unsigned port_, const std::string & socket_ = "", unsigned connect_timeout_ = MYSQLXX_DEFAULT_TIMEOUT, unsigned rw_timeout_ = MYSQLXX_DEFAULT_RW_TIMEOUT, diff --git a/base/mysqlxx/PoolWithFailover.cpp b/base/mysqlxx/PoolWithFailover.cpp index 5e9f70f4ac1..ea2d060e596 100644 --- a/base/mysqlxx/PoolWithFailover.cpp +++ b/base/mysqlxx/PoolWithFailover.cpp @@ -2,7 +2,6 @@ #include #include #include - #include @@ -15,9 +14,12 @@ static bool startsWith(const std::string & s, const char * prefix) using namespace mysqlxx; -PoolWithFailover::PoolWithFailover(const Poco::Util::AbstractConfiguration & config_, - const std::string & config_name_, const unsigned default_connections_, - const unsigned max_connections_, const size_t max_tries_) +PoolWithFailover::PoolWithFailover( + const Poco::Util::AbstractConfiguration & config_, + const std::string & config_name_, + const unsigned default_connections_, + const unsigned max_connections_, + const size_t max_tries_) : max_tries(max_tries_) { shareable = config_.getBool(config_name_ + ".share_connection", false); @@ -59,16 +61,38 @@ PoolWithFailover::PoolWithFailover(const Poco::Util::AbstractConfiguration & con } } -PoolWithFailover::PoolWithFailover(const std::string & config_name_, const unsigned default_connections_, - const unsigned max_connections_, const size_t 
max_tries_) - : PoolWithFailover{ - Poco::Util::Application::instance().config(), config_name_, - default_connections_, max_connections_, max_tries_} + +PoolWithFailover::PoolWithFailover( + const std::string & config_name_, + const unsigned default_connections_, + const unsigned max_connections_, + const size_t max_tries_) + : PoolWithFailover{Poco::Util::Application::instance().config(), + config_name_, default_connections_, max_connections_, max_tries_} { } + +PoolWithFailover::PoolWithFailover( + const std::string & database, + const RemoteDescription & addresses, + const std::string & user, + const std::string & password, + size_t max_tries_) + : max_tries(max_tries_) + , shareable(false) +{ + /// Replicas have the same priority, but traversed replicas are moved to the end of the queue. + for (const auto & [host, port] : addresses) + { + replicas_by_priority[0].emplace_back(std::make_shared(database, host, user, password, port)); + } +} + + PoolWithFailover::PoolWithFailover(const PoolWithFailover & other) - : max_tries{other.max_tries}, shareable{other.shareable} + : max_tries{other.max_tries} + , shareable{other.shareable} { if (shareable) { diff --git a/base/mysqlxx/PoolWithFailover.h b/base/mysqlxx/PoolWithFailover.h index 029fc3ebad3..5154fc3e253 100644 --- a/base/mysqlxx/PoolWithFailover.h +++ b/base/mysqlxx/PoolWithFailover.h @@ -11,6 +11,8 @@ namespace mysqlxx { /** MySQL connection pool with support of failover. + * + * For dictionary source: * Have information about replicas and their priorities. * Tries to connect to replica in an order of priority. When equal priority, choose replica with maximum time without connections. * @@ -68,42 +70,58 @@ namespace mysqlxx using PoolPtr = std::shared_ptr; using Replicas = std::vector; - /// [priority][index] -> replica. + /// [priority][index] -> replica. Highest priority is 0. using ReplicasByPriority = std::map; - ReplicasByPriority replicas_by_priority; /// Number of connection tries. size_t max_tries; /// Mutex for set of replicas. std::mutex mutex; - /// Can the Pool be shared bool shareable; public: using Entry = Pool::Entry; + using RemoteDescription = std::vector>; /** - * config_name Name of parameter in configuration file. + * * Mysql dictionary source related params: + * config_name Name of parameter in configuration file for dictionary source. + * + * * Mysql storage related parameters: + * replicas_description + * + * * Mutual parameters: * default_connections Number of connection in pool to each replica at start. * max_connections Maximum number of connections in pool to each replica. * max_tries_ Max number of connection tries.
*/ - PoolWithFailover(const std::string & config_name_, + PoolWithFailover( + const std::string & config_name_, unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS, unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS, size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); - PoolWithFailover(const Poco::Util::AbstractConfiguration & config_, + PoolWithFailover( + const Poco::Util::AbstractConfiguration & config_, const std::string & config_name_, unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS, unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS, size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); + PoolWithFailover( + const std::string & database, + const RemoteDescription & addresses, + const std::string & user, + const std::string & password, + size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); + PoolWithFailover(const PoolWithFailover & other); /** Allocates a connection to use. */ Entry get(); }; + + using PoolWithFailoverPtr = std::shared_ptr; } diff --git a/cmake/arch.cmake b/cmake/arch.cmake index 9604ef62b31..60e0346dbbf 100644 --- a/cmake/arch.cmake +++ b/cmake/arch.cmake @@ -1,7 +1,7 @@ if (CMAKE_SYSTEM_PROCESSOR MATCHES "amd64|x86_64") set (ARCH_AMD64 1) endif () -if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*)") +if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64.*|AARCH64.*|arm64.*|ARM64.*)") set (ARCH_AARCH64 1) endif () if (ARCH_AARCH64 OR CMAKE_SYSTEM_PROCESSOR MATCHES "arm") diff --git a/cmake/autogenerated_versions.txt b/cmake/autogenerated_versions.txt index 9d74179902d..51f4b974161 100644 --- a/cmake/autogenerated_versions.txt +++ b/cmake/autogenerated_versions.txt @@ -1,9 +1,9 @@ # This strings autochanged from release_lib.sh: -SET(VERSION_REVISION 54450) +SET(VERSION_REVISION 54451) SET(VERSION_MAJOR 21) -SET(VERSION_MINOR 5) +SET(VERSION_MINOR 6) SET(VERSION_PATCH 1) -SET(VERSION_GITHASH 3827789b3d8fd2021952e57e5110343d26daa1a1) -SET(VERSION_DESCRIBE v21.5.1.1-prestable) -SET(VERSION_STRING 21.5.1.1) +SET(VERSION_GITHASH 96fced4c3cf432fb0b401d2ab01f0c56e5f74a96) +SET(VERSION_DESCRIBE v21.6.1.1-prestable) +SET(VERSION_STRING 21.6.1.1) # end of autochange diff --git a/cmake/darwin/default_libs.cmake b/cmake/darwin/default_libs.cmake index 79ac675f234..a6ee800d59b 100644 --- a/cmake/darwin/default_libs.cmake +++ b/cmake/darwin/default_libs.cmake @@ -1,11 +1,14 @@ set (DEFAULT_LIBS "-nodefaultlibs") -if (NOT COMPILER_CLANG) - message (FATAL_ERROR "Darwin build is supported only for Clang") -endif () - set (DEFAULT_LIBS "${DEFAULT_LIBS} ${COVERAGE_OPTION} -lc -lm -lpthread -ldl") +if (COMPILER_GCC) + set (DEFAULT_LIBS "${DEFAULT_LIBS} -lgcc_eh") + if (ARCH_AARCH64) + set (DEFAULT_LIBS "${DEFAULT_LIBS} -lgcc") + endif () +endif () + message(STATUS "Default libraries: ${DEFAULT_LIBS}") set(CMAKE_CXX_STANDARD_LIBRARIES ${DEFAULT_LIBS}) diff --git a/cmake/darwin/toolchain-aarch64.cmake b/cmake/darwin/toolchain-aarch64.cmake new file mode 100644 index 00000000000..81398111495 --- /dev/null +++ b/cmake/darwin/toolchain-aarch64.cmake @@ -0,0 +1,14 @@ +set (CMAKE_SYSTEM_NAME "Darwin") +set (CMAKE_SYSTEM_PROCESSOR "aarch64") +set (CMAKE_C_COMPILER_TARGET "aarch64-apple-darwin") +set (CMAKE_CXX_COMPILER_TARGET "aarch64-apple-darwin") +set (CMAKE_ASM_COMPILER_TARGET "aarch64-apple-darwin") +set (CMAKE_OSX_SYSROOT "${CMAKE_CURRENT_LIST_DIR}/../toolchain/darwin-aarch64") + +set (CMAKE_TRY_COMPILE_TARGET_TYPE 
STATIC_LIBRARY) # disable linkage check - it doesn't work in CMake + +set (HAS_PRE_1970_EXITCODE "0" CACHE STRING "Result from TRY_RUN" FORCE) +set (HAS_PRE_1970_EXITCODE__TRYRUN_OUTPUT "" CACHE STRING "Output from TRY_RUN" FORCE) + +set (HAS_POST_2038_EXITCODE "0" CACHE STRING "Result from TRY_RUN" FORCE) +set (HAS_POST_2038_EXITCODE__TRYRUN_OUTPUT "" CACHE STRING "Output from TRY_RUN" FORCE) diff --git a/cmake/find/amqpcpp.cmake b/cmake/find/amqpcpp.cmake index 4191dce26bb..e3eaaf33ddb 100644 --- a/cmake/find/amqpcpp.cmake +++ b/cmake/find/amqpcpp.cmake @@ -1,3 +1,8 @@ +if (OS_DARWIN AND COMPILER_GCC) + # AMQP-CPP requires libuv which cannot be built with GCC in macOS due to a bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93082 + set (ENABLE_AMQPCPP OFF CACHE INTERNAL "") +endif() + option(ENABLE_AMQPCPP "Enalbe AMQP-CPP" ${ENABLE_LIBRARIES}) if (NOT ENABLE_AMQPCPP) diff --git a/cmake/find/cassandra.cmake b/cmake/find/cassandra.cmake index 037d6c3f131..ded25a5bf41 100644 --- a/cmake/find/cassandra.cmake +++ b/cmake/find/cassandra.cmake @@ -1,3 +1,8 @@ +if (OS_DARWIN AND COMPILER_GCC) + # Cassandra requires libuv which cannot be built with GCC in macOS due to a bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93082 + set (ENABLE_CASSANDRA OFF CACHE INTERNAL "") +endif() + option(ENABLE_CASSANDRA "Enable Cassandra" ${ENABLE_LIBRARIES}) if (NOT ENABLE_CASSANDRA) diff --git a/cmake/find/ccache.cmake b/cmake/find/ccache.cmake index fea1f8b4c97..986c9cb5fe2 100644 --- a/cmake/find/ccache.cmake +++ b/cmake/find/ccache.cmake @@ -32,7 +32,9 @@ if (CCACHE_FOUND AND NOT COMPILER_MATCHES_CCACHE) if (CCACHE_VERSION VERSION_GREATER "3.2.0" OR NOT CMAKE_CXX_COMPILER_ID STREQUAL "Clang") message(STATUS "Using ${CCACHE_FOUND} ${CCACHE_VERSION}") - set_property (GLOBAL PROPERTY RULE_LAUNCH_COMPILE ${CCACHE_FOUND}) + set (CMAKE_CXX_COMPILER_LAUNCHER ${CCACHE_FOUND} ${CMAKE_CXX_COMPILER_LAUNCHER}) + set (CMAKE_C_COMPILER_LAUNCHER ${CCACHE_FOUND} ${CMAKE_C_COMPILER_LAUNCHER}) + set_property (GLOBAL PROPERTY RULE_LAUNCH_LINK ${CCACHE_FOUND}) # debian (debhelpers) set SOURCE_DATE_EPOCH environment variable, that is diff --git a/cmake/find/ldap.cmake b/cmake/find/ldap.cmake index 0dffa334e73..d8baea89429 100644 --- a/cmake/find/ldap.cmake +++ b/cmake/find/ldap.cmake @@ -64,7 +64,8 @@ if (NOT OPENLDAP_FOUND AND NOT MISSING_INTERNAL_LDAP_LIBRARY) ( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "aarch64" ) OR ( "${_system_name}" STREQUAL "linux" AND "${_system_processor}" STREQUAL "ppc64le" ) OR ( "${_system_name}" STREQUAL "freebsd" AND "${_system_processor}" STREQUAL "x86_64" ) OR - ( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "x86_64" ) + ( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "x86_64" ) OR + ( "${_system_name}" STREQUAL "darwin" AND "${_system_processor}" STREQUAL "aarch64" ) ) set (_ldap_supported_platform TRUE) endif () diff --git a/cmake/find/nanodbc.cmake b/cmake/find/nanodbc.cmake new file mode 100644 index 00000000000..894a2a60bad --- /dev/null +++ b/cmake/find/nanodbc.cmake @@ -0,0 +1,16 @@ +if (NOT ENABLE_ODBC) + return () +endif () + +if (NOT USE_INTERNAL_NANODBC_LIBRARY) + message (FATAL_ERROR "Only the bundled nanodbc library can be used") +endif () + +if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/nanodbc/CMakeLists.txt") + message (FATAL_ERROR "submodule contrib/nanodbc is missing. 
to fix try run: \n git submodule update --init --recursive") +endif() + +set (NANODBC_LIBRARY nanodbc) +set (NANODBC_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/nanodbc/nanodbc") + +message (STATUS "Using nanodbc: ${NANODBC_INCLUDE_DIR} : ${NANODBC_LIBRARY}") diff --git a/cmake/find/nuraft.cmake b/cmake/find/nuraft.cmake index 7fa5251946e..4e5258e132f 100644 --- a/cmake/find/nuraft.cmake +++ b/cmake/find/nuraft.cmake @@ -11,7 +11,7 @@ if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/NuRaft/CMakeLists.txt") return() endif () -if (NOT OS_FREEBSD AND NOT OS_DARWIN) +if (NOT OS_FREEBSD) set (USE_NURAFT 1) set (NURAFT_LIBRARY nuraft) diff --git a/cmake/find/odbc.cmake b/cmake/find/odbc.cmake index a23f0c831e9..c475e600c0d 100644 --- a/cmake/find/odbc.cmake +++ b/cmake/find/odbc.cmake @@ -50,4 +50,6 @@ if (NOT EXTERNAL_ODBC_LIBRARY_FOUND) set (USE_INTERNAL_ODBC_LIBRARY 1) endif () +set (USE_INTERNAL_NANODBC_LIBRARY 1) + message (STATUS "Using unixodbc") diff --git a/cmake/find/rocksdb.cmake b/cmake/find/rocksdb.cmake index 968cdb52407..94278a603d7 100644 --- a/cmake/find/rocksdb.cmake +++ b/cmake/find/rocksdb.cmake @@ -1,3 +1,7 @@ +if (OS_DARWIN AND ARCH_AARCH64) + set (ENABLE_ROCKSDB OFF CACHE INTERNAL "") +endif() + option(ENABLE_ROCKSDB "Enable ROCKSDB" ${ENABLE_LIBRARIES}) if (NOT ENABLE_ROCKSDB) diff --git a/cmake/find/xz.cmake b/cmake/find/xz.cmake new file mode 100644 index 00000000000..0d19859c6b1 --- /dev/null +++ b/cmake/find/xz.cmake @@ -0,0 +1,27 @@ +option (USE_INTERNAL_XZ_LIBRARY "Set to OFF to use system xz (lzma) library instead of bundled" ${NOT_UNBUNDLED}) + +if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/xz/src/liblzma/api/lzma.h") + if(USE_INTERNAL_XZ_LIBRARY) + message(WARNING "submodule contrib/xz is missing. to fix try run: \n git submodule update --init --recursive") + message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal xz (lzma) library") + set(USE_INTERNAL_XZ_LIBRARY 0) + endif() + set(MISSING_INTERNAL_XZ_LIBRARY 1) +endif() + +if (NOT USE_INTERNAL_XZ_LIBRARY) + find_library (XZ_LIBRARY lzma) + find_path (XZ_INCLUDE_DIR NAMES lzma.h PATHS ${XZ_INCLUDE_PATHS}) + if (NOT XZ_LIBRARY OR NOT XZ_INCLUDE_DIR) + message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system xz (lzma) library") + endif () +endif () + +if (XZ_LIBRARY AND XZ_INCLUDE_DIR) +elseif (NOT MISSING_INTERNAL_XZ_LIBRARY) + set (USE_INTERNAL_XZ_LIBRARY 1) + set (XZ_LIBRARY liblzma) + set (XZ_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/xz/src/liblzma/api) +endif () + +message (STATUS "Using xz (lzma): ${XZ_INCLUDE_DIR} : ${XZ_LIBRARY}") diff --git a/cmake/warnings.cmake b/cmake/warnings.cmake index a398c59e981..a85fe8963c7 100644 --- a/cmake/warnings.cmake +++ b/cmake/warnings.cmake @@ -171,6 +171,7 @@ elseif (COMPILER_GCC) add_cxx_compile_options(-Wtrampolines) # Obvious add_cxx_compile_options(-Wunused) + add_cxx_compile_options(-Wundef) # Warn if vector operation is not implemented via SIMD capabilities of the architecture add_cxx_compile_options(-Wvector-operation-performance) # XXX: libstdc++ has some of these for 3way compare diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index 4aeb67a5085..087212ad3b0 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -47,7 +47,11 @@ add_subdirectory (lz4-cmake) add_subdirectory (murmurhash) add_subdirectory (replxx-cmake) add_subdirectory (unixodbc-cmake) -add_subdirectory (xz) +add_subdirectory (nanodbc-cmake) + +if (USE_INTERNAL_XZ_LIBRARY) + add_subdirectory (xz) +endif() add_subdirectory (poco-cmake) add_subdirectory 
(croaring-cmake) @@ -93,14 +97,8 @@ if (USE_INTERNAL_ZLIB_LIBRARY) add_subdirectory (${INTERNAL_ZLIB_NAME}) # We should use same defines when including zlib.h as used when zlib compiled target_compile_definitions (zlib PUBLIC ZLIB_COMPAT WITH_GZFILEOP) - if (TARGET zlibstatic) - target_compile_definitions (zlibstatic PUBLIC ZLIB_COMPAT WITH_GZFILEOP) - endif () if (ARCH_AMD64 OR ARCH_AARCH64) target_compile_definitions (zlib PUBLIC X86_64 UNALIGNED_OK) - if (TARGET zlibstatic) - target_compile_definitions (zlibstatic PUBLIC X86_64 UNALIGNED_OK) - endif () endif () endif () diff --git a/contrib/NuRaft b/contrib/NuRaft index 70468326ad5..377f8e77491 160000 --- a/contrib/NuRaft +++ b/contrib/NuRaft @@ -1 +1 @@ -Subproject commit 70468326ad5d72e9497944838484c591dae054ea +Subproject commit 377f8e77491d9f66ce8e32e88aae19dffe8dc4d7 diff --git a/contrib/antlr4-runtime b/contrib/antlr4-runtime index a2fa7b76e2e..672643e9a42 160000 --- a/contrib/antlr4-runtime +++ b/contrib/antlr4-runtime @@ -1 +1 @@ -Subproject commit a2fa7b76e2ee16d2ad955e9214a90bbf79da66fc +Subproject commit 672643e9a427ef803abf13bc8cb4989606553d64 diff --git a/contrib/boost b/contrib/boost index ee24fa55bc4..a8d43d3142c 160000 --- a/contrib/boost +++ b/contrib/boost @@ -1 +1 @@ -Subproject commit ee24fa55bc46e4d2ce7d0d052cc5a0d9b1be8c36 +Subproject commit a8d43d3142cc6b26fc55bec33f7f6edb1156ab7a diff --git a/contrib/boringssl b/contrib/boringssl index fd9ce1a0406..83c1cda8a02 160000 --- a/contrib/boringssl +++ b/contrib/boringssl @@ -1 +1 @@ -Subproject commit fd9ce1a0406f571507068b9555d0b545b8a18332 +Subproject commit 83c1cda8a0224dc817cbad2966c7ed4acc35f02a diff --git a/contrib/boringssl-cmake/CMakeLists.txt b/contrib/boringssl-cmake/CMakeLists.txt index 017a8a64c0e..adfee82dda4 100644 --- a/contrib/boringssl-cmake/CMakeLists.txt +++ b/contrib/boringssl-cmake/CMakeLists.txt @@ -16,7 +16,7 @@ endif() if(CMAKE_COMPILER_IS_GNUCXX OR CLANG) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11 -fvisibility=hidden -fno-common -fno-exceptions -fno-rtti") - if(APPLE) + if(APPLE AND CLANG) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libc++") endif() diff --git a/contrib/flatbuffers b/contrib/flatbuffers index 6df40a24717..22e3ffc66d2 160000 --- a/contrib/flatbuffers +++ b/contrib/flatbuffers @@ -1 +1 @@ -Subproject commit 6df40a2471737b27271bdd9b900ab5f3aec746c7 +Subproject commit 22e3ffc66d2d7d72d1414390aa0f04ffd114a5a1 diff --git a/contrib/jemalloc-cmake/CMakeLists.txt b/contrib/jemalloc-cmake/CMakeLists.txt index 73afa99f1d8..f8cab3e548c 100644 --- a/contrib/jemalloc-cmake/CMakeLists.txt +++ b/contrib/jemalloc-cmake/CMakeLists.txt @@ -1,10 +1,13 @@ -if (SANITIZE OR NOT (ARCH_AMD64 OR ARCH_ARM OR ARCH_PPC64LE) OR NOT (OS_LINUX OR OS_FREEBSD OR OS_DARWIN)) +if (SANITIZE OR NOT ( + ((OS_LINUX OR OS_FREEBSD) AND (ARCH_AMD64 OR ARCH_ARM OR ARCH_PPC64LE)) OR + (OS_DARWIN AND CMAKE_BUILD_TYPE STREQUAL "RelWithDebInfo") +)) if (ENABLE_JEMALLOC) message (${RECONFIGURE_MESSAGE_LEVEL} - "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64, aarch64 or ppc64le on linux or freebsd.") - endif() + "jemalloc is disabled implicitly: it doesn't work with sanitizers and can only be used with x86_64, aarch64, or ppc64le Linux or FreeBSD builds and RelWithDebInfo macOS builds.") + endif () set (ENABLE_JEMALLOC OFF) -else() +else () option (ENABLE_JEMALLOC "Enable jemalloc allocator" ${ENABLE_LIBRARIES}) endif () @@ -34,9 +37,9 @@ if (OS_LINUX) # avoid spurious latencies and additional work associated with # 
MADV_DONTNEED. See # https://github.com/ClickHouse/ClickHouse/issues/11121 for motivation. - set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:10000") + set (JEMALLOC_CONFIG_MALLOC_CONF "percpu_arena:percpu,oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000") else() - set (JEMALLOC_CONFIG_MALLOC_CONF "oversize_threshold:0,muzzy_decay_ms:10000") + set (JEMALLOC_CONFIG_MALLOC_CONF "oversize_threshold:0,muzzy_decay_ms:5000,dirty_decay_ms:5000") endif() # CACHE variable is empty, to allow changing defaults without necessity # to purge cache @@ -121,12 +124,14 @@ target_include_directories(jemalloc SYSTEM PRIVATE target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_NO_PRIVATE_NAMESPACE) if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG") - target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_DEBUG=1 -DJEMALLOC_PROF=1) + target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_DEBUG=1) +endif () - if (USE_UNWIND) - target_compile_definitions (jemalloc PRIVATE -DJEMALLOC_PROF_LIBUNWIND=1) - target_link_libraries (jemalloc PRIVATE unwind) - endif () +target_compile_definitions(jemalloc PRIVATE -DJEMALLOC_PROF=1) + +if (USE_UNWIND) + target_compile_definitions (jemalloc PRIVATE -DJEMALLOC_PROF_LIBUNWIND=1) + target_link_libraries (jemalloc PRIVATE unwind) endif () target_compile_options(jemalloc PRIVATE -Wno-redundant-decls) diff --git a/contrib/jemalloc-cmake/include_darwin_aarch64/jemalloc/internal/jemalloc_internal_defs.h.in b/contrib/jemalloc-cmake/include_darwin_aarch64/jemalloc/internal/jemalloc_internal_defs.h.in index c7c884d0eaa..5c0407db24a 100644 --- a/contrib/jemalloc-cmake/include_darwin_aarch64/jemalloc/internal/jemalloc_internal_defs.h.in +++ b/contrib/jemalloc-cmake/include_darwin_aarch64/jemalloc/internal/jemalloc_internal_defs.h.in @@ -42,7 +42,7 @@ * total number of bits in a pointer, e.g. on x64, for which the uppermost 16 * bits are the same as bit 47. */ -#define LG_VADDR 48 +#define LG_VADDR 64 /* Defined if C11 atomics are available. */ #define JEMALLOC_C11_ATOMICS 1 @@ -101,11 +101,6 @@ */ #define JEMALLOC_HAVE_MACH_ABSOLUTE_TIME 1 -/* - * Defined if clock_gettime(CLOCK_REALTIME, ...) is available. - */ -#define JEMALLOC_HAVE_CLOCK_REALTIME 1 - /* * Defined if _malloc_thread_cleanup() exists. At least in the case of * FreeBSD, pthread_key_create() allocates, which if used during malloc @@ -181,14 +176,14 @@ /* #undef LG_QUANTUM */ /* One page is 2^LG_PAGE bytes. */ -#define LG_PAGE 16 +#define LG_PAGE 14 /* * One huge page is 2^LG_HUGEPAGE bytes. Note that this is defined even if the * system does not explicitly support huge pages; system calls that require * explicit huge page support are separately configured. */ -#define LG_HUGEPAGE 29 +#define LG_HUGEPAGE 21 /* * If defined, adjacent virtual memory mappings with identical attributes @@ -356,7 +351,7 @@ /* #undef JEMALLOC_EXPORT */ /* config.malloc_conf options string. */ -#define JEMALLOC_CONFIG_MALLOC_CONF "@JEMALLOC_CONFIG_MALLOC_CONF@" +#define JEMALLOC_CONFIG_MALLOC_CONF "" /* If defined, jemalloc takes the malloc/free/etc. symbol names. 
*/ /* #undef JEMALLOC_IS_MALLOC */ diff --git a/contrib/libcxx b/contrib/libcxx index 8b80a151d12..2fa892f69ac 160000 --- a/contrib/libcxx +++ b/contrib/libcxx @@ -1 +1 @@ -Subproject commit 8b80a151d12b98ffe2d0c22f7cec12c3b9ff88d7 +Subproject commit 2fa892f69acbaa40f8a18c6484854a6183a34482 diff --git a/contrib/libcxx-cmake/CMakeLists.txt b/contrib/libcxx-cmake/CMakeLists.txt index 3b5d53cd1c0..59d23b2cd9e 100644 --- a/contrib/libcxx-cmake/CMakeLists.txt +++ b/contrib/libcxx-cmake/CMakeLists.txt @@ -56,6 +56,11 @@ if (USE_UNWIND) target_compile_definitions(cxx PUBLIC -DSTD_EXCEPTION_HAS_STACK_TRACE=1) endif () +# Override the deduced attribute support that causes error. +if (OS_DARWIN AND COMPILER_GCC) + add_compile_definitions(_LIBCPP_INIT_PRIORITY_MAX) +endif () + target_compile_options(cxx PUBLIC $<$:-nostdinc++>) # Third party library may have substandard code. diff --git a/contrib/librdkafka-cmake/config.h.in b/contrib/librdkafka-cmake/config.h.in index 80b6ea61b6e..9fecb45e42d 100644 --- a/contrib/librdkafka-cmake/config.h.in +++ b/contrib/librdkafka-cmake/config.h.in @@ -66,7 +66,7 @@ #cmakedefine WITH_SASL_OAUTHBEARER 1 #cmakedefine WITH_SASL_CYRUS 1 // crc32chw -#if !defined(__PPC__) && (!defined(__aarch64__) || defined(__ARM_FEATURE_CRC32)) +#if !defined(__PPC__) && (!defined(__aarch64__) || defined(__ARM_FEATURE_CRC32)) && !(defined(__aarch64__) && defined(__APPLE__)) #define WITH_CRC32C_HW 1 #endif // regex @@ -75,6 +75,8 @@ #define HAVE_STRNDUP 1 // strerror_r #define HAVE_STRERROR_R 1 +// rand_r +#define HAVE_RAND_R 1 #ifdef __APPLE__ // pthread_setname_np diff --git a/contrib/mariadb-connector-c b/contrib/mariadb-connector-c index f4476ee7311..5f4034a3a63 160000 --- a/contrib/mariadb-connector-c +++ b/contrib/mariadb-connector-c @@ -1 +1 @@ -Subproject commit f4476ee7311b35b593750f6ae2cbdb62a4006374 +Subproject commit 5f4034a3a6376416504f17186c55fe401c6d8e5e diff --git a/contrib/nanodbc b/contrib/nanodbc new file mode 160000 index 00000000000..9fc45967551 --- /dev/null +++ b/contrib/nanodbc @@ -0,0 +1 @@ +Subproject commit 9fc459675515d491401727ec67fca38db721f28c diff --git a/contrib/nanodbc-cmake/CMakeLists.txt b/contrib/nanodbc-cmake/CMakeLists.txt new file mode 100644 index 00000000000..1673b311c49 --- /dev/null +++ b/contrib/nanodbc-cmake/CMakeLists.txt @@ -0,0 +1,18 @@ +if (NOT USE_INTERNAL_NANODBC_LIBRARY) + return () +endif () + +set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/nanodbc) + +if (NOT TARGET unixodbc) + message(FATAL_ERROR "Configuration error: unixodbc is not a target") +endif() + +set (SRCS + ${LIBRARY_DIR}/nanodbc/nanodbc.cpp +) + +add_library(nanodbc ${SRCS}) + +target_link_libraries (nanodbc PUBLIC unixodbc) +target_include_directories (nanodbc SYSTEM PUBLIC ${LIBRARY_DIR}/) diff --git a/contrib/openldap-cmake/darwin_aarch64/include/lber_types.h b/contrib/openldap-cmake/darwin_aarch64/include/lber_types.h new file mode 100644 index 00000000000..dbd59430527 --- /dev/null +++ b/contrib/openldap-cmake/darwin_aarch64/include/lber_types.h @@ -0,0 +1,63 @@ +/* include/lber_types.h. Generated from lber_types.hin by configure. */ +/* $OpenLDAP$ */ +/* This work is part of OpenLDAP Software . + * + * Copyright 1998-2020 The OpenLDAP Foundation. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. 
+ * + * A copy of this license is available in file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * . + */ + +/* + * LBER types + */ + +#ifndef _LBER_TYPES_H +#define _LBER_TYPES_H + +#include + +LDAP_BEGIN_DECL + +/* LBER boolean, enum, integers (32 bits or larger) */ +#define LBER_INT_T int + +/* LBER tags (32 bits or larger) */ +#define LBER_TAG_T long + +/* LBER socket descriptor */ +#define LBER_SOCKET_T int + +/* LBER lengths (32 bits or larger) */ +#define LBER_LEN_T long + +/* ------------------------------------------------------------ */ + +/* booleans, enumerations, and integers */ +typedef LBER_INT_T ber_int_t; + +/* signed and unsigned versions */ +typedef signed LBER_INT_T ber_sint_t; +typedef unsigned LBER_INT_T ber_uint_t; + +/* tags */ +typedef unsigned LBER_TAG_T ber_tag_t; + +/* "socket" descriptors */ +typedef LBER_SOCKET_T ber_socket_t; + +/* lengths */ +typedef unsigned LBER_LEN_T ber_len_t; + +/* signed lengths */ +typedef signed LBER_LEN_T ber_slen_t; + +LDAP_END_DECL + +#endif /* _LBER_TYPES_H */ diff --git a/contrib/openldap-cmake/darwin_aarch64/include/ldap_config.h b/contrib/openldap-cmake/darwin_aarch64/include/ldap_config.h new file mode 100644 index 00000000000..89f7b40b884 --- /dev/null +++ b/contrib/openldap-cmake/darwin_aarch64/include/ldap_config.h @@ -0,0 +1,74 @@ +/* include/ldap_config.h. Generated from ldap_config.hin by configure. */ +/* $OpenLDAP$ */ +/* This work is part of OpenLDAP Software . + * + * Copyright 1998-2020 The OpenLDAP Foundation. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * . + */ + +/* + * This file works in conjunction with OpenLDAP configure system. + * If you do no like the values below, adjust your configure options. + */ + +#ifndef _LDAP_CONFIG_H +#define _LDAP_CONFIG_H + +/* directory separator */ +#ifndef LDAP_DIRSEP +#ifndef _WIN32 +#define LDAP_DIRSEP "/" +#else +#define LDAP_DIRSEP "\\" +#endif +#endif + +/* directory for temporary files */ +#if defined(_WIN32) +# define LDAP_TMPDIR "C:\\." 
/* we don't have much of a choice */ +#elif defined( _P_tmpdir ) +# define LDAP_TMPDIR _P_tmpdir +#elif defined( P_tmpdir ) +# define LDAP_TMPDIR P_tmpdir +#elif defined( _PATH_TMPDIR ) +# define LDAP_TMPDIR _PATH_TMPDIR +#else +# define LDAP_TMPDIR LDAP_DIRSEP "tmp" +#endif + +/* directories */ +#ifndef LDAP_BINDIR +#define LDAP_BINDIR "/tmp/ldap-prefix/bin" +#endif +#ifndef LDAP_SBINDIR +#define LDAP_SBINDIR "/tmp/ldap-prefix/sbin" +#endif +#ifndef LDAP_DATADIR +#define LDAP_DATADIR "/tmp/ldap-prefix/share/openldap" +#endif +#ifndef LDAP_SYSCONFDIR +#define LDAP_SYSCONFDIR "/tmp/ldap-prefix/etc/openldap" +#endif +#ifndef LDAP_LIBEXECDIR +#define LDAP_LIBEXECDIR "/tmp/ldap-prefix/libexec" +#endif +#ifndef LDAP_MODULEDIR +#define LDAP_MODULEDIR "/tmp/ldap-prefix/libexec/openldap" +#endif +#ifndef LDAP_RUNDIR +#define LDAP_RUNDIR "/tmp/ldap-prefix/var" +#endif +#ifndef LDAP_LOCALEDIR +#define LDAP_LOCALEDIR "" +#endif + + +#endif /* _LDAP_CONFIG_H */ diff --git a/contrib/openldap-cmake/darwin_aarch64/include/ldap_features.h b/contrib/openldap-cmake/darwin_aarch64/include/ldap_features.h new file mode 100644 index 00000000000..f0cc7c3626f --- /dev/null +++ b/contrib/openldap-cmake/darwin_aarch64/include/ldap_features.h @@ -0,0 +1,61 @@ +/* include/ldap_features.h. Generated from ldap_features.hin by configure. */ +/* $OpenLDAP$ */ +/* This work is part of OpenLDAP Software . + * + * Copyright 1998-2020 The OpenLDAP Foundation. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * . + */ + +/* + * LDAP Features + */ + +#ifndef _LDAP_FEATURES_H +#define _LDAP_FEATURES_H 1 + +/* OpenLDAP API version macros */ +#define LDAP_VENDOR_VERSION 20501 +#define LDAP_VENDOR_VERSION_MAJOR 2 +#define LDAP_VENDOR_VERSION_MINOR 5 +#define LDAP_VENDOR_VERSION_PATCH X + +/* +** WORK IN PROGRESS! +** +** OpenLDAP reentrancy/thread-safeness should be dynamically +** checked using ldap_get_option(). +** +** The -lldap implementation is not thread-safe. +** +** The -lldap_r implementation is: +** LDAP_API_FEATURE_THREAD_SAFE (basic thread safety) +** but also be: +** LDAP_API_FEATURE_SESSION_THREAD_SAFE +** LDAP_API_FEATURE_OPERATION_THREAD_SAFE +** +** The preprocessor flag LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE +** can be used to determine if -lldap_r is available at compile +** time. You must define LDAP_THREAD_SAFE if and only if you +** link with -lldap_r. +** +** If you fail to define LDAP_THREAD_SAFE when linking with +** -lldap_r or define LDAP_THREAD_SAFE when linking with -lldap, +** provided header definitions and declarations may be incorrect. +** +*/ + +/* is -lldap_r available or not */ +#define LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE 1 + +/* LDAP v2 Referrals */ +/* #undef LDAP_API_FEATURE_X_OPENLDAP_V2_REFERRALS */ + +#endif /* LDAP_FEATURES */ diff --git a/contrib/openldap-cmake/darwin_aarch64/include/portable.h b/contrib/openldap-cmake/darwin_aarch64/include/portable.h new file mode 100644 index 00000000000..fdf4e89017e --- /dev/null +++ b/contrib/openldap-cmake/darwin_aarch64/include/portable.h @@ -0,0 +1,1169 @@ +/* include/portable.h. Generated from portable.hin by configure. */ +/* include/portable.hin. Generated from configure.in by autoheader. */ + + +/* begin of portable.h.pre */ +/* This work is part of OpenLDAP Software . 
+ * + * Copyright 1998-2020 The OpenLDAP Foundation + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted only as authorized by the OpenLDAP + * Public License. + * + * A copy of this license is available in the file LICENSE in the + * top-level directory of the distribution or, alternatively, at + * . + */ + +#ifndef _LDAP_PORTABLE_H +#define _LDAP_PORTABLE_H + +/* define this if needed to get reentrant functions */ +#ifndef REENTRANT +#define REENTRANT 1 +#endif +#ifndef _REENTRANT +#define _REENTRANT 1 +#endif + +/* define this if needed to get threadsafe functions */ +#ifndef THREADSAFE +#define THREADSAFE 1 +#endif +#ifndef _THREADSAFE +#define _THREADSAFE 1 +#endif +#ifndef THREAD_SAFE +#define THREAD_SAFE 1 +#endif +#ifndef _THREAD_SAFE +#define _THREAD_SAFE 1 +#endif + +#ifndef _SGI_MP_SOURCE +#define _SGI_MP_SOURCE 1 +#endif + +/* end of portable.h.pre */ + + +/* Define if building universal (internal helper macro) */ +/* #undef AC_APPLE_UNIVERSAL_BUILD */ + +/* define to use both and */ +/* #undef BOTH_STRINGS_H */ + +/* define if cross compiling */ +/* #undef CROSS_COMPILING */ + +/* set to the number of arguments ctime_r() expects */ +#define CTIME_R_NARGS 2 + +/* define if toupper() requires islower() */ +/* #undef C_UPPER_LOWER */ + +/* define if sys_errlist is not declared in stdio.h or errno.h */ +/* #undef DECL_SYS_ERRLIST */ + +/* define to enable slapi library */ +/* #undef ENABLE_SLAPI */ + +/* defined to be the EXE extension */ +#define EXEEXT "" + +/* set to the number of arguments gethostbyaddr_r() expects */ +/* #undef GETHOSTBYADDR_R_NARGS */ + +/* set to the number of arguments gethostbyname_r() expects */ +/* #undef GETHOSTBYNAME_R_NARGS */ + +/* Define to 1 if `TIOCGWINSZ' requires . */ +/* #undef GWINSZ_IN_SYS_IOCTL */ + +/* define if you have AIX security lib */ +/* #undef HAVE_AIX_SECURITY */ + +/* Define to 1 if you have the header file. */ +#define HAVE_ARPA_INET_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_ARPA_NAMESER_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_ASSERT_H 1 + +/* Define to 1 if you have the `bcopy' function. */ +#define HAVE_BCOPY 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_BITS_TYPES_H */ + +/* Define to 1 if you have the `chroot' function. */ +#define HAVE_CHROOT 1 + +/* Define to 1 if you have the `closesocket' function. */ +/* #undef HAVE_CLOSESOCKET */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_CONIO_H */ + +/* define if crypt(3) is available */ +/* #undef HAVE_CRYPT */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_CRYPT_H */ + +/* define if crypt_r() is also available */ +/* #undef HAVE_CRYPT_R */ + +/* Define to 1 if you have the `ctime_r' function. */ +#define HAVE_CTIME_R 1 + +/* define if you have Cyrus SASL */ +/* #undef HAVE_CYRUS_SASL */ + +/* define if your system supports /dev/poll */ +/* #undef HAVE_DEVPOLL */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_DIRECT_H */ + +/* Define to 1 if you have the header file, and it defines `DIR'. + */ +#define HAVE_DIRENT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_DLFCN_H 1 + +/* Define to 1 if you don't have `vprintf' but do have `_doprnt.' */ +/* #undef HAVE_DOPRNT */ + +/* define if system uses EBCDIC instead of ASCII */ +/* #undef HAVE_EBCDIC */ + +/* Define to 1 if you have the `endgrent' function. 
*/ +#define HAVE_ENDGRENT 1 + +/* Define to 1 if you have the `endpwent' function. */ +#define HAVE_ENDPWENT 1 + +/* define if your system supports epoll */ +/* #undef HAVE_EPOLL */ + +/* Define to 1 if you have the header file. */ +#define HAVE_ERRNO_H 1 + +/* Define to 1 if you have the `fcntl' function. */ +#define HAVE_FCNTL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_FCNTL_H 1 + +/* define if you actually have FreeBSD fetch(3) */ +/* #undef HAVE_FETCH */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_FILIO_H */ + +/* Define to 1 if you have the `flock' function. */ +#define HAVE_FLOCK 1 + +/* Define to 1 if you have the `fstat' function. */ +#define HAVE_FSTAT 1 + +/* Define to 1 if you have the `gai_strerror' function. */ +#define HAVE_GAI_STRERROR 1 + +/* Define to 1 if you have the `getaddrinfo' function. */ +#define HAVE_GETADDRINFO 1 + +/* Define to 1 if you have the `getdtablesize' function. */ +#define HAVE_GETDTABLESIZE 1 + +/* Define to 1 if you have the `geteuid' function. */ +#define HAVE_GETEUID 1 + +/* Define to 1 if you have the `getgrgid' function. */ +#define HAVE_GETGRGID 1 + +/* Define to 1 if you have the `gethostbyaddr_r' function. */ +/* #undef HAVE_GETHOSTBYADDR_R */ + +/* Define to 1 if you have the `gethostbyname_r' function. */ +/* #undef HAVE_GETHOSTBYNAME_R */ + +/* Define to 1 if you have the `gethostname' function. */ +#define HAVE_GETHOSTNAME 1 + +/* Define to 1 if you have the `getnameinfo' function. */ +#define HAVE_GETNAMEINFO 1 + +/* Define to 1 if you have the `getopt' function. */ +#define HAVE_GETOPT 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_GETOPT_H 1 + +/* Define to 1 if you have the `getpassphrase' function. */ +/* #undef HAVE_GETPASSPHRASE */ + +/* Define to 1 if you have the `getpeereid' function. */ +#define HAVE_GETPEEREID 1 + +/* Define to 1 if you have the `getpeerucred' function. */ +/* #undef HAVE_GETPEERUCRED */ + +/* Define to 1 if you have the `getpwnam' function. */ +#define HAVE_GETPWNAM 1 + +/* Define to 1 if you have the `getpwuid' function. */ +#define HAVE_GETPWUID 1 + +/* Define to 1 if you have the `getspnam' function. */ +/* #undef HAVE_GETSPNAM */ + +/* Define to 1 if you have the `gettimeofday' function. */ +#define HAVE_GETTIMEOFDAY 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_GMP_H */ + +/* Define to 1 if you have the `gmtime_r' function. */ +#define HAVE_GMTIME_R 1 + +/* define if you have GNUtls */ +/* #undef HAVE_GNUTLS */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_GNUTLS_GNUTLS_H */ + +/* if you have GNU Pth */ +/* #undef HAVE_GNU_PTH */ + +/* Define to 1 if you have the header file. */ +#define HAVE_GRP_H 1 + +/* Define to 1 if you have the `hstrerror' function. */ +#define HAVE_HSTRERROR 1 + +/* define to you inet_aton(3) is available */ +#define HAVE_INET_ATON 1 + +/* Define to 1 if you have the `inet_ntoa_b' function. */ +/* #undef HAVE_INET_NTOA_B */ + +/* Define to 1 if you have the `inet_ntop' function. */ +#define HAVE_INET_NTOP 1 + +/* Define to 1 if you have the `initgroups' function. */ +#define HAVE_INITGROUPS 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_INTTYPES_H 1 + +/* Define to 1 if you have the `ioctl' function. */ +#define HAVE_IOCTL 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_IO_H */ + +/* define if your system supports kqueue */ +#define HAVE_KQUEUE 1 + +/* Define to 1 if you have the `gen' library (-lgen). 
*/ +/* #undef HAVE_LIBGEN */ + +/* Define to 1 if you have the `gmp' library (-lgmp). */ +/* #undef HAVE_LIBGMP */ + +/* Define to 1 if you have the `inet' library (-linet). */ +/* #undef HAVE_LIBINET */ + +/* define if you have libtool -ltdl */ +/* #undef HAVE_LIBLTDL */ + +/* Define to 1 if you have the `net' library (-lnet). */ +/* #undef HAVE_LIBNET */ + +/* Define to 1 if you have the `nsl' library (-lnsl). */ +/* #undef HAVE_LIBNSL */ + +/* Define to 1 if you have the `nsl_s' library (-lnsl_s). */ +/* #undef HAVE_LIBNSL_S */ + +/* Define to 1 if you have the `socket' library (-lsocket). */ +/* #undef HAVE_LIBSOCKET */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_LIBUTIL_H */ + +/* Define to 1 if you have the `V3' library (-lV3). */ +/* #undef HAVE_LIBV3 */ + +/* Define to 1 if you have the header file. */ +#define HAVE_LIMITS_H 1 + +/* if you have LinuxThreads */ +/* #undef HAVE_LINUX_THREADS */ + +/* Define to 1 if you have the header file. */ +#define HAVE_LOCALE_H 1 + +/* Define to 1 if you have the `localtime_r' function. */ +#define HAVE_LOCALTIME_R 1 + +/* Define to 1 if you have the `lockf' function. */ +#define HAVE_LOCKF 1 + +/* Define to 1 if the system has the type `long long'. */ +#define HAVE_LONG_LONG 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_LTDL_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_MALLOC_H */ + +/* Define to 1 if you have the `memcpy' function. */ +#define HAVE_MEMCPY 1 + +/* Define to 1 if you have the `memmove' function. */ +#define HAVE_MEMMOVE 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_MEMORY_H 1 + +/* Define to 1 if you have the `memrchr' function. */ +/* #undef HAVE_MEMRCHR */ + +/* Define to 1 if you have the `mkstemp' function. */ +#define HAVE_MKSTEMP 1 + +/* Define to 1 if you have the `mktemp' function. */ +#define HAVE_MKTEMP 1 + +/* define this if you have mkversion */ +#define HAVE_MKVERSION 1 + +/* Define to 1 if you have the header file, and it defines `DIR'. */ +/* #undef HAVE_NDIR_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_NETINET_TCP_H 1 + +/* define if strerror_r returns char* instead of int */ +/* #undef HAVE_NONPOSIX_STRERROR_R */ + +/* if you have NT Event Log */ +/* #undef HAVE_NT_EVENT_LOG */ + +/* if you have NT Service Manager */ +/* #undef HAVE_NT_SERVICE_MANAGER */ + +/* if you have NT Threads */ +/* #undef HAVE_NT_THREADS */ + +/* define if you have OpenSSL */ +#define HAVE_OPENSSL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_OPENSSL_BN_H 1 + +/* define if you have OpenSSL with CRL checking capability */ +#define HAVE_OPENSSL_CRL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_OPENSSL_CRYPTO_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_OPENSSL_SSL_H 1 + +/* Define to 1 if you have the `pipe' function. */ +#define HAVE_PIPE 1 + +/* Define to 1 if you have the `poll' function. */ +#define HAVE_POLL 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_POLL_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_PROCESS_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_PSAP_H */ + +/* define to pthreads API spec revision */ +#define HAVE_PTHREADS 10 + +/* define if you have pthread_detach function */ +#define HAVE_PTHREAD_DETACH 1 + +/* Define to 1 if you have the `pthread_getconcurrency' function. */ +#define HAVE_PTHREAD_GETCONCURRENCY 1 + +/* Define to 1 if you have the header file. 
*/ +#define HAVE_PTHREAD_H 1 + +/* Define to 1 if you have the `pthread_kill' function. */ +#define HAVE_PTHREAD_KILL 1 + +/* Define to 1 if you have the `pthread_kill_other_threads_np' function. */ +/* #undef HAVE_PTHREAD_KILL_OTHER_THREADS_NP */ + +/* define if you have pthread_rwlock_destroy function */ +#define HAVE_PTHREAD_RWLOCK_DESTROY 1 + +/* Define to 1 if you have the `pthread_setconcurrency' function. */ +#define HAVE_PTHREAD_SETCONCURRENCY 1 + +/* Define to 1 if you have the `pthread_yield' function. */ +/* #undef HAVE_PTHREAD_YIELD */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_PTH_H */ + +/* Define to 1 if the system has the type `ptrdiff_t'. */ +#define HAVE_PTRDIFF_T 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_PWD_H 1 + +/* Define to 1 if you have the `read' function. */ +#define HAVE_READ 1 + +/* Define to 1 if you have the `recv' function. */ +#define HAVE_RECV 1 + +/* Define to 1 if you have the `recvfrom' function. */ +#define HAVE_RECVFROM 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_REGEX_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_RESOLV_H */ + +/* define if you have res_query() */ +/* #undef HAVE_RES_QUERY */ + +/* define if OpenSSL needs RSAref */ +/* #undef HAVE_RSAREF */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SASL_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SASL_SASL_H */ + +/* define if your SASL library has sasl_version() */ +/* #undef HAVE_SASL_VERSION */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SCHED_H 1 + +/* Define to 1 if you have the `sched_yield' function. */ +#define HAVE_SCHED_YIELD 1 + +/* Define to 1 if you have the `send' function. */ +#define HAVE_SEND 1 + +/* Define to 1 if you have the `sendmsg' function. */ +#define HAVE_SENDMSG 1 + +/* Define to 1 if you have the `sendto' function. */ +#define HAVE_SENDTO 1 + +/* Define to 1 if you have the `setegid' function. */ +#define HAVE_SETEGID 1 + +/* Define to 1 if you have the `seteuid' function. */ +#define HAVE_SETEUID 1 + +/* Define to 1 if you have the `setgid' function. */ +#define HAVE_SETGID 1 + +/* Define to 1 if you have the `setpwfile' function. */ +/* #undef HAVE_SETPWFILE */ + +/* Define to 1 if you have the `setsid' function. */ +#define HAVE_SETSID 1 + +/* Define to 1 if you have the `setuid' function. */ +#define HAVE_SETUID 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SGTTY_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SHADOW_H */ + +/* Define to 1 if you have the `sigaction' function. */ +#define HAVE_SIGACTION 1 + +/* Define to 1 if you have the `signal' function. */ +#define HAVE_SIGNAL 1 + +/* Define to 1 if you have the `sigset' function. */ +#define HAVE_SIGSET 1 + +/* define if you have -lslp */ +/* #undef HAVE_SLP */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SLP_H */ + +/* Define to 1 if you have the `snprintf' function. */ +#define HAVE_SNPRINTF 1 + +/* if you have spawnlp() */ +/* #undef HAVE_SPAWNLP */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SQLEXT_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SQL_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_STDDEF_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STDINT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STDLIB_H 1 + +/* Define to 1 if you have the `strdup' function. 
*/ +#define HAVE_STRDUP 1 + +/* Define to 1 if you have the `strerror' function. */ +#define HAVE_STRERROR 1 + +/* Define to 1 if you have the `strerror_r' function. */ +#define HAVE_STRERROR_R 1 + +/* Define to 1 if you have the `strftime' function. */ +#define HAVE_STRFTIME 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STRINGS_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_STRING_H 1 + +/* Define to 1 if you have the `strpbrk' function. */ +#define HAVE_STRPBRK 1 + +/* Define to 1 if you have the `strrchr' function. */ +#define HAVE_STRRCHR 1 + +/* Define to 1 if you have the `strsep' function. */ +#define HAVE_STRSEP 1 + +/* Define to 1 if you have the `strspn' function. */ +#define HAVE_STRSPN 1 + +/* Define to 1 if you have the `strstr' function. */ +#define HAVE_STRSTR 1 + +/* Define to 1 if you have the `strtol' function. */ +#define HAVE_STRTOL 1 + +/* Define to 1 if you have the `strtoll' function. */ +#define HAVE_STRTOLL 1 + +/* Define to 1 if you have the `strtoq' function. */ +#define HAVE_STRTOQ 1 + +/* Define to 1 if you have the `strtoul' function. */ +#define HAVE_STRTOUL 1 + +/* Define to 1 if you have the `strtoull' function. */ +#define HAVE_STRTOULL 1 + +/* Define to 1 if you have the `strtouq' function. */ +#define HAVE_STRTOUQ 1 + +/* Define to 1 if `msg_accrightslen' is a member of `struct msghdr'. */ +/* #undef HAVE_STRUCT_MSGHDR_MSG_ACCRIGHTSLEN */ + +/* Define to 1 if `msg_control' is a member of `struct msghdr'. */ +/* #undef HAVE_STRUCT_MSGHDR_MSG_CONTROL */ + +/* Define to 1 if `pw_gecos' is a member of `struct passwd'. */ +#define HAVE_STRUCT_PASSWD_PW_GECOS 1 + +/* Define to 1 if `pw_passwd' is a member of `struct passwd'. */ +#define HAVE_STRUCT_PASSWD_PW_PASSWD 1 + +/* Define to 1 if `st_blksize' is a member of `struct stat'. */ +#define HAVE_STRUCT_STAT_ST_BLKSIZE 1 + +/* Define to 1 if `st_fstype' is a member of `struct stat'. */ +/* #undef HAVE_STRUCT_STAT_ST_FSTYPE */ + +/* define to 1 if st_fstype is char * */ +/* #undef HAVE_STRUCT_STAT_ST_FSTYPE_CHAR */ + +/* define to 1 if st_fstype is int */ +/* #undef HAVE_STRUCT_STAT_ST_FSTYPE_INT */ + +/* Define to 1 if `st_vfstype' is a member of `struct stat'. */ +/* #undef HAVE_STRUCT_STAT_ST_VFSTYPE */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYNCH_H */ + +/* Define to 1 if you have the `sysconf' function. */ +#define HAVE_SYSCONF 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYSEXITS_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYSLOG_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_DEVPOLL_H */ + +/* Define to 1 if you have the header file, and it defines `DIR'. + */ +/* #undef HAVE_SYS_DIR_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_EPOLL_H */ + +/* define if you actually have sys_errlist in your libs */ +#define HAVE_SYS_ERRLIST 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_ERRNO_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_EVENT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_FILE_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_FILIO_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_FSTYP_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_IOCTL_H 1 + +/* Define to 1 if you have the header file, and it defines `DIR'. + */ +/* #undef HAVE_SYS_NDIR_H */ + +/* Define to 1 if you have the header file. 
*/ +#define HAVE_SYS_PARAM_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_POLL_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_PRIVGRP_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_RESOURCE_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_SELECT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_SOCKET_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_STAT_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_SYSLOG_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_TIME_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_TYPES_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_UCRED_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_UIO_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_SYS_UN_H 1 + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_UUID_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_SYS_VMOUNT_H */ + +/* Define to 1 if you have that is POSIX.1 compatible. */ +#define HAVE_SYS_WAIT_H 1 + +/* define if you have -lwrap */ +/* #undef HAVE_TCPD */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_TCPD_H */ + +/* Define to 1 if you have the header file. */ +#define HAVE_TERMIOS_H 1 + +/* if you have Solaris LWP (thr) package */ +/* #undef HAVE_THR */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_THREAD_H */ + +/* Define to 1 if you have the `thr_getconcurrency' function. */ +/* #undef HAVE_THR_GETCONCURRENCY */ + +/* Define to 1 if you have the `thr_setconcurrency' function. */ +/* #undef HAVE_THR_SETCONCURRENCY */ + +/* Define to 1 if you have the `thr_yield' function. */ +/* #undef HAVE_THR_YIELD */ + +/* define if you have TLS */ +#define HAVE_TLS 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_UNISTD_H 1 + +/* Define to 1 if you have the header file. */ +#define HAVE_UTIME_H 1 + +/* define if you have uuid_generate() */ +/* #undef HAVE_UUID_GENERATE */ + +/* define if you have uuid_to_str() */ +/* #undef HAVE_UUID_TO_STR */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_UUID_UUID_H */ + +/* Define to 1 if you have the `vprintf' function. */ +#define HAVE_VPRINTF 1 + +/* Define to 1 if you have the `vsnprintf' function. */ +#define HAVE_VSNPRINTF 1 + +/* Define to 1 if you have the `wait4' function. */ +#define HAVE_WAIT4 1 + +/* Define to 1 if you have the `waitpid' function. */ +#define HAVE_WAITPID 1 + +/* define if you have winsock */ +/* #undef HAVE_WINSOCK */ + +/* define if you have winsock2 */ +/* #undef HAVE_WINSOCK2 */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_WINSOCK2_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_WINSOCK_H */ + +/* Define to 1 if you have the header file. */ +/* #undef HAVE_WIREDTIGER_H */ + +/* Define to 1 if you have the `write' function. */ +#define HAVE_WRITE 1 + +/* define if select implicitly yields */ +#define HAVE_YIELDING_SELECT 1 + +/* Define to 1 if you have the `_vsnprintf' function. 
*/ +/* #undef HAVE__VSNPRINTF */ + +/* define to 32-bit or greater integer type */ +#define LBER_INT_T int + +/* define to large integer type */ +#define LBER_LEN_T long + +/* define to socket descriptor type */ +#define LBER_SOCKET_T int + +/* define to large integer type */ +#define LBER_TAG_T long + +/* define to 1 if library is thread safe */ +#define LDAP_API_FEATURE_X_OPENLDAP_THREAD_SAFE 1 + +/* define to LDAP VENDOR VERSION */ +/* #undef LDAP_API_FEATURE_X_OPENLDAP_V2_REFERRALS */ + +/* define this to add debugging code */ +/* #undef LDAP_DEBUG */ + +/* define if LDAP libs are dynamic */ +/* #undef LDAP_LIBS_DYNAMIC */ + +/* define to support PF_INET6 */ +#define LDAP_PF_INET6 1 + +/* define to support PF_LOCAL */ +#define LDAP_PF_LOCAL 1 + +/* define this to add SLAPI code */ +/* #undef LDAP_SLAPI */ + +/* define this to add syslog code */ +/* #undef LDAP_SYSLOG */ + +/* Version */ +#define LDAP_VENDOR_VERSION 20501 + +/* Major */ +#define LDAP_VENDOR_VERSION_MAJOR 2 + +/* Minor */ +#define LDAP_VENDOR_VERSION_MINOR 5 + +/* Patch */ +#define LDAP_VENDOR_VERSION_PATCH X + +/* Define to the sub-directory where libtool stores uninstalled libraries. */ +#define LT_OBJDIR ".libs/" + +/* define if memcmp is not 8-bit clean or is otherwise broken */ +/* #undef NEED_MEMCMP_REPLACEMENT */ + +/* define if you have (or want) no threads */ +/* #undef NO_THREADS */ + +/* define to use the original debug style */ +/* #undef OLD_DEBUG */ + +/* Package */ +#define OPENLDAP_PACKAGE "OpenLDAP" + +/* Version */ +#define OPENLDAP_VERSION "2.5.X" + +/* Define to the address where bug reports for this package should be sent. */ +#define PACKAGE_BUGREPORT "" + +/* Define to the full name of this package. */ +#define PACKAGE_NAME "" + +/* Define to the full name and version of this package. */ +#define PACKAGE_STRING "" + +/* Define to the one symbol short name of this package. */ +#define PACKAGE_TARNAME "" + +/* Define to the home page for this package. */ +#define PACKAGE_URL "" + +/* Define to the version of this package. */ +#define PACKAGE_VERSION "" + +/* define if sched_yield yields the entire process */ +/* #undef REPLACE_BROKEN_YIELD */ + +/* Define as the return type of signal handlers (`int' or `void'). */ +#define RETSIGTYPE void + +/* Define to the type of arg 1 for `select'. */ +#define SELECT_TYPE_ARG1 int + +/* Define to the type of args 2, 3 and 4 for `select'. */ +#define SELECT_TYPE_ARG234 (fd_set *) + +/* Define to the type of arg 5 for `select'. */ +#define SELECT_TYPE_ARG5 (struct timeval *) + +/* The size of `int', as computed by sizeof. */ +#define SIZEOF_INT 4 + +/* The size of `long', as computed by sizeof. */ +#define SIZEOF_LONG 8 + +/* The size of `long long', as computed by sizeof. */ +#define SIZEOF_LONG_LONG 8 + +/* The size of `short', as computed by sizeof. */ +#define SIZEOF_SHORT 2 + +/* The size of `wchar_t', as computed by sizeof. 
*/ +#define SIZEOF_WCHAR_T 4 + +/* define to support per-object ACIs */ +/* #undef SLAPD_ACI_ENABLED */ + +/* define to support LDAP Async Metadirectory backend */ +/* #undef SLAPD_ASYNCMETA */ + +/* define to support cleartext passwords */ +/* #undef SLAPD_CLEARTEXT */ + +/* define to support crypt(3) passwords */ +/* #undef SLAPD_CRYPT */ + +/* define to support DNS SRV backend */ +/* #undef SLAPD_DNSSRV */ + +/* define to support LDAP backend */ +/* #undef SLAPD_LDAP */ + +/* define to support MDB backend */ +/* #undef SLAPD_MDB */ + +/* define to support LDAP Metadirectory backend */ +/* #undef SLAPD_META */ + +/* define to support modules */ +/* #undef SLAPD_MODULES */ + +/* dynamically linked module */ +#define SLAPD_MOD_DYNAMIC 2 + +/* statically linked module */ +#define SLAPD_MOD_STATIC 1 + +/* define to support cn=Monitor backend */ +/* #undef SLAPD_MONITOR */ + +/* define to support NDB backend */ +/* #undef SLAPD_NDB */ + +/* define to support NULL backend */ +/* #undef SLAPD_NULL */ + +/* define for In-Directory Access Logging overlay */ +/* #undef SLAPD_OVER_ACCESSLOG */ + +/* define for Audit Logging overlay */ +/* #undef SLAPD_OVER_AUDITLOG */ + +/* define for Automatic Certificate Authority overlay */ +/* #undef SLAPD_OVER_AUTOCA */ + +/* define for Collect overlay */ +/* #undef SLAPD_OVER_COLLECT */ + +/* define for Attribute Constraint overlay */ +/* #undef SLAPD_OVER_CONSTRAINT */ + +/* define for Dynamic Directory Services overlay */ +/* #undef SLAPD_OVER_DDS */ + +/* define for Dynamic Directory Services overlay */ +/* #undef SLAPD_OVER_DEREF */ + +/* define for Dynamic Group overlay */ +/* #undef SLAPD_OVER_DYNGROUP */ + +/* define for Dynamic List overlay */ +/* #undef SLAPD_OVER_DYNLIST */ + +/* define for Reverse Group Membership overlay */ +/* #undef SLAPD_OVER_MEMBEROF */ + +/* define for Password Policy overlay */ +/* #undef SLAPD_OVER_PPOLICY */ + +/* define for Proxy Cache overlay */ +/* #undef SLAPD_OVER_PROXYCACHE */ + +/* define for Referential Integrity overlay */ +/* #undef SLAPD_OVER_REFINT */ + +/* define for Return Code overlay */ +/* #undef SLAPD_OVER_RETCODE */ + +/* define for Rewrite/Remap overlay */ +/* #undef SLAPD_OVER_RWM */ + +/* define for Sequential Modify overlay */ +/* #undef SLAPD_OVER_SEQMOD */ + +/* define for ServerSideSort/VLV overlay */ +/* #undef SLAPD_OVER_SSSVLV */ + +/* define for Syncrepl Provider overlay */ +/* #undef SLAPD_OVER_SYNCPROV */ + +/* define for Translucent Proxy overlay */ +/* #undef SLAPD_OVER_TRANSLUCENT */ + +/* define for Attribute Uniqueness overlay */ +/* #undef SLAPD_OVER_UNIQUE */ + +/* define for Value Sorting overlay */ +/* #undef SLAPD_OVER_VALSORT */ + +/* define to support PASSWD backend */ +/* #undef SLAPD_PASSWD */ + +/* define to support PERL backend */ +/* #undef SLAPD_PERL */ + +/* define to support relay backend */ +/* #undef SLAPD_RELAY */ + +/* define to support reverse lookups */ +/* #undef SLAPD_RLOOKUPS */ + +/* define to support SHELL backend */ +/* #undef SLAPD_SHELL */ + +/* define to support SOCK backend */ +/* #undef SLAPD_SOCK */ + +/* define to support SASL passwords */ +/* #undef SLAPD_SPASSWD */ + +/* define to support SQL backend */ +/* #undef SLAPD_SQL */ + +/* define to support WiredTiger backend */ +/* #undef SLAPD_WT */ + +/* define to support run-time loadable ACL */ +/* #undef SLAP_DYNACL */ + +/* Define to 1 if you have the ANSI C header files. */ +#define STDC_HEADERS 1 + +/* Define to 1 if you can safely include both and . 
*/ +#define TIME_WITH_SYS_TIME 1 + +/* Define to 1 if your declares `struct tm'. */ +/* #undef TM_IN_SYS_TIME */ + +/* set to urandom device */ +#define URANDOM_DEVICE "/dev/urandom" + +/* define to use OpenSSL BIGNUM for MP */ +/* #undef USE_MP_BIGNUM */ + +/* define to use GMP for MP */ +/* #undef USE_MP_GMP */ + +/* define to use 'long' for MP */ +/* #undef USE_MP_LONG */ + +/* define to use 'long long' for MP */ +/* #undef USE_MP_LONG_LONG */ + +/* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most + significant byte first (like Motorola and SPARC, unlike Intel). */ +#if defined AC_APPLE_UNIVERSAL_BUILD +# if defined __BIG_ENDIAN__ +# define WORDS_BIGENDIAN 1 +# endif +#else +# ifndef WORDS_BIGENDIAN +/* # undef WORDS_BIGENDIAN */ +# endif +#endif + +/* Define to the type of arg 3 for `accept'. */ +#define ber_socklen_t socklen_t + +/* Define to `char *' if does not define. */ +/* #undef caddr_t */ + +/* Define to empty if `const' does not conform to ANSI C. */ +/* #undef const */ + +/* Define to `int' if doesn't define. */ +/* #undef gid_t */ + +/* Define to `int' if does not define. */ +/* #undef mode_t */ + +/* Define to `long' if does not define. */ +/* #undef off_t */ + +/* Define to `int' if does not define. */ +/* #undef pid_t */ + +/* Define to `int' if does not define. */ +/* #undef sig_atomic_t */ + +/* Define to `unsigned' if does not define. */ +/* #undef size_t */ + +/* define to snprintf routine */ +/* #undef snprintf */ + +/* Define like ber_socklen_t if does not define. */ +/* #undef socklen_t */ + +/* Define to `signed int' if does not define. */ +/* #undef ssize_t */ + +/* Define to `int' if doesn't define. */ +/* #undef uid_t */ + +/* define as empty if volatile is not supported */ +/* #undef volatile */ + +/* define to snprintf routine */ +/* #undef vsnprintf */ + + +/* begin of portable.h.post */ + +#ifdef _WIN32 +/* don't suck in all of the win32 api */ +# define WIN32_LEAN_AND_MEAN 1 +#endif + +#ifndef LDAP_NEEDS_PROTOTYPES +/* force LDAP_P to always include prototypes */ +#define LDAP_NEEDS_PROTOTYPES 1 +#endif + +#ifndef LDAP_REL_ENG +#if (LDAP_VENDOR_VERSION == 000000) && !defined(LDAP_DEVEL) +#define LDAP_DEVEL +#endif +#if defined(LDAP_DEVEL) && !defined(LDAP_TEST) +#define LDAP_TEST +#endif +#endif + +#ifdef HAVE_STDDEF_H +# include +#endif + +#ifdef HAVE_EBCDIC +/* ASCII/EBCDIC converting replacements for stdio funcs + * vsnprintf and snprintf are used too, but they are already + * checked by the configure script + */ +#define fputs ber_pvt_fputs +#define fgets ber_pvt_fgets +#define printf ber_pvt_printf +#define fprintf ber_pvt_fprintf +#define vfprintf ber_pvt_vfprintf +#define vsprintf ber_pvt_vsprintf +#endif + +#include "ac/fdset.h" + +#include "ldap_cdefs.h" +#include "ldap_features.h" + +#include "ac/assert.h" +#include "ac/localize.h" + +#endif /* _LDAP_PORTABLE_H */ +/* end of portable.h.post */ + diff --git a/contrib/poco b/contrib/poco index 83beecccb09..b7d9ec16ee3 160000 --- a/contrib/poco +++ b/contrib/poco @@ -1 +1 @@ -Subproject commit 83beecccb09eec0c9fd2669cacea03ede1d9f138 +Subproject commit b7d9ec16ee33ca76643d5fcd907ea9a33285640a diff --git a/contrib/poco-cmake/Foundation/CMakeLists.txt b/contrib/poco-cmake/Foundation/CMakeLists.txt index f4647461ec0..6476845b4e3 100644 --- a/contrib/poco-cmake/Foundation/CMakeLists.txt +++ b/contrib/poco-cmake/Foundation/CMakeLists.txt @@ -233,3 +233,10 @@ else () message (STATUS "Using Poco::Foundation: ${LIBRARY_POCO_FOUNDATION} ${INCLUDE_POCO_FOUNDATION}") endif () + 
+if(OS_DARWIN AND ARCH_AARCH64) + target_compile_definitions (_poco_foundation + PRIVATE + POCO_NO_STAT64 + ) +endif() diff --git a/contrib/rocksdb-cmake/CMakeLists.txt b/contrib/rocksdb-cmake/CMakeLists.txt index 77a30776a4a..117015ef5c2 100644 --- a/contrib/rocksdb-cmake/CMakeLists.txt +++ b/contrib/rocksdb-cmake/CMakeLists.txt @@ -142,14 +142,14 @@ if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64") endif(HAS_ALTIVEC) endif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64") -if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64") +if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64|arm64|ARM64") CHECK_C_COMPILER_FLAG("-march=armv8-a+crc+crypto" HAS_ARMV8_CRC) if(HAS_ARMV8_CRC) message(STATUS " HAS_ARMV8_CRC yes") set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -march=armv8-a+crc+crypto -Wno-unused-function") endif(HAS_ARMV8_CRC) -endif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64") +endif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|AARCH64|arm64|ARM64") include(CheckCXXSourceCompiles) diff --git a/contrib/zlib-ng b/contrib/zlib-ng index 6fd1846c8b8..5cc4d232020 160000 --- a/contrib/zlib-ng +++ b/contrib/zlib-ng @@ -1 +1 @@ -Subproject commit 6fd1846c8b8f59436fe2dd752d0f316ddbb64df6 +Subproject commit 5cc4d232020dc66d1d6c5438834457e2a2f6127b diff --git a/debian/changelog b/debian/changelog index be77dfdefe9..8b6626416a9 100644 --- a/debian/changelog +++ b/debian/changelog @@ -1,5 +1,5 @@ -clickhouse (21.5.1.1) unstable; urgency=low +clickhouse (21.6.1.1) unstable; urgency=low * Modified source code - -- clickhouse-release Fri, 02 Apr 2021 18:34:26 +0300 + -- clickhouse-release Tue, 20 Apr 2021 01:48:16 +0300 diff --git a/debian/clickhouse-common-static.install b/debian/clickhouse-common-static.install index 17c955a12a9..087a6dbba8f 100644 --- a/debian/clickhouse-common-static.install +++ b/debian/clickhouse-common-static.install @@ -1,5 +1,5 @@ usr/bin/clickhouse usr/bin/clickhouse-odbc-bridge +usr/bin/clickhouse-library-bridge usr/bin/clickhouse-extract-from-config usr/share/bash-completion/completions -etc/security/limits.d/clickhouse.conf diff --git a/debian/clickhouse-server.config b/debian/clickhouse-server.config deleted file mode 100644 index 636ff7f4da7..00000000000 --- a/debian/clickhouse-server.config +++ /dev/null @@ -1,16 +0,0 @@ -#!/bin/sh -e - -test -f /usr/share/debconf/confmodule && . /usr/share/debconf/confmodule - -db_fget clickhouse-server/default-password seen || true -password_seen="$RET" - -if [ "$1" = "reconfigure" ]; then - password_seen=false -fi - -if [ "$password_seen" != "true" ]; then - db_input high clickhouse-server/default-password || true - db_go || true -fi -db_go || true diff --git a/debian/clickhouse-server.postinst b/debian/clickhouse-server.postinst index dc876f45954..419c13e3daf 100644 --- a/debian/clickhouse-server.postinst +++ b/debian/clickhouse-server.postinst @@ -23,11 +23,13 @@ if [ ! 
-f "/etc/debian_version" ]; then fi if [ "$1" = configure ] || [ -n "$not_deb_os" ]; then + + ${CLICKHOUSE_GENERIC_PROGRAM} install --user "${CLICKHOUSE_USER}" --group "${CLICKHOUSE_GROUP}" --pid-path "${CLICKHOUSE_PIDDIR}" --config-path "${CLICKHOUSE_CONFDIR}" --binary-path "${CLICKHOUSE_BINDIR}" --log-path "${CLICKHOUSE_LOGDIR}" --data-path "${CLICKHOUSE_DATADIR}" + if [ -x "/bin/systemctl" ] && [ -f /etc/systemd/system/clickhouse-server.service ] && [ -d /run/systemd/system ]; then # if old rc.d service present - remove it if [ -x "/etc/init.d/clickhouse-server" ] && [ -x "/usr/sbin/update-rc.d" ]; then /usr/sbin/update-rc.d clickhouse-server remove - echo "ClickHouse init script has migrated to systemd. Please manually stop old server and restart the service: sudo killall clickhouse-server && sleep 5 && sudo service clickhouse-server restart" fi /bin/systemctl daemon-reload @@ -38,10 +40,8 @@ if [ "$1" = configure ] || [ -n "$not_deb_os" ]; then if [ -x "/usr/sbin/update-rc.d" ]; then /usr/sbin/update-rc.d clickhouse-server defaults 19 19 >/dev/null || exit $? else - echo # TODO [ "$OS" = "rhel" ] || [ "$OS" = "centos" ] || [ "$OS" = "fedora" ] + echo # Other OS fi fi fi - - ${CLICKHOUSE_GENERIC_PROGRAM} install --user "${CLICKHOUSE_USER}" --group "${CLICKHOUSE_GROUP}" --pid-path "${CLICKHOUSE_PIDDIR}" --config-path "${CLICKHOUSE_CONFDIR}" --binary-path "${CLICKHOUSE_BINDIR}" --log-path "${CLICKHOUSE_LOGDIR}" --data-path "${CLICKHOUSE_DATADIR}" fi diff --git a/debian/clickhouse-server.preinst b/debian/clickhouse-server.preinst deleted file mode 100644 index 3529aefa7da..00000000000 --- a/debian/clickhouse-server.preinst +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/sh - -if [ "$1" = "upgrade" ]; then - # Return etc/cron.d/clickhouse-server to original state - service clickhouse-server disable_cron ||: -fi - -#DEBHELPER# diff --git a/debian/clickhouse-server.prerm b/debian/clickhouse-server.prerm deleted file mode 100644 index 02e855a7125..00000000000 --- a/debian/clickhouse-server.prerm +++ /dev/null @@ -1,6 +0,0 @@ -#!/bin/sh - -if [ "$1" = "upgrade" ] || [ "$1" = "remove" ]; then - # Return etc/cron.d/clickhouse-server to original state - service clickhouse-server disable_cron ||: -fi diff --git a/debian/clickhouse-server.templates b/debian/clickhouse-server.templates deleted file mode 100644 index dd55824e15c..00000000000 --- a/debian/clickhouse-server.templates +++ /dev/null @@ -1,3 +0,0 @@ -Template: clickhouse-server/default-password -Type: password -Description: Enter password for default user: diff --git a/debian/clickhouse.limits b/debian/clickhouse.limits deleted file mode 100644 index aca44082c4e..00000000000 --- a/debian/clickhouse.limits +++ /dev/null @@ -1,2 +0,0 @@ -clickhouse soft nofile 262144 -clickhouse hard nofile 262144 diff --git a/debian/rules b/debian/rules index 8eb47e95389..73d1f3d3b34 100755 --- a/debian/rules +++ b/debian/rules @@ -113,9 +113,6 @@ override_dh_install: ln -sf clickhouse-server.docs debian/clickhouse-client.docs ln -sf clickhouse-server.docs debian/clickhouse-common-static.docs - mkdir -p $(DESTDIR)/etc/security/limits.d - cp debian/clickhouse.limits $(DESTDIR)/etc/security/limits.d/clickhouse.conf - # systemd compatibility mkdir -p $(DESTDIR)/etc/systemd/system/ cp debian/clickhouse-server.service $(DESTDIR)/etc/systemd/system/ diff --git a/debian/watch b/debian/watch index 7ad4cedf713..ed3cab97ade 100644 --- a/debian/watch +++ b/debian/watch @@ -1,6 +1,6 @@ version=4 opts="filenamemangle=s%(?:.*?)?v?(\d[\d.]*)-stable\.tar\.gz%clickhouse-$1.tar.gz%" 
\ - https://github.com/yandex/clickhouse/tags \ + https://github.com/ClickHouse/ClickHouse/tags \ (?:.*?/)?v?(\d[\d.]*)-stable\.tar\.gz debian uupdate diff --git a/docker/client/Dockerfile b/docker/client/Dockerfile index 2efba9735ae..569025dec1c 100644 --- a/docker/client/Dockerfile +++ b/docker/client/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:18.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.5.1.* +ARG version=21.6.1.* RUN apt-get update \ && apt-get install --yes --no-install-recommends \ diff --git a/docker/images.json b/docker/images.json index 303bd159ce4..e2e22468596 100644 --- a/docker/images.json +++ b/docker/images.json @@ -138,7 +138,8 @@ "docker/test/stateless_unbundled", "docker/test/stateless_pytest", "docker/test/integration/base", - "docker/test/fuzzer" + "docker/test/fuzzer", + "docker/test/keeper-jepsen" ] }, "docker/packager/unbundled": { @@ -159,5 +160,9 @@ "docker/test/sqlancer": { "name": "yandex/clickhouse-sqlancer-test", "dependent": [] + }, + "docker/test/keeper-jepsen": { + "name": "yandex/clickhouse-keeper-jepsen-test", + "dependent": [] } } diff --git a/docker/packager/binary/Dockerfile b/docker/packager/binary/Dockerfile index 94c7f934f6e..fccae66b66b 100644 --- a/docker/packager/binary/Dockerfile +++ b/docker/packager/binary/Dockerfile @@ -35,35 +35,32 @@ RUN apt-get update \ RUN apt-get update \ && apt-get install \ bash \ - cmake \ + build-essential \ ccache \ - curl \ - gcc-9 \ - g++-9 \ clang-10 \ - clang-tidy-10 \ - lld-10 \ - llvm-10 \ - llvm-10-dev \ clang-11 \ + clang-tidy-10 \ clang-tidy-11 \ - lld-11 \ - llvm-11 \ - llvm-11-dev \ + cmake \ + curl \ + g++-9 \ + gcc-9 \ + gdb \ + git \ + gperf \ libicu-dev \ libreadline-dev \ + lld-10 \ + lld-11 \ + llvm-10 \ + llvm-10-dev \ + llvm-11 \ + llvm-11-dev \ + moreutils \ ninja-build \ - gperf \ - git \ - opencl-headers \ - ocl-icd-libopencl1 \ - intel-opencl-icd \ - tzdata \ - gperf \ - cmake \ - gdb \ + pigz \ rename \ - build-essential \ + tzdata \ --yes --no-install-recommends # This symlink required by gcc to find lld compiler @@ -111,4 +108,4 @@ RUN rm /etc/apt/sources.list.d/proposed-repositories.list && apt-get update COPY build.sh / -CMD ["/bin/bash", "/build.sh"] +CMD ["bash", "-c", "/build.sh 2>&1 | ts"] diff --git a/docker/packager/binary/build.sh b/docker/packager/binary/build.sh index a42789c6186..cf74105fbbb 100755 --- a/docker/packager/binary/build.sh +++ b/docker/packager/binary/build.sh @@ -11,17 +11,28 @@ tar xJf gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz -C build/cmake/toolc mkdir -p build/cmake/toolchain/freebsd-x86_64 tar xJf freebsd-11.3-toolchain.tar.xz -C build/cmake/toolchain/freebsd-x86_64 --strip-components=1 +# Uncomment to debug ccache. Don't put ccache log in /output right away, or it +# will be confusingly packed into the "performance" package. +# export CCACHE_LOGFILE=/build/ccache.log +# export CCACHE_DEBUG=1 + mkdir -p build/build_docker cd build/build_docker -ccache --show-stats ||: -ccache --zero-stats ||: -ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||: rm -f CMakeCache.txt # Read cmake arguments into array (possibly empty) read -ra CMAKE_FLAGS <<< "${CMAKE_FLAGS:-}" cmake --debug-trycompile --verbose=1 -DCMAKE_VERBOSE_MAKEFILE=1 -LA "-DCMAKE_BUILD_TYPE=$BUILD_TYPE" "-DSANITIZE=$SANITIZER" -DENABLE_CHECK_HEAVY_BUILDS=1 "${CMAKE_FLAGS[@]}" .. 
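# (Illustrative aside, not part of the upstream build script: the `read -ra` line above
#  splits the space-separated CMAKE_FLAGS environment variable into an array so that each
#  flag becomes a separate cmake argument. A small standalone demo of that behavior, with
#  illustrative names only:
#      demo_flags='-DENABLE_TESTS=0 -DENABLE_UTILS=0'
#      read -ra demo_array <<< "$demo_flags"
#      printf 'arg: %s\n' "${demo_array[@]}"    # prints each flag on its own line
#  An unset or empty CMAKE_FLAGS expands to an empty array rather than one empty argument.)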
+ +ccache --show-config ||: +ccache --show-stats ||: +ccache --zero-stats ||: + # shellcheck disable=SC2086 # No quotes because I want it to expand to nothing if empty. ninja $NINJA_FLAGS clickhouse-bundle + +ccache --show-config ||: +ccache --show-stats ||: + mv ./programs/clickhouse* /output mv ./src/unit_tests_dbms /output ||: # may not exist for some binary builds find . -name '*.so' -print -exec mv '{}' /output \; @@ -65,8 +76,21 @@ then cp ../programs/server/config.xml /output/config cp ../programs/server/users.xml /output/config cp -r --dereference ../programs/server/config.d /output/config - tar -czvf "$COMBINED_OUTPUT.tgz" /output + tar -cv -I pigz -f "$COMBINED_OUTPUT.tgz" /output rm -r /output/* mv "$COMBINED_OUTPUT.tgz" /output fi -ccache --show-stats ||: + +if [ "${CCACHE_DEBUG:-}" == "1" ] +then + find . -name '*.ccache-*' -print0 \ + | tar -c -I pixz -f /output/ccache-debug.txz --null -T - +fi + +if [ -n "$CCACHE_LOGFILE" ] +then + # Compress the log as well, or else the CI will try to compress all log + # files in place, and will fail because this directory is not writable. + tar -cv -I pixz -f /output/ccache.log.txz "$CCACHE_LOGFILE" +fi + diff --git a/docker/packager/deb/Dockerfile b/docker/packager/deb/Dockerfile index 8fd89d60f85..902929a2644 100644 --- a/docker/packager/deb/Dockerfile +++ b/docker/packager/deb/Dockerfile @@ -34,31 +34,32 @@ RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \ # Libraries from OS are only needed to test the "unbundled" build (this is not used in production). RUN apt-get update \ && apt-get install \ - gcc-9 \ - g++-9 \ - clang-11 \ - clang-tidy-11 \ - lld-11 \ - llvm-11 \ - llvm-11-dev \ + alien \ clang-10 \ + clang-11 \ clang-tidy-10 \ + clang-tidy-11 \ + cmake \ + debhelper \ + devscripts \ + g++-9 \ + gcc-9 \ + gdb \ + git \ + gperf \ lld-10 \ + lld-11 \ llvm-10 \ llvm-10-dev \ + llvm-11 \ + llvm-11-dev \ + moreutils \ ninja-build \ perl \ - pkg-config \ - devscripts \ - debhelper \ - git \ - tzdata \ - gperf \ - alien \ - cmake \ - gdb \ - moreutils \ pigz \ + pixz \ + pkg-config \ + tzdata \ --yes --no-install-recommends # NOTE: For some reason we have outdated version of gcc-10 in ubuntu 20.04 stable. diff --git a/docker/packager/deb/build.sh b/docker/packager/deb/build.sh index 6450e21d289..c1a0b27db5d 100755 --- a/docker/packager/deb/build.sh +++ b/docker/packager/deb/build.sh @@ -2,8 +2,14 @@ set -x -e +# Uncomment to debug ccache. +# export CCACHE_LOGFILE=/build/ccache.log +# export CCACHE_DEBUG=1 + +ccache --show-config ||: ccache --show-stats ||: ccache --zero-stats ||: + read -ra ALIEN_PKGS <<< "${ALIEN_PKGS:-}" build/release --no-pbuilder "${ALIEN_PKGS[@]}" | ts '%Y-%m-%d %H:%M:%S' mv /*.deb /output @@ -22,5 +28,19 @@ then mv /build/obj-*/src/unit_tests_dbms /output/binary fi fi + +ccache --show-config ||: ccache --show-stats ||: -ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||: + +if [ "${CCACHE_DEBUG:-}" == "1" ] +then + find /build -name '*.ccache-*' -print0 \ + | tar -c -I pixz -f /output/ccache-debug.txz --null -T - +fi + +if [ -n "$CCACHE_LOGFILE" ] +then + # Compress the log as well, or else the CI will try to compress all log + # files in place, and will fail because this directory is not writable. 
+ tar -cv -I pixz -f /output/ccache.log.txz "$CCACHE_LOGFILE" +fi diff --git a/docker/packager/unbundled/Dockerfile b/docker/packager/unbundled/Dockerfile index f640c595f14..4dd6dbc61d8 100644 --- a/docker/packager/unbundled/Dockerfile +++ b/docker/packager/unbundled/Dockerfile @@ -35,9 +35,6 @@ RUN apt-get update \ libjemalloc-dev \ libmsgpack-dev \ libcurl4-openssl-dev \ - opencl-headers \ - ocl-icd-libopencl1 \ - intel-opencl-icd \ unixodbc-dev \ odbcinst \ tzdata \ diff --git a/docker/packager/unbundled/build.sh b/docker/packager/unbundled/build.sh index 54575ab977c..99fc34fd9f3 100755 --- a/docker/packager/unbundled/build.sh +++ b/docker/packager/unbundled/build.sh @@ -13,4 +13,3 @@ mv /*.rpm /output ||: # if exists mv /*.tgz /output ||: # if exists ccache --show-stats ||: -ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||: diff --git a/docker/server/Dockerfile b/docker/server/Dockerfile index 05ca29f22d4..48c978366c6 100644 --- a/docker/server/Dockerfile +++ b/docker/server/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:20.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.5.1.* +ARG version=21.6.1.* ARG gosu_ver=1.10 # set non-empty deb_location_url url to create a docker image diff --git a/docker/test/Dockerfile b/docker/test/Dockerfile index 976c46ebe27..0e4646386ce 100644 --- a/docker/test/Dockerfile +++ b/docker/test/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:18.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.5.1.* +ARG version=21.6.1.* RUN apt-get update && \ apt-get install -y apt-transport-https dirmngr && \ diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index c21a115289d..a7cc398e5c9 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -300,6 +300,7 @@ function run_tests 01663_aes_msan # Depends on OpenSSL 01667_aes_args_check # Depends on OpenSSL 01776_decrypt_aead_size_check # Depends on OpenSSL + 01811_filter_by_null # Depends on OpenSSL 01281_unsucceeded_insert_select_queries_counter 01292_create_user 01294_lazy_database_concurrent @@ -307,10 +308,8 @@ function run_tests 01354_order_by_tuple_collate_const 01355_ilike 01411_bayesian_ab_testing - 01532_collate_in_low_cardinality - 01533_collate_in_nullable - 01542_collate_in_array - 01543_collate_in_tuple + collate + collation _orc_ arrow avro @@ -365,6 +364,12 @@ function run_tests # JSON functions 01666_blns + + # Requires postgresql-client + 01802_test_postgresql_protocol_with_row_policy + + # Depends on AWS + 01801_s3_cluster ) (time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt" diff --git a/docker/test/fuzzer/run-fuzzer.sh b/docker/test/fuzzer/run-fuzzer.sh index 4bd3fa717a2..626bedb453c 100755 --- a/docker/test/fuzzer/run-fuzzer.sh +++ b/docker/test/fuzzer/run-fuzzer.sh @@ -198,7 +198,7 @@ case "$stage" in # Lost connection to the server. This probably means that the server died # with abort. echo "failure" > status.txt - if ! grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt + if ! 
grep -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: AddressSanitizer:.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt then echo "Lost connection to server. See the logs." > description.txt fi diff --git a/docker/test/integration/base/Dockerfile b/docker/test/integration/base/Dockerfile index 938d8d45ffd..1c962f1bf8f 100644 --- a/docker/test/integration/base/Dockerfile +++ b/docker/test/integration/base/Dockerfile @@ -19,7 +19,8 @@ RUN apt-get update \ tar \ krb5-user \ iproute2 \ - lsof + lsof \ + g++ RUN rm -rf \ /var/lib/apt/lists/* \ /var/cache/debconf \ diff --git a/docker/test/integration/runner/Dockerfile b/docker/test/integration/runner/Dockerfile index e0e5e36a3d6..783e689ed01 100644 --- a/docker/test/integration/runner/Dockerfile +++ b/docker/test/integration/runner/Dockerfile @@ -31,6 +31,7 @@ RUN apt-get update \ software-properties-common \ libkrb5-dev \ krb5-user \ + g++ \ && rm -rf \ /var/lib/apt/lists/* \ /var/cache/debconf \ diff --git a/docker/test/integration/runner/compose/docker_compose_mysql_cluster.yml b/docker/test/integration/runner/compose/docker_compose_mysql_cluster.yml new file mode 100644 index 00000000000..d0674362709 --- /dev/null +++ b/docker/test/integration/runner/compose/docker_compose_mysql_cluster.yml @@ -0,0 +1,23 @@ +version: '2.3' +services: + mysql2: + image: mysql:5.7 + restart: always + environment: + MYSQL_ROOT_PASSWORD: clickhouse + ports: + - 3348:3306 + mysql3: + image: mysql:5.7 + restart: always + environment: + MYSQL_ROOT_PASSWORD: clickhouse + ports: + - 3388:3306 + mysql4: + image: mysql:5.7 + restart: always + environment: + MYSQL_ROOT_PASSWORD: clickhouse + ports: + - 3368:3306 diff --git a/docker/test/integration/runner/compose/docker_compose_postgres.yml b/docker/test/integration/runner/compose/docker_compose_postgres.yml index 58ed97251fb..5657352e1b3 100644 --- a/docker/test/integration/runner/compose/docker_compose_postgres.yml +++ b/docker/test/integration/runner/compose/docker_compose_postgres.yml @@ -11,10 +11,3 @@ services: default: aliases: - postgre-sql.local - postgres2: - image: postgres - restart: always - environment: - POSTGRES_PASSWORD: mysecretpassword - ports: - - 5441:5432 diff --git a/docker/test/integration/runner/compose/docker_compose_postgres_cluster.yml b/docker/test/integration/runner/compose/docker_compose_postgres_cluster.yml new file mode 100644 index 00000000000..d04c8a2f3a6 --- /dev/null +++ b/docker/test/integration/runner/compose/docker_compose_postgres_cluster.yml @@ -0,0 +1,23 @@ +version: '2.3' +services: + postgres2: + image: postgres + restart: always + environment: + POSTGRES_PASSWORD: mysecretpassword + ports: + - 5421:5432 + postgres3: + image: postgres + restart: always + environment: + POSTGRES_PASSWORD: mysecretpassword + ports: + - 5441:5432 + postgres4: + image: postgres + restart: always + environment: + POSTGRES_PASSWORD: mysecretpassword + ports: + - 5461:5432 diff --git a/docker/test/integration/runner/dockerd-entrypoint.sh b/docker/test/integration/runner/dockerd-entrypoint.sh index c0255d3d706..bda6f5a719d 100755 --- a/docker/test/integration/runner/dockerd-entrypoint.sh +++ b/docker/test/integration/runner/dockerd-entrypoint.sh @@ -21,6 +21,7 @@ export CLICKHOUSE_TESTS_SERVER_BIN_PATH=/clickhouse export CLICKHOUSE_TESTS_CLIENT_BIN_PATH=/clickhouse export CLICKHOUSE_TESTS_BASE_CONFIG_DIR=/clickhouse-config export 
CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH=/clickhouse-odbc-bridge +export CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH=/clickhouse-library-bridge export DOCKER_MYSQL_GOLANG_CLIENT_TAG=${DOCKER_MYSQL_GOLANG_CLIENT_TAG:=latest} export DOCKER_MYSQL_JAVA_CLIENT_TAG=${DOCKER_MYSQL_JAVA_CLIENT_TAG:=latest} diff --git a/docker/test/keeper-jepsen/Dockerfile b/docker/test/keeper-jepsen/Dockerfile new file mode 100644 index 00000000000..1a62d5e793f --- /dev/null +++ b/docker/test/keeper-jepsen/Dockerfile @@ -0,0 +1,39 @@ +# docker build -t yandex/clickhouse-keeper-jepsen-test . +FROM yandex/clickhouse-test-base + +ENV DEBIAN_FRONTEND=noninteractive +ENV CLOJURE_VERSION=1.10.3.814 + +# arguments +ENV PR_TO_TEST="" +ENV SHA_TO_TEST="" + +ENV NODES_USERNAME="root" +ENV NODES_PASSWORD="" +ENV TESTS_TO_RUN="30" +ENV TIME_LIMIT="30" + + +# volumes +ENV NODES_FILE_PATH="/nodes.txt" +ENV TEST_OUTPUT="/test_output" + +RUN mkdir "/root/.ssh" +RUN touch "/root/.ssh/known_hosts" + +# install java +RUN apt-get update && apt-get install default-jre default-jdk libjna-java libjna-jni ssh gnuplot graphviz --yes --no-install-recommends + +# install clojure +RUN curl -O "https://download.clojure.org/install/linux-install-${CLOJURE_VERSION}.sh" && \ + chmod +x "linux-install-${CLOJURE_VERSION}.sh" && \ + bash "./linux-install-${CLOJURE_VERSION}.sh" + +# install leiningen +RUN curl -O "https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein" && \ + chmod +x ./lein && \ + mv ./lein /usr/bin + +COPY run.sh / + +CMD ["/bin/bash", "/run.sh"] diff --git a/docker/test/keeper-jepsen/run.sh b/docker/test/keeper-jepsen/run.sh new file mode 100644 index 00000000000..352585e16e3 --- /dev/null +++ b/docker/test/keeper-jepsen/run.sh @@ -0,0 +1,22 @@ +#!/usr/bin/env bash +set -euo pipefail + + +CLICKHOUSE_PACKAGE=${CLICKHOUSE_PACKAGE:="https://clickhouse-builds.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/clang-11_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"} +CLICKHOUSE_REPO_PATH=${CLICKHOUSE_REPO_PATH:=""} + + +if [ -z "$CLICKHOUSE_REPO_PATH" ]; then + CLICKHOUSE_REPO_PATH=ch + rm -rf ch ||: + mkdir ch ||: + wget -nv -nd -c "https://clickhouse-test-reports.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/repo/clickhouse_no_subs.tar.gz" + tar -C ch --strip-components=1 -xf clickhouse_no_subs.tar.gz + ls -lath ||: +fi + +cd "$CLICKHOUSE_REPO_PATH/tests/jepsen.clickhouse-keeper" + +(lein run test-all --nodes-file "$NODES_FILE_PATH" --username "$NODES_USERNAME" --logging-json --password "$NODES_PASSWORD" --time-limit "$TIME_LIMIT" --concurrency 50 -r 50 --snapshot-distance 100 --stale-log-gap 100 --reserved-log-items 10 --lightweight-run --clickhouse-source "$CLICKHOUSE_PACKAGE" -q --test-count "$TESTS_TO_RUN" || true) | tee "$TEST_OUTPUT/jepsen_run_all_tests.log" + +mv store "$TEST_OUTPUT/" diff --git a/docker/test/performance-comparison/config/config.d/zzz-perf-comparison-tweaks-config.xml b/docker/test/performance-comparison/config/config.d/zzz-perf-comparison-tweaks-config.xml index 31f5b739c6d..7b941f844de 100644 --- a/docker/test/performance-comparison/config/config.d/zzz-perf-comparison-tweaks-config.xml +++ b/docker/test/performance-comparison/config/config.d/zzz-perf-comparison-tweaks-config.xml @@ -1,6 +1,7 @@ + diff --git a/docker/test/performance-comparison/config/users.d/perf-comparison-tweaks-users.xml b/docker/test/performance-comparison/config/users.d/perf-comparison-tweaks-users.xml index 41bc7f777bf..63e23d8453c 100644 --- 
a/docker/test/performance-comparison/config/users.d/perf-comparison-tweaks-users.xml +++ b/docker/test/performance-comparison/config/users.d/perf-comparison-tweaks-users.xml @@ -17,6 +17,9 @@ 12 + + + 64Mi diff --git a/docker/test/performance-comparison/perf.py b/docker/test/performance-comparison/perf.py index 4727f485943..2588b9f4213 100755 --- a/docker/test/performance-comparison/perf.py +++ b/docker/test/performance-comparison/perf.py @@ -66,7 +66,12 @@ reportStageEnd('parse') subst_elems = root.findall('substitutions/substitution') available_parameters = {} # { 'table': ['hits_10m', 'hits_100m'], ... } for e in subst_elems: - available_parameters[e.find('name').text] = [v.text for v in e.findall('values/value')] + name = e.find('name').text + values = [v.text for v in e.findall('values/value')] + if not values: + raise Exception(f'No values given for substitution {{{name}}}') + + available_parameters[name] = values # Takes parallel lists of templates, substitutes them with all combos of # parameters. The set of parameters is determined based on the first list. diff --git a/docker/test/stateful/run.sh b/docker/test/stateful/run.sh index 9e210dc92a2..8d865431570 100755 --- a/docker/test/stateful/run.sh +++ b/docker/test/stateful/run.sh @@ -21,14 +21,14 @@ function start() -- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \ --logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \ --tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \ - --mysql_port 19004 \ + --mysql_port 19004 --postgresql_port 19005 \ --keeper_server.tcp_port 19181 --keeper_server.server_id 2 sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server2/config.xml --daemon \ -- --path /var/lib/clickhouse2/ --logger.stderr /var/log/clickhouse-server/stderr2.log \ --logger.log /var/log/clickhouse-server/clickhouse-server2.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server2.err.log \ --tcp_port 29000 --tcp_port_secure 29440 --http_port 28123 --https_port 28443 --interserver_http_port 29009 --tcp_with_proxy_port 29010 \ - --mysql_port 29004 \ + --mysql_port 29004 --postgresql_port 29005 \ --keeper_server.tcp_port 29181 --keeper_server.server_id 3 fi diff --git a/docker/test/stateless/Dockerfile b/docker/test/stateless/Dockerfile index 61d1b2f4849..658ae1f27ba 100644 --- a/docker/test/stateless/Dockerfile +++ b/docker/test/stateless/Dockerfile @@ -28,7 +28,8 @@ RUN apt-get update -y \ tree \ unixodbc \ wget \ - mysql-client=5.7* + mysql-client=5.7* \ + postgresql-client RUN pip3 install numpy scipy pandas diff --git a/docker/test/stateless/run.sh b/docker/test/stateless/run.sh index 20132eafb75..e6f2d678aa9 100755 --- a/docker/test/stateless/run.sh +++ b/docker/test/stateless/run.sh @@ -44,7 +44,7 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]] -- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \ --logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \ --tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \ - --mysql_port 19004 \ + --mysql_port 19004 --postgresql_port 19005 \ --keeper_server.tcp_port 19181 --keeper_server.server_id 2 \ --macros.replica r2 # It 
doesn't work :( @@ -52,7 +52,7 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]] -- --path /var/lib/clickhouse2/ --logger.stderr /var/log/clickhouse-server/stderr2.log \ --logger.log /var/log/clickhouse-server/clickhouse-server2.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server2.err.log \ --tcp_port 29000 --tcp_port_secure 29440 --http_port 28123 --https_port 28443 --interserver_http_port 29009 --tcp_with_proxy_port 29010 \ - --mysql_port 29004 \ + --mysql_port 29004 --postgresql_port 29005 \ --keeper_server.tcp_port 29181 --keeper_server.server_id 3 \ --macros.shard s2 # It doesn't work :( @@ -104,6 +104,12 @@ clickhouse-client -q "system flush logs" ||: pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz & clickhouse-client -q "select * from system.query_log format TSVWithNamesAndTypes" | pigz > /test_output/query-log.tsv.gz & clickhouse-client -q "select * from system.query_thread_log format TSVWithNamesAndTypes" | pigz > /test_output/query-thread-log.tsv.gz & +clickhouse-client --allow_introspection_functions=1 -q " + WITH + arrayMap(x -> concat(demangle(addressToSymbol(x)), ':', addressToLine(x)), trace) AS trace_array, + arrayStringConcat(trace_array, '\n') AS trace_string + SELECT * EXCEPT(trace), trace_string FROM system.trace_log FORMAT TSVWithNamesAndTypes +" | pigz > /test_output/trace-log.tsv.gz & wait ||: mv /var/log/clickhouse-server/stderr.log /test_output/ ||: @@ -112,10 +118,13 @@ if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then fi tar -chf /test_output/text_log_dump.tar /var/lib/clickhouse/data/system/text_log ||: tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||: +tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||: if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then pigz < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.gz ||: pigz < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.gz ||: mv /var/log/clickhouse-server/stderr1.log /test_output/ ||: mv /var/log/clickhouse-server/stderr2.log /test_output/ ||: + tar -chf /test_output/coordination1.tar /var/lib/clickhouse1/coordination ||: + tar -chf /test_output/coordination2.tar /var/lib/clickhouse2/coordination ||: fi diff --git a/docker/test/stateless_unbundled/Dockerfile b/docker/test/stateless_unbundled/Dockerfile index 9efe08dbf23..c5463ac447d 100644 --- a/docker/test/stateless_unbundled/Dockerfile +++ b/docker/test/stateless_unbundled/Dockerfile @@ -14,9 +14,7 @@ RUN apt-get --allow-unauthenticated update -y \ expect \ gdb \ gperf \ - gperf \ heimdal-multidev \ - intel-opencl-icd \ libboost-filesystem-dev \ libboost-iostreams-dev \ libboost-program-options-dev \ @@ -50,9 +48,7 @@ RUN apt-get --allow-unauthenticated update -y \ moreutils \ ncdu \ netcat-openbsd \ - ocl-icd-libopencl1 \ odbcinst \ - opencl-headers \ openssl \ perl \ pigz \ diff --git a/docker/test/stress/run.sh b/docker/test/stress/run.sh index 3594eead992..74a88df21e0 100755 --- a/docker/test/stress/run.sh +++ b/docker/test/stress/run.sh @@ -108,6 +108,11 @@ zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" > /dev || echo -e 'No sanitizer asserts\tOK' >> /test_output/test_results.tsv rm -f /test_output/tmp +# OOM +zgrep -Fa " Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \ + && 
echo -e 'OOM killer (or signal 9) in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \ + || echo -e 'No OOM messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv + # Logical errors zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \ && echo -e 'Logical error thrown (see clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \ @@ -118,7 +123,7 @@ zgrep -Fa "########################################" /var/log/clickhouse-server/ && echo -e 'Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \ || echo -e 'Not crashed\tOK' >> /test_output/test_results.tsv -# It also checks for OOM or crash without stacktrace (printed by watchdog) +# It also checks for crash without stacktrace (printed by watchdog) zgrep -Fa " " /var/log/clickhouse-server/clickhouse-server.log > /dev/null \ && echo -e 'Fatal message in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \ || echo -e 'No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv @@ -131,6 +136,7 @@ pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhous tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||: mv /var/log/clickhouse-server/stderr.log /test_output/ tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||: +tar -chf /test_output/trace_log_dump.tar /var/lib/clickhouse/data/system/trace_log ||: # Write check result into check_status.tsv clickhouse-local --structure "test String, res String" -q "SELECT 'failure', test FROM table WHERE res != 'OK' order by (lower(test) like '%hung%') LIMIT 1" < /test_output/test_results.tsv > /test_output/check_status.tsv diff --git a/docker/test/stress/stress b/docker/test/stress/stress index 25a705ecbd1..4fbedceb0b8 100755 --- a/docker/test/stress/stress +++ b/docker/test/stress/stress @@ -1,7 +1,7 @@ #!/usr/bin/env python3 # -*- coding: utf-8 -*- from multiprocessing import cpu_count -from subprocess import Popen, call, STDOUT +from subprocess import Popen, call, check_output, STDOUT import os import sys import shutil @@ -85,10 +85,27 @@ def prepare_for_hung_check(): # Issue #21004, live views are experimental, so let's just suppress it call("""clickhouse client -q "KILL QUERY WHERE upper(query) LIKE 'WATCH %'" """, shell=True, stderr=STDOUT) - # Wait for last queries to finish if any, not longer than 120 seconds + # Kill other queries which known to be slow + # It's query from 01232_preparing_sets_race_condition_long, it may take up to 1000 seconds in slow builds + call("""clickhouse client -q "KILL QUERY WHERE query LIKE 'insert into tableB select %'" """, shell=True, stderr=STDOUT) + # Long query from 00084_external_agregation + call("""clickhouse client -q "KILL QUERY WHERE query LIKE 'SELECT URL, uniq(SearchPhrase) AS u FROM test.hits GROUP BY URL ORDER BY u %'" """, shell=True, stderr=STDOUT) + + # Wait for last queries to finish if any, not longer than 300 seconds call("""clickhouse client -q "select sleepEachRow(( - select maxOrDefault(120 - elapsed) + 1 from system.processes where query not like '%from system.processes%' and elapsed < 120 - ) / 120) from numbers(120) format Null" """, shell=True, stderr=STDOUT) + select maxOrDefault(300 - elapsed) + 1 from system.processes where query not like '%from system.processes%' and elapsed < 300 + ) / 300) from numbers(300) format Null" """, shell=True, stderr=STDOUT) + + # Even if all clickhouse-test processes are 
finished, there are probably some sh scripts, + # which still run some new queries. Let's ignore them. + try: + query = """clickhouse client -q "SELECT count() FROM system.processes where elapsed > 300" """ + output = check_output(query, shell=True, stderr=STDOUT).decode('utf-8').strip() + if int(output) == 0: + return False + except Exception: + pass + return True if __name__ == "__main__": logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s') @@ -119,12 +136,12 @@ if __name__ == "__main__": logging.info("All processes finished") if args.hung_check: - prepare_for_hung_check() + have_long_running_queries = prepare_for_hung_check() logging.info("Checking if some queries hung") cmd = "{} {} {}".format(args.test_cmd, "--hung-check", "00001_select_1") res = call(cmd, shell=True, stderr=STDOUT) hung_check_status = "No queries hung\tOK\n" - if res != 0: + if res != 0 and have_long_running_queries: logging.info("Hung check failed with exit code {}".format(res)) hung_check_status = "Hung check failed\tFAIL\n" open(os.path.join(args.output_folder, "test_results.tsv"), 'w+').write(hung_check_status) diff --git a/docs/en/commercial/cloud.md b/docs/en/commercial/cloud.md index 91d2061c0af..953a0ab5748 100644 --- a/docs/en/commercial/cloud.md +++ b/docs/en/commercial/cloud.md @@ -31,9 +31,10 @@ toc_title: Cloud ## Alibaba Cloud {#alibaba-cloud} -Alibaba Cloud Managed Service for ClickHouse [China Site](https://www.aliyun.com/product/clickhouse) (Will be available at international site at May, 2021) provides the following key features: -- Highly reliable cloud disk storage engine based on Alibaba Cloud Apsara distributed system -- Expand capacity on demand without manual data migration +Alibaba Cloud Managed Service for ClickHouse. [China Site](https://www.aliyun.com/product/clickhouse) (will be available at the international site in May 2021). Provides the following key features: + +- Highly reliable cloud disk storage engine based on [Alibaba Cloud Apsara](https://www.alibabacloud.com/product/apsara-stack) distributed system +- Expand capacity on-demand without manual data migration - Support single-node, single-replica, multi-node, and multi-replica architectures, and support hot and cold data tiering - Support access allow-list, one-key recovery, multi-layer network security protection, cloud disk encryption - Seamless integration with cloud log systems, databases, and data application tools diff --git a/docs/en/development/build-osx.md b/docs/en/development/build-osx.md index 886e85bbf86..24ecbdc1c2c 100644 --- a/docs/en/development/build-osx.md +++ b/docs/en/development/build-osx.md @@ -5,12 +5,13 @@ toc_title: Build on Mac OS X # How to Build ClickHouse on Mac OS X {#how-to-build-clickhouse-on-mac-os-x} -Build should work on x86_64 (Intel) based macOS 10.15 (Catalina) and higher with recent Xcode's native AppleClang, or Homebrew's vanilla Clang or GCC compilers. +Build should work on x86_64 (Intel) and arm64 (Apple Silicon) based macOS 10.15 (Catalina) and higher with recent Xcode's native AppleClang, or Homebrew's vanilla Clang or GCC compilers. ## Install Homebrew {#install-homebrew} ``` bash -$ /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" +/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" +# ...and follow the printed instructions on any additional steps required to complete the installation.
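# (Side note, an assumption rather than something stated in these docs: on Apple Silicon
# machines Homebrew typically installs under /opt/homebrew, so `brew` may not be on your
# PATH until you run something like the command below; use whatever the installer prints.)
# eval "$(/opt/homebrew/bin/brew shellenv)"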
``` ## Install Xcode and Command Line Tools {#install-xcode-and-command-line-tools} @@ -22,8 +23,8 @@ Open it at least once to accept the end-user license agreement and automatically Then, make sure that the latest Comman Line Tools are installed and selected in the system: ``` bash -$ sudo rm -rf /Library/Developer/CommandLineTools -$ sudo xcode-select --install +sudo rm -rf /Library/Developer/CommandLineTools +sudo xcode-select --install ``` Reboot. @@ -31,14 +32,15 @@ Reboot. ## Install Required Compilers, Tools, and Libraries {#install-required-compilers-tools-and-libraries} ``` bash -$ brew update -$ brew install cmake ninja libtool gettext llvm gcc +brew update +brew install cmake ninja libtool gettext llvm gcc ``` ## Checkout ClickHouse Sources {#checkout-clickhouse-sources} ``` bash -$ git clone --recursive git@github.com:ClickHouse/ClickHouse.git # or https://github.com/ClickHouse/ClickHouse.git +git clone --recursive git@github.com:ClickHouse/ClickHouse.git +# ...alternatively, you can use https://github.com/ClickHouse/ClickHouse.git as the repo URL. ``` ## Build ClickHouse {#build-clickhouse} @@ -46,37 +48,37 @@ $ git clone --recursive git@github.com:ClickHouse/ClickHouse.git # or https://gi To build using Xcode's native AppleClang compiler: ``` bash -$ cd ClickHouse -$ rm -rf build -$ mkdir build -$ cd build -$ cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_JEMALLOC=OFF .. -$ cmake --build . --config RelWithDebInfo -$ cd .. +cd ClickHouse +rm -rf build +mkdir build +cd build +cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo .. +cmake --build . --config RelWithDebInfo +cd .. ``` To build using Homebrew's vanilla Clang compiler: ``` bash -$ cd ClickHouse -$ rm -rf build -$ mkdir build -$ cd build -$ cmake -DCMAKE_C_COMPILER=$(brew --prefix llvm)/bin/clang -DCMAKE_CXX_COMPILER==$(brew --prefix llvm)/bin/clang++ -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_JEMALLOC=OFF .. -$ cmake --build . --config RelWithDebInfo -$ cd .. +cd ClickHouse +rm -rf build +mkdir build +cd build +cmake -DCMAKE_C_COMPILER=$(brew --prefix llvm)/bin/clang -DCMAKE_CXX_COMPILER=$(brew --prefix llvm)/bin/clang++ -DCMAKE_BUILD_TYPE=RelWithDebInfo .. +cmake --build . --config RelWithDebInfo +cd .. ``` To build using Homebrew's vanilla GCC compiler: ``` bash -$ cd ClickHouse -$ rm -rf build -$ mkdir build -$ cd build -$ cmake -DCMAKE_C_COMPILER=$(brew --prefix gcc)/bin/gcc-10 -DCMAKE_CXX_COMPILER=$(brew --prefix gcc)/bin/g++-10 -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_JEMALLOC=OFF .. -$ cmake --build . --config RelWithDebInfo -$ cd .. +cd ClickHouse +rm -rf build +mkdir build +cd build +cmake -DCMAKE_C_COMPILER=$(brew --prefix gcc)/bin/gcc-10 -DCMAKE_CXX_COMPILER=$(brew --prefix gcc)/bin/g++-10 -DCMAKE_BUILD_TYPE=RelWithDebInfo .. +cmake --build . --config RelWithDebInfo +cd .. ``` ## Caveats {#caveats} @@ -115,7 +117,7 @@ To do so, create the `/Library/LaunchDaemons/limit.maxfiles.plist` file with the Execute the following command: ``` bash -$ sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist +sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist ``` Reboot. diff --git a/docs/en/development/build.md b/docs/en/development/build.md index 3181f26800d..852b9de4fb3 100644 --- a/docs/en/development/build.md +++ b/docs/en/development/build.md @@ -27,53 +27,20 @@ Or cmake3 instead of cmake on older systems. 
On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/)) -```bash +```bash sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ``` For other Linux distribution - check the availability of the [prebuild packages](https://releases.llvm.org/download.html) or build clang [from sources](https://clang.llvm.org/get_started.html). -#### Use clang-11 for Builds {#use-gcc-10-for-builds} +#### Use clang-11 for Builds ``` bash $ export CC=clang-11 $ export CXX=clang++-11 ``` -### Install GCC 10 {#install-gcc-10} - -We recommend building ClickHouse with clang-11, GCC-10 also supported, but it is not used for production builds. - -If you want to use GCC-10 there are several ways to install it. - -#### Install from Repository {#install-from-repository} - -On Ubuntu 19.10 or newer: - - $ sudo apt-get update - $ sudo apt-get install gcc-10 g++-10 - -#### Install from a PPA Package {#install-from-a-ppa-package} - -On older Ubuntu: - -``` bash -$ sudo apt-get install software-properties-common -$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test -$ sudo apt-get update -$ sudo apt-get install gcc-10 g++-10 -``` - -#### Install from Sources {#install-from-sources} - -See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh) - -#### Use GCC 10 for Builds {#use-gcc-10-for-builds} - -``` bash -$ export CC=gcc-10 -$ export CXX=g++-10 -``` +Gcc can also be used though it is discouraged. ### Checkout ClickHouse Sources {#checkout-clickhouse-sources} diff --git a/docs/en/development/contrib.md b/docs/en/development/contrib.md index 76a2f647231..64ca2387029 100644 --- a/docs/en/development/contrib.md +++ b/docs/en/development/contrib.md @@ -5,36 +5,87 @@ toc_title: Third-Party Libraries Used # Third-Party Libraries Used {#third-party-libraries-used} -| Library | License | -|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------| -| base64 | [BSD 2-Clause License](https://github.com/aklomp/base64/blob/a27c565d1b6c676beaf297fe503c4518185666f7/LICENSE) | -| boost | [Boost Software License 1.0](https://github.com/ClickHouse-Extras/boost-extra/blob/6883b40449f378019aec792f9983ce3afc7ff16e/LICENSE_1_0.txt) | -| brotli | [MIT](https://github.com/google/brotli/blob/master/LICENSE) | -| capnproto | [MIT](https://github.com/capnproto/capnproto/blob/master/LICENSE) | -| cctz | [Apache License 2.0](https://github.com/google/cctz/blob/4f9776a310f4952454636363def82c2bf6641d5f/LICENSE.txt) | -| double-conversion | [BSD 3-Clause License](https://github.com/google/double-conversion/blob/cf2f0f3d547dc73b4612028a155b80536902ba02/LICENSE) | -| FastMemcpy | [MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libmemcpy/impl/LICENSE) | -| googletest | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) | -| h3 | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) | -| hyperscan | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) | -| libcxxabi | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) | -| libdivide | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) | -| libgsasl | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) | -| libhdfs3 | 
[Apache License 2.0](https://github.com/ClickHouse-Extras/libhdfs3/blob/bd6505cbb0c130b0db695305b9a38546fa880e5a/LICENSE.txt) | -| libmetrohash | [Apache License 2.0](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libmetrohash/LICENSE) | -| libpcg-random | [Apache License 2.0](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libpcg-random/LICENSE-APACHE.txt) | -| libressl | [OpenSSL License](https://github.com/ClickHouse-Extras/ssl/blob/master/COPYING) | -| librdkafka | [BSD 2-Clause License](https://github.com/edenhill/librdkafka/blob/363dcad5a23dc29381cc626620e68ae418b3af19/LICENSE) | -| libwidechar_width | [CC0 1.0 Universal](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libwidechar_width/LICENSE) | -| llvm | [BSD 3-Clause License](https://github.com/ClickHouse-Extras/llvm/blob/163def217817c90fb982a6daf384744d8472b92b/llvm/LICENSE.TXT) | -| lz4 | [BSD 2-Clause License](https://github.com/lz4/lz4/blob/c10863b98e1503af90616ae99725ecd120265dfb/LICENSE) | -| mariadb-connector-c | [LGPL v2.1](https://github.com/ClickHouse-Extras/mariadb-connector-c/blob/3.1/COPYING.LIB) | -| murmurhash | [Public Domain](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/murmurhash/LICENSE) | -| pdqsort | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/pdqsort/license.txt) | -| poco | [Boost Software License - Version 1.0](https://github.com/ClickHouse-Extras/poco/blob/fe5505e56c27b6ecb0dcbc40c49dc2caf4e9637f/LICENSE) | -| protobuf | [BSD 3-Clause License](https://github.com/ClickHouse-Extras/protobuf/blob/12735370922a35f03999afff478e1c6d7aa917a4/LICENSE) | -| re2 | [BSD 3-Clause License](https://github.com/google/re2/blob/7cf8b88e8f70f97fd4926b56aa87e7f53b2717e0/LICENSE) | -| sentry-native | [MIT License](https://github.com/getsentry/sentry-native/blob/master/LICENSE) | -| UnixODBC | [LGPL v2.1](https://github.com/ClickHouse-Extras/UnixODBC/tree/b0ad30f7f6289c12b76f04bfb9d466374bb32168) | -| zlib-ng | [Zlib License](https://github.com/ClickHouse-Extras/zlib-ng/blob/develop/LICENSE.md) | -| zstd | [BSD 3-Clause License](https://github.com/facebook/zstd/blob/dev/LICENSE) | +The list of third-party libraries can be obtained by the following query: + +``` +SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en' +``` + +[Example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIGxpYnJhcnlfbmFtZSwgbGljZW5zZV90eXBlLCBsaWNlbnNlX3BhdGggRlJPTSBzeXN0ZW0ubGljZW5zZXMgT1JERVIgQlkgbGlicmFyeV9uYW1lIENPTExBVEUgJ2VuJw==) + +| library_name | license_type | license_path | +|:-|:-|:-| +| abseil-cpp | Apache | /contrib/abseil-cpp/LICENSE | +| AMQP-CPP | Apache | /contrib/AMQP-CPP/LICENSE | +| arrow | Apache | /contrib/arrow/LICENSE.txt | +| avro | Apache | /contrib/avro/LICENSE.txt | +| aws | Apache | /contrib/aws/LICENSE.txt | +| aws-c-common | Apache | /contrib/aws-c-common/LICENSE | +| aws-c-event-stream | Apache | /contrib/aws-c-event-stream/LICENSE | +| aws-checksums | Apache | /contrib/aws-checksums/LICENSE | +| base64 | BSD 2-clause | /contrib/base64/LICENSE | +| boost | Boost | /contrib/boost/LICENSE_1_0.txt | +| boringssl | BSD | /contrib/boringssl/LICENSE | +| brotli | MIT | /contrib/brotli/LICENSE | +| capnproto | MIT | /contrib/capnproto/LICENSE | +| cassandra | Apache | /contrib/cassandra/LICENSE.txt | +| cctz | Apache | /contrib/cctz/LICENSE.txt | +| cityhash102 | MIT | /contrib/cityhash102/COPYING | +| cppkafka | BSD 2-clause | /contrib/cppkafka/LICENSE | +| croaring | Apache | 
/contrib/croaring/LICENSE | +| curl | Apache | /contrib/curl/docs/LICENSE-MIXING.md | +| cyrus-sasl | BSD 2-clause | /contrib/cyrus-sasl/COPYING | +| double-conversion | BSD 3-clause | /contrib/double-conversion/LICENSE | +| dragonbox | Apache | /contrib/dragonbox/LICENSE-Apache2-LLVM | +| fast_float | Apache | /contrib/fast_float/LICENSE | +| fastops | MIT | /contrib/fastops/LICENSE | +| flatbuffers | Apache | /contrib/flatbuffers/LICENSE.txt | +| fmtlib | Unknown | /contrib/fmtlib/LICENSE.rst | +| gcem | Apache | /contrib/gcem/LICENSE | +| googletest | BSD 3-clause | /contrib/googletest/LICENSE | +| grpc | Apache | /contrib/grpc/LICENSE | +| h3 | Apache | /contrib/h3/LICENSE | +| hyperscan | Boost | /contrib/hyperscan/LICENSE | +| icu | Public Domain | /contrib/icu/icu4c/LICENSE | +| icudata | Public Domain | /contrib/icudata/LICENSE | +| jemalloc | BSD 2-clause | /contrib/jemalloc/COPYING | +| krb5 | MIT | /contrib/krb5/src/lib/gssapi/LICENSE | +| libc-headers | LGPL | /contrib/libc-headers/LICENSE | +| libcpuid | BSD 2-clause | /contrib/libcpuid/COPYING | +| libcxx | Apache | /contrib/libcxx/LICENSE.TXT | +| libcxxabi | Apache | /contrib/libcxxabi/LICENSE.TXT | +| libdivide | zLib | /contrib/libdivide/LICENSE.txt | +| libfarmhash | MIT | /contrib/libfarmhash/COPYING | +| libgsasl | LGPL | /contrib/libgsasl/LICENSE | +| libhdfs3 | Apache | /contrib/libhdfs3/LICENSE.txt | +| libmetrohash | Apache | /contrib/libmetrohash/LICENSE | +| libpq | Unknown | /contrib/libpq/COPYRIGHT | +| libpqxx | BSD 3-clause | /contrib/libpqxx/COPYING | +| librdkafka | MIT | /contrib/librdkafka/LICENSE.murmur2 | +| libunwind | Apache | /contrib/libunwind/LICENSE.TXT | +| libuv | BSD | /contrib/libuv/LICENSE | +| llvm | Apache | /contrib/llvm/llvm/LICENSE.TXT | +| lz4 | BSD | /contrib/lz4/LICENSE | +| mariadb-connector-c | LGPL | /contrib/mariadb-connector-c/COPYING.LIB | +| miniselect | Boost | /contrib/miniselect/LICENSE_1_0.txt | +| msgpack-c | Boost | /contrib/msgpack-c/LICENSE_1_0.txt | +| murmurhash | Public Domain | /contrib/murmurhash/LICENSE | +| NuRaft | Apache | /contrib/NuRaft/LICENSE | +| openldap | Unknown | /contrib/openldap/LICENSE | +| orc | Apache | /contrib/orc/LICENSE | +| poco | Boost | /contrib/poco/LICENSE | +| protobuf | BSD 3-clause | /contrib/protobuf/LICENSE | +| rapidjson | MIT | /contrib/rapidjson/bin/jsonschema/LICENSE | +| re2 | BSD 3-clause | /contrib/re2/LICENSE | +| replxx | BSD 3-clause | /contrib/replxx/LICENSE.md | +| rocksdb | BSD 3-clause | /contrib/rocksdb/LICENSE.leveldb | +| sentry-native | MIT | /contrib/sentry-native/LICENSE | +| simdjson | Apache | /contrib/simdjson/LICENSE | +| snappy | Public Domain | /contrib/snappy/COPYING | +| sparsehash-c11 | BSD 3-clause | /contrib/sparsehash-c11/LICENSE | +| stats | Apache | /contrib/stats/LICENSE | +| thrift | Apache | /contrib/thrift/LICENSE | +| unixodbc | LGPL | /contrib/unixodbc/COPYING | +| xz | Public Domain | /contrib/xz/COPYING | +| zlib-ng | zLib | /contrib/zlib-ng/LICENSE.md | +| zstd | BSD | /contrib/zstd/LICENSE | diff --git a/docs/en/development/developer-instruction.md b/docs/en/development/developer-instruction.md index 5511e8e19c7..35ca4725af8 100644 --- a/docs/en/development/developer-instruction.md +++ b/docs/en/development/developer-instruction.md @@ -131,17 +131,18 @@ ClickHouse uses several external libraries for building. All of them do not need ## C++ Compiler {#c-compiler} -Compilers GCC starting from version 10 and Clang version 8 or above are supported for building ClickHouse. 
+Only Clang starting from version 11 is supported for building ClickHouse. -Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is more convenient for development usually. Though, our continuous integration (CI) platform runs checks for about a dozen of build combinations. +Clang should be used instead of gcc. Even so, our continuous integration (CI) platform runs checks for about a dozen build combinations. -To install GCC on Ubuntu run: `sudo apt install gcc g++` +On Ubuntu/Debian you can use the automatic installation script (check the [official webpage](https://apt.llvm.org/)): -Check the version of gcc: `gcc --version`. If it is below 10, then follow the instruction here: https://clickhouse.tech/docs/en/development/build/#install-gcc-10. +```bash +sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" +``` -Mac OS X build is supported only for Clang. Just run `brew install llvm` +Mac OS X build is also supported. Just run `brew install llvm`. -If you decide to use Clang, you can also install `libc++` and `lld`, if you know what it is. Using `ccache` is also recommended. ## The Building Process {#the-building-process} @@ -152,14 +153,7 @@ Now that you are ready to build ClickHouse we recommend you to create a separate You can have several different directories (build_release, build_debug, etc.) for different types of build. -While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify compiler (version 10 gcc compiler in this example). - -Linux: - - export CC=gcc-10 CXX=g++-10 - cmake .. - -Mac OS X: +While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler. export CC=clang CXX=clang++ cmake .. diff --git a/docs/en/development/style.md b/docs/en/development/style.md index 4c620c44aef..b27534d9890 100644 --- a/docs/en/development/style.md +++ b/docs/en/development/style.md @@ -701,7 +701,7 @@ But other things being equal, cross-platform or portable code is preferred. **2.** Language: C++20 (see the list of available [C++20 features](https://en.cppreference.com/w/cpp/compiler_support#C.2B.2B20_features)). -**3.** Compiler: `gcc`. At this time (August 2020), the code is compiled using version 9.3. (It can also be compiled using `clang 8`.) +**3.** Compiler: `clang`. At this time (April 2021), the code is compiled using clang version 11. (It can also be compiled using `gcc` version 10, but it's untested and not suitable for production usage). The standard library is used (`libc++`). @@ -711,7 +711,7 @@ The standard library is used (`libc++`). The CPU instruction set is the minimum supported set among our servers. Currently, it is SSE 4.2. -**6.** Use `-Wall -Wextra -Werror` compilation flags. +**6.** Use `-Wall -Wextra -Werror` compilation flags. Also, `-Weverything` is used, with a few exceptions. **7.** Use static linking with all libraries except those that are difficult to connect to statically (see the output of the `ldd` command).
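To tie the developer instructions above together, here is a minimal build sketch (an illustration rather than an official recipe; it assumes clang-11 was installed via the apt.llvm.org script, `cmake` and `ninja` are available, and the directory names are arbitrary):

``` bash
git clone --recursive https://github.com/ClickHouse/ClickHouse.git
cd ClickHouse
mkdir build && cd build
export CC=clang-11 CXX=clang++-11          # point CMake at clang, as described above
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
ninja clickhouse-server clickhouse-client  # or plain `ninja` to build every target
```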
diff --git a/docs/en/engines/database-engines/atomic.md b/docs/en/engines/database-engines/atomic.md index d8ad18daec2..d897631dd6e 100644 --- a/docs/en/engines/database-engines/atomic.md +++ b/docs/en/engines/database-engines/atomic.md @@ -3,15 +3,52 @@ toc_priority: 32 toc_title: Atomic --- - # Atomic {#atomic} -It supports non-blocking `DROP` and `RENAME TABLE` queries and atomic `EXCHANGE TABLES t1 AND t2` queries. `Atomic` database engine is used by default. +It supports non-blocking [DROP TABLE](#drop-detach-table) and [RENAME TABLE](#rename-table) queries and atomic [EXCHANGE TABLES t1 AND t2](#exchange-tables) queries. `Atomic` database engine is used by default. ## Creating a Database {#creating-a-database} -```sql -CREATE DATABASE test ENGINE = Atomic; +``` sql + CREATE DATABASE test[ ENGINE = Atomic]; ``` -[Original article](https://clickhouse.tech/docs/en/engines/database-engines/atomic/) +## Specifics and recommendations {#specifics-and-recommendations} + +### Table UUID {#table-uuid} + +All tables in an `Atomic` database have a persistent [UUID](../../sql-reference/data-types/uuid.md) and store data in the directory `/clickhouse_path/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`, where `xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` is the UUID of the table. +Usually, the UUID is generated automatically, but the user can also explicitly specify the UUID in the same way when creating the table (this is not recommended). To display the `SHOW CREATE` query with the UUID, you can use the [show_table_uuid_in_table_create_query_if_not_nil](../../operations/settings/settings.md#show_table_uuid_in_table_create_query_if_not_nil) setting. For example: + +```sql +CREATE TABLE name UUID '28f1c61c-2970-457a-bffe-454156ddcfef' (n UInt64) ENGINE = ...; +``` +### RENAME TABLE {#rename-table} + +`RENAME` queries are performed without changing the UUID or moving table data. These queries do not wait for the completion of queries using the table and will be executed instantly. + +### DROP/DETACH TABLE {#drop-detach-table} + +On `DROP TABLE` no data is removed immediately: the `Atomic` database just marks the table as dropped by moving its metadata to `/clickhouse_path/metadata_dropped/` and notifies a background thread. The delay before the final table data deletion is specified by the [database_atomic_delay_before_drop_table_sec](../../operations/server-configuration-parameters/settings.md#database_atomic_delay_before_drop_table_sec) setting. +You can specify synchronous mode using the `SYNC` modifier. Use the [database_atomic_wait_for_drop_and_detach_synchronously](../../operations/settings/settings.md#database_atomic_wait_for_drop_and_detach_synchronously) setting to do this. In this case `DROP` waits for running `SELECT`, `INSERT` and other queries that are using the table to finish. The table will actually be removed when it is no longer in use. + +### EXCHANGE TABLES {#exchange-tables} + +The `EXCHANGE` query swaps tables atomically. So instead of this non-atomic operation: + +```sql +RENAME TABLE new_table TO tmp, old_table TO new_table, tmp TO old_table; +``` +you can use one atomic query: + +``` sql +EXCHANGE TABLES new_table AND old_table; +``` + +### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database} + +For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, it is recommended not to specify the engine parameters - the path in ZooKeeper and the replica name.
In this case, the configuration parameters [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name) will be used. If you want to specify engine parameters explicitly, it is recommended to use the `{uuid}` macro. This ensures that unique paths are automatically generated for each table in ZooKeeper. + +## See Also + +- [system.databases](../../operations/system-tables/databases.md) system table diff --git a/docs/en/engines/table-engines/integrations/postgresql.md b/docs/en/engines/table-engines/integrations/postgresql.md index ad5bebb3dea..4474b764d2e 100644 --- a/docs/en/engines/table-engines/integrations/postgresql.md +++ b/docs/en/engines/table-engines/integrations/postgresql.md @@ -94,10 +94,10 @@ postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> SELECT * FROM test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Table in ClickHouse, retrieving data from the PostgreSQL table created above: diff --git a/docs/en/engines/table-engines/integrations/s3.md b/docs/en/engines/table-engines/integrations/s3.md index 3d02aa13812..6592f8b9752 100644 --- a/docs/en/engines/table-engines/integrations/s3.md +++ b/docs/en/engines/table-engines/integrations/s3.md @@ -19,26 +19,26 @@ ENGINE = S3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, - `path` — Bucket url with path to file. Supports following wildcards in readonly mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [below](#wildcards-in-path). - `format` — The [format](../../../interfaces/formats.md#formats) of the file. - `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`. -- `compression` — Compression type. Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. Parameter is optional. By default, it will autodetect compression by file extension. +- `compression` — Compression type. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Parameter is optional. By default, it will autodetect compression by file extension. -**Example:** +**Example** -**1.** Set up the `s3_engine_table` table: +1. Set up the `s3_engine_table` table: -```sql -CREATE TABLE s3_engine_table (name String, value UInt32) ENGINE=S3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip') +``` sql +CREATE TABLE s3_engine_table (name String, value UInt32) ENGINE=S3('https://storage.yandexcloud.net/my-test-bucket-768/test-data.csv.gz', 'CSV', 'name String, value UInt32', 'gzip'); ``` -**2.** Fill file: +2. Fill the file: -```sql -INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3) +``` sql +INSERT INTO s3_engine_table VALUES ('one', 1), ('two', 2), ('three', 3); ``` -**3.** Query the data: +3. 
Query the data: -```sql -SELECT * FROM s3_engine_table LIMIT 2 +``` sql +SELECT * FROM s3_engine_table LIMIT 2; ``` ```text @@ -73,13 +73,63 @@ For more information about virtual columns see [here](../../../engines/table-eng Constructions with `{}` are similar to the [remote](../../../sql-reference/table-functions/remote.md) table function. -## S3-related Settings {#s3-settings} +**Example** + +1. Suppose we have several files in CSV format with the following URIs on S3: + +- ‘https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_1.csv’ +- ‘https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_2.csv’ +- ‘https://storage.yandexcloud.net/my-test-bucket-768/some_prefix/some_file_3.csv’ +- ‘https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_1.csv’ +- ‘https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_2.csv’ +- ‘https://storage.yandexcloud.net/my-test-bucket-768/another_prefix/some_file_3.csv’ + +There are several ways to make a table consisting of all six files: + +The first way: + +``` sql +CREATE TABLE table_with_range (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_{1..3}', 'CSV'); +``` + +Another way: + +``` sql +CREATE TABLE table_with_question_mark (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/some_file_?', 'CSV'); +``` + +Table consists of all the files in both directories (all files should satisfy format and schema described in query): + +``` sql +CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV'); +``` + +If the listing of files contains number ranges with leading zeros, use the construction with braces for each digit separately or use `?`. + +**Example** + +Create table with files named `file-000.csv`, `file-001.csv`, … , `file-999.csv`: + +``` sql +CREATE TABLE big_table (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV'); +``` + +## Virtual Columns {#virtual-columns} + +- `_path` — Path to the file. +- `_file` — Name of the file. + +**See Also** + +- [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns) + +## S3-related settings {#settings} The following settings can be set before query execution or placed into configuration file. -- `s3_max_single_part_upload_size` — The maximum size of object to upload using singlepart upload to S3. Default value is `64Mb`. +- `s3_max_single_part_upload_size` — The maximum size of object to upload using singlepart upload to S3. Default value is `64Mb`. - `s3_min_upload_part_size` — The minimum size of part to upload during multipart upload to [S3 Multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html). Default value is `512Mb`. -- `s3_max_redirects` — Max number of S3 redirects hops allowed. Default value is `10`. +- `s3_max_redirects` — Max number of S3 redirects hops allowed. Default value is `10`. Security consideration: if malicious user can specify arbitrary S3 URLs, `s3_max_redirects` must be set to zero to avoid [SSRF](https://en.wikipedia.org/wiki/Server-side_request_forgery) attacks; or alternatively, `remote_host_filter` must be specified in server configuration. 
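As an illustration of the security note above (this sketch is not part of the original patch; it assumes the `s3_engine_table` table from the earlier example), the redirect limit can be tightened per session before querying an S3-backed table:

``` sql
-- Illustrative sketch: disable S3 redirects for this session to reduce SSRF exposure,
-- then read from the S3 table defined in the example above.
SET s3_max_redirects = 0;
SELECT * FROM s3_engine_table LIMIT 2;
```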
@@ -90,6 +140,7 @@ The following settings can be specified in configuration file for given endpoint - `endpoint` — Specifies prefix of an endpoint. Mandatory. - `access_key_id` and `secret_access_key` — Specifies credentials to use with given endpoint. Optional. - `use_environment_credentials` — If set to `true`, S3 client will try to obtain credentials from environment variables and Amazon EC2 metadata for given endpoint. Optional, default value is `false`. +- `use_insecure_imds_request` — If set to `true`, S3 client will use insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Optional, default value is `false`. - `header` — Adds specified HTTP header to a request to given endpoint. Optional, can be speficied multiple times. - `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set. Optional. @@ -102,11 +153,13 @@ The following settings can be specified in configuration file for given endpoint + ``` + ## Usage {#usage-examples} Suppose we have several files in TSV format with the following URIs on HDFS: @@ -149,8 +202,7 @@ ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_p CREATE TABLE big_table (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file-{000..999}.csv', 'CSV'); ``` + ## See also - [S3 table function](../../../sql-reference/table-functions/s3.md) - -[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/s3/) diff --git a/docs/en/engines/table-engines/mergetree-family/aggregatingmergetree.md b/docs/en/engines/table-engines/mergetree-family/aggregatingmergetree.md index 1a997b6b237..818830646cb 100644 --- a/docs/en/engines/table-engines/mergetree-family/aggregatingmergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/aggregatingmergetree.md @@ -3,7 +3,7 @@ toc_priority: 35 toc_title: AggregatingMergeTree --- -# Aggregatingmergetree {#aggregatingmergetree} +# AggregatingMergeTree {#aggregatingmergetree} The engine inherits from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree), altering the logic for data parts merging. ClickHouse replaces all rows with the same primary key (or more accurately, with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md)) with a single row (within a one data part) that stores a combination of states of aggregate functions. diff --git a/docs/en/engines/table-engines/mergetree-family/mergetree.md b/docs/en/engines/table-engines/mergetree-family/mergetree.md index a24b7229d17..9874e87be78 100644 --- a/docs/en/engines/table-engines/mergetree-family/mergetree.md +++ b/docs/en/engines/table-engines/mergetree-family/mergetree.md @@ -767,6 +767,7 @@ Required parameters: Optional parameters: - `use_environment_credentials` — Reads AWS credentials from the Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN if they exist. Default value is `false`. +- `use_insecure_imds_request` — If set to `true`, S3 client will use insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Default value is `false`. - `proxy` — Proxy configuration for S3 endpoint. Each `uri` element inside `proxy` block should contain a proxy URL. - `connect_timeout_ms` — Socket connect timeout in milliseconds. Default value is `10 seconds`. - `request_timeout_ms` — Request timeout in milliseconds. Default value is `5 seconds`. 
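A rough usage sketch for the S3 disk configuration described above (not part of the original patch; the policy name `s3_main` is an assumption and must match a storage policy defined in the server configuration):

``` sql
-- Assumes a storage policy named 's3_main' that references the S3-backed disk configured above.
CREATE TABLE s3_backed_table
(
    id UInt64,
    value String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 's3_main';
```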
diff --git a/docs/en/engines/table-engines/special/buffer.md b/docs/en/engines/table-engines/special/buffer.md index bf6c08f8f6c..8245cd19e8c 100644 --- a/docs/en/engines/table-engines/special/buffer.md +++ b/docs/en/engines/table-engines/special/buffer.md @@ -18,11 +18,17 @@ Engine parameters: - `num_layers` – Parallelism layer. Physically, the table will be represented as `num_layers` of independent buffers. Recommended value: 16. - `min_time`, `max_time`, `min_rows`, `max_rows`, `min_bytes`, and `max_bytes` – Conditions for flushing data from the buffer. +Optional engine parameters: + +- `flush_time`, `flush_rows`, `flush_bytes` – Conditions for flushing data from the buffer that are checked only in the background (omitted or zero means no `flush*` parameters). + Data is flushed from the buffer and written to the destination table if all the `min*` conditions or at least one `max*` condition are met. -- `min_time`, `max_time` – Condition for the time in seconds from the moment of the first write to the buffer. -- `min_rows`, `max_rows` – Condition for the number of rows in the buffer. -- `min_bytes`, `max_bytes` – Condition for the number of bytes in the buffer. +Also, if at least one `flush*` condition is met, a flush is initiated in the background. This is different from `max*`, since `flush*` allows you to configure background flushes separately to avoid adding latency for `INSERT` (into `Buffer`) queries. + +- `min_time`, `max_time`, `flush_time` – Condition for the time in seconds from the moment of the first write to the buffer. +- `min_rows`, `max_rows`, `flush_rows` – Condition for the number of rows in the buffer. +- `min_bytes`, `max_bytes`, `flush_bytes` – Condition for the number of bytes in the buffer. During the write operation, data is inserted to a `num_layers` number of random buffers. Or, if the data part to insert is large enough (greater than `max_rows` or `max_bytes`), it is written directly to the destination table, omitting the buffer. diff --git a/docs/en/getting-started/example-datasets/cell-towers.md b/docs/en/getting-started/example-datasets/cell-towers.md index 76effdd4c62..7028b650ad1 100644 --- a/docs/en/getting-started/example-datasets/cell-towers.md +++ b/docs/en/getting-started/example-datasets/cell-towers.md @@ -3,31 +3,31 @@ toc_priority: 21 toc_title: Cell Towers --- -# Cell Towers +# Cell Towers {#cell-towers} This dataset is from [OpenCellid](https://www.opencellid.org/) - The world's largest Open Database of Cell Towers. -As of 2021 it contains more than 40 million records about cell towers (GSM, LTE, UMTS, etc.) around the world with their geographical coordinates and metadata (country code, network, etc). +As of 2021, it contains more than 40 million records about cell towers (GSM, LTE, UMTS, etc.) around the world with their geographical coordinates and metadata (country code, network, etc). -OpenCelliD Project is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, and we redistribute a snapshot of this dataset under the terms of the same license. The up to date version of the dataset is available to download after sign in. +OpenCelliD Project is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, and we redistribute a snapshot of this dataset under the terms of the same license. The up-to-date version of the dataset is available to download after sign in.
-## Get the Dataset +## Get the Dataset {#get-the-dataset} -Download the snapshot of the dataset from Feb 2021: [https://datasets.clickhouse.tech/cell_towers.csv.xz] (729 MB). +1. Download the snapshot of the dataset from February 2021: [https://datasets.clickhouse.tech/cell_towers.csv.xz] (729 MB). -Optionally validate the integrity: +2. Validate the integrity (optional step): ``` md5sum cell_towers.csv.xz 8cf986f4a0d9f12c6f384a0e9192c908 cell_towers.csv.xz ``` -Decompress it with the following command: +3. Decompress it with the following command: ``` xz -d cell_towers.csv.xz ``` -Create a table: +4. Create a table: ``` CREATE TABLE cell_towers @@ -50,15 +50,15 @@ CREATE TABLE cell_towers ENGINE = MergeTree ORDER BY (radio, mcc, net, created); ``` -Insert the dataset: +5. Insert the dataset: ``` clickhouse-client --query "INSERT INTO cell_towers FORMAT CSVWithNames" < cell_towers.csv ``` +## Examples {#examples} -## Run some queries +1. A number of cell towers by type: -Number of cell towers by type: ``` SELECT radio, count() AS c FROM cell_towers GROUP BY radio ORDER BY c DESC @@ -73,7 +73,8 @@ SELECT radio, count() AS c FROM cell_towers GROUP BY radio ORDER BY c DESC 5 rows in set. Elapsed: 0.011 sec. Processed 43.28 million rows, 43.28 MB (3.83 billion rows/s., 3.83 GB/s.) ``` -Cell towers by mobile country code (MCC): +2. Cell towers by [mobile country code (MCC)](https://en.wikipedia.org/wiki/Mobile_country_code): + ``` SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10 @@ -93,28 +94,28 @@ SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10 10 rows in set. Elapsed: 0.019 sec. Processed 43.28 million rows, 86.55 MB (2.33 billion rows/s., 4.65 GB/s.) ``` -See the dictionary here: [https://en.wikipedia.org/wiki/Mobile_country_code](https://en.wikipedia.org/wiki/Mobile_country_code). +So, the top countries are: the USA, Germany, and Russia. -So, the top countries are USA, Germany and Russia. - -You may want to create an [External Dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts/) in ClickHouse to decode these values. +You may want to create an [External Dictionary](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) in ClickHouse to decode these values. -### Example of using `pointInPolygon` function +## Use case {#use-case} -Create a table where we will store polygons: +Using `pointInPolygon` function. + +1. Create a table where we will store polygons: ``` CREATE TEMPORARY TABLE moscow (polygon Array(Tuple(Float64, Float64))); ``` -This is a rough shape of Moscow (without "new Moscow"): +2. 
This is a rough shape of Moscow (without "new Moscow"): ``` INSERT INTO moscow VALUES ([(37.84172564285271, 55.78000432402266), (37.8381207618713, 55.775874525970494), (37.83979446823122, 55.775626746008065), (37.84243326983639, 55.77446586811748), (37.84262672750849, 55.771974101091104), (37.84153238623039, 55.77114545193181), (37.841124690460184, 55.76722010265554), (37.84239076983644, 55.76654891107098), (37.842283558197025, 55.76258709833121), (37.8421759312134, 55.758073999993734), (37.84198330422974, 55.75381499999371), (37.8416827275085, 55.749277102484484), (37.84157576190186, 55.74794544108413), (37.83897929098507, 55.74525257875241), (37.83739676451868, 55.74404373042019), (37.838732481460525, 55.74298009816793), (37.841183997352545, 55.743060321833575), (37.84097476190185, 55.73938799999373), (37.84048155819702, 55.73570799999372), (37.840095812164286, 55.73228210777237), (37.83983814285274, 55.73080491981639), (37.83846476321406, 55.729799917464675), (37.83835745269769, 55.72919751082619), (37.838636380279524, 55.72859509486539), (37.8395161005249, 55.727705075632784), (37.83897964285276, 55.722727886185154), (37.83862557539366, 55.72034817326636), (37.83559735744853, 55.71944437307499), (37.835370708803126, 55.71831419154461), (37.83738169402022, 55.71765218986692), (37.83823396494291, 55.71691750159089), (37.838056931213345, 55.71547311301385), (37.836812846557606, 55.71221445615604), (37.83522525396725, 55.709331054395555), (37.83269301586908, 55.70953687463627), (37.829667367706236, 55.70903403789297), (37.83311126588435, 55.70552351822608), (37.83058993121339, 55.70041317726053), (37.82983872750851, 55.69883771404813), (37.82934501586913, 55.69718947487017), (37.828926414016685, 55.69504441658371), (37.82876530422971, 55.69287499999378), (37.82894754100031, 55.690759754047335), (37.827697554878185, 55.68951421135665), (37.82447346292115, 55.68965045405069), (37.83136543914793, 55.68322046195302), (37.833554015869154, 55.67814012759211), (37.83544184655761, 55.67295011628339), (37.837480388885474, 55.6672498719639), (37.838960677246064, 55.66316274139358), (37.83926093121332, 55.66046999999383), (37.839025050262435, 55.65869897264431), (37.83670784390257, 55.65794084879904), (37.835656529083245, 55.65694309303843), (37.83704060449217, 55.65689306460552), (37.83696819873806, 55.65550363526252), (37.83760389616388, 55.65487847246661), (37.83687972750851, 55.65356745541324), (37.83515216004943, 55.65155951234079), (37.83312418518067, 55.64979413590619), (37.82801726983639, 55.64640836412121), (37.820614174591, 55.64164525405531), (37.818908190475426, 55.6421883258084), (37.81717543386075, 55.64112490388471), (37.81690987037274, 55.63916106913107), (37.815099354492155, 55.637925371757085), (37.808769150787356, 55.633798276884455), (37.80100123544311, 55.62873670012244), (37.79598013491824, 55.62554336109055), (37.78634567724606, 55.62033499605651), (37.78334147619623, 55.618768681480326), (37.77746201055901, 55.619855533402706), (37.77527329626457, 55.61909966711279), (37.77801986242668, 55.618770300976294), (37.778212973541216, 55.617257701952106), (37.77784818518065, 55.61574504433011), (37.77016867724609, 55.61148576294007), (37.760191219573976, 55.60599579539028), (37.75338926983641, 55.60227892751446), (37.746329965606634, 55.59920577639331), (37.73939925396728, 55.59631430313617), (37.73273665739439, 55.5935318803559), (37.7299954450912, 55.59350760316188), (37.7268679946899, 55.59469840523759), (37.72626726983634, 55.59229549697373), (37.7262673598022, 
55.59081598950582), (37.71897193121335, 55.5877595845419), (37.70871550793456, 55.58393177431724), (37.700497489410374, 55.580917323756644), (37.69204305026244, 55.57778089778455), (37.68544477378839, 55.57815154690915), (37.68391050793454, 55.57472945079756), (37.678803592590306, 55.57328235936491), (37.6743402539673, 55.57255251445782), (37.66813862698363, 55.57216388774464), (37.617927457672096, 55.57505691895805), (37.60443099999999, 55.5757737568051), (37.599683515869145, 55.57749105910326), (37.59754177842709, 55.57796291823627), (37.59625834786988, 55.57906686095235), (37.59501783265684, 55.57746616444403), (37.593090671936025, 55.57671634534502), (37.587018007904, 55.577944600233785), (37.578692203704804, 55.57982895000019), (37.57327546607398, 55.58116294118248), (37.57385012109279, 55.581550362779), (37.57399562266922, 55.5820107079112), (37.5735356072979, 55.58226289171689), (37.57290393054962, 55.582393529795155), (37.57037722355653, 55.581919415056234), (37.5592298306885, 55.584471614867844), (37.54189249206543, 55.58867650795186), (37.5297256269836, 55.59158133551745), (37.517837865081766, 55.59443656218868), (37.51200186508174, 55.59635625174229), (37.506808949737554, 55.59907823904434), (37.49820432275389, 55.6062944994944), (37.494406071441674, 55.60967103463367), (37.494760001358024, 55.61066689753365), (37.49397137107085, 55.61220931698269), (37.49016528606031, 55.613417718449064), (37.48773249206542, 55.61530616333343), (37.47921386508177, 55.622640129112334), (37.470652153442394, 55.62993723476164), (37.46273446298218, 55.6368075123157), (37.46350692265317, 55.64068225239439), (37.46050283203121, 55.640794546982576), (37.457627470916734, 55.64118904154646), (37.450718034393326, 55.64690488145138), (37.44239252645875, 55.65397824729769), (37.434587576721185, 55.66053543155961), (37.43582144975277, 55.661693766520735), (37.43576786245721, 55.662755031737014), (37.430982915344174, 55.664610641628116), (37.428547447097685, 55.66778515273695), (37.42945134592044, 55.668633314343566), (37.42859571562949, 55.66948145750025), (37.4262836402282, 55.670813882451405), (37.418709037048295, 55.6811141674414), (37.41922139651101, 55.68235377885389), (37.419218771842885, 55.68359335082235), (37.417196501327446, 55.684375235224735), (37.41607020370478, 55.68540557585352), (37.415640857147146, 55.68686637150793), (37.414632153442334, 55.68903015131686), (37.413344899475064, 55.690896881757396), (37.41171432275391, 55.69264232162232), (37.40948282275393, 55.69455101638112), (37.40703674603271, 55.69638690385348), (37.39607169577025, 55.70451821283731), (37.38952706878662, 55.70942491932811), (37.387778313491815, 55.71149057784176), (37.39049275399779, 55.71419814298992), (37.385557272491454, 55.7155489617061), (37.38388335714726, 55.71849856042102), (37.378368238098155, 55.7292763261685), (37.37763597123337, 55.730845879211614), (37.37890062088197, 55.73167906388319), (37.37750451918789, 55.734703664681774), (37.375610832015965, 55.734851959522246), (37.3723813571472, 55.74105626086403), (37.37014935714723, 55.746115620904355), (37.36944173016362, 55.750883999993725), (37.36975304365541, 55.76335905525834), (37.37244070571134, 55.76432079697595), (37.3724259757175, 55.76636979670426), (37.369922155757884, 55.76735417953104), (37.369892695770275, 55.76823419316575), (37.370214730163575, 55.782312184391266), (37.370493611114505, 55.78436801120489), (37.37120164550783, 55.78596427165359), (37.37284851456452, 55.7874378183096), (37.37608325135799, 55.7886695054807), (37.3764587460632, 
55.78947647305964), (37.37530000265506, 55.79146512926804), (37.38235915344241, 55.79899647809345), (37.384344043655396, 55.80113596939471), (37.38594269577028, 55.80322699999366), (37.38711208598329, 55.804919036911976), (37.3880239841309, 55.806610999993666), (37.38928977249147, 55.81001864976979), (37.39038389947512, 55.81348641242801), (37.39235781481933, 55.81983538336746), (37.393709457672124, 55.82417822811877), (37.394685720901464, 55.82792275755836), (37.39557615344238, 55.830447148154136), (37.39844478226658, 55.83167107969975), (37.40019761214057, 55.83151823557964), (37.400398790382326, 55.83264967594742), (37.39659544313046, 55.83322180909622), (37.39667059524539, 55.83402792148566), (37.39682089947515, 55.83638877400216), (37.39643489154053, 55.83861656112751), (37.3955338994751, 55.84072348043264), (37.392680272491454, 55.84502158126453), (37.39241188227847, 55.84659117913199), (37.392529730163616, 55.84816071336481), (37.39486835714723, 55.85288092980303), (37.39873052645878, 55.859893456073635), (37.40272161111449, 55.86441833633205), (37.40697072750854, 55.867579567544375), (37.410007082016016, 55.868369880337), (37.4120992989502, 55.86920843741314), (37.412668021163924, 55.87055369615854), (37.41482461111453, 55.87170587948249), (37.41862266137694, 55.873183961039565), (37.42413732540892, 55.874879126654704), (37.4312182698669, 55.875614937236705), (37.43111093783558, 55.8762723478417), (37.43332105622856, 55.87706546369396), (37.43385747619623, 55.87790681284802), (37.441303050262405, 55.88027084462084), (37.44747234260555, 55.87942070143253), (37.44716141796871, 55.88072960917233), (37.44769797085568, 55.88121221323979), (37.45204320500181, 55.882080694420715), (37.45673176190186, 55.882346110794586), (37.463383999999984, 55.88252729504517), (37.46682797486874, 55.88294937719063), (37.470014457672086, 55.88361266759345), (37.47751410450743, 55.88546991372396), (37.47860317658232, 55.88534929207307), (37.48165826025772, 55.882563306475106), (37.48316434442331, 55.8815803226785), (37.483831555817645, 55.882427612793315), (37.483182967125686, 55.88372791409729), (37.483092277908824, 55.88495581062434), (37.4855716508179, 55.8875561994203), (37.486440636245746, 55.887827444039566), (37.49014203439328, 55.88897899871799), (37.493210285705544, 55.890208937135604), (37.497512451065035, 55.891342397444696), (37.49780744510645, 55.89174030252967), (37.49940333499519, 55.89239745507079), (37.50018383334346, 55.89339220941865), (37.52421672750851, 55.903869074155224), (37.52977457672118, 55.90564076517974), (37.53503220370484, 55.90661661218259), (37.54042858064267, 55.90714113744566), (37.54320461007303, 55.905645048442985), (37.545686966066306, 55.906608607018505), (37.54743976120755, 55.90788552162358), (37.55796999999999, 55.90901557907218), (37.572711542327866, 55.91059395704873), (37.57942799999998, 55.91073854155573), (37.58502865872187, 55.91009969268444), (37.58739968913264, 55.90794809960554), (37.59131567193598, 55.908713267595054), (37.612687423278814, 55.902866854295375), (37.62348079629517, 55.90041967242986), (37.635797880950896, 55.898141151686396), (37.649487626983664, 55.89639275532968), (37.65619302513125, 55.89572360207488), (37.66294133862307, 55.895295577183965), (37.66874564418033, 55.89505457604897), (37.67375601586915, 55.89254677027454), (37.67744661901856, 55.8947775867987), (37.688347, 55.89450045676125), (37.69480554232789, 55.89422926332761), (37.70107096560668, 55.89322256101114), (37.705962965606716, 55.891763491662616), (37.711885134918205, 
55.889110234998974), (37.71682005026245, 55.886577568759876), (37.7199315476074, 55.88458159806678), (37.72234560316464, 55.882281005794134), (37.72364385977171, 55.8809452036196), (37.725371142837474, 55.8809722706006), (37.727870902099546, 55.88037213862385), (37.73394330422971, 55.877941504088696), (37.745339592590376, 55.87208120378722), (37.75525267724611, 55.86703807949492), (37.76919976190188, 55.859821640197474), (37.827835219574, 55.82962968399116), (37.83341438888553, 55.82575289922351), (37.83652584655761, 55.82188784027888), (37.83809213491821, 55.81612575504693), (37.83605359521481, 55.81460347077685), (37.83632178569025, 55.81276696067908), (37.838623105812026, 55.811486181656385), (37.83912198147584, 55.807329380532785), (37.839079078033414, 55.80510270463816), (37.83965844708251, 55.79940712529036), (37.840581150787344, 55.79131399999368), (37.84172564285271, 55.78000432402266)]); ``` -Check how many cell towers are in Moscow: +3. Check how many cell towers are in Moscow: ``` SELECT count() FROM cell_towers WHERE pointInPolygon((lon, lat), (SELECT * FROM moscow)) @@ -128,6 +129,4 @@ SELECT count() FROM cell_towers WHERE pointInPolygon((lon, lat), (SELECT * FROM The data is also available for interactive queries in the [Playground](https://gh-api.clickhouse.tech/play?user=play), [example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIG1jYywgY291bnQoKSBGUk9NIGNlbGxfdG93ZXJzIEdST1VQIEJZIG1jYyBPUkRFUiBCWSBjb3VudCgpIERFU0M=). -Although you cannot create temporary tables there. - -[Original article](https://clickhouse.tech/docs/en/getting_started/example_datasets/cell-towers/) +Although you cannot create temporary tables there. \ No newline at end of file diff --git a/docs/en/guides/apply-catboost-model.md b/docs/en/guides/apply-catboost-model.md index f614b121714..7c2c8a575ec 100644 --- a/docs/en/guides/apply-catboost-model.md +++ b/docs/en/guides/apply-catboost-model.md @@ -159,6 +159,9 @@ The fastest way to evaluate a CatBoost model is compile `libcatboostmodel./home/catboost/models/*_model.xml ``` +!!! note "Note" + You can change path to the CatBoost model configuration later without restarting server. + ## 4. Run the Model Inference from SQL {#run-model-inference} For test model run the ClickHouse client `$ clickhouse client`. diff --git a/docs/en/interfaces/third-party/client-libraries.md b/docs/en/interfaces/third-party/client-libraries.md index c08eec61b1c..f5c85289171 100644 --- a/docs/en/interfaces/third-party/client-libraries.md +++ b/docs/en/interfaces/third-party/client-libraries.md @@ -23,6 +23,7 @@ toc_title: Client Libraries - [SeasClick C++ client](https://github.com/SeasX/SeasClick) - [one-ck](https://github.com/lizhichao/one-ck) - [glushkovds/phpclickhouse-laravel](https://packagist.org/packages/glushkovds/phpclickhouse-laravel) + - [kolya7k ClickHouse PHP extension](https://github.com//kolya7k/clickhouse-php) - Go - [clickhouse](https://github.com/kshvakov/clickhouse/) - [go-clickhouse](https://github.com/roistat/go-clickhouse) diff --git a/docs/en/interfaces/third-party/gui.md b/docs/en/interfaces/third-party/gui.md index 5d14b3aa3cc..e54e40441ca 100644 --- a/docs/en/interfaces/third-party/gui.md +++ b/docs/en/interfaces/third-party/gui.md @@ -169,19 +169,21 @@ Features: ### SeekTable {#seektable} -[SeekTable](https://www.seektable.com) is a self-service BI tool for data exploration and operational reporting. SeekTable is available both as a cloud service and a self-hosted version. SeekTable reports may be embedded into any web-app. 
+[SeekTable](https://www.seektable.com) is a self-service BI tool for data exploration and operational reporting. It is available both as a cloud service and a self-hosted version. Reports from SeekTable may be embedded into any web-app. Features: - Business users-friendly reports builder. - Powerful report parameters for SQL filtering and report-specific query customizations. - Can connect to ClickHouse both with a native TCP/IP endpoint and a HTTP(S) interface (2 different drivers). -- It is possible to use all power of CH SQL dialect in dimensions/measures definitions +- It is possible to use all power of ClickHouse SQL dialect in dimensions/measures definitions. - [Web API](https://www.seektable.com/help/web-api-integration) for automated reports generation. -- Supports reports development flow with account data [backup/restore](https://www.seektable.com/help/self-hosted-backup-restore), data models (cubes) / reports configuration is a human-readable XML and can be stored under version control. +- Supports reports development flow with account data [backup/restore](https://www.seektable.com/help/self-hosted-backup-restore); data models (cubes) / reports configuration is a human-readable XML and can be stored under version control system. SeekTable is [free](https://www.seektable.com/help/cloud-pricing) for personal/individual usage. [How to configure ClickHouse connection in SeekTable.](https://www.seektable.com/help/clickhouse-pivot-table) -[Original article](https://clickhouse.tech/docs/en/interfaces/third-party/gui/) +### Chadmin {#chadmin} + +[Chadmin](https://github.com/bun4uk/chadmin) is a simple UI where you can visualize your currently running queries on your ClickHouse cluster and info about them and kill them if you want. diff --git a/docs/en/introduction/adopters.md b/docs/en/introduction/adopters.md index 012d86b1ef7..fa257a84173 100644 --- a/docs/en/introduction/adopters.md +++ b/docs/en/introduction/adopters.md @@ -13,6 +13,7 @@ toc_title: Adopters | 2gis | Maps | Monitoring | — | — | [Talk in Russian, July 2019](https://youtu.be/58sPkXfq6nw) | | Admiral | Martech | Engagement Management | — | — | [Webinar Slides, June 2020](https://altinity.com/presentations/2020/06/16/big-data-in-real-time-how-clickhouse-powers-admirals-visitor-relationships-for-publishers) | | AdScribe | Ads | TV Analytics | — | — | [A quote from CTO](https://altinity.com/24x7-support/) | +| Ahrefs | SEO | Analytics | — | — | [Job listing](https://ahrefs.com/jobs/data-scientist-search) | | Alibaba Cloud | Cloud | Managed Service | — | — | [Official Website](https://help.aliyun.com/product/144466.html) | | Aloha Browser | Mobile App | Browser backend | — | — | [Slides in Russian, May 2019](https://presentations.clickhouse.tech/meetup22/aloha.pdf) | | Altinity | Cloud, SaaS | Main product | — | — | [Official Website](https://altinity.com/) | @@ -47,7 +48,8 @@ toc_title: Adopters | Diva-e | Digital consulting | Main Product | — | — | [Slides in English, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup29/ClickHouse-MeetUp-Unusual-Applications-sd-2019-09-17.pdf) | | Ecwid | E-commerce SaaS | Metrics, Logging | — | — | [Slides in Russian, April 2019](https://nastachku.ru/var/files/1/presentation/backend/2_Backend_6.pdf) | | eBay | E-commerce | Logs, Metrics and Events | — | — | [Official website, Sep 2020](https://tech.ebayinc.com/engineering/ou-online-analytical-processing/) | -| Exness | Trading | Metrics, Logging | — | — | [Talk in Russian, May 
2019](https://youtu.be/_rpU-TvSfZ8?t=3215) | +| Exness | Trading | Metrics, Logging | — | — | [Talk in Russian, May 2019](https://youtu.be/_rpU-TvSfZ8?t=3215) | +| EventBunker.io | Serverless Data Processing | — | — | — | [Tweet, April 2021](https://twitter.com/Halil_D_/status/1379839133472985091) | | FastNetMon | DDoS Protection | Main Product | | — | [Official website](https://fastnetmon.com/docs-fnm-advanced/fastnetmon-advanced-traffic-persistency/) | | Flipkart | e-Commerce | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=239) | | FunCorp | Games | | — | 14 bn records/day as of Jan 2021 | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) | @@ -75,7 +77,8 @@ toc_title: Adopters | Marilyn | Advertising | Statistics | — | — | [Talk in Russian, June 2017](https://www.youtube.com/watch?v=iXlIgx2khwc) | | Mello | Marketing | Analytics | 1 server | — | [Article, Oct 2020](https://vc.ru/marketing/166180-razrabotka-tipovogo-otcheta-skvoznoy-analitiki) | | MessageBird | Telecommunications | Statistics | — | — | [Slides in English, November 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup20/messagebird.pdf) | -| MindsDB | Machine Learning | Main Product | — | — | [Official Website](https://www.mindsdb.com/blog/machine-learning-models-as-tables-in-ch) |x +| Microsoft | Web Analytics | Clarity (Main Product) | — | — | [A question on GitHub](https://github.com/ClickHouse/ClickHouse/issues/21556) | +| MindsDB | Machine Learning | Main Product | — | — | [Official Website](https://www.mindsdb.com/blog/machine-learning-models-as-tables-in-ch) | | MUX | Online Video | Video Analytics | — | — | [Talk in English, August 2019](https://altinity.com/presentations/2019/8/13/how-clickhouse-became-the-default-analytics-database-for-mux/) | | MGID | Ad network | Web-analytics | — | — | [Blog post in Russian, April 2020](http://gs-studio.com/news-about-it/32777----clickhouse---c) | | Netskope | Network Security | — | — | — | [Job advertisement, March 2021](https://www.mendeley.com/careers/job/senior-software-developer-backend-developer-1346348) | diff --git a/docs/en/operations/performance-test.md b/docs/en/operations/performance-test.md index ca805923ba9..a808ffd0a85 100644 --- a/docs/en/operations/performance-test.md +++ b/docs/en/operations/performance-test.md @@ -12,6 +12,7 @@ With this instruction you can run basic ClickHouse performance test on any serve 3. Copy the link to `clickhouse` binary for amd64 or aarch64. 4. ssh to the server and download it with wget: ```bash +# These links are outdated, please obtain the fresh link from the "commits" page. # For amd64: wget https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_build_check/gcc-10_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse # For aarch64: diff --git a/docs/en/operations/server-configuration-parameters/settings.md b/docs/en/operations/server-configuration-parameters/settings.md index 0b45488ebf7..f86e9668f00 100644 --- a/docs/en/operations/server-configuration-parameters/settings.md +++ b/docs/en/operations/server-configuration-parameters/settings.md @@ -100,6 +100,11 @@ Default value: `1073741824` (1 GB). 1073741824 ``` +## database_atomic_delay_before_drop_table_sec {#database_atomic_delay_before_drop_table_sec} + +Sets the delay before remove table data in seconds. If the query has `SYNC` modifier, this setting is ignored. + +Default value: `480` (8 minute). 
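For illustration only (the table name is hypothetical and not part of the original patch), the delay can be bypassed for a single query with the `SYNC` modifier:

``` sql
-- With SYNC the delay configured by database_atomic_delay_before_drop_table_sec
-- is ignored and the table data is removed immediately.
DROP TABLE IF EXISTS test.hits SYNC;
```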
## default_database {#default-database} @@ -125,6 +130,25 @@ Settings profiles are located in the file specified in the parameter `user_confi default ``` +## default_replica_path {#default_replica_path} + +The path to the table in ZooKeeper. + +**Example** + +``` xml +/clickhouse/tables/{uuid}/{shard} +``` +## default_replica_name {#default_replica_name} + +The replica name in ZooKeeper. + +**Example** + +``` xml +{replica} +``` + ## dictionaries_config {#server_configuration_parameters-dictionaries_config} The path to the config file for external dictionaries. @@ -321,7 +345,8 @@ Similar to `interserver_http_host`, except that this hostname can be used by oth The username and password used to authenticate during [replication](../../engines/table-engines/mergetree-family/replication.md) with the Replicated\* engines. These credentials are used only for communication between replicas and are unrelated to credentials for ClickHouse clients. The server is checking these credentials for connecting replicas and use the same credentials when connecting to other replicas. So, these credentials should be set the same for all replicas in a cluster. By default, the authentication is not used. -**Note:** These credentials are common for replication through `HTTP` and `HTTPS`. +!!! note "Note" + These credentials are common for replication through `HTTP` and `HTTPS`. This section contains the following parameters: diff --git a/docs/en/operations/settings/merge-tree-settings.md b/docs/en/operations/settings/merge-tree-settings.md index 77b68715ba9..b2470207dcc 100644 --- a/docs/en/operations/settings/merge-tree-settings.md +++ b/docs/en/operations/settings/merge-tree-settings.md @@ -56,6 +56,26 @@ Default value: 150. ClickHouse artificially executes `INSERT` longer (adds ‘sleep’) so that the background merge process can merge parts faster than they are added. +## inactive_parts_to_throw_insert {#inactive-parts-to-throw-insert} + +If the number of inactive parts in a single partition is more than the `inactive_parts_to_throw_insert` value, `INSERT` is interrupted with the "Too many inactive parts (N). Parts cleaning are processing significantly slower than inserts" exception. + +Possible values: + +- Any positive integer. + +Default value: 0 (unlimited). + +## inactive_parts_to_delay_insert {#inactive-parts-to-delay-insert} + +If the number of inactive parts in a single partition in the table is at least the `inactive_parts_to_delay_insert` value, an `INSERT` is artificially slowed down. It is useful when a server fails to clean up parts quickly enough. + +Possible values: + +- Any positive integer. + +Default value: 0 (unlimited). + ## max_delay_to_insert {#max-delay-to-insert} The value in seconds, which is used to calculate the `INSERT` delay, if the number of active parts in a single partition exceeds the [parts_to_delay_insert](#parts-to-delay-insert) value. diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md index 3696c89b93e..1b422785b4e 100644 --- a/docs/en/operations/settings/settings.md +++ b/docs/en/operations/settings/settings.md @@ -854,8 +854,6 @@ For example, when reading from a table, if it is possible to evaluate expression Default value: the number of physical CPU cores. -If less than one SELECT query is normally run on a server at a time, set this parameter to a value slightly less than the actual number of processor cores. - For queries that are completed quickly because of a LIMIT, you can set a lower ‘max_threads’.
For example, if the necessary number of entries are located in every block and max_threads = 8, then 8 blocks are retrieved, although it would have been enough to read just one. The smaller the `max_threads` value, the less memory is consumed. @@ -1565,6 +1563,17 @@ Possible values: Default value: 0 +## optimize_skip_unused_shards_rewrite_in {#optimize-skip-unused-shardslrewrite-in} + +Rewrites `IN` in a query for remote shards to exclude values that do not belong to the shard (requires `optimize_skip_unused_shards`). + +Possible values: + +- 0 — Disabled. +- 1 — Enabled. + +Default value: 1 (since it requires `optimize_skip_unused_shards` anyway, which is `0` by default) + ## allow_nondeterministic_optimize_skip_unused_shards {#allow-nondeterministic-optimize-skip-unused-shards} Allow nondeterministic (like `rand` or `dictGet`, since later has some caveats with updates) functions in sharding key. @@ -2787,6 +2796,28 @@ Possible values: Default value: `0`. +## database_atomic_wait_for_drop_and_detach_synchronously {#database_atomic_wait_for_drop_and_detach_synchronously} + +Adds a modifier `SYNC` to all `DROP` and `DETACH` queries. + +Possible values: + +- 0 — Queries will be executed with delay. +- 1 — Queries will be executed without delay. + +Default value: `0`. + +## show_table_uuid_in_table_create_query_if_not_nil {#show_table_uuid_in_table_create_query_if_not_nil} + +Sets the `SHOW CREATE TABLE` query display. + +Possible values: + +- 0 — The query will be displayed without table UUID. +- 1 — The query will be displayed with table UUID. + +Default value: `0`. + ## allow_experimental_live_view {#allow-experimental-live-view} Allows creation of experimental [live views](../../sql-reference/statements/create/view.md#live-view). @@ -2822,6 +2853,17 @@ Sets the interval in seconds after which periodically refreshed [live view](../. Default value: `60`. +## check_query_single_value_result {#check_query_single_value_result} + +Defines the level of detail for the [CHECK TABLE](../../sql-reference/statements/check-table.md#checking-mergetree-tables) query result for `MergeTree` family engines. + +Possible values: + +- 0 — The query shows a check status for every individual data part of a table. +- 1 — The query shows the general table check status. + +Default value: `0`. + ## limit {#limit} Sets the number of rows to get from the query result. It adjust the limit previously set by the [LIMIT](../../sql-reference/statements/select/limit.md#limit-clause) clause. diff --git a/docs/en/operations/system-tables/columns.md b/docs/en/operations/system-tables/columns.md index 92a6315d06b..9160dca9a1a 100644 --- a/docs/en/operations/system-tables/columns.md +++ b/docs/en/operations/system-tables/columns.md @@ -4,7 +4,9 @@ Contains information about columns in all the tables. You can use this table to get information similar to the [DESCRIBE TABLE](../../sql-reference/statements/misc.md#misc-describe-table) query, but for multiple tables at once. -The `system.columns` table contains the following columns (the column type is shown in brackets): +Columns from [temporary tables](../../sql-reference/statements/create/table.md#temporary-tables) are visible in `system.columns` only in those sessions where they have been created. They are shown with an empty `database` field. + +Columns: - `database` ([String](../../sql-reference/data-types/string.md)) — Database name. - `table` ([String](../../sql-reference/data-types/string.md)) — Table name.
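A minimal sketch of the temporary-table behaviour described above (the table name is hypothetical and not part of the original patch); run both statements in the same session:

``` sql
-- The temporary table's columns appear in system.columns with an empty `database` field.
CREATE TEMPORARY TABLE tmp_example (n UInt64);
SELECT database, table, name FROM system.columns WHERE table = 'tmp_example';
```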
@@ -26,7 +28,7 @@ The `system.columns` table contains the following columns (the column type is sh **Example** ```sql -:) select * from system.columns LIMIT 2 FORMAT Vertical; +SELECT * FROM system.columns LIMIT 2 FORMAT Vertical; ``` ```text @@ -65,8 +67,6 @@ is_in_sorting_key: 0 is_in_primary_key: 0 is_in_sampling_key: 0 compression_codec: - -2 rows in set. Elapsed: 0.002 sec. ``` [Original article](https://clickhouse.tech/docs/en/operations/system_tables/columns) diff --git a/docs/en/operations/system-tables/replication_queue.md b/docs/en/operations/system-tables/replication_queue.md index f3e3a35f13b..539a29432ac 100644 --- a/docs/en/operations/system-tables/replication_queue.md +++ b/docs/en/operations/system-tables/replication_queue.md @@ -15,16 +15,16 @@ Columns: - `node_name` ([String](../../sql-reference/data-types/string.md)) — Node name in ZooKeeper. - `type` ([String](../../sql-reference/data-types/string.md)) — Type of the task in the queue, one of: - - `GET_PART` - Get the part from another replica. - - `ATTACH_PART` - Attach the part, possibly from our own replica (if found in `detached` folder). - You may think of it as a `GET_PART` with some optimisations as they're nearly identical. - - `MERGE_PARTS` - Merge the parts. - - `DROP_RANGE` - Delete the parts in the specified partition in the specified number range. - - `CLEAR_COLUMN` - NOTE: Deprecated. Drop specific column from specified partition. - - `CLEAR_INDEX` - NOTE: Deprecated. Drop specific index from specified partition. - - `REPLACE_RANGE` - Drop certain range of partitions and replace them by new ones - - `MUTATE_PART` - Apply one or several mutations to the part. - - `ALTER_METADATA` - Apply alter modification according to global /metadata and /columns paths + + - `GET_PART` — Get the part from another replica. + - `ATTACH_PART` — Attach the part, possibly from our own replica (if found in the `detached` folder). You may think of it as a `GET_PART` with some optimizations as they're nearly identical. + - `MERGE_PARTS` — Merge the parts. + - `DROP_RANGE` — Delete the parts in the specified partition in the specified number range. + - `CLEAR_COLUMN` — NOTE: Deprecated. Drop specific column from specified partition. + - `CLEAR_INDEX` — NOTE: Deprecated. Drop specific index from specified partition. + - `REPLACE_RANGE` — Drop a certain range of parts and replace them with new ones. + - `MUTATE_PART` — Apply one or several mutations to the part. + - `ALTER_METADATA` — Apply alter modification according to global /metadata and /columns paths. - `create_time` ([Datetime](../../sql-reference/data-types/datetime.md)) — Date and time when the task was submitted for execution. diff --git a/docs/en/operations/system-tables/tables.md b/docs/en/operations/system-tables/tables.md index 6ad1425e032..ccc9ab94f8b 100644 --- a/docs/en/operations/system-tables/tables.md +++ b/docs/en/operations/system-tables/tables.md @@ -1,59 +1,65 @@ # system.tables {#system-tables} -Contains metadata of each table that the server knows about. Detached tables are not shown in `system.tables`. +Contains metadata of each table that the server knows about. -This table contains the following columns (the column type is shown in brackets): +[Detached](../../sql-reference/statements/detach.md) tables are not shown in `system.tables`. -- `database` (String) — The name of the database the table is in. 
+[Temporary tables](../../sql-reference/statements/create/table.md#temporary-tables) are visible in the `system.tables` only in those session where they have been created. They are shown with the empty `database` field and with the `is_temporary` flag switched on. -- `name` (String) — Table name. +Columns: -- `engine` (String) — Table engine name (without parameters). +- `database` ([String](../../sql-reference/data-types/string.md)) — The name of the database the table is in. -- `is_temporary` (UInt8) - Flag that indicates whether the table is temporary. +- `name` ([String](../../sql-reference/data-types/string.md)) — Table name. -- `data_path` (String) - Path to the table data in the file system. +- `engine` ([String](../../sql-reference/data-types/string.md)) — Table engine name (without parameters). -- `metadata_path` (String) - Path to the table metadata in the file system. +- `is_temporary` ([UInt8](../../sql-reference/data-types/int-uint.md)) - Flag that indicates whether the table is temporary. -- `metadata_modification_time` (DateTime) - Time of latest modification of the table metadata. +- `data_path` ([String](../../sql-reference/data-types/string.md)) - Path to the table data in the file system. -- `dependencies_database` (Array(String)) - Database dependencies. +- `metadata_path` ([String](../../sql-reference/data-types/string.md)) - Path to the table metadata in the file system. -- `dependencies_table` (Array(String)) - Table dependencies ([MaterializedView](../../engines/table-engines/special/materializedview.md) tables based on the current table). +- `metadata_modification_time` ([DateTime](../../sql-reference/data-types/datetime.md)) - Time of latest modification of the table metadata. -- `create_table_query` (String) - The query that was used to create the table. +- `dependencies_database` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) - Database dependencies. -- `engine_full` (String) - Parameters of the table engine. +- `dependencies_table` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) - Table dependencies ([MaterializedView](../../engines/table-engines/special/materializedview.md) tables based on the current table). -- `partition_key` (String) - The partition key expression specified in the table. +- `create_table_query` ([String](../../sql-reference/data-types/string.md)) - The query that was used to create the table. -- `sorting_key` (String) - The sorting key expression specified in the table. +- `engine_full` ([String](../../sql-reference/data-types/string.md)) - Parameters of the table engine. -- `primary_key` (String) - The primary key expression specified in the table. +- `partition_key` ([String](../../sql-reference/data-types/string.md)) - The partition key expression specified in the table. -- `sampling_key` (String) - The sampling key expression specified in the table. +- `sorting_key` ([String](../../sql-reference/data-types/string.md)) - The sorting key expression specified in the table. -- `storage_policy` (String) - The storage policy: +- `primary_key` ([String](../../sql-reference/data-types/string.md)) - The primary key expression specified in the table. + +- `sampling_key` ([String](../../sql-reference/data-types/string.md)) - The sampling key expression specified in the table. 
+ +- `storage_policy` ([String](../../sql-reference/data-types/string.md)) - The storage policy: - [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes) - [Distributed](../../engines/table-engines/special/distributed.md#distributed) -- `total_rows` (Nullable(UInt64)) - Total number of rows, if it is possible to quickly determine exact number of rows in the table, otherwise `Null` (including underying `Buffer` table). +- `total_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - Total number of rows, if it is possible to quickly determine exact number of rows in the table, otherwise `NULL` (including underying `Buffer` table). -- `total_bytes` (Nullable(UInt64)) - Total number of bytes, if it is possible to quickly determine exact number of bytes for the table on storage, otherwise `Null` (**does not** includes any underlying storage). +- `total_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - Total number of bytes, if it is possible to quickly determine exact number of bytes for the table on storage, otherwise `NULL` (does not includes any underlying storage). - If the table stores data on disk, returns used space on disk (i.e. compressed). - If the table stores data in memory, returns approximated number of used bytes in memory. -- `lifetime_rows` (Nullable(UInt64)) - Total number of rows INSERTed since server start (only for `Buffer` tables). +- `lifetime_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - Total number of rows INSERTed since server start (only for `Buffer` tables). -- `lifetime_bytes` (Nullable(UInt64)) - Total number of bytes INSERTed since server start (only for `Buffer` tables). +- `lifetime_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - Total number of bytes INSERTed since server start (only for `Buffer` tables). The `system.tables` table is used in `SHOW TABLES` query implementation. +**Example** + ```sql -:) SELECT * FROM system.tables LIMIT 2 FORMAT Vertical; +SELECT * FROM system.tables LIMIT 2 FORMAT Vertical; ``` ```text @@ -100,8 +106,6 @@ sampling_key: storage_policy: total_rows: ᴺᵁᴸᴸ total_bytes: ᴺᵁᴸᴸ - -2 rows in set. Elapsed: 0.004 sec. ``` [Original article](https://clickhouse.tech/docs/en/operations/system_tables/tables) diff --git a/docs/en/operations/system-tables/trace_log.md b/docs/en/operations/system-tables/trace_log.md index b3b04795a60..e4c01a65d9d 100644 --- a/docs/en/operations/system-tables/trace_log.md +++ b/docs/en/operations/system-tables/trace_log.md @@ -20,10 +20,12 @@ Columns: When connecting to the server by `clickhouse-client`, you see the string similar to `Connected to ClickHouse server version 19.18.1 revision 54429.`. This field contains the `revision`, but not the `version` of a server. -- `timer_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Timer type: +- `trace_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Trace type: - - `Real` represents wall-clock time. - - `CPU` represents CPU time. + - `Real` represents collecting stack traces by wall-clock time. + - `CPU` represents collecting stack traces by CPU time. + - `Memory` represents collecting allocations and deallocations when memory allocation exceeds the subsequent watermark. 
+ - `MemorySample` represents collecting random allocations and deallocations. - `thread_number` ([UInt32](../../sql-reference/data-types/int-uint.md)) — Thread identifier. diff --git a/docs/en/operations/tips.md b/docs/en/operations/tips.md index e62dea0b04e..865fe58d7cd 100644 --- a/docs/en/operations/tips.md +++ b/docs/en/operations/tips.md @@ -191,8 +191,9 @@ dynamicConfigFile=/etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/zoo. Java version: ``` text -Java(TM) SE Runtime Environment (build 1.8.0_25-b17) -Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode) +openjdk 11.0.5-shenandoah 2019-10-15 +OpenJDK Runtime Environment (build 11.0.5-shenandoah+10-adhoc.heretic.src) +OpenJDK 64-Bit Server VM (build 11.0.5-shenandoah+10-adhoc.heretic.src, mixed mode) ``` JVM parameters: @@ -204,7 +205,7 @@ ZOOCFGDIR=/etc/$NAME/conf # TODO this is really ugly # How to find out, which jars are needed? # seems, that log4j requires the log4j.properties file to be in the classpath -CLASSPATH="$ZOOCFGDIR:/usr/build/classes:/usr/build/lib/*.jar:/usr/share/zookeeper/zookeeper-3.5.1-metrika.jar:/usr/share/zookeeper/slf4j-log4j12-1.7.5.jar:/usr/share/zookeeper/slf4j-api-1.7.5.jar:/usr/share/zookeeper/servlet-api-2.5-20081211.jar:/usr/share/zookeeper/netty-3.7.0.Final.jar:/usr/share/zookeeper/log4j-1.2.16.jar:/usr/share/zookeeper/jline-2.11.jar:/usr/share/zookeeper/jetty-util-6.1.26.jar:/usr/share/zookeeper/jetty-6.1.26.jar:/usr/share/zookeeper/javacc.jar:/usr/share/zookeeper/jackson-mapper-asl-1.9.11.jar:/usr/share/zookeeper/jackson-core-asl-1.9.11.jar:/usr/share/zookeeper/commons-cli-1.2.jar:/usr/src/java/lib/*.jar:/usr/etc/zookeeper" +CLASSPATH="$ZOOCFGDIR:/usr/build/classes:/usr/build/lib/*.jar:/usr/share/zookeeper-3.6.2/lib/audience-annotations-0.5.0.jar:/usr/share/zookeeper-3.6.2/lib/commons-cli-1.2.jar:/usr/share/zookeeper-3.6.2/lib/commons-lang-2.6.jar:/usr/share/zookeeper-3.6.2/lib/jackson-annotations-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-core-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-databind-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/javax.servlet-api-3.1.0.jar:/usr/share/zookeeper-3.6.2/lib/jetty-http-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-io-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-security-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-server-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-servlet-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-util-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jline-2.14.6.jar:/usr/share/zookeeper-3.6.2/lib/json-simple-1.1.1.jar:/usr/share/zookeeper-3.6.2/lib/log4j-1.2.17.jar:/usr/share/zookeeper-3.6.2/lib/metrics-core-3.2.5.jar:/usr/share/zookeeper-3.6.2/lib/netty-buffer-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-codec-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-handler-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-resolver-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-epoll-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-unix-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_common-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_hotspot-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_servlet-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/slf4j-api-1.7.25.jar:/usr/share/zookeep
er-3.6.2/lib/slf4j-log4j12-1.7.25.jar:/usr/share/zookeeper-3.6.2/lib/snappy-java-1.1.7.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-jute-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-prometheus-metrics-3.6.2.jar:/usr/share/zookeeper-3.6.2/etc" ZOOCFG="$ZOOCFGDIR/zoo.cfg" ZOO_LOG_DIR=/var/log/$NAME @@ -213,27 +214,17 @@ GROUP=zookeeper PIDDIR=/var/run/$NAME PIDFILE=$PIDDIR/$NAME.pid SCRIPTNAME=/etc/init.d/$NAME -JAVA=/usr/bin/java +JAVA=/usr/local/jdk-11/bin/java ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain" ZOO_LOG4J_PROP="INFO,ROLLINGFILE" JMXLOCALONLY=false JAVA_OPTS="-Xms{{ '{{' }} cluster.get('xms','128M') {{ '}}' }} \ -Xmx{{ '{{' }} cluster.get('xmx','1G') {{ '}}' }} \ - -Xloggc:/var/log/$NAME/zookeeper-gc.log \ - -XX:+UseGCLogFileRotation \ - -XX:NumberOfGCLogFiles=16 \ - -XX:GCLogFileSize=16M \ + -Xlog:safepoint,gc*=info,age*=debug:file=/var/log/$NAME/zookeeper-gc.log:time,level,tags:filecount=16,filesize=16M -verbose:gc \ - -XX:+PrintGCTimeStamps \ - -XX:+PrintGCDateStamps \ - -XX:+PrintGCDetails - -XX:+PrintTenuringDistribution \ - -XX:+PrintGCApplicationStoppedTime \ - -XX:+PrintGCApplicationConcurrentTime \ - -XX:+PrintSafepointStatistics \ - -XX:+UseParNewGC \ - -XX:+UseConcMarkSweepGC \ --XX:+CMSParallelRemarkEnabled" + -XX:+UseG1GC \ + -Djute.maxbuffer=8388608 \ + -XX:MaxGCPauseMillis=50" ``` Salt init: diff --git a/docs/en/operations/update.md b/docs/en/operations/update.md index 9fa9c44e130..dbcf9ae2b3e 100644 --- a/docs/en/operations/update.md +++ b/docs/en/operations/update.md @@ -15,7 +15,8 @@ $ sudo service clickhouse-server restart If you installed ClickHouse using something other than the recommended `deb` packages, use the appropriate update method. -ClickHouse does not support a distributed update. The operation should be performed consecutively on each separate server. Do not update all the servers on a cluster simultaneously, or the cluster will be unavailable for some time. +!!! note "Note" + You can update multiple servers at once as soon as there is no moment when all replicas of one shard are offline. The upgrade of older version of ClickHouse to specific version: @@ -28,7 +29,3 @@ $ sudo apt-get update $ sudo apt-get install clickhouse-server=xx.yy.a.b clickhouse-client=xx.yy.a.b clickhouse-common-static=xx.yy.a.b $ sudo service clickhouse-server restart ``` - - - - diff --git a/docs/en/sql-reference/aggregate-functions/combinators.md b/docs/en/sql-reference/aggregate-functions/combinators.md index cddef68d49c..259202805d3 100644 --- a/docs/en/sql-reference/aggregate-functions/combinators.md +++ b/docs/en/sql-reference/aggregate-functions/combinators.md @@ -27,7 +27,37 @@ Example 2: `uniqArray(arr)` – Counts the number of unique elements in all ‘a ## -SimpleState {#agg-functions-combinator-simplestate} -If you apply this combinator, the aggregate function returns the same value but with a different type. This is an `SimpleAggregateFunction(...)` that can be stored in a table to work with [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) table engines. +If you apply this combinator, the aggregate function returns the same value but with a different type. This is a [SimpleAggregateFunction(...)](../../sql-reference/data-types/simpleaggregatefunction.md) that can be stored in a table to work with [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) tables. 
+ +**Syntax** + +``` sql +SimpleState(x) +``` + +**Arguments** + +- `x` — Aggregate function parameters. + +**Returned values** + +The value of an aggregate function with the `SimpleAggregateFunction(...)` type. + +**Example** + +Query: + +``` sql +WITH anySimpleState(number) AS c SELECT toTypeName(c), c FROM numbers(1); +``` + +Result: + +``` text +┌─toTypeName(c)────────────────────────┬─c─┐ +│ SimpleAggregateFunction(any, UInt64) │ 0 │ +└──────────────────────────────────────┴───┘ +``` ## -State {#agg-functions-combinator-state} @@ -249,4 +279,3 @@ FROM people └────────┴───────────────────────────┘ ``` - diff --git a/docs/en/sql-reference/aggregate-functions/reference/argmax.md b/docs/en/sql-reference/aggregate-functions/reference/argmax.md index 72aa607a751..0630e2f585e 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/argmax.md +++ b/docs/en/sql-reference/aggregate-functions/reference/argmax.md @@ -6,20 +6,12 @@ toc_priority: 106 Calculates the `arg` value for a maximum `val` value. If there are several different values of `arg` for maximum values of `val`, returns the first of these values encountered. -Tuple version of this function will return the tuple with the maximum `val` value. It is convenient for use with [SimpleAggregateFunction](../../../sql-reference/data-types/simpleaggregatefunction.md). - **Syntax** ``` sql argMax(arg, val) ``` -or - -``` sql -argMax(tuple(arg, val)) -``` - **Arguments** - `arg` — Argument. @@ -29,13 +21,7 @@ argMax(tuple(arg, val)) - `arg` value that corresponds to maximum `val` value. -Type: matches `arg` type. - -For tuple in the input: - -- Tuple `(arg, val)`, where `val` is the maximum value and `arg` is a corresponding value. - -Type: [Tuple](../../../sql-reference/data-types/tuple.md). +Type: matches `arg` type. **Example** @@ -52,15 +38,13 @@ Input table: Query: ``` sql -SELECT argMax(user, salary), argMax(tuple(user, salary), salary), argMax(tuple(user, salary)) FROM salary; +SELECT argMax(user, salary) FROM salary; ``` Result: ``` text -┌─argMax(user, salary)─┬─argMax(tuple(user, salary), salary)─┬─argMax(tuple(user, salary))─┐ -│ director │ ('director',5000) │ ('director',5000) │ -└──────────────────────┴─────────────────────────────────────┴─────────────────────────────┘ +┌─argMax(user, salary)─┐ +│ director │ +└──────────────────────┘ ``` - -[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmax/) diff --git a/docs/en/sql-reference/aggregate-functions/reference/argmin.md b/docs/en/sql-reference/aggregate-functions/reference/argmin.md index 7ddc38cd28a..a259a76b7d7 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/argmin.md +++ b/docs/en/sql-reference/aggregate-functions/reference/argmin.md @@ -6,20 +6,12 @@ toc_priority: 105 Calculates the `arg` value for a minimum `val` value. If there are several different values of `arg` for minimum values of `val`, returns the first of these values encountered. -Tuple version of this function will return the tuple with the minimum `val` value. It is convenient for use with [SimpleAggregateFunction](../../../sql-reference/data-types/simpleaggregatefunction.md). - **Syntax** ``` sql argMin(arg, val) ``` -or - -``` sql -argMin(tuple(arg, val)) -``` - **Arguments** - `arg` — Argument. @@ -29,13 +21,7 @@ argMin(tuple(arg, val)) - `arg` value that corresponds to minimum `val` value. -Type: matches `arg` type. - -For tuple in the input: - -- Tuple `(arg, val)`, where `val` is the minimum value and `arg` is a corresponding value. 
- -Type: [Tuple](../../../sql-reference/data-types/tuple.md). +Type: matches `arg` type. **Example** @@ -52,15 +38,13 @@ Input table: Query: ``` sql -SELECT argMin(user, salary), argMin(tuple(user, salary)) FROM salary; +SELECT argMin(user, salary) FROM salary ``` Result: ``` text -┌─argMin(user, salary)─┬─argMin(tuple(user, salary))─┐ -│ worker │ ('worker',1000) │ -└──────────────────────┴─────────────────────────────┘ +┌─argMin(user, salary)─┐ +│ worker │ +└──────────────────────┘ ``` - -[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmin/) diff --git a/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest.md b/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest.md index dcc665a68af..dd0d59978d1 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest.md +++ b/docs/en/sql-reference/aggregate-functions/reference/quantiletdigest.md @@ -6,7 +6,7 @@ toc_priority: 207 Computes an approximate [quantile](https://en.wikipedia.org/wiki/Quantile) of a numeric data sequence using the [t-digest](https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf) algorithm. -The maximum error is 1%. Memory consumption is `log(n)`, where `n` is a number of values. The result depends on the order of running the query, and is nondeterministic. +Memory consumption is `log(n)`, where `n` is a number of values. The result depends on the order of running the query, and is nondeterministic. The performance of the function is lower than performance of [quantile](../../../sql-reference/aggregate-functions/reference/quantile.md#quantile) or [quantileTiming](../../../sql-reference/aggregate-functions/reference/quantiletiming.md#quantiletiming). In terms of the ratio of State size to precision, this function is much better than `quantile`. diff --git a/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md b/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md index 5b23ea81eae..4983220ed7f 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md +++ b/docs/en/sql-reference/aggregate-functions/reference/uniqhll12.md @@ -26,7 +26,7 @@ Function: - Uses the HyperLogLog algorithm to approximate the number of different argument values. - 212 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). + 2^12 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). - Provides the determinate result (it doesn’t depend on the query processing order). 
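To get a sense of the estimation error described above, the approximate result can be compared with `uniqExact` on synthetic data, for example (an illustrative query; the exact deviation varies with the data):

``` sql
-- Compare the exact and the HyperLogLog-based cardinality estimates
-- on one million rows containing 100000 distinct values.
SELECT
    uniqExact(x) AS exact,
    uniqHLL12(x) AS approximate
FROM
(
    SELECT number % 100000 AS x
    FROM numbers(1000000)
);
```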
diff --git a/docs/en/sql-reference/data-types/date.md b/docs/en/sql-reference/data-types/date.md index 886e93f433c..0cfac4d59fe 100644 --- a/docs/en/sql-reference/data-types/date.md +++ b/docs/en/sql-reference/data-types/date.md @@ -5,7 +5,7 @@ toc_title: Date # Date {#data_type-date} -A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch to the upper threshold defined by a constant at the compilation stage (currently, this is until the year 2106, but the final fully-supported year is 2105). +A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch to the upper threshold defined by a constant at the compilation stage (currently, this is until the year 2149, but the final fully-supported year is 2148). The date value is stored without the time zone. diff --git a/docs/en/sql-reference/data-types/datetime64.md b/docs/en/sql-reference/data-types/datetime64.md index 5cba8315090..1d3725b9fb3 100644 --- a/docs/en/sql-reference/data-types/datetime64.md +++ b/docs/en/sql-reference/data-types/datetime64.md @@ -9,7 +9,7 @@ Allows to store an instant in time, that can be expressed as a calendar date and Tick size (precision): 10-precision seconds -Syntax: +**Syntax:** ``` sql DateTime64(precision, [timezone]) @@ -17,9 +17,11 @@ DateTime64(precision, [timezone]) Internally, stores data as a number of ‘ticks’ since epoch start (1970-01-01 00:00:00 UTC) as Int64. The tick resolution is determined by the precision parameter. Additionally, the `DateTime64` type can store time zone that is the same for the entire column, that affects how the values of the `DateTime64` type values are displayed in text format and how the values specified as strings are parsed (‘2020-01-01 05:00:01.000’). The time zone is not stored in the rows of the table (or in resultset), but is stored in the column metadata. See details in [DateTime](../../sql-reference/data-types/datetime.md). +Supported range from January 1, 1925 till December 31, 2283. + ## Examples {#examples} -**1.** Creating a table with `DateTime64`-type column and inserting data into it: +1. Creating a table with `DateTime64`-type column and inserting data into it: ``` sql CREATE TABLE dt @@ -27,15 +29,15 @@ CREATE TABLE dt `timestamp` DateTime64(3, 'Europe/Moscow'), `event_id` UInt8 ) -ENGINE = TinyLog +ENGINE = TinyLog; ``` ``` sql -INSERT INTO dt Values (1546300800000, 1), ('2019-01-01 00:00:00', 2) +INSERT INTO dt Values (1546300800000, 1), ('2019-01-01 00:00:00', 2); ``` ``` sql -SELECT * FROM dt +SELECT * FROM dt; ``` ``` text @@ -45,13 +47,13 @@ SELECT * FROM dt └─────────────────────────┴──────────┘ ``` -- When inserting datetime as an integer, it is treated as an appropriately scaled Unix Timestamp (UTC). `1546300800000` (with precision 3) represents `'2019-01-01 00:00:00'` UTC. However, as `timestamp` column has `Europe/Moscow` (UTC+3) timezone specified, when outputting as a string the value will be shown as `'2019-01-01 03:00:00'` +- When inserting datetime as an integer, it is treated as an appropriately scaled Unix Timestamp (UTC). `1546300800000` (with precision 3) represents `'2019-01-01 00:00:00'` UTC. However, as `timestamp` column has `Europe/Moscow` (UTC+3) timezone specified, when outputting as a string the value will be shown as `'2019-01-01 03:00:00'`. - When inserting string value as datetime, it is treated as being in column timezone. 
`'2019-01-01 00:00:00'` will be treated as being in `Europe/Moscow` timezone and stored as `1546290000000`. -**2.** Filtering on `DateTime64` values +2. Filtering on `DateTime64` values ``` sql -SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow') +SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow'); ``` ``` text @@ -60,12 +62,12 @@ SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europ └─────────────────────────┴──────────┘ ``` -Unlike `DateTime`, `DateTime64` values are not converted from `String` automatically +Unlike `DateTime`, `DateTime64` values are not converted from `String` automatically. -**3.** Getting a time zone for a `DateTime64`-type value: +3. Getting a time zone for a `DateTime64`-type value: ``` sql -SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS x +SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS x; ``` ``` text @@ -74,13 +76,13 @@ SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS └─────────────────────────┴────────────────────────────────┘ ``` -**4.** Timezone conversion +4. Timezone conversion ``` sql SELECT toDateTime64(timestamp, 3, 'Europe/London') as lon_time, toDateTime64(timestamp, 3, 'Europe/Moscow') as mos_time -FROM dt +FROM dt; ``` ``` text @@ -90,7 +92,7 @@ FROM dt └─────────────────────────┴─────────────────────────┘ ``` -## See Also {#see-also} +**See Also** - [Type conversion functions](../../sql-reference/functions/type-conversion-functions.md) - [Functions for working with dates and times](../../sql-reference/functions/date-time-functions.md) diff --git a/docs/en/sql-reference/data-types/simpleaggregatefunction.md b/docs/en/sql-reference/data-types/simpleaggregatefunction.md index 244779c5ca8..f3a245e9627 100644 --- a/docs/en/sql-reference/data-types/simpleaggregatefunction.md +++ b/docs/en/sql-reference/data-types/simpleaggregatefunction.md @@ -2,6 +2,8 @@ `SimpleAggregateFunction(name, types_of_arguments…)` data type stores current value of the aggregate function, and does not store its full state as [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md) does. This optimization can be applied to functions for which the following property holds: the result of applying a function `f` to a row set `S1 UNION ALL S2` can be obtained by applying `f` to parts of the row set separately, and then again applying `f` to the results: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. This property guarantees that partial aggregation results are enough to compute the combined one, so we don’t have to store and process any extra data. +The common way to produce an aggregate function value is by calling the aggregate function with the [-SimpleState](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-simplestate) suffix. 
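For example, a minimal sketch of this pattern (the table and column names below are illustrative, not taken from the documentation):

``` sql
-- Keep only the current maximum per key instead of a full aggregation state.
CREATE TABLE totals
(
    key UInt64,
    max_value SimpleAggregateFunction(max, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY key;

-- The -SimpleState combinator yields values of the matching SimpleAggregateFunction type.
INSERT INTO totals
SELECT
    number % 3 AS key,
    maxSimpleState(number)
FROM numbers(10)
GROUP BY key;

-- Re-apply max() on read to merge rows that may be stored in different parts.
SELECT key, max(max_value)
FROM totals
GROUP BY key
ORDER BY key;
```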
+ The following aggregate functions are supported: - [`any`](../../sql-reference/aggregate-functions/reference/any.md#agg_function-any) @@ -18,8 +20,6 @@ The following aggregate functions are supported: - [`sumMap`](../../sql-reference/aggregate-functions/reference/summap.md#agg_functions-summap) - [`minMap`](../../sql-reference/aggregate-functions/reference/minmap.md#agg_functions-minmap) - [`maxMap`](../../sql-reference/aggregate-functions/reference/maxmap.md#agg_functions-maxmap) -- [`argMin`](../../sql-reference/aggregate-functions/reference/argmin.md) -- [`argMax`](../../sql-reference/aggregate-functions/reference/argmax.md) !!! note "Note" diff --git a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md index dbf2fa67ac5..f22d2a0b59e 100644 --- a/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md +++ b/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md @@ -159,14 +159,14 @@ Configuration fields: | Tag | Description | Required | |------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------| | `name` | Column name. | Yes | -| `type` | ClickHouse data type.
ClickHouse tries to cast value from dictionary to the specified data type. For example, for MySQL, the field might be `TEXT`, `VARCHAR`, or `BLOB` in the MySQL source table, but it can be uploaded as `String` in ClickHouse.
[Nullable](../../../sql-reference/data-types/nullable.md) is not supported. | Yes | -| `null_value` | Default value for a non-existing element.
In the example, it is an empty string. You cannot use `NULL` in this field. | Yes | +| `type` | ClickHouse data type.
ClickHouse tries to cast the value from the dictionary to the specified data type. For example, for MySQL, the field might be `TEXT`, `VARCHAR`, or `BLOB` in the MySQL source table, but it can be uploaded as `String` in ClickHouse.
[Nullable](../../../sql-reference/data-types/nullable.md) is currently supported for [Flat](external-dicts-dict-layout.md#flat), [Hashed](external-dicts-dict-layout.md#dicts-external_dicts_dict_layout-hashed), [ComplexKeyHashed](external-dicts-dict-layout.md#complex-key-hashed), [Direct](external-dicts-dict-layout.md#direct), [ComplexKeyDirect](external-dicts-dict-layout.md#complex-key-direct), [RangeHashed](external-dicts-dict-layout.md#range-hashed), [Polygon](external-dicts-dict-polygon.md) dictionaries. In [Cache](external-dicts-dict-layout.md#cache), [ComplexKeyCache](external-dicts-dict-layout.md#complex-key-cache), [SSDCache](external-dicts-dict-layout.md#ssd-cache), [SSDComplexKeyCache](external-dicts-dict-layout.md#complex-key-ssd-cache), [IPTrie](external-dicts-dict-layout.md#ip-trie) dictionaries `Nullable` types are not supported. | Yes | +| `null_value` | Default value for a non-existing element.
In the example, it is an empty string. The [NULL](../../syntax.md#null-literal) value can be used only for the `Nullable` types (see the previous row with the type description). | Yes | | `expression` | [Expression](../../../sql-reference/syntax.md#syntax-expressions) that ClickHouse executes on the value.
The expression can be a column name in the remote SQL database. Thus, you can use it to create an alias for the remote column.

Default value: no expression. | No | | `hierarchical` | If `true`, the attribute contains the value of a parent key for the current key. See [Hierarchical Dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-hierarchical.md).

Default value: `false`. | No | | `injective` | Flag that shows whether the `id -> attribute` image is [injective](https://en.wikipedia.org/wiki/Injective_function).
If `true`, ClickHouse can automatically place after the `GROUP BY` clause the requests to dictionaries with injection. Usually it significantly reduces the amount of such requests.

Default value: `false`. | No | | `is_object_id` | Flag that shows whether the query is executed for a MongoDB document by `ObjectID`.

Default value: `false`. | No | -## See Also {#see-also} +**See Also** - [Functions for working with external dictionaries](../../../sql-reference/functions/ext-dict-functions.md). diff --git a/docs/en/sql-reference/dictionaries/index.md b/docs/en/sql-reference/dictionaries/index.md index fa127dab103..22f4182a1c0 100644 --- a/docs/en/sql-reference/dictionaries/index.md +++ b/docs/en/sql-reference/dictionaries/index.md @@ -10,8 +10,6 @@ A dictionary is a mapping (`key -> attributes`) that is convenient for various t ClickHouse supports special functions for working with dictionaries that can be used in queries. It is easier and more efficient to use dictionaries with functions than a `JOIN` with reference tables. -[NULL](../../sql-reference/syntax.md#null-literal) values can’t be stored in a dictionary. - ClickHouse supports: - [Built-in dictionaries](../../sql-reference/dictionaries/internal-dicts.md#internal_dicts) with a specific [set of functions](../../sql-reference/functions/ym-dict-functions.md). diff --git a/docs/en/sql-reference/functions/bitmap-functions.md b/docs/en/sql-reference/functions/bitmap-functions.md index 7ec400949e9..4875532605e 100644 --- a/docs/en/sql-reference/functions/bitmap-functions.md +++ b/docs/en/sql-reference/functions/bitmap-functions.md @@ -33,7 +33,7 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res); ``` text ┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐ -│  │ AggregateFunction(groupBitmap, UInt8) │ +│ │ AggregateFunction(groupBitmap, UInt8) │ └─────┴──────────────────────────────────────────────┘ ``` diff --git a/docs/en/sql-reference/functions/date-time-functions.md b/docs/en/sql-reference/functions/date-time-functions.md index 6b26dae4546..b0636b0305e 100644 --- a/docs/en/sql-reference/functions/date-time-functions.md +++ b/docs/en/sql-reference/functions/date-time-functions.md @@ -147,6 +147,9 @@ Result: └────────────────┘ ``` +!!! attention "Attention" + The return type `toStartOf*` functions described below is `Date` or `DateTime`. Though these functions can take `DateTime64` as an argument, passing them a `DateTime64` that is out of normal range (years 1970 - 2105) will give incorrect result. + ## toStartOfYear {#tostartofyear} Rounds down a date or date with time to the first day of the year. @@ -388,13 +391,13 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d Truncates date and time data to the specified part of date. -**Syntax** +**Syntax** ``` sql date_trunc(unit, value[, timezone]) ``` -Alias: `dateTrunc`. +Alias: `dateTrunc`. **Arguments** @@ -457,13 +460,13 @@ Result: Adds the time interval or date interval to the provided date or date with time. -**Syntax** +**Syntax** ``` sql date_add(unit, value, date) ``` -Aliases: `dateAdd`, `DATE_ADD`. +Aliases: `dateAdd`, `DATE_ADD`. **Arguments** @@ -478,7 +481,7 @@ Aliases: `dateAdd`, `DATE_ADD`. - `month` - `quarter` - `year` - + - `value` — Value of interval to add. [Int](../../sql-reference/data-types/int-uint.md). - `date` — The date or date with time to which `value` is added. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). @@ -583,7 +586,7 @@ Aliases: `dateSub`, `DATE_SUB`. - `month` - `quarter` - `year` - + - `value` — Value of interval to subtract. [Int](../../sql-reference/data-types/int-uint.md). - `date` — The date or date with time from which `value` is subtracted. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). 
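An illustrative call of `date_sub` with the arguments described above (a minimal example, not part of the patch):

``` sql
-- Subtract an interval of 3 years from a fixed date; returns 2015-01-01.
SELECT date_sub(YEAR, 3, toDate('2018-01-01')) AS earlier;
```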
@@ -613,16 +616,16 @@ Result: Adds the specified time value with the provided date or date time value. -**Syntax** +**Syntax** ``` sql timestamp_add(date, INTERVAL value unit) ``` -Aliases: `timeStampAdd`, `TIMESTAMP_ADD`. +Aliases: `timeStampAdd`, `TIMESTAMP_ADD`. **Arguments** - + - `date` — Date or date with time. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). - `value` — Value of interval to add. [Int](../../sql-reference/data-types/int-uint.md). - `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md). @@ -642,7 +645,7 @@ Aliases: `timeStampAdd`, `TIMESTAMP_ADD`. Date or date with time with the specified `value` expressed in `unit` added to `date`. Type: [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). - + **Example** Query: @@ -663,13 +666,13 @@ Result: Subtracts the time interval from the provided date or date with time. -**Syntax** +**Syntax** ``` sql timestamp_sub(unit, value, date) ``` -Aliases: `timeStampSub`, `TIMESTAMP_SUB`. +Aliases: `timeStampSub`, `TIMESTAMP_SUB`. **Arguments** @@ -684,7 +687,7 @@ Aliases: `timeStampSub`, `TIMESTAMP_SUB`. - `month` - `quarter` - `year` - + - `value` — Value of interval to subtract. [Int](../../sql-reference/data-types/int-uint.md). - `date` — Date or date with time. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). @@ -709,12 +712,12 @@ Result: │ 2018-07-18 01:02:03 │ └──────────────────────────────────────────────────────────────┘ ``` - + ## now {#now} -Returns the current date and time. +Returns the current date and time. -**Syntax** +**Syntax** ``` sql now([timezone]) @@ -1069,4 +1072,3 @@ Result: │ 2020-01-01 │ └────────────────────────────────────┘ ``` - diff --git a/docs/en/sql-reference/functions/hash-functions.md b/docs/en/sql-reference/functions/hash-functions.md index c60067b06af..0ea4cfd6fbe 100644 --- a/docs/en/sql-reference/functions/hash-functions.md +++ b/docs/en/sql-reference/functions/hash-functions.md @@ -437,13 +437,13 @@ A [FixedString(16)](../../sql-reference/data-types/fixedstring.md) data type has **Example** ``` sql -SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type; +SELECT hex(murmurHash3_128('example_string')) AS MurmurHash3, toTypeName(MurmurHash3) AS type; ``` ``` text -┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q │ FixedString(16) │ -└──────────────────┴─────────────────┘ +┌─MurmurHash3──────────────────────┬─type───┐ +│ 368A1A311CB7342253354B548E7E7E71 │ String │ +└──────────────────────────────────┴────────┘ ``` ## xxHash32, xxHash64 {#hash-functions-xxhash32} diff --git a/docs/en/sql-reference/functions/json-functions.md b/docs/en/sql-reference/functions/json-functions.md index ca6ef684faf..d545a0ae4e6 100644 --- a/docs/en/sql-reference/functions/json-functions.md +++ b/docs/en/sql-reference/functions/json-functions.md @@ -16,46 +16,60 @@ The following assumptions are made: ## visitParamHas(params, name) {#visitparamhasparams-name} -Checks whether there is a field with the ‘name’ name. +Checks whether there is a field with the `name` name. + +Alias: `simpleJSONHas`. ## visitParamExtractUInt(params, name) {#visitparamextractuintparams-name} -Parses UInt64 from the value of the field named ‘name’. If this is a string field, it tries to parse a number from the beginning of the string. 
If the field doesn’t exist, or it exists but doesn’t contain a number, it returns 0. +Parses UInt64 from the value of the field named `name`. If this is a string field, it tries to parse a number from the beginning of the string. If the field doesn’t exist, or it exists but doesn’t contain a number, it returns 0. + +Alias: `simpleJSONExtractUInt`. ## visitParamExtractInt(params, name) {#visitparamextractintparams-name} The same as for Int64. +Alias: `simpleJSONExtractInt`. + ## visitParamExtractFloat(params, name) {#visitparamextractfloatparams-name} The same as for Float64. +Alias: `simpleJSONExtractFloat`. + ## visitParamExtractBool(params, name) {#visitparamextractboolparams-name} Parses a true/false value. The result is UInt8. +Alias: `simpleJSONExtractBool`. + ## visitParamExtractRaw(params, name) {#visitparamextractrawparams-name} Returns the value of a field, including separators. +Alias: `simpleJSONExtractRaw`. + Examples: ``` sql -visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"' -visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}' +visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"'; +visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}'; ``` ## visitParamExtractString(params, name) {#visitparamextractstringparams-name} Parses the string in double quotes. The value is unescaped. If unescaping failed, it returns an empty string. +Alias: `simpleJSONExtractString`. + Examples: ``` sql -visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0' -visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺' -visitParamExtractString('{"abc":"\\u263"}', 'abc') = '' -visitParamExtractString('{"abc":"hello}', 'abc') = '' +visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'; +visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺'; +visitParamExtractString('{"abc":"\\u263"}', 'abc') = ''; +visitParamExtractString('{"abc":"hello}', 'abc') = ''; ``` There is currently no support for code points in the format `\uXXXX\uYYYY` that are not from the basic multilingual plane (they are converted to CESU-8 instead of UTF-8). diff --git a/docs/en/sql-reference/functions/other-functions.md b/docs/en/sql-reference/functions/other-functions.md index c40aa3d1eae..9d7743e186f 100644 --- a/docs/en/sql-reference/functions/other-functions.md +++ b/docs/en/sql-reference/functions/other-functions.md @@ -1192,6 +1192,109 @@ SELECT defaultValueOfTypeName('Nullable(Int8)') └──────────────────────────────────────────┘ ``` +## indexHint {#indexhint} +The function is intended for debugging and introspection purposes. The function ignores it's argument and always returns 1. Arguments are not even evaluated. + +But for the purpose of index analysis, the argument of this function is analyzed as if it was present directly without being wrapped inside `indexHint` function. This allows to select data in index ranges by the corresponding condition but without further filtering by this condition. The index in ClickHouse is sparse and using `indexHint` will yield more data than specifying the same condition directly. + +**Syntax** + +```sql +SELECT * FROM table WHERE indexHint() +``` + +**Returned value** + +1. Type: [Uint8](https://clickhouse.yandex/docs/en/data_types/int_uint/#diapazony-uint). + +**Example** + +Here is the example of test data from the table [ontime](../../getting-started/example-datasets/ontime.md). 
+ +Input table: + +```sql +SELECT count() FROM ontime +``` + +```text +┌─count()─┐ +│ 4276457 │ +└─────────┘ +``` + +The table has indexes on the fields `(FlightDate, (Year, FlightDate))`. + +Create a query, where the index is not used. + +Query: + +```sql +SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k +``` + +ClickHouse processed the entire table (`Processed 4.28 million rows`). + +Result: + +```text +┌──────────k─┬─count()─┐ +│ 2017-01-01 │ 13970 │ +│ 2017-01-02 │ 15882 │ +........................ +│ 2017-09-28 │ 16411 │ +│ 2017-09-29 │ 16384 │ +│ 2017-09-30 │ 12520 │ +└────────────┴─────────┘ +``` + +To apply the index, select a specific date. + +Query: + +```sql +SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k +``` + +By using the index, ClickHouse processed a significantly smaller number of rows (`Processed 32.74 thousand rows`). + +Result: + +```text +┌──────────k─┬─count()─┐ +│ 2017-09-15 │ 16428 │ +└────────────┴─────────┘ +``` + +Now wrap the expression `k = '2017-09-15'` into `indexHint` function. + +Query: + +```sql +SELECT + FlightDate AS k, + count() +FROM ontime +WHERE indexHint(k = '2017-09-15') +GROUP BY k +ORDER BY k ASC +``` + +ClickHouse used the index in the same way as the previous time (`Processed 32.74 thousand rows`). +The expression `k = '2017-09-15'` was not used when generating the result. +In examle the `indexHint` function allows to see adjacent dates. + +Result: + +```text +┌──────────k─┬─count()─┐ +│ 2017-09-14 │ 7071 │ +│ 2017-09-15 │ 16428 │ +│ 2017-09-16 │ 1077 │ +│ 2017-09-30 │ 8167 │ +└────────────┴─────────┘ +``` + ## replicate {#other-functions-replicate} Creates an array with a single value. diff --git a/docs/en/sql-reference/functions/string-functions.md b/docs/en/sql-reference/functions/string-functions.md index 3d3caaf6e23..85570cb408d 100644 --- a/docs/en/sql-reference/functions/string-functions.md +++ b/docs/en/sql-reference/functions/string-functions.md @@ -649,3 +649,65 @@ Result: - [List of XML and HTML character entity references](https://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references) + +## extractTextFromHTML {#extracttextfromhtml} + +A function to extract text from HTML or XHTML. +It does not necessarily 100% conform to any of the HTML, XML or XHTML standards, but the implementation is reasonably accurate and it is fast. The rules are the following: + +1. Comments are skipped. Example: ``. Comment must end with `-->`. Nested comments are not possible. +Note: constructions like `` and `` are not valid comments in HTML but they are skipped by other rules. +2. CDATA is pasted verbatim. Note: CDATA is XML/XHTML specific. But it is processed for "best-effort" approach. +3. `script` and `style` elements are removed with all their content. Note: it is assumed that closing tag cannot appear inside content. For example, in JS string literal has to be escaped like `"<\/script>"`. +Note: comments and CDATA are possible inside `script` or `style` - then closing tags are not searched inside CDATA. Example: `]]>`. But they are still searched inside comments. Sometimes it becomes complicated: ` var y = "-->"; alert(x + y);` +Note: `script` and `style` can be the names of XML namespaces - then they are not treated like usual `script` or `style` elements. Example: `Hello`. +Note: whitespaces are possible after closing tag name: `` but not before: `< / script>`. +4. Other tags or tag-like elements are skipped without inner content. 
Example: `<a>.</a>`
+Note: it is expected that this HTML is illegal: `<a test=">"></a>`
+Note: it also skips something like tags: `<>`, `<!>`, etc.
+Note: tag without end is skipped to the end of input: `<hello `
+5. HTML and XML entities are not decoded. They must be processed by a separate function.
+6. Whitespaces in the text are collapsed or inserted by specific rules.
+    - Whitespaces at the beginning and at the end are removed.
+    - Consecutive whitespaces are collapsed.
+    - But if the text is separated by other elements and there is no whitespace, it is inserted.
+    - It may cause unnatural examples: `Hello<b>world</b>`, `Hello<!-- -->world` - there is no whitespace in HTML, but the function inserts it. Also consider: `Hello<p>world</p>`, `Hello<br>world`. This behavior is reasonable for data analysis, e.g. to convert HTML to a bag of words.
+7. Also note that correct handling of whitespaces requires the support of `<pre></pre>` and CSS `display` and `white-space` properties.
+
+**Syntax**
+
+``` sql
+extractTextFromHTML(x)
+```
+
+**Arguments**
+
+-   `x` — input text. [String](../../sql-reference/data-types/string.md). 
+
+**Returned value**
+
+-   Extracted text.
+
+Type: [String](../../sql-reference/data-types/string.md).
+
+**Example**
+
+The first example contains several tags and a comment and also shows whitespace processing.
+The second example shows `CDATA` and `script` tag processing.
+In the third example, text is extracted from the full HTML response received by the [url](../../sql-reference/table-functions/url.md) function.
+
+Query:
+
+``` sql
+SELECT extractTextFromHTML(' <p> A text <i>with</i><b>tags</b>. <!-- comments --> </p> 
'); +SELECT extractTextFromHTML('CDATA]]> '); +SELECT extractTextFromHTML(html) FROM url('http://www.donothingfor2minutes.com/', RawBLOB, 'html String'); +``` + +Result: + +``` text +A text with tags . +The content within CDATA +Do Nothing for 2 Minutes 2:00   +``` diff --git a/docs/en/sql-reference/statements/alter/column.md b/docs/en/sql-reference/statements/alter/column.md index 3ece30be5b8..d661bd4cd59 100644 --- a/docs/en/sql-reference/statements/alter/column.md +++ b/docs/en/sql-reference/statements/alter/column.md @@ -74,6 +74,9 @@ Deletes the column with the name `name`. If the `IF EXISTS` clause is specified, Deletes data from the file system. Since this deletes entire files, the query is completed almost instantly. +!!! warning "Warning" + You can’t delete a column if it is referenced by [materialized view](../../../sql-reference/statements/create/view.md#materialized). Otherwise, it returns an error. + Example: ``` sql @@ -180,7 +183,7 @@ ALTER TABLE table_name MODIFY column_name REMOVE property; ALTER TABLE table_with_ttl MODIFY COLUMN column_ttl REMOVE TTL; ``` -## See Also +**See Also** - [REMOVE TTL](ttl.md). diff --git a/docs/en/sql-reference/statements/alter/partition.md b/docs/en/sql-reference/statements/alter/partition.md index f7183ba525c..b22f89928b9 100644 --- a/docs/en/sql-reference/statements/alter/partition.md +++ b/docs/en/sql-reference/statements/alter/partition.md @@ -16,7 +16,7 @@ The following operations with [partitions](../../../engines/table-engines/merget - [CLEAR COLUMN IN PARTITION](#alter_clear-column-partition) — Resets the value of a specified column in a partition. - [CLEAR INDEX IN PARTITION](#alter_clear-index-partition) — Resets the specified secondary index in a partition. - [FREEZE PARTITION](#alter_freeze-partition) — Creates a backup of a partition. -- [FETCH PARTITION](#alter_fetch-partition) — Downloads a partition from another server. +- [FETCH PARTITION\|PART](#alter_fetch-partition) — Downloads a part or partition from another server. - [MOVE PARTITION\|PART](#alter_move-partition) — Move partition/data part to another disk or volume. @@ -88,12 +88,10 @@ Read more about setting the partition expression in a section [How to specify th This query is replicated. The replica-initiator checks whether there is data in the `detached` directory. If data exists, the query checks its integrity. If everything is correct, the query adds the data to the table. -If the non-initiator replica, receiving the attach command, finds the part with the correct checksums in its own -`detached` folder, it attaches the data without fetching it from other replicas. +If the non-initiator replica, receiving the attach command, finds the part with the correct checksums in its own `detached` folder, it attaches the data without fetching it from other replicas. If there is no part with the correct checksums, the data is downloaded from any replica having the part. -You can put data to the `detached` directory on one replica and use the `ALTER ... ATTACH` query to add it to the -table on all replicas. +You can put data to the `detached` directory on one replica and use the `ALTER ... ATTACH` query to add it to the table on all replicas. ## ATTACH PARTITION FROM {#alter_attach-partition-from} @@ -101,8 +99,8 @@ table on all replicas. ALTER TABLE table2 ATTACH PARTITION partition_expr FROM table1 ``` -This query copies the data partition from the `table1` to `table2`. -Note that data won't be deleted neither from `table1` nor from `table2`. 
+This query copies the data partition from `table1` to `table2`. +Note that data will be deleted neither from `table1` nor from `table2`. For the query to run successfully, the following conditions must be met: @@ -198,29 +196,35 @@ ALTER TABLE table_name CLEAR INDEX index_name IN PARTITION partition_expr The query works similar to `CLEAR COLUMN`, but it resets an index instead of a column data. -## FETCH PARTITION {#alter_fetch-partition} +## FETCH PARTITION|PART {#alter_fetch-partition} ``` sql -ALTER TABLE table_name FETCH PARTITION partition_expr FROM 'path-in-zookeeper' +ALTER TABLE table_name FETCH PARTITION|PART partition_expr FROM 'path-in-zookeeper' ``` Downloads a partition from another server. This query only works for the replicated tables. The query does the following: -1. Downloads the partition from the specified shard. In ‘path-in-zookeeper’ you must specify a path to the shard in ZooKeeper. +1. Downloads the partition|part from the specified shard. In ‘path-in-zookeeper’ you must specify a path to the shard in ZooKeeper. 2. Then the query puts the downloaded data to the `detached` directory of the `table_name` table. Use the [ATTACH PARTITION\|PART](#alter_attach-partition) query to add the data to the table. For example: +1. FETCH PARTITION ``` sql ALTER TABLE users FETCH PARTITION 201902 FROM '/clickhouse/tables/01-01/visits'; ALTER TABLE users ATTACH PARTITION 201902; ``` +2. FETCH PART +``` sql +ALTER TABLE users FETCH PART 201901_2_2_0 FROM '/clickhouse/tables/01-01/visits'; +ALTER TABLE users ATTACH PART 201901_2_2_0; +``` Note that: -- The `ALTER ... FETCH PARTITION` query isn’t replicated. It places the partition to the `detached` directory only on the local server. +- The `ALTER ... FETCH PARTITION|PART` query isn’t replicated. It places the part or partition to the `detached` directory only on the local server. - The `ALTER TABLE ... ATTACH` query is replicated. It adds the data to all replicas. The data is added to one of the replicas from the `detached` directory, and to the others - from neighboring replicas. Before downloading, the system checks if the partition exists and the table structure matches. The most appropriate replica is selected automatically from the healthy replicas. diff --git a/docs/en/sql-reference/statements/alter/ttl.md b/docs/en/sql-reference/statements/alter/ttl.md index aa7ee838e10..9cd63d3b8fe 100644 --- a/docs/en/sql-reference/statements/alter/ttl.md +++ b/docs/en/sql-reference/statements/alter/ttl.md @@ -79,7 +79,7 @@ The `TTL` is no longer there, so the second row is not deleted: └───────────────────────┴─────────┴──────────────┘ ``` -### See Also +**See Also** - More about the [TTL-expression](../../../sql-reference/statements/create/table.md#ttl-expression). - Modify column [with TTL](../../../sql-reference/statements/alter/column.md#alter_modify-column). diff --git a/docs/en/sql-reference/statements/attach.md b/docs/en/sql-reference/statements/attach.md index ffb577a8839..01783e9cb2f 100644 --- a/docs/en/sql-reference/statements/attach.md +++ b/docs/en/sql-reference/statements/attach.md @@ -5,13 +5,14 @@ toc_title: ATTACH # ATTACH Statement {#attach} -This query is exactly the same as [CREATE](../../sql-reference/statements/create/table.md), but +Attaches the table, for example, when moving a database to another server. -- Instead of the word `CREATE` it uses the word `ATTACH`. 
-- The query does not create data on the disk, but assumes that data is already in the appropriate places, and just adds information about the table to the server. - After executing an ATTACH query, the server will know about the existence of the table. +The query does not create data on the disk, but assumes that data is already in the appropriate places, and just adds information about the table to the server. After executing an `ATTACH` query, the server will know about the existence of the table. -If the table was previously detached ([DETACH](../../sql-reference/statements/detach.md)), meaning that its structure is known, you can use shorthand without defining the structure. +If the table was previously detached ([DETACH](../../sql-reference/statements/detach.md)) query, meaning that its structure is known, you can use shorthand without defining the structure. + +## Syntax Forms {#syntax-forms} +### Attach Existing Table {#attach-existing-table} ``` sql ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster] @@ -21,4 +22,38 @@ This query is used when starting the server. The server stores table metadata as If the table was detached permanently, it won't be reattached at the server start, so you need to use `ATTACH` query explicitly. -[Original article](https://clickhouse.tech/docs/en/sql-reference/statements/attach/) +### Сreate New Table And Attach Data {#create-new-table-and-attach-data} + +**With specify path to table data** + +```sql +ATTACH TABLE name FROM 'path/to/data/' (col1 Type1, ...) +``` + +It creates new table with provided structure and attaches table data from provided directory in `user_files`. + +**Example** + +Query: + +```sql +DROP TABLE IF EXISTS test; +INSERT INTO TABLE FUNCTION file('01188_attach/test/data.TSV', 'TSV', 's String, n UInt8') VALUES ('test', 42); +ATTACH TABLE test FROM '01188_attach/test' (s String, n UInt8) ENGINE = File(TSV); +SELECT * FROM test; +``` +Result: + +```sql +┌─s────┬──n─┐ +│ test │ 42 │ +└──────┴────┘ +``` + +**With specify table UUID** (Only for `Atomic` database) + +```sql +ATTACH TABLE name UUID '' (col1 Type1, ...) +``` + +It creates new table with provided structure and attaches data from table with the specified UUID. \ No newline at end of file diff --git a/docs/en/sql-reference/statements/check-table.md b/docs/en/sql-reference/statements/check-table.md index 450447acaf8..65e6238ebbc 100644 --- a/docs/en/sql-reference/statements/check-table.md +++ b/docs/en/sql-reference/statements/check-table.md @@ -30,9 +30,36 @@ Performed over the tables with another table engines causes an exception. Engines from the `*Log` family don’t provide automatic data recovery on failure. Use the `CHECK TABLE` query to track data loss in a timely manner. -For `MergeTree` family engines, the `CHECK TABLE` query shows a check status for every individual data part of a table on the local server. +## Checking the MergeTree Family Tables {#checking-mergetree-tables} -**If the data is corrupted** +For `MergeTree` family engines, if [check_query_single_value_result](../../operations/settings/settings.md#check_query_single_value_result) = 0, the `CHECK TABLE` query shows a check status for every individual data part of a table on the local server. 
+ +```sql +SET check_query_single_value_result = 0; +CHECK TABLE test_table; +``` + +```text +┌─part_path─┬─is_passed─┬─message─┐ +│ all_1_4_1 │ 1 │ │ +│ all_1_4_2 │ 1 │ │ +└───────────┴───────────┴─────────┘ +``` + +If `check_query_single_value_result` = 0, the `CHECK TABLE` query shows the general table check status. + +```sql +SET check_query_single_value_result = 1; +CHECK TABLE test_table; +``` + +```text +┌─result─┐ +│ 1 │ +└────────┘ +``` + +## If the Data Is Corrupted {#if-data-is-corrupted} If the table is corrupted, you can copy the non-corrupted data to another table. To do this: diff --git a/docs/en/sql-reference/statements/create/row-policy.md b/docs/en/sql-reference/statements/create/row-policy.md index cbe639c6fc5..5a1fa218fad 100644 --- a/docs/en/sql-reference/statements/create/row-policy.md +++ b/docs/en/sql-reference/statements/create/row-policy.md @@ -5,39 +5,81 @@ toc_title: ROW POLICY # CREATE ROW POLICY {#create-row-policy-statement} -Creates [filters for rows](../../../operations/access-rights.md#row-policy-management), which a user can read from a table. +Creates a [row policy](../../../operations/access-rights.md#row-policy-management), i.e. a filter used to determine which rows a user can read from a table. Syntax: ``` sql CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name1 [ON CLUSTER cluster_name1] ON [db1.]table1 [, policy_name2 [ON CLUSTER cluster_name2] ON [db2.]table2 ...] + [FOR SELECT] USING condition [AS {PERMISSIVE | RESTRICTIVE}] - [FOR SELECT] - [USING condition] [TO {role1 [, role2 ...] | ALL | ALL EXCEPT role1 [, role2 ...]}] ``` -`ON CLUSTER` clause allows creating row policies on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md). +## USING Clause {#create-row-policy-using} -## AS Clause {#create-row-policy-as} - -Using this section you can create permissive or restrictive policies. - -Permissive policy grants access to rows. Permissive policies which apply to the same table are combined together using the boolean `OR` operator. Policies are permissive by default. - -Restrictive policy restricts access to rows. Restrictive policies which apply to the same table are combined together using the boolean `AND` operator. - -Restrictive policies apply to rows that passed the permissive filters. If you set restrictive policies but no permissive policies, the user can’t get any row from the table. +Allows to specify a condition to filter rows. An user will see a row if the condition is calculated to non-zero for the row. ## TO Clause {#create-row-policy-to} -In the section `TO` you can provide a mixed list of roles and users, for example, `CREATE ROW POLICY ... TO accountant, john@localhost`. +In the section `TO` you can provide a list of users and roles this policy should work for. For example, `CREATE ROW POLICY ... TO accountant, john@localhost`. -Keyword `ALL` means all the ClickHouse users including current user. Keywords `ALL EXCEPT` allow to exclude some users from the all users list, for example, `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost` +Keyword `ALL` means all the ClickHouse users including current user. Keyword `ALL EXCEPT` allow to exclude some users from the all users list, for example, `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost` -## Examples {#examples} +!!! note "Note" + If there are no row policies defined for a table then any user can `SELECT` all the row from the table. 
Defining one or more row policies for the table makes the access to the table depending on the row policies no matter if those row policies are defined for the current user or not. For example, the following policy + + `CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter` -`CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO accountant, john@localhost` + forbids the users `mira` and `peter` to see the rows with `b != 1`, and any non-mentioned user (e.g., the user `paul`) will see no rows from `mydb.table1` at all. + + If that's not desirable it can't be fixed by adding one more row policy, like the following: -`CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO ALL EXCEPT mira` + `CREATE ROW POLICY pol2 ON mydb.table1 USING 1 TO ALL EXCEPT mira, peter` + +## AS Clause {#create-row-policy-as} + +It's allowed to have more than one policy enabled on the same table for the same user at the one time. So we need a way to combine the conditions from multiple policies. + +By default policies are combined using the boolean `OR` operator. For example, the following policies + +``` sql +CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter +CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 TO peter, antonio +``` + +enables the user `peter` to see rows with either `b=1` or `c=2`. + +The `AS` clause specifies how policies should be combined with other policies. Policies can be either permissive or restrictive. By default policies are permissive, which means they are combined using the boolean `OR` operator. + +A policy can be defined as restrictive as an alternative. Restrictive policies are combined using the boolean `AND` operator. + +Here is the general formula: + +``` +row_is_visible = (one or more of the permissive policies' conditions are non-zero) AND + (all of the restrictive policies's conditions are non-zero) +``` + +For example, the following policies + +``` sql +CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter +CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio +``` + +enables the user `peter` to see rows only if both `b=1` AND `c=2`. + +## ON CLUSTER Clause {#create-row-policy-on-cluster} + +Allows creating row policies on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md). + + +## Examples + +`CREATE ROW POLICY filter1 ON mydb.mytable USING a<1000 TO accountant, john@localhost` + +`CREATE ROW POLICY filter2 ON mydb.mytable USING a<1000 AND b=5 TO ALL EXCEPT mira` + +`CREATE ROW POLICY filter3 ON mydb.mytable USING 1 TO admin` diff --git a/docs/en/sql-reference/statements/create/table.md b/docs/en/sql-reference/statements/create/table.md index b98888f7bfa..5f1f0151350 100644 --- a/docs/en/sql-reference/statements/create/table.md +++ b/docs/en/sql-reference/statements/create/table.md @@ -50,15 +50,32 @@ Creates a table with the same result as that of the [table function](../../../sq ### From SELECT query {#from-select-query} ``` sql -CREATE TABLE [IF NOT EXISTS] [db.]table_name ENGINE = engine AS SELECT ... +CREATE TABLE [IF NOT EXISTS] [db.]table_name[(name1 [type1], name2 [type2], ...)] ENGINE = engine AS SELECT ... ``` -Creates a table with a structure like the result of the `SELECT` query, with the `engine` engine, and fills it with data from SELECT. +Creates a table with a structure like the result of the `SELECT` query, with the `engine` engine, and fills it with data from `SELECT`. Also you can explicitly specify columns description. 
-In all cases, if `IF NOT EXISTS` is specified, the query won’t return an error if the table already exists. In this case, the query won’t do anything. +If the table already exists and `IF NOT EXISTS` is specified, the query won’t do anything. There can be other clauses after the `ENGINE` clause in the query. See detailed documentation on how to create tables in the descriptions of [table engines](../../../engines/table-engines/index.md#table_engines). +**Example** + +Query: + +``` sql +CREATE TABLE t1 (x String) ENGINE = Memory AS SELECT 1; +SELECT x, toTypeName(x) FROM t1; +``` + +Result: + +```text +┌─x─┬─toTypeName(x)─┐ +│ 1 │ String │ +└───┴───────────────┘ +``` + ## NULL Or NOT NULL Modifiers {#null-modifiers} `NULL` and `NOT NULL` modifiers after data type in column definition allow or do not allow it to be [Nullable](../../../sql-reference/data-types/nullable.md#data_type-nullable). @@ -287,7 +304,9 @@ REPLACE TABLE myOldTable SELECT * FROM myOldTable WHERE CounterID <12345; ### Syntax -{CREATE [OR REPLACE]|REPLACE} TABLE [db.]table_name +``` sql +{CREATE [OR REPLACE] | REPLACE} TABLE [db.]table_name +``` All syntax forms for `CREATE` query also work for this query. `REPLACE` for a non-existent table will cause an error. diff --git a/docs/en/sql-reference/statements/grant.md b/docs/en/sql-reference/statements/grant.md index 0afc9b5b95f..89f35b5f701 100644 --- a/docs/en/sql-reference/statements/grant.md +++ b/docs/en/sql-reference/statements/grant.md @@ -91,7 +91,7 @@ Hierarchy of privileges: - `ALTER ADD CONSTRAINT` - `ALTER DROP CONSTRAINT` - `ALTER TTL` - - `ALTER MATERIALIZE TTL` + - `ALTER MATERIALIZE TTL` - `ALTER SETTINGS` - `ALTER MOVE PARTITION` - `ALTER FETCH PARTITION` @@ -102,9 +102,9 @@ Hierarchy of privileges: - [CREATE](#grant-create) - `CREATE DATABASE` - `CREATE TABLE` + - `CREATE TEMPORARY TABLE` - `CREATE VIEW` - `CREATE DICTIONARY` - - `CREATE TEMPORARY TABLE` - [DROP](#grant-drop) - `DROP DATABASE` - `DROP TABLE` @@ -150,7 +150,7 @@ Hierarchy of privileges: - `SYSTEM RELOAD` - `SYSTEM RELOAD CONFIG` - `SYSTEM RELOAD DICTIONARY` - - `SYSTEM RELOAD EMBEDDED DICTIONARIES` + - `SYSTEM RELOAD EMBEDDED DICTIONARIES` - `SYSTEM MERGES` - `SYSTEM TTL MERGES` - `SYSTEM FETCHES` @@ -276,10 +276,10 @@ Allows executing [ALTER](../../sql-reference/statements/alter/index.md) queries - `ALTER ADD CONSTRAINT`. Level: `TABLE`. Aliases: `ADD CONSTRAINT` - `ALTER DROP CONSTRAINT`. Level: `TABLE`. Aliases: `DROP CONSTRAINT` - `ALTER TTL`. Level: `TABLE`. Aliases: `ALTER MODIFY TTL`, `MODIFY TTL` - - `ALTER MATERIALIZE TTL`. Level: `TABLE`. Aliases: `MATERIALIZE TTL` + - `ALTER MATERIALIZE TTL`. Level: `TABLE`. Aliases: `MATERIALIZE TTL` - `ALTER SETTINGS`. Level: `TABLE`. Aliases: `ALTER SETTING`, `ALTER MODIFY SETTING`, `MODIFY SETTING` - `ALTER MOVE PARTITION`. Level: `TABLE`. Aliases: `ALTER MOVE PART`, `MOVE PARTITION`, `MOVE PART` - - `ALTER FETCH PARTITION`. Level: `TABLE`. Aliases: `FETCH PARTITION` + - `ALTER FETCH PARTITION`. Level: `TABLE`. Aliases: `ALTER FETCH PART`, `FETCH PARTITION`, `FETCH PART` - `ALTER FREEZE PARTITION`. Level: `TABLE`. Aliases: `FREEZE PARTITION` - `ALTER VIEW` Level: `GROUP` - `ALTER VIEW REFRESH`. Level: `VIEW`. Aliases: `ALTER LIVE VIEW REFRESH`, `REFRESH VIEW` @@ -304,9 +304,9 @@ Allows executing [CREATE](../../sql-reference/statements/create/index.md) and [A - `CREATE`. Level: `GROUP` - `CREATE DATABASE`. Level: `DATABASE` - `CREATE TABLE`. Level: `TABLE` + - `CREATE TEMPORARY TABLE`. Level: `GLOBAL` - `CREATE VIEW`. 
Level: `VIEW` - `CREATE DICTIONARY`. Level: `DICTIONARY` - - `CREATE TEMPORARY TABLE`. Level: `GLOBAL` **Notes** @@ -401,7 +401,7 @@ Allows a user to execute [SYSTEM](../../sql-reference/statements/system.md) quer - `SYSTEM RELOAD`. Level: `GROUP` - `SYSTEM RELOAD CONFIG`. Level: `GLOBAL`. Aliases: `RELOAD CONFIG` - `SYSTEM RELOAD DICTIONARY`. Level: `GLOBAL`. Aliases: `SYSTEM RELOAD DICTIONARIES`, `RELOAD DICTIONARY`, `RELOAD DICTIONARIES` - - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Level: `GLOBAL`. Aliases: R`ELOAD EMBEDDED DICTIONARIES` + - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Level: `GLOBAL`. Aliases: `RELOAD EMBEDDED DICTIONARIES` - `SYSTEM MERGES`. Level: `TABLE`. Aliases: `SYSTEM STOP MERGES`, `SYSTEM START MERGES`, `STOP MERGES`, `START MERGES` - `SYSTEM TTL MERGES`. Level: `TABLE`. Aliases: `SYSTEM STOP TTL MERGES`, `SYSTEM START TTL MERGES`, `STOP TTL MERGES`, `START TTL MERGES` - `SYSTEM FETCHES`. Level: `TABLE`. Aliases: `SYSTEM STOP FETCHES`, `SYSTEM START FETCHES`, `STOP FETCHES`, `START FETCHES` diff --git a/docs/en/sql-reference/statements/optimize.md b/docs/en/sql-reference/statements/optimize.md index 49a7404d76e..247252d3f4e 100644 --- a/docs/en/sql-reference/statements/optimize.md +++ b/docs/en/sql-reference/statements/optimize.md @@ -5,13 +5,18 @@ toc_title: OPTIMIZE # OPTIMIZE Statement {#misc_operations-optimize} +This query tries to initialize an unscheduled merge of data parts for tables. + +!!! warning "Warning" + `OPTIMIZE` can’t fix the `Too many parts` error. + +**Syntax** + ``` sql OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL] [DEDUPLICATE [BY expression]] ``` -This query tries to initialize an unscheduled merge of data parts for tables with a table engine from the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) family. - -The `OPTMIZE` query is also supported for the [MaterializedView](../../engines/table-engines/special/materializedview.md) and the [Buffer](../../engines/table-engines/special/buffer.md) engines. Other table engines aren’t supported. +The `OPTMIZE` query is supported for [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) family, the [MaterializedView](../../engines/table-engines/special/materializedview.md) and the [Buffer](../../engines/table-engines/special/buffer.md) engines. Other table engines aren’t supported. When `OPTIMIZE` is used with the [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md) family of table engines, ClickHouse creates a task for merging and waits for execution on all nodes (if the `replication_alter_partitions_sync` setting is enabled). @@ -21,12 +26,13 @@ When `OPTIMIZE` is used with the [ReplicatedMergeTree](../../engines/table-engin - If you specify `DEDUPLICATE`, then completely identical rows (unless by-clause is specified) will be deduplicated (all columns are compared), it makes sense only for the MergeTree engine. -### BY expression {#by-expression} +## BY expression {#by-expression} If you want to perform deduplication on custom set of columns rather than on all, you can specify list of columns explicitly or use any combination of [`*`](../../sql-reference/statements/select/index.md#asterisk), [`COLUMNS`](../../sql-reference/statements/select/index.md#columns-expression) or [`EXCEPT`](../../sql-reference/statements/select/index.md#except-modifier) expressions. 
The explictly written or implicitly expanded list of columns must include all columns specified in row ordering expression (both primary and sorting keys) and partitioning expression (partitioning key). -Note that `*` behaves just like in `SELECT`: `MATERIALIZED`, and `ALIAS` columns are not used for expansion. -Also, it is an error to specify empty list of columns, or write an expression that results in an empty list of columns, or deduplicate by an ALIAS column. +!!! note "Note" + Notice that `*` behaves just like in `SELECT`: `MATERIALIZED` and `ALIAS` columns are not used for expansion. + Also, it is an error to specify empty list of columns, or write an expression that results in an empty list of columns, or deduplicate by an ALIAS column. ``` sql OPTIMIZE TABLE table DEDUPLICATE; -- the old one @@ -39,9 +45,10 @@ OPTIMIZE TABLE table DEDUPLICATE BY COLUMNS('column-matched-by-regex') EXCEPT co OPTIMIZE TABLE table DEDUPLICATE BY COLUMNS('column-matched-by-regex') EXCEPT (colX, colY); ``` -**Example:** +**Examples** + +Create a table: -A silly synthetic table. ``` sql CREATE TABLE example ( primary_key Int32, @@ -56,31 +63,31 @@ PARTITION BY partition_key ORDER BY (primary_key, secondary_key); ``` +The 'old' deduplicate, all columns are taken into account, i.e. row is removed only if all values in all columns are equal to corresponding values in previous row. + ``` sql --- The 'old' deduplicate, all columns are taken into account, i.e. row is removed only if all values in all columns are equal to corresponding values in previous row. OPTIMIZE TABLE example FINAL DEDUPLICATE; ``` +Deduplicate by all columns that are not `ALIAS` or `MATERIALIZED`: `primary_key`, `secondary_key`, `value`, `partition_key`, and `materialized_value` columns. + ``` sql --- Deduplicate by all columns that are not `ALIAS` or `MATERIALIZED`: `primary_key`, `secondary_key`, `value`, `partition_key`, and `materialized_value` columns. OPTIMIZE TABLE example FINAL DEDUPLICATE BY *; ``` +Deduplicate by all columns that are not `ALIAS` or `MATERIALIZED` and explicitly not `materialized_value`: `primary_key`, `secondary_key`, `value`, and `partition_key` columns. + ``` sql --- Deduplicate by all columns that are not `ALIAS` or `MATERIALIZED` and explicitly not `materialized_value`: `primary_key`, `secondary_key`, `value`, and `partition_key` columns. OPTIMIZE TABLE example FINAL DEDUPLICATE BY * EXCEPT materialized_value; ``` +Deduplicate explicitly by `primary_key`, `secondary_key`, and `partition_key` columns. ``` sql --- Deduplicate explicitly by `primary_key`, `secondary_key`, and `partition_key` columns. OPTIMIZE TABLE example FINAL DEDUPLICATE BY primary_key, secondary_key, partition_key; ``` +Deduplicate by any column matching a regex: `primary_key`, `secondary_key`, and `partition_key` columns. + ``` sql --- Deduplicate by any column matching a regex: `primary_key`, `secondary_key`, and `partition_key` columns. OPTIMIZE TABLE example FINAL DEDUPLICATE BY COLUMNS('.*_key'); ``` - - -!!! warning "Warning" - `OPTIMIZE` can’t fix the “Too many parts” error. 
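+A quick way to check the effect of deduplication (a minimal sketch assuming the `example` table created above already contains some duplicate rows) is to compare row counts before and after:
+
+``` sql
+-- count rows before deduplication
+SELECT count() FROM example;
+
+-- deduplicate by all columns that are not ALIAS/MATERIALIZED and are not materialized_value (same query as above)
+OPTIMIZE TABLE example FINAL DEDUPLICATE BY * EXCEPT materialized_value;
+
+-- the count is lower if duplicate rows were merged away
+SELECT count() FROM example;
+```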
diff --git a/docs/en/sql-reference/statements/rename.md b/docs/en/sql-reference/statements/rename.md index 4f14ad016a3..a9dda6ed3b2 100644 --- a/docs/en/sql-reference/statements/rename.md +++ b/docs/en/sql-reference/statements/rename.md @@ -5,6 +5,14 @@ toc_title: RENAME # RENAME Statement {#misc_operations-rename} +## RENAME DATABASE {#misc_operations-rename_database} +Renames database, support only for Atomic database engine + +``` +RENAME DATABASE atomic_database1 TO atomic_database2 [ON CLUSTER cluster] +``` + +## RENAME TABLE {#misc_operations-rename_table} Renames one or more tables. ``` sql diff --git a/docs/en/sql-reference/statements/system.md b/docs/en/sql-reference/statements/system.md index 2348a2a2668..7871894ccac 100644 --- a/docs/en/sql-reference/statements/system.md +++ b/docs/en/sql-reference/statements/system.md @@ -169,7 +169,7 @@ SYSTEM START MERGES [ON VOLUME | [db.]merge_tree_family_table_name ### STOP TTL MERGES {#query_language-stop-ttl-merges} Provides possibility to stop background delete old data according to [TTL expression](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl) for tables in the MergeTree family: -Return `Ok.` even table doesn’t exists or table have not MergeTree engine. Return error when database doesn’t exists: +Returns `Ok.` even if table doesn’t exist or table has not MergeTree engine. Returns error when database doesn’t exist: ``` sql SYSTEM STOP TTL MERGES [[db.]merge_tree_family_table_name] @@ -178,7 +178,7 @@ SYSTEM STOP TTL MERGES [[db.]merge_tree_family_table_name] ### START TTL MERGES {#query_language-start-ttl-merges} Provides possibility to start background delete old data according to [TTL expression](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-ttl) for tables in the MergeTree family: -Return `Ok.` even table doesn’t exists. Return error when database doesn’t exists: +Returns `Ok.` even if table doesn’t exist. Returns error when database doesn’t exist: ``` sql SYSTEM START TTL MERGES [[db.]merge_tree_family_table_name] @@ -187,7 +187,7 @@ SYSTEM START TTL MERGES [[db.]merge_tree_family_table_name] ### STOP MOVES {#query_language-stop-moves} Provides possibility to stop background move data according to [TTL table expression with TO VOLUME or TO DISK clause](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-table-ttl) for tables in the MergeTree family: -Return `Ok.` even table doesn’t exists. Return error when database doesn’t exists: +Returns `Ok.` even if table doesn’t exist. Returns error when database doesn’t exist: ``` sql SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] @@ -196,7 +196,7 @@ SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] ### START MOVES {#query_language-start-moves} Provides possibility to start background move data according to [TTL table expression with TO VOLUME and TO DISK clause](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-table-ttl) for tables in the MergeTree family: -Return `Ok.` even table doesn’t exists. Return error when database doesn’t exists: +Returns `Ok.` even if table doesn’t exist. 
Returns error when database doesn’t exist: ``` sql SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] @@ -209,7 +209,7 @@ ClickHouse can manage background replication related processes in [ReplicatedMer ### STOP FETCHES {#query_language-system-stop-fetches} Provides possibility to stop background fetches for inserted parts for tables in the `ReplicatedMergeTree` family: -Always returns `Ok.` regardless of the table engine and even table or database doesn’t exists. +Always returns `Ok.` regardless of the table engine and even if table or database doesn’t exist. ``` sql SYSTEM STOP FETCHES [[db.]replicated_merge_tree_family_table_name] @@ -218,7 +218,7 @@ SYSTEM STOP FETCHES [[db.]replicated_merge_tree_family_table_name] ### START FETCHES {#query_language-system-start-fetches} Provides possibility to start background fetches for inserted parts for tables in the `ReplicatedMergeTree` family: -Always returns `Ok.` regardless of the table engine and even table or database doesn’t exists. +Always returns `Ok.` regardless of the table engine and even if table or database doesn’t exist. ``` sql SYSTEM START FETCHES [[db.]replicated_merge_tree_family_table_name] @@ -264,9 +264,7 @@ Wait until a `ReplicatedMergeTree` table will be synced with other replicas in a SYSTEM SYNC REPLICA [db.]replicated_merge_tree_family_table_name ``` -After running this statement the `[db.]replicated_merge_tree_family_table_name` fetches commands from -the common replicated log into its own replication queue, and then the query waits till the replica processes all -of the fetched commands. +After running this statement the `[db.]replicated_merge_tree_family_table_name` fetches commands from the common replicated log into its own replication queue, and then the query waits till the replica processes all of the fetched commands. ### RESTART REPLICA {#query_language-system-restart-replica} @@ -280,4 +278,3 @@ SYSTEM RESTART REPLICA [db.]replicated_merge_tree_family_table_name ### RESTART REPLICAS {#query_language-system-restart-replicas} Provides possibility to reinitialize Zookeeper sessions state for all `ReplicatedMergeTree` tables, will compare current state with Zookeeper as source of true and add tasks to Zookeeper queue if needed - diff --git a/docs/en/sql-reference/table-functions/postgresql.md b/docs/en/sql-reference/table-functions/postgresql.md index bfb5fdf9be6..3eab572ac12 100644 --- a/docs/en/sql-reference/table-functions/postgresql.md +++ b/docs/en/sql-reference/table-functions/postgresql.md @@ -65,9 +65,9 @@ postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> SELECT * FROM test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | (1 row) ``` diff --git a/docs/en/sql-reference/table-functions/s3.md b/docs/en/sql-reference/table-functions/s3.md index 34f0607b94c..285ec862aab 100644 --- a/docs/en/sql-reference/table-functions/s3.md +++ b/docs/en/sql-reference/table-functions/s3.md @@ -18,7 +18,7 @@ s3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compres - `path` — Bucket url with path to file. Supports following wildcards in readonly mode: `*`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [here](../../engines/table-engines/integrations/s3.md#wildcards-in-path). 
- `format` — The [format](../../interfaces/formats.md#formats) of the file. - `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`. -- `compression` — Parameter is optional. Supported values: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. By default, it will autodetect compression by file extension. +- `compression` — Parameter is optional. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. By default, it will autodetect compression by file extension. **Returned value** diff --git a/docs/ja/development/build.md b/docs/ja/development/build.md index e44ba45485e..191fa665ccd 100644 --- a/docs/ja/development/build.md +++ b/docs/ja/development/build.md @@ -19,28 +19,17 @@ $ sudo apt-get install git cmake python ninja-build 古いシステムではcmakeの代わりにcmake3。 -## GCC9のインストール {#install-gcc-10} +## Clang 11 のインストール -これを行うにはいくつかの方法があります。 +On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/)) -### PPAパッケージからインストール {#install-from-a-ppa-package} - -``` bash -$ sudo apt-get install software-properties-common -$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test -$ sudo apt-get update -$ sudo apt-get install gcc-10 g++-10 +```bash +sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ``` -### ソースからインスト {#install-from-sources} - -見て [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh) - -## ビルドにGCC9を使用する {#use-gcc-10-for-builds} - ``` bash -$ export CC=gcc-10 -$ export CXX=g++-10 +$ export CC=clang +$ export CXX=clang++ ``` ## ツつィツ姪"ツ債ツつケ {#checkout-clickhouse-sources} @@ -76,7 +65,7 @@ $ cd .. - Git(ソースをチェックアウトするためにのみ使用され、ビルドには必要ありません) - CMake3.10以降 - 忍者(推奨)または作る -- C++コンパイラ:gcc9またはclang8以降 +- C++コンパイラ:clang11以降 - リンカ:lldまたはgold(古典的なGNU ldは動作しません) - Python(LLVMビルド内でのみ使用され、オプションです) diff --git a/docs/ja/development/developer-instruction.md b/docs/ja/development/developer-instruction.md index ccc3a177d1f..d7e5217b3b6 100644 --- a/docs/ja/development/developer-instruction.md +++ b/docs/ja/development/developer-instruction.md @@ -133,19 +133,19 @@ ArchまたはGentooを使用する場合は、おそらくCMakeのインスト ClickHouseはビルドに複数の外部ライブラリを使用します。 それらのすべては、サブモジュールにあるソースからClickHouseと一緒に構築されているので、別々にインストールする必要はありません。 リストは次の場所で確認できます `contrib`. -# C++コンパイラ {#c-compiler} +## C++ Compiler {#c-compiler} -ClickHouseのビルドには、バージョン9以降のGCCとClangバージョン8以降のコンパイラがサポートされます。 +Compilers Clang starting from version 11 is supported for building ClickHouse. -公式のYandexビルドは、わずかに優れたパフォーマンスのマシンコードを生成するため、GCCを使用しています(私たちのベンチマークに応じて最大数パーセントの そしてClangは開発のために通常より便利です。 が、当社の継続的インテグレーション(CI)プラットフォームを運チェックのための十数の組み合わせとなります。 +Clang should be used instead of gcc. Though, our continuous integration (CI) platform runs checks for about a dozen of build combinations. -UBUNTUにGCCをインストールするには: `sudo apt install gcc g++` +On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/)) -Gccのバージョンを確認する: `gcc --version`. の場合は下記9その指示に従う。https://clickhouse.tech/docs/ja/development/build/#install-gcc-10. +```bash +sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" +``` -Mac OS XのビルドはClangでのみサポートされています。 ちょうど実行 `brew install llvm` - -Clangを使用する場合は、次のものもインストールできます `libc++` と `lld` あなたがそれが何であるか知っていれば。 を使用して `ccache` また、推奨されます。 +Mac OS X build is also supported. 
Just run `brew install llvm` # 建築プロセス {#the-building-process} @@ -158,13 +158,6 @@ ClickHouseを構築する準備ができたので、別のディレクトリを 中の間 `build` cmakeを実行してビルドを構成します。 最初の実行の前に、コンパイラ(この例ではバージョン9gccコンパイラ)を指定する環境変数を定義する必要があります。 -Linux: - - export CC=gcc-10 CXX=g++-10 - cmake .. - -Mac OS X: - export CC=clang CXX=clang++ cmake .. diff --git a/docs/ja/sql-reference/aggregate-functions/reference.md b/docs/ja/sql-reference/aggregate-functions/reference.md index 465f36179da..c66e9b54746 100644 --- a/docs/ja/sql-reference/aggregate-functions/reference.md +++ b/docs/ja/sql-reference/aggregate-functions/reference.md @@ -624,7 +624,7 @@ uniqHLL12(x[, ...]) - HyperLogLogアルゴリズムを使用して、異なる引数値の数を近似します。 - 212 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). + 2^12 5-bit cells are used. The size of the state is slightly more than 2.5 KB. The result is not very accurate (up to ~10% error) for small data sets (<10K elements). However, the result is fairly accurate for high-cardinality data sets (10K-100M), with a maximum error of ~1.6%. Starting from 100M, the estimation error increases, and the function will return very inaccurate results for data sets with extremely high cardinality (1B+ elements). - 決定的な結果を提供します(クエリ処理順序に依存しません)。 diff --git a/docs/ja/sql-reference/functions/bitmap-functions.md b/docs/ja/sql-reference/functions/bitmap-functions.md index cc57e762610..de3ce938444 100644 --- a/docs/ja/sql-reference/functions/bitmap-functions.md +++ b/docs/ja/sql-reference/functions/bitmap-functions.md @@ -35,7 +35,7 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res) ``` text ┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐ -│  │ AggregateFunction(groupBitmap, UInt8) │ +│ │ AggregateFunction(groupBitmap, UInt8) │ └─────┴──────────────────────────────────────────────┘ ``` diff --git a/docs/ja/sql-reference/functions/hash-functions.md b/docs/ja/sql-reference/functions/hash-functions.md index d48e6846bb4..a98ae60690d 100644 --- a/docs/ja/sql-reference/functions/hash-functions.md +++ b/docs/ja/sql-reference/functions/hash-functions.md @@ -434,13 +434,13 @@ A [FixedString(16)](../../sql-reference/data-types/fixedstring.md) データ型 **例** ``` sql -SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type +SELECT hex(murmurHash3_128('example_string')) AS MurmurHash3, toTypeName(MurmurHash3) AS type; ``` ``` text -┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q │ FixedString(16) │ -└──────────────────┴─────────────────┘ +┌─MurmurHash3──────────────────────┬─type───┐ +│ 368A1A311CB7342253354B548E7E7E71 │ String │ +└──────────────────────────────────┴────────┘ ``` ## xxHash32,xxHash64 {#hash-functions-xxhash32} diff --git a/docs/ru/commercial/cloud.md b/docs/ru/commercial/cloud.md index 610f0f00a99..e00fc3be673 100644 --- a/docs/ru/commercial/cloud.md +++ b/docs/ru/commercial/cloud.md @@ -29,3 +29,30 @@ toc_title: "Поставщики облачных услуг ClickHouse" - cross-az масштабирование для повышения производительности и обеспечения высокой доступности - встроенный мониторинг и редактор SQL-запросов +## Alibaba Cloud {#alibaba-cloud} + +Управляемый облачный сервис Alibaba для ClickHouse: [китайская 
площадка](https://www.aliyun.com/product/clickhouse), будет доступен на международной площадке в мае 2021 года. Сервис предоставляет следующие возможности: + +- надежный сервер для облачного хранилища на основе распределенной системы [Alibaba Cloud Apsara](https://www.alibabacloud.com/product/apsara-stack); +- расширяемая по запросу емкость, без переноса данных вручную; +- поддержка одноузловой и многоузловой архитектуры, архитектуры с одной или несколькими репликами, а также многоуровневого хранения cold и hot data; +- поддержка прав доступа, one-key восстановления, многоуровневая защита сети, шифрование облачного диска; +- полная интеграция с облачными системами логирования, базами данных и инструментами обработки данных; +- встроенная платформа для мониторинга и управления базами данных; +- техническая поддержка от экспертов по работе с базами данных. + +## SberCloud {#sbercloud} + +[Облачная платформа SberCloud.Advanced](https://sbercloud.ru/ru/advanced): + +- предоставляет более 50 высокотехнологичных сервисов; +- позволяет быстро создавать и эффективно управлять ИТ-инфраструктурой, приложениями и интернет-сервисами; +- радикально минимизирует ресурсы, требуемые для работы корпоративных ИТ-систем; +- в разы сокращает время вывода новых продуктов на рынок. + +SberCloud.Advanced предоставляет [MapReduce Service (MRS)](https://docs.sbercloud.ru/mrs/ug/topics/ug__clickhouse.html) — надежную, безопасную и простую в использовании платформу корпоративного уровня для хранения, обработки и анализа больших данных. MRS позволяет быстро создавать и управлять кластерами ClickHouse. + +- Инстанс ClickHouse состоит из трех узлов ZooKeeper и нескольких узлов ClickHouse. Выделенный режим реплики используется для обеспечения высокой надежности двойных копий данных. +- MRS предлагает возможности гибкого масштабирования при быстром росте сервисов в сценариях, когда емкости кластерного хранилища или вычислительных ресурсов процессора недостаточно. MRS в один клик предоставляет инструмент для балансировки данных при расширении узлов ClickHouse в кластере. Вы можете определить режим и время балансировки данных на основе характеристик сервиса, чтобы обеспечить доступность сервиса. +- MRS использует архитектуру развертывания высокой доступности на основе Elastic Load Balance (ELB) — сервиса для автоматического распределения трафика на несколько внутренних узлов. Благодаря ELB, данные записываются в локальные таблицы и считываются из распределенных таблиц на разных узлах. Такая архитектура повышает отказоустойчивость кластера и гарантирует высокую доступность приложений. + diff --git a/docs/ru/development/architecture.md b/docs/ru/development/architecture.md index 9f43fabba4f..d2cfc44b711 100644 --- a/docs/ru/development/architecture.md +++ b/docs/ru/development/architecture.md @@ -27,7 +27,7 @@ ClickHouse - полноценная колоночная СУБД. Данные `IColumn` предоставляет методы для общих реляционных преобразований данных, но они не отвечают всем потребностям. Например, `ColumnUInt64` не имеет метода для вычисления суммы двух столбцов, а `ColumnString` не имеет метода для запуска поиска по подстроке. Эти бесчисленные процедуры реализованы вне `IColumn`. -Различные функции на колонках могут быть реализованы обобщенным, неэффективным путем, используя `IColumn` методы для извлечения значений `Field`, или специальным путем, используя знания о внутреннем распределение данных в памяти в конкретной реализации `IColumn`. 
Для этого функции приводятся к конкретному типу `IColumn` и работают напрямую с его внутренним представлением. Например, в `ColumnUInt64` есть метод getData, который возвращает ссылку на внутренний массив, чтение и заполнение которого, выполняется отдельной процедурой напрямую. Фактически, мы имеем "дырявую абстракции", обеспечивающие эффективные специализации различных процедур. +Различные функции на колонках могут быть реализованы обобщенным, неэффективным путем, используя `IColumn` методы для извлечения значений `Field`, или специальным путем, используя знания о внутреннем распределение данных в памяти в конкретной реализации `IColumn`. Для этого функции приводятся к конкретному типу `IColumn` и работают напрямую с его внутренним представлением. Например, в `ColumnUInt64` есть метод `getData`, который возвращает ссылку на внутренний массив, чтение и заполнение которого, выполняется отдельной процедурой напрямую. Фактически, мы имеем "дырявые абстракции", обеспечивающие эффективные специализации различных процедур. ## Типы данных (Data Types) {#data_types} @@ -42,7 +42,7 @@ ClickHouse - полноценная колоночная СУБД. Данные ## Блоки (Block) {#block} -`Block` это контейнер, который представляет фрагмент (chunk) таблицы в памяти. Это набор троек - `(IColumn, IDataType, имя колонки)`. В процессе выполнения запроса, данные обрабатываются `Block`ами. Если у нас есть `Block`, значит у нас есть данные (в объекте `IColumn`), информация о типе (в `IDataType`), которая говорит нам, как работать с колонкой, и имя колонки (оригинальное имя колонки таблицы или служебное имя, присвоенное для получения промежуточных результатов вычислений). +`Block` это контейнер, который представляет фрагмент (chunk) таблицы в памяти. Это набор троек - `(IColumn, IDataType, имя колонки)`. В процессе выполнения запроса, данные обрабатываются `Block`-ами. Если у нас есть `Block`, значит у нас есть данные (в объекте `IColumn`), информация о типе (в `IDataType`), которая говорит нам, как работать с колонкой, и имя колонки (оригинальное имя колонки таблицы или служебное имя, присвоенное для получения промежуточных результатов вычислений). При вычислении некоторой функции на колонках в блоке мы добавляем еще одну колонку с результатами в блок, не трогая колонки аргументов функции, потому что операции иммутабельные. Позже ненужные колонки могут быть удалены из блока, но не модифицированы. Это удобно для устранения общих подвыражений. @@ -58,7 +58,7 @@ ClickHouse - полноценная колоночная СУБД. Данные 2. Реализацию форматов данных. Например, при выводе данных в терминал в формате `Pretty`, вы создаете выходной поток блоков, который форматирует поступающие в него блоки. 3. Трансформацию данных. Допустим, у вас есть `IBlockInputStream` и вы хотите создать отфильтрованный поток. Вы создаете `FilterBlockInputStream` и инициализируете его вашим потоком. Затем вы тянете (pull) блоки из `FilterBlockInputStream`, а он тянет блоки исходного потока, фильтрует их и возвращает отфильтрованные блоки вам. Таким образом построены конвейеры выполнения запросов. -Имеются и более сложные трансформации. Например, когда вы тянете блоки из `AggregatingBlockInputStream`, он считывает все данные из своего источника, агрегирует их, и возвращает поток агрегированных данных вам. Другой пример: конструктор `UnionBlockInputStream` принимает множество источников входных данных и число потоков. Такой `Stream` работает в несколько потоков и читает данные источников параллельно. +Имеются и более сложные трансформации. 
Например, когда вы тянете блоки из `AggregatingBlockInputStream`, он считывает все данные из своего источника, агрегирует их, и возвращает поток агрегированных данных вам. Другой пример: конструктор `UnionBlockInputStream` принимает множество источников входных данных и число потоков. Такой `Stream` работает в несколько потоков и читает данные источников параллельно. > Потоки блоков используют «втягивающий» (pull) подход к управлению потоком выполнения: когда вы вытягиваете блок из первого потока, он, следовательно, вытягивает необходимые блоки из вложенных потоков, так и работает весь конвейер выполнения. Ни «pull» ни «push» не имеют явного преимущества, потому что поток управления неявный, и это ограничивает в реализации различных функций, таких как одновременное выполнение нескольких запросов (слияние нескольких конвейеров вместе). Это ограничение можно преодолеть с помощью сопрограмм (coroutines) или просто запуском дополнительных потоков, которые ждут друг друга. У нас может быть больше возможностей, если мы сделаем поток управления явным: если мы локализуем логику для передачи данных из одной расчетной единицы в другую вне этих расчетных единиц. Читайте эту [статью](http://journal.stuffwithstuff.com/2013/01/13/iteration-inside-and-out/) для углубленного изучения. @@ -110,9 +110,9 @@ ClickHouse - полноценная колоночная СУБД. Данные > Генераторы парсеров не используются по историческим причинам. ## Интерпретаторы {#interpreters} - + Интерпретаторы отвечают за создание конвейера выполнения запроса из `AST`. Есть простые интерпретаторы, такие как `InterpreterExistsQuery` и `InterpreterDropQuery` или более сложный `InterpreterSelectQuery`. Конвейер выполнения запроса представляет собой комбинацию входных и выходных потоков блоков. Например, результатом интерпретации `SELECT` запроса является `IBlockInputStream` для чтения результирующего набора данных; результат интерпретации `INSERT` запроса - это `IBlockOutputStream`, для записи данных, предназначенных для вставки; результат интерпретации `INSERT SELECT` запроса - это `IBlockInputStream`, который возвращает пустой результирующий набор при первом чтении, но копирует данные из `SELECT` к `INSERT`. - + `InterpreterSelectQuery` использует `ExpressionAnalyzer` и `ExpressionActions` механизмы для анализа запросов и преобразований. Именно здесь выполняется большинство оптимизаций запросов на основе правил. `ExpressionAnalyzer` написан довольно грязно и должен быть переписан: различные преобразования запросов и оптимизации должны быть извлечены в отдельные классы, чтобы позволить модульные преобразования или запросы. ## Функции {#functions} @@ -162,9 +162,9 @@ ClickHouse имеет сильную типизацию, поэтому нет Сервера в кластере в основном независимы. Вы можете создать `Распределенную` (`Distributed`) таблицу на одном или всех серверах в кластере. Такая таблица сама по себе не хранит данные - она только предоставляет возможность "просмотра" всех локальных таблиц на нескольких узлах кластера. При выполнении `SELECT` распределенная таблица переписывает запрос, выбирает удаленные узлы в соответствии с настройками балансировки нагрузки и отправляет им запрос. Распределенная таблица просит удаленные сервера обработать запрос до той стадии, когда промежуточные результаты с разных серверов могут быть объединены. Затем он получает промежуточные результаты и объединяет их. Распределенная таблица пытается возложить как можно больше работы на удаленные серверы и сократить объем промежуточных данных, передаваемых по сети. 
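+For illustration, a Distributed table over per-node local tables could be sketched like this (the cluster name `my_cluster`, the database `mydb` and the table `hits_local` are hypothetical placeholders, not part of the original text):
+
+``` sql
+-- the Distributed table stores no data itself; it only routes queries to the local tables on the cluster nodes
+CREATE TABLE mydb.hits_all AS mydb.hits_local
+ENGINE = Distributed(my_cluster, mydb, hits_local, rand());
+
+-- the query is rewritten and sent to the remote nodes; intermediate results are merged on the initiating server
+SELECT count() FROM mydb.hits_all;
+```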
-Ситуация усложняется, при использовании подзапросы в случае IN или JOIN, когда каждый из них использует таблицу `Distributed`. Есть разные стратегии для выполнения таких запросов. +Ситуация усложняется, при использовании подзапросов в случае `IN` или `JOIN`, когда каждый из них использует таблицу `Distributed`. Есть разные стратегии для выполнения таких запросов. -Глобального плана выполнения распределенных запросов не существует. Каждый узел имеет собственный локальный план для своей части работы. У нас есть простое однонаправленное выполнение распределенных запросов: мы отправляем запросы на удаленные узлы и затем объединяем результаты. Но это невозможно для сложных запросов GROUP BY высокой кардинальности или запросов с большим числом временных данных в JOIN: в таких случаях нам необходимо перераспределить («reshuffle») данные между серверами, что требует дополнительной координации. ClickHouse не поддерживает выполнение запросов такого рода, и нам нужно работать над этим. +Глобального плана выполнения распределенных запросов не существует. Каждый узел имеет собственный локальный план для своей части работы. У нас есть простое однонаправленное выполнение распределенных запросов: мы отправляем запросы на удаленные узлы и затем объединяем результаты. Но это невозможно для сложных запросов `GROUP BY` высокой кардинальности или запросов с большим числом временных данных в `JOIN`: в таких случаях нам необходимо перераспределить («reshuffle») данные между серверами, что требует дополнительной координации. ClickHouse не поддерживает выполнение запросов такого рода, и нам нужно работать над этим. ## Merge Tree {#merge-tree} @@ -190,7 +190,7 @@ ClickHouse имеет сильную типизацию, поэтому нет Репликация использует асинхронную multi-master схему. Вы можете вставить данные в любую реплику, которая имеет открытую сессию в `ZooKeeper`, и данные реплицируются на все другие реплики асинхронно. Поскольку ClickHouse не поддерживает UPDATE, репликация исключает конфликты (conflict-free replication). Поскольку подтверждение вставок кворумом не реализовано, только что вставленные данные могут быть потеряны в случае сбоя одного узла. -Метаданные для репликации хранятся в `ZooKeeper`. Существует журнал репликации, в котором перечислены действия, которые необходимо выполнить. Среди этих действий: получить часть (get the part); объединить части (merge parts); удалить партицию (drop a partition) и так далее. Каждая реплика копирует журнал репликации в свою очередь, а затем выполняет действия из очереди. Например, при вставке в журнале создается действие «получить часть» (get the part), и каждая реплика загружает эту часть. Слияния координируются между репликами, чтобы получить идентичные до байта результаты. Все части объединяются одинаково на всех репликах. Одна из реплик-лидеров инициирует новое слияние кусков первой и записывает действия «слияния частей» в журнал. Несколько реплик (или все) могут быть лидерами одновременно. Реплике можно запретить быть лидером с помощью `merge_tree` настройки `replicated_can_become_leader`. +Метаданные для репликации хранятся в `ZooKeeper`. Существует журнал репликации, в котором перечислены действия, которые необходимо выполнить. Среди этих действий: получить часть (get the part); объединить части (merge parts); удалить партицию (drop a partition) и так далее. Каждая реплика копирует журнал репликации в свою очередь, а затем выполняет действия из очереди. Например, при вставке в журнале создается действие «получить часть» (get the part), и каждая реплика загружает эту часть. 
Слияния координируются между репликами, чтобы получить идентичные до байта результаты. Все части объединяются одинаково на всех репликах. Одна из реплик-лидеров инициирует новое слияние кусков первой и записывает действия «слияния частей» в журнал. Несколько реплик (или все) могут быть лидерами одновременно. Реплике можно запретить быть лидером с помощью `merge_tree` настройки `replicated_can_become_leader`. Репликация является физической: между узлами передаются только сжатые части, а не запросы. Слияния обрабатываются на каждой реплике независимо, в большинстве случаев, чтобы снизить затраты на сеть, во избежание усиления роли сети. Крупные объединенные части отправляются по сети только в случае значительной задержки репликации. diff --git a/docs/ru/development/developer-instruction.md b/docs/ru/development/developer-instruction.md index 9ddb17b7212..463d38a44fb 100644 --- a/docs/ru/development/developer-instruction.md +++ b/docs/ru/development/developer-instruction.md @@ -7,15 +7,15 @@ toc_title: "Инструкция для разработчиков" Сборка ClickHouse поддерживается на Linux, FreeBSD, Mac OS X. -# Если вы используете Windows {#esli-vy-ispolzuete-windows} +## Если вы используете Windows {#esli-vy-ispolzuete-windows} Если вы используете Windows, вам потребуется создать виртуальную машину с Ubuntu. Для работы с виртуальной машиной, установите VirtualBox. Скачать Ubuntu можно на сайте: https://www.ubuntu.com/#download Создайте виртуальную машину из полученного образа. Выделите для неё не менее 4 GB оперативной памяти. Для запуска терминала в Ubuntu, найдите в меню программу со словом terminal (gnome-terminal, konsole или что-то в этом роде) или нажмите Ctrl+Alt+T. -# Если вы используете 32-битную систему {#esli-vy-ispolzuete-32-bitnuiu-sistemu} +## Если вы используете 32-битную систему {#esli-vy-ispolzuete-32-bitnuiu-sistemu} ClickHouse не работает и не собирается на 32-битных системах. Получите доступ к 64-битной системе и продолжайте. -# Создание репозитория на GitHub {#sozdanie-repozitoriia-na-github} +## Создание репозитория на GitHub {#sozdanie-repozitoriia-na-github} Для работы с репозиторием ClickHouse, вам потребуется аккаунт на GitHub. Наверное, он у вас уже есть. @@ -34,7 +34,7 @@ ClickHouse не работает и не собирается на 32-битны Подробное руководство по использованию Git: https://git-scm.com/book/ru/v2 -# Клонирование репозитория на рабочую машину {#klonirovanie-repozitoriia-na-rabochuiu-mashinu} +## Клонирование репозитория на рабочую машину {#klonirovanie-repozitoriia-na-rabochuiu-mashinu} Затем вам потребуется загрузить исходники для работы на свой компьютер. Это называется «клонирование репозитория», потому что создаёт на вашем компьютере локальную копию репозитория, с которой вы будете работать. @@ -78,7 +78,7 @@ ClickHouse не работает и не собирается на 32-битны После этого, вы сможете добавлять в свой репозиторий обновления из репозитория Яндекса с помощью команды `git pull upstream master`. -## Работа с сабмодулями Git {#rabota-s-sabmoduliami-git} +### Работа с сабмодулями Git {#rabota-s-sabmoduliami-git} Работа с сабмодулями git может быть достаточно болезненной. Следующие команды позволят содержать их в порядке: @@ -110,7 +110,7 @@ The next commands would help you to reset all submodules to the initial state (! git submodule foreach git submodule foreach git reset --hard git submodule foreach git submodule foreach git clean -xfd -# Система сборки {#sistema-sborki} +## Система сборки {#sistema-sborki} ClickHouse использует систему сборки CMake и Ninja. 
@@ -130,25 +130,25 @@ Ninja - система запуска сборочных задач. Проверьте версию CMake: `cmake --version`. Если версия меньше 3.3, то установите новую версию с сайта https://cmake.org/download/ -# Необязательные внешние библиотеки {#neobiazatelnye-vneshnie-biblioteki} +## Необязательные внешние библиотеки {#neobiazatelnye-vneshnie-biblioteki} ClickHouse использует для сборки некоторое количество внешних библиотек. Но ни одну из них не требуется отдельно устанавливать, так как они собираются вместе с ClickHouse, из исходников, которые расположены в submodules. Посмотреть набор этих библиотек можно в директории contrib. -# Компилятор C++ {#kompiliator-c} +## Компилятор C++ {#kompiliator-c} -В качестве компилятора C++ поддерживается GCC начиная с версии 9 или Clang начиная с версии 8. +В качестве компилятора C++ поддерживается Clang начиная с версии 11. -Официальные сборки от Яндекса, на данный момент, используют GCC, так как он генерирует слегка более производительный машинный код (разница в среднем до нескольких процентов по нашим бенчмаркам). Clang обычно более удобен для разработки. Впрочем, наша среда continuous integration проверяет около десятка вариантов сборки. +Впрочем, наша среда continuous integration проверяет около десятка вариантов сборки, включая gcc, но сборка с помощью gcc непригодна для использования в продакшене. -Для установки GCC под Ubuntu, выполните: `sudo apt install gcc g++`. +On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/)) -Проверьте версию gcc: `gcc --version`. Если версия меньше 10, то следуйте инструкции: https://clickhouse.tech/docs/ru/development/build/#install-gcc-10. +```bash +sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" +``` Сборка под Mac OS X поддерживается только для компилятора Clang. Чтобы установить его выполните `brew install llvm` -Если вы решили использовать Clang, вы также можете установить `libc++` и `lld`, если вы знаете, что это такое. При желании, установите `ccache`. - -# Процесс сборки {#protsess-sborki} +## Процесс сборки {#protsess-sborki} Теперь вы готовы к сборке ClickHouse. Для размещения собранных файлов, рекомендуется создать отдельную директорию build внутри директории ClickHouse: @@ -158,14 +158,7 @@ ClickHouse использует для сборки некоторое коли Вы можете иметь несколько разных директорий (build_release, build_debug) для разных вариантов сборки. Находясь в директории build, выполните конфигурацию сборки с помощью CMake. -Перед первым запуском необходимо выставить переменные окружения, отвечающие за выбор компилятора (в данном примере это - gcc версии 9). - -Linux: - - export CC=gcc-10 CXX=g++-10 - cmake .. - -Mac OS X: +Перед первым запуском необходимо выставить переменные окружения, отвечающие за выбор компилятора. export CC=clang CXX=clang++ cmake .. 
@@ -206,7 +199,7 @@ Mac OS X: ls -l programs/clickhouse -# Запуск собранной версии ClickHouse {#zapusk-sobrannoi-versii-clickhouse} +## Запуск собранной версии ClickHouse {#zapusk-sobrannoi-versii-clickhouse} Для запуска сервера из под текущего пользователя, с выводом логов в терминал и с использованием примеров конфигурационных файлов, расположенных в исходниках, перейдите в директорию `ClickHouse/programs/server/` (эта директория находится не в директории build) и выполните: @@ -233,7 +226,7 @@ Mac OS X: sudo service clickhouse-server stop sudo -u clickhouse ClickHouse/build/programs/clickhouse server --config-file /etc/clickhouse-server/config.xml -# Среда разработки {#sreda-razrabotki} +## Среда разработки {#sreda-razrabotki} Если вы не знаете, какую среду разработки использовать, то рекомендуется использовать CLion. CLion является платным ПО, но его можно использовать бесплатно в течение пробного периода. Также он бесплатен для учащихся. CLion можно использовать как под Linux, так и под Mac OS X. @@ -243,7 +236,7 @@ Mac OS X: На всякий случай заметим, что CLion самостоятельно создаёт свою build директорию, самостоятельно выбирает тип сборки debug по-умолчанию, для конфигурации использует встроенную в CLion версию CMake вместо установленного вами, а для запуска задач использует make вместо ninja. Это нормально, просто имейте это ввиду, чтобы не возникало путаницы. -# Написание кода {#napisanie-koda} +## Написание кода {#napisanie-koda} Описание архитектуры ClickHouse: https://clickhouse.tech/docs/ru/development/architecture/ @@ -253,7 +246,7 @@ Mac OS X: Список задач: https://github.com/ClickHouse/ClickHouse/issues?q=is%3Aopen+is%3Aissue+label%3A%22easy+task%22 -# Тестовые данные {#testovye-dannye} +## Тестовые данные {#testovye-dannye} Разработка ClickHouse часто требует загрузки реалистичных наборов данных. Особенно это важно для тестирования производительности. Специально для вас мы подготовили набор данных, представляющий собой анонимизированные данные Яндекс.Метрики. Загрузка этих данных потребует ещё 3 GB места на диске. Для выполнения большинства задач разработки, загружать эти данные не обязательно. @@ -274,7 +267,7 @@ Mac OS X: clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.hits FORMAT TSV" < hits_v1.tsv clickhouse-client --max_insert_block_size 100000 --query "INSERT INTO test.visits FORMAT TSV" < visits_v1.tsv -# Создание Pull Request {#sozdanie-pull-request} +## Создание Pull Request {#sozdanie-pull-request} Откройте свой форк репозитория в интерфейсе GitHub. Если вы вели разработку в бранче, выберите этот бранч. На странице будет доступна кнопка «Pull request». По сути, это означает «создать заявку на принятие моих изменений в основной репозиторий». diff --git a/docs/ru/development/style.md b/docs/ru/development/style.md index f08ecc3c4c7..de29e629ceb 100644 --- a/docs/ru/development/style.md +++ b/docs/ru/development/style.md @@ -747,7 +747,7 @@ The dictionary is configured incorrectly. Есть два основных варианта проверки на такие ошибки: * Исключение с кодом `LOGICAL_ERROR`. Его можно использовать для важных проверок, которые делаются в том числе в релизной сборке. -* `assert`. Такие условия не проверяются в релизной сборке, можно использовать для тяжёлых и опциональных проверок. +* `assert`. Такие условия не проверяются в релизной сборке, можно использовать для тяжёлых и опциональных проверок. Пример сообщения, у которого должен быть код `LOGICAL_ERROR`: `Block header is inconsistent with Chunk in ICompicatedProcessor::munge(). 
It is a bug!` @@ -780,7 +780,7 @@ The dictionary is configured incorrectly. **2.** Язык - C++20 (см. список доступных [C++20 фич](https://en.cppreference.com/w/cpp/compiler_support#C.2B.2B20_features)). -**3.** Компилятор - `gcc`. На данный момент (август 2020), код собирается версией 9.3. (Также код может быть собран `clang` версий 10 и 9) +**3.** Компилятор - `clang`. На данный момент (апрель 2021), код собирается версией 11. (Также код может быть собран `gcc` версии 10, но такая сборка не тестируется и непригодна для продакшена). Используется стандартная библиотека (реализация `libc++`). diff --git a/docs/ru/engines/database-engines/atomic.md b/docs/ru/engines/database-engines/atomic.md index a371301fd2e..8c75be3d93b 100644 --- a/docs/ru/engines/database-engines/atomic.md +++ b/docs/ru/engines/database-engines/atomic.md @@ -3,15 +3,52 @@ toc_priority: 32 toc_title: Atomic --- - # Atomic {#atomic} -Поддерживает неблокирующие запросы `DROP` и `RENAME TABLE` и запросы `EXCHANGE TABLES t1 AND t2`. Движок `Atomic` используется по умолчанию. +Поддерживает неблокирующие запросы [DROP TABLE](#drop-detach-table) и [RENAME TABLE](#rename-table) и атомарные запросы [EXCHANGE TABLES t1 AND t](#exchange-tables). Движок `Atomic` используется по умолчанию. ## Создание БД {#creating-a-database} -```sql -CREATE DATABASE test ENGINE = Atomic; +``` sql + CREATE DATABASE test[ ENGINE = Atomic]; ``` -[Оригинальная статья](https://clickhouse.tech/docs/ru/engines/database-engines/atomic/) +## Особенности и рекомендации {#specifics-and-recommendations} + +### UUID {#table-uuid} + +Каждая таблица в базе данных `Atomic` имеет уникальный [UUID](../../sql-reference/data-types/uuid.md) и хранит данные в папке `/clickhouse_path/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`, где `xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy` - это UUID таблицы. +Обычно UUID генерируется автоматически, но пользователь также может явно указать UUID в момент создания таблицы (однако это не рекомендуется). Для отображения UUID в запросе `SHOW CREATE` вы можете использовать настройку [show_table_uuid_in_table_create_query_if_not_nil](../../operations/settings/settings.md#show_table_uuid_in_table_create_query_if_not_nil). Результат выполнения в таком случае будет иметь вид: + +```sql +CREATE TABLE name UUID '28f1c61c-2970-457a-bffe-454156ddcfef' (n UInt64) ENGINE = ...; +``` +### RENAME TABLE {#rename-table} + +Запросы `RENAME` выполняются без изменения UUID и перемещения табличных данных. Эти запросы не ожидают завершения использующих таблицу запросов и будут выполнены мгновенно. + +### DROP/DETACH TABLE {#drop-detach-table} + +При выполнении запроса `DROP TABLE` никакие данные не удаляются. Таблица помечается как удаленная, метаданные перемещаются в папку `/clickhouse_path/metadata_dropped/` и база данных уведомляет фоновый поток. Задержка перед окончательным удалением данных задается настройкой [database_atomic_delay_before_drop_table_sec](../../operations/server-configuration-parameters/settings.md#database_atomic_delay_before_drop_table_sec). +Вы можете задать синхронный режим, определяя модификатор `SYNC`. Используйте для этого настройку [database_atomic_wait_for_drop_and_detach_synchronously](../../operations/settings/settings.md#database_atomic_wait_for_drop_and_detach_synchronously). В этом случае запрос `DROP` ждет завершения `SELECT`, `INSERT` и других запросов, которые используют таблицу. Таблица будет фактически удалена, когда она не будет использоваться. 
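+For example (a minimal sketch; the table name is a hypothetical placeholder), the `SYNC` modifier makes the query wait until the table is actually removed instead of relying on the delayed background drop:
+
+``` sql
+-- wait until the table is no longer used and its data is actually deleted
+DROP TABLE IF EXISTS mydb.obsolete_table SYNC;
+```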
+ +### EXCHANGE TABLES {#exchange-tables} + +Запрос `EXCHANGE` меняет местами две таблицы атомарно. Вместо неатомарной операции: + +```sql +RENAME TABLE new_table TO tmp, old_table TO new_table, tmp TO old_table; +``` +вы можете использовать один атомарный запрос: + +``` sql +EXCHANGE TABLES new_table AND old_table; +``` + +### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database} + +Для таблиц [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) рекомендуется не указывать параметры движка - путь в ZooKeeper и имя реплики. В этом случае будут использоваться параметры конфигурации: [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) и [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). Если вы хотите определить параметры движка явно, рекомендуется использовать макрос {uuid}. Это удобно, так как автоматически генерируются уникальные пути для каждой таблицы в ZooKeeper. + +## Смотрите также + +- Системная таблица [system.databases](../../operations/system-tables/databases.md). diff --git a/docs/ru/engines/table-engines/index.md b/docs/ru/engines/table-engines/index.md index a364a3cb972..b17b2124250 100644 --- a/docs/ru/engines/table-engines/index.md +++ b/docs/ru/engines/table-engines/index.md @@ -48,6 +48,14 @@ toc_title: "Введение" Движки семейства: +- [Kafka](integrations/kafka.md#kafka) +- [MySQL](integrations/mysql.md#mysql) +- [ODBC](integrations/odbc.md#table-engine-odbc) +- [JDBC](integrations/jdbc.md#table-engine-jdbc) +- [S3](integrations/s3.md#table-engine-s3) + +### Специальные движки {#spetsialnye-dvizhki} + - [ODBC](../../engines/table-engines/integrations/odbc.md) - [JDBC](../../engines/table-engines/integrations/jdbc.md) - [MySQL](../../engines/table-engines/integrations/mysql.md) @@ -84,4 +92,3 @@ toc_title: "Введение" Чтобы получить данные из виртуального столбца, необходимо указать его название в запросе `SELECT`. `SELECT *` не отображает данные из виртуальных столбцов. При создании таблицы со столбцом, имя которого совпадает с именем одного из виртуальных столбцов таблицы, виртуальный столбец становится недоступным. Не делайте так. Чтобы помочь избежать конфликтов, имена виртуальных столбцов обычно предваряются подчеркиванием. - diff --git a/docs/ru/engines/table-engines/integrations/postgresql.md b/docs/ru/engines/table-engines/integrations/postgresql.md index 8964b1dbf02..cb8e38ae5c9 100644 --- a/docs/ru/engines/table-engines/integrations/postgresql.md +++ b/docs/ru/engines/table-engines/integrations/postgresql.md @@ -22,7 +22,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] Структура таблицы может отличаться от исходной структуры таблицы PostgreSQL: -- Имена столбцов должны быть такими же, как в исходной таблице MySQL, но вы можете использовать только некоторые из этих столбцов и в любом порядке. +- Имена столбцов должны быть такими же, как в исходной таблице PostgreSQL, но вы можете использовать только некоторые из этих столбцов и в любом порядке. - Типы столбцов могут отличаться от типов в исходной таблице PostgreSQL. ClickHouse пытается [приводить](../../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) values to the ClickHouse data types. - Настройка `external_table_functions_use_nulls` определяет как обрабатывать Nullable столбцы. 
По умолчанию 1, если 0 - табличная функция не будет делать nullable столбцы и будет вместо null выставлять значения по умолчанию для скалярного типа. Это также применимо для null значений внутри массивов. @@ -94,10 +94,10 @@ postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> SELECT * FROM test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Таблица в ClickHouse, получение данных из PostgreSQL таблицы, созданной выше: diff --git a/docs/ru/engines/table-engines/integrations/s3.md b/docs/ru/engines/table-engines/integrations/s3.md index fa10e8ebc34..216db98077c 100644 --- a/docs/ru/engines/table-engines/integrations/s3.md +++ b/docs/ru/engines/table-engines/integrations/s3.md @@ -19,7 +19,7 @@ ENGINE = S3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, - `path` — URL-адрес бакета с указанием пути к файлу. Поддерживает следующие подстановочные знаки в режиме "только чтение": `*`, `?`, `{abc,def}` и `{N..M}` где `N`, `M` — числа, `'abc'`, `'def'` — строки. Подробнее смотри [ниже](#wildcards-in-path). - `format` — [формат](../../../interfaces/formats.md#formats) файла. - `structure` — структура таблицы в формате `'column1_name column1_type, column2_name column2_type, ...'`. -- `compression` — тип сжатия. Возможные значения: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. Необязательный параметр. Если не указано, то тип сжатия определяется автоматически по расширению файла. +- `compression` — тип сжатия. Возможные значения: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Необязательный параметр. Если не указано, то тип сжатия определяется автоматически по расширению файла. **Пример** @@ -73,17 +73,17 @@ SELECT * FROM s3_engine_table LIMIT 2; Соображение безопасности: если злонамеренный пользователь попробует указать произвольные URL-адреса S3, параметр `s3_max_redirects` должен быть установлен в ноль, чтобы избежать атак [SSRF] (https://en.wikipedia.org/wiki/Server-side_request_forgery). Как альтернатива, в конфигурации сервера должен быть указан `remote_host_filter`. -## Настройки конечных точек {#endpoint-settings} +## Настройки точки приема запроса {#endpoint-settings} -Для конечной точки (которая соответствует точному префиксу URL-адреса) в конфигурационном файле могут быть заданы следующие настройки: +Для точки приема запроса (которая соответствует точному префиксу URL-адреса) в конфигурационном файле могут быть заданы следующие настройки: Обязательная настройка: -- `endpoint` — указывает префикс конечной точки. +- `endpoint` — указывает префикс точки приема запроса. Необязательные настройки: -- `access_key_id` и `secret_access_key` — указывают учетные данные для использования с данной конечной точкой. -- `use_environment_credentials` — если `true`, S3-клиент будет пытаться получить учетные данные из переменных среды и метаданных Amazon EC2 для данной конечной точки. Значение по умолчанию - `false`. -- `header` — добавляет указанный HTTP-заголовок к запросу на заданную конечную точку. Может быть определен несколько раз. +- `access_key_id` и `secret_access_key` — указывают учетные данные для использования с данной точкой приема запроса. 
+- `use_environment_credentials` — если `true`, S3-клиент будет пытаться получить учетные данные из переменных среды и метаданных Amazon EC2 для данной точки приема запроса. Значение по умолчанию - `false`. +- `header` — добавляет указанный HTTP-заголовок к запросу на заданную точку приема запроса. Может быть определен несколько раз. - `server_side_encryption_customer_key_base64` — устанавливает необходимые заголовки для доступа к объектам S3 с шифрованием SSE-C. **Пример** @@ -133,8 +133,7 @@ CREATE TABLE table_with_asterisk (name String, value UInt32) ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/{some,another}_prefix/*', 'CSV'); ``` -!!! warning "Warning" - Если список файлов содержит диапазоны чисел с ведущими нулями, используйте конструкцию с фигурными скобками для каждой цифры отдельно или используйте `?`. +Если список файлов содержит диапазоны чисел с ведущими нулями, используйте конструкцию с фигурными скобками для каждой цифры отдельно или используйте `?`. 4. Создание таблицы из файлов с именами `file-000.csv`, `file-001.csv`, … , `file-999.csv`: @@ -145,6 +144,3 @@ ENGINE = S3('https://storage.yandexcloud.net/my-test-bucket-768/big_prefix/file- **Смотрите также** - [Табличная функция S3](../../../sql-reference/table-functions/s3.md) - -[Оригинальная статья](https://clickhouse.tech/docs/ru/engines/table-engines/integrations/s3/) - diff --git a/docs/ru/engines/table-engines/mergetree-family/mergetree.md b/docs/ru/engines/table-engines/mergetree-family/mergetree.md index 7d7641a417d..b8bd259167a 100644 --- a/docs/ru/engines/table-engines/mergetree-family/mergetree.md +++ b/docs/ru/engines/table-engines/mergetree-family/mergetree.md @@ -753,7 +753,8 @@ SETTINGS storage_policy = 'moving_from_ssd_to_hdd' Необязательные параметры: -- `use_environment_credentials` — признак, нужно ли считывать учетные данные AWS из переменных окружения `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` и `AWS_SESSION_TOKEN`, если они есть. Значение по умолчанию: `false`. +- `use_environment_credentials` — признак, нужно ли считывать учетные данные AWS из сетевого окружения, а также из переменных окружения `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` и `AWS_SESSION_TOKEN`, если они есть. Значение по умолчанию: `false`. +- `use_insecure_imds_request` — признак, нужно ли использовать менее безопасное соединение при выполнении запроса к IMDS при получении учётных данных из метаданных Amazon EC2. Значение по умолчанию: `false`. - `proxy` — конфигурация прокси-сервера для конечной точки S3. Каждый элемент `uri` внутри блока `proxy` должен содержать URL прокси-сервера. - `connect_timeout_ms` — таймаут подключения к сокету в миллисекундах. Значение по умолчанию: 10 секунд. - `request_timeout_ms` — таймаут выполнения запроса в миллисекундах. Значение по умолчанию: 5 секунд. diff --git a/docs/ru/getting-started/example-datasets/cell-towers.md b/docs/ru/getting-started/example-datasets/cell-towers.md new file mode 100644 index 00000000000..a5524248019 --- /dev/null +++ b/docs/ru/getting-started/example-datasets/cell-towers.md @@ -0,0 +1,128 @@ +--- +toc_priority: 21 +toc_title: Вышки сотовой связи +--- + +# Вышки сотовой связи {#cell-towers} + +Источник этого набора данных (dataset) - самая большая в мире открытая база данных о сотовых вышках - [OpenCellid](https://www.opencellid.org/). К 2021-му году здесь накопилось более, чем 40 миллионов записей о сотовых вышках (GSM, LTE, UMTS, и т.д.) по всему миру с их географическими координатами и метаданными (код страны, сети, и т.д.). 
+ +OpenCelliD Project имеет лицензию Creative Commons Attribution-ShareAlike 4.0 International License, и мы распространяем снэпшот набора данных по условиям этой же лицензии. После авторизации можно загрузить последнюю версию набора данных. + +## Как получить набор данных {#get-the-dataset} + +1. Загрузите снэпшот набора данных за февраль 2021 [отсюда](https://datasets.clickhouse.tech/cell_towers.csv.xz) (729 MB). + +2. Если нужно, проверьте полноту и целостность при помощи команды: + +``` +md5sum cell_towers.csv.xz +8cf986f4a0d9f12c6f384a0e9192c908 cell_towers.csv.xz +``` + +3. Распакуйте набор данных при помощи команды: + +``` +xz -d cell_towers.csv.xz +``` + +4. Создайте таблицу: + +``` +CREATE TABLE cell_towers +( + radio Enum8('' = 0, 'CDMA' = 1, 'GSM' = 2, 'LTE' = 3, 'NR' = 4, 'UMTS' = 5), + mcc UInt16, + net UInt16, + area UInt16, + cell UInt64, + unit Int16, + lon Float64, + lat Float64, + range UInt32, + samples UInt32, + changeable UInt8, + created DateTime, + updated DateTime, + averageSignal UInt8 +) +ENGINE = MergeTree ORDER BY (radio, mcc, net, created); +``` + +5. Вставьте данные: +``` +clickhouse-client --query "INSERT INTO cell_towers FORMAT CSVWithNames" < cell_towers.csv +``` + +## Примеры {#examples} + +1. Количество вышек по типам: + +``` +SELECT radio, count() AS c FROM cell_towers GROUP BY radio ORDER BY c DESC + +┌─radio─┬────────c─┐ +│ UMTS │ 20686487 │ +│ LTE │ 12101148 │ +│ GSM │ 9931312 │ +│ CDMA │ 556344 │ +│ NR │ 867 │ +└───────┴──────────┘ + +5 rows in set. Elapsed: 0.011 sec. Processed 43.28 million rows, 43.28 MB (3.83 billion rows/s., 3.83 GB/s.) +``` + +2. Количество вышек по [мобильному коду страны (MCC)](https://ru.wikipedia.org/wiki/Mobile_Country_Code): + +``` +SELECT mcc, count() FROM cell_towers GROUP BY mcc ORDER BY count() DESC LIMIT 10 + +┌─mcc─┬─count()─┐ +│ 310 │ 5024650 │ +│ 262 │ 2622423 │ +│ 250 │ 1953176 │ +│ 208 │ 1891187 │ +│ 724 │ 1836150 │ +│ 404 │ 1729151 │ +│ 234 │ 1618924 │ +│ 510 │ 1353998 │ +│ 440 │ 1343355 │ +│ 311 │ 1332798 │ +└─────┴─────────┘ + +10 rows in set. Elapsed: 0.019 sec. Processed 43.28 million rows, 86.55 MB (2.33 billion rows/s., 4.65 GB/s.) +``` + +Можно увидеть, что по количеству вышек лидируют следующие страны: США, Германия, Россия. + +Вы также можете создать [внешний словарь](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) в ClickHouse для того, чтобы расшифровать эти значения. + +## Пример использования {#use-case} + +Рассмотрим применение функции `pointInPolygon`. + +1. Создаем таблицу, в которой будем хранить многоугольники: + +``` +CREATE TEMPORARY TABLE moscow (polygon Array(Tuple(Float64, Float64))); +``` + +2. 
Очертания Москвы выглядят приблизительно так ("Новая Москва" в них не включена): + +``` +INSERT INTO moscow VALUES ([(37.84172564285271, 55.78000432402266), (37.8381207618713, 55.775874525970494), (37.83979446823122, 55.775626746008065), (37.84243326983639, 55.77446586811748), (37.84262672750849, 55.771974101091104), (37.84153238623039, 55.77114545193181), (37.841124690460184, 55.76722010265554), (37.84239076983644, 55.76654891107098), (37.842283558197025, 55.76258709833121), (37.8421759312134, 55.758073999993734), (37.84198330422974, 55.75381499999371), (37.8416827275085, 55.749277102484484), (37.84157576190186, 55.74794544108413), (37.83897929098507, 55.74525257875241), (37.83739676451868, 55.74404373042019), (37.838732481460525, 55.74298009816793), (37.841183997352545, 55.743060321833575), (37.84097476190185, 55.73938799999373), (37.84048155819702, 55.73570799999372), (37.840095812164286, 55.73228210777237), (37.83983814285274, 55.73080491981639), (37.83846476321406, 55.729799917464675), (37.83835745269769, 55.72919751082619), (37.838636380279524, 55.72859509486539), (37.8395161005249, 55.727705075632784), (37.83897964285276, 55.722727886185154), (37.83862557539366, 55.72034817326636), (37.83559735744853, 55.71944437307499), (37.835370708803126, 55.71831419154461), (37.83738169402022, 55.71765218986692), (37.83823396494291, 55.71691750159089), (37.838056931213345, 55.71547311301385), (37.836812846557606, 55.71221445615604), (37.83522525396725, 55.709331054395555), (37.83269301586908, 55.70953687463627), (37.829667367706236, 55.70903403789297), (37.83311126588435, 55.70552351822608), (37.83058993121339, 55.70041317726053), (37.82983872750851, 55.69883771404813), (37.82934501586913, 55.69718947487017), (37.828926414016685, 55.69504441658371), (37.82876530422971, 55.69287499999378), (37.82894754100031, 55.690759754047335), (37.827697554878185, 55.68951421135665), (37.82447346292115, 55.68965045405069), (37.83136543914793, 55.68322046195302), (37.833554015869154, 55.67814012759211), (37.83544184655761, 55.67295011628339), (37.837480388885474, 55.6672498719639), (37.838960677246064, 55.66316274139358), (37.83926093121332, 55.66046999999383), (37.839025050262435, 55.65869897264431), (37.83670784390257, 55.65794084879904), (37.835656529083245, 55.65694309303843), (37.83704060449217, 55.65689306460552), (37.83696819873806, 55.65550363526252), (37.83760389616388, 55.65487847246661), (37.83687972750851, 55.65356745541324), (37.83515216004943, 55.65155951234079), (37.83312418518067, 55.64979413590619), (37.82801726983639, 55.64640836412121), (37.820614174591, 55.64164525405531), (37.818908190475426, 55.6421883258084), (37.81717543386075, 55.64112490388471), (37.81690987037274, 55.63916106913107), (37.815099354492155, 55.637925371757085), (37.808769150787356, 55.633798276884455), (37.80100123544311, 55.62873670012244), (37.79598013491824, 55.62554336109055), (37.78634567724606, 55.62033499605651), (37.78334147619623, 55.618768681480326), (37.77746201055901, 55.619855533402706), (37.77527329626457, 55.61909966711279), (37.77801986242668, 55.618770300976294), (37.778212973541216, 55.617257701952106), (37.77784818518065, 55.61574504433011), (37.77016867724609, 55.61148576294007), (37.760191219573976, 55.60599579539028), (37.75338926983641, 55.60227892751446), (37.746329965606634, 55.59920577639331), (37.73939925396728, 55.59631430313617), (37.73273665739439, 55.5935318803559), (37.7299954450912, 55.59350760316188), (37.7268679946899, 55.59469840523759), (37.72626726983634, 55.59229549697373), 
(37.7262673598022, 55.59081598950582), (37.71897193121335, 55.5877595845419), (37.70871550793456, 55.58393177431724), (37.700497489410374, 55.580917323756644), (37.69204305026244, 55.57778089778455), (37.68544477378839, 55.57815154690915), (37.68391050793454, 55.57472945079756), (37.678803592590306, 55.57328235936491), (37.6743402539673, 55.57255251445782), (37.66813862698363, 55.57216388774464), (37.617927457672096, 55.57505691895805), (37.60443099999999, 55.5757737568051), (37.599683515869145, 55.57749105910326), (37.59754177842709, 55.57796291823627), (37.59625834786988, 55.57906686095235), (37.59501783265684, 55.57746616444403), (37.593090671936025, 55.57671634534502), (37.587018007904, 55.577944600233785), (37.578692203704804, 55.57982895000019), (37.57327546607398, 55.58116294118248), (37.57385012109279, 55.581550362779), (37.57399562266922, 55.5820107079112), (37.5735356072979, 55.58226289171689), (37.57290393054962, 55.582393529795155), (37.57037722355653, 55.581919415056234), (37.5592298306885, 55.584471614867844), (37.54189249206543, 55.58867650795186), (37.5297256269836, 55.59158133551745), (37.517837865081766, 55.59443656218868), (37.51200186508174, 55.59635625174229), (37.506808949737554, 55.59907823904434), (37.49820432275389, 55.6062944994944), (37.494406071441674, 55.60967103463367), (37.494760001358024, 55.61066689753365), (37.49397137107085, 55.61220931698269), (37.49016528606031, 55.613417718449064), (37.48773249206542, 55.61530616333343), (37.47921386508177, 55.622640129112334), (37.470652153442394, 55.62993723476164), (37.46273446298218, 55.6368075123157), (37.46350692265317, 55.64068225239439), (37.46050283203121, 55.640794546982576), (37.457627470916734, 55.64118904154646), (37.450718034393326, 55.64690488145138), (37.44239252645875, 55.65397824729769), (37.434587576721185, 55.66053543155961), (37.43582144975277, 55.661693766520735), (37.43576786245721, 55.662755031737014), (37.430982915344174, 55.664610641628116), (37.428547447097685, 55.66778515273695), (37.42945134592044, 55.668633314343566), (37.42859571562949, 55.66948145750025), (37.4262836402282, 55.670813882451405), (37.418709037048295, 55.6811141674414), (37.41922139651101, 55.68235377885389), (37.419218771842885, 55.68359335082235), (37.417196501327446, 55.684375235224735), (37.41607020370478, 55.68540557585352), (37.415640857147146, 55.68686637150793), (37.414632153442334, 55.68903015131686), (37.413344899475064, 55.690896881757396), (37.41171432275391, 55.69264232162232), (37.40948282275393, 55.69455101638112), (37.40703674603271, 55.69638690385348), (37.39607169577025, 55.70451821283731), (37.38952706878662, 55.70942491932811), (37.387778313491815, 55.71149057784176), (37.39049275399779, 55.71419814298992), (37.385557272491454, 55.7155489617061), (37.38388335714726, 55.71849856042102), (37.378368238098155, 55.7292763261685), (37.37763597123337, 55.730845879211614), (37.37890062088197, 55.73167906388319), (37.37750451918789, 55.734703664681774), (37.375610832015965, 55.734851959522246), (37.3723813571472, 55.74105626086403), (37.37014935714723, 55.746115620904355), (37.36944173016362, 55.750883999993725), (37.36975304365541, 55.76335905525834), (37.37244070571134, 55.76432079697595), (37.3724259757175, 55.76636979670426), (37.369922155757884, 55.76735417953104), (37.369892695770275, 55.76823419316575), (37.370214730163575, 55.782312184391266), (37.370493611114505, 55.78436801120489), (37.37120164550783, 55.78596427165359), (37.37284851456452, 55.7874378183096), (37.37608325135799, 55.7886695054807), 
(37.3764587460632, 55.78947647305964), (37.37530000265506, 55.79146512926804), (37.38235915344241, 55.79899647809345), (37.384344043655396, 55.80113596939471), (37.38594269577028, 55.80322699999366), (37.38711208598329, 55.804919036911976), (37.3880239841309, 55.806610999993666), (37.38928977249147, 55.81001864976979), (37.39038389947512, 55.81348641242801), (37.39235781481933, 55.81983538336746), (37.393709457672124, 55.82417822811877), (37.394685720901464, 55.82792275755836), (37.39557615344238, 55.830447148154136), (37.39844478226658, 55.83167107969975), (37.40019761214057, 55.83151823557964), (37.400398790382326, 55.83264967594742), (37.39659544313046, 55.83322180909622), (37.39667059524539, 55.83402792148566), (37.39682089947515, 55.83638877400216), (37.39643489154053, 55.83861656112751), (37.3955338994751, 55.84072348043264), (37.392680272491454, 55.84502158126453), (37.39241188227847, 55.84659117913199), (37.392529730163616, 55.84816071336481), (37.39486835714723, 55.85288092980303), (37.39873052645878, 55.859893456073635), (37.40272161111449, 55.86441833633205), (37.40697072750854, 55.867579567544375), (37.410007082016016, 55.868369880337), (37.4120992989502, 55.86920843741314), (37.412668021163924, 55.87055369615854), (37.41482461111453, 55.87170587948249), (37.41862266137694, 55.873183961039565), (37.42413732540892, 55.874879126654704), (37.4312182698669, 55.875614937236705), (37.43111093783558, 55.8762723478417), (37.43332105622856, 55.87706546369396), (37.43385747619623, 55.87790681284802), (37.441303050262405, 55.88027084462084), (37.44747234260555, 55.87942070143253), (37.44716141796871, 55.88072960917233), (37.44769797085568, 55.88121221323979), (37.45204320500181, 55.882080694420715), (37.45673176190186, 55.882346110794586), (37.463383999999984, 55.88252729504517), (37.46682797486874, 55.88294937719063), (37.470014457672086, 55.88361266759345), (37.47751410450743, 55.88546991372396), (37.47860317658232, 55.88534929207307), (37.48165826025772, 55.882563306475106), (37.48316434442331, 55.8815803226785), (37.483831555817645, 55.882427612793315), (37.483182967125686, 55.88372791409729), (37.483092277908824, 55.88495581062434), (37.4855716508179, 55.8875561994203), (37.486440636245746, 55.887827444039566), (37.49014203439328, 55.88897899871799), (37.493210285705544, 55.890208937135604), (37.497512451065035, 55.891342397444696), (37.49780744510645, 55.89174030252967), (37.49940333499519, 55.89239745507079), (37.50018383334346, 55.89339220941865), (37.52421672750851, 55.903869074155224), (37.52977457672118, 55.90564076517974), (37.53503220370484, 55.90661661218259), (37.54042858064267, 55.90714113744566), (37.54320461007303, 55.905645048442985), (37.545686966066306, 55.906608607018505), (37.54743976120755, 55.90788552162358), (37.55796999999999, 55.90901557907218), (37.572711542327866, 55.91059395704873), (37.57942799999998, 55.91073854155573), (37.58502865872187, 55.91009969268444), (37.58739968913264, 55.90794809960554), (37.59131567193598, 55.908713267595054), (37.612687423278814, 55.902866854295375), (37.62348079629517, 55.90041967242986), (37.635797880950896, 55.898141151686396), (37.649487626983664, 55.89639275532968), (37.65619302513125, 55.89572360207488), (37.66294133862307, 55.895295577183965), (37.66874564418033, 55.89505457604897), (37.67375601586915, 55.89254677027454), (37.67744661901856, 55.8947775867987), (37.688347, 55.89450045676125), (37.69480554232789, 55.89422926332761), (37.70107096560668, 55.89322256101114), (37.705962965606716, 55.891763491662616), 
(37.711885134918205, 55.889110234998974), (37.71682005026245, 55.886577568759876), (37.7199315476074, 55.88458159806678), (37.72234560316464, 55.882281005794134), (37.72364385977171, 55.8809452036196), (37.725371142837474, 55.8809722706006), (37.727870902099546, 55.88037213862385), (37.73394330422971, 55.877941504088696), (37.745339592590376, 55.87208120378722), (37.75525267724611, 55.86703807949492), (37.76919976190188, 55.859821640197474), (37.827835219574, 55.82962968399116), (37.83341438888553, 55.82575289922351), (37.83652584655761, 55.82188784027888), (37.83809213491821, 55.81612575504693), (37.83605359521481, 55.81460347077685), (37.83632178569025, 55.81276696067908), (37.838623105812026, 55.811486181656385), (37.83912198147584, 55.807329380532785), (37.839079078033414, 55.80510270463816), (37.83965844708251, 55.79940712529036), (37.840581150787344, 55.79131399999368), (37.84172564285271, 55.78000432402266)]); +``` + +3. Проверяем, сколько сотовых вышек находится в Москве: + +``` +SELECT count() FROM cell_towers WHERE pointInPolygon((lon, lat), (SELECT * FROM moscow)) + +┌─count()─┐ +│ 310463 │ +└─────────┘ + +1 rows in set. Elapsed: 0.067 sec. Processed 43.28 million rows, 692.42 MB (645.83 million rows/s., 10.33 GB/s.) +``` + +Вы можете протестировать другие запросы с помощью интерактивного ресурса [Playground](https://gh-api.clickhouse.tech/play?user=play). Например, [вот так](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIG1jYywgY291bnQoKSBGUk9NIGNlbGxfdG93ZXJzIEdST1VQIEJZIG1jYyBPUkRFUiBCWSBjb3VudCgpIERFU0M=). Однако, обратите внимание, что здесь нельзя создавать временные таблицы. diff --git a/docs/ru/getting-started/example-datasets/index.md b/docs/ru/getting-started/example-datasets/index.md index f590300adda..756b3a75dee 100644 --- a/docs/ru/getting-started/example-datasets/index.md +++ b/docs/ru/getting-started/example-datasets/index.md @@ -16,4 +16,5 @@ toc_title: "Введение" - [AMPLab Big Data Benchmark](amplab-benchmark.md) - [Данные о такси в Нью-Йорке](nyc-taxi.md) - [OnTime](ontime.md) +- [Вышки сотовой связи](../../getting-started/example-datasets/cell-towers.md) diff --git a/docs/ru/guides/apply-catboost-model.md b/docs/ru/guides/apply-catboost-model.md index 11964c57fc7..db2be63692f 100644 --- a/docs/ru/guides/apply-catboost-model.md +++ b/docs/ru/guides/apply-catboost-model.md @@ -158,7 +158,9 @@ FROM amazon_train /home/catboost/data/libcatboostmodel.so /home/catboost/models/*_model.xml ``` - +!!! note "Примечание" + Вы можете позднее изменить путь к конфигурации модели CatBoost без перезагрузки сервера. + ## 4. Запустите вывод модели из SQL {#run-model-inference} Для тестирования модели запустите клиент ClickHouse `$ clickhouse client`. diff --git a/docs/ru/interfaces/cli.md b/docs/ru/interfaces/cli.md index 96ec36be79f..277b73a6d36 100644 --- a/docs/ru/interfaces/cli.md +++ b/docs/ru/interfaces/cli.md @@ -121,6 +121,7 @@ $ clickhouse-client --param_tbl="numbers" --param_db="system" --param_col="numbe - `--user, -u` — имя пользователя, по умолчанию — ‘default’. - `--password` — пароль, по умолчанию — пустая строка. - `--query, -q` — запрос для выполнения, при использовании в неинтерактивном режиме. +- `--queries-file, -qf` - путь к файлу с запросами для выполнения. Необходимо указать только одну из опций: `query` или `queries-file`. - `--database, -d` — выбрать текущую БД. Без указания значение берется из настроек сервера (по умолчанию — БД ‘default’). 
- `--multiline, -m` — если указано — разрешить многострочные запросы, не отправлять запрос по нажатию Enter. - `--multiquery, -n` — если указано — разрешить выполнять несколько запросов, разделённых точкой с запятой. @@ -130,6 +131,7 @@ $ clickhouse-client --param_tbl="numbers" --param_db="system" --param_col="numbe - `--stacktrace` — если указано, в случае исключения, выводить также его стек-трейс. - `--config-file` — имя конфигурационного файла. - `--secure` — если указано, будет использован безопасный канал. +- `--history_file` - путь к файлу с историей команд. - `--param_` — значение параметра для [запроса с параметрами](#cli-queries-with-parameters). Начиная с версии 20.5, в `clickhouse-client` есть автоматическая подсветка синтаксиса (включена всегда). diff --git a/docs/ru/interfaces/third-party/gui.md b/docs/ru/interfaces/third-party/gui.md index f913a0ff2cc..156f7130bc5 100644 --- a/docs/ru/interfaces/third-party/gui.md +++ b/docs/ru/interfaces/third-party/gui.md @@ -166,4 +166,19 @@ toc_title: "Визуальные интерфейсы от сторонних р [Как сконфигурировать ClickHouse в Looker.](https://docs.looker.com/setup-and-management/database-config/clickhouse) -[Original article](https://clickhouse.tech/docs/ru/interfaces/third-party/gui/) +### SeekTable {#seektable} + +[SeekTable](https://www.seektable.com) — это аналитический инструмент для самостоятельного анализа и обработки данных бизнес-аналитики. Он доступен как в виде облачного сервиса, так и в виде локальной версии. Отчеты из SeekTable могут быть встроены в любое веб-приложение. + +Основные возможности: + +- Удобный конструктор отчетов. +- Гибкая настройка отчетов SQL и создание запросов для специфичных отчетов. +- Интегрируется с ClickHouse, используя собственную точку приема запроса TCP/IP или интерфейс HTTP(S) (два разных драйвера). +- Поддерживает всю мощь диалекта ClickHouse SQL для построения запросов по различным измерениям и показателям. +- [WEB-API](https://www.seektable.com/help/web-api-integration) для автоматизированной генерации отчетов. +- Процесс разработки отчетов поддерживает [резервное копирование/восстановление данных](https://www.seektable.com/help/self-hosted-backup-restore); конфигурация моделей данных (кубов) / отчетов представляет собой удобочитаемый XML-файл, который может храниться в системе контроля версий. + +SeekTable [бесплатен](https://www.seektable.com/help/cloud-pricing) для личного/индивидуального использования. + +[Как сконфигурировать подключение ClickHouse в SeekTable.](https://www.seektable.com/help/clickhouse-pivot-table) diff --git a/docs/ru/operations/server-configuration-parameters/settings.md b/docs/ru/operations/server-configuration-parameters/settings.md index 109146d27f4..be9e2deab74 100644 --- a/docs/ru/operations/server-configuration-parameters/settings.md +++ b/docs/ru/operations/server-configuration-parameters/settings.md @@ -101,6 +101,12 @@ ClickHouse проверяет условия для `min_part_size` и `min_part ``` +## database_atomic_delay_before_drop_table_sec {#database_atomic_delay_before_drop_table_sec} + +Устанавливает задержку перед удалением табличных данных, в секундах. Если запрос имеет идентификатор `SYNC`, эта настройка игнорируется. + +Значение по умолчанию: `480` (8 минут). + ## default\_database {#default-database} База данных по умолчанию. @@ -285,7 +291,7 @@ ClickHouse проверяет условия для `min_part_size` и `min_part ## interserver_http_host {#interserver-http-host} -Имя хоста, которое могут использовать другие серверы для обращения к этому. 
+Имя хоста, которое могут использовать другие серверы для обращения к этому хосту. Если не указано, то определяется аналогично команде `hostname -f`. @@ -297,11 +303,36 @@ ClickHouse проверяет условия для `min_part_size` и `min_part example.yandex.ru ``` +## interserver_https_port {#interserver-https-port} + +Порт для обмена данными между репликами ClickHouse по протоколу `HTTPS`. + +**Пример** + +``` xml +9010 +``` + +## interserver_https_host {#interserver-https-host} + +Имя хоста, которое могут использовать другие реплики для обращения к нему по протоколу `HTTPS`. + +**Пример** + +``` xml +example.yandex.ru +``` + + + ## interserver_http_credentials {#server-settings-interserver-http-credentials} Имя пользователя и пароль, использующиеся для аутентификации при [репликации](../../operations/server-configuration-parameters/settings.md) движками Replicated\*. Это имя пользователя и пароль используются только для взаимодействия между репликами кластера и никак не связаны с аутентификацией клиентов ClickHouse. Сервер проверяет совпадение имени и пароля для соединяющихся с ним реплик, а также использует это же имя и пароль для соединения с другими репликами. Соответственно, эти имя и пароль должны быть прописаны одинаковыми для всех реплик кластера. По умолчанию аутентификация не используется. +!!! note "Примечание" + Эти учетные данные являются общими для обмена данными по протоколам `HTTP` и `HTTPS`. + Раздел содержит следующие параметры: - `user` — имя пользователя. diff --git a/docs/ru/operations/settings/merge-tree-settings.md b/docs/ru/operations/settings/merge-tree-settings.md index bfc0b0a2644..f9093d379e3 100644 --- a/docs/ru/operations/settings/merge-tree-settings.md +++ b/docs/ru/operations/settings/merge-tree-settings.md @@ -55,6 +55,26 @@ Eсли число кусков в партиции превышает знач ClickHouse искусственно выполняет `INSERT` дольше (добавляет ‘sleep’), чтобы фоновый механизм слияния успевал слиять куски быстрее, чем они добавляются. +## inactive_parts_to_throw_insert {#inactive-parts-to-throw-insert} + +Если число неактивных кусков в партиции превышает значение `inactive_parts_to_throw_insert`, `INSERT` прерывается с исключением «Too many inactive parts (N). Parts cleaning are processing significantly slower than inserts». + +Возможные значения: + +- Положительное целое число. + +Значение по умолчанию: 0 (не ограничено). + +## inactive_parts_to_delay_insert {#inactive-parts-to-delay-insert} + +Если число неактивных кусков в партиции больше или равно значению `inactive_parts_to_delay_insert`, `INSERT` искусственно замедляется. Это полезно, когда сервер не может быстро очистить неактивные куски. + +Возможные значения: + +- Положительное целое число. + +Значение по умолчанию: 0 (не ограничено). + ## max_delay_to_insert {#max-delay-to-insert} Величина в секундах, которая используется для расчета задержки `INSERT`, если число кусков в партиции превышает значение [parts_to_delay_insert](#parts-to-delay-insert). diff --git a/docs/ru/operations/settings/settings.md b/docs/ru/operations/settings/settings.md index d10ac2ab317..4951be49629 100644 --- a/docs/ru/operations/settings/settings.md +++ b/docs/ru/operations/settings/settings.md @@ -844,8 +844,6 @@ SELECT type, query FROM system.query_log WHERE log_comment = 'log_comment test' Значение по умолчанию: количество процессорных ядер без учёта Hyper-Threading. -Если на сервере обычно исполняется менее одного запроса SELECT одновременно, то выставите этот параметр в значение чуть меньше количества реальных процессорных ядер. 
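Например, число потоков можно ограничить и на уровне одного запроса при помощи секции `SETTINGS` (условный набросок; таблица и значения выбраны только для иллюстрации):

``` sql
-- параллелизм ограничивается только для этого запроса,
-- глобальное значение max_threads не меняется
SELECT sum(number)
FROM numbers_mt(1000000000)
SETTINGS max_threads = 4;
```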
- Для запросов, которые быстро завершаются из-за LIMIT-а, имеет смысл выставить max_threads поменьше. Например, если нужное количество записей находится в каждом блоке, то при max_threads = 8 будет считано 8 блоков, хотя достаточно было прочитать один. Чем меньше `max_threads`, тем меньше будет использоваться оперативки. @@ -2690,6 +2688,28 @@ SELECT * FROM test2; Значение по умолчанию: `0`. +## database_atomic_wait_for_drop_and_detach_synchronously {#database_atomic_wait_for_drop_and_detach_synchronously} + +Добавляет модификатор `SYNC` ко всем запросам `DROP` и `DETACH`. + +Возможные значения: + +- 0 — Запросы будут выполняться с задержкой. +- 1 — Запросы будут выполняться без задержки. + +Значение по умолчанию: `0`. + +## show_table_uuid_in_table_create_query_if_not_nil {#show_table_uuid_in_table_create_query_if_not_nil} + +Устанавливает отображение запроса `SHOW TABLE`. + +Возможные значения: + +- 0 — Запрос будет отображаться без UUID таблицы. +- 1 — Запрос будет отображаться с UUID таблицы. + +Значение по умолчанию: `0`. + ## allow_experimental_live_view {#allow-experimental-live-view} Включает экспериментальную возможность использования [LIVE-представлений](../../sql-reference/statements/create/view.md#live-view). @@ -2724,4 +2744,15 @@ SELECT * FROM test2; Значение по умолчанию: `60`. +## check_query_single_value_result {#check_query_single_value_result} + +Определяет уровень детализации результата для запросов [CHECK TABLE](../../sql-reference/statements/check-table.md#checking-mergetree-tables) для таблиц семейства `MergeTree`. + +Возможные значения: + +- 0 — запрос возвращает статус каждого куска данных таблицы. +- 1 — запрос возвращает статус таблицы в целом. + +Значение по умолчанию: `0`. + [Оригинальная статья](https://clickhouse.tech/docs/ru/operations/settings/settings/) diff --git a/docs/ru/operations/system-tables/columns.md b/docs/ru/operations/system-tables/columns.md index af4cff85439..b8a0aef2299 100644 --- a/docs/ru/operations/system-tables/columns.md +++ b/docs/ru/operations/system-tables/columns.md @@ -4,7 +4,9 @@ С помощью этой таблицы можно получить информацию аналогично запросу [DESCRIBE TABLE](../../sql-reference/statements/misc.md#misc-describe-table), но для многих таблиц сразу. -Таблица `system.columns` содержит столбцы (тип столбца указан в скобках): +Колонки [временных таблиц](../../sql-reference/statements/create/table.md#temporary-tables) содержатся в `system.columns` только в тех сессиях, в которых эти таблицы были созданы. Поле `database` у таких колонок пустое. + +Cтолбцы: - `database` ([String](../../sql-reference/data-types/string.md)) — имя базы данных. - `table` ([String](../../sql-reference/data-types/string.md)) — имя таблицы. @@ -23,3 +25,46 @@ - `is_in_sampling_key` ([UInt8](../../sql-reference/data-types/int-uint.md)) — флаг, показывающий включение столбца в ключ выборки. - `compression_codec` ([String](../../sql-reference/data-types/string.md)) — имя кодека сжатия. 
+**Пример** + +```sql +SELECT * FROM system.columns LIMIT 2 FORMAT Vertical; +``` + +```text +Row 1: +────── +database: system +table: aggregate_function_combinators +name: name +type: String +default_kind: +default_expression: +data_compressed_bytes: 0 +data_uncompressed_bytes: 0 +marks_bytes: 0 +comment: +is_in_partition_key: 0 +is_in_sorting_key: 0 +is_in_primary_key: 0 +is_in_sampling_key: 0 +compression_codec: + +Row 2: +────── +database: system +table: aggregate_function_combinators +name: is_internal +type: UInt8 +default_kind: +default_expression: +data_compressed_bytes: 0 +data_uncompressed_bytes: 0 +marks_bytes: 0 +comment: +is_in_partition_key: 0 +is_in_sorting_key: 0 +is_in_primary_key: 0 +is_in_sampling_key: 0 +compression_codec: +``` diff --git a/docs/ru/operations/system-tables/replication_queue.md b/docs/ru/operations/system-tables/replication_queue.md index 56e8c695a21..2f9d80be16f 100644 --- a/docs/ru/operations/system-tables/replication_queue.md +++ b/docs/ru/operations/system-tables/replication_queue.md @@ -14,7 +14,17 @@ - `node_name` ([String](../../sql-reference/data-types/string.md)) — имя узла в ZooKeeper. -- `type` ([String](../../sql-reference/data-types/string.md)) — тип задачи в очереди: `GET_PARTS`, `MERGE_PARTS`, `DETACH_PARTS`, `DROP_PARTS` или `MUTATE_PARTS`. +- `type` ([String](../../sql-reference/data-types/string.md)) — тип задачи в очереди: + + - `GET_PART` — скачать кусок с другой реплики. + - `ATTACH_PART` — присоединить кусок. Задача может быть выполнена и с куском из нашей собственной реплики (если он находится в папке `detached`). Эта задача практически идентична задаче `GET_PART`, лишь немного оптимизирована. + - `MERGE_PARTS` — выполнить слияние кусков. + - `DROP_RANGE` — удалить куски в партициях из указнного диапазона. + - `CLEAR_COLUMN` — удалить указанный столбец из указанной партиции. Примечание: не используется с 20.4. + - `CLEAR_INDEX` — удалить указанный индекс из указанной партиции. Примечание: не используется с 20.4. + - `REPLACE_RANGE` — удалить указанный диапазон кусков и заменить их на новые. + - `MUTATE_PART` — применить одну или несколько мутаций к куску. + - `ALTER_METADATA` — применить изменения структуры таблицы в результате запросов с выражением `ALTER`. - `create_time` ([Datetime](../../sql-reference/data-types/datetime.md)) — дата и время отправки задачи на выполнение. @@ -77,4 +87,3 @@ last_postpone_time: 1970-01-01 03:00:00 **Смотрите также** - [Управление таблицами ReplicatedMergeTree](../../sql-reference/statements/system.md#query-language-system-replicated) - diff --git a/docs/ru/operations/system-tables/tables.md b/docs/ru/operations/system-tables/tables.md index 42e55b1f6b7..11bb6a9eda2 100644 --- a/docs/ru/operations/system-tables/tables.md +++ b/docs/ru/operations/system-tables/tables.md @@ -1,39 +1,94 @@ # system.tables {#system-tables} -Содержит метаданные каждой таблицы, о которой знает сервер. Отсоединённые таблицы не отображаются в `system.tables`. +Содержит метаданные каждой таблицы, о которой знает сервер. -Эта таблица содержит следующие столбцы (тип столбца показан в скобках): +Отсоединённые таблицы ([DETACH](../../sql-reference/statements/detach.md)) не отображаются в `system.tables`. -- `database String` — имя базы данных, в которой находится таблица. -- `name` (String) — имя таблицы. -- `engine` (String) — движок таблицы (без параметров). -- `is_temporary` (UInt8) — флаг, указывающий на то, временная это таблица или нет. -- `data_path` (String) — путь к данным таблицы в файловой системе. 
-- `metadata_path` (String) — путь к табличным метаданным в файловой системе. -- `metadata_modification_time` (DateTime) — время последней модификации табличных метаданных. -- `dependencies_database` (Array(String)) — зависимости базы данных. -- `dependencies_table` (Array(String)) — табличные зависимости (таблицы [MaterializedView](../../engines/table-engines/special/materializedview.md), созданные на базе текущей таблицы). -- `create_table_query` (String) — запрос, которым создавалась таблица. -- `engine_full` (String) — параметры табличного движка. -- `partition_key` (String) — ключ партиционирования таблицы. -- `sorting_key` (String) — ключ сортировки таблицы. -- `primary_key` (String) - первичный ключ таблицы. -- `sampling_key` (String) — ключ сэмплирования таблицы. -- `storage_policy` (String) - политика хранения данных: +Информация о [временных таблицах](../../sql-reference/statements/create/table.md#temporary-tables) содержится в `system.tables` только в тех сессиях, в которых эти таблицы были созданы. Поле `database` у таких таблиц пустое, а флаг `is_temporary` включен. + +Столбцы: + +- `database` ([String](../../sql-reference/data-types/string.md)) — имя базы данных, в которой находится таблица. +- `name` ([String](../../sql-reference/data-types/string.md)) — имя таблицы. +- `engine` ([String](../../sql-reference/data-types/string.md)) — движок таблицы (без параметров). +- `is_temporary` ([UInt8](../../sql-reference/data-types/int-uint.md)) — флаг, указывающий на то, временная это таблица или нет. +- `data_path` ([String](../../sql-reference/data-types/string.md)) — путь к данным таблицы в файловой системе. +- `metadata_path` ([String](../../sql-reference/data-types/string.md)) — путь к табличным метаданным в файловой системе. +- `metadata_modification_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — время последней модификации табличных метаданных. +- `dependencies_database` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — зависимости базы данных. +- `dependencies_table` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — табличные зависимости (таблицы [MaterializedView](../../engines/table-engines/special/materializedview.md), созданные на базе текущей таблицы). +- `create_table_query` ([String](../../sql-reference/data-types/string.md)) — запрос, при помощи которого создавалась таблица. +- `engine_full` ([String](../../sql-reference/data-types/string.md)) — параметры табличного движка. +- `partition_key` ([String](../../sql-reference/data-types/string.md)) — ключ партиционирования таблицы. +- `sorting_key` ([String](../../sql-reference/data-types/string.md)) — ключ сортировки таблицы. +- `primary_key` ([String](../../sql-reference/data-types/string.md)) - первичный ключ таблицы. +- `sampling_key` ([String](../../sql-reference/data-types/string.md)) — ключ сэмплирования таблицы. +- `storage_policy` ([String](../../sql-reference/data-types/string.md)) - политика хранения данных: - [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes) - [Distributed](../../engines/table-engines/special/distributed.md#distributed) -- `total_rows` (Nullable(UInt64)) - общее количество строк, если есть возможность быстро определить точное количество строк в таблице, в противном случае `Null` (включая базовую таблицу `Buffer`). 
+- `total_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - общее количество строк, если есть возможность быстро определить точное количество строк в таблице, в противном случае `NULL` (включая базовую таблицу `Buffer`). -- `total_bytes` (Nullable(UInt64)) - общее количество байт, если можно быстро определить точное количество байт для таблицы на накопителе, в противном случае `Null` (**не включает** в себя никакого базового хранилища). +- `total_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - общее количество байт, если можно быстро определить точное количество байт для таблицы на накопителе, в противном случае `NULL` (не включает в себя никакого базового хранилища). - Если таблица хранит данные на диске, возвращает используемое пространство на диске (т. е. сжатое). - Если таблица хранит данные в памяти, возвращает приблизительное количество используемых байт в памяти. -- `lifetime_rows` (Nullable(UInt64)) - общее количество строк, добавленных оператором `INSERT` с момента запуска сервера (только для таблиц `Buffer`). +- `lifetime_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - общее количество строк, добавленных оператором `INSERT` с момента запуска сервера (только для таблиц `Buffer`). -- `lifetime_bytes` (Nullable(UInt64)) - общее количество байт, добавленных оператором `INSERT` с момента запуска сервера (только для таблиц `Buffer`). +- `lifetime_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) - общее количество байт, добавленных оператором `INSERT` с момента запуска сервера (только для таблиц `Buffer`). Таблица `system.tables` используется при выполнении запроса `SHOW TABLES`. +**Пример** + +```sql +SELECT * FROM system.tables LIMIT 2 FORMAT Vertical; +``` + +```text +Row 1: +────── +database: system +name: aggregate_function_combinators +uuid: 00000000-0000-0000-0000-000000000000 +engine: SystemAggregateFunctionCombinators +is_temporary: 0 +data_paths: [] +metadata_path: /var/lib/clickhouse/metadata/system/aggregate_function_combinators.sql +metadata_modification_time: 1970-01-01 03:00:00 +dependencies_database: [] +dependencies_table: [] +create_table_query: +engine_full: +partition_key: +sorting_key: +primary_key: +sampling_key: +storage_policy: +total_rows: ᴺᵁᴸᴸ +total_bytes: ᴺᵁᴸᴸ + +Row 2: +────── +database: system +name: asynchronous_metrics +uuid: 00000000-0000-0000-0000-000000000000 +engine: SystemAsynchronousMetrics +is_temporary: 0 +data_paths: [] +metadata_path: /var/lib/clickhouse/metadata/system/asynchronous_metrics.sql +metadata_modification_time: 1970-01-01 03:00:00 +dependencies_database: [] +dependencies_table: [] +create_table_query: +engine_full: +partition_key: +sorting_key: +primary_key: +sampling_key: +storage_policy: +total_rows: ᴺᵁᴸᴸ +total_bytes: ᴺᵁᴸᴸ +``` diff --git a/docs/ru/operations/system-tables/trace_log.md b/docs/ru/operations/system-tables/trace_log.md index 3d22e4eabfd..6d8130c1d00 100644 --- a/docs/ru/operations/system-tables/trace_log.md +++ b/docs/ru/operations/system-tables/trace_log.md @@ -18,10 +18,12 @@ ClickHouse создает эту таблицу когда утсановлен Во время соединения с сервером через `clickhouse-client`, вы видите строку похожую на `Connected to ClickHouse server version 19.18.1 revision 54429.`. 
Это поле содержит номер после `revision`, но не содержит строку после `version`. -- `timer_type`([Enum8](../../sql-reference/data-types/enum.md)) — тип таймера: +- `trace_type`([Enum8](../../sql-reference/data-types/enum.md)) — тип трассировки: - - `Real` означает wall-clock время. - - `CPU` означает относительное CPU время. + - `Real` — сбор трассировок стека адресов вызова по времени wall-clock. + - `CPU` — сбор трассировок стека адресов вызова по времени CPU. + - `Memory` — сбор выделенной памяти, когда ее размер превышает относительный инкремент. + - `MemorySample` — сбор случайно выделенной памяти. - `thread_number`([UInt32](../../sql-reference/data-types/int-uint.md)) — идентификатор треда. diff --git a/docs/ru/operations/update.md b/docs/ru/operations/update.md index 5c187ed1604..a3e87b52ede 100644 --- a/docs/ru/operations/update.md +++ b/docs/ru/operations/update.md @@ -3,7 +3,7 @@ toc_priority: 47 toc_title: "Обновление ClickHouse" --- -# Обновление ClickHouse {#obnovlenie-clickhouse} +# Обновление ClickHouse {#clickhouse-upgrade} Если ClickHouse установлен с помощью deb-пакетов, выполните следующие команды на сервере: @@ -15,4 +15,17 @@ $ sudo service clickhouse-server restart Если ClickHouse установлен не из рекомендуемых deb-пакетов, используйте соответствующий метод обновления. -ClickHouse не поддерживает распределенное обновление. Операция должна выполняться последовательно на каждом отдельном сервере. Не обновляйте все серверы в кластере одновременно, иначе кластер становится недоступным в течение некоторого времени. +!!! note "Примечание" + Вы можете обновить сразу несколько серверов, кроме случая, когда все реплики одного шарда отключены. + +Обновление ClickHouse до определенной версии: + +**Пример** + +`xx.yy.a.b` — это номер текущей стабильной версии. Последнюю стабильную версию можно узнать [здесь](https://github.com/ClickHouse/ClickHouse/releases) + +```bash +$ sudo apt-get update +$ sudo apt-get install clickhouse-server=xx.yy.a.b clickhouse-client=xx.yy.a.b clickhouse-common-static=xx.yy.a.b +$ sudo service clickhouse-server restart +``` diff --git a/docs/ru/sql-reference/aggregate-functions/combinators.md b/docs/ru/sql-reference/aggregate-functions/combinators.md index eb52fa9bc75..74f9d1c1c05 100644 --- a/docs/ru/sql-reference/aggregate-functions/combinators.md +++ b/docs/ru/sql-reference/aggregate-functions/combinators.md @@ -27,6 +27,40 @@ toc_title: "Комбинаторы агрегатных функций" Комбинаторы -If и -Array можно сочетать. При этом, должен сначала идти Array, а потом If. Примеры: `uniqArrayIf(arr, cond)`, `quantilesTimingArrayIf(level1, level2)(arr, cond)`. Из-за такого порядка получается, что аргумент cond не должен быть массивом. +## -SimpleState {#agg-functions-combinator-simplestate} + +При использовании этого комбинатора агрегатная функция возвращает то же значение, но типа [SimpleAggregateFunction(...)](../../sql-reference/data-types/simpleaggregatefunction.md). Текущее значение функции может храниться в таблице для последующей работы с таблицами семейства [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md). + +**Синтаксис** + +``` sql +SimpleState(x) +``` + +**Аргументы** + +- `x` — параметры агрегатной функции. + +**Возвращаемое значение** + +Значение агрегатной функции типа `SimpleAggregateFunction(...)`. 
+ +**Пример** + +Запрос: + +``` sql +WITH anySimpleState(number) AS c SELECT toTypeName(c), c FROM numbers(1); +``` + +Результат: + +``` text +┌─toTypeName(c)────────────────────────┬─c─┐ +│ SimpleAggregateFunction(any, UInt64) │ 0 │ +└──────────────────────────────────────┴───┘ +``` + ## -State {#state} В случае применения этого комбинатора, агрегатная функция возвращает не готовое значение (например, в случае функции [uniq](reference/uniq.md#agg_function-uniq) — количество уникальных значений), а промежуточное состояние агрегации (например, в случае функции `uniq` — хэш-таблицу для расчёта количества уникальных значений), которое имеет тип `AggregateFunction(...)` и может использоваться для дальнейшей обработки или может быть сохранено в таблицу для последующей доагрегации. @@ -247,4 +281,3 @@ FROM people │ [3,2] │ [11.5,12.949999809265137] │ └────────┴───────────────────────────┘ ``` - diff --git a/docs/ru/sql-reference/aggregate-functions/reference/uniqhll12.md b/docs/ru/sql-reference/aggregate-functions/reference/uniqhll12.md index 4002cc06383..7a421d419ae 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/uniqhll12.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/uniqhll12.md @@ -26,7 +26,7 @@ uniqHLL12(x[, ...]) - Использует алгоритм HyperLogLog для аппроксимации числа различных значений аргументов. - Используется 212 5-битовых ячеек. Размер состояния чуть больше 2.5 КБ. Результат не точный (ошибка до ~10%) для небольших множеств (<10K элементов). Однако для множеств большой кардинальности (10K - 100M) результат довольно точен (ошибка до ~1.6%). Начиная с 100M ошибка оценки будет только расти и для множеств огромной кардинальности (1B+ элементов) функция возвращает результат с очень большой неточностью. + Используется 2^12 5-битовых ячеек. Размер состояния чуть больше 2.5 КБ. Результат не точный (ошибка до ~10%) для небольших множеств (<10K элементов). Однако для множеств большой кардинальности (10K - 100M) результат довольно точен (ошибка до ~1.6%). Начиная с 100M ошибка оценки будет только расти и для множеств огромной кардинальности (1B+ элементов) функция возвращает результат с очень большой неточностью. - Результат детерминирован (не зависит от порядка выполнения запроса). diff --git a/docs/ru/sql-reference/data-types/datetime64.md b/docs/ru/sql-reference/data-types/datetime64.md index 6576bf9dc0d..3a08da75bb7 100644 --- a/docs/ru/sql-reference/data-types/datetime64.md +++ b/docs/ru/sql-reference/data-types/datetime64.md @@ -7,9 +7,9 @@ toc_title: DateTime64 Позволяет хранить момент времени, который может быть представлен как календарная дата и время, с заданной суб-секундной точностью. -Размер тика/точность: 10-precision секунд, где precision - целочисленный параметр типа. +Размер тика (точность, precision): 10-precision секунд, где precision - целочисленный параметр. -Синтаксис: +**Синтаксис:** ``` sql DateTime64(precision, [timezone]) @@ -17,9 +17,11 @@ DateTime64(precision, [timezone]) Данные хранятся в виде количества ‘тиков’, прошедших с момента начала эпохи (1970-01-01 00:00:00 UTC), в Int64. Размер тика определяется параметром precision. Дополнительно, тип `DateTime64` позволяет хранить часовой пояс, единый для всей колонки, который влияет на то, как будут отображаться значения типа `DateTime64` в текстовом виде и как будут парситься значения заданные в виде строк (‘2020-01-01 05:00:01.000’). Часовой пояс не хранится в строках таблицы (выборки), а хранится в метаданных колонки. Подробнее см. [DateTime](datetime.md). 
-## Пример {#primer} +Поддерживаются значения от 1 января 1925 г. и до 31 декабря 2283 г. -**1.** Создание таблицы с столбцом типа `DateTime64` и вставка данных в неё: +## Примеры {#examples} + +1. Создание таблицы со столбцом типа `DateTime64` и вставка данных в неё: ``` sql CREATE TABLE dt @@ -27,15 +29,15 @@ CREATE TABLE dt `timestamp` DateTime64(3, 'Europe/Moscow'), `event_id` UInt8 ) -ENGINE = TinyLog +ENGINE = TinyLog; ``` ``` sql -INSERT INTO dt Values (1546300800000, 1), ('2019-01-01 00:00:00', 2) +INSERT INTO dt Values (1546300800000, 1), ('2019-01-01 00:00:00', 2); ``` ``` sql -SELECT * FROM dt +SELECT * FROM dt; ``` ``` text @@ -46,12 +48,12 @@ SELECT * FROM dt ``` - При вставке даты-времени как числа (аналогично ‘Unix timestamp’), время трактуется как UTC. Unix timestamp `1546300800` в часовом поясе `Europe/London (UTC+0)` представляет время `'2019-01-01 00:00:00'`. Однако, столбец `timestamp` имеет тип `DateTime('Europe/Moscow (UTC+3)')`, так что при выводе в виде строки время отобразится как `2019-01-01 03:00:00`. -- При вставке даты-времени в виде строки, время трактуется соответственно часовому поясу установленному для колонки. `'2019-01-01 00:00:00'` трактуется как время по Москве (и в базу сохраняется `'2018-12-31 21:00:00'` в виде Unix Timestamp) +- При вставке даты-времени в виде строки, время трактуется соответственно часовому поясу установленному для колонки. `'2019-01-01 00:00:00'` трактуется как время по Москве (и в базу сохраняется `'2018-12-31 21:00:00'` в виде Unix Timestamp). -**2.** Фильтрация по значениям даты-времени +2. Фильтрация по значениям даты и времени ``` sql -SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow') +SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow'); ``` ``` text @@ -60,12 +62,12 @@ SELECT * FROM dt WHERE timestamp = toDateTime64('2019-01-01 00:00:00', 3, 'Europ └─────────────────────────┴──────────┘ ``` -В отличие от типа `DateTime`, `DateTime64` не конвертируется из строк автоматически +В отличие от типа `DateTime`, `DateTime64` не конвертируется из строк автоматически. -**3.** Получение часового пояса для значения типа `DateTime64`: +3. Получение часового пояса для значения типа `DateTime64`: ``` sql -SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS x +SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS x; ``` ``` text @@ -74,13 +76,13 @@ SELECT toDateTime64(now(), 3, 'Europe/Moscow') AS column, toTypeName(column) AS └─────────────────────────┴────────────────────────────────┘ ``` -**4.** Конвертация часовых поясов +4. 
Конвертация часовых поясов ``` sql SELECT toDateTime64(timestamp, 3, 'Europe/London') as lon_time, toDateTime64(timestamp, 3, 'Europe/Moscow') as mos_time -FROM dt +FROM dt; ``` ``` text @@ -90,7 +92,7 @@ FROM dt └─────────────────────────┴─────────────────────────┘ ``` -## See Also {#see-also} +**See Also** - [Функции преобразования типов](../../sql-reference/functions/type-conversion-functions.md) - [Функции для работы с датой и временем](../../sql-reference/functions/date-time-functions.md) diff --git a/docs/ru/sql-reference/data-types/simpleaggregatefunction.md b/docs/ru/sql-reference/data-types/simpleaggregatefunction.md index 0948153362b..7b81c577762 100644 --- a/docs/ru/sql-reference/data-types/simpleaggregatefunction.md +++ b/docs/ru/sql-reference/data-types/simpleaggregatefunction.md @@ -3,6 +3,8 @@ Хранит только текущее значение агрегатной функции и не сохраняет ее полное состояние, как это делает [`AggregateFunction`](../../sql-reference/data-types/aggregatefunction.md). Такая оптимизация может быть применена к функциям, которые обладают следующим свойством: результат выполнения функции `f` к набору строк `S1 UNION ALL S2` может быть получен путем выполнения `f` к отдельным частям набора строк, а затем повторного выполнения `f` к результатам: `f(S1 UNION ALL S2) = f(f(S1) UNION ALL f(S2))`. Это свойство гарантирует, что результатов частичной агрегации достаточно для вычисления комбинированной, поэтому хранить и обрабатывать какие-либо дополнительные данные не требуется. +Чтобы получить промежуточное значение, обычно используются агрегатные функции с суффиксом [-SimpleState](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-simplestate). + Поддерживаются следующие агрегатные функции: - [`any`](../../sql-reference/aggregate-functions/reference/any.md#agg_function-any) @@ -15,10 +17,12 @@ - [`groupBitOr`](../../sql-reference/aggregate-functions/reference/groupbitor.md#groupbitor) - [`groupBitXor`](../../sql-reference/aggregate-functions/reference/groupbitxor.md#groupbitxor) - [`groupArrayArray`](../../sql-reference/aggregate-functions/reference/grouparray.md#agg_function-grouparray) -- [`groupUniqArrayArray`](../../sql-reference/aggregate-functions/reference/groupuniqarray.md#groupuniqarray) +- [`groupUniqArrayArray`](../../sql-reference/aggregate-functions/reference/groupuniqarray.md) - [`sumMap`](../../sql-reference/aggregate-functions/reference/summap.md#agg_functions-summap) - [`minMap`](../../sql-reference/aggregate-functions/reference/minmap.md#agg_functions-minmap) - [`maxMap`](../../sql-reference/aggregate-functions/reference/maxmap.md#agg_functions-maxmap) +- [`argMin`](../../sql-reference/aggregate-functions/reference/argmin.md) +- [`argMax`](../../sql-reference/aggregate-functions/reference/argmax.md) !!! note "Примечание" Значения `SimpleAggregateFunction(func, Type)` отображаются и хранятся так же, как и `Type`, поэтому комбинаторы [-Merge](../../sql-reference/aggregate-functions/combinators.md#aggregate_functions_combinators-merge) и [-State](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-state) не требуются. 
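Небольшой условный пример (имена таблицы и столбцов выбраны произвольно), показывающий, как столбец типа `SimpleAggregateFunction` используется в таблице семейства `AggregatingMergeTree`:

``` sql
-- в столбце max_value хранится только текущее значение функции max
CREATE TABLE simple_agg_example
(
    id UInt64,
    max_value SimpleAggregateFunction(max, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY id;

INSERT INTO simple_agg_example VALUES (1, 10), (1, 42), (2, 7);

-- строки с одинаковым ключом сворачиваются при фоновом слиянии кусков,
-- поэтому при чтении всё равно нужна финальная агрегация
SELECT id, max(max_value) FROM simple_agg_example GROUP BY id;
```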
diff --git a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md index 57f53390d1c..609ee225ce2 100644 --- a/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md +++ b/docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-structure.md @@ -3,7 +3,7 @@ toc_priority: 44 toc_title: "Ключ и поля словаря" --- -# Ключ и поля словаря {#kliuch-i-polia-slovaria} +# Ключ и поля словаря {#dictionary-key-and-fields} Секция `` описывает ключ словаря и поля, доступные для запросов. @@ -88,7 +88,7 @@ PRIMARY KEY Id - `PRIMARY KEY` – имя столбца с ключами. -### Составной ключ {#sostavnoi-kliuch} +### Составной ключ {#composite-key} Ключом может быть кортеж (`tuple`) из полей произвольных типов. В этом случае [layout](external-dicts-dict-layout.md) должен быть `complex_key_hashed` или `complex_key_cache`. @@ -159,13 +159,12 @@ CREATE DICTIONARY somename ( | Тег | Описание | Обязательный | |------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------| | `name` | Имя столбца. | Да | -| `type` | Тип данных ClickHouse.
ClickHouse пытается привести значение из словаря к заданному типу данных. Например, в случае MySQL, в таблице-источнике поле может быть `TEXT`, `VARCHAR`, `BLOB`, но загружено может быть как `String`. [Nullable](../../../sql-reference/data-types/nullable.md) не поддерживается. | Да | -| `null_value` | Значение по умолчанию для несуществующего элемента.
В примере это пустая строка. Нельзя указать значение `NULL`. | Да | +| `type` | Тип данных ClickHouse.
ClickHouse пытается привести значение из словаря к заданному типу данных. Например, в случае MySQL, в таблице-источнике поле может быть `TEXT`, `VARCHAR`, `BLOB`, но загружено может быть как `String`.
[Nullable](../../../sql-reference/data-types/nullable.md) в настоящее время поддерживается для словарей [Flat](external-dicts-dict-layout.md#flat), [Hashed](external-dicts-dict-layout.md#dicts-external_dicts_dict_layout-hashed), [ComplexKeyHashed](external-dicts-dict-layout.md#complex-key-hashed), [Direct](external-dicts-dict-layout.md#direct), [ComplexKeyDirect](external-dicts-dict-layout.md#complex-key-direct), [RangeHashed](external-dicts-dict-layout.md#range-hashed), [Polygon](external-dicts-dict-polygon.md). Для словарей [Cache](external-dicts-dict-layout.md#cache), [ComplexKeyCache](external-dicts-dict-layout.md#complex-key-cache), [SSDCache](external-dicts-dict-layout.md#ssd-cache), [SSDComplexKeyCache](external-dicts-dict-layout.md#complex-key-ssd-cache) и [IPTrie](external-dicts-dict-layout.md#ip-trie) `Nullable`-типы не поддерживаются. | Да | +| `null_value` | Значение по умолчанию для несуществующего элемента.
В примере это пустая строка. Значение [NULL](../../syntax.md#null-literal) можно указывать только для типов `Nullable` (см. предыдущую строку с описанием типов). | Да | | `expression` | [Выражение](../../syntax.md#syntax-expressions), которое ClickHouse выполняет со значением.
Выражением может быть имя столбца в удаленной SQL базе. Таким образом, вы можете использовать его для создания псевдонима удаленного столбца.

Значение по умолчанию: нет выражения. | Нет | -| `hierarchical` | Если `true`, то атрибут содержит ключ предка для текущего элемента. Смотрите [Иерархические словари](external-dicts-dict-hierarchical.md).

Default value: `false`. | No | +| `hierarchical` | Если `true`, то атрибут содержит ключ предка для текущего элемента. Смотрите [Иерархические словари](external-dicts-dict-hierarchical.md).

Значение по умолчанию: `false`. | Нет | | `is_object_id` | Признак того, что запрос выполняется к документу MongoDB по `ObjectID`.

Значение по умолчанию: `false`. | Нет | -## Смотрите также {#smotrite-takzhe} +**Смотрите также** - [Функции для работы с внешними словарями](../../../sql-reference/functions/ext-dict-functions.md). - diff --git a/docs/ru/sql-reference/dictionaries/index.md b/docs/ru/sql-reference/dictionaries/index.md index bd432497be8..59c7518d0c5 100644 --- a/docs/ru/sql-reference/dictionaries/index.md +++ b/docs/ru/sql-reference/dictionaries/index.md @@ -10,8 +10,6 @@ toc_title: "Введение" ClickHouse поддерживает специальные функции для работы со словарями, которые можно использовать в запросах. Проще и эффективнее использовать словари с помощью функций, чем `JOIN` с таблицами-справочниками. -В словаре нельзя хранить значения [NULL](../../sql-reference/syntax.md#null-literal). - ClickHouse поддерживает: - [Встроенные словари](internal-dicts.md#internal_dicts) со специфическим [набором функций](../../sql-reference/dictionaries/external-dictionaries/index.md). diff --git a/docs/ru/sql-reference/functions/bitmap-functions.md b/docs/ru/sql-reference/functions/bitmap-functions.md index ddae2f3eb40..3da729664d0 100644 --- a/docs/ru/sql-reference/functions/bitmap-functions.md +++ b/docs/ru/sql-reference/functions/bitmap-functions.md @@ -25,7 +25,7 @@ SELECT bitmapBuild([1, 2, 3, 4, 5]) AS res, toTypeName(res); ``` text ┌─res─┬─toTypeName(bitmapBuild([1, 2, 3, 4, 5]))─────┐ -│  │ AggregateFunction(groupBitmap, UInt8) │ +│ │ AggregateFunction(groupBitmap, UInt8) │ └─────┴──────────────────────────────────────────────┘ ``` diff --git a/docs/ru/sql-reference/functions/hash-functions.md b/docs/ru/sql-reference/functions/hash-functions.md index 2efff9c3727..07c741e0588 100644 --- a/docs/ru/sql-reference/functions/hash-functions.md +++ b/docs/ru/sql-reference/functions/hash-functions.md @@ -430,7 +430,7 @@ murmurHash3_128( expr ) **Аргументы** -- `expr` — [выражение](../syntax.md#syntax-expressions), возвращающее значение типа[String](../../sql-reference/functions/hash-functions.md). +- `expr` — [выражение](../syntax.md#syntax-expressions), возвращающее значение типа [String](../../sql-reference/functions/hash-functions.md). **Возвращаемое значение** @@ -439,13 +439,13 @@ murmurHash3_128( expr ) **Пример** ``` sql -SELECT murmurHash3_128('example_string') AS MurmurHash3, toTypeName(MurmurHash3) AS type; +SELECT hex(murmurHash3_128('example_string')) AS MurmurHash3, toTypeName(MurmurHash3) AS type; ``` ``` text -┌─MurmurHash3──────┬─type────────────┐ -│ 6�1�4"S5KT�~~q │ FixedString(16) │ -└──────────────────┴─────────────────┘ +┌─MurmurHash3──────────────────────┬─type───┐ +│ 368A1A311CB7342253354B548E7E7E71 │ String │ +└──────────────────────────────────┴────────┘ ``` ## xxHash32, xxHash64 {#hash-functions-xxhash32-xxhash64} diff --git a/docs/ru/sql-reference/functions/json-functions.md b/docs/ru/sql-reference/functions/json-functions.md index 5d419d26981..4de487c03ad 100644 --- a/docs/ru/sql-reference/functions/json-functions.md +++ b/docs/ru/sql-reference/functions/json-functions.md @@ -16,51 +16,65 @@ toc_title: JSON ## visitParamHas(params, name) {#visitparamhasparams-name} -Проверить наличие поля с именем name. +Проверяет наличие поля с именем `name`. + +Алиас: `simpleJSONHas`. ## visitParamExtractUInt(params, name) {#visitparamextractuintparams-name} -Распарсить UInt64 из значения поля с именем name. Если поле строковое - попытаться распарсить число из начала строки. Если такого поля нет, или если оно есть, но содержит не число, то вернуть 0. 
+Пытается выделить число типа UInt64 из значения поля с именем `name`. Если поле строковое, пытается выделить число из начала строки. Если такого поля нет, или если оно есть, но содержит не число, то возвращает 0. + +Алиас: `simpleJSONExtractUInt`. ## visitParamExtractInt(params, name) {#visitparamextractintparams-name} Аналогично для Int64. +Алиас: `simpleJSONExtractInt`. + ## visitParamExtractFloat(params, name) {#visitparamextractfloatparams-name} Аналогично для Float64. +Алиас: `simpleJSONExtractFloat`. + ## visitParamExtractBool(params, name) {#visitparamextractboolparams-name} -Распарсить значение true/false. Результат - UInt8. +Пытается выделить значение true/false. Результат — UInt8. + +Алиас: `simpleJSONExtractBool`. ## visitParamExtractRaw(params, name) {#visitparamextractrawparams-name} -Вернуть значение поля, включая разделители. +Возвращает значение поля, включая разделители. + +Алиас: `simpleJSONExtractRaw`. Примеры: ``` sql -visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"' -visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}' +visitParamExtractRaw('{"abc":"\\n\\u0000"}', 'abc') = '"\\n\\u0000"'; +visitParamExtractRaw('{"abc":{"def":[1,2,3]}}', 'abc') = '{"def":[1,2,3]}'; ``` ## visitParamExtractString(params, name) {#visitparamextractstringparams-name} -Распарсить строку в двойных кавычках. У значения убирается экранирование. Если убрать экранированные символы не удалось, то возвращается пустая строка. +Разбирает строку в двойных кавычках. У значения убирается экранирование. Если убрать экранированные символы не удалось, то возвращается пустая строка. + +Алиас: `simpleJSONExtractString`. Примеры: ``` sql -visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0' -visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺' -visitParamExtractString('{"abc":"\\u263"}', 'abc') = '' -visitParamExtractString('{"abc":"hello}', 'abc') = '' +visitParamExtractString('{"abc":"\\n\\u0000"}', 'abc') = '\n\0'; +visitParamExtractString('{"abc":"\\u263a"}', 'abc') = '☺'; +visitParamExtractString('{"abc":"\\u263"}', 'abc') = ''; +visitParamExtractString('{"abc":"hello}', 'abc') = ''; ``` -На данный момент, не поддерживаются записанные в формате `\uXXXX\uYYYY` кодовые точки не из basic multilingual plane (они переводятся не в UTF-8, а в CESU-8). +На данный момент не поддерживаются записанные в формате `\uXXXX\uYYYY` кодовые точки не из basic multilingual plane (они переводятся не в UTF-8, а в CESU-8). -Следующие функции используют [simdjson](https://github.com/lemire/simdjson) который разработан под более сложные требования для разбора JSON. Упомянутое выше предположение 2 по-прежнему применимо. +Следующие функции используют [simdjson](https://github.com/lemire/simdjson), который разработан под более сложные требования для разбора JSON. Упомянутое выше допущение 2 по-прежнему применимо. 
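К описанным выше функциям `visitParam*` и их алиасам `simpleJSON*` — небольшой условный набросок; данные вымышленные, ожидаемые результаты в комментариях выведены из приведённых описаний:

``` sql
-- Условные данные; visitParam-функции и алиасы simpleJSON* взаимозаменяемы
SELECT
    visitParamHas('{"abc":123}', 'abc')                 AS has_field,    -- 1
    simpleJSONExtractUInt('{"abc":123}', 'abc')         AS from_number,  -- 123
    visitParamExtractUInt('{"abc":"456 items"}', 'abc') AS from_string,  -- 456: число из начала строки
    visitParamExtractUInt('{"abc":"text"}', 'abc')      AS not_a_number; -- 0: поле не содержит числа
```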
## isValidJSON(json) {#isvalidjsonjson} @@ -292,4 +306,3 @@ SELECT JSONExtractKeysAndValuesRaw('{"a": [-100, 200.0], "b":{"c": {"d": "hello" │ [('d','"hello"'),('f','"world"')] │ └───────────────────────────────────────────────────────────────────────────────────────────────────────┘ ``` - diff --git a/docs/ru/sql-reference/functions/other-functions.md b/docs/ru/sql-reference/functions/other-functions.md index f9b3e5c3e68..84bbc6af968 100644 --- a/docs/ru/sql-reference/functions/other-functions.md +++ b/docs/ru/sql-reference/functions/other-functions.md @@ -1133,6 +1133,111 @@ SELECT defaultValueOfTypeName('Nullable(Int8)') └──────────────────────────────────────────┘ ``` +## indexHint {#indexhint} +Возвращает все данные из диапазона, в который попадают данные, соответствующие указанному выражению. +Переданное выражение не будет вычислено. Выбор диапазона производится по индексу. +Индекс в ClickHouse разреженный, при чтении диапазона в ответ попадают «лишние» соседние данные. + +**Синтаксис** + +```sql +SELECT * FROM table WHERE indexHint() +``` + +**Возвращаемое значение** + +Возвращает диапазон индекса, в котором выполняется заданное условие. + +Тип: [Uint8](https://clickhouse.yandex/docs/ru/data_types/int_uint/#diapazony-uint). + +**Пример** + +Рассмотрим пример с использованием тестовых данных таблицы [ontime](../../getting-started/example-datasets/ontime.md). + +Исходная таблица: + +```sql +SELECT count() FROM ontime +``` + +```text +┌─count()─┐ +│ 4276457 │ +└─────────┘ +``` + +В таблице есть индексы по полям `(FlightDate, (Year, FlightDate))`. + +Выполним выборку по дате, где индекс не используется. + +Запрос: + +```sql +SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k +``` + +ClickHouse обработал всю таблицу (`Processed 4.28 million rows`). + +Результат: + +```text +┌──────────k─┬─count()─┐ +│ 2017-01-01 │ 13970 │ +│ 2017-01-02 │ 15882 │ +........................ +│ 2017-09-28 │ 16411 │ +│ 2017-09-29 │ 16384 │ +│ 2017-09-30 │ 12520 │ +└────────────┴─────────┘ +``` + +Для подключения индекса выбираем конкретную дату. + +Запрос: + +```sql +SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k +``` + +При использовании индекса ClickHouse обработал значительно меньшее количество строк (`Processed 32.74 thousand rows`). + +Результат: + +```text +┌──────────k─┬─count()─┐ +│ 2017-09-15 │ 16428 │ +└────────────┴─────────┘ +``` + +Передадим в функцию `indexHint` выражение `k = '2017-09-15'`. + +Запрос: + +```sql +SELECT + FlightDate AS k, + count() +FROM ontime +WHERE indexHint(k = '2017-09-15') +GROUP BY k +ORDER BY k ASC +``` + +ClickHouse применил индекс по аналогии с примером выше (`Processed 32.74 thousand rows`). +Выражение `k = '2017-09-15'` не используется при формировании результата. +Функция `indexHint` позволяет увидеть соседние данные. + +Результат: + +```text +┌──────────k─┬─count()─┐ +│ 2017-09-14 │ 7071 │ +│ 2017-09-15 │ 16428 │ +│ 2017-09-16 │ 1077 │ +│ 2017-09-30 │ 8167 │ +└────────────┴─────────┘ +``` + ## replicate {#other-functions-replicate} Создает массив, заполненный одним значением. 
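К описанию `indexHint` выше — минимальный условный набросок на небольшой таблице (имена таблицы и столбцов условные): условие внутри `indexHint` влияет только на выбор диапазона по индексу и не фильтрует результат.

``` sql
-- Условная таблица; из-за разреженного индекса в ответ могут попасть «лишние» соседние строки
CREATE TABLE hint_demo (d Date, x UInt32) ENGINE = MergeTree ORDER BY d;
INSERT INTO hint_demo SELECT toDate('2021-01-01') + number, number FROM numbers(10);

-- Диапазон выбирается по условию d = '2021-01-05', но само условие к строкам результата не применяется
SELECT * FROM hint_demo WHERE indexHint(d = '2021-01-05');
```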
diff --git a/docs/ru/sql-reference/functions/string-functions.md b/docs/ru/sql-reference/functions/string-functions.md index 6ef7dc01b6a..04af599c09a 100644 --- a/docs/ru/sql-reference/functions/string-functions.md +++ b/docs/ru/sql-reference/functions/string-functions.md @@ -645,3 +645,66 @@ SELECT decodeXMLComponent('< Σ >'); - [Мнемоники в HTML](https://ru.wikipedia.org/wiki/%D0%9C%D0%BD%D0%B5%D0%BC%D0%BE%D0%BD%D0%B8%D0%BA%D0%B8_%D0%B2_HTML) + + +## extractTextFromHTML {#extracttextfromhtml} + +Функция для извлечения текста из HTML или XHTML. +Она не соответствует всем HTML, XML или XHTML стандартам на 100%, но ее реализация достаточно точная и быстрая. Правила обработки следующие: + +1. Комментарии удаляются. Пример: ``. Комментарий должен оканчиваться символами `-->`. Вложенные комментарии недопустимы. +Примечание: конструкции наподобие `` и `` не являются допустимыми комментариями в HTML, но они будут удалены согласно другим правилам. +2. Содержимое CDATA вставляется дословно. Примечание: формат CDATA специфичен для XML/XHTML. Но он обрабатывается всегда по принципу "наилучшего возможного результата". +3. Элементы `script` и `style` удаляются вместе со всем содержимым. Примечание: предполагается, что закрывающий тег не может появиться внутри содержимого. Например, в JS строковый литерал должен быть экранирован как `"<\/script>"`. +Примечание: комментарии и CDATA возможны внутри `script` или `style` - тогда закрывающие теги не ищутся внутри CDATA. Пример: `]]>`. Но они ищутся внутри комментариев. Иногда возникают сложные случаи: ` var y = "-->"; alert(x + y);` +Примечание: `script` и `style` могут быть названиями пространств имен XML - тогда они не обрабатываются как обычные элементы `script` или `style`. Пример: `Hello`. +Примечание: пробелы возможны после имени закрывающего тега: ``, но не перед ним: `< / script>`. +4. Другие теги или элементы, подобные тегам, удаляются, а их внутреннее содержимое остается. Пример: `.` +Примечание: ожидается, что такой HTML является недопустимым: `` +Примечание: функция также удаляет подобные тегам элементы: `<>`, ``, и т. д. +Примечание: если встречается тег без завершающего символа `>`, то удаляется этот тег и весь следующий за ним текст: `world`, `Helloworld` — в HTML нет пробелов, но функция вставляет их. Также следует учитывать такие варианты написания: `Hello

<p>world</p>`, `Hello<br>world`. Подобные результаты выполнения функции могут использоваться для анализа данных, например, для преобразования HTML-текста в набор используемых слов.
+7. Также обратите внимание, что правильная обработка пробелов требует поддержки `<pre></pre>` и свойств CSS `display` и `white-space`.
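К описанным выше правилам обработки пробелов — условный набросок; ожидаемые результаты в комментариях выведены из этих правил:

``` sql
-- Функция вставляет пробел на месте тега, разделяющего слова
SELECT extractTextFromHTML('Hello<b>world</b>');  -- ожидается 'Hello world'
SELECT extractTextFromHTML('Hello<p>world</p>');  -- ожидается 'Hello world'
```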
+
+**Синтаксис**
+
+``` sql
+extractTextFromHTML(x)
+```
+
+**Аргументы**
+
+-   `x` — текст для обработки. [String](../../sql-reference/data-types/string.md). 
+
+**Возвращаемое значение**
+
+-   Извлеченный текст.
+
+Тип: [String](../../sql-reference/data-types/string.md).
+
+**Пример**
+
+Первый пример содержит несколько тегов и комментарий. На этом примере также видно, как обрабатываются пробелы.
+Второй пример показывает обработку `CDATA` и тега `script`.
+В третьем примере текст выделяется из полного HTML ответа, полученного с помощью функции [url](../../sql-reference/table-functions/url.md).
+
+Запрос:
+
+``` sql
+SELECT extractTextFromHTML(' <p> A text <i>with</i><b>tags</b>. <!-- comments --> </p> 
'); +SELECT extractTextFromHTML('CDATA]]> '); +SELECT extractTextFromHTML(html) FROM url('http://www.donothingfor2minutes.com/', RawBLOB, 'html String'); +``` + +Результат: + +``` text +A text with tags . +The content within CDATA +Do Nothing for 2 Minutes 2:00   +``` diff --git a/docs/ru/sql-reference/statements/alter/column.md b/docs/ru/sql-reference/statements/alter/column.md index 87fc1c78cd0..158ab2e7385 100644 --- a/docs/ru/sql-reference/statements/alter/column.md +++ b/docs/ru/sql-reference/statements/alter/column.md @@ -63,6 +63,9 @@ DROP COLUMN [IF EXISTS] name Запрос удаляет данные из файловой системы. Так как это представляет собой удаление целых файлов, запрос выполняется почти мгновенно. +!!! warning "Предупреждение" + Вы не можете удалить столбец, используемый в [материализованном представлениии](../../../sql-reference/statements/create/view.md#materialized). В противном случае будет ошибка. + Пример: ``` sql @@ -155,7 +158,7 @@ ALTER TABLE table_name MODIFY column_name REMOVE property; ALTER TABLE table_with_ttl MODIFY COLUMN column_ttl REMOVE TTL; ``` -## Смотрите также +**Смотрите также** - [REMOVE TTL](ttl.md). diff --git a/docs/ru/sql-reference/statements/alter/partition.md b/docs/ru/sql-reference/statements/alter/partition.md index 3e7b069b066..02a87406e86 100644 --- a/docs/ru/sql-reference/statements/alter/partition.md +++ b/docs/ru/sql-reference/statements/alter/partition.md @@ -38,7 +38,7 @@ ALTER TABLE mt DETACH PART 'all_2_2_0'; После того как запрос будет выполнен, вы сможете производить любые операции с данными в директории `detached`. Например, можно удалить их из файловой системы. -Запрос реплицируется — данные будут перенесены в директорию `detached` и забыты на всех репликах. Обратите внимание, запрос может быть отправлен только на реплику-лидер. Чтобы узнать, является ли реплика лидером, выполните запрос `SELECT` к системной таблице [system.replicas](../../../operations/system-tables/replicas.md#system_tables-replicas). Либо можно выполнить запрос `DETACH` на всех репликах — тогда на всех репликах, кроме реплики-лидера, запрос вернет ошибку. +Запрос реплицируется — данные будут перенесены в директорию `detached` и забыты на всех репликах. Обратите внимание, запрос может быть отправлен только на реплику-лидер. Чтобы узнать, является ли реплика лидером, выполните запрос `SELECT` к системной таблице [system.replicas](../../../operations/system-tables/replicas.md#system_tables-replicas). Либо можно выполнить запрос `DETACH` на всех репликах — тогда на всех репликах, кроме реплик-лидеров (поскольку допускается несколько лидеров), запрос вернет ошибку. ## DROP PARTITION\|PART {#alter_drop-partition} @@ -83,9 +83,13 @@ ALTER TABLE visits ATTACH PART 201901_2_2_0; Как корректно задать имя партиции или куска, см. в разделе [Как задавать имя партиции в запросах ALTER](#alter-how-to-specify-part-expr). -Этот запрос реплицируется. Реплика-иницатор проверяет, есть ли данные в директории `detached`. Если данные есть, то запрос проверяет их целостность. В случае успеха данные добавляются в таблицу. Все остальные реплики загружают данные с реплики-инициатора запроса. +Этот запрос реплицируется. Реплика-иницатор проверяет, есть ли данные в директории `detached`. +Если данные есть, то запрос проверяет их целостность. В случае успеха данные добавляются в таблицу. -Это означает, что вы можете разместить данные в директории `detached` на одной реплике и с помощью запроса `ALTER ... ATTACH` добавить их в таблицу на всех репликах. 
+Если реплика, не являющаяся инициатором запроса, получив команду присоединения, находит кусок с правильными контрольными суммами в своей собственной папке `detached`, она присоединяет данные, не скачивая их с других реплик. +Если нет куска с правильными контрольными суммами, данные загружаются из любой реплики, имеющей этот кусок. + +Вы можете поместить данные в директорию `detached` на одной реплике и с помощью запроса `ALTER ... ATTACH` добавить их в таблицу на всех репликах. ## ATTACH PARTITION FROM {#alter_attach-partition-from} @@ -93,7 +97,8 @@ ALTER TABLE visits ATTACH PART 201901_2_2_0; ALTER TABLE table2 ATTACH PARTITION partition_expr FROM table1 ``` -Копирует партицию из таблицы `table1` в таблицу `table2` и добавляет к существующим данным `table2`. Данные из `table1` не удаляются. +Копирует партицию из таблицы `table1` в таблицу `table2`. +Обратите внимание, что данные не удаляются ни из `table1`, ни из `table2`. Следует иметь в виду: @@ -305,4 +310,3 @@ OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL; `IN PARTITION` указывает на партицию, для которой применяются выражения [UPDATE](../../../sql-reference/statements/alter/update.md#alter-table-update-statements) или [DELETE](../../../sql-reference/statements/alter/delete.md#alter-mutations) в результате запроса `ALTER TABLE`. Новые куски создаются только в указанной партиции. Таким образом, `IN PARTITION` помогает снизить нагрузку, когда таблица разбита на множество партиций, а вам нужно обновить данные лишь точечно. Примеры запросов `ALTER ... PARTITION` можно посмотреть в тестах: [`00502_custom_partitioning_local`](https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/00502_custom_partitioning_local.sql) и [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql). - diff --git a/docs/ru/sql-reference/statements/alter/ttl.md b/docs/ru/sql-reference/statements/alter/ttl.md index e949c992bbe..2a2d10b69de 100644 --- a/docs/ru/sql-reference/statements/alter/ttl.md +++ b/docs/ru/sql-reference/statements/alter/ttl.md @@ -82,4 +82,4 @@ SELECT * FROM table_with_ttl; ### Смотрите также - Подробнее о [свойстве TTL](../../../engines/table-engines/mergetree-family/mergetree.md#mergetree-column-ttl). - +- Изменить столбец [с TTL](../../../sql-reference/statements/alter/column.md#alter_modify-column). \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/alter/user.md b/docs/ru/sql-reference/statements/alter/user.md index 604eff9de15..53d090f8eab 100644 --- a/docs/ru/sql-reference/statements/alter/user.md +++ b/docs/ru/sql-reference/statements/alter/user.md @@ -12,10 +12,10 @@ toc_title: USER ``` sql ALTER USER [IF EXISTS] name1 [ON CLUSTER cluster_name1] [RENAME TO new_name1] [, name2 [ON CLUSTER cluster_name2] [RENAME TO new_name2] ...] - [IDENTIFIED [WITH {PLAINTEXT_PASSWORD|SHA256_PASSWORD|DOUBLE_SHA1_PASSWORD}] BY {'password'|'hash'}] - [[ADD|DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE] + [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}] + [[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE] [DEFAULT ROLE role [,...] 
| ALL | ALL EXCEPT role [,...] ] - [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...] + [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...] ``` Для выполнения `ALTER USER` необходима привилегия [ALTER USER](../grant.md#grant-access-management). diff --git a/docs/ru/sql-reference/statements/attach.md b/docs/ru/sql-reference/statements/attach.md index 18ec47d05c8..b135507b818 100644 --- a/docs/ru/sql-reference/statements/attach.md +++ b/docs/ru/sql-reference/statements/attach.md @@ -5,12 +5,14 @@ toc_title: ATTACH # ATTACH Statement {#attach} -Запрос полностью аналогичен запросу `CREATE`, но: +Выполняет подключение таблицы, например, при перемещении базы данных на другой сервер. -- вместо слова `CREATE` используется слово `ATTACH`; -- запрос не создаёт данные на диске, а предполагает, что данные уже лежат в соответствующих местах, и всего лишь добавляет информацию о таблице на сервер. После выполнения запроса `ATTACH` сервер будет знать о существовании таблицы. +Запрос не создаёт данные на диске, а предполагает, что данные уже лежат в соответствующих местах, и всего лишь добавляет информацию о таблице на сервер. После выполнения запроса `ATTACH` сервер будет знать о существовании таблицы. -Если таблица перед этим была отключена ([DETACH](../../sql-reference/statements/detach.md)), т.е. её структура известна, можно использовать сокращенную форму записи без определения структуры. +Если таблица перед этим была отключена при помощи ([DETACH](../../sql-reference/statements/detach.md)), т.е. её структура известна, можно использовать сокращенную форму записи без определения структуры. + +## Варианты синтаксиса {#syntax-forms} +### Присоединение существующей таблицы {#attach-existing-table} ``` sql ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster] @@ -20,4 +22,38 @@ ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster] Если таблица была отключена перманентно, она не будет подключена обратно во время старта сервера, так что нужно явно использовать запрос `ATTACH`, чтобы подключить ее. +### Создание новой таблицы и присоединение данных {#create-new-table-and-attach-data} +**С указанием пути к табличным данным** + +```sql +ATTACH TABLE name FROM 'path/to/data/' (col1 Type1, ...) +``` + +Cоздает новую таблицу с указанной структурой и присоединяет табличные данные из соответствующего каталога в `user_files`. + +**Пример** + +Запрос: + +```sql +DROP TABLE IF EXISTS test; +INSERT INTO TABLE FUNCTION file('01188_attach/test/data.TSV', 'TSV', 's String, n UInt8') VALUES ('test', 42); +ATTACH TABLE test FROM '01188_attach/test' (s String, n UInt8) ENGINE = File(TSV); +SELECT * FROM test; +``` +Результат: + +```sql +┌─s────┬──n─┐ +│ test │ 42 │ +└──────┴────┘ +``` + +**С указанием UUID таблицы** (Только для баз данных `Atomic`) + +```sql +ATTACH TABLE name UUID '' (col1 Type1, ...) +``` + +Cоздает новую таблицу с указанной структурой и присоединяет данные из таблицы с указанным UUID. diff --git a/docs/ru/sql-reference/statements/check-table.md b/docs/ru/sql-reference/statements/check-table.md index 10336f821d0..9592c1a5bc2 100644 --- a/docs/ru/sql-reference/statements/check-table.md +++ b/docs/ru/sql-reference/statements/check-table.md @@ -29,9 +29,36 @@ CHECK TABLE [db.]name В движках `*Log` не предусмотрено автоматическое восстановление данных после сбоя. Используйте запрос `CHECK TABLE`, чтобы своевременно выявлять повреждение данных. 
-Для движков из семейства `MergeTree` запрос `CHECK TABLE` показывает статус проверки для каждого отдельного куска данных таблицы на локальном сервере. +## Проверка таблиц семейства MergeTree {#checking-mergetree-tables} -**Что делать, если данные повреждены** +Для таблиц семейства `MergeTree` если [check_query_single_value_result](../../operations/settings/settings.md#check_query_single_value_result) = 0, запрос `CHECK TABLE` возвращает статус каждого куска данных таблицы на локальном сервере. + +```sql +SET check_query_single_value_result = 0; +CHECK TABLE test_table; +``` + +```text +┌─part_path─┬─is_passed─┬─message─┐ +│ all_1_4_1 │ 1 │ │ +│ all_1_4_2 │ 1 │ │ +└───────────┴───────────┴─────────┘ +``` + +Если `check_query_single_value_result` = 0, запрос `CHECK TABLE` возвращает статус таблицы в целом. + +```sql +SET check_query_single_value_result = 1; +CHECK TABLE test_table; +``` + +```text +┌─result─┐ +│ 1 │ +└────────┘ +``` + +## Что делать, если данные повреждены {#if-data-is-corrupted} В этом случае можно скопировать оставшиеся неповрежденные данные в другую таблицу. Для этого: diff --git a/docs/ru/sql-reference/statements/create/row-policy.md b/docs/ru/sql-reference/statements/create/row-policy.md index 88709598906..6fe1dc45815 100644 --- a/docs/ru/sql-reference/statements/create/row-policy.md +++ b/docs/ru/sql-reference/statements/create/row-policy.md @@ -5,7 +5,7 @@ toc_title: "Политика доступа" # CREATE ROW POLICY {#create-row-policy-statement} -Создает [фильтры для строк](../../../operations/access-rights.md#row-policy-management), которые пользователь может прочесть из таблицы. +Создает [политики доступа к строкам](../../../operations/access-rights.md#row-policy-management), т.е. фильтры, которые определяют, какие строки пользователь может читать из таблицы. Синтаксис: @@ -13,33 +13,74 @@ toc_title: "Политика доступа" CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name1 [ON CLUSTER cluster_name1] ON [db1.]table1 [, policy_name2 [ON CLUSTER cluster_name2] ON [db2.]table2 ...] [AS {PERMISSIVE | RESTRICTIVE}] - [FOR SELECT] - [USING condition] + [FOR SELECT] USING condition [TO {role [,...] | ALL | ALL EXCEPT role [,...]}] ``` -Секция `ON CLUSTER` позволяет создавать фильтры для строк на кластере, см. [Распределенные DDL запросы](../../../sql-reference/distributed-ddl.md). +## Секция USING {#create-row-policy-using} -## Секция AS {#create-row-policy-as} - -С помощью данной секции можно создать политику разрешения или ограничения. - -Политика разрешения предоставляет доступ к строкам. Разрешительные политики, которые применяются к одной таблице, объединяются с помощью логического оператора `OR`. Политики являются разрешительными по умолчанию. - -Политика ограничения запрещает доступ к строкам. Ограничительные политики, которые применяются к одной таблице, объединяются логическим оператором `AND`. - -Ограничительные политики применяются к строкам, прошедшим фильтр разрешительной политики. Если вы не зададите разрешительные политики, пользователь не сможет обращаться ни к каким строкам из таблицы. +Секция `USING` указывает условие для фильтрации строк. Пользователь может видеть строку, если это условие, вычисленное для строки, дает ненулевой результат. ## Секция TO {#create-row-policy-to} -В секции `TO` вы можете перечислить как роли, так и пользователей. Например, `CREATE ROW POLICY ... TO accountant, john@localhost`. +В секции `TO` перечисляются пользователи и роли, для которых должна действовать политика. Например, `CREATE ROW POLICY ... 
TO accountant, john@localhost`. Ключевым словом `ALL` обозначаются все пользователи, включая текущего. Ключевые слова `ALL EXCEPT` позволяют исключить пользователей из списка всех пользователей. Например, `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost` +!!! note "Note" + Если для таблицы не задано ни одной политики доступа к строкам, то любой пользователь может выполнить команду SELECT и получить все строки таблицы. Если определить хотя бы одну политику для таблицы, до доступ к строкам будет управляться этими политиками, причем для всех пользователей (даже для тех, для кого политики не определялись). Например, следующая политика + + `CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter` + + запретит пользователям `mira` и `peter` видеть строки с `b != 1`, и еще запретит всем остальным пользователям (например, пользователю `paul`) видеть какие-либо строки вообще из таблицы `mydb.table1`. + + Если это нежелательно, такое поведение можно исправить, определив дополнительную политику: + + `CREATE ROW POLICY pol2 ON mydb.table1 USING 1 TO ALL EXCEPT mira, peter` + +## Секция AS {#create-row-policy-as} + +Может быть одновременно активно более одной политики для одной и той же таблицы и одного и того же пользователя. Поэтому нам нужен способ комбинировать политики. + +По умолчанию политики комбинируются с использованием логического оператора `OR`. Например, политики: + +``` sql +CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter +CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 TO peter, antonio +``` + +разрешат пользователю с именем `peter` видеть строки, для которых будет верно `b=1` или `c=2`. + +Секция `AS` указывает, как политики должны комбинироваться с другими политиками. Политики могут быть или разрешительными (`PERMISSIVE`), или ограничительными (`RESTRICTIVE`). По умолчанию политики создаются разрешительными (`PERMISSIVE`); такие политики комбинируются с использованием логического оператора `OR`. + +Ограничительные (`RESTRICTIVE`) политики комбинируются с использованием логического оператора `AND`. + +Общая формула выглядит так: + +``` +строка_видима = (одна или больше permissive-политик дала ненулевой результат проверки условия) И + (все restrictive-политики дали ненулевой результат проверки условия) +``` + +Например, политики + +``` sql +CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter +CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio +``` + +разрешат пользователю с именем `peter` видеть только те строки, для которых будет одновременно `b=1` и `c=2`. + +## Секция ON CLUSTER {#create-row-policy-on-cluster} + +Секция `ON CLUSTER` позволяет создавать политики на кластере, см. [Распределенные DDL запросы](../../../sql-reference/distributed-ddl.md). 
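Гипотетический набросок к сказанному выше: как посмотреть, какие политики действуют для таблицы. Предполагается, что политики `pol1` и `pol2` из примеров выше уже созданы, а запрос `SHOW ROW POLICIES` и таблица `system.row_policies` доступны в используемой версии.

``` sql
-- Список политик для таблицы mydb.table1
SHOW ROW POLICIES ON mydb.table1;

-- Те же сведения из системной таблицы (набор столбцов может отличаться между версиями)
SELECT name, select_filter, is_restrictive, apply_to_list
FROM system.row_policies
WHERE database = 'mydb' AND table = 'table1';
```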
+ ## Примеры -`CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO accountant, john@localhost` +`CREATE ROW POLICY filter1 ON mydb.mytable USING a<1000 TO accountant, john@localhost` -`CREATE ROW POLICY filter ON mydb.mytable FOR SELECT USING a<1000 TO ALL EXCEPT mira` +`CREATE ROW POLICY filter2 ON mydb.mytable USING a<1000 AND b=5 TO ALL EXCEPT mira` + +`CREATE ROW POLICY filter3 ON mydb.mytable USING 1 TO admin` \ No newline at end of file diff --git a/docs/ru/sql-reference/statements/create/table.md b/docs/ru/sql-reference/statements/create/table.md index b998435bcd8..1ccd0a600f3 100644 --- a/docs/ru/sql-reference/statements/create/table.md +++ b/docs/ru/sql-reference/statements/create/table.md @@ -46,15 +46,32 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name AS table_function() ### Из запроса SELECT {#from-select-query} ``` sql -CREATE TABLE [IF NOT EXISTS] [db.]table_name ENGINE = engine AS SELECT ... +CREATE TABLE [IF NOT EXISTS] [db.]table_name[(name1 [type1], name2 [type2], ...)] ENGINE = engine AS SELECT ... ``` -Создаёт таблицу со структурой, как результат запроса `SELECT`, с движком engine, и заполняет её данными из SELECT-а. +Создаёт таблицу со структурой, как результат запроса `SELECT`, с движком `engine`, и заполняет её данными из `SELECT`. Также вы можете явно задать описание столбцов. -Во всех случаях, если указано `IF NOT EXISTS`, то запрос не будет возвращать ошибку, если таблица уже существует. В этом случае, запрос будет ничего не делать. +Если таблица уже существует и указано `IF NOT EXISTS`, то запрос ничего не делает. После секции `ENGINE` в запросе могут использоваться и другие секции в зависимости от движка. Подробную документацию по созданию таблиц смотрите в описаниях [движков таблиц](../../../engines/table-engines/index.md#table_engines). +**Пример** + +Запрос: + +``` sql +CREATE TABLE t1 (x String) ENGINE = Memory AS SELECT 1; +SELECT x, toTypeName(x) FROM t1; +``` + +Результат: + +```text +┌─x─┬─toTypeName(x)─┐ +│ 1 │ String │ +└───┴───────────────┘ +``` + ## Модификатор NULL или NOT NULL {#null-modifiers} Модификатор `NULL` или `NOT NULL`, указанный после типа данных в определении столбца, позволяет или не позволяет типу данных быть [Nullable](../../../sql-reference/data-types/nullable.md#data_type-nullable). @@ -230,7 +247,7 @@ CREATE TABLE codec_example ) ENGINE = MergeTree() ``` -## Временные таблицы {#vremennye-tablitsy} +## Временные таблицы {#temporary-tables} ClickHouse поддерживает временные таблицы со следующими характеристиками: diff --git a/docs/ru/sql-reference/statements/create/user.md b/docs/ru/sql-reference/statements/create/user.md index 68277d67052..a487d1ac593 100644 --- a/docs/ru/sql-reference/statements/create/user.md +++ b/docs/ru/sql-reference/statements/create/user.md @@ -9,15 +9,17 @@ toc_title: "Пользователь" Синтаксис: -```sql +``` sql CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1] [, name2 [ON CLUSTER cluster_name2] ...] - [IDENTIFIED [WITH {NO_PASSWORD|PLAINTEXT_PASSWORD|SHA256_PASSWORD|SHA256_HASH|DOUBLE_SHA1_PASSWORD|DOUBLE_SHA1_HASH}] BY {'password'|'hash'}] + [NOT IDENTIFIED | IDENTIFIED {[WITH {no_password | plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']}] [HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] 
| ANY | NONE] [DEFAULT ROLE role [,...]] - [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | PROFILE 'profile_name'] [,...] + [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY | WRITABLE] | PROFILE 'profile_name'] [,...] ``` +`ON CLUSTER` позволяет создавать пользователей в кластере, см. [Распределенные DDL](../../../sql-reference/distributed-ddl.md). + ## Идентификация Существует несколько способов идентификации пользователя: @@ -28,6 +30,8 @@ CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [ON CLUSTER cluster_name1] - `IDENTIFIED WITH sha256_hash BY 'hash'` - `IDENTIFIED WITH double_sha1_password BY 'qwerty'` - `IDENTIFIED WITH double_sha1_hash BY 'hash'` +- `IDENTIFIED WITH ldap SERVER 'server_name'` +- `IDENTIFIED WITH kerberos` or `IDENTIFIED WITH kerberos REALM 'realm'` ## Пользовательский хост diff --git a/docs/ru/sql-reference/statements/grant.md b/docs/ru/sql-reference/statements/grant.md index 7b2d26902ef..093e6eb3b93 100644 --- a/docs/ru/sql-reference/statements/grant.md +++ b/docs/ru/sql-reference/statements/grant.md @@ -93,7 +93,7 @@ GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION - `ALTER ADD CONSTRAINT` - `ALTER DROP CONSTRAINT` - `ALTER TTL` - - `ALTER MATERIALIZE TTL` + - `ALTER MATERIALIZE TTL` - `ALTER SETTINGS` - `ALTER MOVE PARTITION` - `ALTER FETCH PARTITION` @@ -104,9 +104,9 @@ GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION - [CREATE](#grant-create) - `CREATE DATABASE` - `CREATE TABLE` + - `CREATE TEMPORARY TABLE` - `CREATE VIEW` - `CREATE DICTIONARY` - - `CREATE TEMPORARY TABLE` - [DROP](#grant-drop) - `DROP DATABASE` - `DROP TABLE` @@ -152,7 +152,7 @@ GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION - `SYSTEM RELOAD` - `SYSTEM RELOAD CONFIG` - `SYSTEM RELOAD DICTIONARY` - - `SYSTEM RELOAD EMBEDDED DICTIONARIES` + - `SYSTEM RELOAD EMBEDDED DICTIONARIES` - `SYSTEM MERGES` - `SYSTEM TTL MERGES` - `SYSTEM FETCHES` @@ -279,7 +279,7 @@ GRANT INSERT(x,y) ON db.table TO john - `ALTER ADD CONSTRAINT`. Уровень: `TABLE`. Алиасы: `ADD CONSTRAINT` - `ALTER DROP CONSTRAINT`. Уровень: `TABLE`. Алиасы: `DROP CONSTRAINT` - `ALTER TTL`. Уровень: `TABLE`. Алиасы: `ALTER MODIFY TTL`, `MODIFY TTL` - - `ALTER MATERIALIZE TTL`. Уровень: `TABLE`. Алиасы: `MATERIALIZE TTL` + - `ALTER MATERIALIZE TTL`. Уровень: `TABLE`. Алиасы: `MATERIALIZE TTL` - `ALTER SETTINGS`. Уровень: `TABLE`. Алиасы: `ALTER SETTING`, `ALTER MODIFY SETTING`, `MODIFY SETTING` - `ALTER MOVE PARTITION`. Уровень: `TABLE`. Алиасы: `ALTER MOVE PART`, `MOVE PARTITION`, `MOVE PART` - `ALTER FETCH PARTITION`. Уровень: `TABLE`. Алиасы: `FETCH PARTITION` @@ -307,9 +307,9 @@ GRANT INSERT(x,y) ON db.table TO john - `CREATE`. Уровень: `GROUP` - `CREATE DATABASE`. Уровень: `DATABASE` - `CREATE TABLE`. Уровень: `TABLE` + - `CREATE TEMPORARY TABLE`. Уровень: `GLOBAL` - `CREATE VIEW`. Уровень: `VIEW` - `CREATE DICTIONARY`. Уровень: `DICTIONARY` - - `CREATE TEMPORARY TABLE`. Уровень: `GLOBAL` **Дополнительно** @@ -407,7 +407,7 @@ GRANT INSERT(x,y) ON db.table TO john - `SYSTEM RELOAD`. Уровень: `GROUP` - `SYSTEM RELOAD CONFIG`. Уровень: `GLOBAL`. Алиасы: `RELOAD CONFIG` - `SYSTEM RELOAD DICTIONARY`. Уровень: `GLOBAL`. Алиасы: `SYSTEM RELOAD DICTIONARIES`, `RELOAD DICTIONARY`, `RELOAD DICTIONARIES` - - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Уровень: `GLOBAL`. Алиасы: `RELOAD EMBEDDED DICTIONARIES` + - `SYSTEM RELOAD EMBEDDED DICTIONARIES`. Уровень: `GLOBAL`. Алиасы: `RELOAD EMBEDDED DICTIONARIES` - `SYSTEM MERGES`. Уровень: `TABLE`. 
Алиасы: `SYSTEM STOP MERGES`, `SYSTEM START MERGES`, `STOP MERGES`, `START MERGES` - `SYSTEM TTL MERGES`. Уровень: `TABLE`. Алиасы: `SYSTEM STOP TTL MERGES`, `SYSTEM START TTL MERGES`, `STOP TTL MERGES`, `START TTL MERGES` - `SYSTEM FETCHES`. Уровень: `TABLE`. Алиасы: `SYSTEM STOP FETCHES`, `SYSTEM START FETCHES`, `STOP FETCHES`, `START FETCHES` diff --git a/docs/ru/sql-reference/statements/optimize.md b/docs/ru/sql-reference/statements/optimize.md index 44101910a6c..e1a9d613537 100644 --- a/docs/ru/sql-reference/statements/optimize.md +++ b/docs/ru/sql-reference/statements/optimize.md @@ -5,19 +5,83 @@ toc_title: OPTIMIZE # OPTIMIZE {#misc_operations-optimize} -``` sql -OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL] [DEDUPLICATE] -``` - -Запрос пытается запустить внеплановый мёрж кусков данных для таблиц семейства [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). Другие движки таблиц не поддерживаются. - -Если `OPTIMIZE` применяется к таблицам семейства [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md), ClickHouse создаёт задачу на мёрж и ожидает её исполнения на всех узлах (если активирована настройка `replication_alter_partitions_sync`). - -- Если `OPTIMIZE` не выполняет мёрж по любой причине, ClickHouse не оповещает об этом клиента. Чтобы включить оповещения, используйте настройку [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop). -- Если указать `PARTITION`, то оптимизация выполняется только для указанной партиции. [Как задавать имя партиции в запросах](alter/index.md#alter-how-to-specify-part-expr). -- Если указать `FINAL`, то оптимизация выполняется даже в том случае, если все данные уже лежат в одном куске. Кроме того, слияние является принудительным, даже если выполняются параллельные слияния. -- Если указать `DEDUPLICATE`, то произойдет схлопывание полностью одинаковых строк (сравниваются значения во всех колонках), имеет смысл только для движка MergeTree. +Запрос пытается запустить внеплановое слияние кусков данных для таблиц. !!! warning "Внимание" - Запрос `OPTIMIZE` не может устранить причину появления ошибки «Too many parts». - + `OPTIMIZE` не устраняет причину появления ошибки `Too many parts`. + +**Синтаксис** + +``` sql +OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION ID 'partition_id'] [FINAL] [DEDUPLICATE [BY expression]] +``` + +Может применяться к таблицам семейства [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md), [MaterializedView](../../engines/table-engines/special/materializedview.md) и [Buffer](../../engines/table-engines/special/buffer.md). Другие движки таблиц не поддерживаются. + +Если запрос `OPTIMIZE` применяется к таблицам семейства [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replication.md), ClickHouse создаёт задачу на слияние и ожидает её исполнения на всех узлах (если активирована настройка `replication_alter_partitions_sync`). + +- По умолчанию, если запросу `OPTIMIZE` не удалось выполнить слияние, то +ClickHouse не оповещает клиента. Чтобы включить оповещения, используйте настройку [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop). +- Если указать `PARTITION`, то оптимизация выполняется только для указанной партиции. [Как задавать имя партиции в запросах](alter/index.md#alter-how-to-specify-part-expr). 
+- Если указать `FINAL`, то оптимизация выполняется даже в том случае, если все данные уже лежат в одном куске данных. Кроме того, слияние является принудительным, даже если выполняются параллельные слияния. +- Если указать `DEDUPLICATE`, то произойдет схлопывание полностью одинаковых строк (сравниваются значения во всех столбцах), имеет смысл только для движка MergeTree. + +## Выражение BY {#by-expression} + +Чтобы выполнить дедупликацию по произвольному набору столбцов, вы можете явно указать список столбцов или использовать любую комбинацию подстановки [`*`](../../sql-reference/statements/select/index.md#asterisk), выражений [`COLUMNS`](../../sql-reference/statements/select/index.md#columns-expression) и [`EXCEPT`](../../sql-reference/statements/select/index.md#except-modifier). + + Список столбцов для дедупликации должен включать все столбцы, указанные в условиях сортировки (первичный ключ и ключ сортировки), а также в условиях партиционирования (ключ партиционирования). + + !!! note "Примечание" + Обратите внимание, что символ подстановки `*` обрабатывается так же, как и в запросах `SELECT`: столбцы `MATERIALIZED` и `ALIAS` не включаются в результат. + Если указать пустой список или выражение, которое возвращает пустой список, или дедуплицировать столбец по псевдониму (`ALIAS`), то сервер вернет ошибку. + + +**Примеры** + +Рассмотрим таблицу: + +``` sql +CREATE TABLE example ( + primary_key Int32, + secondary_key Int32, + value UInt32, + partition_key UInt32, + materialized_value UInt32 MATERIALIZED 12345, + aliased_value UInt32 ALIAS 2, + PRIMARY KEY primary_key +) ENGINE=MergeTree +PARTITION BY partition_key; +``` + +Прежний способ дедупликации, когда учитываются все столбцы. Строка удаляется только в том случае, если все значения во всех столбцах равны соответствующим значениям в предыдущей строке. + +``` sql +OPTIMIZE TABLE example FINAL DEDUPLICATE; +``` + +Дедупликация по всем столбцам, кроме `ALIAS` и `MATERIALIZED`: `primary_key`, `secondary_key`, `value`, `partition_key` и `materialized_value`. + + +``` sql +OPTIMIZE TABLE example FINAL DEDUPLICATE BY *; +``` + +Дедупликация по всем столбцам, кроме `ALIAS`, `MATERIALIZED` и `materialized_value`: столбцы `primary_key`, `secondary_key`, `value` и `partition_key`. + + +``` sql +OPTIMIZE TABLE example FINAL DEDUPLICATE BY * EXCEPT materialized_value; +``` + +Дедупликация по столбцам `primary_key`, `secondary_key` и `partition_key`. + +``` sql +OPTIMIZE TABLE example FINAL DEDUPLICATE BY primary_key, secondary_key, partition_key; +``` + +Дедупликация по любому столбцу, соответствующему регулярному выражению: столбцам `primary_key`, `secondary_key` и `partition_key`. + +``` sql +OPTIMIZE TABLE example FINAL DEDUPLICATE BY COLUMNS('.*_key'); +``` diff --git a/docs/ru/sql-reference/statements/rename.md b/docs/ru/sql-reference/statements/rename.md index 104918c1a73..192426dbafa 100644 --- a/docs/ru/sql-reference/statements/rename.md +++ b/docs/ru/sql-reference/statements/rename.md @@ -3,8 +3,16 @@ toc_priority: 48 toc_title: RENAME --- -# RENAME {#misc_operations-rename} +# RENAME Statement {#misc_operations-rename} +## RENAME DATABASE {#misc_operations-rename_database} +Переименование базы данных + +``` +RENAME DATABASE atomic_database1 TO atomic_database2 [ON CLUSTER cluster] +``` + +## RENAME TABLE {#misc_operations-rename_table} Переименовывает одну или несколько таблиц. ``` sql @@ -12,5 +20,3 @@ RENAME TABLE [db11.]name11 TO [db12.]name12, [db21.]name21 TO [db22.]name22, ... ``` Переименовывание таблицы является лёгкой операцией. 
Если вы указали после `TO` другую базу данных, то таблица будет перенесена в эту базу данных. При этом, директории с базами данных должны быть расположены в одной файловой системе (иначе возвращается ошибка). В случае переименования нескольких таблиц в одном запросе — это неатомарная операция, может выполнится частично, запросы в других сессиях могут получить ошибку `Table ... doesn't exist...`. - - diff --git a/docs/ru/sql-reference/statements/system.md b/docs/ru/sql-reference/statements/system.md index ab68033d4f3..f0f9b77b5ba 100644 --- a/docs/ru/sql-reference/statements/system.md +++ b/docs/ru/sql-reference/statements/system.md @@ -204,6 +204,7 @@ SYSTEM STOP MOVES [[db.]merge_tree_family_table_name] ClickHouse может управлять фоновыми процессами связанными c репликацией в таблицах семейства [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/replacingmergetree.md). ### STOP FETCHES {#query_language-system-stop-fetches} + Позволяет остановить фоновые процессы синхронизации новыми вставленными кусками данных с другими репликами в кластере для таблиц семейства `ReplicatedMergeTree`: Всегда возвращает `Ok.` вне зависимости от типа таблицы и даже если таблица или база данных не существет. @@ -212,6 +213,7 @@ SYSTEM STOP FETCHES [[db.]replicated_merge_tree_family_table_name] ``` ### START FETCHES {#query_language-system-start-fetches} + Позволяет запустить фоновые процессы синхронизации новыми вставленными кусками данных с другими репликами в кластере для таблиц семейства `ReplicatedMergeTree`: Всегда возвращает `Ok.` вне зависимости от типа таблицы и даже если таблица или база данных не существет. @@ -220,6 +222,7 @@ SYSTEM START FETCHES [[db.]replicated_merge_tree_family_table_name] ``` ### STOP REPLICATED SENDS {#query_language-system-start-replicated-sends} + Позволяет остановить фоновые процессы отсылки новых вставленных кусков данных другим репликам в кластере для таблиц семейства `ReplicatedMergeTree`: ``` sql @@ -227,6 +230,7 @@ SYSTEM STOP REPLICATED SENDS [[db.]replicated_merge_tree_family_table_name] ``` ### START REPLICATED SENDS {#query_language-system-start-replicated-sends} + Позволяет запустить фоновые процессы отсылки новых вставленных кусков данных другим репликам в кластере для таблиц семейства `ReplicatedMergeTree`: ``` sql @@ -234,6 +238,7 @@ SYSTEM START REPLICATED SENDS [[db.]replicated_merge_tree_family_table_name] ``` ### STOP REPLICATION QUEUES {#query_language-system-stop-replication-queues} + Останавливает фоновые процессы разбора заданий из очереди репликации которая хранится в Zookeeper для таблиц семейства `ReplicatedMergeTree`. Возможные типы заданий - merges, fetches, mutation, DDL запросы с ON CLUSTER: ``` sql @@ -241,6 +246,7 @@ SYSTEM STOP REPLICATION QUEUES [[db.]replicated_merge_tree_family_table_name] ``` ### START REPLICATION QUEUES {#query_language-system-start-replication-queues} + Запускает фоновые процессы разбора заданий из очереди репликации которая хранится в Zookeeper для таблиц семейства `ReplicatedMergeTree`. 
Возможные типы заданий - merges, fetches, mutation, DDL запросы с ON CLUSTER: ``` sql @@ -248,20 +254,24 @@ SYSTEM START REPLICATION QUEUES [[db.]replicated_merge_tree_family_table_name] ``` ### SYNC REPLICA {#query_language-system-sync-replica} + Ждет когда таблица семейства `ReplicatedMergeTree` будет синхронизирована с другими репликами в кластере, будет работать до достижения `receive_timeout`, если синхронизация для таблицы отключена в настоящий момент времени: ``` sql SYSTEM SYNC REPLICA [db.]replicated_merge_tree_family_table_name ``` +После выполнения этого запроса таблица `[db.]replicated_merge_tree_family_table_name` синхронизирует команды из общего реплицированного лога в свою собственную очередь репликации. Затем запрос ждет, пока реплика не обработает все синхронизированные команды. + ### RESTART REPLICA {#query_language-system-restart-replica} -Реинициализация состояния Zookeeper сессий для таблицы семейства `ReplicatedMergeTree`, сравнивает текущее состояние с тем что хранится в Zookeeper как источник правды и добавляет задачи Zookeeper очередь если необходимо -Инициализация очереди репликации на основе данных ZooKeeper, происходит так же как при attach table. На короткое время таблица станет недоступной для любых операций. + +Реинициализация состояния Zookeeper-сессий для таблицы семейства `ReplicatedMergeTree`. Сравнивает текущее состояние с тем, что хранится в Zookeeper, как источник правды, и добавляет задачи в очередь репликации в Zookeeper, если необходимо. +Инициализация очереди репликации на основе данных ZooKeeper происходит так же, как при attach table. На короткое время таблица станет недоступной для любых операций. ``` sql SYSTEM RESTART REPLICA [db.]replicated_merge_tree_family_table_name ``` ### RESTART REPLICAS {#query_language-system-restart-replicas} -Реинициализация состояния Zookeeper сессий для всех `ReplicatedMergeTree` таблиц, сравнивает текущее состояние с тем что хранится в Zookeeper как источник правды и добавляет задачи Zookeeper очередь если необходимо +Реинициализация состояния ZooKeeper-сессий для всех `ReplicatedMergeTree` таблиц. Сравнивает текущее состояние реплики с тем, что хранится в ZooKeeper, как c источником правды, и добавляет задачи в очередь репликации в ZooKeeper, если необходимо. diff --git a/docs/ru/sql-reference/table-functions/postgresql.md b/docs/ru/sql-reference/table-functions/postgresql.md index 66637276726..2d8afe28f1e 100644 --- a/docs/ru/sql-reference/table-functions/postgresql.md +++ b/docs/ru/sql-reference/table-functions/postgresql.md @@ -65,10 +65,10 @@ postgres=# INSERT INTO test (int_id, str, "float") VALUES (1,'test',2); INSERT 0 1 postgresql> SELECT * FROM test; - int_id | int_nullable | float | str | float_nullable ---------+--------------+-------+------+---------------- - 1 | | 2 | test | -(1 row) + int_id | int_nullable | float | str | float_nullable + --------+--------------+-------+------+---------------- + 1 | | 2 | test | + (1 row) ``` Получение данных в ClickHouse: diff --git a/docs/ru/sql-reference/table-functions/s3.md b/docs/ru/sql-reference/table-functions/s3.md index 1d3fc8cfdb7..e062e59c67c 100644 --- a/docs/ru/sql-reference/table-functions/s3.md +++ b/docs/ru/sql-reference/table-functions/s3.md @@ -18,7 +18,7 @@ s3(path, [aws_access_key_id, aws_secret_access_key,] format, structure, [compres - `path` — URL-адрес бакета с указанием пути к файлу. Поддерживает следующие подстановочные знаки в режиме "только чтение": `*, ?, {abc,def} и {N..M}` где `N, M` — числа, `'abc', 'def'` — строки. 
Подробнее смотри [здесь](../../engines/table-engines/integrations/s3.md#wildcards-in-path). - `format` — [формат](../../interfaces/formats.md#formats) файла. - `structure` — cтруктура таблицы. Формат `'column1_name column1_type, column2_name column2_type, ...'`. -- `compression` — автоматически обнаруживает сжатие по расширению файла. Возможные значения: none, gzip/gz, brotli/br, xz/LZMA, zstd/zst. Необязательный параметр. +- `compression` — автоматически обнаруживает сжатие по расширению файла. Возможные значения: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Необязательный параметр. **Возвращаемые значения** diff --git a/docs/tools/single_page.py b/docs/tools/single_page.py index b88df5a03cb..a1e650d3ad3 100644 --- a/docs/tools/single_page.py +++ b/docs/tools/single_page.py @@ -109,7 +109,8 @@ def build_single_page_version(lang, args, nav, cfg): extra['single_page'] = True extra['is_amp'] = False - with open(os.path.join(args.docs_dir, lang, 'single.md'), 'w') as single_md: + single_md_path = os.path.join(args.docs_dir, lang, 'single.md') + with open(single_md_path, 'w') as single_md: concatenate(lang, args.docs_dir, single_md, nav) with util.temp_dir() as site_temp: @@ -221,3 +222,7 @@ def build_single_page_version(lang, args, nav, cfg): subprocess.check_call(' '.join(create_pdf_command), shell=True) logging.info(f'Finished building single page version for {lang}') + + if os.path.exists(single_md_path): + os.unlink(single_md_path) + \ No newline at end of file diff --git a/docs/zh/development/build.md b/docs/zh/development/build.md index 1aa5c1c97b7..01e0740bfa4 100644 --- a/docs/zh/development/build.md +++ b/docs/zh/development/build.md @@ -35,28 +35,12 @@ sudo apt-get install git cmake ninja-build 或cmake3而不是旧系统上的cmake。 或者在早期版本的系统中用 cmake3 替代 cmake -## 安装 GCC 10 {#an-zhuang-gcc-10} +## 安装 Clang -有几种方法可以做到这一点。 +On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/)) -### 安装 PPA 包 {#an-zhuang-ppa-bao} - -``` bash -sudo apt-get install software-properties-common -sudo apt-add-repository ppa:ubuntu-toolchain-r/test -sudo apt-get update -sudo apt-get install gcc-10 g++-10 -``` - -### 源码安装 gcc {#yuan-ma-an-zhuang-gcc} - -请查看 [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh) - -## 使用 GCC 10 来编译 {#shi-yong-gcc-10-lai-bian-yi} - -``` bash -export CC=gcc-10 -export CXX=g++-10 +```bash +sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" ``` ## 拉取 ClickHouse 源码 {#la-qu-clickhouse-yuan-ma-1} diff --git a/docs/zh/development/developer-instruction.md b/docs/zh/development/developer-instruction.md index 53aab5dc086..04950c11521 100644 --- a/docs/zh/development/developer-instruction.md +++ b/docs/zh/development/developer-instruction.md @@ -123,17 +123,13 @@ ClickHouse使用多个外部库进行构建。大多数外部库不需要单独 # C++ 编译器 {#c-bian-yi-qi} -GCC编译器从版本9开始,以及Clang版本\>=8都可支持构建ClickHouse。 +We support clang starting from version 11. 
-Yandex官方当前使用GCC构建ClickHouse,因为它生成的机器代码性能较好(根据测评,最多可以相差几个百分点)。Clang通常可以更加便捷的开发。我们的持续集成(CI)平台会运行大约十二种构建组合的检查。 +On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/)) -在Ubuntu上安装GCC,请执行:`sudo apt install gcc g++` - -请使用`gcc --version`查看gcc的版本。如果gcc版本低于9,请参考此处的指示:https://clickhouse.tech/docs/zh/development/build/#an-zhuang-gcc-10 。 - -在Mac OS X上安装GCC,请执行:`brew install gcc` - -如果您决定使用Clang,还可以同时安装 `libc++`以及`lld`,前提是您也熟悉它们。此外,也推荐使用`ccache`。 +```bash +sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)" +``` # 构建的过程 {#gou-jian-de-guo-cheng} @@ -146,7 +142,7 @@ Yandex官方当前使用GCC构建ClickHouse,因为它生成的机器代码性 在`build`目录下,通过运行CMake配置构建。 在第一次运行之前,请定义用于指定编译器的环境变量(本示例中为gcc 9 编译器)。 - export CC=gcc-10 CXX=g++-10 + export CC=clang CXX=clang++ cmake .. `CC`变量指代C的编译器(C Compiler的缩写),而`CXX`变量指代要使用哪个C++编译器进行编译。 diff --git a/docs/zh/development/style.md b/docs/zh/development/style.md index c8e883920dd..bb9bfde7b9b 100644 --- a/docs/zh/development/style.md +++ b/docs/zh/development/style.md @@ -696,7 +696,7 @@ auto s = std::string{"Hello"}; **2.** 语言: C++20. -**3.** 编译器: `gcc`。 此时(2020年08月),代码使用9.3版编译。(它也可以使用`clang 8` 编译) +**3.** 编译器: `clang`。 此时(2021年03月),代码使用11版编译。(它也可以使用`gcc` 编译 but it is not suitable for production) 使用标准库 (`libc++`)。 diff --git a/docs/zh/sql-reference/functions/other-functions.md b/docs/zh/sql-reference/functions/other-functions.md index b17a5e89332..c58c4bd1510 100644 --- a/docs/zh/sql-reference/functions/other-functions.md +++ b/docs/zh/sql-reference/functions/other-functions.md @@ -477,6 +477,103 @@ FROM 1 rows in set. Elapsed: 0.002 sec. + +## indexHint {#indexhint} +输出符合索引选择范围内的所有数据,同时不实用参数中的表达式进行过滤。 + +传递给函数的表达式参数将不会被计算,但ClickHouse使用参数中的表达式进行索引过滤。 + +**返回值** + +- 1。 + +**示例** + +这是一个包含[ontime](../../getting-started/example-datasets/ontime.md)测试数据集的测试表。 + +``` +SELECT count() FROM ontime + +┌─count()─┐ +│ 4276457 │ +└─────────┘ +``` + +该表使用`(FlightDate, (Year, FlightDate))`作为索引。 + +对该表进行如下的查询: + +``` +:) SELECT FlightDate AS k, count() FROM ontime GROUP BY k ORDER BY k + +SELECT + FlightDate AS k, + count() +FROM ontime +GROUP BY k +ORDER BY k ASC + +┌──────────k─┬─count()─┐ +│ 2017-01-01 │ 13970 │ +│ 2017-01-02 │ 15882 │ +........................ +│ 2017-09-28 │ 16411 │ +│ 2017-09-29 │ 16384 │ +│ 2017-09-30 │ 12520 │ +└────────────┴─────────┘ + +273 rows in set. Elapsed: 0.072 sec. Processed 4.28 million rows, 8.55 MB (59.00 million rows/s., 118.01 MB/s.) +``` + +在这个查询中,由于没有使用索引,所以ClickHouse将处理整个表的所有数据(`Processed 4.28 million rows`)。使用下面的查询尝试使用索引进行查询: + +``` +:) SELECT FlightDate AS k, count() FROM ontime WHERE k = '2017-09-15' GROUP BY k ORDER BY k + +SELECT + FlightDate AS k, + count() +FROM ontime +WHERE k = '2017-09-15' +GROUP BY k +ORDER BY k ASC + +┌──────────k─┬─count()─┐ +│ 2017-09-15 │ 16428 │ +└────────────┴─────────┘ + +1 rows in set. Elapsed: 0.014 sec. Processed 32.74 thousand rows, 65.49 KB (2.31 million rows/s., 4.63 MB/s.) +``` + +在最后一行的显示中,通过索引ClickHouse处理的行数明显减少(`Processed 32.74 thousand rows`)。 + +现在将表达式`k = '2017-09-15'`传递给`indexHint`函数: + +``` +:) SELECT FlightDate AS k, count() FROM ontime WHERE indexHint(k = '2017-09-15') GROUP BY k ORDER BY k + +SELECT + FlightDate AS k, + count() +FROM ontime +WHERE indexHint(k = '2017-09-15') +GROUP BY k +ORDER BY k ASC + +┌──────────k─┬─count()─┐ +│ 2017-09-14 │ 7071 │ +│ 2017-09-15 │ 16428 │ +│ 2017-09-16 │ 1077 │ +│ 2017-09-30 │ 8167 │ +└────────────┴─────────┘ + +4 rows in set. Elapsed: 0.004 sec. 
Processed 32.74 thousand rows, 65.49 KB (8.97 million rows/s., 17.94 MB/s.) +``` + +对于这个请求,根据ClickHouse显示ClickHouse与上一次相同的方式应用了索引(`Processed 32.74 thousand rows`)。但是,最终返回的结果集中并没有根据`k = '2017-09-15'`表达式进行过滤结果。 + +由于ClickHouse中使用稀疏索引,因此在读取范围时(本示例中为相邻日期),"额外"的数据将包含在索引结果中。使用`indexHint`函数可以查看到它们。 + ## 复制 {#replicate} 使用单个值填充一个数组。 diff --git a/programs/CMakeLists.txt b/programs/CMakeLists.txt index c3600e5812a..ad3ff84d8bf 100644 --- a/programs/CMakeLists.txt +++ b/programs/CMakeLists.txt @@ -33,7 +33,14 @@ option (ENABLE_CLICKHOUSE_OBFUSCATOR "Table data obfuscator (convert real data t ${ENABLE_CLICKHOUSE_ALL}) # https://clickhouse.tech/docs/en/operations/utilities/odbc-bridge/ -option (ENABLE_CLICKHOUSE_ODBC_BRIDGE "HTTP-server working like a proxy to ODBC driver" +if (ENABLE_ODBC) + option (ENABLE_CLICKHOUSE_ODBC_BRIDGE "HTTP-server working like a proxy to ODBC driver" + ${ENABLE_CLICKHOUSE_ALL}) +else () + option (ENABLE_CLICKHOUSE_ODBC_BRIDGE "HTTP-server working like a proxy to ODBC driver" OFF) +endif () + +option (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE "HTTP-server working like a proxy to Library dictionary source" ${ENABLE_CLICKHOUSE_ALL}) # https://presentations.clickhouse.tech/matemarketing_2020/ @@ -109,6 +116,12 @@ else() message(STATUS "ODBC bridge mode: OFF") endif() +if (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE) + message(STATUS "Library bridge mode: ON") +else() + message(STATUS "Library bridge mode: OFF") +endif() + if (ENABLE_CLICKHOUSE_INSTALL) message(STATUS "ClickHouse install: ON") else() @@ -194,6 +207,10 @@ if (ENABLE_CLICKHOUSE_ODBC_BRIDGE) add_subdirectory (odbc-bridge) endif () +if (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE) + add_subdirectory (library-bridge) +endif () + if (CLICKHOUSE_ONE_SHARED) add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES}) target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK}) @@ -209,6 +226,10 @@ if (CLICKHOUSE_SPLIT_BINARY) list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-odbc-bridge) endif () + if (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE) + list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-library-bridge) + endif () + set_target_properties(${CLICKHOUSE_ALL_TARGETS} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..) 
add_custom_target (clickhouse-bundle ALL DEPENDS ${CLICKHOUSE_ALL_TARGETS}) diff --git a/programs/benchmark/Benchmark.cpp b/programs/benchmark/Benchmark.cpp index a0e2ea155ba..1d2b579db3a 100644 --- a/programs/benchmark/Benchmark.cpp +++ b/programs/benchmark/Benchmark.cpp @@ -95,8 +95,8 @@ public: comparison_info_total.emplace_back(std::make_shared()); } - global_context.makeGlobalContext(); - global_context.setSettings(settings); + global_context->makeGlobalContext(); + global_context->setSettings(settings); std::cerr << std::fixed << std::setprecision(3); @@ -159,7 +159,7 @@ private: bool print_stacktrace; const Settings & settings; SharedContextHolder shared_context; - Context global_context; + ContextPtr global_context; QueryProcessingStage::Enum query_processing_stage; /// Don't execute new queries after timelimit or SIGINT or exception diff --git a/programs/client/Client.cpp b/programs/client/Client.cpp index ca9976ac4a8..1aec3677b41 100644 --- a/programs/client/Client.cpp +++ b/programs/client/Client.cpp @@ -21,7 +21,7 @@ #include #include #include -#include +#include #include #include #include @@ -191,7 +191,7 @@ private: bool has_vertical_output_suffix = false; /// Is \G present at the end of the query string? SharedContextHolder shared_context = Context::createShared(); - Context context = Context::createGlobal(shared_context.get()); + ContextPtr context = Context::createGlobal(shared_context.get()); /// Buffer that reads from stdin in batch mode. ReadBufferFromFileDescriptor std_in {STDIN_FILENO}; @@ -274,20 +274,20 @@ private: configReadClient(config(), home_path); - context.setApplicationType(Context::ApplicationType::CLIENT); - context.setQueryParameters(query_parameters); + context->setApplicationType(Context::ApplicationType::CLIENT); + context->setQueryParameters(query_parameters); /// settings and limits could be specified in config file, but passed settings has higher priority - for (const auto & setting : context.getSettingsRef().allUnchanged()) + for (const auto & setting : context->getSettingsRef().allUnchanged()) { const auto & name = setting.getName(); if (config().has(name)) - context.setSetting(name, config().getString(name)); + context->setSetting(name, config().getString(name)); } /// Set path for format schema files if (config().has("format_schema_path")) - context.setFormatSchemaPath(Poco::Path(config().getString("format_schema_path")).toString()); + context->setFormatSchemaPath(Poco::Path(config().getString("format_schema_path")).toString()); /// Initialize query_id_formats if any if (config().has("query_id_formats")) @@ -527,7 +527,10 @@ private: std::cerr << std::fixed << std::setprecision(3); if (is_interactive) + { + clearTerminal(); showClientVersion(); + } is_default_format = !config().has("vertical") && !config().has("format"); if (config().has("vertical")) @@ -535,15 +538,15 @@ private: else format = config().getString("format", is_interactive ? 
"PrettyCompact" : "TabSeparated"); - format_max_block_size = config().getInt("format_max_block_size", context.getSettingsRef().max_block_size); + format_max_block_size = config().getInt("format_max_block_size", context->getSettingsRef().max_block_size); insert_format = "Values"; /// Setting value from cmd arg overrides one from config - if (context.getSettingsRef().max_insert_block_size.changed) - insert_format_max_block_size = context.getSettingsRef().max_insert_block_size; + if (context->getSettingsRef().max_insert_block_size.changed) + insert_format_max_block_size = context->getSettingsRef().max_insert_block_size; else - insert_format_max_block_size = config().getInt("insert_format_max_block_size", context.getSettingsRef().max_insert_block_size); + insert_format_max_block_size = config().getInt("insert_format_max_block_size", context->getSettingsRef().max_insert_block_size); if (!is_interactive) { @@ -552,7 +555,7 @@ private: ignore_error = config().getBool("ignore-error", false); } - ClientInfo & client_info = context.getClientInfo(); + ClientInfo & client_info = context->getClientInfo(); client_info.setInitialQuery(); client_info.quota_key = config().getString("quota_key", ""); @@ -560,7 +563,7 @@ private: /// Initialize DateLUT here to avoid counting time spent here as query execution time. const auto local_tz = DateLUT::instance().getTimeZone(); - if (!context.getSettingsRef().use_client_time_zone) + if (!context->getSettingsRef().use_client_time_zone) { const auto & time_zone = connection->getServerTimezone(connection_parameters.timeouts); if (!time_zone.empty()) @@ -735,7 +738,7 @@ private: { auto query_id = config().getString("query_id", ""); if (!query_id.empty()) - context.setCurrentQueryId(query_id); + context->setCurrentQueryId(query_id); nonInteractive(); @@ -1035,7 +1038,7 @@ private: { Tokens tokens(this_query_begin, all_queries_end); IParser::Pos token_iterator(tokens, - context.getSettingsRef().max_parser_depth); + context->getSettingsRef().max_parser_depth); if (!token_iterator.isValid()) { break; @@ -1084,7 +1087,7 @@ private: if (ignore_error) { Tokens tokens(this_query_begin, all_queries_end); - IParser::Pos token_iterator(tokens, context.getSettingsRef().max_parser_depth); + IParser::Pos token_iterator(tokens, context->getSettingsRef().max_parser_depth); while (token_iterator->type != TokenType::Semicolon && token_iterator.isValid()) ++token_iterator; this_query_begin = token_iterator->end; @@ -1130,7 +1133,7 @@ private: // beneficial so that we see proper trailing comments in "echo" and // server log. adjustQueryEnd(this_query_end, all_queries_end, - context.getSettingsRef().max_parser_depth); + context->getSettingsRef().max_parser_depth); // full_query is the query + inline INSERT data + trailing comments // (the latter is our best guess for now). @@ -1170,7 +1173,7 @@ private: { this_query_end = insert_ast->end; adjustQueryEnd(this_query_end, all_queries_end, - context.getSettingsRef().max_parser_depth); + context->getSettingsRef().max_parser_depth); } // Now we know for sure where the query ends. @@ -1287,7 +1290,7 @@ private: // Prints changed settings to stderr. Useful for debugging fuzzing failures. 
void printChangedSettings() const { - const auto & changes = context.getSettingsRef().changes(); + const auto & changes = context->getSettingsRef().changes(); if (!changes.empty()) { fmt::print(stderr, "Changed settings: "); @@ -1587,11 +1590,11 @@ private: if (is_interactive) { // Generate a new query_id - context.setCurrentQueryId(""); + context->setCurrentQueryId(""); for (const auto & query_id_format : query_id_formats) { writeString(query_id_format.first, std_out); - writeString(fmt::format(query_id_format.second, fmt::arg("query_id", context.getCurrentQueryId())), std_out); + writeString(fmt::format(query_id_format.second, fmt::arg("query_id", context->getCurrentQueryId())), std_out); writeChar('\n', std_out); std_out.next(); } @@ -1607,12 +1610,12 @@ private: { /// Temporarily apply query settings to context. std::optional old_settings; - SCOPE_EXIT({ if (old_settings) context.setSettings(*old_settings); }); + SCOPE_EXIT_SAFE({ if (old_settings) context->setSettings(*old_settings); }); auto apply_query_settings = [&](const IAST & settings_ast) { if (!old_settings) - old_settings.emplace(context.getSettingsRef()); - context.applySettingsChanges(settings_ast.as()->changes); + old_settings.emplace(context->getSettingsRef()); + context->applySettingsChanges(settings_ast.as()->changes); }; const auto * insert = parsed_query->as(); if (insert && insert->settings_ast) @@ -1650,7 +1653,7 @@ private: if (change.name == "profile") current_profile = change.value.safeGet(); else - context.applySettingChange(change); + context->applySettingChange(change); } } @@ -1722,10 +1725,10 @@ private: connection->sendQuery( connection_parameters.timeouts, query_to_send, - context.getCurrentQueryId(), + context->getCurrentQueryId(), query_processing_stage, - &context.getSettingsRef(), - &context.getClientInfo(), + &context->getSettingsRef(), + &context->getClientInfo(), true); sendExternalTables(); @@ -1763,10 +1766,10 @@ private: connection->sendQuery( connection_parameters.timeouts, query_to_send, - context.getCurrentQueryId(), + context->getCurrentQueryId(), query_processing_stage, - &context.getSettingsRef(), - &context.getClientInfo(), + &context->getSettingsRef(), + &context->getClientInfo(), true); sendExternalTables(); @@ -1789,7 +1792,7 @@ private: ParserQuery parser(end); ASTPtr res; - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); size_t max_length = 0; if (!allow_multi_statements) max_length = settings.max_query_size; @@ -1877,7 +1880,7 @@ private: current_format = insert->format; } - BlockInputStreamPtr block_input = context.getInputFormat( + BlockInputStreamPtr block_input = context->getInputFormat( current_format, buf, sample, insert_format_max_block_size); if (columns_description.hasDefaults()) @@ -2201,9 +2204,9 @@ private: /// It is not clear how to write progress with parallel formatting. It may increase code complexity significantly. if (!need_render_progress) - block_out_stream = context.getOutputStreamParallelIfPossible(current_format, *out_buf, block); + block_out_stream = context->getOutputStreamParallelIfPossible(current_format, *out_buf, block); else - block_out_stream = context.getOutputStream(current_format, *out_buf, block); + block_out_stream = context->getOutputStream(current_format, *out_buf, block); block_out_stream->writePrefix(); } @@ -2467,6 +2470,17 @@ private: std::cout << DBMS_NAME << " client version " << VERSION_STRING << VERSION_OFFICIAL << "." 
<< std::endl; } + static void clearTerminal() + { + /// Clear from cursor until end of screen. + /// It is needed if garbage is left in terminal. + /// Show cursor. It can be left hidden by invocation of previous programs. + /// A test for this feature: perl -e 'print "x"x100000'; echo -ne '\033[0;0H\033[?25l'; clickhouse-client + std::cout << + "\033[0J" + "\033[?25h"; + } + public: void init(int argc, char ** argv) { @@ -2696,12 +2710,12 @@ public: } } - context.makeGlobalContext(); - context.setSettings(cmd_settings); + context->makeGlobalContext(); + context->setSettings(cmd_settings); /// Copy settings-related program options to config. /// TODO: Is this code necessary? - for (const auto & setting : context.getSettingsRef().all()) + for (const auto & setting : context->getSettingsRef().all()) { const auto & name = setting.getName(); if (options.count(name)) @@ -2793,7 +2807,7 @@ public: { std::string traceparent = options["opentelemetry-traceparent"].as(); std::string error; - if (!context.getClientInfo().client_trace_context.parseTraceparentHeader( + if (!context->getClientInfo().client_trace_context.parseTraceparentHeader( traceparent, error)) { throw Exception(ErrorCodes::BAD_ARGUMENTS, @@ -2804,7 +2818,7 @@ public: if (options.count("opentelemetry-tracestate")) { - context.getClientInfo().client_trace_context.tracestate = + context->getClientInfo().client_trace_context.tracestate = options["opentelemetry-tracestate"].as(); } diff --git a/programs/client/ConnectionParameters.cpp b/programs/client/ConnectionParameters.cpp index 19734dd5ffa..6faf43759df 100644 --- a/programs/client/ConnectionParameters.cpp +++ b/programs/client/ConnectionParameters.cpp @@ -7,6 +7,8 @@ #include #include #include +#include +#include #include #include @@ -60,7 +62,9 @@ ConnectionParameters::ConnectionParameters(const Poco::Util::AbstractConfigurati #endif } - compression = config.getBool("compression", true) ? Protocol::Compression::Enable : Protocol::Compression::Disable; + /// By default compression is disabled if address looks like localhost. + compression = config.getBool("compression", !isLocalAddress(DNSResolver::instance().resolveHost(host))) + ? 
Protocol::Compression::Enable : Protocol::Compression::Disable; timeouts = ConnectionTimeouts( Poco::Timespan(config.getInt("connect_timeout", DBMS_DEFAULT_CONNECT_TIMEOUT_SEC), 0), diff --git a/programs/client/QueryFuzzer.cpp b/programs/client/QueryFuzzer.cpp index 0c8dc0731f9..6243e2c82ec 100644 --- a/programs/client/QueryFuzzer.cpp +++ b/programs/client/QueryFuzzer.cpp @@ -37,34 +37,33 @@ namespace ErrorCodes Field QueryFuzzer::getRandomField(int type) { + static constexpr Int64 bad_int64_values[] + = {-2, -1, 0, 1, 2, 3, 7, 10, 100, 255, 256, 257, 1023, 1024, + 1025, 65535, 65536, 65537, 1024 * 1024 - 1, 1024 * 1024, + 1024 * 1024 + 1, INT_MIN - 1ll, INT_MIN, INT_MIN + 1, + INT_MAX - 1, INT_MAX, INT_MAX + 1ll, INT64_MIN, INT64_MIN + 1, + INT64_MAX - 1, INT64_MAX}; switch (type) { case 0: { - static constexpr Int64 values[] - = {-2, -1, 0, 1, 2, 3, 7, 10, 100, 255, 256, 257, 1023, 1024, - 1025, 65535, 65536, 65537, 1024 * 1024 - 1, 1024 * 1024, - 1024 * 1024 + 1, INT64_MIN, INT64_MAX}; - return values[fuzz_rand() % (sizeof(values) / sizeof(*values))]; + return bad_int64_values[fuzz_rand() % (sizeof(bad_int64_values) + / sizeof(*bad_int64_values))]; } case 1: { static constexpr float values[] - = {NAN, INFINITY, -INFINITY, 0., 0.0001, 0.5, 0.9999, - 1., 1.0001, 2., 10.0001, 100.0001, 1000.0001}; - return values[fuzz_rand() % (sizeof(values) / sizeof(*values))]; + = {NAN, INFINITY, -INFINITY, 0., -0., 0.0001, 0.5, 0.9999, + 1., 1.0001, 2., 10.0001, 100.0001, 1000.0001, 1e10, 1e20, + FLT_MIN, FLT_MIN + FLT_EPSILON, FLT_MAX, FLT_MAX + FLT_EPSILON}; return values[fuzz_rand() % (sizeof(values) / sizeof(*values))]; } case 2: { - static constexpr Int64 values[] - = {-2, -1, 0, 1, 2, 3, 7, 10, 100, 255, 256, 257, 1023, 1024, - 1025, 65535, 65536, 65537, 1024 * 1024 - 1, 1024 * 1024, - 1024 * 1024 + 1, INT64_MIN, INT64_MAX}; static constexpr UInt64 scales[] = {0, 1, 2, 10}; return DecimalField( - values[fuzz_rand() % (sizeof(values) / sizeof(*values))], - scales[fuzz_rand() % (sizeof(scales) / sizeof(*scales))] - ); + bad_int64_values[fuzz_rand() % (sizeof(bad_int64_values) + / sizeof(*bad_int64_values))], + scales[fuzz_rand() % (sizeof(scales) / sizeof(*scales))]); } default: assert(false); diff --git a/programs/client/Suggest.cpp b/programs/client/Suggest.cpp index dfa7048349e..8d4c0fdbd5a 100644 --- a/programs/client/Suggest.cpp +++ b/programs/client/Suggest.cpp @@ -108,14 +108,6 @@ void Suggest::loadImpl(Connection & connection, const ConnectionTimeouts & timeo " UNION ALL " "SELECT cluster FROM system.clusters" " UNION ALL " - "SELECT name FROM system.errors" - " UNION ALL " - "SELECT event FROM system.events" - " UNION ALL " - "SELECT metric FROM system.asynchronous_metrics" - " UNION ALL " - "SELECT metric FROM system.metrics" - " UNION ALL " "SELECT macro FROM system.macros" " UNION ALL " "SELECT policy_name FROM system.storage_policies" @@ -139,17 +131,12 @@ void Suggest::loadImpl(Connection & connection, const ConnectionTimeouts & timeo query << ") WHERE notEmpty(res)"; - Settings settings; - /// To show all rows from: - /// - system.errors - /// - system.events - settings.system_events_show_zero_values = true; - fetch(connection, timeouts, query.str(), settings); + fetch(connection, timeouts, query.str()); } -void Suggest::fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query, Settings & settings) +void Suggest::fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query) { - connection.sendQuery(timeouts, query, 
"" /* query_id */, QueryProcessingStage::Complete, &settings); + connection.sendQuery(timeouts, query, "" /* query_id */, QueryProcessingStage::Complete); while (true) { diff --git a/programs/client/Suggest.h b/programs/client/Suggest.h index 0049bc08ebf..03332088cbe 100644 --- a/programs/client/Suggest.h +++ b/programs/client/Suggest.h @@ -33,7 +33,7 @@ public: private: void loadImpl(Connection & connection, const ConnectionTimeouts & timeouts, size_t suggestion_limit); - void fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query, Settings & settings); + void fetch(Connection & connection, const ConnectionTimeouts & timeouts, const std::string & query); void fillWordsFromBlock(const Block & block); /// Words are fetched asynchronously. diff --git a/programs/config_tools.h.in b/programs/config_tools.h.in index 7cb5a6d883a..abe9ef8c562 100644 --- a/programs/config_tools.h.in +++ b/programs/config_tools.h.in @@ -15,3 +15,4 @@ #cmakedefine01 ENABLE_CLICKHOUSE_GIT_IMPORT #cmakedefine01 ENABLE_CLICKHOUSE_INSTALL #cmakedefine01 ENABLE_CLICKHOUSE_ODBC_BRIDGE +#cmakedefine01 ENABLE_CLICKHOUSE_LIBRARY_BRIDGE diff --git a/programs/copier/ClusterCopier.cpp b/programs/copier/ClusterCopier.cpp index bede40d65f5..aa9b359993e 100644 --- a/programs/copier/ClusterCopier.cpp +++ b/programs/copier/ClusterCopier.cpp @@ -22,7 +22,7 @@ namespace ErrorCodes void ClusterCopier::init() { - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); task_description_watch_callback = [this] (const Coordination::WatchResponse & response) { @@ -39,14 +39,14 @@ void ClusterCopier::init() task_cluster_initial_config = task_cluster_current_config; task_cluster->loadTasks(*task_cluster_initial_config); - context.setClustersConfig(task_cluster_initial_config, task_cluster->clusters_prefix); + getContext()->setClustersConfig(task_cluster_initial_config, task_cluster->clusters_prefix); /// Set up shards and their priority task_cluster->random_engine.seed(task_cluster->random_device()); for (auto & task_table : task_cluster->table_tasks) { - task_table.cluster_pull = context.getCluster(task_table.cluster_pull_name); - task_table.cluster_push = context.getCluster(task_table.cluster_push_name); + task_table.cluster_pull = getContext()->getCluster(task_table.cluster_pull_name); + task_table.cluster_push = getContext()->getCluster(task_table.cluster_push_name); task_table.initShards(task_cluster->random_engine); } @@ -206,7 +206,7 @@ void ClusterCopier::uploadTaskDescription(const std::string & task_path, const s if (task_config_str.empty()) return; - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); zookeeper->createAncestors(local_task_description_path); auto code = zookeeper->tryCreate(local_task_description_path, task_config_str, zkutil::CreateMode::Persistent); @@ -219,7 +219,7 @@ void ClusterCopier::uploadTaskDescription(const std::string & task_path, const s void ClusterCopier::reloadTaskDescription() { - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); task_description_watch_zookeeper = zookeeper; String task_config_str; @@ -235,7 +235,7 @@ void ClusterCopier::reloadTaskDescription() /// Setup settings task_cluster->reloadSettings(*config); - context.setSettings(task_cluster->settings_common); + getContext()->setSettings(task_cluster->settings_common); task_cluster_current_config = config; task_description_current_stat = stat; @@ -440,7 +440,7 @@ bool 
ClusterCopier::checkPartitionPieceIsDone(const TaskTable & task_table, cons { LOG_DEBUG(log, "Check that all shards processed partition {} piece {} successfully", partition_name, piece_number); - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); /// Collect all shards that contain partition piece number piece_number. Strings piece_status_paths; @@ -532,7 +532,7 @@ TaskStatus ClusterCopier::tryMoveAllPiecesToDestinationTable(const TaskTable & t LOG_DEBUG(log, "Try to move {} to destination table", partition_name); - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); const auto current_partition_attach_is_active = task_table.getPartitionAttachIsActivePath(partition_name); const auto current_partition_attach_is_done = task_table.getPartitionAttachIsDonePath(partition_name); @@ -1095,7 +1095,7 @@ TaskStatus ClusterCopier::tryCreateDestinationTable(const ConnectionTimeouts & t = rewriteCreateQueryStorage(task_shard->current_pull_table_create_query, task_table.table_push, task_table.engine_push_ast); auto & create = create_query_push_ast->as(); create.if_not_exists = true; - InterpreterCreateQuery::prepareOnClusterQuery(create, context, task_table.cluster_push_name); + InterpreterCreateQuery::prepareOnClusterQuery(create, getContext(), task_table.cluster_push_name); String query = queryToString(create_query_push_ast); LOG_DEBUG(log, "Create destination tables. Query: {}", query); @@ -1211,7 +1211,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( auto split_table_for_current_piece = task_shard.list_of_split_tables_on_shard[current_piece_number]; - auto zookeeper = context.getZooKeeper(); + auto zookeeper = getContext()->getZooKeeper(); const String piece_is_dirty_flag_path = partition_piece.getPartitionPieceIsDirtyPath(); const String piece_is_dirty_cleaned_path = partition_piece.getPartitionPieceIsCleanedPath(); @@ -1262,7 +1262,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( ParserQuery p_query(query.data() + query.size()); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); return parseQuery(p_query, query, settings.max_query_size, settings.max_parser_depth); }; @@ -1366,10 +1366,10 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( ASTPtr query_select_ast = get_select_query(split_table_for_current_piece, "count()", /*enable_splitting*/ true); UInt64 count; { - Context local_context = context; + auto local_context = Context::createCopy(context); // Use pull (i.e. readonly) settings, but fetch data from destination servers - local_context.setSettings(task_cluster->settings_pull); - local_context.setSetting("skip_unavailable_shards", true); + local_context->setSettings(task_cluster->settings_pull); + local_context->setSetting("skip_unavailable_shards", true); Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_select_ast, local_context)->execute().getInputStream()); count = (block) ? 
block.safeGetByPosition(0).column->getUInt(0) : 0; @@ -1468,7 +1468,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( query += "INSERT INTO " + getQuotedTable(split_table_for_current_piece) + " VALUES "; ParserQuery p_query(query.data() + query.size()); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); query_insert_ast = parseQuery(p_query, query, settings.max_query_size, settings.max_parser_depth); LOG_DEBUG(log, "Executing INSERT query: {}", query); @@ -1476,18 +1476,18 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl( try { - std::unique_ptr context_select = std::make_unique(context); + auto context_select = Context::createCopy(context); context_select->setSettings(task_cluster->settings_pull); - std::unique_ptr context_insert = std::make_unique(context); + auto context_insert = Context::createCopy(context); context_insert->setSettings(task_cluster->settings_push); /// Custom INSERT SELECT implementation BlockInputStreamPtr input; BlockOutputStreamPtr output; { - BlockIO io_select = InterpreterFactory::get(query_select_ast, *context_select)->execute(); - BlockIO io_insert = InterpreterFactory::get(query_insert_ast, *context_insert)->execute(); + BlockIO io_select = InterpreterFactory::get(query_select_ast, context_select)->execute(); + BlockIO io_insert = InterpreterFactory::get(query_insert_ast, context_insert)->execute(); input = io_select.getInputStream(); output = io_insert.out; @@ -1581,7 +1581,7 @@ void ClusterCopier::dropAndCreateLocalTable(const ASTPtr & create_ast) const auto & create = create_ast->as(); dropLocalTableIfExists({create.database, create.table}); - InterpreterCreateQuery interpreter(create_ast, context); + InterpreterCreateQuery interpreter(create_ast, getContext()); interpreter.execute(); } @@ -1592,7 +1592,7 @@ void ClusterCopier::dropLocalTableIfExists(const DatabaseAndTableName & table_na drop_ast->database = table_name.first; drop_ast->table = table_name.second; - InterpreterDropQuery interpreter(drop_ast, context); + InterpreterDropQuery interpreter(drop_ast, getContext()); interpreter.execute(); } @@ -1654,8 +1654,8 @@ void ClusterCopier::dropParticularPartitionPieceFromAllHelpingTables(const TaskT String ClusterCopier::getRemoteCreateTable(const DatabaseAndTableName & table, Connection & connection, const Settings & settings) { - Context remote_context(context); - remote_context.setSettings(settings); + auto remote_context = Context::createCopy(context); + remote_context->setSettings(settings); String query = "SHOW CREATE TABLE " + getQuotedTable(table); Block block = getBlockWithAllStreamData(std::make_shared( @@ -1674,7 +1674,7 @@ ASTPtr ClusterCopier::getCreateTableForPullShard(const ConnectionTimeouts & time task_cluster->settings_pull); ParserCreateQuery parser_create_query; - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); return parseQuery(parser_create_query, create_query_pull_str, settings.max_query_size, settings.max_parser_depth); } @@ -1703,7 +1703,7 @@ void ClusterCopier::createShardInternalTables(const ConnectionTimeouts & timeout /// Create special cluster with single shard String shard_read_cluster_name = read_shard_prefix + task_table.cluster_pull_name; ClusterPtr cluster_pull_current_shard = task_table.cluster_pull->getClusterWithSingleShard(task_shard.indexInCluster()); - context.setCluster(shard_read_cluster_name, cluster_pull_current_shard); + getContext()->setCluster(shard_read_cluster_name, 
cluster_pull_current_shard); auto storage_shard_ast = createASTStorageDistributed(shard_read_cluster_name, task_table.table_pull.first, task_table.table_pull.second); @@ -1763,13 +1763,13 @@ std::set ClusterCopier::getShardPartitions(const ConnectionTimeouts & ti } ParserQuery parser_query(query.data() + query.size()); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); ASTPtr query_ast = parseQuery(parser_query, query, settings.max_query_size, settings.max_parser_depth); LOG_DEBUG(log, "Computing destination partition set, executing query: {}", query); - Context local_context = context; - local_context.setSettings(task_cluster->settings_pull); + auto local_context = Context::createCopy(context); + local_context->setSettings(task_cluster->settings_pull); Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()); if (block) @@ -1809,11 +1809,11 @@ bool ClusterCopier::checkShardHasPartition(const ConnectionTimeouts & timeouts, LOG_DEBUG(log, "Checking shard {} for partition {} existence, executing query: {}", task_shard.getDescription(), partition_quoted_name, query); ParserQuery parser_query(query.data() + query.size()); -const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); ASTPtr query_ast = parseQuery(parser_query, query, settings.max_query_size, settings.max_parser_depth); - Context local_context = context; - local_context.setSettings(task_cluster->settings_pull); + auto local_context = Context::createCopy(context); + local_context->setSettings(task_cluster->settings_pull); return InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()->read().rows() != 0; } @@ -1848,11 +1848,11 @@ bool ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTi LOG_DEBUG(log, "Checking shard {} for partition {} piece {} existence, executing query: {}", task_shard.getDescription(), partition_quoted_name, std::to_string(current_piece_number), query); ParserQuery parser_query(query.data() + query.size()); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); ASTPtr query_ast = parseQuery(parser_query, query, settings.max_query_size, settings.max_parser_depth); - Context local_context = context; - local_context.setSettings(task_cluster->settings_pull); + auto local_context = Context::createCopy(context); + local_context->setSettings(task_cluster->settings_pull); auto result = InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()->read().rows(); if (result != 0) LOG_DEBUG(log, "Partition {} piece number {} is PRESENT on shard {}", partition_quoted_name, std::to_string(current_piece_number), task_shard.getDescription()); @@ -1908,7 +1908,7 @@ UInt64 ClusterCopier::executeQueryOnCluster( /// In that case we don't have local replicas, but do it just in case for (UInt64 i = 0; i < num_local_replicas; ++i) { - auto interpreter = InterpreterFactory::get(query_ast, context); + auto interpreter = InterpreterFactory::get(query_ast, getContext()); interpreter->execute(); if (increment_and_check_exit()) @@ -1923,8 +1923,8 @@ UInt64 ClusterCopier::executeQueryOnCluster( auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(shard_settings).getSaturated(shard_settings.max_execution_time); auto connections = shard.pool->getMany(timeouts, &shard_settings, pool_mode); - Context shard_context(context); - 
shard_context.setSettings(shard_settings); + auto shard_context = Context::createCopy(context); + shard_context->setSettings(shard_settings); for (auto & connection : connections) { diff --git a/programs/copier/ClusterCopier.h b/programs/copier/ClusterCopier.h index 95bb54cf4e1..e875ca7df2e 100644 --- a/programs/copier/ClusterCopier.h +++ b/programs/copier/ClusterCopier.h @@ -12,18 +12,17 @@ namespace DB { -class ClusterCopier +class ClusterCopier : WithContext { public: ClusterCopier(const String & task_path_, const String & host_id_, const String & proxy_database_name_, - Context & context_) - : + ContextPtr context_) + : WithContext(context_), task_zookeeper_path(task_path_), host_id(host_id_), working_database_name(proxy_database_name_), - context(context_), log(&Poco::Logger::get("ClusterCopier")) {} void init(); @@ -36,7 +35,7 @@ public: /// Compute set of partitions, assume set of partitions aren't changed during the processing void discoverTablePartitions(const ConnectionTimeouts & timeouts, TaskTable & task_table, UInt64 num_threads = 0); - void uploadTaskDescription(const std::string & task_path, const std::string & task_file, const bool force); + void uploadTaskDescription(const std::string & task_path, const std::string & task_file, bool force); void reloadTaskDescription(); @@ -120,7 +119,7 @@ protected: /// Removes MATERIALIZED and ALIAS columns from create table query static ASTPtr removeAliasColumnsFromCreateQuery(const ASTPtr & query_ast); - bool tryDropPartitionPiece(ShardPartition & task_partition, const size_t current_piece_number, + bool tryDropPartitionPiece(ShardPartition & task_partition, size_t current_piece_number, const zkutil::ZooKeeperPtr & zookeeper, const CleanStateClock & clean_state_clock); static constexpr UInt64 max_table_tries = 3; @@ -141,7 +140,7 @@ protected: TaskStatus processPartitionPieceTaskImpl(const ConnectionTimeouts & timeouts, ShardPartition & task_partition, - const size_t current_piece_number, + size_t current_piece_number, bool is_unprioritized_task); void dropAndCreateLocalTable(const ASTPtr & create_ast); @@ -219,7 +218,6 @@ private: bool experimental_use_sample_offset{false}; - Context & context; Poco::Logger * log; std::chrono::milliseconds default_sleep_time{1000}; diff --git a/programs/copier/ClusterCopierApp.cpp b/programs/copier/ClusterCopierApp.cpp index e3169a49ecf..d3fff616b65 100644 --- a/programs/copier/ClusterCopierApp.cpp +++ b/programs/copier/ClusterCopierApp.cpp @@ -3,6 +3,7 @@ #include #include #include +#include #include @@ -110,9 +111,9 @@ void ClusterCopierApp::mainImpl() LOG_INFO(log, "Starting clickhouse-copier (id {}, host_id {}, path {}, revision {})", process_id, host_id, process_path, ClickHouseRevision::getVersionRevision()); SharedContextHolder shared_context = Context::createShared(); - auto context = std::make_unique(Context::createGlobal(shared_context.get())); + auto context = Context::createGlobal(shared_context.get()); context->makeGlobalContext(); - SCOPE_EXIT(context->shutdown()); + SCOPE_EXIT_SAFE(context->shutdown()); context->setConfig(loaded_config.configuration); context->setApplicationType(Context::ApplicationType::LOCAL); @@ -127,13 +128,13 @@ void ClusterCopierApp::mainImpl() registerFormats(); static const std::string default_database = "_local"; - DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared(default_database, *context)); + DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared(default_database, context)); 
context->setCurrentDatabase(default_database); /// Initialize query scope just in case. - CurrentThread::QueryScope query_scope(*context); + CurrentThread::QueryScope query_scope(context); - auto copier = std::make_unique(task_path, host_id, default_database, *context); + auto copier = std::make_unique(task_path, host_id, default_database, context); copier->setSafeMode(is_safe_mode); copier->setCopyFaultProbability(copy_fault_probability); copier->setMoveFaultProbability(move_fault_probability); diff --git a/programs/copier/Internals.cpp b/programs/copier/Internals.cpp index ea2be469945..bec612a8226 100644 --- a/programs/copier/Internals.cpp +++ b/programs/copier/Internals.cpp @@ -222,8 +222,8 @@ Names extractPrimaryKeyColumnNames(const ASTPtr & storage_ast) { String pk_column = primary_key_expr_list->children[i]->getColumnName(); if (pk_column != sorting_key_column) - throw Exception("Primary key must be a prefix of the sorting key, but in position " - + toString(i) + " its column is " + pk_column + ", not " + sorting_key_column, + throw Exception("Primary key must be a prefix of the sorting key, but the column in the position " + + toString(i) + " is " + sorting_key_column +", not " + pk_column, ErrorCodes::BAD_ARGUMENTS); if (!primary_key_columns_set.emplace(pk_column).second) diff --git a/programs/format/Format.cpp b/programs/format/Format.cpp index ba3d6e8557b..5bf19191353 100644 --- a/programs/format/Format.cpp +++ b/programs/format/Format.cpp @@ -102,8 +102,8 @@ int mainEntryClickHouseFormat(int argc, char ** argv) } SharedContextHolder shared_context = Context::createShared(); - Context context = Context::createGlobal(shared_context.get()); - context.makeGlobalContext(); + auto context = Context::createGlobal(shared_context.get()); + context->makeGlobalContext(); registerFunctions(); registerAggregateFunctions(); diff --git a/programs/install/Install.cpp b/programs/install/Install.cpp index ef72624e7ab..2b0f390f709 100644 --- a/programs/install/Install.cpp +++ b/programs/install/Install.cpp @@ -71,6 +71,9 @@ namespace ErrorCodes } +/// ANSI escape sequence for intense color in terminal. +#define HILITE "\033[1m" +#define END_HILITE "\033[0m" using namespace DB; namespace po = boost::program_options; @@ -559,20 +562,32 @@ int mainEntryClickHouseInstall(int argc, char ** argv) bool stdin_is_a_tty = isatty(STDIN_FILENO); bool stdout_is_a_tty = isatty(STDOUT_FILENO); - bool is_interactive = stdin_is_a_tty && stdout_is_a_tty; + + /// dpkg or apt installers can ask for non-interactive work explicitly. + + const char * debian_frontend_var = getenv("DEBIAN_FRONTEND"); + bool noninteractive = debian_frontend_var && debian_frontend_var == std::string_view("noninteractive"); + + bool is_interactive = !noninteractive && stdin_is_a_tty && stdout_is_a_tty; + + /// We can ask password even if stdin is closed/redirected but /dev/tty is available. + bool can_ask_password = !noninteractive && stdout_is_a_tty; if (has_password_for_default_user) { - fmt::print("Password for default user is already specified. To remind or reset, see {} and {}.\n", + fmt::print(HILITE "Password for default user is already specified. To remind or reset, see {} and {}." END_HILITE "\n", users_config_file.string(), users_d.string()); } - else if (!is_interactive) + else if (!can_ask_password) { - fmt::print("Password for default user is empty string. See {} and {} to change it.\n", + fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." 
END_HILITE "\n", users_config_file.string(), users_d.string()); } else { + /// NOTE: When installing debian package with dpkg -i, stdin is not a terminal but we are still being able to enter password. + /// More sophisticated method with /dev/tty is used inside the `readpassphrase` function. + char buf[1000] = {}; std::string password; if (auto * result = readpassphrase("Enter password for default user: ", buf, sizeof(buf), 0)) @@ -600,7 +615,7 @@ int mainEntryClickHouseInstall(int argc, char ** argv) "
\n"; out.sync(); out.finalize(); - fmt::print("Password for default user is saved in file {}.\n", password_file); + fmt::print(HILITE "Password for default user is saved in file {}." END_HILITE "\n", password_file); #else out << "\n" " \n" @@ -611,12 +626,12 @@ int mainEntryClickHouseInstall(int argc, char ** argv) "\n"; out.sync(); out.finalize(); - fmt::print("Password for default user is saved in plaintext in file {}.\n", password_file); + fmt::print(HILITE "Password for default user is saved in plaintext in file {}." END_HILITE "\n", password_file); #endif has_password_for_default_user = true; } else - fmt::print("Password for default user is empty string. See {} and {} to change it.\n", + fmt::print(HILITE "Password for default user is empty string. See {} and {} to change it." END_HILITE "\n", users_config_file.string(), users_d.string()); } @@ -641,7 +656,6 @@ int mainEntryClickHouseInstall(int argc, char ** argv) " This is optional. Taskstats accounting will be disabled." " To enable taskstats accounting you may add the required capability later manually.\"", "/tmp/test_setcap.sh", fs::canonical(main_bin_path).string()); - fmt::print(" {}\n", command); executeScript(command); #endif diff --git a/programs/library-bridge/CMakeLists.txt b/programs/library-bridge/CMakeLists.txt new file mode 100644 index 00000000000..0913c6e4a9a --- /dev/null +++ b/programs/library-bridge/CMakeLists.txt @@ -0,0 +1,25 @@ +set (CLICKHOUSE_LIBRARY_BRIDGE_SOURCES + library-bridge.cpp + LibraryInterface.cpp + LibraryBridge.cpp + Handlers.cpp + HandlerFactory.cpp + SharedLibraryHandler.cpp + SharedLibraryHandlerFactory.cpp +) + +if (OS_LINUX) + set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--no-export-dynamic") +endif () + +add_executable(clickhouse-library-bridge ${CLICKHOUSE_LIBRARY_BRIDGE_SOURCES}) + +target_link_libraries(clickhouse-library-bridge PRIVATE + daemon + dbms + bridge +) + +set_target_properties(clickhouse-library-bridge PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..) + +install(TARGETS clickhouse-library-bridge RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) diff --git a/programs/library-bridge/HandlerFactory.cpp b/programs/library-bridge/HandlerFactory.cpp new file mode 100644 index 00000000000..9f53a24156f --- /dev/null +++ b/programs/library-bridge/HandlerFactory.cpp @@ -0,0 +1,23 @@ +#include "HandlerFactory.h" + +#include +#include +#include "Handlers.h" + + +namespace DB +{ + std::unique_ptr LibraryBridgeHandlerFactory::createRequestHandler(const HTTPServerRequest & request) + { + Poco::URI uri{request.getURI()}; + LOG_DEBUG(log, "Request URI: {}", uri.toString()); + + if (uri == "/ping" && request.getMethod() == Poco::Net::HTTPRequest::HTTP_GET) + return std::make_unique(keep_alive_timeout); + + if (request.getMethod() == Poco::Net::HTTPRequest::HTTP_POST) + return std::make_unique(keep_alive_timeout, getContext()); + + return nullptr; + } +} diff --git a/programs/library-bridge/HandlerFactory.h b/programs/library-bridge/HandlerFactory.h new file mode 100644 index 00000000000..93f0721bf01 --- /dev/null +++ b/programs/library-bridge/HandlerFactory.h @@ -0,0 +1,37 @@ +#pragma once + +#include +#include +#include + + +namespace DB +{ + +class SharedLibraryHandler; +using SharedLibraryHandlerPtr = std::shared_ptr; + +/// Factory for '/ping', '/' handlers. 
+class LibraryBridgeHandlerFactory : public HTTPRequestHandlerFactory, WithContext +{ +public: + LibraryBridgeHandlerFactory( + const std::string & name_, + size_t keep_alive_timeout_, + ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get(name_)) + , name(name_) + , keep_alive_timeout(keep_alive_timeout_) + { + } + + std::unique_ptr createRequestHandler(const HTTPServerRequest & request) override; + +private: + Poco::Logger * log; + std::string name; + size_t keep_alive_timeout; +}; + +} diff --git a/programs/library-bridge/Handlers.cpp b/programs/library-bridge/Handlers.cpp new file mode 100644 index 00000000000..6a1bfbbccb7 --- /dev/null +++ b/programs/library-bridge/Handlers.cpp @@ -0,0 +1,288 @@ +#include "Handlers.h" +#include "SharedLibraryHandlerFactory.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + + +namespace DB +{ +namespace +{ + std::shared_ptr parseColumns(std::string && column_string) + { + auto sample_block = std::make_shared(); + auto names_and_types = NamesAndTypesList::parse(column_string); + + for (const NameAndTypePair & column_data : names_and_types) + sample_block->insert({column_data.type, column_data.name}); + + return sample_block; + } + + std::vector parseIdsFromBinary(const std::string & ids_string) + { + ReadBufferFromString buf(ids_string); + std::vector ids; + readVectorBinary(ids, buf); + return ids; + } + + std::vector parseNamesFromBinary(const std::string & names_string) + { + ReadBufferFromString buf(names_string); + std::vector names; + readVectorBinary(names, buf); + return names; + } +} + + +void LibraryRequestHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) +{ + LOG_TRACE(log, "Request URI: {}", request.getURI()); + HTMLForm params(request); + + if (!params.has("method")) + { + processError(response, "No 'method' in request URL"); + return; + } + + if (!params.has("dictionary_id")) + { + processError(response, "No 'dictionary_id in request URL"); + return; + } + + std::string method = params.get("method"); + std::string dictionary_id = params.get("dictionary_id"); + LOG_TRACE(log, "Library method: '{}', dictionary id: {}", method, dictionary_id); + + WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, keep_alive_timeout); + + try + { + if (method == "libNew") + { + auto & read_buf = request.getStream(); + params.read(read_buf); + + if (!params.has("library_path")) + { + processError(response, "No 'library_path' in request URL"); + return; + } + + if (!params.has("library_settings")) + { + processError(response, "No 'library_settings' in request URL"); + return; + } + + std::string library_path = params.get("library_path"); + const auto & settings_string = params.get("library_settings"); + std::vector library_settings = parseNamesFromBinary(settings_string); + + /// Needed for library dictionary + if (!params.has("attributes_names")) + { + processError(response, "No 'attributes_names' in request URL"); + return; + } + + const auto & attributes_string = params.get("attributes_names"); + std::vector attributes_names = parseNamesFromBinary(attributes_string); + + /// Needed to parse block from binary string format + if (!params.has("sample_block")) + { + processError(response, "No 'sample_block' in request URL"); + return; + } + std::string sample_block_string = params.get("sample_block"); + + std::shared_ptr sample_block; + try + { + sample_block = 
parseColumns(std::move(sample_block_string)); + } + catch (const Exception & ex) + { + processError(response, "Invalid 'sample_block' parameter in request body '" + ex.message() + "'"); + LOG_WARNING(log, ex.getStackTraceString()); + return; + } + + if (!params.has("null_values")) + { + processError(response, "No 'null_values' in request URL"); + return; + } + + ReadBufferFromString read_block_buf(params.get("null_values")); + auto format = FormatFactory::instance().getInput(FORMAT, read_block_buf, *sample_block, getContext(), DEFAULT_BLOCK_SIZE); + auto reader = std::make_shared(format); + auto sample_block_with_nulls = reader->read(); + + LOG_DEBUG(log, "Dictionary sample block with null values: {}", sample_block_with_nulls.dumpStructure()); + + SharedLibraryHandlerFactory::instance().create(dictionary_id, library_path, library_settings, sample_block_with_nulls, attributes_names); + writeStringBinary("1", out); + } + else if (method == "libClone") + { + if (!params.has("from_dictionary_id")) + { + processError(response, "No 'from_dictionary_id' in request URL"); + return; + } + + std::string from_dictionary_id = params.get("from_dictionary_id"); + LOG_TRACE(log, "Calling libClone from {} to {}", from_dictionary_id, dictionary_id); + SharedLibraryHandlerFactory::instance().clone(from_dictionary_id, dictionary_id); + writeStringBinary("1", out); + } + else if (method == "libDelete") + { + SharedLibraryHandlerFactory::instance().remove(dictionary_id); + writeStringBinary("1", out); + } + else if (method == "isModified") + { + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + bool res = library_handler->isModified(); + writeStringBinary(std::to_string(res), out); + } + else if (method == "supportsSelectiveLoad") + { + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + bool res = library_handler->supportsSelectiveLoad(); + writeStringBinary(std::to_string(res), out); + } + else if (method == "loadAll") + { + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + const auto & sample_block = library_handler->getSampleBlock(); + auto input = library_handler->loadAll(); + + BlockOutputStreamPtr output = FormatFactory::instance().getOutputStream(FORMAT, out, sample_block, getContext()); + copyData(*input, *output); + } + else if (method == "loadIds") + { + params.read(request.getStream()); + + if (!params.has("ids")) + { + processError(response, "No 'ids' in request URL"); + return; + } + + std::vector ids = parseIdsFromBinary(params.get("ids")); + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + const auto & sample_block = library_handler->getSampleBlock(); + auto input = library_handler->loadIds(ids); + BlockOutputStreamPtr output = FormatFactory::instance().getOutputStream(FORMAT, out, sample_block, getContext()); + copyData(*input, *output); + } + else if (method == "loadKeys") + { + if (!params.has("requested_block_sample")) + { + processError(response, "No 'requested_block_sample' in request URL"); + return; + } + + std::string requested_block_string = params.get("requested_block_sample"); + + std::shared_ptr requested_sample_block; + try + { + requested_sample_block = parseColumns(std::move(requested_block_string)); + } + catch (const Exception & ex) + { + processError(response, "Invalid 'requested_block' parameter in request body '" + ex.message() + "'"); + LOG_WARNING(log, ex.getStackTraceString()); + return; + } + + auto & read_buf = request.getStream(); 
+ auto format = FormatFactory::instance().getInput(FORMAT, read_buf, *requested_sample_block, getContext(), DEFAULT_BLOCK_SIZE); + auto reader = std::make_shared(format); + auto block = reader->read(); + + auto library_handler = SharedLibraryHandlerFactory::instance().get(dictionary_id); + const auto & sample_block = library_handler->getSampleBlock(); + auto input = library_handler->loadKeys(block.getColumns()); + BlockOutputStreamPtr output = FormatFactory::instance().getOutputStream(FORMAT, out, sample_block, getContext()); + copyData(*input, *output); + } + } + catch (...) + { + auto message = getCurrentExceptionMessage(true); + response.setStatusAndReason(Poco::Net::HTTPResponse::HTTP_INTERNAL_SERVER_ERROR, message); // can't call process_error, because of too soon response sending + + try + { + writeStringBinary(message, out); + out.finalize(); + } + catch (...) + { + tryLogCurrentException(log); + } + + tryLogCurrentException(log); + } + + try + { + out.finalize(); + } + catch (...) + { + tryLogCurrentException(log); + } +} + + +void LibraryRequestHandler::processError(HTTPServerResponse & response, const std::string & message) +{ + response.setStatusAndReason(HTTPResponse::HTTP_INTERNAL_SERVER_ERROR); + + if (!response.sent()) + *response.send() << message << std::endl; + + LOG_WARNING(log, message); +} + + +void PingHandler::handleRequest(HTTPServerRequest & /* request */, HTTPServerResponse & response) +{ + try + { + setResponseDefaultHeaders(response, keep_alive_timeout); + const char * data = "Ok.\n"; + response.sendBuffer(data, strlen(data)); + } + catch (...) + { + tryLogCurrentException("PingHandler"); + } +} + + +} diff --git a/programs/library-bridge/Handlers.h b/programs/library-bridge/Handlers.h new file mode 100644 index 00000000000..dac61d3a735 --- /dev/null +++ b/programs/library-bridge/Handlers.h @@ -0,0 +1,59 @@ +#pragma once + +#include +#include +#include +#include "SharedLibraryHandler.h" + + +namespace DB +{ + + +/// Handler for requests to a Library Dictionary Source; returns the response in RowBinary format. +/// When a library dictionary source is created, it sends a libNew request to the library bridge (which is started on the first +/// request to it, if it was not started yet). On this request a new SharedLibraryHandler is added to the +/// SharedLibraryHandlerFactory under the dictionary UUID. The libNew request carries: library_path, library_settings, +/// the names of the dictionary attributes, a sample block used to parse the block of null values, and the block of null values. Everything is +/// passed in binary format and is URL-encoded. When a dictionary is cloned, a new handler is created. +/// Each handler is unique to a dictionary.
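The comment block above describes the request protocol between a library dictionary source and the bridge. As a rough illustration only, the sketch below composes a `libNew` request URI carrying the parameters the handler checks for (`method`, `dictionary_id`, `library_path`, `library_settings`, `attributes_names`, `sample_block`, `null_values`). The bridge address, the UUID, and all parameter values are assumptions and not taken from this patch; a real client would send the settings, attribute names, and blocks in URL-encoded binary form as stated above.

```cpp
// Hypothetical sketch (not part of the patch): composing a "libNew" request
// for the library bridge with the parameters LibraryRequestHandler expects.
// Host, port, UUID and parameter values are placeholders for illustration.
#include <Poco/URI.h>
#include <iostream>

int main()
{
    Poco::URI uri("http://127.0.0.1:9012/");  // assumed library-bridge address

    uri.addQueryParameter("method", "libNew");
    uri.addQueryParameter("dictionary_id", "ae2f6b2a-0000-4000-8000-000000000001");  // dictionary UUID
    uri.addQueryParameter("library_path", "/usr/local/lib/libexample_dictionary.so");

    // library_settings, attributes_names, sample_block and null_values are sent
    // in URL-encoded binary form by the real client; plain placeholder strings
    // are used here only to keep the sketch self-contained.
    uri.addQueryParameter("library_settings", "key1 value1");
    uri.addQueryParameter("attributes_names", "value");
    uri.addQueryParameter("sample_block", "id UInt64, value String");
    uri.addQueryParameter("null_values", "");

    std::cout << uri.toString() << '\n';  // print the assembled request URI
    return 0;
}
```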
+class LibraryRequestHandler : public HTTPRequestHandler, WithContext +{ +public: + + LibraryRequestHandler( + size_t keep_alive_timeout_, + ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get("LibraryRequestHandler")) + , keep_alive_timeout(keep_alive_timeout_) + { + } + + void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override; + +private: + static constexpr inline auto FORMAT = "RowBinary"; + + void processError(HTTPServerResponse & response, const std::string & message); + + Poco::Logger * log; + size_t keep_alive_timeout; +}; + + +class PingHandler : public HTTPRequestHandler +{ +public: + explicit PingHandler(size_t keep_alive_timeout_) + : keep_alive_timeout(keep_alive_timeout_) + { + } + + void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override; + +private: + const size_t keep_alive_timeout; +}; + +} diff --git a/programs/library-bridge/LibraryBridge.cpp b/programs/library-bridge/LibraryBridge.cpp new file mode 100644 index 00000000000..2e5d6041151 --- /dev/null +++ b/programs/library-bridge/LibraryBridge.cpp @@ -0,0 +1,17 @@ +#include "LibraryBridge.h" + +#pragma GCC diagnostic ignored "-Wmissing-declarations" +int mainEntryClickHouseLibraryBridge(int argc, char ** argv) +{ + DB::LibraryBridge app; + try + { + return app.run(argc, argv); + } + catch (...) + { + std::cerr << DB::getCurrentExceptionMessage(true) << "\n"; + auto code = DB::getCurrentExceptionCode(); + return code ? code : 1; + } +} diff --git a/programs/library-bridge/LibraryBridge.h b/programs/library-bridge/LibraryBridge.h new file mode 100644 index 00000000000..9f2dafb89ab --- /dev/null +++ b/programs/library-bridge/LibraryBridge.h @@ -0,0 +1,26 @@ +#pragma once + +#include +#include +#include "HandlerFactory.h" + + +namespace DB +{ + +class LibraryBridge : public IBridge +{ + +protected: + std::string bridgeName() const override + { + return "LibraryBridge"; + } + + HandlerFactoryPtr getHandlerFactoryPtr(ContextPtr context) const override + { + return std::make_shared("LibraryRequestHandlerFactory-factory", keep_alive_timeout, context); + } +}; + +} diff --git a/src/Dictionaries/LibraryDictionarySourceExternal.cpp b/programs/library-bridge/LibraryInterface.cpp similarity index 97% rename from src/Dictionaries/LibraryDictionarySourceExternal.cpp rename to programs/library-bridge/LibraryInterface.cpp index 259d0a2846a..3975368c17f 100644 --- a/src/Dictionaries/LibraryDictionarySourceExternal.cpp +++ b/programs/library-bridge/LibraryInterface.cpp @@ -1,4 +1,5 @@ -#include "LibraryDictionarySourceExternal.h" +#include "LibraryInterface.h" + #include namespace diff --git a/src/Dictionaries/LibraryDictionarySourceExternal.h b/programs/library-bridge/LibraryInterface.h similarity index 97% rename from src/Dictionaries/LibraryDictionarySourceExternal.h rename to programs/library-bridge/LibraryInterface.h index 3b92707d091..d23de59bbb1 100644 --- a/src/Dictionaries/LibraryDictionarySourceExternal.h +++ b/programs/library-bridge/LibraryInterface.h @@ -101,7 +101,7 @@ using RequestedIds = const VectorUInt64 *; using LibraryLoadIdsFunc = RawClickHouseLibraryTable (*)(LibraryData, LibrarySettings, RequestedColumnsNames, RequestedIds); using RequestedKeys = Table *; -/// There is no requested columns names for load keys func +/// There are no requested column names for load keys func using LibraryLoadKeysFunc = RawClickHouseLibraryTable (*)(LibraryData, LibrarySettings, RequestedKeys); using LibraryIsModifiedFunc = bool (*)(LibraryContext, 
LibrarySettings); diff --git a/programs/library-bridge/LibraryUtils.h b/programs/library-bridge/LibraryUtils.h new file mode 100644 index 00000000000..8ced8df1c48 --- /dev/null +++ b/programs/library-bridge/LibraryUtils.h @@ -0,0 +1,44 @@ +#pragma once + +#include +#include +#include +#include + +#include "LibraryInterface.h" + + +namespace DB +{ + +class CStringsHolder +{ + +public: + using Container = std::vector; + + explicit CStringsHolder(const Container & strings_pass) + { + strings_holder = strings_pass; + strings.size = strings_holder.size(); + + ptr_holder = std::make_unique(strings.size); + strings.data = ptr_holder.get(); + + size_t i = 0; + for (auto & str : strings_holder) + { + strings.data[i] = str.c_str(); + ++i; + } + } + + ClickHouseLibrary::CStrings strings; // will pass pointer to lib + +private: + std::unique_ptr ptr_holder = nullptr; + Container strings_holder; +}; + + +} diff --git a/programs/library-bridge/SharedLibraryHandler.cpp b/programs/library-bridge/SharedLibraryHandler.cpp new file mode 100644 index 00000000000..ab8cf2417c2 --- /dev/null +++ b/programs/library-bridge/SharedLibraryHandler.cpp @@ -0,0 +1,219 @@ +#include "SharedLibraryHandler.h" + +#include +#include +#include + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int EXTERNAL_LIBRARY_ERROR; + extern const int SIZES_OF_COLUMNS_DOESNT_MATCH; +} + + +SharedLibraryHandler::SharedLibraryHandler( + const std::string & library_path_, + const std::vector & library_settings, + const Block & sample_block_, + const std::vector & attributes_names_) + : library_path(library_path_) + , sample_block(sample_block_) + , attributes_names(attributes_names_) +{ + library = std::make_shared(library_path, RTLD_LAZY); + settings_holder = std::make_shared(CStringsHolder(library_settings)); + + auto lib_new = library->tryGet(ClickHouseLibrary::LIBRARY_CREATE_NEW_FUNC_NAME); + + if (lib_new) + lib_data = lib_new(&settings_holder->strings, ClickHouseLibrary::log); + else + throw Exception("Method libNew failed", ErrorCodes::EXTERNAL_LIBRARY_ERROR); +} + + +SharedLibraryHandler::SharedLibraryHandler(const SharedLibraryHandler & other) + : library_path{other.library_path} + , sample_block{other.sample_block} + , attributes_names{other.attributes_names} + , library{other.library} + , settings_holder{other.settings_holder} +{ + + auto lib_clone = library->tryGet(ClickHouseLibrary::LIBRARY_CLONE_FUNC_NAME); + + if (lib_clone) + { + lib_data = lib_clone(other.lib_data); + } + else + { + auto lib_new = library->tryGet(ClickHouseLibrary::LIBRARY_CREATE_NEW_FUNC_NAME); + + if (lib_new) + lib_data = lib_new(&settings_holder->strings, ClickHouseLibrary::log); + } +} + + +SharedLibraryHandler::~SharedLibraryHandler() +{ + auto lib_delete = library->tryGet(ClickHouseLibrary::LIBRARY_DELETE_FUNC_NAME); + + if (lib_delete) + lib_delete(lib_data); +} + + +bool SharedLibraryHandler::isModified() +{ + auto func_is_modified = library->tryGet(ClickHouseLibrary::LIBRARY_IS_MODIFIED_FUNC_NAME); + + if (func_is_modified) + return func_is_modified(lib_data, &settings_holder->strings); + + return true; +} + + +bool SharedLibraryHandler::supportsSelectiveLoad() +{ + auto func_supports_selective_load = library->tryGet(ClickHouseLibrary::LIBRARY_SUPPORTS_SELECTIVE_LOAD_FUNC_NAME); + + if (func_supports_selective_load) + return func_supports_selective_load(lib_data, &settings_holder->strings); + + return true; +} + + +BlockInputStreamPtr SharedLibraryHandler::loadAll() +{ + auto columns_holder = std::make_unique(attributes_names.size()); + 
ClickHouseLibrary::CStrings columns{static_cast(columns_holder.get()), attributes_names.size()}; + for (size_t i = 0; i < attributes_names.size(); ++i) + columns.data[i] = attributes_names[i].c_str(); + + auto load_all_func = library->get(ClickHouseLibrary::LIBRARY_LOAD_ALL_FUNC_NAME); + auto data_new_func = library->get(ClickHouseLibrary::LIBRARY_DATA_NEW_FUNC_NAME); + auto data_delete_func = library->get(ClickHouseLibrary::LIBRARY_DATA_DELETE_FUNC_NAME); + + ClickHouseLibrary::LibraryData data_ptr = data_new_func(lib_data); + SCOPE_EXIT(data_delete_func(lib_data, data_ptr)); + + ClickHouseLibrary::RawClickHouseLibraryTable data = load_all_func(data_ptr, &settings_holder->strings, &columns); + auto block = dataToBlock(data); + + return std::make_shared(block); +} + + +BlockInputStreamPtr SharedLibraryHandler::loadIds(const std::vector & ids) +{ + const ClickHouseLibrary::VectorUInt64 ids_data{ext::bit_cast(ids.data()), ids.size()}; + + auto columns_holder = std::make_unique(attributes_names.size()); + ClickHouseLibrary::CStrings columns_pass{static_cast(columns_holder.get()), attributes_names.size()}; + + auto load_ids_func = library->get(ClickHouseLibrary::LIBRARY_LOAD_IDS_FUNC_NAME); + auto data_new_func = library->get(ClickHouseLibrary::LIBRARY_DATA_NEW_FUNC_NAME); + auto data_delete_func = library->get(ClickHouseLibrary::LIBRARY_DATA_DELETE_FUNC_NAME); + + ClickHouseLibrary::LibraryData data_ptr = data_new_func(lib_data); + SCOPE_EXIT(data_delete_func(lib_data, data_ptr)); + + ClickHouseLibrary::RawClickHouseLibraryTable data = load_ids_func(data_ptr, &settings_holder->strings, &columns_pass, &ids_data); + auto block = dataToBlock(data); + + return std::make_shared(block); +} + + +BlockInputStreamPtr SharedLibraryHandler::loadKeys(const Columns & key_columns) +{ + auto holder = std::make_unique(key_columns.size()); + std::vector> column_data_holders; + + for (size_t i = 0; i < key_columns.size(); ++i) + { + auto cell_holder = std::make_unique(key_columns[i]->size()); + + for (size_t j = 0; j < key_columns[i]->size(); ++j) + { + auto data_ref = key_columns[i]->getDataAt(j); + + cell_holder[j] = ClickHouseLibrary::Field{ + .data = static_cast(data_ref.data), + .size = data_ref.size}; + } + + holder[i] = ClickHouseLibrary::Row{ + .data = static_cast(cell_holder.get()), + .size = key_columns[i]->size()}; + + column_data_holders.push_back(std::move(cell_holder)); + } + + ClickHouseLibrary::Table request_cols{ + .data = static_cast(holder.get()), + .size = key_columns.size()}; + + auto load_keys_func = library->get(ClickHouseLibrary::LIBRARY_LOAD_KEYS_FUNC_NAME); + auto data_new_func = library->get(ClickHouseLibrary::LIBRARY_DATA_NEW_FUNC_NAME); + auto data_delete_func = library->get(ClickHouseLibrary::LIBRARY_DATA_DELETE_FUNC_NAME); + + ClickHouseLibrary::LibraryData data_ptr = data_new_func(lib_data); + SCOPE_EXIT(data_delete_func(lib_data, data_ptr)); + + ClickHouseLibrary::RawClickHouseLibraryTable data = load_keys_func(data_ptr, &settings_holder->strings, &request_cols); + auto block = dataToBlock(data); + + return std::make_shared(block); +} + + +Block SharedLibraryHandler::dataToBlock(const ClickHouseLibrary::RawClickHouseLibraryTable data) +{ + if (!data) + throw Exception("LibraryDictionarySource: No data returned", ErrorCodes::EXTERNAL_LIBRARY_ERROR); + + const auto * columns_received = static_cast(data); + if (columns_received->error_code) + throw Exception( + "LibraryDictionarySource: Returned error: " + std::to_string(columns_received->error_code) + " " + 
(columns_received->error_string ? columns_received->error_string : ""), + ErrorCodes::EXTERNAL_LIBRARY_ERROR); + + MutableColumns columns = sample_block.cloneEmptyColumns(); + + for (size_t col_n = 0; col_n < columns_received->size; ++col_n) + { + if (columns.size() != columns_received->data[col_n].size) + throw Exception( + "LibraryDictionarySource: Returned unexpected number of columns: " + std::to_string(columns_received->data[col_n].size) + ", must be " + std::to_string(columns.size()), + ErrorCodes::SIZES_OF_COLUMNS_DOESNT_MATCH); + + for (size_t row_n = 0; row_n < columns_received->data[col_n].size; ++row_n) + { + const auto & field = columns_received->data[col_n].data[row_n]; + if (!field.data) + { + /// sample_block contains null_value (from config) inside corresponding column + const auto & col = sample_block.getByPosition(row_n); + columns[row_n]->insertFrom(*(col.column), 0); + } + else + { + const auto & size = field.size; + columns[row_n]->insertData(static_cast(field.data), size); + } + } + } + + return sample_block.cloneWithColumns(std::move(columns)); +} + +} diff --git a/programs/library-bridge/SharedLibraryHandler.h b/programs/library-bridge/SharedLibraryHandler.h new file mode 100644 index 00000000000..5c0334ac89f --- /dev/null +++ b/programs/library-bridge/SharedLibraryHandler.h @@ -0,0 +1,54 @@ +#pragma once + +#include +#include +#include +#include "LibraryUtils.h" + + +namespace DB +{ + +/// A class that manages all operations with library dictionary. +/// Every library dictionary source has its own object of this class, accessed by UUID. +class SharedLibraryHandler +{ + +public: + SharedLibraryHandler( + const std::string & library_path_, + const std::vector & library_settings, + const Block & sample_block_, + const std::vector & attributes_names_); + + SharedLibraryHandler(const SharedLibraryHandler & other); + + ~SharedLibraryHandler(); + + BlockInputStreamPtr loadAll(); + + BlockInputStreamPtr loadIds(const std::vector & ids); + + BlockInputStreamPtr loadKeys(const Columns & key_columns); + + bool isModified(); + + bool supportsSelectiveLoad(); + + const Block & getSampleBlock() { return sample_block; } + +private: + Block dataToBlock(const ClickHouseLibrary::RawClickHouseLibraryTable data); + + std::string library_path; + const Block sample_block; + std::vector attributes_names; + + SharedLibraryPtr library; + std::shared_ptr settings_holder; + void * lib_data; +}; + +using SharedLibraryHandlerPtr = std::shared_ptr; + +} diff --git a/programs/library-bridge/SharedLibraryHandlerFactory.cpp b/programs/library-bridge/SharedLibraryHandlerFactory.cpp new file mode 100644 index 00000000000..05494c313c4 --- /dev/null +++ b/programs/library-bridge/SharedLibraryHandlerFactory.cpp @@ -0,0 +1,67 @@ +#include "SharedLibraryHandlerFactory.h" + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + +SharedLibraryHandlerPtr SharedLibraryHandlerFactory::get(const std::string & dictionary_id) +{ + std::lock_guard lock(mutex); + auto library_handler = library_handlers.find(dictionary_id); + + if (library_handler != library_handlers.end()) + return library_handler->second; + + return nullptr; +} + + +void SharedLibraryHandlerFactory::create( + const std::string & dictionary_id, + const std::string & library_path, + const std::vector & library_settings, + const Block & sample_block, + const std::vector & attributes_names) +{ + std::lock_guard lock(mutex); + library_handlers[dictionary_id] = std::make_shared(library_path, library_settings, sample_block, 
attributes_names); +} + + +void SharedLibraryHandlerFactory::clone(const std::string & from_dictionary_id, const std::string & to_dictionary_id) +{ + std::lock_guard lock(mutex); + auto from_library_handler = library_handlers.find(from_dictionary_id); + + /// This is not supposed to happen as libClone is called from copy constructor of LibraryDictionarySource + /// object, and shared library handler of from_dictionary is removed only in its destructor. + /// And if for from_dictionary there was no shared library handler, it would have received an exception in + /// its constructor, so no libClone would be made from it. + if (from_library_handler == library_handlers.end()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "No shared library handler found"); + + /// libClone method will be called in copy constructor + library_handlers[to_dictionary_id] = std::make_shared(*from_library_handler->second); +} + + +void SharedLibraryHandlerFactory::remove(const std::string & dictionary_id) +{ + std::lock_guard lock(mutex); + /// libDelete is called in destructor. + library_handlers.erase(dictionary_id); +} + + +SharedLibraryHandlerFactory & SharedLibraryHandlerFactory::instance() +{ + static SharedLibraryHandlerFactory ret; + return ret; +} + +} diff --git a/programs/library-bridge/SharedLibraryHandlerFactory.h b/programs/library-bridge/SharedLibraryHandlerFactory.h new file mode 100644 index 00000000000..473d90618a2 --- /dev/null +++ b/programs/library-bridge/SharedLibraryHandlerFactory.h @@ -0,0 +1,37 @@ +#pragma once + +#include "SharedLibraryHandler.h" +#include +#include + + +namespace DB +{ + +/// Each library dictionary source has unique UUID. When clone() method is called, a new UUID is generated. +/// There is a unique mapping from dictionary UUID to sharedLibraryHandler. 
+class SharedLibraryHandlerFactory final : private boost::noncopyable +{ +public: + static SharedLibraryHandlerFactory & instance(); + + SharedLibraryHandlerPtr get(const std::string & dictionary_id); + + void create( + const std::string & dictionary_id, + const std::string & library_path, + const std::vector & library_settings, + const Block & sample_block, + const std::vector & attributes_names); + + void clone(const std::string & from_dictionary_id, const std::string & to_dictionary_id); + + void remove(const std::string & dictionary_id); + +private: + /// map: dict_id -> sharedLibraryHandler + std::unordered_map library_handlers; + std::mutex mutex; +}; + +} diff --git a/programs/library-bridge/library-bridge.cpp b/programs/library-bridge/library-bridge.cpp new file mode 100644 index 00000000000..5fff2ffe525 --- /dev/null +++ b/programs/library-bridge/library-bridge.cpp @@ -0,0 +1,3 @@ +int mainEntryClickHouseLibraryBridge(int argc, char ** argv); +int main(int argc_, char ** argv_) { return mainEntryClickHouseLibraryBridge(argc_, argv_); } + diff --git a/programs/local/LocalServer.cpp b/programs/local/LocalServer.cpp index 2909b838c84..f680c2c2da6 100644 --- a/programs/local/LocalServer.cpp +++ b/programs/local/LocalServer.cpp @@ -99,9 +99,9 @@ void LocalServer::initialize(Poco::Util::Application & self) } } -void LocalServer::applyCmdSettings(Context & context) +void LocalServer::applyCmdSettings(ContextPtr context) { - context.applySettingsChanges(cmd_settings.changes()); + context->applySettingsChanges(cmd_settings.changes()); } /// If path is specified and not empty, will try to setup server environment and load existing metadata @@ -176,7 +176,7 @@ void LocalServer::tryInitPath() } -static void attachSystemTables(const Context & context) +static void attachSystemTables(ContextPtr context) { DatabasePtr system_database = DatabaseCatalog::instance().tryGetDatabase(DatabaseCatalog::SYSTEM_DATABASE); if (!system_database) @@ -211,7 +211,7 @@ try } shared_context = Context::createShared(); - global_context = std::make_unique(Context::createGlobal(shared_context.get())); + global_context = Context::createGlobal(shared_context.get()); global_context->makeGlobalContext(); global_context->setApplicationType(Context::ApplicationType::LOCAL); tryInitPath(); @@ -274,9 +274,9 @@ try * if such tables will not be dropped, clickhouse-server will not be able to load them due to security reasons. 
*/ std::string default_database = config().getString("default_database", "_local"); - DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared(default_database, *global_context)); + DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared(default_database, global_context)); global_context->setCurrentDatabase(default_database); - applyCmdOptions(*global_context); + applyCmdOptions(global_context); if (config().has("path")) { @@ -288,15 +288,15 @@ try LOG_DEBUG(log, "Loading metadata from {}", path); Poco::File(path + "data/").createDirectories(); Poco::File(path + "metadata/").createDirectories(); - loadMetadataSystem(*global_context); - attachSystemTables(*global_context); - loadMetadata(*global_context); + loadMetadataSystem(global_context); + attachSystemTables(global_context); + loadMetadata(global_context); DatabaseCatalog::instance().loadDatabases(); LOG_DEBUG(log, "Loaded metadata."); } else if (!config().has("no-system-tables")) { - attachSystemTables(*global_context); + attachSystemTables(global_context); } processQueries(); @@ -375,13 +375,13 @@ void LocalServer::processQueries() /// we can't mutate global global_context (can lead to races, as it was already passed to some background threads) /// so we can't reuse it safely as a query context and need a copy here - auto context = Context(*global_context); + auto context = Context::createCopy(global_context); - context.makeSessionContext(); - context.makeQueryContext(); + context->makeSessionContext(); + context->makeQueryContext(); - context.setUser("default", "", Poco::Net::SocketAddress{}); - context.setCurrentQueryId(""); + context->setUser("default", "", Poco::Net::SocketAddress{}); + context->setCurrentQueryId(""); applyCmdSettings(context); /// Use the same query_id (and thread group) for all queries @@ -618,9 +618,9 @@ void LocalServer::init(int argc, char ** argv) argsToConfig(arguments, config(), 100); } -void LocalServer::applyCmdOptions(Context & context) +void LocalServer::applyCmdOptions(ContextPtr context) { - context.setDefaultFormat(config().getString("output-format", config().getString("format", "TSV"))); + context->setDefaultFormat(config().getString("output-format", config().getString("format", "TSV"))); applyCmdSettings(context); } diff --git a/programs/local/LocalServer.h b/programs/local/LocalServer.h index 02778bd86cb..3555e8a38ad 100644 --- a/programs/local/LocalServer.h +++ b/programs/local/LocalServer.h @@ -36,15 +36,15 @@ private: std::string getInitialCreateTableQuery(); void tryInitPath(); - void applyCmdOptions(Context & context); - void applyCmdSettings(Context & context); + void applyCmdOptions(ContextPtr context); + void applyCmdSettings(ContextPtr context); void processQueries(); void setupUsers(); void cleanup(); protected: SharedContextHolder shared_context; - std::unique_ptr global_context; + ContextPtr global_context; /// Settings specified via command line args Settings cmd_settings; diff --git a/programs/obfuscator/Obfuscator.cpp b/programs/obfuscator/Obfuscator.cpp index aea70ba0986..c92eb5c6647 100644 --- a/programs/obfuscator/Obfuscator.cpp +++ b/programs/obfuscator/Obfuscator.cpp @@ -1129,8 +1129,8 @@ try } SharedContextHolder shared_context = Context::createShared(); - Context context = Context::createGlobal(shared_context.get()); - context.makeGlobalContext(); + ContextPtr context = Context::createGlobal(shared_context.get()); + context->makeGlobalContext(); ReadBufferFromFileDescriptor file_in(STDIN_FILENO); WriteBufferFromFileDescriptor 
file_out(STDOUT_FILENO); @@ -1152,7 +1152,7 @@ try if (!silent) std::cerr << "Training models\n"; - BlockInputStreamPtr input = context.getInputFormat(input_format, file_in, header, max_block_size); + BlockInputStreamPtr input = context->getInputFormat(input_format, file_in, header, max_block_size); input->readPrefix(); while (Block block = input->read()) @@ -1179,8 +1179,8 @@ try file_in.seek(0, SEEK_SET); - BlockInputStreamPtr input = context.getInputFormat(input_format, file_in, header, max_block_size); - BlockOutputStreamPtr output = context.getOutputStreamParallelIfPossible(output_format, file_out, header); + BlockInputStreamPtr input = context->getInputFormat(input_format, file_in, header, max_block_size); + BlockOutputStreamPtr output = context->getOutputStreamParallelIfPossible(output_format, file_out, header); if (processed_rows + source_rows > limit) input = std::make_shared(input, limit - processed_rows, 0); diff --git a/programs/odbc-bridge/CMakeLists.txt b/programs/odbc-bridge/CMakeLists.txt index 11864354619..7b232f2b5dc 100644 --- a/programs/odbc-bridge/CMakeLists.txt +++ b/programs/odbc-bridge/CMakeLists.txt @@ -24,12 +24,14 @@ add_executable(clickhouse-odbc-bridge ${CLICKHOUSE_ODBC_BRIDGE_SOURCES}) target_link_libraries(clickhouse-odbc-bridge PRIVATE daemon dbms + bridge clickhouse_parsers - Poco::Data - Poco::Data::ODBC + nanodbc + unixodbc ) set_target_properties(clickhouse-odbc-bridge PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..) +target_compile_options (clickhouse-odbc-bridge PRIVATE -Wno-reserved-id-macro -Wno-keyword-macro) if (USE_GDB_ADD_INDEX) add_custom_command(TARGET clickhouse-odbc-bridge POST_BUILD COMMAND ${GDB_ADD_INDEX_EXE} ../clickhouse-odbc-bridge COMMENT "Adding .gdb-index to clickhouse-odbc-bridge" VERBATIM) diff --git a/programs/odbc-bridge/ColumnInfoHandler.cpp b/programs/odbc-bridge/ColumnInfoHandler.cpp index 14fa734f246..e33858583c2 100644 --- a/programs/odbc-bridge/ColumnInfoHandler.cpp +++ b/programs/odbc-bridge/ColumnInfoHandler.cpp @@ -2,29 +2,36 @@ #if USE_ODBC -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include "getIdentifierQuote.h" -# include "validateODBCConnectionString.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "getIdentifierQuote.h" +#include "validateODBCConnectionString.h" +#include "ODBCConnectionFactory.h" + +#include +#include -# define POCO_SQL_ODBC_CLASS Poco::Data::ODBC namespace DB { + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; + extern const int BAD_ARGUMENTS; +} + namespace { DataTypePtr getDataType(SQLSMALLINT type) @@ -59,6 +66,7 @@ namespace } } + void ODBCColumnsInfoHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) { HTMLForm params(request, request.getStream()); @@ -77,88 +85,79 @@ void ODBCColumnsInfoHandler::handleRequest(HTTPServerRequest & request, HTTPServ process_error("No 'table' param in request URL"); return; } + if (!params.has("connection_string")) { process_error("No 'connection_string' in request URL"); return; } + std::string schema_name; std::string table_name = params.get("table"); std::string connection_string = params.get("connection_string"); if (params.has("schema")) - { schema_name = params.get("schema"); - LOG_TRACE(log, "Will fetch info for table '{}'", schema_name + "." 
+ table_name); - } - else - LOG_TRACE(log, "Will fetch info for table '{}'", table_name); + LOG_TRACE(log, "Got connection str '{}'", connection_string); try { const bool external_table_functions_use_nulls = Poco::NumberParser::parseBool(params.get("external_table_functions_use_nulls", "false")); - POCO_SQL_ODBC_CLASS::SessionImpl session(validateODBCConnectionString(connection_string), DBMS_DEFAULT_CONNECT_TIMEOUT_SEC); - SQLHDBC hdbc = session.dbc().handle(); + auto connection = ODBCConnectionFactory::instance().get( + validateODBCConnectionString(connection_string), + getContext()->getSettingsRef().odbc_bridge_connection_pool_size); - SQLHSTMT hstmt = nullptr; + nanodbc::catalog catalog(*connection); + std::string catalog_name; - if (POCO_SQL_ODBC_CLASS::Utility::isError(SQLAllocStmt(hdbc, &hstmt))) - throw POCO_SQL_ODBC_CLASS::ODBCException("Could not allocate connection handle."); - - SCOPE_EXIT(SQLFreeStmt(hstmt, SQL_DROP)); - - const auto & context_settings = context.getSettingsRef(); - - /// TODO Why not do SQLColumns instead? - std::string name = schema_name.empty() ? backQuoteIfNeed(table_name) : backQuoteIfNeed(schema_name) + "." + backQuoteIfNeed(table_name); - WriteBufferFromOwnString buf; - std::string input = "SELECT * FROM " + name + " WHERE 1 = 0"; - ParserQueryWithOutput parser(input.data() + input.size()); - ASTPtr select = parseQuery(parser, input.data(), input.data() + input.size(), "", context_settings.max_query_size, context_settings.max_parser_depth); - - IAST::FormatSettings settings(buf, true); - settings.always_quote_identifiers = true; - settings.identifier_quoting_style = getQuotingStyle(hdbc); - select->format(settings); - std::string query = buf.str(); - - LOG_TRACE(log, "Inferring structure with query '{}'", query); - - if (POCO_SQL_ODBC_CLASS::Utility::isError(POCO_SQL_ODBC_CLASS::SQLPrepare(hstmt, reinterpret_cast(query.data()), query.size()))) - throw POCO_SQL_ODBC_CLASS::DescriptorException(session.dbc()); - - if (POCO_SQL_ODBC_CLASS::Utility::isError(SQLExecute(hstmt))) - throw POCO_SQL_ODBC_CLASS::StatementException(hstmt); - - SQLSMALLINT cols = 0; - if (POCO_SQL_ODBC_CLASS::Utility::isError(SQLNumResultCols(hstmt, &cols))) - throw POCO_SQL_ODBC_CLASS::StatementException(hstmt); - - /// TODO cols not checked - - NamesAndTypesList columns; - for (SQLSMALLINT ncol = 1; ncol <= cols; ++ncol) + /// In XDBC tables it is allowed to pass either database_name or schema_name in table definition, but not both of them. + /// They both are passed as 'schema' parameter in request URL, so it is not clear whether it is database_name or schema_name passed. + /// If it is schema_name then we know that database is added in odbc.ini. But if we have database_name as 'schema', + /// it is not guaranteed. For nanodbc database_name must be either in odbc.ini or passed as catalog_name. + auto get_columns = [&]() { - SQLSMALLINT type = 0; - /// TODO Why 301? 
- SQLCHAR column_name[301]; - - SQLSMALLINT is_nullable; - const auto result = POCO_SQL_ODBC_CLASS::SQLDescribeCol(hstmt, ncol, column_name, sizeof(column_name), nullptr, &type, nullptr, nullptr, &is_nullable); - if (POCO_SQL_ODBC_CLASS::Utility::isError(result)) - throw POCO_SQL_ODBC_CLASS::StatementException(hstmt); - - auto column_type = getDataType(type); - if (external_table_functions_use_nulls && is_nullable == SQL_NULLABLE) + nanodbc::catalog::tables tables = catalog.find_tables(table_name, /* type = */ "", /* schema = */ "", /* catalog = */ schema_name); + if (tables.next()) { - column_type = std::make_shared(column_type); + catalog_name = tables.table_catalog(); + LOG_TRACE(log, "Will fetch info for table '{}.{}'", catalog_name, table_name); + return catalog.find_columns(/* column = */ "", table_name, /* schema = */ "", catalog_name); } - columns.emplace_back(reinterpret_cast(column_name), std::move(column_type)); + tables = catalog.find_tables(table_name, /* type = */ "", /* schema = */ schema_name); + if (tables.next()) + { + catalog_name = tables.table_catalog(); + LOG_TRACE(log, "Will fetch info for table '{}.{}.{}'", catalog_name, schema_name, table_name); + return catalog.find_columns(/* column = */ "", table_name, schema_name, catalog_name); + } + + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Table {} not found", schema_name.empty() ? table_name : schema_name + '.' + table_name); + }; + + nanodbc::catalog::columns columns_definition = get_columns(); + + NamesAndTypesList columns; + while (columns_definition.next()) + { + SQLSMALLINT type = columns_definition.sql_data_type(); + std::string column_name = columns_definition.column_name(); + + bool is_nullable = columns_definition.nullable() == SQL_NULLABLE; + + auto column_type = getDataType(type); + + if (external_table_functions_use_nulls && is_nullable == SQL_NULLABLE) + column_type = std::make_shared(column_type); + + columns.emplace_back(column_name, std::move(column_type)); } + if (columns.empty()) + throw Exception("Columns definition was not returned", ErrorCodes::LOGICAL_ERROR); + WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, keep_alive_timeout); try { diff --git a/programs/odbc-bridge/ColumnInfoHandler.h b/programs/odbc-bridge/ColumnInfoHandler.h index 9b5b470b31d..bc976f54aee 100644 --- a/programs/odbc-bridge/ColumnInfoHandler.h +++ b/programs/odbc-bridge/ColumnInfoHandler.h @@ -2,24 +2,23 @@ #if USE_ODBC -# include -# include -# include +#include +#include +#include +#include +#include -# include -/** The structure of the table is taken from the query "SELECT * FROM table WHERE 1=0". - * TODO: It would be much better to utilize ODBC methods dedicated for columns description. - * If there is no such table, an exception is thrown. 
- */ namespace DB { -class ODBCColumnsInfoHandler : public HTTPRequestHandler +class ODBCColumnsInfoHandler : public HTTPRequestHandler, WithContext { public: - ODBCColumnsInfoHandler(size_t keep_alive_timeout_, Context & context_) - : log(&Poco::Logger::get("ODBCColumnsInfoHandler")), keep_alive_timeout(keep_alive_timeout_), context(context_) + ODBCColumnsInfoHandler(size_t keep_alive_timeout_, ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get("ODBCColumnsInfoHandler")) + , keep_alive_timeout(keep_alive_timeout_) { } @@ -28,7 +27,6 @@ public: private: Poco::Logger * log; size_t keep_alive_timeout; - Context & context; }; } diff --git a/programs/odbc-bridge/HandlerFactory.cpp b/programs/odbc-bridge/HandlerFactory.cpp index 9ac48af4ace..49984453d33 100644 --- a/programs/odbc-bridge/HandlerFactory.cpp +++ b/programs/odbc-bridge/HandlerFactory.cpp @@ -8,7 +8,7 @@ namespace DB { -std::unique_ptr HandlerFactory::createRequestHandler(const HTTPServerRequest & request) +std::unique_ptr ODBCBridgeHandlerFactory::createRequestHandler(const HTTPServerRequest & request) { Poco::URI uri{request.getURI()}; LOG_TRACE(log, "Request URI: {}", uri.toString()); @@ -21,26 +21,26 @@ std::unique_ptr HandlerFactory::createRequestHandler(const H if (uri.getPath() == "/columns_info") #if USE_ODBC - return std::make_unique(keep_alive_timeout, context); + return std::make_unique(keep_alive_timeout, getContext()); #else return nullptr; #endif else if (uri.getPath() == "/identifier_quote") #if USE_ODBC - return std::make_unique(keep_alive_timeout, context); + return std::make_unique(keep_alive_timeout, getContext()); #else return nullptr; #endif else if (uri.getPath() == "/schema_allowed") #if USE_ODBC - return std::make_unique(keep_alive_timeout, context); + return std::make_unique(keep_alive_timeout, getContext()); #else return nullptr; #endif else if (uri.getPath() == "/write") - return std::make_unique(pool_map, keep_alive_timeout, context, "write"); + return std::make_unique(keep_alive_timeout, getContext(), "write"); else - return std::make_unique(pool_map, keep_alive_timeout, context, "read"); + return std::make_unique(keep_alive_timeout, getContext(), "read"); } return nullptr; } diff --git a/programs/odbc-bridge/HandlerFactory.h b/programs/odbc-bridge/HandlerFactory.h index 5dce6f02ecd..ffbbe3670af 100644 --- a/programs/odbc-bridge/HandlerFactory.h +++ b/programs/odbc-bridge/HandlerFactory.h @@ -1,32 +1,28 @@ #pragma once -#include +#include #include #include "ColumnInfoHandler.h" #include "IdentifierQuoteHandler.h" #include "MainHandler.h" #include "SchemaAllowedHandler.h" - #include -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wunused-parameter" -#include -#pragma GCC diagnostic pop - namespace DB { /** Factory for '/ping', '/', '/columns_info', '/identifier_quote', '/schema_allowed' handlers. 
* Also stores Session pools for ODBC connections */ -class HandlerFactory : public HTTPRequestHandlerFactory +class ODBCBridgeHandlerFactory : public HTTPRequestHandlerFactory, WithContext { public: - HandlerFactory(const std::string & name_, size_t keep_alive_timeout_, Context & context_) - : log(&Poco::Logger::get(name_)), name(name_), keep_alive_timeout(keep_alive_timeout_), context(context_) + ODBCBridgeHandlerFactory(const std::string & name_, size_t keep_alive_timeout_, ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get(name_)) + , name(name_) + , keep_alive_timeout(keep_alive_timeout_) { - pool_map = std::make_shared(); } std::unique_ptr createRequestHandler(const HTTPServerRequest & request) override; @@ -35,7 +31,6 @@ private: Poco::Logger * log; std::string name; size_t keep_alive_timeout; - Context & context; - std::shared_ptr pool_map; }; + } diff --git a/programs/odbc-bridge/IdentifierQuoteHandler.cpp b/programs/odbc-bridge/IdentifierQuoteHandler.cpp index 5060d37c479..a5a97cb8086 100644 --- a/programs/odbc-bridge/IdentifierQuoteHandler.cpp +++ b/programs/odbc-bridge/IdentifierQuoteHandler.cpp @@ -2,23 +2,20 @@ #if USE_ODBC -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include "getIdentifierQuote.h" -# include "validateODBCConnectionString.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "getIdentifierQuote.h" +#include "validateODBCConnectionString.h" +#include "ODBCConnectionFactory.h" -# define POCO_SQL_ODBC_CLASS Poco::Data::ODBC namespace DB { @@ -44,10 +41,12 @@ void IdentifierQuoteHandler::handleRequest(HTTPServerRequest & request, HTTPServ try { std::string connection_string = params.get("connection_string"); - POCO_SQL_ODBC_CLASS::SessionImpl session(validateODBCConnectionString(connection_string), DBMS_DEFAULT_CONNECT_TIMEOUT_SEC); - SQLHDBC hdbc = session.dbc().handle(); - auto identifier = getIdentifierQuote(hdbc); + auto connection = ODBCConnectionFactory::instance().get( + validateODBCConnectionString(connection_string), + getContext()->getSettingsRef().odbc_bridge_connection_pool_size); + + auto identifier = getIdentifierQuote(*connection); WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, keep_alive_timeout); try diff --git a/programs/odbc-bridge/IdentifierQuoteHandler.h b/programs/odbc-bridge/IdentifierQuoteHandler.h index dad88c72ad8..ef3806fd802 100644 --- a/programs/odbc-bridge/IdentifierQuoteHandler.h +++ b/programs/odbc-bridge/IdentifierQuoteHandler.h @@ -11,11 +11,13 @@ namespace DB { -class IdentifierQuoteHandler : public HTTPRequestHandler +class IdentifierQuoteHandler : public HTTPRequestHandler, WithContext { public: - IdentifierQuoteHandler(size_t keep_alive_timeout_, Context &) - : log(&Poco::Logger::get("IdentifierQuoteHandler")), keep_alive_timeout(keep_alive_timeout_) + IdentifierQuoteHandler(size_t keep_alive_timeout_, ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get("IdentifierQuoteHandler")) + , keep_alive_timeout(keep_alive_timeout_) { } diff --git a/programs/odbc-bridge/MainHandler.cpp b/programs/odbc-bridge/MainHandler.cpp index 079fc371ab4..e24b51f6037 100644 --- a/programs/odbc-bridge/MainHandler.cpp +++ b/programs/odbc-bridge/MainHandler.cpp @@ -18,18 +18,17 @@ #include #include #include +#include "ODBCConnectionFactory.h" #include #include +#include -#if USE_ODBC -#include 
-#define POCO_SQL_ODBC_CLASS Poco::Data::ODBC -#endif namespace DB { + namespace { std::unique_ptr parseColumns(std::string && column_string) @@ -42,37 +41,6 @@ namespace } } -using PocoSessionPoolConstructor = std::function()>; -/** Is used to adjust max size of default Poco thread pool. See issue #750 - * Acquire the lock, resize pool and construct new Session. - */ -static std::shared_ptr createAndCheckResizePocoSessionPool(PocoSessionPoolConstructor pool_constr) -{ - static std::mutex mutex; - - Poco::ThreadPool & pool = Poco::ThreadPool::defaultPool(); - - /// NOTE: The lock don't guarantee that external users of the pool don't change its capacity - std::unique_lock lock(mutex); - - if (pool.available() == 0) - pool.addCapacity(2 * std::max(pool.capacity(), 1)); - - return pool_constr(); -} - -ODBCHandler::PoolPtr ODBCHandler::getPool(const std::string & connection_str) -{ - std::lock_guard lock(mutex); - if (!pool_map->count(connection_str)) - { - pool_map->emplace(connection_str, createAndCheckResizePocoSessionPool([connection_str] - { - return std::make_shared("ODBC", validateODBCConnectionString(connection_str)); - })); - } - return pool_map->at(connection_str); -} void ODBCHandler::processError(HTTPServerResponse & response, const std::string & message) { @@ -82,12 +50,14 @@ void ODBCHandler::processError(HTTPServerResponse & response, const std::string LOG_WARNING(log, message); } + void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) { HTMLForm params(request); + LOG_TRACE(log, "Request URI: {}", request.getURI()); + if (mode == "read") params.read(request.getStream()); - LOG_TRACE(log, "Request URI: {}", request.getURI()); if (mode == "read" && !params.has("query")) { @@ -95,11 +65,6 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse return; } - if (!params.has("columns")) - { - processError(response, "No 'columns' in request URL"); - return; - } if (!params.has("connection_string")) { @@ -107,6 +72,16 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse return; } + if (!params.has("sample_block")) + { + processError(response, "No 'sample_block' in request URL"); + return; + } + + std::string format = params.get("format", "RowBinary"); + std::string connection_string = params.get("connection_string"); + LOG_TRACE(log, "Connection string: '{}'", connection_string); + UInt64 max_block_size = DEFAULT_BLOCK_SIZE; if (params.has("max_block_size")) { @@ -119,28 +94,27 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse max_block_size = parse(max_block_size_str); } - std::string columns = params.get("columns"); + std::string sample_block_string = params.get("sample_block"); std::unique_ptr sample_block; try { - sample_block = parseColumns(std::move(columns)); + sample_block = parseColumns(std::move(sample_block_string)); } catch (const Exception & ex) { - processError(response, "Invalid 'columns' parameter in request body '" + ex.message() + "'"); - LOG_WARNING(log, ex.getStackTraceString()); + processError(response, "Invalid 'sample_block' parameter in request body '" + ex.message() + "'"); + LOG_ERROR(log, ex.getStackTraceString()); return; } - std::string format = params.get("format", "RowBinary"); - - std::string connection_string = params.get("connection_string"); - LOG_TRACE(log, "Connection string: '{}'", connection_string); - WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, 
keep_alive_timeout); try { + auto connection = ODBCConnectionFactory::instance().get( + validateODBCConnectionString(connection_string), + getContext()->getSettingsRef().odbc_bridge_connection_pool_size); + if (mode == "write") { if (!params.has("db_name")) @@ -159,15 +133,12 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse auto quoting_style = IdentifierQuotingStyle::None; #if USE_ODBC - POCO_SQL_ODBC_CLASS::SessionImpl session(validateODBCConnectionString(connection_string), DBMS_DEFAULT_CONNECT_TIMEOUT_SEC); - quoting_style = getQuotingStyle(session.dbc().handle()); + quoting_style = getQuotingStyle(*connection); #endif - - auto pool = getPool(connection_string); auto & read_buf = request.getStream(); - auto input_format = FormatFactory::instance().getInput(format, read_buf, *sample_block, context, max_block_size); + auto input_format = FormatFactory::instance().getInput(format, read_buf, *sample_block, getContext(), max_block_size); auto input_stream = std::make_shared(input_format); - ODBCBlockOutputStream output_stream(pool->get(), db_name, table_name, *sample_block, quoting_style); + ODBCBlockOutputStream output_stream(*connection, db_name, table_name, *sample_block, getContext(), quoting_style); copyData(*input_stream, output_stream); writeStringBinary("Ok.", out); } @@ -176,9 +147,8 @@ void ODBCHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse std::string query = params.get("query"); LOG_TRACE(log, "Query: {}", query); - BlockOutputStreamPtr writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, out, *sample_block, context); - auto pool = getPool(connection_string); - ODBCBlockInputStream inp(pool->get(), query, *sample_block, max_block_size); + BlockOutputStreamPtr writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, out, *sample_block, getContext()); + ODBCBlockInputStream inp(*connection, query, *sample_block, max_block_size); copyData(inp, *writer); } } diff --git a/programs/odbc-bridge/MainHandler.h b/programs/odbc-bridge/MainHandler.h index e237ede5814..bc0fca8b9a5 100644 --- a/programs/odbc-bridge/MainHandler.h +++ b/programs/odbc-bridge/MainHandler.h @@ -1,14 +1,13 @@ #pragma once -#include +#include #include - #include -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wunused-parameter" -#include -#pragma GCC diagnostic pop + +#include +#include + namespace DB { @@ -17,20 +16,16 @@ namespace DB * and also query in request body * response in RowBinary format */ -class ODBCHandler : public HTTPRequestHandler +class ODBCHandler : public HTTPRequestHandler, WithContext { public: - using PoolPtr = std::shared_ptr; - using PoolMap = std::unordered_map; - - ODBCHandler(std::shared_ptr pool_map_, + ODBCHandler( size_t keep_alive_timeout_, - Context & context_, + ContextPtr context_, const String & mode_) - : log(&Poco::Logger::get("ODBCHandler")) - , pool_map(pool_map_) + : WithContext(context_) + , log(&Poco::Logger::get("ODBCHandler")) , keep_alive_timeout(keep_alive_timeout_) - , context(context_) , mode(mode_) { } @@ -40,14 +35,11 @@ public: private: Poco::Logger * log; - std::shared_ptr pool_map; size_t keep_alive_timeout; - Context & context; String mode; static inline std::mutex mutex; - PoolPtr getPool(const std::string & connection_str); void processError(HTTPServerResponse & response, const std::string & message); }; diff --git a/programs/odbc-bridge/ODBCBlockInputStream.cpp b/programs/odbc-bridge/ODBCBlockInputStream.cpp index b8a4209ac94..3a73cb9f601 
100644 --- a/programs/odbc-bridge/ODBCBlockInputStream.cpp +++ b/programs/odbc-bridge/ODBCBlockInputStream.cpp @@ -1,5 +1,7 @@ #include "ODBCBlockInputStream.h" #include +#include +#include #include #include #include @@ -14,137 +16,143 @@ namespace DB { namespace ErrorCodes { - extern const int NUMBER_OF_COLUMNS_DOESNT_MATCH; extern const int UNKNOWN_TYPE; } ODBCBlockInputStream::ODBCBlockInputStream( - Poco::Data::Session && session_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_) - : session{session_} - , statement{(this->session << query_str, Poco::Data::Keywords::now)} - , result{statement} - , iterator{result.begin()} + nanodbc::connection & connection_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_) + : log(&Poco::Logger::get("ODBCBlockInputStream")) , max_block_size{max_block_size_} - , log(&Poco::Logger::get("ODBCBlockInputStream")) + , connection(connection_) + , query(query_str) { - if (sample_block.columns() != result.columnCount()) - throw Exception{"RecordSet contains " + toString(result.columnCount()) + " columns while " + toString(sample_block.columns()) - + " expected", - ErrorCodes::NUMBER_OF_COLUMNS_DOESNT_MATCH}; - description.init(sample_block); -} - - -namespace -{ - using ValueType = ExternalResultDescription::ValueType; - - void insertValue(IColumn & column, const ValueType type, const Poco::Dynamic::Var & value) - { - switch (type) - { - case ValueType::vtUInt8: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtUInt16: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtUInt32: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtUInt64: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtInt8: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtInt16: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtInt32: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtInt64: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtFloat32: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtFloat64: - assert_cast(column).insertValue(value.convert()); - break; - case ValueType::vtString: - assert_cast(column).insert(value.convert()); - break; - case ValueType::vtDate: - { - Poco::DateTime date = value.convert(); - assert_cast(column).insertValue(UInt16{LocalDate(date.year(), date.month(), date.day()).getDayNum()}); - break; - } - case ValueType::vtDateTime: - { - Poco::DateTime datetime = value.convert(); - assert_cast(column).insertValue(DateLUT::instance().makeDateTime( - datetime.year(), datetime.month(), datetime.day(), datetime.hour(), datetime.minute(), datetime.second())); - break; - } - case ValueType::vtUUID: - assert_cast(column).insert(parse(value.convert())); - break; - default: - throw Exception("Unsupported value type", ErrorCodes::UNKNOWN_TYPE); - } - } - - void insertDefaultValue(IColumn & column, const IColumn & sample_column) { column.insertFrom(sample_column, 0); } + result = execute(connection, NANODBC_TEXT(query)); } Block ODBCBlockInputStream::readImpl() { - if (iterator == result.end()) - return {}; - - MutableColumns columns(description.sample_block.columns()); - for (const auto i : ext::range(0, columns.size())) - columns[i] = description.sample_block.getByPosition(i).column->cloneEmpty(); + if (finished) + return 
Block(); + MutableColumns columns(description.sample_block.cloneEmptyColumns()); size_t num_rows = 0; - while (iterator != result.end()) + + while (true) { - Poco::Data::Row & row = *iterator; - - for (const auto idx : ext::range(0, row.fieldCount())) + if (!result.next()) { - /// TODO This is extremely slow. - const Poco::Dynamic::Var & value = row[idx]; + finished = true; + break; + } - if (!value.isEmpty()) + for (int idx = 0; idx < result.columns(); ++idx) + { + const auto & sample = description.sample_block.getByPosition(idx); + + if (!result.is_null(idx)) { - if (description.types[idx].second) + bool is_nullable = description.types[idx].second; + + if (is_nullable) { ColumnNullable & column_nullable = assert_cast(*columns[idx]); - insertValue(column_nullable.getNestedColumn(), description.types[idx].first, value); + const auto & data_type = assert_cast(*sample.type); + insertValue(column_nullable.getNestedColumn(), data_type.getNestedType(), description.types[idx].first, result, idx); column_nullable.getNullMapData().emplace_back(0); } else - insertValue(*columns[idx], description.types[idx].first, value); + { + insertValue(*columns[idx], sample.type, description.types[idx].first, result, idx); + } } else - insertDefaultValue(*columns[idx], *description.sample_block.getByPosition(idx).column); + insertDefaultValue(*columns[idx], *sample.column); } - ++iterator; - - ++num_rows; - if (num_rows == max_block_size) + if (++num_rows == max_block_size) break; } return description.sample_block.cloneWithColumns(std::move(columns)); } + +void ODBCBlockInputStream::insertValue( + IColumn & column, const DataTypePtr data_type, const ValueType type, nanodbc::result & row, size_t idx) +{ + switch (type) + { + case ValueType::vtUInt8: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtUInt16: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtUInt32: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtUInt64: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtInt8: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtInt16: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtInt32: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtInt64: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtFloat32: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtFloat64: + assert_cast(column).insertValue(row.get(idx)); + break; + case ValueType::vtFixedString:[[fallthrough]]; + case ValueType::vtString: + assert_cast(column).insert(row.get(idx)); + break; + case ValueType::vtUUID: + { + auto value = row.get(idx); + assert_cast(column).insert(parse(value.data(), value.size())); + break; + } + case ValueType::vtDate: + assert_cast(column).insertValue(UInt16{LocalDate{row.get(idx)}.getDayNum()}); + break; + case ValueType::vtDateTime: + { + auto value = row.get(idx); + ReadBufferFromString in(value); + time_t time = 0; + readDateTimeText(time, in); + if (time < 0) + time = 0; + assert_cast(column).insertValue(time); + break; + } + case ValueType::vtDateTime64:[[fallthrough]]; + case ValueType::vtDecimal32: [[fallthrough]]; + case ValueType::vtDecimal64: [[fallthrough]]; + case ValueType::vtDecimal128: [[fallthrough]]; + case ValueType::vtDecimal256: + { + auto value = row.get(idx); + ReadBufferFromString istr(value); + 
data_type->getDefaultSerialization()->deserializeWholeText(column, istr, FormatSettings{}); + break; + } + default: + throw Exception("Unsupported value type", ErrorCodes::UNKNOWN_TYPE); + } +} + } diff --git a/programs/odbc-bridge/ODBCBlockInputStream.h b/programs/odbc-bridge/ODBCBlockInputStream.h index 13491e05822..bbd90ce4d6c 100644 --- a/programs/odbc-bridge/ODBCBlockInputStream.h +++ b/programs/odbc-bridge/ODBCBlockInputStream.h @@ -3,10 +3,8 @@ #include #include #include -#include -#include -#include #include +#include namespace DB @@ -15,25 +13,33 @@ namespace DB class ODBCBlockInputStream final : public IBlockInputStream { public: - ODBCBlockInputStream( - Poco::Data::Session && session_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_); + ODBCBlockInputStream(nanodbc::connection & connection_, const std::string & query_str, const Block & sample_block, const UInt64 max_block_size_); String getName() const override { return "ODBC"; } Block getHeader() const override { return description.sample_block.cloneEmpty(); } private: + using QueryResult = std::shared_ptr; + using ValueType = ExternalResultDescription::ValueType; + Block readImpl() override; - Poco::Data::Session session; - Poco::Data::Statement statement; - Poco::Data::RecordSet result; - Poco::Data::RecordSet::Iterator iterator; + static void insertValue(IColumn & column, const DataTypePtr data_type, const ValueType type, nanodbc::result & row, size_t idx); + static void insertDefaultValue(IColumn & column, const IColumn & sample_column) + { + column.insertFrom(sample_column, 0); + } + + Poco::Logger * log; const UInt64 max_block_size; ExternalResultDescription description; - Poco::Logger * log; + nanodbc::connection & connection; + nanodbc::result result; + String query; + bool finished = false; }; } diff --git a/programs/odbc-bridge/ODBCBlockOutputStream.cpp b/programs/odbc-bridge/ODBCBlockOutputStream.cpp index db3c9441419..e4614204178 100644 --- a/programs/odbc-bridge/ODBCBlockOutputStream.cpp +++ b/programs/odbc-bridge/ODBCBlockOutputStream.cpp @@ -8,16 +8,14 @@ #include #include #include "getIdentifierQuote.h" +#include +#include +#include namespace DB { -namespace ErrorCodes -{ - extern const int UNKNOWN_TYPE; -} - namespace { using ValueType = ExternalResultDescription::ValueType; @@ -40,69 +38,21 @@ namespace return buf.str(); } - std::string getQuestionMarks(size_t n) - { - std::string result = "("; - for (size_t i = 0; i < n; ++i) - { - if (i > 0) - result += ","; - result += "?"; - } - return result + ")"; - } - - Poco::Dynamic::Var getVarFromField(const Field & field, const ValueType type) - { - switch (type) - { - case ValueType::vtUInt8: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtUInt16: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtUInt32: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtUInt64: - return Poco::Dynamic::Var(field.get()).convert(); - case ValueType::vtInt8: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtInt16: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtInt32: - return Poco::Dynamic::Var(static_cast(field.get())).convert(); - case ValueType::vtInt64: - return Poco::Dynamic::Var(field.get()).convert(); - case ValueType::vtFloat32: - return Poco::Dynamic::Var(field.get()).convert(); - case ValueType::vtFloat64: - return Poco::Dynamic::Var(field.get()).convert(); - 
case ValueType::vtString: - return Poco::Dynamic::Var(field.get()).convert(); - case ValueType::vtDate: - return Poco::Dynamic::Var(LocalDate(DayNum(field.get())).toString()).convert(); - case ValueType::vtDateTime: - return Poco::Dynamic::Var(DateLUT::instance().timeToString(time_t(field.get()))).convert(); - case ValueType::vtUUID: - return Poco::Dynamic::Var(UUID(field.get()).toUnderType().toHexString()).convert(); - default: - throw Exception("Unsupported value type", ErrorCodes::UNKNOWN_TYPE); - - } - __builtin_unreachable(); - } } -ODBCBlockOutputStream::ODBCBlockOutputStream(Poco::Data::Session && session_, +ODBCBlockOutputStream::ODBCBlockOutputStream(nanodbc::connection & connection_, const std::string & remote_database_name_, const std::string & remote_table_name_, const Block & sample_block_, + ContextPtr local_context_, IdentifierQuotingStyle quoting_) - : session(session_) + : log(&Poco::Logger::get("ODBCBlockOutputStream")) + , connection(connection_) , db_name(remote_database_name_) , table_name(remote_table_name_) , sample_block(sample_block_) + , local_context(local_context_) , quoting(quoting_) - , log(&Poco::Logger::get("ODBCBlockOutputStream")) { description.init(sample_block); } @@ -114,28 +64,12 @@ Block ODBCBlockOutputStream::getHeader() const void ODBCBlockOutputStream::write(const Block & block) { - ColumnsWithTypeAndName columns; - for (size_t i = 0; i < block.columns(); ++i) - columns.push_back({block.getColumns()[i], sample_block.getDataTypes()[i], sample_block.getNames()[i]}); + WriteBufferFromOwnString values_buf; + auto writer = FormatFactory::instance().getOutputStream("Values", values_buf, sample_block, local_context); + writer->write(block); - std::vector row_to_insert(block.columns()); - Poco::Data::Statement statement(session << getInsertQuery(db_name, table_name, columns, quoting) + getQuestionMarks(block.columns())); - for (size_t i = 0; i < block.columns(); ++i) - statement.addBind(Poco::Data::Keywords::use(row_to_insert[i])); - - for (size_t i = 0; i < block.rows(); ++i) - { - for (size_t col_idx = 0; col_idx < block.columns(); ++col_idx) - { - Field val; - columns[col_idx].column->get(i, val); - if (val.isNull()) - row_to_insert[col_idx] = Poco::Dynamic::Var(); - else - row_to_insert[col_idx] = getVarFromField(val, description.types[col_idx].first); - } - statement.execute(); - } + std::string query = getInsertQuery(db_name, table_name, block.getColumnsWithTypeAndName(), quoting) + values_buf.str(); + execute(connection, query); } } diff --git a/programs/odbc-bridge/ODBCBlockOutputStream.h b/programs/odbc-bridge/ODBCBlockOutputStream.h index 39e1d6f77ac..0b13f7039b5 100644 --- a/programs/odbc-bridge/ODBCBlockOutputStream.h +++ b/programs/odbc-bridge/ODBCBlockOutputStream.h @@ -2,30 +2,41 @@ #include #include -#include #include #include +#include +#include + namespace DB { + class ODBCBlockOutputStream : public IBlockOutputStream { + public: - ODBCBlockOutputStream(Poco::Data::Session && session_, const std::string & remote_database_name_, - const std::string & remote_table_name_, const Block & sample_block_, IdentifierQuotingStyle quoting); + ODBCBlockOutputStream( + nanodbc::connection & connection_, + const std::string & remote_database_name_, + const std::string & remote_table_name_, + const Block & sample_block_, + ContextPtr local_context_, + IdentifierQuotingStyle quoting); Block getHeader() const override; void write(const Block & block) override; private: - Poco::Data::Session session; + Poco::Logger * log; + + nanodbc::connection & 
connection; std::string db_name; std::string table_name; Block sample_block; + ContextPtr local_context; IdentifierQuotingStyle quoting; ExternalResultDescription description; - Poco::Logger * log; }; } diff --git a/programs/odbc-bridge/ODBCBridge.cpp b/programs/odbc-bridge/ODBCBridge.cpp index 8869a2639c1..0deefe46014 100644 --- a/programs/odbc-bridge/ODBCBridge.cpp +++ b/programs/odbc-bridge/ODBCBridge.cpp @@ -1,244 +1,4 @@ #include "ODBCBridge.h" -#include "HandlerFactory.h" - -#include -#include -#include -#include - -#if USE_ODBC -// It doesn't make much sense to build this bridge without ODBC, but we still do this. -# include -#endif - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - - -namespace DB -{ -namespace ErrorCodes -{ - extern const int ARGUMENT_OUT_OF_BOUND; -} - -namespace -{ - Poco::Net::SocketAddress makeSocketAddress(const std::string & host, UInt16 port, Poco::Logger * log) - { - Poco::Net::SocketAddress socket_address; - try - { - socket_address = Poco::Net::SocketAddress(host, port); - } - catch (const Poco::Net::DNSException & e) - { - const auto code = e.code(); - if (code == EAI_FAMILY -#if defined(EAI_ADDRFAMILY) - || code == EAI_ADDRFAMILY -#endif - ) - { - LOG_ERROR(log, "Cannot resolve listen_host ({}), error {}: {}. If it is an IPv6 address and your host has disabled IPv6, then consider to specify IPv4 address to listen in element of configuration file. Example: 0.0.0.0", host, e.code(), e.message()); - } - - throw; - } - return socket_address; - } - - Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, Poco::Logger * log) - { - auto address = makeSocketAddress(host, port, log); -#if POCO_VERSION < 0x01080000 - socket.bind(address, /* reuseAddress = */ true); -#else - socket.bind(address, /* reuseAddress = */ true, /* reusePort = */ false); -#endif - - socket.listen(/* backlog = */ 64); - - return address; - } -} - -void ODBCBridge::handleHelp(const std::string &, const std::string &) -{ - Poco::Util::HelpFormatter help_formatter(options()); - help_formatter.setCommand(commandName()); - help_formatter.setHeader("HTTP-proxy for odbc requests"); - help_formatter.setUsage("--http-port "); - help_formatter.format(std::cerr); - - stopOptionsProcessing(); -} - - -void ODBCBridge::defineOptions(Poco::Util::OptionSet & options) -{ - options.addOption(Poco::Util::Option("http-port", "", "port to listen").argument("http-port", true).binding("http-port")); - options.addOption( - Poco::Util::Option("listen-host", "", "hostname or address to listen, default 127.0.0.1").argument("listen-host").binding("listen-host")); - options.addOption( - Poco::Util::Option("http-timeout", "", "http timeout for socket, default 1800").argument("http-timeout").binding("http-timeout")); - - options.addOption(Poco::Util::Option("max-server-connections", "", "max connections to server, default 1024") - .argument("max-server-connections") - .binding("max-server-connections")); - options.addOption(Poco::Util::Option("keep-alive-timeout", "", "keepalive timeout, default 10") - .argument("keep-alive-timeout") - .binding("keep-alive-timeout")); - - options.addOption(Poco::Util::Option("log-level", "", "sets log level, default info").argument("log-level").binding("logger.level")); - - options.addOption( - Poco::Util::Option("log-path", "", "log path for all logs, default console").argument("log-path").binding("logger.log")); - - 
options.addOption(Poco::Util::Option("err-log-path", "", "err log path for all logs, default no") - .argument("err-log-path") - .binding("logger.errorlog")); - - options.addOption(Poco::Util::Option("stdout-path", "", "stdout log path, default console") - .argument("stdout-path") - .binding("logger.stdout")); - - options.addOption(Poco::Util::Option("stderr-path", "", "stderr log path, default console") - .argument("stderr-path") - .binding("logger.stderr")); - - using Me = std::decay_t; - options.addOption(Poco::Util::Option("help", "", "produce this help message") - .binding("help") - .callback(Poco::Util::OptionCallback(this, &Me::handleHelp))); - - ServerApplication::defineOptions(options); // NOLINT Don't need complex BaseDaemon's .xml config -} - -void ODBCBridge::initialize(Application & self) -{ - BaseDaemon::closeFDs(); - is_help = config().has("help"); - - if (is_help) - return; - - config().setString("logger", "ODBCBridge"); - - /// Redirect stdout, stderr to specified files. - /// Some libraries and sanitizers write to stderr in case of errors. - const auto stdout_path = config().getString("logger.stdout", ""); - if (!stdout_path.empty()) - { - if (!freopen(stdout_path.c_str(), "a+", stdout)) - throw Poco::OpenFileException("Cannot attach stdout to " + stdout_path); - - /// Disable buffering for stdout. - setbuf(stdout, nullptr); - } - const auto stderr_path = config().getString("logger.stderr", ""); - if (!stderr_path.empty()) - { - if (!freopen(stderr_path.c_str(), "a+", stderr)) - throw Poco::OpenFileException("Cannot attach stderr to " + stderr_path); - - /// Disable buffering for stderr. - setbuf(stderr, nullptr); - } - - buildLoggers(config(), logger(), self.commandName()); - - BaseDaemon::logRevision(); - - log = &logger(); - hostname = config().getString("listen-host", "127.0.0.1"); - port = config().getUInt("http-port"); - if (port > 0xFFFF) - throw Exception("Out of range 'http-port': " + std::to_string(port), ErrorCodes::ARGUMENT_OUT_OF_BOUND); - - http_timeout = config().getUInt("http-timeout", DEFAULT_HTTP_READ_BUFFER_TIMEOUT); - max_server_connections = config().getUInt("max-server-connections", 1024); - keep_alive_timeout = config().getUInt("keep-alive-timeout", 10); - - initializeTerminationAndSignalProcessing(); - -#if USE_ODBC - // It doesn't make much sense to build this bridge without ODBC, but we - // still do this. 
- Poco::Data::ODBC::Connector::registerConnector(); -#endif - - ServerApplication::initialize(self); // NOLINT -} - -void ODBCBridge::uninitialize() -{ - BaseDaemon::uninitialize(); -} - -int ODBCBridge::main(const std::vector & /*args*/) -{ - if (is_help) - return Application::EXIT_OK; - - registerFormats(); - - LOG_INFO(log, "Starting up"); - Poco::Net::ServerSocket socket; - auto address = socketBindListen(socket, hostname, port, log); - socket.setReceiveTimeout(http_timeout); - socket.setSendTimeout(http_timeout); - Poco::ThreadPool server_pool(3, max_server_connections); - Poco::Net::HTTPServerParams::Ptr http_params = new Poco::Net::HTTPServerParams; - http_params->setTimeout(http_timeout); - http_params->setKeepAliveTimeout(keep_alive_timeout); - - auto shared_context = Context::createShared(); - Context context(Context::createGlobal(shared_context.get())); - context.makeGlobalContext(); - - if (config().has("query_masking_rules")) - { - SensitiveDataMasker::setInstance(std::make_unique(config(), "query_masking_rules")); - } - - auto server = HTTPServer( - context, - std::make_shared("ODBCRequestHandlerFactory-factory", keep_alive_timeout, context), - server_pool, - socket, - http_params); - server.start(); - - LOG_INFO(log, "Listening http://{}", address.toString()); - - SCOPE_EXIT({ - LOG_DEBUG(log, "Received termination signal."); - LOG_DEBUG(log, "Waiting for current connections to close."); - server.stop(); - for (size_t count : ext::range(1, 6)) - { - if (server.currentConnections() == 0) - break; - LOG_DEBUG(log, "Waiting for {} connections, try {}", server.currentConnections(), count); - std::this_thread::sleep_for(std::chrono::milliseconds(1000)); - } - }); - - waitForTerminationRequest(); - return Application::EXIT_OK; -} -} #pragma GCC diagnostic ignored "-Wmissing-declarations" int mainEntryClickHouseODBCBridge(int argc, char ** argv) diff --git a/programs/odbc-bridge/ODBCBridge.h b/programs/odbc-bridge/ODBCBridge.h index 9a0d37fa0f9..b17051dce91 100644 --- a/programs/odbc-bridge/ODBCBridge.h +++ b/programs/odbc-bridge/ODBCBridge.h @@ -2,38 +2,25 @@ #include #include -#include +#include +#include "HandlerFactory.h" + namespace DB { -/** Class represents clickhouse-odbc-bridge server, which listen - * incoming HTTP POST and GET requests on specified port and host. 
- * Has two handlers '/' for all incoming POST requests to ODBC driver - * and /ping for GET request about service status - */ -class ODBCBridge : public BaseDaemon + +class ODBCBridge : public IBridge { -public: - void defineOptions(Poco::Util::OptionSet & options) override; protected: - void initialize(Application & self) override; + std::string bridgeName() const override + { + return "ODBCBridge"; + } - void uninitialize() override; - - int main(const std::vector & args) override; - -private: - void handleHelp(const std::string &, const std::string &); - - bool is_help; - std::string hostname; - size_t port; - size_t http_timeout; - std::string log_level; - size_t max_server_connections; - size_t keep_alive_timeout; - - Poco::Logger * log; + HandlerFactoryPtr getHandlerFactoryPtr(ContextPtr context) const override + { + return std::make_shared("ODBCRequestHandlerFactory-factory", keep_alive_timeout, context); + } }; } diff --git a/programs/odbc-bridge/ODBCConnectionFactory.h b/programs/odbc-bridge/ODBCConnectionFactory.h new file mode 100644 index 00000000000..56961ddb2fb --- /dev/null +++ b/programs/odbc-bridge/ODBCConnectionFactory.h @@ -0,0 +1,82 @@ +#pragma once + +#include +#include +#include +#include +#include + + +namespace nanodbc +{ + +static constexpr inline auto ODBC_CONNECT_TIMEOUT = 100; + +using ConnectionPtr = std::shared_ptr; +using Pool = BorrowedObjectPool; +using PoolPtr = std::shared_ptr; + +class ConnectionHolder +{ + +public: + ConnectionHolder(const std::string & connection_string_, PoolPtr pool_) : connection_string(connection_string_), pool(pool_) {} + + ~ConnectionHolder() + { + if (connection) + pool->returnObject(std::move(connection)); + } + + nanodbc::connection & operator*() + { + if (!connection) + { + pool->borrowObject(connection, [&]() + { + return std::make_shared(connection_string, ODBC_CONNECT_TIMEOUT); + }); + } + + return *connection; + } + +private: + std::string connection_string; + PoolPtr pool; + ConnectionPtr connection; +}; + +} + + +namespace DB +{ + +class ODBCConnectionFactory final : private boost::noncopyable +{ +public: + static ODBCConnectionFactory & instance() + { + static ODBCConnectionFactory ret; + return ret; + } + + nanodbc::ConnectionHolder get(const std::string & connection_string, size_t pool_size) + { + std::lock_guard lock(mutex); + + if (!factory.count(connection_string)) + factory.emplace(std::make_pair(connection_string, std::make_shared(pool_size))); + + return nanodbc::ConnectionHolder(connection_string, factory[connection_string]); + } + +private: + /// [connection_settings_string] -> [connection_pool] + using PoolFactory = std::unordered_map; + PoolFactory factory; + std::mutex mutex; +}; + +} diff --git a/programs/odbc-bridge/SchemaAllowedHandler.cpp b/programs/odbc-bridge/SchemaAllowedHandler.cpp index d4a70db61f4..4cceaee962c 100644 --- a/programs/odbc-bridge/SchemaAllowedHandler.cpp +++ b/programs/odbc-bridge/SchemaAllowedHandler.cpp @@ -2,33 +2,26 @@ #if USE_ODBC -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include "validateODBCConnectionString.h" +#include +#include +#include +#include +#include +#include +#include "validateODBCConnectionString.h" +#include "ODBCConnectionFactory.h" +#include +#include -# define POCO_SQL_ODBC_CLASS Poco::Data::ODBC namespace DB { namespace { - bool isSchemaAllowed(SQLHDBC hdbc) + bool isSchemaAllowed(nanodbc::connection & connection) { - SQLUINTEGER value; - SQLSMALLINT value_length = sizeof(value); - SQLRETURN r = 
POCO_SQL_ODBC_CLASS::SQLGetInfo(hdbc, SQL_SCHEMA_USAGE, &value, sizeof(value), &value_length); - - if (POCO_SQL_ODBC_CLASS::Utility::isError(r)) - throw POCO_SQL_ODBC_CLASS::ConnectionException(hdbc); - - return value != 0; + uint32_t result = connection.get_info(SQL_SCHEMA_USAGE); + return result != 0; } } @@ -55,10 +48,12 @@ void SchemaAllowedHandler::handleRequest(HTTPServerRequest & request, HTTPServer try { std::string connection_string = params.get("connection_string"); - POCO_SQL_ODBC_CLASS::SessionImpl session(validateODBCConnectionString(connection_string), DBMS_DEFAULT_CONNECT_TIMEOUT_SEC); - SQLHDBC hdbc = session.dbc().handle(); - bool result = isSchemaAllowed(hdbc); + auto connection = ODBCConnectionFactory::instance().get( + validateODBCConnectionString(connection_string), + getContext()->getSettingsRef().odbc_bridge_connection_pool_size); + + bool result = isSchemaAllowed(*connection); WriteBufferFromHTTPServerResponse out(response, request.getMethod() == Poco::Net::HTTPRequest::HTTP_HEAD, keep_alive_timeout); try diff --git a/programs/odbc-bridge/SchemaAllowedHandler.h b/programs/odbc-bridge/SchemaAllowedHandler.h index 91eddf67803..d7b922ed05b 100644 --- a/programs/odbc-bridge/SchemaAllowedHandler.h +++ b/programs/odbc-bridge/SchemaAllowedHandler.h @@ -1,22 +1,25 @@ #pragma once +#include #include - #include #if USE_ODBC + namespace DB { class Context; /// This handler establishes connection to database, and retrieves whether schema is allowed. -class SchemaAllowedHandler : public HTTPRequestHandler +class SchemaAllowedHandler : public HTTPRequestHandler, WithContext { public: - SchemaAllowedHandler(size_t keep_alive_timeout_, Context &) - : log(&Poco::Logger::get("SchemaAllowedHandler")), keep_alive_timeout(keep_alive_timeout_) + SchemaAllowedHandler(size_t keep_alive_timeout_, ContextPtr context_) + : WithContext(context_) + , log(&Poco::Logger::get("SchemaAllowedHandler")) + , keep_alive_timeout(keep_alive_timeout_) { } diff --git a/programs/odbc-bridge/getIdentifierQuote.cpp b/programs/odbc-bridge/getIdentifierQuote.cpp index 15b3749d37d..9ccad6e6e1d 100644 --- a/programs/odbc-bridge/getIdentifierQuote.cpp +++ b/programs/odbc-bridge/getIdentifierQuote.cpp @@ -2,11 +2,10 @@ #if USE_ODBC -# include -# include -# include - -# define POCO_SQL_ODBC_CLASS Poco::Data::ODBC +#include +#include +#include +#include namespace DB @@ -17,33 +16,27 @@ namespace ErrorCodes extern const int ILLEGAL_TYPE_OF_ARGUMENT; } -std::string getIdentifierQuote(SQLHDBC hdbc) + +std::string getIdentifierQuote(nanodbc::connection & connection) { - std::string identifier; - - SQLSMALLINT t; - SQLRETURN r = POCO_SQL_ODBC_CLASS::SQLGetInfo(hdbc, SQL_IDENTIFIER_QUOTE_CHAR, nullptr, 0, &t); - - if (POCO_SQL_ODBC_CLASS::Utility::isError(r)) - throw POCO_SQL_ODBC_CLASS::ConnectionException(hdbc); - - if (t > 0) + std::string quote; + try { - // I have no idea, why to add '2' here, got from: contrib/poco/Data/ODBC/src/ODBCStatementImpl.cpp:60 (SQL_DRIVER_NAME) - identifier.resize(static_cast(t) + 2); - - if (POCO_SQL_ODBC_CLASS::Utility::isError(POCO_SQL_ODBC_CLASS::SQLGetInfo( - hdbc, SQL_IDENTIFIER_QUOTE_CHAR, &identifier[0], SQLSMALLINT((identifier.length() - 1) * sizeof(identifier[0])), &t))) - throw POCO_SQL_ODBC_CLASS::ConnectionException(hdbc); - - identifier.resize(static_cast(t)); + quote = connection.get_info(SQL_IDENTIFIER_QUOTE_CHAR); } - return identifier; + catch (...) + { + LOG_WARNING(&Poco::Logger::get("ODBCGetIdentifierQuote"), "Cannot fetch identifier quote. Default double quote is used. 
Reason: {}", getCurrentExceptionMessage(false)); + return "\""; + } + + return quote; } -IdentifierQuotingStyle getQuotingStyle(SQLHDBC hdbc) + +IdentifierQuotingStyle getQuotingStyle(nanodbc::connection & connection) { - auto identifier_quote = getIdentifierQuote(hdbc); + auto identifier_quote = getIdentifierQuote(connection); if (identifier_quote.length() == 0) return IdentifierQuotingStyle::None; else if (identifier_quote[0] == '`') diff --git a/programs/odbc-bridge/getIdentifierQuote.h b/programs/odbc-bridge/getIdentifierQuote.h index 0fb4c3bddb1..7f7156eff82 100644 --- a/programs/odbc-bridge/getIdentifierQuote.h +++ b/programs/odbc-bridge/getIdentifierQuote.h @@ -2,20 +2,19 @@ #if USE_ODBC -# include -# include -# include - -# include - +#include +#include +#include #include +#include + namespace DB { -std::string getIdentifierQuote(SQLHDBC hdbc); +std::string getIdentifierQuote(nanodbc::connection & connection); -IdentifierQuotingStyle getQuotingStyle(SQLHDBC hdbc); +IdentifierQuotingStyle getQuotingStyle(nanodbc::connection & connection); } diff --git a/programs/server/.gitignore b/programs/server/.gitignore index b774776e4be..ddc480e4b29 100644 --- a/programs/server/.gitignore +++ b/programs/server/.gitignore @@ -1,8 +1,11 @@ -/access -/dictionaries_lib -/flags -/format_schemas +/metadata /metadata_dropped +/data +/store +/access +/flags +/dictionaries_lib +/format_schemas /preprocessed_configs /shadow /tmp diff --git a/programs/server/CMakeLists.txt b/programs/server/CMakeLists.txt index 697851b294b..3a04228942b 100644 --- a/programs/server/CMakeLists.txt +++ b/programs/server/CMakeLists.txt @@ -19,6 +19,7 @@ set (CLICKHOUSE_SERVER_LINK clickhouse_storages_system clickhouse_table_functions string_utils + jemalloc ${LINK_RESOURCE_LIB} diff --git a/programs/server/Server.cpp b/programs/server/Server.cpp index f2f43aabc7d..e874122250c 100644 --- a/programs/server/Server.cpp +++ b/programs/server/Server.cpp @@ -47,6 +47,7 @@ #include #include #include +#include #include #include #include @@ -100,6 +101,10 @@ # include #endif +#if USE_JEMALLOC +# include +#endif + namespace CurrentMetrics { extern const Metric Revision; @@ -108,11 +113,35 @@ namespace CurrentMetrics extern const Metric MaxDDLEntryID; } +#if USE_JEMALLOC +static bool jemallocOptionEnabled(const char *name) +{ + bool value; + size_t size = sizeof(value); + + if (mallctl(name, reinterpret_cast(&value), &size, /* newp= */ nullptr, /* newlen= */ 0)) + throw Poco::SystemException("mallctl() failed"); + + return value; +} +#else +static bool jemallocOptionEnabled(const char *) { return 0; } +#endif + int mainEntryClickHouseServer(int argc, char ** argv) { DB::Server app; + if (jemallocOptionEnabled("opt.background_thread")) + { + LOG_ERROR(&app.logger(), + "jemalloc.background_thread was requested, " + "however ClickHouse uses percpu_arena and background_thread most likely will not give any benefits, " + "and also background_thread is not compatible with ClickHouse watchdog " + "(that can be disabled with CLICKHOUSE_WATCHDOG_ENABLE=0)"); + } + /// Do not fork separate process from watchdog if we attached to terminal. /// Otherwise it breaks gdb usage. /// Can be overridden by environment variable (cannot use server config at this moment). 
@@ -172,18 +201,24 @@ int waitServersToFinish(std::vector & servers, size_t const int sleep_one_ms = 100; int sleep_current_ms = 0; int current_connections = 0; - while (sleep_current_ms < sleep_max_ms) + for (;;) { current_connections = 0; + for (auto & server : servers) { server.stop(); current_connections += server.currentConnections(); } + if (!current_connections) break; + sleep_current_ms += sleep_one_ms; - std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms)); + if (sleep_current_ms < sleep_max_ms) + std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms)); + else + break; } return current_connections; } @@ -425,8 +460,7 @@ int Server::main(const std::vector & /*args*/) * settings, available functions, data types, aggregate functions, databases, ... */ auto shared_context = Context::createShared(); - auto global_context = std::make_unique(Context::createGlobal(shared_context.get())); - global_context_ptr = global_context.get(); + global_context = Context::createGlobal(shared_context.get()); global_context->makeGlobalContext(); global_context->setApplicationType(Context::ApplicationType::SERVER); @@ -688,16 +722,8 @@ int Server::main(const std::vector & /*args*/) } } - if (config().has("interserver_http_credentials")) - { - String user = config().getString("interserver_http_credentials.user", ""); - String password = config().getString("interserver_http_credentials.password", ""); - - if (user.empty()) - throw Exception("Configuration parameter interserver_http_credentials user can't be empty", ErrorCodes::NO_ELEMENTS_IN_CONFIG); - - global_context->setInterserverCredentials(user, password); - } + LOG_DEBUG(log, "Initiailizing interserver credentials."); + global_context->updateInterserverCredentials(config()); if (config().has("macros")) global_context->setMacros(std::make_unique(config(), "macros", log)); @@ -758,6 +784,7 @@ int Server::main(const std::vector & /*args*/) global_context->setClustersConfig(config); global_context->setMacros(std::make_unique(*config, "macros", log)); global_context->setExternalAuthenticatorsConfig(*config); + global_context->setExternalModelsConfig(config); /// Setup protection to avoid accidental DROP for big tables (that are greater than 50 GB by default) if (config->has("max_table_size_to_drop")) @@ -777,6 +804,7 @@ int Server::main(const std::vector & /*args*/) } global_context->updateStorageConfiguration(*config); + global_context->updateInterserverCredentials(*config); }, /* already_loaded = */ false); /// Reload it right now (initial loading) @@ -885,10 +913,30 @@ int Server::main(const std::vector & /*args*/) servers_to_start_before_tables->emplace_back( port_name, std::make_unique( - new KeeperTCPHandlerFactory(*this), server_pool, socket, new Poco::Net::TCPServerParams)); + new KeeperTCPHandlerFactory(*this, false), server_pool, socket, new Poco::Net::TCPServerParams)); LOG_INFO(log, "Listening for connections to Keeper (tcp): {}", address.toString()); }); + + const char * secure_port_name = "keeper_server.tcp_port_secure"; + createServer(listen_host, secure_port_name, listen_try, [&](UInt16 port) + { +#if USE_SSL + Poco::Net::SecureServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + servers_to_start_before_tables->emplace_back( + secure_port_name, + std::make_unique( + new KeeperTCPHandlerFactory(*this, true), server_pool, socket, new 
Poco::Net::TCPServerParams)); + LOG_INFO(log, "Listening for connections to Keeper with secure protocol (tcp_secure): {}", address.toString()); +#else + UNUSED(port); + throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", + ErrorCodes::SUPPORT_IS_DISABLED}; +#endif + }); } #else throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "ClickHouse server built without NuRaft library. Cannot use internal coordination."); @@ -937,10 +985,12 @@ int Server::main(const std::vector & /*args*/) global_context->shutdownKeeperStorageDispatcher(); } + /// Wait server pool to avoid use-after-free of destroyed context in the handlers + server_pool.joinAll(); + /** Explicitly destroy Context. It is more convenient than in destructor of Server, because logger is still available. * At this moment, no one could own shared part of Context. */ - global_context_ptr = nullptr; global_context.reset(); shared_context.reset(); LOG_DEBUG(log, "Destroyed global context."); @@ -954,14 +1004,14 @@ int Server::main(const std::vector & /*args*/) try { - loadMetadataSystem(*global_context); + loadMetadataSystem(global_context); /// After attaching system databases we can initialize system log. global_context->initializeSystemLogs(); auto & database_catalog = DatabaseCatalog::instance(); /// After the system database is created, attach virtual system tables (in addition to query_log and part_log) attachSystemTablesServer(*database_catalog.getSystemDatabase(), has_zookeeper); /// Then, load remaining databases - loadMetadata(*global_context, default_database); + loadMetadata(global_context, default_database); database_catalog.loadDatabases(); /// After loading validate that default database exists database_catalog.assertDatabaseExists(default_database); @@ -1041,7 +1091,7 @@ int Server::main(const std::vector & /*args*/) else { /// Initialize a watcher periodically updating DNS cache - dns_cache_updater = std::make_unique(*global_context, config().getInt("dns_cache_update_period", 15)); + dns_cache_updater = std::make_unique(global_context, config().getInt("dns_cache_update_period", 15)); } #if defined(OS_LINUX) @@ -1073,7 +1123,7 @@ int Server::main(const std::vector & /*args*/) { /// This object will periodically calculate some metrics. AsynchronousMetrics async_metrics( - *global_context, config().getUInt("asynchronous_metrics_update_period_s", 60), servers_to_start_before_tables, servers); + global_context, config().getUInt("asynchronous_metrics_update_period_s", 60), servers_to_start_before_tables, servers); attachSystemTablesAsync(*DatabaseCatalog::instance().getSystemDatabase(), async_metrics); for (const auto & listen_host : listen_hosts) @@ -1310,7 +1360,7 @@ int Server::main(const std::vector & /*args*/) } /// try to load dictionaries immediately, throw on error and die - ext::scope_guard dictionaries_xmls, models_xmls; + ext::scope_guard dictionaries_xmls; try { if (!config().getBool("dictionaries_lazy_load", true)) @@ -1320,8 +1370,6 @@ int Server::main(const std::vector & /*args*/) } dictionaries_xmls = global_context->getExternalDictionariesLoader().addConfigRepository( std::make_unique(config(), "dictionaries_config")); - models_xmls = global_context->getExternalModelsLoader().addConfigRepository( - std::make_unique(config(), "models_config")); } catch (...) 
{ @@ -1336,7 +1384,7 @@ int Server::main(const std::vector & /*args*/) int pool_size = config().getInt("distributed_ddl.pool_size", 1); if (pool_size < 1) throw Exception("distributed_ddl.pool_size should be greater then 0", ErrorCodes::ARGUMENT_OUT_OF_BOUND); - global_context->setDDLWorker(std::make_unique(pool_size, ddl_zookeeper_path, *global_context, &config(), + global_context->setDDLWorker(std::make_unique(pool_size, ddl_zookeeper_path, global_context, &config(), "distributed_ddl", "DDLWorker", &CurrentMetrics::MaxDDLEntryID)); } diff --git a/programs/server/Server.h b/programs/server/Server.h index fbfc26f6ee5..c698108767c 100644 --- a/programs/server/Server.h +++ b/programs/server/Server.h @@ -40,9 +40,9 @@ public: return BaseDaemon::logger(); } - Context & context() const override + ContextPtr context() const override { - return *global_context_ptr; + return global_context; } bool isCancelled() const override @@ -64,8 +64,7 @@ protected: std::string getDefaultCorePath() const override; private: - Context * global_context_ptr = nullptr; - + ContextPtr global_context; Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure = false) const; using CreateServerFunc = std::function; diff --git a/programs/server/config.xml b/programs/server/config.xml index 9c01b328290..195b6263595 100644 --- a/programs/server/config.xml +++ b/programs/server/config.xml @@ -7,7 +7,20 @@ --> - + trace /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/clickhouse-server.err.log @@ -76,7 +89,7 @@ - + 9005 +/// +/// +/// +/// admin +/// qqq +/// +/// +/// +/// johny +/// 333 +/// +/// +class InterserverCredentials +{ +public: + using UserWithPassword = std::pair; + using CheckResult = std::pair; + using CurrentCredentials = std::vector; + + InterserverCredentials(const InterserverCredentials &) = delete; + + static std::unique_ptr make(const Poco::Util::AbstractConfiguration & config, const std::string & root_tag); + + InterserverCredentials(const std::string & current_user_, const std::string & current_password_, const CurrentCredentials & all_users_store_) + : current_user(current_user_) + , current_password(current_password_) + , all_users_store(all_users_store_) + {} + + CheckResult isValidUser(const UserWithPassword & credentials) const; + CheckResult isValidUser(const std::string & user, const std::string & password) const; + + std::string getUser() const { return current_user; } + + std::string getPassword() const { return current_password; } + + +private: + std::string current_user; + std::string current_password; + + /// In common situation this store contains one record + CurrentCredentials all_users_store; + + static CurrentCredentials parseCredentialsFromConfig( + const std::string & current_user_, + const std::string & current_password_, + const Poco::Util::AbstractConfiguration & config, + const std::string & root_tag); +}; + +using InterserverCredentialsPtr = std::shared_ptr; + +} diff --git a/src/Interpreters/JoinSwitcher.h b/src/Interpreters/JoinSwitcher.h index 1fd719cd5dc..75ff7bb9b2c 100644 --- a/src/Interpreters/JoinSwitcher.h +++ b/src/Interpreters/JoinSwitcher.h @@ -19,6 +19,8 @@ class JoinSwitcher : public IJoin public: JoinSwitcher(std::shared_ptr table_join_, const Block & right_sample_block_); + const TableJoin & getTableJoin() const override { return *table_join; } + /// Add block of data from right hand of JOIN into current join object. 
/// If join-in-memory memory limit exceeded switches to join-on-disk and continue with it. /// @returns false, if join-on-disk disk limit exceeded diff --git a/src/Interpreters/JoinedTables.cpp b/src/Interpreters/JoinedTables.cpp index 17d7949e478..d947a3e2a48 100644 --- a/src/Interpreters/JoinedTables.cpp +++ b/src/Interpreters/JoinedTables.cpp @@ -1,25 +1,24 @@ #include -#include -#include -#include + #include #include - -#include -#include -#include -#include -#include - +#include +#include +#include #include +#include +#include #include #include #include #include -#include -#include #include #include +#include +#include +#include +#include +#include namespace DB { @@ -129,7 +128,7 @@ using RenameQualifiedIdentifiersVisitor = InDepthNodeVisitorgetQueryContext()->executeTableFunction(left_table_expression); StorageID table_id = StorageID::createEmpty(); if (left_db_and_table) { - table_id = context.resolveStorageID(StorageID(left_db_and_table->database, left_db_and_table->table, left_db_and_table->uuid)); + table_id = context->resolveStorageID(StorageID(left_db_and_table->database, left_db_and_table->table, left_db_and_table->uuid)); } else /// If the table is not specified - use the table `system.one`. { table_id = StorageID("system", "one"); } - if (auto view_source = context.getViewSource()) + if (auto view_source = context->getViewSource()) { const auto & storage_values = static_cast(*view_source); auto tmp_table_id = storage_values.getStorageID(); if (tmp_table_id.database_name == table_id.database_name && tmp_table_id.table_name == table_id.table_name) { /// Read from view source. - return context.getViewSource(); + return context->getViewSource(); } } @@ -192,7 +191,7 @@ bool JoinedTables::resolveTables() if (tables_with_columns.size() != table_expressions.size()) throw Exception("Unexpected tables count", ErrorCodes::LOGICAL_ERROR); - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); if (settings.joined_subquery_requires_alias && tables_with_columns.size() > 1) { for (size_t i = 0; i < tables_with_columns.size(); ++i) @@ -234,7 +233,7 @@ void JoinedTables::rewriteDistributedInAndJoins(ASTPtr & query) String database; if (!renamed_tables.empty()) - database = context.getCurrentDatabase(); + database = context->getCurrentDatabase(); for (auto & [subquery, ast_tables] : renamed_tables) { @@ -254,8 +253,8 @@ std::shared_ptr JoinedTables::makeTableJoin(const ASTSelectQuery & se if (tables_with_columns.size() < 2) return {}; - auto settings = context.getSettingsRef(); - auto table_join = std::make_shared(settings, context.getTemporaryVolume()); + auto settings = context->getSettingsRef(); + auto table_join = std::make_shared(settings, context->getTemporaryVolume()); const ASTTablesInSelectQueryElement * ast_join = select_query.join(); const auto & table_to_join = ast_join->table_expression->as(); @@ -263,7 +262,7 @@ std::shared_ptr JoinedTables::makeTableJoin(const ASTSelectQuery & se /// TODO This syntax does not support specifying a database name. 
if (table_to_join.database_and_table_name) { - auto joined_table_id = context.resolveStorageID(table_to_join.database_and_table_name); + auto joined_table_id = context->resolveStorageID(table_to_join.database_and_table_name); StoragePtr table = DatabaseCatalog::instance().tryGetTable(joined_table_id, context); if (table) { diff --git a/src/Interpreters/JoinedTables.h b/src/Interpreters/JoinedTables.h index 812808fed61..52eb71e419d 100644 --- a/src/Interpreters/JoinedTables.h +++ b/src/Interpreters/JoinedTables.h @@ -22,11 +22,11 @@ using StorageMetadataPtr = std::shared_ptr; class JoinedTables { public: - JoinedTables(Context && context, const ASTSelectQuery & select_query); + JoinedTables(ContextPtr context, const ASTSelectQuery & select_query); void reset(const ASTSelectQuery & select_query) { - *this = JoinedTables(std::move(context), select_query); + *this = JoinedTables(Context::createCopy(context), select_query); } StoragePtr getLeftTableStorage(); @@ -48,7 +48,7 @@ public: std::unique_ptr makeLeftTableSubquery(const SelectQueryOptions & select_options); private: - Context context; + ContextPtr context; std::vector table_expressions; TablesWithColumns tables_with_columns; diff --git a/src/Interpreters/MergeJoin.h b/src/Interpreters/MergeJoin.h index d145a69ce9d..f286e74b385 100644 --- a/src/Interpreters/MergeJoin.h +++ b/src/Interpreters/MergeJoin.h @@ -23,6 +23,7 @@ class MergeJoin : public IJoin public: MergeJoin(std::shared_ptr table_join_, const Block & right_sample_block); + const TableJoin & getTableJoin() const override { return *table_join; } bool addJoinedBlock(const Block & block, bool check_limits) override; void joinBlock(Block &, ExtraBlockPtr & not_processed) override; void joinTotals(Block &) const override; diff --git a/src/Interpreters/MonotonicityCheckVisitor.h b/src/Interpreters/MonotonicityCheckVisitor.h index 87571a44eb0..350318047c7 100644 --- a/src/Interpreters/MonotonicityCheckVisitor.h +++ b/src/Interpreters/MonotonicityCheckVisitor.h @@ -26,7 +26,7 @@ public: struct Data { const TablesWithColumns & tables; - const Context & context; + ContextPtr context; const std::unordered_set & group_by_function_hashes; Monotonicity monotonicity{true, true, true}; ASTIdentifier * identifier = nullptr; diff --git a/src/Interpreters/MutationsInterpreter.cpp b/src/Interpreters/MutationsInterpreter.cpp index 3573d48b837..1315f9efa05 100644 --- a/src/Interpreters/MutationsInterpreter.cpp +++ b/src/Interpreters/MutationsInterpreter.cpp @@ -26,6 +26,7 @@ #include #include #include +#include namespace DB @@ -50,7 +51,7 @@ class FirstNonDeterministicFunctionMatcher public: struct Data { - const Context & context; + ContextPtr context; std::optional nondeterministic_function_name; }; @@ -80,7 +81,7 @@ public: using FirstNonDeterministicFunctionFinder = InDepthNodeVisitor; -std::optional findFirstNonDeterministicFunctionName(const MutationCommand & command, const Context & context) +std::optional findFirstNonDeterministicFunctionName(const MutationCommand & command, ContextPtr context) { FirstNonDeterministicFunctionMatcher::Data finder_data{context, std::nullopt}; @@ -113,7 +114,7 @@ std::optional findFirstNonDeterministicFunctionName(const MutationComman return {}; } -ASTPtr prepareQueryAffectedAST(const std::vector & commands, const StoragePtr & storage, const Context & context) +ASTPtr prepareQueryAffectedAST(const std::vector & commands, const StoragePtr & storage, ContextPtr context) { /// Execute `SELECT count() FROM storage WHERE predicate1 OR predicate2 OR ...` query. 
/// The result can differ from the number of affected rows (e.g. if there is an UPDATE command that @@ -178,7 +179,7 @@ bool isStorageTouchedByMutations( const StoragePtr & storage, const StorageMetadataPtr & metadata_snapshot, const std::vector & commands, - Context context_copy) + ContextPtr context_copy) { if (commands.empty()) return false; @@ -206,8 +207,8 @@ bool isStorageTouchedByMutations( if (all_commands_can_be_skipped) return false; - context_copy.setSetting("max_streams_to_max_threads_ratio", 1); - context_copy.setSetting("max_threads", 1); + context_copy->setSetting("max_streams_to_max_threads_ratio", 1); + context_copy->setSetting("max_threads", 1); ASTPtr select_query = prepareQueryAffectedAST(commands, storage, context_copy); @@ -232,7 +233,7 @@ bool isStorageTouchedByMutations( ASTPtr getPartitionAndPredicateExpressionForMutationCommand( const MutationCommand & command, const StoragePtr & storage, - const Context & context + ContextPtr context ) { ASTPtr partition_predicate_as_ast_func; @@ -266,7 +267,7 @@ MutationsInterpreter::MutationsInterpreter( StoragePtr storage_, const StorageMetadataPtr & metadata_snapshot_, MutationCommands commands_, - const Context & context_, + ContextPtr context_, bool can_execute_) : storage(std::move(storage_)) , metadata_snapshot(metadata_snapshot_) @@ -349,6 +350,35 @@ static void validateUpdateColumns( } } +/// Returns ASTs of updated nested subcolumns, if all of subcolumns were updated. +/// They are used to validate sizes of nested arrays. +/// If some of subcolumns were updated and some weren't, +/// it makes sense to validate only updated columns with their old versions, +/// because their sizes couldn't change, since sizes of all nested subcolumns must be consistent. +static std::optional> getExpressionsOfUpdatedNestedSubcolumns( + const String & column_name, + const NamesAndTypesList & all_columns, + const std::unordered_map & column_to_update_expression) +{ + std::vector res; + auto source_name = Nested::splitName(column_name).first; + + /// Check this nested subcolumn + for (const auto & column : all_columns) + { + auto split = Nested::splitName(column.name); + if (isArray(column.type) && split.first == source_name && !split.second.empty()) + { + auto it = column_to_update_expression.find(column.name); + if (it == column_to_update_expression.end()) + return {}; + + res.push_back(it->second); + } + } + + return res; +} ASTPtr MutationsInterpreter::prepare(bool dry_run) { @@ -398,7 +428,7 @@ ASTPtr MutationsInterpreter::prepare(bool dry_run) auto dependencies = getAllColumnDependencies(metadata_snapshot, updated_columns); /// First, break a sequence of commands into stages. - for (const auto & command : commands) + for (auto & command : commands) { if (command.type == MutationCommand::DELETE) { @@ -438,12 +468,43 @@ ASTPtr MutationsInterpreter::prepare(bool dry_run) /// /// Outer CAST is added just in case if we don't trust the returning type of 'if'. 
- auto type_literal = std::make_shared(columns_desc.getPhysical(column).type->getName()); + const auto & type = columns_desc.getPhysical(column).type; + auto type_literal = std::make_shared(type->getName()); const auto & update_expr = kv.second; + + ASTPtr condition = getPartitionAndPredicateExpressionForMutationCommand(command); + + /// And new check validateNestedArraySizes for Nested subcolumns + if (isArray(type) && !Nested::splitName(column).second.empty()) + { + std::shared_ptr function = nullptr; + + auto nested_update_exprs = getExpressionsOfUpdatedNestedSubcolumns(column, all_columns, command.column_to_update_expression); + if (!nested_update_exprs) + { + function = makeASTFunction("validateNestedArraySizes", + condition, + update_expr->clone(), + std::make_shared(column)); + condition = makeASTFunction("and", condition, function); + } + else if (nested_update_exprs->size() > 1) + { + function = std::make_shared(); + function->name = "validateNestedArraySizes"; + function->arguments = std::make_shared(); + function->children.push_back(function->arguments); + function->arguments->children.push_back(condition); + for (const auto & it : *nested_update_exprs) + function->arguments->children.push_back(it->clone()); + condition = makeASTFunction("and", condition, function); + } + } + auto updated_column = makeASTFunction("CAST", makeASTFunction("if", - getPartitionAndPredicateExpressionForMutationCommand(command), + condition, makeASTFunction("CAST", update_expr->clone(), type_literal), @@ -649,9 +710,9 @@ ASTPtr MutationsInterpreter::prepareInterpreterSelectQuery(std::vector & all_asts->children.push_back(std::make_shared(column)); auto syntax_result = TreeRewriter(context).analyze(all_asts, all_columns, storage, metadata_snapshot); - if (context.hasQueryContext()) + if (context->hasQueryContext()) for (const auto & it : syntax_result->getScalars()) - context.getQueryContext().addScalar(it.first, it.second); + context->getQueryContext()->addScalar(it.first, it.second); stage.analyzer = std::make_unique(all_asts, syntax_result, context); @@ -756,7 +817,7 @@ QueryPipelinePtr MutationsInterpreter::addStreamsForLaterStages(const std::vecto SubqueriesForSets & subqueries_for_sets = stage.analyzer->getSubqueriesForSets(); if (!subqueries_for_sets.empty()) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); SizeLimits network_transfer_limits( settings.max_rows_to_transfer, settings.max_bytes_to_transfer, settings.transfer_overflow_mode); addCreatingSetsStep(plan, std::move(subqueries_for_sets), network_transfer_limits, context); @@ -780,7 +841,7 @@ void MutationsInterpreter::validate() if (!select_interpreter) select_interpreter = std::make_unique(mutation_ast, context, storage, metadata_snapshot, select_limits); - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); /// For Replicated* storages mutations cannot employ non-deterministic functions /// because that produces inconsistencies between replicas diff --git a/src/Interpreters/MutationsInterpreter.h b/src/Interpreters/MutationsInterpreter.h index dcebba5743e..34a9b61771d 100644 --- a/src/Interpreters/MutationsInterpreter.h +++ b/src/Interpreters/MutationsInterpreter.h @@ -23,13 +23,13 @@ bool isStorageTouchedByMutations( const StoragePtr & storage, const StorageMetadataPtr & metadata_snapshot, const std::vector & commands, - Context context_copy + ContextPtr context_copy ); ASTPtr 
getPartitionAndPredicateExpressionForMutationCommand( const MutationCommand & command, const StoragePtr & storage, - const Context & context + ContextPtr context ); /// Create an input stream that will read data from storage and apply mutation commands (UPDATEs, DELETEs, MATERIALIZEs) @@ -43,7 +43,7 @@ public: StoragePtr storage_, const StorageMetadataPtr & metadata_snapshot_, MutationCommands commands_, - const Context & context_, + ContextPtr context_, bool can_execute_); void validate(); @@ -74,7 +74,7 @@ private: StoragePtr storage; StorageMetadataPtr metadata_snapshot; MutationCommands commands; - Context context; + ContextPtr context; bool can_execute; SelectQueryOptions select_limits; @@ -101,7 +101,7 @@ private: struct Stage { - Stage(const Context & context_) : expressions_chain(context_) {} + explicit Stage(ContextPtr context_) : expressions_chain(context_) {} ASTs filters; std::unordered_map column_to_updated; diff --git a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp index dfc126a6c24..2420255c5c1 100644 --- a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp +++ b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp @@ -41,7 +41,7 @@ namespace MySQLInterpreter { static inline String resolveDatabase( - const String & database_in_query, const String & replica_mysql_database, const String & replica_clickhouse_database, const Context & context) + const String & database_in_query, const String & replica_mysql_database, const String & replica_clickhouse_database, ContextPtr context) { if (!database_in_query.empty()) { @@ -63,7 +63,7 @@ static inline String resolveDatabase( /// context.getCurrentDatabase() is always return `default database` /// When USE replica_mysql_database; CREATE TABLE table_name; /// context.getCurrentDatabase() is always return replica_clickhouse_database - const String & current_database = context.getCurrentDatabase(); + const String & current_database = context->getCurrentDatabase(); return current_database != replica_clickhouse_database ? 
"" : replica_clickhouse_database; } @@ -117,7 +117,7 @@ static inline NamesAndTypesList getColumnsList(ASTExpressionList * columns_defin return columns_name_and_type; } -static NamesAndTypesList getNames(const ASTFunction & expr, const Context & context, const NamesAndTypesList & columns) +static NamesAndTypesList getNames(const ASTFunction & expr, ContextPtr context, const NamesAndTypesList & columns) { if (expr.arguments->children.empty()) return NamesAndTypesList{}; @@ -158,7 +158,7 @@ static NamesAndTypesList modifyPrimaryKeysToNonNullable(const NamesAndTypesList } static inline std::tuple getKeys( - ASTExpressionList * columns_define, ASTExpressionList * indices_define, const Context & context, NamesAndTypesList & columns) + ASTExpressionList * columns_define, ASTExpressionList * indices_define, ContextPtr context, NamesAndTypesList & columns) { NameSet increment_columns; auto keys = makeASTFunction("tuple"); @@ -370,7 +370,7 @@ static ASTPtr getOrderByPolicy( return order_by_expression; } -void InterpreterCreateImpl::validate(const InterpreterCreateImpl::TQuery & create_query, const Context &) +void InterpreterCreateImpl::validate(const InterpreterCreateImpl::TQuery & create_query, ContextPtr) { /// This is dangerous, because the like table may not exists in ClickHouse if (create_query.like_table) @@ -383,7 +383,7 @@ void InterpreterCreateImpl::validate(const InterpreterCreateImpl::TQuery & creat } ASTs InterpreterCreateImpl::getRewrittenQueries( - const TQuery & create_query, const Context & context, const String & mapped_to_database, const String & mysql_database) + const TQuery & create_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database) { auto rewritten_query = std::make_shared(); if (resolveDatabase(create_query.database, mysql_database, mapped_to_database, context) != mapped_to_database) @@ -453,12 +453,12 @@ ASTs InterpreterCreateImpl::getRewrittenQueries( return ASTs{rewritten_query}; } -void InterpreterDropImpl::validate(const InterpreterDropImpl::TQuery & /*query*/, const Context & /*context*/) +void InterpreterDropImpl::validate(const InterpreterDropImpl::TQuery & /*query*/, ContextPtr /*context*/) { } ASTs InterpreterDropImpl::getRewrittenQueries( - const InterpreterDropImpl::TQuery & drop_query, const Context & context, const String & mapped_to_database, const String & mysql_database) + const InterpreterDropImpl::TQuery & drop_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database) { const auto & database_name = resolveDatabase(drop_query.database, mysql_database, mapped_to_database, context); @@ -471,14 +471,14 @@ ASTs InterpreterDropImpl::getRewrittenQueries( return ASTs{rewritten_query}; } -void InterpreterRenameImpl::validate(const InterpreterRenameImpl::TQuery & rename_query, const Context & /*context*/) +void InterpreterRenameImpl::validate(const InterpreterRenameImpl::TQuery & rename_query, ContextPtr /*context*/) { if (rename_query.exchange) throw Exception("Cannot execute exchange for external ddl query.", ErrorCodes::NOT_IMPLEMENTED); } ASTs InterpreterRenameImpl::getRewrittenQueries( - const InterpreterRenameImpl::TQuery & rename_query, const Context & context, const String & mapped_to_database, const String & mysql_database) + const InterpreterRenameImpl::TQuery & rename_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database) { ASTRenameQuery::Elements elements; for (const auto & rename_element : rename_query.elements) @@ -507,12 +507,12 @@ ASTs 
InterpreterRenameImpl::getRewrittenQueries( return ASTs{rewritten_query}; } -void InterpreterAlterImpl::validate(const InterpreterAlterImpl::TQuery & /*query*/, const Context & /*context*/) +void InterpreterAlterImpl::validate(const InterpreterAlterImpl::TQuery & /*query*/, ContextPtr /*context*/) { } ASTs InterpreterAlterImpl::getRewrittenQueries( - const InterpreterAlterImpl::TQuery & alter_query, const Context & context, const String & mapped_to_database, const String & mysql_database) + const InterpreterAlterImpl::TQuery & alter_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database) { if (resolveDatabase(alter_query.database, mysql_database, mapped_to_database, context) != mapped_to_database) return {}; diff --git a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.h b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.h index 497a661cc7f..3202612ac94 100644 --- a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.h +++ b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.h @@ -1,62 +1,66 @@ #pragma once -#include -#include #include #include #include #include +#include #include #include +#include namespace DB { namespace MySQLInterpreter { + struct InterpreterDropImpl + { + using TQuery = ASTDropQuery; -struct InterpreterDropImpl -{ - using TQuery = ASTDropQuery; + static void validate(const TQuery & query, ContextPtr context); - static void validate(const TQuery & query, const Context & context); + static ASTs getRewrittenQueries( + const TQuery & drop_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database); + }; - static ASTs getRewrittenQueries(const TQuery & drop_query, const Context & context, const String & mapped_to_database, const String & mysql_database); -}; + struct InterpreterAlterImpl + { + using TQuery = MySQLParser::ASTAlterQuery; -struct InterpreterAlterImpl -{ - using TQuery = MySQLParser::ASTAlterQuery; + static void validate(const TQuery & query, ContextPtr context); - static void validate(const TQuery & query, const Context & context); + static ASTs getRewrittenQueries( + const TQuery & alter_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database); + }; - static ASTs getRewrittenQueries(const TQuery & alter_query, const Context & context, const String & mapped_to_database, const String & mysql_database); -}; + struct InterpreterRenameImpl + { + using TQuery = ASTRenameQuery; -struct InterpreterRenameImpl -{ - using TQuery = ASTRenameQuery; + static void validate(const TQuery & query, ContextPtr context); - static void validate(const TQuery & query, const Context & context); + static ASTs getRewrittenQueries( + const TQuery & rename_query, ContextPtr context, const String & mapped_to_database, const String & mysql_database); + }; - static ASTs getRewrittenQueries(const TQuery & rename_query, const Context & context, const String & mapped_to_database, const String & mysql_database); -}; + struct InterpreterCreateImpl + { + using TQuery = MySQLParser::ASTCreateQuery; -struct InterpreterCreateImpl -{ - using TQuery = MySQLParser::ASTCreateQuery; + static void validate(const TQuery & query, ContextPtr context); - static void validate(const TQuery & query, const Context & context); - - static ASTs getRewrittenQueries(const TQuery & create_query, const Context & context, const String & mapped_to_database, const String & mysql_database); -}; + static ASTs getRewrittenQueries( + const TQuery & create_query, ContextPtr context, const String & mapped_to_database, const 
String & mysql_database); + }; template -class InterpreterMySQLDDLQuery : public IInterpreter +class InterpreterMySQLDDLQuery : public IInterpreter, WithContext { public: - InterpreterMySQLDDLQuery(const ASTPtr & query_ptr_, Context & context_, const String & mapped_to_database_, const String & mysql_database_) - : query_ptr(query_ptr_), context(context_), mapped_to_database(mapped_to_database_), mysql_database(mysql_database_) + InterpreterMySQLDDLQuery( + const ASTPtr & query_ptr_, ContextPtr context_, const String & mapped_to_database_, const String & mysql_database_) + : WithContext(context_), query_ptr(query_ptr_), mapped_to_database(mapped_to_database_), mysql_database(mysql_database_) { } @@ -64,18 +68,17 @@ public: { const typename InterpreterImpl::TQuery & query = query_ptr->as(); - InterpreterImpl::validate(query, context); - ASTs rewritten_queries = InterpreterImpl::getRewrittenQueries(query, context, mapped_to_database, mysql_database); + InterpreterImpl::validate(query, getContext()); + ASTs rewritten_queries = InterpreterImpl::getRewrittenQueries(query, getContext(), mapped_to_database, mysql_database); for (const auto & rewritten_query : rewritten_queries) - executeQuery("/* Rewritten MySQL DDL Query */ " + queryToString(rewritten_query), context, true); + executeQuery("/* Rewritten MySQL DDL Query */ " + queryToString(rewritten_query), getContext(), true); return BlockIO{}; } private: ASTPtr query_ptr; - Context & context; const String mapped_to_database; const String mysql_database; }; diff --git a/src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp b/src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp index 5a82a570db0..77a14e780c5 100644 --- a/src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp +++ b/src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp @@ -18,7 +18,7 @@ using namespace DB; -static inline ASTPtr tryRewrittenCreateQuery(const String & query, const Context & context) +static inline ASTPtr tryRewrittenCreateQuery(const String & query, ContextPtr context) { ParserExternalDDLQuery external_ddl_parser; ASTPtr ast = parseQuery(external_ddl_parser, "EXTERNAL DDL FROM MySQL(test_database, test_database) " + query, 0, 0); diff --git a/src/Interpreters/OpenTelemetrySpanLog.cpp b/src/Interpreters/OpenTelemetrySpanLog.cpp index f9ae6518af0..c72b0f3d326 100644 --- a/src/Interpreters/OpenTelemetrySpanLog.cpp +++ b/src/Interpreters/OpenTelemetrySpanLog.cpp @@ -116,7 +116,7 @@ OpenTelemetrySpanHolder::~OpenTelemetrySpanHolder() return; } - auto * context = thread_group->query_context; + auto context = thread_group->query_context.lock(); if (!context) { // Both global and query contexts can be null when executing a diff --git a/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.cpp b/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.cpp new file mode 100644 index 00000000000..399def00006 --- /dev/null +++ b/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.cpp @@ -0,0 +1,124 @@ +#include +#include +#include +#include +#include +#include +#include +#include + +namespace +{ + +using namespace DB; + +Field executeFunctionOnField( + const Field & field, const std::string & name, + const ExpressionActionsPtr & sharding_expr, + const std::string & sharding_key_column_name) +{ + DataTypePtr type = applyVisitor(FieldToDataType{}, field); + + ColumnWithTypeAndName column; + column.column = type->createColumnConst(1, field); + column.name = name; + column.type = type; + + Block block{column}; + size_t num_rows = 1; + sharding_expr->execute(block, num_rows); + 
+ ColumnWithTypeAndName & ret = block.getByName(sharding_key_column_name); + return (*ret.column)[0]; +} + +/// @param sharding_column_value - one of values from IN +/// @param sharding_column_name - name of that column +/// @param sharding_expr - expression of sharding_key for the Distributed() table +/// @param sharding_key_column_name - name of the column for sharding_expr +/// @param shard_info - info for the current shard (to compare shard_num with calculated) +/// @param slots - weight -> shard mapping +/// @return true if shard may contain such value (or it is unknown), otherwise false. +bool shardContains( + const Field & sharding_column_value, + const std::string & sharding_column_name, + const ExpressionActionsPtr & sharding_expr, + const std::string & sharding_key_column_name, + const Cluster::ShardInfo & shard_info, + const Cluster::SlotToShard & slots) +{ + /// NULL is not allowed in sharding key, + /// so it should be safe to assume that shard cannot contain it. + if (sharding_column_value.isNull()) + return false; + + Field sharding_value = executeFunctionOnField(sharding_column_value, sharding_column_name, sharding_expr, sharding_key_column_name); + /// The value from IN can be non-numeric, + /// but in this case it should be convertible to numeric type, let's try. + sharding_value = convertFieldToType(sharding_value, DataTypeUInt64()); + /// In case of conversion is not possible (NULL), shard cannot contain the value anyway. + if (sharding_value.isNull()) + return false; + + UInt64 value = sharding_value.get(); + const auto shard_num = slots[value % slots.size()] + 1; + return shard_info.shard_num == shard_num; +} + +} + +namespace DB +{ + +bool OptimizeShardingKeyRewriteInMatcher::needChildVisit(ASTPtr & /*node*/, const ASTPtr & /*child*/) +{ + return true; +} + +void OptimizeShardingKeyRewriteInMatcher::visit(ASTPtr & node, Data & data) +{ + if (auto * function = node->as()) + visit(*function, data); +} + +void OptimizeShardingKeyRewriteInMatcher::visit(ASTFunction & function, Data & data) +{ + if (function.name != "in") + return; + + auto * left = function.arguments->children.front().get(); + auto * right = function.arguments->children.back().get(); + auto * identifier = left->as(); + if (!identifier) + return; + + const auto & sharding_expr = data.sharding_key_expr; + const auto & sharding_key_column_name = data.sharding_key_column_name; + + if (!sharding_expr->getRequiredColumnsWithTypes().contains(identifier->name())) + return; + + /// NOTE: that we should not take care about empty tuple, + /// since after optimize_skip_unused_shards, + /// at least one element should match each shard. 
+ if (auto * tuple_func = right->as(); tuple_func && tuple_func->name == "tuple") + { + auto * tuple_elements = tuple_func->children.front()->as(); + std::erase_if(tuple_elements->children, [&](auto & child) + { + auto * literal = child->template as(); + return literal && !shardContains(literal->value, identifier->name(), sharding_expr, sharding_key_column_name, data.shard_info, data.slots); + }); + } + else if (auto * tuple_literal = right->as(); + tuple_literal && tuple_literal->value.getType() == Field::Types::Tuple) + { + auto & tuple = tuple_literal->value.get(); + std::erase_if(tuple, [&](auto & child) + { + return !shardContains(child, identifier->name(), sharding_expr, sharding_key_column_name, data.shard_info, data.slots); + }); + } +} + +} diff --git a/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.h b/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.h new file mode 100644 index 00000000000..3087fb844ed --- /dev/null +++ b/src/Interpreters/OptimizeShardingKeyRewriteInVisitor.h @@ -0,0 +1,41 @@ +#pragma once + +#include +#include + +namespace DB +{ + +class ExpressionActions; +using ExpressionActionsPtr = std::shared_ptr; + +class ASTFunction; + +/// Rewrite `sharding_key IN (...)` for specific shard, +/// so that it will contain only values that belong to this specific shard. +/// +/// See also: +/// - evaluateExpressionOverConstantCondition() +/// - StorageDistributed::createSelector() +/// - createBlockSelector() +struct OptimizeShardingKeyRewriteInMatcher +{ + /// Cluster::SlotToShard + using SlotToShard = std::vector; + + struct Data + { + const ExpressionActionsPtr & sharding_key_expr; + const std::string & sharding_key_column_name; + const Cluster::ShardInfo & shard_info; + const Cluster::SlotToShard & slots; + }; + + static bool needChildVisit(ASTPtr & /*node*/, const ASTPtr & /*child*/); + static void visit(ASTPtr & node, Data & data); + static void visit(ASTFunction & function, Data & data); +}; + +using OptimizeShardingKeyRewriteInVisitor = InDepthNodeVisitor; + +} diff --git a/src/Interpreters/PartLog.cpp b/src/Interpreters/PartLog.cpp index c180a4dd254..e4459399336 100644 --- a/src/Interpreters/PartLog.cpp +++ b/src/Interpreters/PartLog.cpp @@ -103,7 +103,7 @@ void PartLogElement::appendToBlock(MutableColumns & columns) const bool PartLog::addNewPart( - Context & current_context, const MutableDataPartPtr & part, UInt64 elapsed_ns, const ExecutionStatus & execution_status) + ContextPtr current_context, const MutableDataPartPtr & part, UInt64 elapsed_ns, const ExecutionStatus & execution_status) { return addNewParts(current_context, {part}, elapsed_ns, execution_status); } @@ -120,7 +120,7 @@ inline UInt64 time_in_seconds(std::chrono::time_point } bool PartLog::addNewParts( - Context & current_context, const PartLog::MutableDataPartsVector & parts, UInt64 elapsed_ns, const ExecutionStatus & execution_status) + ContextPtr current_context, const PartLog::MutableDataPartsVector & parts, UInt64 elapsed_ns, const ExecutionStatus & execution_status) { if (parts.empty()) return true; @@ -130,7 +130,7 @@ bool PartLog::addNewParts( try { auto table_id = parts.front()->storage.getStorageID(); - part_log = current_context.getPartLog(table_id.database_name); // assume parts belong to the same table + part_log = current_context->getPartLog(table_id.database_name); // assume parts belong to the same table if (!part_log) return false; diff --git a/src/Interpreters/PartLog.h b/src/Interpreters/PartLog.h index c946d6ce85f..edb6ab4a45f 100644 --- a/src/Interpreters/PartLog.h 
+++ b/src/Interpreters/PartLog.h @@ -69,9 +69,9 @@ class PartLog : public SystemLog public: /// Add a record about creation of new part. - static bool addNewPart(Context & context, const MutableDataPartPtr & part, UInt64 elapsed_ns, + static bool addNewPart(ContextPtr context, const MutableDataPartPtr & part, UInt64 elapsed_ns, const ExecutionStatus & execution_status = {}); - static bool addNewParts(Context & context, const MutableDataPartsVector & parts, UInt64 elapsed_ns, + static bool addNewParts(ContextPtr context, const MutableDataPartsVector & parts, UInt64 elapsed_ns, const ExecutionStatus & execution_status = {}); }; diff --git a/src/Interpreters/PredicateExpressionsOptimizer.cpp b/src/Interpreters/PredicateExpressionsOptimizer.cpp index 476bdaaceea..f2e55441fb6 100644 --- a/src/Interpreters/PredicateExpressionsOptimizer.cpp +++ b/src/Interpreters/PredicateExpressionsOptimizer.cpp @@ -19,11 +19,11 @@ namespace ErrorCodes } PredicateExpressionsOptimizer::PredicateExpressionsOptimizer( - const Context & context_, const TablesWithColumns & tables_with_columns_, const Settings & settings) - : enable_optimize_predicate_expression(settings.enable_optimize_predicate_expression) + ContextPtr context_, const TablesWithColumns & tables_with_columns_, const Settings & settings) + : WithContext(context_) + , enable_optimize_predicate_expression(settings.enable_optimize_predicate_expression) , enable_optimize_predicate_expression_to_final_subquery(settings.enable_optimize_predicate_expression_to_final_subquery) , allow_push_predicate_when_subquery_contains_with(settings.allow_push_predicate_when_subquery_contains_with) - , context(context_) , tables_with_columns(tables_with_columns_) { } @@ -87,7 +87,7 @@ std::vector PredicateExpressionsOptimizer::extractTablesPredicates(const A for (const auto & predicate_expression : splitConjunctionPredicate({where, prewhere})) { - ExpressionInfoVisitor::Data expression_info{.context = context, .tables = tables_with_columns}; + ExpressionInfoVisitor::Data expression_info{WithContext{getContext()}, tables_with_columns}; ExpressionInfoVisitor(expression_info).visit(predicate_expression); if (expression_info.is_stateful_function @@ -162,7 +162,7 @@ bool PredicateExpressionsOptimizer::tryRewritePredicatesToTable(ASTPtr & table_e { auto optimize_final = enable_optimize_predicate_expression_to_final_subquery; auto optimize_with = allow_push_predicate_when_subquery_contains_with; - PredicateRewriteVisitor::Data data(context, table_predicates, table_columns, optimize_final, optimize_with); + PredicateRewriteVisitor::Data data(getContext(), table_predicates, table_columns, optimize_final, optimize_with); PredicateRewriteVisitor(data).visit(table_element); return data.is_rewrite; @@ -187,7 +187,8 @@ bool PredicateExpressionsOptimizer::tryMovePredicatesFromHavingToWhere(ASTSelect for (const auto & moving_predicate: splitConjunctionPredicate({select_query.having()})) { - ExpressionInfoVisitor::Data expression_info{.context = context, .tables = {}}; + TablesWithColumns tables; + ExpressionInfoVisitor::Data expression_info{WithContext{getContext()}, tables}; ExpressionInfoVisitor(expression_info).visit(moving_predicate); /// TODO: If there is no group by, where, and prewhere expression, we can push down the stateful function diff --git a/src/Interpreters/PredicateExpressionsOptimizer.h b/src/Interpreters/PredicateExpressionsOptimizer.h index 223ac1e8998..a31b9907da6 100644 --- a/src/Interpreters/PredicateExpressionsOptimizer.h +++ 
b/src/Interpreters/PredicateExpressionsOptimizer.h @@ -1,12 +1,12 @@ #pragma once -#include +#include #include +#include namespace DB { -class Context; struct Settings; /** Predicate optimization based on rewriting ast rules @@ -15,10 +15,10 @@ struct Settings; * - Move predicates from having to where * - Push the predicate down from the current query to the having of the subquery */ -class PredicateExpressionsOptimizer +class PredicateExpressionsOptimizer : WithContext { public: - PredicateExpressionsOptimizer(const Context & context_, const TablesWithColumns & tables_with_columns_, const Settings & settings_); + PredicateExpressionsOptimizer(ContextPtr context_, const TablesWithColumns & tables_with_columns_, const Settings & settings_); bool optimize(ASTSelectQuery & select_query); @@ -26,7 +26,6 @@ private: const bool enable_optimize_predicate_expression; const bool enable_optimize_predicate_expression_to_final_subquery; const bool allow_push_predicate_when_subquery_contains_with; - const Context & context; const TablesWithColumns & tables_with_columns; std::vector extractTablesPredicates(const ASTPtr & where, const ASTPtr & prewhere); diff --git a/src/Interpreters/PredicateRewriteVisitor.cpp b/src/Interpreters/PredicateRewriteVisitor.cpp index 6f28b9050df..092d37d78dd 100644 --- a/src/Interpreters/PredicateRewriteVisitor.cpp +++ b/src/Interpreters/PredicateRewriteVisitor.cpp @@ -17,8 +17,16 @@ namespace DB { PredicateRewriteVisitorData::PredicateRewriteVisitorData( - const Context & context_, const ASTs & predicates_, const TableWithColumnNamesAndTypes & table_columns_, bool optimize_final_, bool optimize_with_) - : context(context_), predicates(predicates_), table_columns(table_columns_), optimize_final(optimize_final_), optimize_with(optimize_with_) + ContextPtr context_, + const ASTs & predicates_, + const TableWithColumnNamesAndTypes & table_columns_, + bool optimize_final_, + bool optimize_with_) + : WithContext(context_) + , predicates(predicates_) + , table_columns(table_columns_) + , optimize_final(optimize_final_) + , optimize_with(optimize_with_) { } @@ -64,7 +72,7 @@ void PredicateRewriteVisitorData::visitOtherInternalSelect(ASTSelectQuery & sele } const Names & internal_columns = InterpreterSelectQuery( - temp_internal_select, context, SelectQueryOptions().analyze()).getSampleBlock().getNames(); + temp_internal_select, getContext(), SelectQueryOptions().analyze()).getSampleBlock().getNames(); if (rewriteSubquery(*temp_select_query, internal_columns)) { @@ -96,7 +104,7 @@ bool PredicateRewriteVisitorData::rewriteSubquery(ASTSelectQuery & subquery, con || (!optimize_with && subquery.with()) || subquery.withFill() || subquery.limitBy() || subquery.limitLength() - || hasNonRewritableFunction(subquery.select(), context)) + || hasNonRewritableFunction(subquery.select(), getContext())) return false; Names outer_columns = table_columns.columns.getNames(); diff --git a/src/Interpreters/PredicateRewriteVisitor.h b/src/Interpreters/PredicateRewriteVisitor.h index 1132d93a5ec..fc076464925 100644 --- a/src/Interpreters/PredicateRewriteVisitor.h +++ b/src/Interpreters/PredicateRewriteVisitor.h @@ -1,15 +1,16 @@ #pragma once -#include +#include +#include +#include #include #include -#include -#include +#include namespace DB { -class PredicateRewriteVisitorData +class PredicateRewriteVisitorData : WithContext { public: bool is_rewrite = false; @@ -19,17 +20,17 @@ public: static bool needChild(const ASTPtr & node, const ASTPtr &) { - if (node && node->as()) - return false; - - return true; 
+ return !(node && node->as()); } - PredicateRewriteVisitorData(const Context & context_, const ASTs & predicates_, - const TableWithColumnNamesAndTypes & table_columns_, bool optimize_final_, bool optimize_with_); + PredicateRewriteVisitorData( + ContextPtr context_, + const ASTs & predicates_, + const TableWithColumnNamesAndTypes & table_columns_, + bool optimize_final_, + bool optimize_with_); private: - const Context & context; const ASTs & predicates; const TableWithColumnNamesAndTypes & table_columns; bool optimize_final; @@ -44,4 +45,5 @@ private: using PredicateRewriteMatcher = OneTypeMatcher; using PredicateRewriteVisitor = InDepthNodeVisitor; + } diff --git a/src/Interpreters/ProcessList.cpp b/src/Interpreters/ProcessList.cpp index 15bda5d213d..951ff6420c4 100644 --- a/src/Interpreters/ProcessList.cpp +++ b/src/Interpreters/ProcessList.cpp @@ -60,12 +60,12 @@ static bool isUnlimitedQuery(const IAST * ast) } -ProcessList::EntryPtr ProcessList::insert(const String & query_, const IAST * ast, Context & query_context) +ProcessList::EntryPtr ProcessList::insert(const String & query_, const IAST * ast, ContextPtr query_context) { EntryPtr res; - const ClientInfo & client_info = query_context.getClientInfo(); - const Settings & settings = query_context.getSettingsRef(); + const ClientInfo & client_info = query_context->getClientInfo(); + const Settings & settings = query_context->getSettingsRef(); if (client_info.current_query_id.empty()) throw Exception("Query id cannot be empty", ErrorCodes::LOGICAL_ERROR); @@ -174,12 +174,10 @@ ProcessList::EntryPtr ProcessList::insert(const String & query_, const IAST * as } auto process_it = processes.emplace(processes.end(), - query_, client_info, priorities.insert(settings.priority)); + query_context, query_, client_info, priorities.insert(settings.priority)); res = std::make_shared(*this, process_it); - process_it->query_context = &query_context; - ProcessListForUser & user_process_list = user_to_queries[client_info.current_user]; user_process_list.queries.emplace(client_info.current_query_id, &res->get()); @@ -201,7 +199,7 @@ ProcessList::EntryPtr ProcessList::insert(const String & query_, const IAST * as /// Set query-level memory trackers thread_group->memory_tracker.setOrRaiseHardLimit(settings.max_memory_usage); - if (query_context.hasTraceCollector()) + if (query_context->hasTraceCollector()) { /// Set up memory profiling thread_group->memory_tracker.setOrRaiseProfilerLimit(settings.memory_profiler_step); @@ -290,14 +288,12 @@ ProcessListEntry::~ProcessListEntry() QueryStatus::QueryStatus( - const String & query_, - const ClientInfo & client_info_, - QueryPriorities::Handle && priority_handle_) - : - query(query_), - client_info(client_info_), - priority_handle(std::move(priority_handle_)), - num_queries_increment{CurrentMetrics::Query} + ContextPtr context_, const String & query_, const ClientInfo & client_info_, QueryPriorities::Handle && priority_handle_) + : WithContext(context_) + , query(query_) + , client_info(client_info_) + , priority_handle(std::move(priority_handle_)) + , num_queries_increment{CurrentMetrics::Query} { } @@ -454,10 +450,10 @@ QueryStatusInfo QueryStatus::getInfo(bool get_thread_list, bool get_profile_even res.profile_counters = std::make_shared(thread_group->performance_counters.getPartiallyAtomicSnapshot()); } - if (get_settings && query_context) + if (get_settings && getContext()) { - res.query_settings = std::make_shared(query_context->getSettings()); - res.current_database = 
query_context->getCurrentDatabase(); + res.query_settings = std::make_shared(getContext()->getSettings()); + res.current_database = getContext()->getCurrentDatabase(); } return res; diff --git a/src/Interpreters/ProcessList.h b/src/Interpreters/ProcessList.h index bc93ce7e191..3eeea9c8e5b 100644 --- a/src/Interpreters/ProcessList.h +++ b/src/Interpreters/ProcessList.h @@ -1,6 +1,5 @@ #pragma once - #include #include #include @@ -33,7 +32,6 @@ namespace CurrentMetrics namespace DB { -class Context; struct Settings; class IAST; @@ -72,7 +70,7 @@ struct QueryStatusInfo }; /// Query and information about its execution. -class QueryStatus +class QueryStatus : public WithContext { protected: friend class ProcessList; @@ -83,9 +81,6 @@ protected: String query; ClientInfo client_info; - /// Is set once when init - Context * query_context = nullptr; - /// Info about all threads involved in query execution ThreadGroupStatusPtr thread_group; @@ -128,6 +123,7 @@ protected: public: QueryStatus( + ContextPtr context_, const String & query_, const ClientInfo & client_info_, QueryPriorities::Handle && priority_handle_); @@ -172,9 +168,6 @@ public: QueryStatusInfo getInfo(bool get_thread_list = false, bool get_profile_events = false, bool get_settings = false) const; - Context * tryGetQueryContext() { return query_context; } - const Context * tryGetQueryContext() const { return query_context; } - /// Copies pointers to in/out streams void setQueryStreams(const BlockIO & io); @@ -305,7 +298,7 @@ public: * If timeout is passed - throw an exception. * Don't count KILL QUERY queries. */ - EntryPtr insert(const String & query_, const IAST * ast, Context & query_context); + EntryPtr insert(const String & query_, const IAST * ast, ContextPtr query_context); /// Number of currently executing queries. size_t size() const { return processes.size(); } diff --git a/src/Interpreters/QueryAliasesVisitor.cpp b/src/Interpreters/QueryAliasesVisitor.cpp index d395bfc20e9..bd0b2e88d2f 100644 --- a/src/Interpreters/QueryAliasesVisitor.cpp +++ b/src/Interpreters/QueryAliasesVisitor.cpp @@ -15,15 +15,22 @@ namespace ErrorCodes extern const int MULTIPLE_EXPRESSIONS_FOR_ALIAS; } -static String wrongAliasMessage(const ASTPtr & ast, const ASTPtr & prev_ast, const String & alias) +namespace { - WriteBufferFromOwnString message; - message << "Different expressions with the same alias " << backQuoteIfNeed(alias) << ":\n"; - formatAST(*ast, message, false, true); - message << "\nand\n"; - formatAST(*prev_ast, message, false, true); - message << '\n'; - return message.str(); + + constexpr auto dummy_subquery_name_prefix = "_subquery"; + + String wrongAliasMessage(const ASTPtr & ast, const ASTPtr & prev_ast, const String & alias) + { + WriteBufferFromOwnString message; + message << "Different expressions with the same alias " << backQuoteIfNeed(alias) << ":\n"; + formatAST(*ast, message, false, true); + message << "\nand\n"; + formatAST(*prev_ast, message, false, true); + message << '\n'; + return message.str(); + } + } @@ -99,7 +106,7 @@ void QueryAliasesMatcher::visit(const ASTSubquery & const_subquery, const AST String alias; do { - alias = "_subquery" + std::to_string(++subquery_index); + alias = dummy_subquery_name_prefix + std::to_string(++subquery_index); } while (aliases.count(alias)); @@ -124,6 +131,30 @@ void QueryAliasesMatcher::visitOther(const ASTPtr & ast, Data & data) aliases[alias] = ast; } + + /** QueryAliasesVisitor is executed before ExecuteScalarSubqueriesVisitor. 
+ For example, we have a subquery in our query (SELECT sum(number) FROM numbers(10)). + + After running QueryAliasesVisitor it will be (SELECT sum(number) FROM numbers(10)) as _subquery_1 + and prefer_alias_to_column_name for this subquery will be true. + + After running ExecuteScalarSubqueriesVisitor it will be converted to (45 as _subquery_1) + and prefer_alias_to_column_name for the AST literal will be true. + + But if we send such a query to a remote host (for example with the Distributed engine), we cannot send the prefer_alias_to_column_name + information for our AST node along with the query string, and this alias will be dropped because prefer_alias_to_column_name for ASTWithAlias + is false by default. + + It is important that the subquery can be converted to a literal during ExecuteScalarSubqueriesVisitor. + The code below checks whether we previously assigned the subquery an alias starting with _subquery, and if so + sets prefer_alias_to_column_name = true for the node that was optimized during ExecuteScalarSubqueriesVisitor. + */ + + if (auto * ast_with_alias = dynamic_cast(ast.get())) + { + if (startsWith(alias, dummy_subquery_name_prefix)) + ast_with_alias->prefer_alias_to_column_name = true; + } } /// Explicit template instantiations diff --git a/src/Interpreters/RedundantFunctionsInOrderByVisitor.h b/src/Interpreters/RedundantFunctionsInOrderByVisitor.h index d737e877f01..f807849fb86 100644 --- a/src/Interpreters/RedundantFunctionsInOrderByVisitor.h +++ b/src/Interpreters/RedundantFunctionsInOrderByVisitor.h @@ -16,7 +16,7 @@ public: struct Data { std::unordered_set & keys; - const Context & context; + ContextPtr context; bool redundant = true; bool done = false; diff --git a/src/Interpreters/RemoveInjectiveFunctionsVisitor.cpp b/src/Interpreters/RemoveInjectiveFunctionsVisitor.cpp index ae575b8aae7..f46e80a6370 100644 --- a/src/Interpreters/RemoveInjectiveFunctionsVisitor.cpp +++ b/src/Interpreters/RemoveInjectiveFunctionsVisitor.cpp @@ -16,7 +16,7 @@ static bool isUniq(const ASTFunction & func) } /// Remove injective functions of one argument: replace with a child -static bool removeInjectiveFunction(ASTPtr & ast, const Context & context, const FunctionFactory & function_factory) +static bool removeInjectiveFunction(ASTPtr & ast, ContextPtr context, const FunctionFactory & function_factory) { const ASTFunction * func = ast->as(); if (!func) @@ -46,7 +46,7 @@ void RemoveInjectiveFunctionsMatcher::visit(ASTFunction & func, ASTPtr &, const for (auto & arg : func.arguments->children) { - while (removeInjectiveFunction(arg, data.context, function_factory)) + while (removeInjectiveFunction(arg, data.getContext(), function_factory)) ; } } diff --git a/src/Interpreters/RemoveInjectiveFunctionsVisitor.h b/src/Interpreters/RemoveInjectiveFunctionsVisitor.h index 1adde0d35b0..a3bbd562407 100644 --- a/src/Interpreters/RemoveInjectiveFunctionsVisitor.h +++ b/src/Interpreters/RemoveInjectiveFunctionsVisitor.h @@ -1,7 +1,8 @@ #pragma once -#include +#include #include +#include namespace DB { @@ -12,9 +13,9 @@ class ASTFunction; class RemoveInjectiveFunctionsMatcher { public: - struct Data + struct Data : public WithContext { - const Context & context; + explicit Data(ContextPtr context_) : WithContext(context_) {} }; static void visit(ASTPtr & ast, const Data & data); diff --git a/src/Interpreters/RequiredSourceColumnsVisitor.cpp b/src/Interpreters/RequiredSourceColumnsVisitor.cpp index 54883043d30..2f2a68656bc 100644 --- a/src/Interpreters/RequiredSourceColumnsVisitor.cpp +++ b/src/Interpreters/RequiredSourceColumnsVisitor.cpp @@ -51,8
+51,10 @@ bool RequiredSourceColumnsMatcher::needChildVisit(const ASTPtr & node, const AST if (const auto * f = node->as()) { + /// "indexHint" is a special function for index analysis. + /// Everything that is inside it is not calculated. See KeyCondition /// "lambda" visit children itself. - if (f->name == "lambda") + if (f->name == "indexHint" || f->name == "lambda") return false; } diff --git a/src/Interpreters/SystemLog.cpp b/src/Interpreters/SystemLog.cpp index 1667d845d77..31ceca8ec05 100644 --- a/src/Interpreters/SystemLog.cpp +++ b/src/Interpreters/SystemLog.cpp @@ -30,7 +30,7 @@ constexpr size_t DEFAULT_METRIC_LOG_COLLECT_INTERVAL_MILLISECONDS = 1000; /// Creates a system log with MergeTree engine using parameters from config template std::shared_ptr createSystemLog( - Context & context, + ContextPtr context, const String & default_database_name, const String & default_table_name, const Poco::Util::AbstractConfiguration & config, @@ -88,7 +88,7 @@ std::shared_ptr createSystemLog( } -SystemLogs::SystemLogs(Context & global_context, const Poco::Util::AbstractConfiguration & config) +SystemLogs::SystemLogs(ContextPtr global_context, const Poco::Util::AbstractConfiguration & config) { query_log = createSystemLog(global_context, "system", "query_log", config, "query_log"); query_thread_log = createSystemLog(global_context, "system", "query_thread_log", config, "query_thread_log"); diff --git a/src/Interpreters/SystemLog.h b/src/Interpreters/SystemLog.h index aa3dc113e44..aa01ca3517b 100644 --- a/src/Interpreters/SystemLog.h +++ b/src/Interpreters/SystemLog.h @@ -93,7 +93,7 @@ public: /// because SystemLog destruction makes insert query while flushing data into underlying tables struct SystemLogs { - SystemLogs(Context & global_context, const Poco::Util::AbstractConfiguration & config); + SystemLogs(ContextPtr global_context, const Poco::Util::AbstractConfiguration & config); ~SystemLogs(); void shutdown(); @@ -115,7 +115,7 @@ struct SystemLogs template -class SystemLog : public ISystemLog, private boost::noncopyable +class SystemLog : public ISystemLog, private boost::noncopyable, WithContext { public: using Self = SystemLog; @@ -129,7 +129,7 @@ public: * and new table get created - as if previous table was not exist. */ SystemLog( - Context & context_, + ContextPtr context_, const String & database_name_, const String & table_name_, const String & storage_def_, @@ -152,6 +152,8 @@ public: void shutdown() override { stopFlushThread(); + if (table) + table->shutdown(); } String getName() override @@ -166,7 +168,6 @@ protected: private: /* Saving thread data */ - Context & context; const StorageID table_id; const String storage_def; StoragePtr table; @@ -184,12 +185,13 @@ private: // synchronous log flushing for SYSTEM FLUSH LOGS. uint64_t queue_front_index = 0; bool is_shutdown = false; + // A flag that says we must create the tables even if the queue is empty. 
bool is_force_prepare_tables = false; std::condition_variable flush_event; // Requested to flush logs up to this index, exclusive - uint64_t requested_flush_before = 0; + uint64_t requested_flush_up_to = 0; // Flushed log up to this index, exclusive - uint64_t flushed_before = 0; + uint64_t flushed_up_to = 0; // Logged overflow message at this queue front index uint64_t logged_queue_full_at_index = -1; @@ -207,12 +209,13 @@ private: template -SystemLog::SystemLog(Context & context_, +SystemLog::SystemLog( + ContextPtr context_, const String & database_name_, const String & table_name_, const String & storage_def_, size_t flush_interval_milliseconds_) - : context(context_) + : WithContext(context_) , table_id(database_name_, table_name_) , storage_def(storage_def_) , flush_interval_milliseconds(flush_interval_milliseconds_) @@ -267,8 +270,8 @@ void SystemLog::add(const LogElement & element) // It is enough to only wake the flushing thread once, after the message // count increases past half available size. const uint64_t queue_end = queue_front_index + queue.size(); - if (requested_flush_before < queue_end) - requested_flush_before = queue_end; + if (requested_flush_up_to < queue_end) + requested_flush_up_to = queue_end; flush_event.notify_all(); } @@ -304,24 +307,36 @@ void SystemLog::add(const LogElement & element) template void SystemLog::flush(bool force) { - std::unique_lock lock(mutex); + uint64_t this_thread_requested_offset; - if (is_shutdown) - return; - - const uint64_t queue_end = queue_front_index + queue.size(); - - is_force_prepare_tables = force; - if (requested_flush_before < queue_end || force) { - requested_flush_before = queue_end; + std::unique_lock lock(mutex); + + if (is_shutdown) + return; + + this_thread_requested_offset = queue_front_index + queue.size(); + + // Publish our flush request, taking care not to overwrite the requests + // made by other threads. + is_force_prepare_tables |= force; + requested_flush_up_to = std::max(requested_flush_up_to, + this_thread_requested_offset); + flush_event.notify_all(); } - // Use an arbitrary timeout to avoid endless waiting. - const int timeout_seconds = 60; + LOG_DEBUG(log, "Requested flush up to offset {}", + this_thread_requested_offset); + + // Use an arbitrary timeout to avoid endless waiting. 60s proved to be + // too fast for our parallel functional tests, probably because they + // heavily load the disk. + const int timeout_seconds = 180; + std::unique_lock lock(mutex); bool result = flush_event.wait_for(lock, std::chrono::seconds(timeout_seconds), - [&] { return flushed_before >= queue_end && !is_force_prepare_tables; }); + [&] { return flushed_up_to >= this_thread_requested_offset + && !is_force_prepare_tables; }); if (!result) { @@ -371,6 +386,8 @@ void SystemLog::savingThreadFunction() // The end index (exclusive, like std end()) of the messages we are // going to flush. uint64_t to_flush_end = 0; + // Should we prepare table even if there are no new messages. 
+ bool should_prepare_tables_anyway = false; { std::unique_lock lock(mutex); @@ -378,7 +395,7 @@ void SystemLog::savingThreadFunction() std::chrono::milliseconds(flush_interval_milliseconds), [&] () { - return requested_flush_before > flushed_before || is_shutdown || is_force_prepare_tables; + return requested_flush_up_to > flushed_up_to || is_shutdown || is_force_prepare_tables; } ); @@ -389,18 +406,14 @@ void SystemLog::savingThreadFunction() to_flush.resize(0); queue.swap(to_flush); + should_prepare_tables_anyway = is_force_prepare_tables; + exit_this_thread = is_shutdown; } if (to_flush.empty()) { - bool force; - { - std::lock_guard lock(mutex); - force = is_force_prepare_tables; - } - - if (force) + if (should_prepare_tables_anyway) { prepareTable(); LOG_TRACE(log, "Table created (force)"); @@ -429,7 +442,8 @@ void SystemLog::flushImpl(const std::vector & to_flush, { try { - LOG_TRACE(log, "Flushing system log, {} entries to flush", to_flush.size()); + LOG_TRACE(log, "Flushing system log, {} entries to flush up to offset {}", + to_flush.size(), to_flush_end); /// We check for existence of the table and create it as needed at every /// flush. This is done to allow user to drop the table at any moment @@ -451,8 +465,8 @@ void SystemLog::flushImpl(const std::vector & to_flush, ASTPtr query_ptr(insert.release()); // we need query context to do inserts to target table with MV containing subqueries or joins - Context insert_context(context); - insert_context.makeQueryContext(); + auto insert_context = Context::createCopy(context); + insert_context->makeQueryContext(); InterpreterInsertQuery interpreter(query_ptr, insert_context); BlockIO io = interpreter.execute(); @@ -468,12 +482,12 @@ void SystemLog::flushImpl(const std::vector & to_flush, { std::lock_guard lock(mutex); - flushed_before = to_flush_end; + flushed_up_to = to_flush_end; is_force_prepare_tables = false; flush_event.notify_all(); } - LOG_TRACE(log, "Flushed system log"); + LOG_TRACE(log, "Flushed system log up to offset {}", to_flush_end); } @@ -482,7 +496,7 @@ void SystemLog::prepareTable() { String description = table_id.getNameForLogs(); - table = DatabaseCatalog::instance().tryGetTable(table_id, context); + table = DatabaseCatalog::instance().tryGetTable(table_id, getContext()); if (table) { @@ -494,7 +508,8 @@ void SystemLog::prepareTable() { /// Rename the existing table. int suffix = 0; - while (DatabaseCatalog::instance().isTableExist({table_id.database_name, table_id.table_name + "_" + toString(suffix)}, context)) + while (DatabaseCatalog::instance().isTableExist( + {table_id.database_name, table_id.table_name + "_" + toString(suffix)}, getContext())) ++suffix; auto rename = std::make_shared(); @@ -513,10 +528,14 @@ void SystemLog::prepareTable() rename->elements.emplace_back(elem); - LOG_DEBUG(log, "Existing table {} for system log has obsolete or different structure. Renaming it to {}", description, backQuoteIfNeed(to.table)); + LOG_DEBUG( + log, + "Existing table {} for system log has obsolete or different structure. Renaming it to {}", + description, + backQuoteIfNeed(to.table)); - Context query_context = context; - query_context.makeQueryContext(); + auto query_context = Context::createCopy(context); + query_context->makeQueryContext(); InterpreterRenameQuery(rename, query_context).execute(); /// The required table will be created. 
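The flush handshake in the SystemLog hunks above (a writer raises requested_flush_up_to, the saving thread advances flushed_up_to, and both sides signal flush_event) is the core of this change. The following is a minimal standalone sketch of that protocol under simplified assumptions; OffsetLog, flushUpTo and markFlushed are illustrative names, not the real SystemLog API.

    #include <algorithm>
    #include <chrono>
    #include <condition_variable>
    #include <cstdint>
    #include <mutex>
    #include <stdexcept>
    #include <thread>

    // Sketch of the request/acknowledge offsets: writers never lower requests
    // published by other threads, the background side only moves forward.
    class OffsetLog
    {
    public:
        // Writer side: request a flush up to `offset` and wait until it happened.
        void flushUpTo(uint64_t offset)
        {
            std::unique_lock lock(mutex);
            requested_flush_up_to = std::max(requested_flush_up_to, offset);
            flush_event.notify_all(); // let anything waiting on flush_event see the new request

            // A bounded wait (180s, as in the patch) instead of waiting forever.
            if (!flush_event.wait_for(lock, std::chrono::seconds(180),
                    [&] { return flushed_up_to >= offset; }))
                throw std::runtime_error("Timeout exceeded while flushing system log");
        }

        // Background side: report that everything up to `offset` has been written.
        void markFlushed(uint64_t offset)
        {
            std::lock_guard lock(mutex);
            flushed_up_to = std::max(flushed_up_to, offset);
            flush_event.notify_all();
        }

    private:
        std::mutex mutex;
        std::condition_variable flush_event;
        uint64_t requested_flush_up_to = 0;
        uint64_t flushed_up_to = 0;
    };

    int main()
    {
        OffsetLog log;
        std::thread background([&] { log.markFlushed(10); });
        log.flushUpTo(10); // returns as soon as flushed_up_to reaches the request
        background.join();
    }

Tracking per-caller offsets rather than a single boolean is what lets several threads call flush() concurrently without clobbering each other's requests, which is the bug the renamed fields address.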
@@ -534,13 +553,14 @@ void SystemLog::prepareTable() auto create = getCreateTableQuery(); - Context query_context = context; - query_context.makeQueryContext(); + auto query_context = Context::createCopy(context); + query_context->makeQueryContext(); + InterpreterCreateQuery interpreter(create, query_context); interpreter.setInternal(true); interpreter.execute(); - table = DatabaseCatalog::instance().getTable(table_id, context); + table = DatabaseCatalog::instance().getTable(table_id, getContext()); } is_prepared = true; diff --git a/src/Interpreters/TextLog.cpp b/src/Interpreters/TextLog.cpp index 489bb302ad0..f5a0ce51d49 100644 --- a/src/Interpreters/TextLog.cpp +++ b/src/Interpreters/TextLog.cpp @@ -74,7 +74,7 @@ void TextLogElement::appendToBlock(MutableColumns & columns) const columns[i++]->insert(source_line); } -TextLog::TextLog(Context & context_, const String & database_name_, +TextLog::TextLog(ContextPtr context_, const String & database_name_, const String & table_name_, const String & storage_def_, size_t flush_interval_milliseconds_) : SystemLog(context_, database_name_, table_name_, diff --git a/src/Interpreters/TextLog.h b/src/Interpreters/TextLog.h index da678868be3..7ff55128a90 100644 --- a/src/Interpreters/TextLog.h +++ b/src/Interpreters/TextLog.h @@ -33,7 +33,7 @@ class TextLog : public SystemLog { public: TextLog( - Context & context_, + ContextPtr context_, const String & database_name_, const String & table_name_, const String & storage_def_, diff --git a/src/Interpreters/ThreadStatusExt.cpp b/src/Interpreters/ThreadStatusExt.cpp index 8a979721290..c04534e11a1 100644 --- a/src/Interpreters/ThreadStatusExt.cpp +++ b/src/Interpreters/ThreadStatusExt.cpp @@ -33,9 +33,11 @@ namespace ErrorCodes void ThreadStatus::applyQuerySettings() { - const Settings & settings = query_context->getSettingsRef(); + auto query_context_ptr = query_context.lock(); + assert(query_context_ptr); + const Settings & settings = query_context_ptr->getSettingsRef(); - query_id = query_context->getCurrentQueryId(); + query_id = query_context_ptr->getCurrentQueryId(); initQueryProfiler(); untracked_memory_limit = settings.max_untracked_memory; @@ -58,26 +60,26 @@ void ThreadStatus::applyQuerySettings() } -void ThreadStatus::attachQueryContext(Context & query_context_) +void ThreadStatus::attachQueryContext(ContextPtr query_context_) { - query_context = &query_context_; + query_context = query_context_; - if (!global_context) - global_context = &query_context->getGlobalContext(); + if (global_context.expired()) + global_context = query_context_->getGlobalContext(); if (thread_group) { std::lock_guard lock(thread_group->mutex); thread_group->query_context = query_context; - if (!thread_group->global_context) + if (thread_group->global_context.expired()) thread_group->global_context = global_context; } // Generate new span for thread manually here, because we can't depend // on OpenTelemetrySpanHolder due to link order issues. // FIXME why and how is this different from setupState()? 
- thread_trace_context = query_context->query_trace_context; + thread_trace_context = query_context_->query_trace_context; if (thread_trace_context.trace_id) { thread_trace_context.span_id = thread_local_rng(); @@ -113,17 +115,17 @@ void ThreadStatus::setupState(const ThreadGroupStatusPtr & thread_group_) fatal_error_callback = thread_group->fatal_error_callback; query_context = thread_group->query_context; - if (!global_context) + if (global_context.expired()) global_context = thread_group->global_context; } - if (query_context) + if (auto query_context_ptr = query_context.lock()) { applyQuerySettings(); // Generate new span for thread manually here, because we can't depend // on OpenTelemetrySpanHolder due to link order issues. - thread_trace_context = query_context->query_trace_context; + thread_trace_context = query_context_ptr->query_trace_context; if (thread_trace_context.trace_id) { thread_trace_context.span_id = thread_local_rng(); @@ -201,9 +203,9 @@ void ThreadStatus::initPerformanceCounters() // query_start_time_nanoseconds cannot be used here since RUsageCounters expect CLOCK_MONOTONIC *last_rusage = RUsageCounters::current(); - if (query_context) + if (auto query_context_ptr = query_context.lock()) { - const Settings & settings = query_context->getSettingsRef(); + const Settings & settings = query_context_ptr->getSettingsRef(); if (settings.metrics_perf_events_enabled) { try @@ -246,8 +248,8 @@ void ThreadStatus::finalizePerformanceCounters() // 'select 1 settings metrics_perf_events_enabled = 1', I still get // query_context->getSettingsRef().metrics_perf_events_enabled == 0 *shrug*. bool close_perf_descriptors = true; - if (query_context) - close_perf_descriptors = !query_context->getSettingsRef().metrics_perf_events_enabled; + if (auto query_context_ptr = query_context.lock()) + close_perf_descriptors = !query_context_ptr->getSettingsRef().metrics_perf_events_enabled; try { @@ -262,17 +264,19 @@ void ThreadStatus::finalizePerformanceCounters() try { - if (global_context && query_context) + auto global_context_ptr = global_context.lock(); + auto query_context_ptr = query_context.lock(); + if (global_context_ptr && query_context_ptr) { - const auto & settings = query_context->getSettingsRef(); + const auto & settings = query_context_ptr->getSettingsRef(); if (settings.log_queries && settings.log_query_threads) { const auto now = std::chrono::system_clock::now(); Int64 query_duration_ms = (time_in_microseconds(now) - query_start_time_microseconds) / 1000; if (query_duration_ms >= settings.log_queries_min_query_duration_ms.totalMilliseconds()) { - if (auto thread_log = global_context->getQueryThreadLog()) - logToQueryThreadLog(*thread_log, query_context->getCurrentDatabase(), now); + if (auto thread_log = global_context_ptr->getQueryThreadLog()) + logToQueryThreadLog(*thread_log, query_context_ptr->getCurrentDatabase(), now); } } } @@ -286,10 +290,13 @@ void ThreadStatus::finalizePerformanceCounters() void ThreadStatus::initQueryProfiler() { /// query profilers are useless without trace collector - if (!global_context || !global_context->hasTraceCollector()) + auto global_context_ptr = global_context.lock(); + if (!global_context_ptr || !global_context_ptr->hasTraceCollector()) return; - const auto & settings = query_context->getSettingsRef(); + auto query_context_ptr = query_context.lock(); + assert(query_context_ptr); + const auto & settings = query_context_ptr->getSettingsRef(); try { @@ -316,6 +323,8 @@ void ThreadStatus::finalizeQueryProfiler() void 
ThreadStatus::detachQuery(bool exit_if_already_detached, bool thread_exits) { + MemoryTracker::LockExceptionInThread lock(VariableContext::Global); + if (exit_if_already_detached && thread_state == ThreadState::DetachedFromQuery) { thread_state = thread_exits ? ThreadState::Died : ThreadState::DetachedFromQuery; @@ -325,9 +334,10 @@ void ThreadStatus::detachQuery(bool exit_if_already_detached, bool thread_exits) assertState({ThreadState::AttachedToQuery}, __PRETTY_FUNCTION__); std::shared_ptr opentelemetry_span_log; - if (thread_trace_context.trace_id && query_context) + auto query_context_ptr = query_context.lock(); + if (thread_trace_context.trace_id && query_context_ptr) { - opentelemetry_span_log = query_context->getOpenTelemetrySpanLog(); + opentelemetry_span_log = query_context_ptr->getOpenTelemetrySpanLog(); } if (opentelemetry_span_log) @@ -347,7 +357,8 @@ void ThreadStatus::detachQuery(bool exit_if_already_detached, bool thread_exits) // is going to fail, because we're going to reset it to zero later in // this function. span.span_id = thread_trace_context.span_id; - span.parent_span_id = query_context->query_trace_context.span_id; + assert(query_context_ptr); + span.parent_span_id = query_context_ptr->query_trace_context.span_id; span.operation_name = getThreadName(); span.start_time_us = query_start_time_microseconds; span.finish_time_us = @@ -370,7 +381,7 @@ void ThreadStatus::detachQuery(bool exit_if_already_detached, bool thread_exits) memory_tracker.setParent(thread_group->memory_tracker.getParent()); query_id.clear(); - query_context = nullptr; + query_context.reset(); thread_trace_context.trace_id = 0; thread_trace_context.span_id = 0; thread_group.reset(); @@ -429,11 +440,12 @@ void ThreadStatus::logToQueryThreadLog(QueryThreadLog & thread_log, const String } } - if (query_context) + auto query_context_ptr = query_context.lock(); + if (query_context_ptr) { - elem.client_info = query_context->getClientInfo(); + elem.client_info = query_context_ptr->getClientInfo(); - if (query_context->getSettingsRef().log_profile_events != 0) + if (query_context_ptr->getSettingsRef().log_profile_events != 0) { /// NOTE: Here we are in the same thread, so we can make memcpy() elem.profile_counters = std::make_shared(performance_counters.getPartiallyAtomicSnapshot()); @@ -467,7 +479,7 @@ void CurrentThread::attachToIfDetached(const ThreadGroupStatusPtr & thread_group current_thread->deleter = CurrentThread::defaultThreadDeleter; } -void CurrentThread::attachQueryContext(Context & query_context) +void CurrentThread::attachQueryContext(ContextPtr query_context) { if (unlikely(!current_thread)) return; @@ -496,12 +508,12 @@ void CurrentThread::detachQueryIfNotDetached() } -CurrentThread::QueryScope::QueryScope(Context & query_context) +CurrentThread::QueryScope::QueryScope(ContextPtr query_context) { CurrentThread::initializeQuery(); CurrentThread::attachQueryContext(query_context); - if (!query_context.hasQueryContext()) - query_context.makeQueryContext(); + if (!query_context->hasQueryContext()) + query_context->makeQueryContext(); } void CurrentThread::QueryScope::logPeakMemoryUsage() diff --git a/src/Interpreters/TreeOptimizer.cpp b/src/Interpreters/TreeOptimizer.cpp index 3f4c2e74e23..5b06c00435a 100644 --- a/src/Interpreters/TreeOptimizer.cpp +++ b/src/Interpreters/TreeOptimizer.cpp @@ -81,7 +81,7 @@ void appendUnusedGroupByColumn(ASTSelectQuery * select_query, const NameSet & so } /// Eliminates injective function calls and constant expressions from group by statement. 
-void optimizeGroupBy(ASTSelectQuery * select_query, const NameSet & source_columns, const Context & context) +void optimizeGroupBy(ASTSelectQuery * select_query, const NameSet & source_columns, ContextPtr context) { const FunctionFactory & function_factory = FunctionFactory::instance(); @@ -135,7 +135,7 @@ void optimizeGroupBy(ASTSelectQuery * select_query, const NameSet & source_colum const auto & dict_name = dict_name_ast->value.safeGet(); const auto & attr_name = attr_name_ast->value.safeGet(); - const auto & dict_ptr = context.getExternalDictionariesLoader().getDictionary(dict_name, context); + const auto & dict_ptr = context->getExternalDictionariesLoader().getDictionary(dict_name, context); if (!dict_ptr->isInjective(attr_name)) { ++i; @@ -270,7 +270,7 @@ void optimizeDuplicatesInOrderBy(const ASTSelectQuery * select_query) } /// Optimize duplicate ORDER BY -void optimizeDuplicateOrderBy(ASTPtr & query, const Context & context) +void optimizeDuplicateOrderBy(ASTPtr & query, ContextPtr context) { DuplicateOrderByVisitor::Data order_by_data{context}; DuplicateOrderByVisitor(order_by_data).visit(query); @@ -396,7 +396,7 @@ void optimizeDuplicateDistinct(ASTSelectQuery & select) /// Replace monotonous functions in ORDER BY if they don't participate in GROUP BY expression, /// has a single argument and not an aggregate functions. -void optimizeMonotonousFunctionsInOrderBy(ASTSelectQuery * select_query, const Context & context, +void optimizeMonotonousFunctionsInOrderBy(ASTSelectQuery * select_query, ContextPtr context, const TablesWithColumns & tables_with_columns, const Names & sorting_key_columns) { @@ -448,7 +448,7 @@ void optimizeMonotonousFunctionsInOrderBy(ASTSelectQuery * select_query, const C /// Optimize ORDER BY x, y, f(x), g(x, y), f(h(x)), t(f(x), g(x)) into ORDER BY x, y /// in case if f(), g(), h(), t() are deterministic (in scope of query). /// Don't optimize ORDER BY f(x), g(x), x even if f(x) is bijection for x or g(x). 
-void optimizeRedundantFunctionsInOrderBy(const ASTSelectQuery * select_query, const Context & context) +void optimizeRedundantFunctionsInOrderBy(const ASTSelectQuery * select_query, ContextPtr context) { const auto & order_by = select_query->orderBy(); if (!order_by) @@ -561,9 +561,9 @@ void optimizeCountConstantAndSumOne(ASTPtr & query) } -void optimizeInjectiveFunctionsInsideUniq(ASTPtr & query, const Context & context) +void optimizeInjectiveFunctionsInsideUniq(ASTPtr & query, ContextPtr context) { - RemoveInjectiveFunctionsVisitor::Data data = {context}; + RemoveInjectiveFunctionsVisitor::Data data(context); RemoveInjectiveFunctionsVisitor(data).visit(query); } @@ -592,10 +592,10 @@ void TreeOptimizer::optimizeIf(ASTPtr & query, Aliases & aliases, bool if_chain_ void TreeOptimizer::apply(ASTPtr & query, Aliases & aliases, const NameSet & source_columns_set, const std::vector & tables_with_columns, - const Context & context, const StorageMetadataPtr & metadata_snapshot, + ContextPtr context, const StorageMetadataPtr & metadata_snapshot, bool & rewrite_subqueries) { - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); auto * select_query = query->as(); if (!select_query) diff --git a/src/Interpreters/TreeOptimizer.h b/src/Interpreters/TreeOptimizer.h index a10dfc57451..b268b230f4e 100644 --- a/src/Interpreters/TreeOptimizer.h +++ b/src/Interpreters/TreeOptimizer.h @@ -1,13 +1,13 @@ #pragma once -#include #include +#include #include +#include namespace DB { -class Context; struct StorageInMemoryMetadata; using StorageMetadataPtr = std::shared_ptr; @@ -16,10 +16,14 @@ using StorageMetadataPtr = std::shared_ptr; class TreeOptimizer { public: - static void apply(ASTPtr & query, Aliases & aliases, const NameSet & source_columns_set, - const std::vector & tables_with_columns, - const Context & context, const StorageMetadataPtr & metadata_snapshot, - bool & rewrite_subqueries); + static void apply( + ASTPtr & query, + Aliases & aliases, + const NameSet & source_columns_set, + const std::vector & tables_with_columns, + ContextPtr context, + const StorageMetadataPtr & metadata_snapshot, + bool & rewrite_subqueries); static void optimizeIf(ASTPtr & query, Aliases & aliases, bool if_chain_to_multiif); }; diff --git a/src/Interpreters/TreeRewriter.cpp b/src/Interpreters/TreeRewriter.cpp index f88fd16045a..324a773fbc2 100644 --- a/src/Interpreters/TreeRewriter.cpp +++ b/src/Interpreters/TreeRewriter.cpp @@ -26,6 +26,7 @@ #include #include #include +#include #include #include @@ -181,8 +182,72 @@ struct CustomizeAggregateFunctionsMoveSuffixData } }; +struct FuseSumCountAggregates +{ + std::vector sums {}; + std::vector counts {}; + std::vector avgs {}; + + void addFuncNode(ASTFunction * func) + { + if (func->name == "sum") + sums.push_back(func); + else if (func->name == "count") + counts.push_back(func); + else + { + assert(func->name == "avg"); + avgs.push_back(func); + } + } + + bool canBeFused() const + { + // Need at least two different kinds of functions to fuse. 
+ if (sums.empty() && counts.empty()) + return false; + if (sums.empty() && avgs.empty()) + return false; + if (counts.empty() && avgs.empty()) + return false; + return true; + } +}; + +struct FuseSumCountAggregatesVisitorData +{ + using TypeToVisit = ASTFunction; + + std::unordered_map fuse_map; + + void visit(ASTFunction & func, ASTPtr &) + { + if (func.name == "sum" || func.name == "avg" || func.name == "count") + { + if (func.arguments->children.empty()) + return; + + // Probably we can extend it to match count() for non-nullable argument + // to sum/avg with any other argument. Now we require strict match. + const auto argument = func.arguments->children.at(0)->getColumnName(); + auto it = fuse_map.find(argument); + if (it != fuse_map.end()) + { + it->second.addFuncNode(&func); + } + else + { + FuseSumCountAggregates funcs{}; + funcs.addFuncNode(&func); + fuse_map[argument] = funcs; + } + } + } +}; + using CustomizeAggregateFunctionsOrNullVisitor = InDepthNodeVisitor, true>; using CustomizeAggregateFunctionsMoveOrNullVisitor = InDepthNodeVisitor, true>; +using FuseSumCountAggregatesVisitor = InDepthNodeVisitor, true>; /// Translate qualified names such as db.table.column, table.column, table_alias.column to names' normal form. /// Expand asterisks and qualified asterisks with column names. @@ -200,6 +265,49 @@ void translateQualifiedNames(ASTPtr & query, const ASTSelectQuery & select_query throw Exception("Empty list of columns in SELECT query", ErrorCodes::EMPTY_LIST_OF_COLUMNS_QUERIED); } +// Replaces one avg/sum/count function with an appropriate expression with +// sumCount(). +void replaceWithSumCount(String column_name, ASTFunction & func) +{ + auto func_base = makeASTFunction("sumCount", std::make_shared(column_name)); + auto exp_list = std::make_shared(); + if (func.name == "sum" || func.name == "count") + { + /// Rewrite "sum" to sumCount().1, rewrite "count" to sumCount().2 + UInt8 idx = (func.name == "sum" ? 1 : 2); + func.name = "tupleElement"; + exp_list->children.push_back(func_base); + exp_list->children.push_back(std::make_shared(idx)); + } + else + { + /// Rewrite "avg" to sumCount().1 / sumCount().2 + auto new_arg1 = makeASTFunction("tupleElement", func_base, std::make_shared(UInt8(1))); + auto new_arg2 = makeASTFunction("tupleElement", func_base, std::make_shared(UInt8(2))); + func.name = "divide"; + exp_list->children.push_back(new_arg1); + exp_list->children.push_back(new_arg2); + } + func.arguments = exp_list; + func.children.push_back(func.arguments); +} + +void fuseSumCountAggregates(std::unordered_map & fuse_map) +{ + for (auto & it : fuse_map) + { + if (it.second.canBeFused()) + { + for (auto & func: it.second.sums) + replaceWithSumCount(it.first, *func); + for (auto & func: it.second.avgs) + replaceWithSumCount(it.first, *func); + for (auto & func: it.second.counts) + replaceWithSumCount(it.first, *func); + } + } +} + bool hasArrayJoin(const ASTPtr & ast) { if (const ASTFunction * function = ast->as()) @@ -293,13 +401,11 @@ void removeUnneededColumnsFromSelectClause(const ASTSelectQuery * select_query, else { ASTFunction * func = elem->as(); + + /// Never remove untuple. Its result column may be in required columns. + /// It is not easy to analyze untuple here, because types were not calculated yet.
if (func && func->name == "untuple") - for (const auto & col : required_result_columns) - if (col.rfind("_ut_", 0) == 0) - { - new_elements.push_back(elem); - break; - } + new_elements.push_back(elem); } } @@ -307,10 +413,10 @@ void removeUnneededColumnsFromSelectClause(const ASTSelectQuery * select_query, } /// Replacing scalar subqueries with constant values. -void executeScalarSubqueries(ASTPtr & query, const Context & context, size_t subquery_depth, Scalars & scalars, bool only_analyze) +void executeScalarSubqueries(ASTPtr & query, ContextPtr context, size_t subquery_depth, Scalars & scalars, bool only_analyze) { LogAST log; - ExecuteScalarSubqueriesVisitor::Data visitor_data{context, subquery_depth, scalars, only_analyze}; + ExecuteScalarSubqueriesVisitor::Data visitor_data{WithContext{context}, subquery_depth, scalars, only_analyze}; ExecuteScalarSubqueriesVisitor(visitor_data, log.stream()).visit(query); } @@ -405,13 +511,13 @@ void setJoinStrictness(ASTSelectQuery & select_query, JoinStrictness join_defaul /// Find the columns that are obtained by JOIN. void collectJoinedColumns(TableJoin & analyzed_join, const ASTSelectQuery & select_query, - const TablesWithColumns & tables, const Aliases & aliases, ASTPtr & new_where_conditions) + const TablesWithColumns & tables, const Aliases & aliases) { const ASTTablesInSelectQueryElement * node = select_query.join(); - if (!node) + if (!node || tables.size() < 2) return; - auto & table_join = node->table_join->as(); + const auto & table_join = node->table_join->as(); if (table_join.using_expression_list) { @@ -430,33 +536,16 @@ void collectJoinedColumns(TableJoin & analyzed_join, const ASTSelectQuery & sele { bool is_asof = (table_join.strictness == ASTTableJoin::Strictness::Asof); - CollectJoinOnKeysVisitor::Data data{analyzed_join, tables[0], tables[1], aliases, is_asof, table_join.kind}; + CollectJoinOnKeysVisitor::Data data{analyzed_join, tables[0], tables[1], aliases, is_asof}; CollectJoinOnKeysVisitor(data).visit(table_join.on_expression); if (!data.has_some) throw Exception("Cannot get JOIN keys from JOIN ON section: " + queryToString(table_join.on_expression), ErrorCodes::INVALID_JOIN_ON_EXPRESSION); if (is_asof) - { data.asofToJoinKeys(); - } - else if (data.new_on_expression) - { - table_join.on_expression = data.new_on_expression; - new_where_conditions = data.new_where_conditions; - } } } -/// Move joined key related to only one table to WHERE clause -void moveJoinedKeyToWhere(ASTSelectQuery * select_query, ASTPtr & new_where_conditions) -{ - if (select_query->where()) - select_query->setExpression(ASTSelectQuery::Expression::WHERE, - makeASTFunction("and", new_where_conditions, select_query->where())); - else - select_query->setExpression(ASTSelectQuery::Expression::WHERE, new_where_conditions->clone()); -} - std::vector getAggregates(ASTPtr & query, const ASTSelectQuery & select_query) { @@ -789,7 +878,7 @@ TreeRewriterResultPtr TreeRewriter::analyzeSelect( size_t subquery_depth = select_options.subquery_depth; bool remove_duplicates = select_options.remove_duplicates; - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); const NameSet & source_columns_set = result.source_columns_set; @@ -832,25 +921,22 @@ TreeRewriterResultPtr TreeRewriter::analyzeSelect( removeUnneededColumnsFromSelectClause(select_query, required_result_columns, remove_duplicates); /// Executing scalar subqueries - replacing them with constant values. 
- executeScalarSubqueries(query, context, subquery_depth, result.scalars, select_options.only_analyze); + executeScalarSubqueries(query, getContext(), subquery_depth, result.scalars, select_options.only_analyze); - TreeOptimizer::apply(query, result.aliases, source_columns_set, tables_with_columns, context, result.metadata_snapshot, result.rewrite_subqueries); + TreeOptimizer::apply( + query, result.aliases, source_columns_set, tables_with_columns, getContext(), result.metadata_snapshot, result.rewrite_subqueries); /// array_join_alias_to_name, array_join_result_to_source. getArrayJoinedColumns(query, result, select_query, result.source_columns, source_columns_set); setJoinStrictness(*select_query, settings.join_default_strictness, settings.any_join_distinct_right_table_keys, result.analyzed_join->table_join); - - ASTPtr new_where_condition = nullptr; - collectJoinedColumns(*result.analyzed_join, *select_query, tables_with_columns, result.aliases, new_where_condition); - if (new_where_condition) - moveJoinedKeyToWhere(select_query, new_where_condition); + collectJoinedColumns(*result.analyzed_join, *select_query, tables_with_columns, result.aliases); /// rewrite filters for select query, must go after getArrayJoinedColumns if (settings.optimize_respect_aliases && result.metadata_snapshot) { - replaceAliasColumnsInQuery(query, result.metadata_snapshot->getColumns(), result.getArrayJoinSourceNameSet(), context); + replaceAliasColumnsInQuery(query, result.metadata_snapshot->getColumns(), result.getArrayJoinSourceNameSet(), getContext()); } result.aggregates = getAggregates(query, *select_query); @@ -877,14 +963,14 @@ TreeRewriterResultPtr TreeRewriter::analyze( if (query->as()) throw Exception("Not select analyze for select asts.", ErrorCodes::LOGICAL_ERROR); - const auto & settings = context.getSettingsRef(); + const auto & settings = getContext()->getSettingsRef(); TreeRewriterResult result(source_columns, storage, metadata_snapshot, false); normalize(query, result.aliases, result.source_columns_set, settings); /// Executing scalar subqueries. Column defaults could be a scalar subquery. - executeScalarSubqueries(query, context, 0, result.scalars, false); + executeScalarSubqueries(query, getContext(), 0, result.scalars, false); TreeOptimizer::optimizeIf(query, result.aliases, settings.optimize_if_chain_to_multiif); @@ -932,7 +1018,18 @@ void TreeRewriter::normalize(ASTPtr & query, Aliases & aliases, const NameSet & CustomizeGlobalNotInVisitor(data_global_not_null_in).visit(query); } - // Rewrite all aggregate functions to add -OrNull suffix to them + // Try to fuse sum/avg/count with identical arguments to one sumCount call, + // if we have at least two different functions. E.g. we will replace sum(x) + // and count(x) with sumCount(x).1 and sumCount(x).2, and sumCount() will + // be calculated only once because of CSE. 
+ if (settings.optimize_fuse_sum_count_avg) + { + FuseSumCountAggregatesVisitor::Data data; + FuseSumCountAggregatesVisitor(data).visit(query); + fuseSumCountAggregates(data.fuse_map); + } + + /// Rewrite all aggregate functions to add -OrNull suffix to them if (settings.aggregate_functions_null_for_empty) { CustomizeAggregateFunctionsOrNullVisitor::Data data_or_null{"OrNull"}; diff --git a/src/Interpreters/TreeRewriter.h b/src/Interpreters/TreeRewriter.h index 4e3fe21bde9..26cfaad1fbb 100644 --- a/src/Interpreters/TreeRewriter.h +++ b/src/Interpreters/TreeRewriter.h @@ -3,8 +3,9 @@ #include #include #include -#include +#include #include +#include #include namespace DB @@ -13,7 +14,6 @@ namespace DB class ASTFunction; struct ASTTablesInSelectQueryElement; class TableJoin; -class Context; struct Settings; struct SelectQueryOptions; using Scalars = std::map; @@ -92,12 +92,10 @@ using TreeRewriterResultPtr = std::shared_ptr; /// * scalar subqueries are executed replaced with constants /// * unneeded columns are removed from SELECT clause /// * duplicated columns are removed from ORDER BY, LIMIT BY, USING(...). -class TreeRewriter +class TreeRewriter : WithContext { public: - TreeRewriter(const Context & context_) - : context(context_) - {} + explicit TreeRewriter(ContextPtr context_) : WithContext(context_) {} /// Analyze and rewrite not select query TreeRewriterResultPtr analyze( @@ -117,8 +115,6 @@ public: std::shared_ptr table_join = {}) const; private: - const Context & context; - static void normalize(ASTPtr & query, Aliases & aliases, const NameSet & source_columns_set, const Settings & settings); }; diff --git a/src/Interpreters/WindowDescription.cpp b/src/Interpreters/WindowDescription.cpp index a97ef41204a..05d75d4647e 100644 --- a/src/Interpreters/WindowDescription.cpp +++ b/src/Interpreters/WindowDescription.cpp @@ -86,6 +86,38 @@ void WindowFrame::toString(WriteBuffer & buf) const void WindowFrame::checkValid() const { + // Check the validity of offsets. + if (type == WindowFrame::FrameType::Rows + || type == WindowFrame::FrameType::Groups) + { + if (begin_type == BoundaryType::Offset + && !((begin_offset.getType() == Field::Types::UInt64 + || begin_offset.getType() == Field::Types::Int64) + && begin_offset.get() >= 0 + && begin_offset.get() < INT_MAX)) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Frame start offset for '{}' frame must be a nonnegative 32-bit integer, '{}' of type '{}' given.", + toString(type), + applyVisitor(FieldVisitorToString(), begin_offset), + Field::Types::toString(begin_offset.getType())); + } + + if (end_type == BoundaryType::Offset + && !((end_offset.getType() == Field::Types::UInt64 + || end_offset.getType() == Field::Types::Int64) + && end_offset.get() >= 0 + && end_offset.get() < INT_MAX)) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Frame end offset for '{}' frame must be a nonnegative 32-bit integer, '{}' of type '{}' given.", + toString(type), + applyVisitor(FieldVisitorToString(), end_offset), + Field::Types::toString(end_offset.getType())); + } + } + + // Check relative positioning of offsets. // UNBOUNDED PRECEDING end and UNBOUNDED FOLLOWING start should have been // forbidden at the parsing level. 
assert(!(begin_type == BoundaryType::Unbounded && !begin_preceding)); diff --git a/src/Interpreters/addMissingDefaults.cpp b/src/Interpreters/addMissingDefaults.cpp index bb444103d8e..ef3e4e095bc 100644 --- a/src/Interpreters/addMissingDefaults.cpp +++ b/src/Interpreters/addMissingDefaults.cpp @@ -19,7 +19,7 @@ ActionsDAGPtr addMissingDefaults( const Block & header, const NamesAndTypesList & required_columns, const ColumnsDescription & columns, - const Context & context) + ContextPtr context) { auto actions = std::make_shared(header.getColumnsWithTypeAndName()); auto & index = actions->getIndex(); diff --git a/src/Interpreters/addMissingDefaults.h b/src/Interpreters/addMissingDefaults.h index e746c7cc9e6..90376c41216 100644 --- a/src/Interpreters/addMissingDefaults.h +++ b/src/Interpreters/addMissingDefaults.h @@ -1,15 +1,16 @@ #pragma once -#include -#include +#include + #include +#include +#include namespace DB { class Block; -class Context; class NamesAndTypesList; class ColumnsDescription; @@ -23,9 +24,5 @@ using ActionsDAGPtr = std::shared_ptr; * All three types of columns are materialized (not constants). */ ActionsDAGPtr addMissingDefaults( - const Block & header, - const NamesAndTypesList & required_columns, - const ColumnsDescription & columns, - const Context & context); - + const Block & header, const NamesAndTypesList & required_columns, const ColumnsDescription & columns, ContextPtr context); } diff --git a/src/Interpreters/addTypeConversionToAST.cpp b/src/Interpreters/addTypeConversionToAST.cpp index 18591fd732c..73c95bd9a8c 100644 --- a/src/Interpreters/addTypeConversionToAST.cpp +++ b/src/Interpreters/addTypeConversionToAST.cpp @@ -32,7 +32,7 @@ ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name) return func; } -ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name, const NamesAndTypesList & all_columns, const Context & context) +ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name, const NamesAndTypesList & all_columns, ContextPtr context) { auto syntax_analyzer_result = TreeRewriter(context).analyze(ast, all_columns); const auto actions = ExpressionAnalyzer(ast, syntax_analyzer_result, context).getActions(true); diff --git a/src/Interpreters/addTypeConversionToAST.h b/src/Interpreters/addTypeConversionToAST.h index 16fa98f6e0c..eb391b2c749 100644 --- a/src/Interpreters/addTypeConversionToAST.h +++ b/src/Interpreters/addTypeConversionToAST.h @@ -1,17 +1,19 @@ #pragma once -#include +#include #include +#include namespace DB { -class Context; + class NamesAndTypesList; + /// It will produce an expression with CAST to get an AST with the required type. 
ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name); // If same type, then ignore the wrapper of CAST function -ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name, const NamesAndTypesList & all_columns, const Context & context); +ASTPtr addTypeConversionToAST(ASTPtr && ast, const String & type_name, const NamesAndTypesList & all_columns, ContextPtr context); } diff --git a/src/Interpreters/evaluateConstantExpression.cpp b/src/Interpreters/evaluateConstantExpression.cpp index a3301bcf55b..89924025c08 100644 --- a/src/Interpreters/evaluateConstantExpression.cpp +++ b/src/Interpreters/evaluateConstantExpression.cpp @@ -30,14 +30,14 @@ namespace ErrorCodes } -std::pair> evaluateConstantExpression(const ASTPtr & node, const Context & context) +std::pair> evaluateConstantExpression(const ASTPtr & node, ContextPtr context) { NamesAndTypesList source_columns = {{ "_dummy", std::make_shared() }}; auto ast = node->clone(); - ReplaceQueryParameterVisitor param_visitor(context.getQueryParameters()); + ReplaceQueryParameterVisitor param_visitor(context->getQueryParameters()); param_visitor.visit(ast); - if (context.getSettingsRef().normalize_function_names) + if (context->getSettingsRef().normalize_function_names) FunctionNameNormalizer().visit(ast.get()); String name = ast->getColumnName(); @@ -66,7 +66,7 @@ std::pair> evaluateConstantExpression(co } -ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, const Context & context) +ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, ContextPtr context) { /// If it's already a literal. if (node->as()) @@ -74,7 +74,7 @@ ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, const Context & return std::make_shared(evaluateConstantExpression(node, context).first); } -ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, const Context & context) +ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, ContextPtr context) { if (const auto * id = node->as()) return std::make_shared(id->name()); @@ -82,18 +82,18 @@ ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, cons return evaluateConstantExpressionAsLiteral(node, context); } -ASTPtr evaluateConstantExpressionForDatabaseName(const ASTPtr & node, const Context & context) +ASTPtr evaluateConstantExpressionForDatabaseName(const ASTPtr & node, ContextPtr context) { ASTPtr res = evaluateConstantExpressionOrIdentifierAsLiteral(node, context); auto & literal = res->as(); if (literal.value.safeGet().empty()) { - String current_database = context.getCurrentDatabase(); + String current_database = context->getCurrentDatabase(); if (current_database.empty()) { /// Table was created on older version of ClickHouse and CREATE contains not folded expression. /// Current database is not set yet during server startup, so we cannot evaluate it correctly. 
- literal.value = context.getConfigRef().getString("default_database", "default"); + literal.value = context->getConfigRef().getString("default_database", "default"); } else literal.value = current_database; diff --git a/src/Interpreters/evaluateConstantExpression.h b/src/Interpreters/evaluateConstantExpression.h index c797b8461de..b95982f5b99 100644 --- a/src/Interpreters/evaluateConstantExpression.h +++ b/src/Interpreters/evaluateConstantExpression.h @@ -2,6 +2,7 @@ #include #include +#include #include #include @@ -12,7 +13,6 @@ namespace DB { -class Context; class ExpressionActions; class IDataType; @@ -23,25 +23,25 @@ using ExpressionActionsPtr = std::shared_ptr; * Throws exception if it's not a constant expression. * Quite suboptimal. */ -std::pair> evaluateConstantExpression(const ASTPtr & node, const Context & context); +std::pair> evaluateConstantExpression(const ASTPtr & node, ContextPtr context); /** Evaluate constant expression and returns ASTLiteral with its value. */ -ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, const Context & context); +ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, ContextPtr context); /** Evaluate constant expression and returns ASTLiteral with its value. * Also, if AST is identifier, then return string literal with its name. * Useful in places where some name may be specified as identifier, or as result of a constant expression. */ -ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, const Context & context); +ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, ContextPtr context); /** The same as evaluateConstantExpressionOrIdentifierAsLiteral(...), * but if result is an empty string, replace it with current database name * or default database name. */ -ASTPtr evaluateConstantExpressionForDatabaseName(const ASTPtr & node, const Context & context); +ASTPtr evaluateConstantExpressionForDatabaseName(const ASTPtr & node, ContextPtr context); /** Try to fold condition to countable set of constant values. * @param node a condition that we try to fold. 
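Most of the mechanical churn in the files above is the same migration: functions and classes stop holding a raw const Context & and instead receive a shared ContextPtr, usually through a WithContext base whose getContext() is used at call sites. A rough standalone sketch of that ownership pattern, with a deliberately simplified Context and an invented ExampleAnalyzer class, might look like this:

    #include <iostream>
    #include <memory>
    #include <string>
    #include <utility>

    // Simplified stand-ins; the real Context and ContextPtr are far richer.
    struct Context { std::string current_database = "default"; };
    using ContextPtr = std::shared_ptr<const Context>;

    // Stores the shared context and exposes it via getContext(), in the spirit
    // of the WithContext base the patch derives from.
    class WithContext
    {
    public:
        explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}
        ContextPtr getContext() const { return context; }

    private:
        ContextPtr context;
    };

    // Mirrors the shape of the migrated classes: take ContextPtr in the
    // constructor, forward it to the base, and call getContext() everywhere
    // instead of keeping a reference to an object the class does not own.
    class ExampleAnalyzer : WithContext
    {
    public:
        explicit ExampleAnalyzer(ContextPtr context_) : WithContext(std::move(context_)) {}
        void run() const { std::cout << getContext()->current_database << '\n'; }
    };

    int main()
    {
        auto context = std::make_shared<const Context>();
        ExampleAnalyzer{context}.run(); // prints "default"
    }

The point of the shared pointer is lifetime: holders such as QueryStatus or the system logs above keep the context alive for as long as they need it, rather than referencing storage owned elsewhere.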
diff --git a/src/Interpreters/executeDDLQueryOnCluster.cpp b/src/Interpreters/executeDDLQueryOnCluster.cpp index d4e8d06e613..99ece6bb14c 100644 --- a/src/Interpreters/executeDDLQueryOnCluster.cpp +++ b/src/Interpreters/executeDDLQueryOnCluster.cpp @@ -48,17 +48,17 @@ bool isSupportedAlterType(int type) } -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & context) +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, ContextPtr context) { return executeDDLQueryOnCluster(query_ptr_, context, {}); } -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, const Context & context, const AccessRightsElements & query_requires_access) +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, ContextPtr context, const AccessRightsElements & query_requires_access) { return executeDDLQueryOnCluster(query_ptr, context, AccessRightsElements{query_requires_access}); } -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & context, AccessRightsElements && query_requires_access) +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, ContextPtr context, AccessRightsElements && query_requires_access) { /// Remove FORMAT and INTO OUTFILE if exists ASTPtr query_ptr = query_ptr_->clone(); @@ -71,7 +71,7 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont throw Exception("Distributed execution is not supported for such DDL queries", ErrorCodes::NOT_IMPLEMENTED); } - if (!context.getSettingsRef().allow_distributed_ddl) + if (!context->getSettingsRef().allow_distributed_ddl) throw Exception("Distributed DDL queries are prohibited for the user", ErrorCodes::QUERY_IS_PROHIBITED); if (const auto * query_alter = query_ptr->as()) @@ -83,9 +83,9 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont } } - query->cluster = context.getMacros()->expand(query->cluster); - ClusterPtr cluster = context.getCluster(query->cluster); - DDLWorker & ddl_worker = context.getDDLWorker(); + query->cluster = context->getMacros()->expand(query->cluster); + ClusterPtr cluster = context->getCluster(query->cluster); + DDLWorker & ddl_worker = context->getDDLWorker(); /// Enumerate hosts which will be used to send query. 
Cluster::AddressesWithFailover shards = cluster->getShardsAddresses(); @@ -109,7 +109,7 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont != query_requires_access.end()); bool use_local_default_database = false; - const String & current_database = context.getCurrentDatabase(); + const String & current_database = context->getCurrentDatabase(); if (need_replace_current_database) { @@ -157,7 +157,7 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont visitor.visitDDL(query_ptr); /// Check access rights, assume that all servers have the same users config - context.checkAccess(query_requires_access); + context->checkAccess(query_requires_access); DDLLogEntry entry; entry.hosts = std::move(hosts); @@ -169,14 +169,14 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont return getDistributedDDLStatus(node_path, entry, context); } -BlockIO getDistributedDDLStatus(const String & node_path, const DDLLogEntry & entry, const Context & context, const std::optional & hosts_to_wait) +BlockIO getDistributedDDLStatus(const String & node_path, const DDLLogEntry & entry, ContextPtr context, const std::optional & hosts_to_wait) { BlockIO io; - if (context.getSettingsRef().distributed_ddl_task_timeout == 0) + if (context->getSettingsRef().distributed_ddl_task_timeout == 0) return io; auto stream = std::make_shared(node_path, entry, context, hosts_to_wait); - if (context.getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NONE) + if (context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NONE) { /// Wait for query to finish, but ignore output NullBlockOutputStream output{Block{}}; @@ -189,18 +189,18 @@ BlockIO getDistributedDDLStatus(const String & node_path, const DDLLogEntry & en return io; } -DDLQueryStatusInputStream::DDLQueryStatusInputStream(const String & zk_node_path, const DDLLogEntry & entry, const Context & context_, +DDLQueryStatusInputStream::DDLQueryStatusInputStream(const String & zk_node_path, const DDLLogEntry & entry, ContextPtr context_, const std::optional & hosts_to_wait) : node_path(zk_node_path) , context(context_) , watch(CLOCK_MONOTONIC_COARSE) , log(&Poco::Logger::get("DDLQueryStatusInputStream")) { - if (context.getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::THROW || - context.getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NONE) + if (context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::THROW || + context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NONE) throw_on_timeout = true; - else if (context.getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NULL_STATUS_ON_TIMEOUT || - context.getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NEVER_THROW) + else if (context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NULL_STATUS_ON_TIMEOUT || + context->getSettingsRef().distributed_ddl_output_mode == DistributedDDLOutputMode::NEVER_THROW) throw_on_timeout = false; else throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown output mode"); @@ -235,7 +235,7 @@ DDLQueryStatusInputStream::DDLQueryStatusInputStream(const String & zk_node_path addTotalRowsApprox(waiting_hosts.size()); - timeout_seconds = context.getSettingsRef().distributed_ddl_task_timeout; + timeout_seconds = context->getSettingsRef().distributed_ddl_task_timeout; } std::pair 
DDLQueryStatusInputStream::parseHostAndPort(const String & host_id) const @@ -259,21 +259,21 @@ Block DDLQueryStatusInputStream::readImpl() assert(num_hosts_finished <= waiting_hosts.size()); if (all_hosts_finished || timeout_exceeded) { - bool throw_if_error_on_host = context.getSettingsRef().distributed_ddl_output_mode != DistributedDDLOutputMode::NEVER_THROW; + bool throw_if_error_on_host = context->getSettingsRef().distributed_ddl_output_mode != DistributedDDLOutputMode::NEVER_THROW; if (first_exception && throw_if_error_on_host) throw Exception(*first_exception); return res; } - auto zookeeper = context.getZooKeeper(); + auto zookeeper = context->getZooKeeper(); size_t try_number = 0; while (res.rows() == 0) { if (isCancelled()) { - bool throw_if_error_on_host = context.getSettingsRef().distributed_ddl_output_mode != DistributedDDLOutputMode::NEVER_THROW; + bool throw_if_error_on_host = context->getSettingsRef().distributed_ddl_output_mode != DistributedDDLOutputMode::NEVER_THROW; if (first_exception && throw_if_error_on_host) throw Exception(*first_exception); diff --git a/src/Interpreters/executeDDLQueryOnCluster.h b/src/Interpreters/executeDDLQueryOnCluster.h index a33b89d0cb3..bbd39a6e8ec 100644 --- a/src/Interpreters/executeDDLQueryOnCluster.h +++ b/src/Interpreters/executeDDLQueryOnCluster.h @@ -1,5 +1,7 @@ #pragma once + #include +#include #include namespace zkutil @@ -10,7 +12,6 @@ namespace zkutil namespace DB { -class Context; class AccessRightsElements; struct DDLLogEntry; @@ -20,16 +21,16 @@ bool isSupportedAlterType(int type); /// Pushes distributed DDL query to the queue. /// Returns DDLQueryStatusInputStream, which reads results of query execution on each host in the cluster. -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, const Context & context); -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, const Context & context, const AccessRightsElements & query_requires_access); -BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, const Context & context, AccessRightsElements && query_requires_access); +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, ContextPtr context); +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, ContextPtr context, const AccessRightsElements & query_requires_access); +BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr, ContextPtr context, AccessRightsElements && query_requires_access); -BlockIO getDistributedDDLStatus(const String & node_path, const DDLLogEntry & entry, const Context & context, const std::optional & hosts_to_wait = {}); +BlockIO getDistributedDDLStatus(const String & node_path, const DDLLogEntry & entry, ContextPtr context, const std::optional & hosts_to_wait = {}); class DDLQueryStatusInputStream final : public IBlockInputStream { public: - DDLQueryStatusInputStream(const String & zk_node_path, const DDLLogEntry & entry, const Context & context_, const std::optional & hosts_to_wait = {}); + DDLQueryStatusInputStream(const String & zk_node_path, const DDLLogEntry & entry, ContextPtr context_, const std::optional & hosts_to_wait = {}); String getName() const override { return "DDLQueryStatusInputStream"; } @@ -48,7 +49,7 @@ private: std::pair parseHostAndPort(const String & host_id) const; String node_path; - const Context & context; + ContextPtr context; Stopwatch watch; Poco::Logger * log; diff --git a/src/Interpreters/executeQuery.cpp b/src/Interpreters/executeQuery.cpp index a5c21405ff1..5df245f9f26 100644 --- a/src/Interpreters/executeQuery.cpp +++ 
b/src/Interpreters/executeQuery.cpp @@ -121,7 +121,7 @@ static String joinLines(const String & query) } -static String prepareQueryForLogging(const String & query, Context & context) +static String prepareQueryForLogging(const String & query, ContextPtr context) { String res = query; @@ -136,14 +136,14 @@ static String prepareQueryForLogging(const String & query, Context & context) } } - res = res.substr(0, context.getSettingsRef().log_queries_cut_to_length); + res = res.substr(0, context->getSettingsRef().log_queries_cut_to_length); return res; } /// Log query into text log (not into system table). -static void logQuery(const String & query, const Context & context, bool internal) +static void logQuery(const String & query, ContextPtr context, bool internal) { if (internal) { @@ -151,14 +151,14 @@ static void logQuery(const String & query, const Context & context, bool interna } else { - const auto & client_info = context.getClientInfo(); + const auto & client_info = context->getClientInfo(); const auto & current_query_id = client_info.current_query_id; const auto & initial_query_id = client_info.initial_query_id; const auto & current_user = client_info.current_user; - String comment = context.getSettingsRef().log_comment; - size_t max_query_size = context.getSettingsRef().max_query_size; + String comment = context->getSettingsRef().log_comment; + size_t max_query_size = context->getSettingsRef().max_query_size; if (comment.size() > max_query_size) comment.resize(max_query_size); @@ -170,7 +170,7 @@ static void logQuery(const String & query, const Context & context, bool interna client_info.current_address.toString(), (current_user != "default" ? ", user: " + current_user : ""), (!initial_query_id.empty() && current_query_id != initial_query_id ? ", initial_query_id: " + initial_query_id : std::string()), - (context.getSettingsRef().use_antlr_parser ? "experimental" : "production"), + (context->getSettingsRef().use_antlr_parser ? "experimental" : "production"), comment, joinLines(query)); @@ -204,19 +204,30 @@ static void setExceptionStackTrace(QueryLogElement & elem) /// Log exception (with query info) into text log (not into system table). 
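`prepareQueryForLogging` above cuts the query text to the `log_queries_cut_to_length` setting before it reaches the text log. A tiny sketch of that truncation with stand-in code (the setting value is passed directly here instead of being read from a context, and the real helper does additional processing before this step):

#include <iostream>
#include <string>

static std::string prepareQueryForLogging(const std::string & query, size_t log_queries_cut_to_length)
{
    std::string res = query;
    // Cut the logged text to the configured length; queries longer than the
    // limit are stored truncated in the text log.
    res = res.substr(0, log_queries_cut_to_length);
    return res;
}

int main()
{
    std::string query = "SELECT a_very_long_expression FROM a_table WHERE something";
    std::cout << prepareQueryForLogging(query, 16) << '\n';  // prints "SELECT a_very_lo"
}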
-static void logException(Context & context, QueryLogElement & elem) +static void logException(ContextPtr context, QueryLogElement & elem) { String comment; if (!elem.log_comment.empty()) comment = fmt::format(" (comment: {})", elem.log_comment); if (elem.stack_trace.empty()) - LOG_ERROR(&Poco::Logger::get("executeQuery"), "{} (from {}){} (in query: {})", - elem.exception, context.getClientInfo().current_address.toString(), comment, joinLines(elem.query)); + LOG_ERROR( + &Poco::Logger::get("executeQuery"), + "{} (from {}){} (in query: {})", + elem.exception, + context->getClientInfo().current_address.toString(), + comment, + joinLines(elem.query)); else - LOG_ERROR(&Poco::Logger::get("executeQuery"), "{} (from {}){} (in query: {})" + LOG_ERROR( + &Poco::Logger::get("executeQuery"), + "{} (from {}){} (in query: {})" ", Stack trace (when copying this message, always include the lines below):\n\n{}", - elem.exception, context.getClientInfo().current_address.toString(), comment, joinLines(elem.query), elem.stack_trace); + elem.exception, + context->getClientInfo().current_address.toString(), + comment, + joinLines(elem.query), + elem.stack_trace); } inline UInt64 time_in_microseconds(std::chrono::time_point timepoint) @@ -230,13 +241,13 @@ inline UInt64 time_in_seconds(std::chrono::time_point return std::chrono::duration_cast(timepoint.time_since_epoch()).count(); } -static void onExceptionBeforeStart(const String & query_for_logging, Context & context, UInt64 current_time_us, ASTPtr ast) +static void onExceptionBeforeStart(const String & query_for_logging, ContextPtr context, UInt64 current_time_us, ASTPtr ast) { /// Exception before the query execution. - if (auto quota = context.getQuota()) + if (auto quota = context->getQuota()) quota->used(Quota::ERRORS, 1, /* check_exceeded = */ false); - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); /// Log the start of query execution into the table if necessary. 
QueryLogElement elem; @@ -251,7 +262,7 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c elem.query_start_time = current_time_us / 1000000; elem.query_start_time_microseconds = current_time_us; - elem.current_database = context.getCurrentDatabase(); + elem.current_database = context->getCurrentDatabase(); elem.query = query_for_logging; elem.normalized_query_hash = normalizedQueryHash(query_for_logging); @@ -260,7 +271,7 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c elem.exception_code = getCurrentExceptionCode(); elem.exception = getCurrentExceptionMessage(false); - elem.client_info = context.getClientInfo(); + elem.client_info = context->getClientInfo(); elem.log_comment = settings.log_comment; if (elem.log_comment.size() > settings.max_query_size) @@ -274,17 +285,17 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c CurrentThread::finalizePerformanceCounters(); if (settings.log_queries && elem.type >= settings.log_queries_min_type && !settings.log_queries_min_query_duration_ms.totalMilliseconds()) - if (auto query_log = context.getQueryLog()) + if (auto query_log = context->getQueryLog()) query_log->add(elem); - if (auto opentelemetry_span_log = context.getOpenTelemetrySpanLog(); - context.query_trace_context.trace_id + if (auto opentelemetry_span_log = context->getOpenTelemetrySpanLog(); + context->query_trace_context.trace_id && opentelemetry_span_log) { OpenTelemetrySpanLogElement span; - span.trace_id = context.query_trace_context.trace_id; - span.span_id = context.query_trace_context.span_id; - span.parent_span_id = context.getClientInfo().client_trace_context.span_id; + span.trace_id = context->query_trace_context.trace_id; + span.span_id = context->query_trace_context.span_id; + span.parent_span_id = context->getClientInfo().client_trace_context.span_id; span.operation_name = "query"; span.start_time_us = current_time_us; span.finish_time_us = current_time_us; @@ -299,11 +310,11 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c span.attribute_names.push_back("clickhouse.query_id"); span.attribute_values.push_back(elem.client_info.current_query_id); - if (!context.query_trace_context.tracestate.empty()) + if (!context->query_trace_context.tracestate.empty()) { span.attribute_names.push_back("clickhouse.tracestate"); span.attribute_values.push_back( - context.query_trace_context.tracestate); + context->query_trace_context.tracestate); } opentelemetry_span_log->add(span); @@ -324,19 +335,19 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & c } } -static void setQuerySpecificSettings(ASTPtr & ast, Context & context) +static void setQuerySpecificSettings(ASTPtr & ast, ContextPtr context) { if (auto * ast_insert_into = dynamic_cast(ast.get())) { if (ast_insert_into->watch) - context.setSetting("output_format_enable_streaming", 1); + context->setSetting("output_format_enable_streaming", 1); } } static std::tuple executeQueryImpl( const char * begin, const char * end, - Context & context, + ContextPtr context, bool internal, QueryProcessingStage::Enum stage, bool has_query_tail, @@ -349,7 +360,7 @@ static std::tuple executeQueryImpl( assert(internal || CurrentThread::get().getQueryContext()->getCurrentQueryId() == CurrentThread::getQueryId()); #endif - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); ASTPtr ast; const char * query_end; @@ -365,7 +376,7 @@ 
static std::tuple executeQueryImpl( #if !defined(ARCADIA_BUILD) if (settings.use_antlr_parser) { - ast = parseQuery(begin, end, max_query_size, settings.max_parser_depth, context.getCurrentDatabase()); + ast = parseQuery(begin, end, max_query_size, settings.max_parser_depth, context->getCurrentDatabase()); } else { @@ -456,9 +467,9 @@ static std::tuple executeQueryImpl( try { /// Replace ASTQueryParameter with ASTLiteral for prepared statements. - if (context.hasQueryParameters()) + if (context->hasQueryParameters()) { - ReplaceQueryParameterVisitor visitor(context.getQueryParameters()); + ReplaceQueryParameterVisitor visitor(context->getQueryParameters()); visitor.visit(ast); query = serializeAST(*ast); } @@ -476,7 +487,7 @@ static std::tuple executeQueryImpl( } /// Normalize SelectWithUnionQuery - NormalizeSelectWithUnionQueryVisitor::Data data{context.getSettingsRef().union_default_mode}; + NormalizeSelectWithUnionQueryVisitor::Data data{context->getSettingsRef().union_default_mode}; NormalizeSelectWithUnionQueryVisitor{data}.visit(ast); /// Check the limits. @@ -487,12 +498,12 @@ static std::tuple executeQueryImpl( if (!internal && !ast->as()) { /// processlist also has query masked now, to avoid secrets leaks though SHOW PROCESSLIST by other users. - process_list_entry = context.getProcessList().insert(query_for_logging, ast.get(), context); - context.setProcessListElement(&process_list_entry->get()); + process_list_entry = context->getProcessList().insert(query_for_logging, ast.get(), context); + context->setProcessListElement(&process_list_entry->get()); } /// Load external tables if they were provided - context.initializeExternalTablesIfSet(); + context->initializeExternalTablesIfSet(); auto * insert_query = ast->as(); if (insert_query && insert_query->select) @@ -504,7 +515,7 @@ static std::tuple executeQueryImpl( insert_query->tryFindInputFunction(input_function); if (input_function) { - StoragePtr storage = context.executeTableFunction(input_function); + StoragePtr storage = context->executeTableFunction(input_function); auto & input_storage = dynamic_cast(*storage); auto input_metadata_snapshot = input_storage.getInMemoryMetadataPtr(); BlockInputStreamPtr input_stream = std::make_shared( @@ -515,14 +526,14 @@ static std::tuple executeQueryImpl( } else /// reset Input callbacks if query is not INSERT SELECT - context.resetInputCallbacks(); + context->resetInputCallbacks(); auto interpreter = InterpreterFactory::get(ast, context, SelectQueryOptions(stage).setInternal(internal)); std::shared_ptr quota; if (!interpreter->ignoreQuota()) { - quota = context.getQuota(); + quota = context->getQuota(); if (quota) { if (ast->as() || ast->as()) @@ -558,7 +569,7 @@ static std::tuple executeQueryImpl( /// Save insertion table (not table function). TODO: support remote() table function. auto table_id = insert_interpreter->getDatabaseTable(); if (!table_id.empty()) - context.setInsertionTable(std::move(table_id)); + context->setInsertionTable(std::move(table_id)); } if (process_list_entry) @@ -578,8 +589,8 @@ static std::tuple executeQueryImpl( { /// Limits on the result, the quota on the result, and also callback for progress. /// Limits apply only to the final result. 
- pipeline.setProgressCallback(context.getProgressCallback()); - pipeline.setProcessListElement(context.getProcessListElement()); + pipeline.setProgressCallback(context->getProgressCallback()); + pipeline.setProcessListElement(context->getProcessListElement()); if (stage == QueryProcessingStage::Complete && !pipeline.isCompleted()) { pipeline.resize(1); @@ -597,8 +608,8 @@ static std::tuple executeQueryImpl( /// Limits apply only to the final result. if (res.in) { - res.in->setProgressCallback(context.getProgressCallback()); - res.in->setProcessListElement(context.getProcessListElement()); + res.in->setProgressCallback(context->getProgressCallback()); + res.in->setProcessListElement(context->getProcessListElement()); if (stage == QueryProcessingStage::Complete) { if (!interpreter->ignoreQuota()) @@ -612,7 +623,7 @@ static std::tuple executeQueryImpl( { if (auto * stream = dynamic_cast(res.out.get())) { - stream->setProcessListElement(context.getProcessListElement()); + stream->setProcessListElement(context->getProcessListElement()); } } } @@ -628,11 +639,11 @@ static std::tuple executeQueryImpl( elem.query_start_time = time_in_seconds(current_time); elem.query_start_time_microseconds = time_in_microseconds(current_time); - elem.current_database = context.getCurrentDatabase(); + elem.current_database = context->getCurrentDatabase(); elem.query = query_for_logging; elem.normalized_query_hash = normalizedQueryHash(query_for_logging); - elem.client_info = context.getClientInfo(); + elem.client_info = context->getClientInfo(); bool log_queries = settings.log_queries && !internal; @@ -641,7 +652,7 @@ static std::tuple executeQueryImpl( { if (use_processors) { - const auto & info = context.getQueryAccessInfo(); + const auto & info = context->getQueryAccessInfo(); elem.query_databases = info.databases; elem.query_tables = info.tables; elem.query_columns = info.columns; @@ -650,7 +661,7 @@ static std::tuple executeQueryImpl( interpreter->extendQueryLogElem(elem, ast, context, query_database, query_table); if (settings.log_query_settings) - elem.query_settings = std::make_shared(context.getSettingsRef()); + elem.query_settings = std::make_shared(context->getSettingsRef()); elem.log_comment = settings.log_comment; if (elem.log_comment.size() > settings.max_query_size) @@ -658,7 +669,7 @@ static std::tuple executeQueryImpl( if (elem.type >= settings.log_queries_min_type && !settings.log_queries_min_query_duration_ms.totalMilliseconds()) { - if (auto query_log = context.getQueryLog()) + if (auto query_log = context->getQueryLog()) query_log->add(elem); } } @@ -692,7 +703,7 @@ static std::tuple executeQueryImpl( }; /// Also make possible for caller to log successful query finish and exception during execution. 
- auto finish_callback = [elem, &context, ast, + auto finish_callback = [elem, context, ast, log_queries, log_queries_min_type = settings.log_queries_min_type, log_queries_min_query_duration_ms = settings.log_queries_min_query_duration_ms.totalMilliseconds(), @@ -700,7 +711,7 @@ static std::tuple executeQueryImpl( ] (IBlockInputStream * stream_in, IBlockOutputStream * stream_out, QueryPipeline * query_pipeline) mutable { - QueryStatus * process_list_elem = context.getProcessListElement(); + QueryStatus * process_list_elem = context->getProcessListElement(); if (!process_list_elem) return; @@ -708,7 +719,7 @@ static std::tuple executeQueryImpl( /// Update performance counters before logging to query_log CurrentThread::finalizePerformanceCounters(); - QueryStatusInfo info = process_list_elem->getInfo(true, context.getSettingsRef().log_profile_events); + QueryStatusInfo info = process_list_elem->getInfo(true, context->getSettingsRef().log_profile_events); double elapsed_seconds = info.elapsed_seconds; @@ -721,7 +732,7 @@ static std::tuple executeQueryImpl( elem.event_time_microseconds = time_in_microseconds(finish_time); status_info_to_query_log(elem, info, ast); - auto progress_callback = context.getProgressCallback(); + auto progress_callback = context->getProgressCallback(); if (progress_callback) progress_callback(Progress(WriteProgress(info.written_rows, info.written_bytes))); @@ -763,7 +774,7 @@ static std::tuple executeQueryImpl( elem.thread_ids = std::move(info.thread_ids); elem.profile_counters = std::move(info.profile_counters); - const auto & factories_info = context.getQueryFactoriesInfo(); + const auto & factories_info = context->getQueryFactoriesInfo(); elem.used_aggregate_functions = factories_info.aggregate_functions; elem.used_aggregate_function_combinators = factories_info.aggregate_function_combinators; elem.used_database_engines = factories_info.database_engines; @@ -776,18 +787,18 @@ static std::tuple executeQueryImpl( if (log_queries && elem.type >= log_queries_min_type && Int64(elem.query_duration_ms) >= log_queries_min_query_duration_ms) { - if (auto query_log = context.getQueryLog()) + if (auto query_log = context->getQueryLog()) query_log->add(elem); } - if (auto opentelemetry_span_log = context.getOpenTelemetrySpanLog(); - context.query_trace_context.trace_id + if (auto opentelemetry_span_log = context->getOpenTelemetrySpanLog(); + context->query_trace_context.trace_id && opentelemetry_span_log) { OpenTelemetrySpanLogElement span; - span.trace_id = context.query_trace_context.trace_id; - span.span_id = context.query_trace_context.span_id; - span.parent_span_id = context.getClientInfo().client_trace_context.span_id; + span.trace_id = context->query_trace_context.trace_id; + span.span_id = context->query_trace_context.span_id; + span.parent_span_id = context->getClientInfo().client_trace_context.span_id; span.operation_name = "query"; span.start_time_us = elem.query_start_time_microseconds; span.finish_time_us = time_in_microseconds(finish_time); @@ -801,18 +812,18 @@ static std::tuple executeQueryImpl( span.attribute_names.push_back("clickhouse.query_id"); span.attribute_values.push_back(elem.client_info.current_query_id); - if (!context.query_trace_context.tracestate.empty()) + if (!context->query_trace_context.tracestate.empty()) { span.attribute_names.push_back("clickhouse.tracestate"); span.attribute_values.push_back( - context.query_trace_context.tracestate); + context->query_trace_context.tracestate); } opentelemetry_span_log->add(span); } }; - auto 
exception_callback = [elem, &context, ast, + auto exception_callback = [elem, context, ast, log_queries, log_queries_min_type = settings.log_queries_min_type, log_queries_min_query_duration_ms = settings.log_queries_min_query_duration_ms.totalMilliseconds(), @@ -833,8 +844,8 @@ static std::tuple executeQueryImpl( elem.exception_code = getCurrentExceptionCode(); elem.exception = getCurrentExceptionMessage(false); - QueryStatus * process_list_elem = context.getProcessListElement(); - const Settings & current_settings = context.getSettingsRef(); + QueryStatus * process_list_elem = context->getProcessListElement(); + const Settings & current_settings = context->getSettingsRef(); /// Update performance counters before logging to query_log CurrentThread::finalizePerformanceCounters(); @@ -852,7 +863,7 @@ static std::tuple executeQueryImpl( /// In case of exception we log internal queries also if (log_queries && elem.type >= log_queries_min_type && Int64(elem.query_duration_ms) >= log_queries_min_query_duration_ms) { - if (auto query_log = context.getQueryLog()) + if (auto query_log = context->getQueryLog()) query_log->add(elem); } @@ -898,7 +909,7 @@ static std::tuple executeQueryImpl( BlockIO executeQuery( const String & query, - Context & context, + ContextPtr context, bool internal, QueryProcessingStage::Enum stage, bool may_have_embedded_data) @@ -912,7 +923,7 @@ BlockIO executeQuery( { String format_name = ast_query_with_output->format ? getIdentifierName(ast_query_with_output->format) - : context.getDefaultFormat(); + : context->getDefaultFormat(); if (format_name == "Null") streams.null_format = true; @@ -923,7 +934,7 @@ BlockIO executeQuery( BlockIO executeQuery( const String & query, - Context & context, + ContextPtr context, bool internal, QueryProcessingStage::Enum stage, bool may_have_embedded_data, @@ -942,7 +953,7 @@ void executeQuery( ReadBuffer & istr, WriteBuffer & ostr, bool allow_into_outfile, - Context & context, + ContextPtr context, std::function set_result_details) { PODArray parse_buf; @@ -953,7 +964,7 @@ void executeQuery( if (!istr.hasPendingData()) istr.next(); - size_t max_query_size = context.getSettingsRef().max_query_size; + size_t max_query_size = context->getSettingsRef().max_query_size; bool may_have_tail; if (istr.buffer().end() - istr.position() > static_cast(max_query_size)) @@ -1012,12 +1023,12 @@ void executeQuery( String format_name = ast_query_with_output && (ast_query_with_output->format != nullptr) ? getIdentifierName(ast_query_with_output->format) - : context.getDefaultFormat(); + : context->getDefaultFormat(); - auto out = context.getOutputStreamParallelIfPossible(format_name, *out_buf, streams.in->getHeader()); + auto out = context->getOutputStreamParallelIfPossible(format_name, *out_buf, streams.in->getHeader()); /// Save previous progress callback if any. TODO Do it more conveniently. - auto previous_progress_callback = context.getProgressCallback(); + auto previous_progress_callback = context->getProgressCallback(); /// NOTE Progress callback takes shared ownership of 'out'. 
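Both `finish_callback` and `exception_callback` now capture `context` by value instead of by reference. Since `context` is a `ContextPtr` (a shared pointer), the by-value capture keeps the query context alive for as long as the callback itself lives, whereas the old `&context` capture only stored a reference. A small sketch of the difference, with a dummy `Context` standing in for the real one:

#include <functional>
#include <iostream>
#include <memory>

struct Context { int current_query_id = 42; };
using ContextPtr = std::shared_ptr<Context>;

int main()
{
    std::function<void()> finish_callback;
    {
        auto context = std::make_shared<Context>();
        // Capturing the shared_ptr by value (as in `[elem, context, ast, ...]`)
        // gives the callback shared ownership of the context. A reference
        // capture of a plain `Context &` would dangle once this scope ends.
        finish_callback = [context] { std::cout << context->current_query_id << '\n'; };
    }
    finish_callback();  // still valid: the callback owns a share of the context
}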
streams.in->setProgressCallback([out, previous_progress_callback] (const Progress & progress) @@ -1028,7 +1039,8 @@ void executeQuery( }); if (set_result_details) - set_result_details(context.getClientInfo().current_query_id, out->getContentType(), format_name, DateLUT::instance().getTimeZone()); + set_result_details( + context->getClientInfo().current_query_id, out->getContentType(), format_name, DateLUT::instance().getTimeZone()); copyData(*streams.in, *out, [](){ return false; }, [&out](const Block &) { out->flush(); }); } @@ -1050,7 +1062,7 @@ void executeQuery( String format_name = ast_query_with_output && (ast_query_with_output->format != nullptr) ? getIdentifierName(ast_query_with_output->format) - : context.getDefaultFormat(); + : context->getDefaultFormat(); if (!pipeline.isCompleted()) { @@ -1059,11 +1071,11 @@ void executeQuery( return std::make_shared(header); }); - auto out = context.getOutputFormatParallelIfPossible(format_name, *out_buf, pipeline.getHeader()); + auto out = context->getOutputFormatParallelIfPossible(format_name, *out_buf, pipeline.getHeader()); out->setAutoFlush(); /// Save previous progress callback if any. TODO Do it more conveniently. - auto previous_progress_callback = context.getProgressCallback(); + auto previous_progress_callback = context->getProgressCallback(); /// NOTE Progress callback takes shared ownership of 'out'. pipeline.setProgressCallback([out, previous_progress_callback] (const Progress & progress) @@ -1074,13 +1086,14 @@ void executeQuery( }); if (set_result_details) - set_result_details(context.getClientInfo().current_query_id, out->getContentType(), format_name, DateLUT::instance().getTimeZone()); + set_result_details( + context->getClientInfo().current_query_id, out->getContentType(), format_name, DateLUT::instance().getTimeZone()); pipeline.setOutputFormat(std::move(out)); } else { - pipeline.setProgressCallback(context.getProgressCallback()); + pipeline.setProgressCallback(context->getProgressCallback()); } { diff --git a/src/Interpreters/executeQuery.h b/src/Interpreters/executeQuery.h index 2850bb3baf4..bdb1f877ce3 100644 --- a/src/Interpreters/executeQuery.h +++ b/src/Interpreters/executeQuery.h @@ -2,7 +2,6 @@ #include #include - #include namespace DB @@ -10,7 +9,6 @@ namespace DB class ReadBuffer; class WriteBuffer; -class Context; /// Parse and execute a query. @@ -18,7 +16,7 @@ void executeQuery( ReadBuffer & istr, /// Where to read query from (and data for INSERT, if present). WriteBuffer & ostr, /// Where to write query output to. bool allow_into_outfile, /// If true and the query contains INTO OUTFILE section, redirect output to that file. - Context & context, /// DB, tables, data types, storage engines, functions, aggregate functions... + ContextPtr context, /// DB, tables, data types, storage engines, functions, aggregate functions... std::function set_result_details /// If a non-empty callback is passed, it will be called with the query id, the content-type, the format, and the timezone. ); @@ -39,7 +37,7 @@ void executeQuery( /// must be done separately. BlockIO executeQuery( const String & query, /// Query text without INSERT data. The latter must be written to BlockIO::out. - Context & context, /// DB, tables, data types, storage engines, functions, aggregate functions... + ContextPtr context, /// DB, tables, data types, storage engines, functions, aggregate functions... bool internal = false, /// If true, this query is caused by another query and thus needn't be registered in the ProcessList. 
QueryProcessingStage::Enum stage = QueryProcessingStage::Complete, /// To which stage the query must be executed. bool may_have_embedded_data = false /// If insert query may have embedded data @@ -48,7 +46,7 @@ BlockIO executeQuery( /// Old interface with allow_processors flag. For compatibility. BlockIO executeQuery( const String & query, - Context & context, + ContextPtr context, bool internal, QueryProcessingStage::Enum stage, bool may_have_embedded_data, diff --git a/src/Interpreters/getHeaderForProcessingStage.cpp b/src/Interpreters/getHeaderForProcessingStage.cpp index b56b90cdf3f..9c7c86a0b88 100644 --- a/src/Interpreters/getHeaderForProcessingStage.cpp +++ b/src/Interpreters/getHeaderForProcessingStage.cpp @@ -12,21 +12,27 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -/// Rewrite original query removing joined tables from it -bool removeJoin(ASTSelectQuery & select) +bool hasJoin(const ASTSelectQuery & select) { const auto & tables = select.tables(); if (!tables || tables->children.size() < 2) return false; const auto & joined_table = tables->children[1]->as(); - if (!joined_table.table_join) - return false; + return joined_table.table_join != nullptr; +} - /// The most simple temporary solution: leave only the first table in query. - /// TODO: we also need to remove joined columns and related functions (taking in account aliases if any). - tables->children.resize(1); - return true; +/// Rewrite original query removing joined tables from it +bool removeJoin(ASTSelectQuery & select) +{ + if (hasJoin(select)) + { + /// The most simple temporary solution: leave only the first table in query. + /// TODO: we also need to remove joined columns and related functions (taking in account aliases if any). + select.tables()->children.resize(1); + return true; + } + return false; } Block getHeaderForProcessingStage( @@ -34,7 +40,7 @@ Block getHeaderForProcessingStage( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage) { switch (processed_stage) diff --git a/src/Interpreters/getHeaderForProcessingStage.h b/src/Interpreters/getHeaderForProcessingStage.h index ec238edf774..75a89bc5d39 100644 --- a/src/Interpreters/getHeaderForProcessingStage.h +++ b/src/Interpreters/getHeaderForProcessingStage.h @@ -1,7 +1,9 @@ #pragma once + #include #include #include +#include namespace DB @@ -11,9 +13,9 @@ class IStorage; struct StorageInMemoryMetadata; using StorageMetadataPtr = std::shared_ptr; struct SelectQueryInfo; -class Context; class ASTSelectQuery; +bool hasJoin(const ASTSelectQuery & select); bool removeJoin(ASTSelectQuery & select); Block getHeaderForProcessingStage( @@ -21,7 +23,7 @@ Block getHeaderForProcessingStage( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage); } diff --git a/src/Interpreters/getTableExpressions.cpp b/src/Interpreters/getTableExpressions.cpp index a4e971c302c..22eb307071c 100644 --- a/src/Interpreters/getTableExpressions.cpp +++ b/src/Interpreters/getTableExpressions.cpp @@ -75,7 +75,7 @@ ASTPtr extractTableExpression(const ASTSelectQuery & select, size_t table_number static NamesAndTypesList getColumnsFromTableExpression( const ASTTableExpression & table_expression, - const Context & context, + ContextPtr context, NamesAndTypesList & materialized, 
NamesAndTypesList & aliases, NamesAndTypesList & virtuals) @@ -89,7 +89,7 @@ static NamesAndTypesList getColumnsFromTableExpression( else if (table_expression.table_function) { const auto table_function = table_expression.table_function; - auto * query_context = const_cast(&context.getQueryContext()); + auto query_context = context->getQueryContext(); const auto & function_storage = query_context->executeTableFunction(table_function); auto function_metadata_snapshot = function_storage->getInMemoryMetadataPtr(); const auto & columns = function_metadata_snapshot->getColumns(); @@ -100,7 +100,7 @@ static NamesAndTypesList getColumnsFromTableExpression( } else if (table_expression.database_and_table_name) { - auto table_id = context.resolveStorageID(table_expression.database_and_table_name); + auto table_id = context->resolveStorageID(table_expression.database_and_table_name); const auto & table = DatabaseCatalog::instance().getTable(table_id, context); auto table_metadata_snapshot = table->getInMemoryMetadataPtr(); const auto & columns = table_metadata_snapshot->getColumns(); @@ -113,7 +113,7 @@ static NamesAndTypesList getColumnsFromTableExpression( return names_and_type_list; } -NamesAndTypesList getColumnsFromTableExpression(const ASTTableExpression & table_expression, const Context & context) +NamesAndTypesList getColumnsFromTableExpression(const ASTTableExpression & table_expression, ContextPtr context) { NamesAndTypesList materialized; NamesAndTypesList aliases; @@ -121,15 +121,15 @@ NamesAndTypesList getColumnsFromTableExpression(const ASTTableExpression & table return getColumnsFromTableExpression(table_expression, context, materialized, aliases, virtuals); } -TablesWithColumns getDatabaseAndTablesWithColumns(const std::vector & table_expressions, const Context & context) +TablesWithColumns getDatabaseAndTablesWithColumns(const std::vector & table_expressions, ContextPtr context) { TablesWithColumns tables_with_columns; if (!table_expressions.empty()) { - String current_database = context.getCurrentDatabase(); - bool include_alias_cols = context.getSettingsRef().asterisk_include_alias_columns; - bool include_materialized_cols = context.getSettingsRef().asterisk_include_materialized_columns; + String current_database = context->getCurrentDatabase(); + bool include_alias_cols = context->getSettingsRef().asterisk_include_alias_columns; + bool include_materialized_cols = context->getSettingsRef().asterisk_include_materialized_columns; for (const ASTTableExpression * table_expression : table_expressions) { diff --git a/src/Interpreters/getTableExpressions.h b/src/Interpreters/getTableExpressions.h index 9254fb9d6a0..961176437b5 100644 --- a/src/Interpreters/getTableExpressions.h +++ b/src/Interpreters/getTableExpressions.h @@ -1,6 +1,7 @@ #pragma once #include +#include #include namespace DB @@ -8,7 +9,6 @@ namespace DB struct ASTTableExpression; class ASTSelectQuery; -class Context; NameSet removeDuplicateColumns(NamesAndTypesList & columns); @@ -16,7 +16,7 @@ std::vector getTableExpressions(const ASTSelectQuery const ASTTableExpression * getTableExpression(const ASTSelectQuery & select, size_t table_number); ASTPtr extractTableExpression(const ASTSelectQuery & select, size_t table_number); -NamesAndTypesList getColumnsFromTableExpression(const ASTTableExpression & table_expression, const Context & context); -TablesWithColumns getDatabaseAndTablesWithColumns(const std::vector & table_expressions, const Context & context); +NamesAndTypesList getColumnsFromTableExpression(const 
ASTTableExpression & table_expression, ContextPtr context); +TablesWithColumns getDatabaseAndTablesWithColumns(const std::vector & table_expressions, ContextPtr context); } diff --git a/src/Interpreters/inplaceBlockConversions.cpp b/src/Interpreters/inplaceBlockConversions.cpp index 47cd6cc20f6..ec625ff186e 100644 --- a/src/Interpreters/inplaceBlockConversions.cpp +++ b/src/Interpreters/inplaceBlockConversions.cpp @@ -92,7 +92,7 @@ ActionsDAGPtr createExpressions( ASTPtr expr_list, bool save_unneeded_columns, const NamesAndTypesList & required_columns, - const Context & context) + ContextPtr context) { if (!expr_list) return nullptr; @@ -114,7 +114,7 @@ ActionsDAGPtr createExpressions( } -void performRequiredConversions(Block & block, const NamesAndTypesList & required_columns, const Context & context) +void performRequiredConversions(Block & block, const NamesAndTypesList & required_columns, ContextPtr context) { ASTPtr conversion_expr_list = convertRequiredExpressions(block, required_columns); if (conversion_expr_list->children.empty()) @@ -131,7 +131,7 @@ ActionsDAGPtr evaluateMissingDefaults( const Block & header, const NamesAndTypesList & required_columns, const ColumnsDescription & columns, - const Context & context, bool save_unneeded_columns) + ContextPtr context, bool save_unneeded_columns) { if (!columns.hasDefaults()) return nullptr; diff --git a/src/Interpreters/inplaceBlockConversions.h b/src/Interpreters/inplaceBlockConversions.h index 63540e2994d..3d46523fafb 100644 --- a/src/Interpreters/inplaceBlockConversions.h +++ b/src/Interpreters/inplaceBlockConversions.h @@ -1,15 +1,16 @@ #pragma once -#include -#include +#include + #include +#include +#include namespace DB { class Block; -class Context; class NamesAndTypesList; class ColumnsDescription; @@ -22,10 +23,9 @@ ActionsDAGPtr evaluateMissingDefaults( const Block & header, const NamesAndTypesList & required_columns, const ColumnsDescription & columns, - const Context & context, bool save_unneeded_columns = true); + ContextPtr context, bool save_unneeded_columns = true); /// Tries to convert columns in block to required_columns -void performRequiredConversions(Block & block, - const NamesAndTypesList & required_columns, - const Context & context); +void performRequiredConversions(Block & block, const NamesAndTypesList & required_columns, ContextPtr context); + } diff --git a/src/Interpreters/interpretSubquery.cpp b/src/Interpreters/interpretSubquery.cpp index cf343a4fda2..2fb2f390b67 100644 --- a/src/Interpreters/interpretSubquery.cpp +++ b/src/Interpreters/interpretSubquery.cpp @@ -22,14 +22,14 @@ namespace ErrorCodes } std::shared_ptr interpretSubquery( - const ASTPtr & table_expression, const Context & context, size_t subquery_depth, const Names & required_source_columns) + const ASTPtr & table_expression, ContextPtr context, size_t subquery_depth, const Names & required_source_columns) { auto subquery_options = SelectQueryOptions(QueryProcessingStage::Complete, subquery_depth); return interpretSubquery(table_expression, context, required_source_columns, subquery_options); } std::shared_ptr interpretSubquery( - const ASTPtr & table_expression, const Context & context, const Names & required_source_columns, const SelectQueryOptions & options) + const ASTPtr & table_expression, ContextPtr context, const Names & required_source_columns, const SelectQueryOptions & options) { if (auto * expr = table_expression->as()) { @@ -59,13 +59,13 @@ std::shared_ptr interpretSubquery( * max_rows_in_join, max_bytes_in_join, 
join_overflow_mode, * which are checked separately (in the Set, Join objects). */ - Context subquery_context = context; - Settings subquery_settings = context.getSettings(); + auto subquery_context = Context::createCopy(context); + Settings subquery_settings = context->getSettings(); subquery_settings.max_result_rows = 0; subquery_settings.max_result_bytes = 0; /// The calculation of `extremes` does not make sense and is not necessary (if you do it, then the `extremes` of the subquery can be taken instead of the whole query). subquery_settings.extremes = false; - subquery_context.setSettings(subquery_settings); + subquery_context->setSettings(subquery_settings); auto subquery_options = options.subquery(); @@ -88,14 +88,14 @@ std::shared_ptr interpretSubquery( /// get columns list for target table if (function) { - auto * query_context = const_cast(&context.getQueryContext()); + auto query_context = context->getQueryContext(); const auto & storage = query_context->executeTableFunction(table_expression); columns = storage->getInMemoryMetadataPtr()->getColumns().getOrdinary(); select_query->addTableFunction(*const_cast(&table_expression)); // XXX: const_cast should be avoided! } else { - auto table_id = context.resolveStorageID(table_expression); + auto table_id = context->resolveStorageID(table_expression); const auto & storage = DatabaseCatalog::instance().getTable(table_id, context); columns = storage->getInMemoryMetadataPtr()->getColumns().getOrdinary(); select_query->replaceDatabaseAndTable(table_id); diff --git a/src/Interpreters/interpretSubquery.h b/src/Interpreters/interpretSubquery.h index 2aee6ffd81a..3836d1f7664 100644 --- a/src/Interpreters/interpretSubquery.h +++ b/src/Interpreters/interpretSubquery.h @@ -6,12 +6,10 @@ namespace DB { -class Context; +std::shared_ptr interpretSubquery( + const ASTPtr & table_expression, ContextPtr context, size_t subquery_depth, const Names & required_source_columns); std::shared_ptr interpretSubquery( - const ASTPtr & table_expression, const Context & context, size_t subquery_depth, const Names & required_source_columns); - -std::shared_ptr interpretSubquery( - const ASTPtr & table_expression, const Context & context, const Names & required_source_columns, const SelectQueryOptions & options); + const ASTPtr & table_expression, ContextPtr context, const Names & required_source_columns, const SelectQueryOptions & options); } diff --git a/src/Interpreters/loadMetadata.cpp b/src/Interpreters/loadMetadata.cpp index 71d3c7e6e5b..79076e57328 100644 --- a/src/Interpreters/loadMetadata.cpp +++ b/src/Interpreters/loadMetadata.cpp @@ -25,13 +25,14 @@ namespace DB static void executeCreateQuery( const String & query, - Context & context, + ContextPtr context, const String & database, const String & file_name, bool has_force_restore_data_flag) { ParserCreateQuery parser; - ASTPtr ast = parseQuery(parser, query.data(), query.data() + query.size(), "in file " + file_name, 0, context.getSettingsRef().max_parser_depth); + ASTPtr ast = parseQuery( + parser, query.data(), query.data() + query.size(), "in file " + file_name, 0, context->getSettingsRef().max_parser_depth); auto & ast_create_query = ast->as(); ast_create_query.database = database; @@ -45,7 +46,7 @@ static void executeCreateQuery( static void loadDatabase( - Context & context, + ContextPtr context, const String & database, const String & database_path, bool force_restore_data) @@ -73,8 +74,7 @@ static void loadDatabase( try { - executeCreateQuery(database_attach_query, context, database, - 
database_metadata_file, force_restore_data); + executeCreateQuery(database_attach_query, context, database, database_metadata_file, force_restore_data); } catch (Exception & e) { @@ -84,18 +84,18 @@ static void loadDatabase( } -void loadMetadata(Context & context, const String & default_database_name) +void loadMetadata(ContextPtr context, const String & default_database_name) { Poco::Logger * log = &Poco::Logger::get("loadMetadata"); - String path = context.getPath() + "metadata"; + String path = context->getPath() + "metadata"; /** There may exist 'force_restore_data' file, that means, * skip safety threshold on difference of data parts while initializing tables. * This file is deleted after successful loading of tables. * (flag is "one-shot") */ - Poco::File force_restore_data_flag_file(context.getFlagsPath() + "force_restore_data"); + Poco::File force_restore_data_flag_file(context->getFlagsPath() + "force_restore_data"); bool has_force_restore_data_flag = force_restore_data_flag_file.exists(); /// Loop over databases. @@ -168,9 +168,9 @@ void loadMetadata(Context & context, const String & default_database_name) } -void loadMetadataSystem(Context & context) +void loadMetadataSystem(ContextPtr context) { - String path = context.getPath() + "metadata/" + DatabaseCatalog::SYSTEM_DATABASE; + String path = context->getPath() + "metadata/" + DatabaseCatalog::SYSTEM_DATABASE; String metadata_file = path + ".sql"; if (Poco::File(path).exists() || Poco::File(metadata_file).exists()) { diff --git a/src/Interpreters/loadMetadata.h b/src/Interpreters/loadMetadata.h index b23887d5282..047def84bba 100644 --- a/src/Interpreters/loadMetadata.h +++ b/src/Interpreters/loadMetadata.h @@ -1,16 +1,16 @@ #pragma once +#include + namespace DB { -class Context; - /// Load tables from system database. Only real tables like query_log, part_log. /// You should first load system database, then attach system tables that you need into it, then load other databases. -void loadMetadataSystem(Context & context); +void loadMetadataSystem(ContextPtr context); /// Load tables from databases and add them to context. Database 'system' is ignored. Use separate function to load system tables. 
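`loadMetadata` keeps its existing flag handling and only changes how paths are read from the shared context: if a one-shot `force_restore_data` flag file exists under the flags path, safety thresholds are skipped while loading tables, and the file is deleted after a successful load. A compact sketch of that check, with `std::filesystem` standing in for `Poco::File` and a dummy `Context`:

#include <filesystem>
#include <iostream>
#include <memory>
#include <string>

struct Context
{
    std::string path = "/var/lib/clickhouse/";
    std::string getPath() const { return path; }
    std::string getFlagsPath() const { return path + "flags/"; }
};
using ContextPtr = std::shared_ptr<const Context>;

void loadMetadata(ContextPtr context)
{
    const std::filesystem::path metadata_path = context->getPath() + "metadata";
    const std::filesystem::path force_restore_flag = context->getFlagsPath() + "force_restore_data";

    // One-shot flag: its presence relaxes safety checks while loading tables.
    bool has_force_restore_data_flag = std::filesystem::exists(force_restore_flag);
    std::cout << "loading metadata from " << metadata_path
              << (has_force_restore_data_flag ? " (force restore)" : "") << '\n';

    // ... load databases here ...

    if (has_force_restore_data_flag)
        std::filesystem::remove(force_restore_flag);  // delete the flag after a successful load
}

int main()
{
    loadMetadata(std::make_shared<Context>());
}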
-void loadMetadata(Context & context, const String & default_database_name = {}); +void loadMetadata(ContextPtr context, const String & default_database_name = {}); } diff --git a/src/Interpreters/replaceAliasColumnsInQuery.cpp b/src/Interpreters/replaceAliasColumnsInQuery.cpp index 4daa787c397..4c8367b269a 100644 --- a/src/Interpreters/replaceAliasColumnsInQuery.cpp +++ b/src/Interpreters/replaceAliasColumnsInQuery.cpp @@ -6,7 +6,7 @@ namespace DB { -void replaceAliasColumnsInQuery(ASTPtr & ast, const ColumnsDescription & columns, const NameSet & forbidden_columns, const Context & context) +void replaceAliasColumnsInQuery(ASTPtr & ast, const ColumnsDescription & columns, const NameSet & forbidden_columns, ContextPtr context) { ColumnAliasesVisitor::Data aliase_column_data(columns, forbidden_columns, context); ColumnAliasesVisitor aliase_column_visitor(aliase_column_data); diff --git a/src/Interpreters/replaceAliasColumnsInQuery.h b/src/Interpreters/replaceAliasColumnsInQuery.h index bf7143ba099..92d2686b45b 100644 --- a/src/Interpreters/replaceAliasColumnsInQuery.h +++ b/src/Interpreters/replaceAliasColumnsInQuery.h @@ -1,14 +1,15 @@ #pragma once -#include #include +#include #include +#include namespace DB { class ColumnsDescription; -class Context; -void replaceAliasColumnsInQuery(ASTPtr & ast, const ColumnsDescription & columns, const NameSet & forbidden_columns, const Context & context); + +void replaceAliasColumnsInQuery(ASTPtr & ast, const ColumnsDescription & columns, const NameSet & forbidden_columns, ContextPtr context); } diff --git a/src/Interpreters/tests/in_join_subqueries_preprocessor.cpp b/src/Interpreters/tests/in_join_subqueries_preprocessor.cpp index 2b53277d02f..efe3882d086 100644 --- a/src/Interpreters/tests/in_join_subqueries_preprocessor.cpp +++ b/src/Interpreters/tests/in_join_subqueries_preprocessor.cpp @@ -1156,9 +1156,9 @@ static bool run() TestResult check(const TestEntry & entry) { - static DB::SharedContextHolder shared_context = DB::Context::createShared(); - static DB::Context context = DB::Context::createGlobal(shared_context.get()); - context.makeGlobalContext(); + static auto shared_context = DB::Context::createShared(); + static auto context = DB::Context::createGlobal(shared_context.get()); + context->makeGlobalContext(); try { @@ -1170,8 +1170,8 @@ TestResult check(const TestEntry & entry) DB::DatabaseCatalog::instance().attachDatabase("test", database); database->attachTable("visits_all", storage_distributed_visits); database->attachTable("hits_all", storage_distributed_hits); - context.setCurrentDatabase("test"); - context.setSetting("distributed_product_mode", entry.mode); + context->setCurrentDatabase("test"); + context->setSetting("distributed_product_mode", entry.mode); /// Parse and process the incoming query. 
DB::ASTPtr ast_input; diff --git a/src/Interpreters/ya.make b/src/Interpreters/ya.make index 64f931a3eaf..90998077a5a 100644 --- a/src/Interpreters/ya.make +++ b/src/Interpreters/ya.make @@ -103,6 +103,7 @@ SRCS( InterpreterSystemQuery.cpp InterpreterUseQuery.cpp InterpreterWatchQuery.cpp + InterserverCredentials.cpp JoinSwitcher.cpp JoinToSubqueryTransformVisitor.cpp JoinedTables.cpp @@ -117,6 +118,7 @@ SRCS( OpenTelemetrySpanLog.cpp OptimizeIfChains.cpp OptimizeIfWithConstantConditionVisitor.cpp + OptimizeShardingKeyRewriteInVisitor.cpp PartLog.cpp PredicateExpressionsOptimizer.cpp PredicateRewriteVisitor.cpp diff --git a/src/Parsers/ASTAlterQuery.cpp b/src/Parsers/ASTAlterQuery.cpp index df4a9a5f99a..5b052bae856 100644 --- a/src/Parsers/ASTAlterQuery.cpp +++ b/src/Parsers/ASTAlterQuery.cpp @@ -245,7 +245,7 @@ void ASTAlterCommand::formatImpl( else if (type == ASTAlterCommand::FETCH_PARTITION) { settings.ostr << (settings.hilite ? hilite_keyword : "") << indent_str << "FETCH " - << "PARTITION " << (settings.hilite ? hilite_none : ""); + << (part ? "PART " : "PARTITION ") << (settings.hilite ? hilite_none : ""); partition->formatImpl(settings, state, frame); settings.ostr << (settings.hilite ? hilite_keyword : "") << " FROM " << (settings.hilite ? hilite_none : "") << DB::quote << from; diff --git a/src/Parsers/ASTFunction.cpp b/src/Parsers/ASTFunction.cpp index 3cb2e8bfa37..6871a817351 100644 --- a/src/Parsers/ASTFunction.cpp +++ b/src/Parsers/ASTFunction.cpp @@ -3,6 +3,7 @@ #include #include #include +#include #include #include #include @@ -15,8 +16,16 @@ namespace DB { +namespace ErrorCodes +{ + extern const int UNEXPECTED_EXPRESSION; +} + void ASTFunction::appendColumnNameImpl(WriteBuffer & ostr) const { + if (name == "view") + throw Exception("Table function view cannot be used as an expression", ErrorCodes::UNEXPECTED_EXPRESSION); + writeString(name, ostr); if (parameters) @@ -226,7 +235,11 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format * interpreted as a comment. Instead, negate the literal * in place. Another possible solution is to use parentheses, * but the old comment said it is impossible, without mentioning - * the reason. + * the reason. We should also negate the nonnegative literals, + * for symmetry. We print the negated value without parentheses, + * because they are not needed around a single literal. Also we + * use formatting from FieldVisitorToString, so that the type is + * preserved (e.g. -0. is printed with trailing period). */ if (literal && name == "negate") { @@ -243,26 +256,18 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format { // The parser doesn't create decimal literals, but // they can be produced by constant folding or the - // fuzzer. + // fuzzer. Decimals are always signed, so no need + // to deduce the result type like we do for ints. const auto int_value = value.getValue().value; - // We compare to zero so we don't care about scale. - if (int_value >= 0) - { - return false; - } - - settings.ostr << ValueType{-int_value, - value.getScale()}; + settings.ostr << FieldVisitorToString{}(ValueType{ + -int_value, + value.getScale()}); } else if constexpr (std::is_arithmetic_v) { - if (value >= 0) - { - return false; - } - // We don't need parentheses around a single - // literal. 
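The continuation of the `negate` formatting hunk (below) casts the literal to a widened result type (`NumberTraits::ResultOfNegate`) before negating it and printing it through `FieldVisitorToString`. A minimal sketch of why widening before negation matters, using plain `std::int64_t` as a stand-in for the deduced result type (the real code picks a type wide enough for each input type):

#include <cstdint>
#include <iostream>

// Widen before negating, so the printed value is the mathematical negative
// rather than a modular wrap-around in the original unsigned type.
template <typename T>
void printNegated(T value)
{
    std::cout << -static_cast<std::int64_t>(value) << '\n';
}

int main()
{
    std::uint32_t v = 5;
    std::cout << -v << '\n';   // 4294967291 on a typical platform: unsigned negation wraps
    printNegated(v);           // -5: widened to a signed type first, then negated
}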
- settings.ostr << -value; + using ResultType = typename NumberTraits::ResultOfNegate::Type; + settings.ostr << FieldVisitorToString{}( + -static_cast(value)); return true; } @@ -483,14 +488,14 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format if (!written && 0 == strcmp(name.c_str(), "map")) { - settings.ostr << (settings.hilite ? hilite_operator : "") << '{' << (settings.hilite ? hilite_none : ""); + settings.ostr << (settings.hilite ? hilite_operator : "") << "map(" << (settings.hilite ? hilite_none : ""); for (size_t i = 0; i < arguments->children.size(); ++i) { if (i != 0) settings.ostr << ", "; arguments->children[i]->formatImpl(settings, state, nested_dont_need_parens); } - settings.ostr << (settings.hilite ? hilite_operator : "") << '}' << (settings.hilite ? hilite_none : ""); + settings.ostr << (settings.hilite ? hilite_operator : "") << ')' << (settings.hilite ? hilite_none : ""); written = true; } } diff --git a/src/Parsers/ASTFunctionWithKeyValueArguments.h b/src/Parsers/ASTFunctionWithKeyValueArguments.h index 88ab712cc04..f5eaa33bfc7 100644 --- a/src/Parsers/ASTFunctionWithKeyValueArguments.h +++ b/src/Parsers/ASTFunctionWithKeyValueArguments.h @@ -20,7 +20,7 @@ public: bool second_with_brackets; public: - ASTPair(bool second_with_brackets_) + explicit ASTPair(bool second_with_brackets_) : second_with_brackets(second_with_brackets_) { } @@ -49,7 +49,7 @@ public: /// Has brackets around arguments bool has_brackets; - ASTFunctionWithKeyValueArguments(bool has_brackets_ = true) + explicit ASTFunctionWithKeyValueArguments(bool has_brackets_ = true) : has_brackets(has_brackets_) { } diff --git a/src/Parsers/ASTSystemQuery.cpp b/src/Parsers/ASTSystemQuery.cpp index 71bda0c7709..c929383a256 100644 --- a/src/Parsers/ASTSystemQuery.cpp +++ b/src/Parsers/ASTSystemQuery.cpp @@ -54,6 +54,10 @@ const char * ASTSystemQuery::typeToString(Type type) return "RELOAD DICTIONARY"; case Type::RELOAD_DICTIONARIES: return "RELOAD DICTIONARIES"; + case Type::RELOAD_MODEL: + return "RELOAD MODEL"; + case Type::RELOAD_MODELS: + return "RELOAD MODELS"; case Type::RELOAD_EMBEDDED_DICTIONARIES: return "RELOAD EMBEDDED DICTIONARIES"; case Type::RELOAD_CONFIG: diff --git a/src/Parsers/ASTSystemQuery.h b/src/Parsers/ASTSystemQuery.h index 5bcdcc7875d..af3244573e4 100644 --- a/src/Parsers/ASTSystemQuery.h +++ b/src/Parsers/ASTSystemQuery.h @@ -36,6 +36,8 @@ public: SYNC_REPLICA, RELOAD_DICTIONARY, RELOAD_DICTIONARIES, + RELOAD_MODEL, + RELOAD_MODELS, RELOAD_EMBEDDED_DICTIONARIES, RELOAD_CONFIG, RELOAD_SYMBOLS, @@ -63,6 +65,7 @@ public: Type type = Type::UNKNOWN; String target_dictionary; + String target_model; String database; String table; String replica; diff --git a/src/Parsers/ExpressionElementParsers.cpp b/src/Parsers/ExpressionElementParsers.cpp index 84c178790b2..3e635b2accc 100644 --- a/src/Parsers/ExpressionElementParsers.cpp +++ b/src/Parsers/ExpressionElementParsers.cpp @@ -580,18 +580,6 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p else if (parser_literal.parse(pos, ast_literal, expected)) { const Field & value = ast_literal->as().value; - if ((node->frame.type == WindowFrame::FrameType::Rows - || node->frame.type == WindowFrame::FrameType::Groups) - && !(value.getType() == Field::Types::UInt64 - || (value.getType() == Field::Types::Int64 - && value.get() >= 0))) - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Frame offset for '{}' frame must be a nonnegative integer, '{}' of type '{}' given.", - 
WindowFrame::toString(node->frame.type), - applyVisitor(FieldVisitorToString(), value), - Field::Types::toString(value.getType())); - } node->frame.begin_offset = value; node->frame.begin_type = WindowFrame::BoundaryType::Offset; } @@ -641,18 +629,6 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p else if (parser_literal.parse(pos, ast_literal, expected)) { const Field & value = ast_literal->as().value; - if ((node->frame.type == WindowFrame::FrameType::Rows - || node->frame.type == WindowFrame::FrameType::Groups) - && !(value.getType() == Field::Types::UInt64 - || (value.getType() == Field::Types::Int64 - && value.get() >= 0))) - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Frame offset for '{}' frame must be a nonnegative integer, '{}' of type '{}' given.", - WindowFrame::toString(node->frame.type), - applyVisitor(FieldVisitorToString(), value), - Field::Types::toString(value.getType())); - } node->frame.end_offset = value; node->frame.end_type = WindowFrame::BoundaryType::Offset; } diff --git a/src/Parsers/ExpressionElementParsers.h b/src/Parsers/ExpressionElementParsers.h index cbbbd3f6d3b..f8b2408ac16 100644 --- a/src/Parsers/ExpressionElementParsers.h +++ b/src/Parsers/ExpressionElementParsers.h @@ -45,7 +45,7 @@ protected: class ParserIdentifier : public IParserBase { public: - ParserIdentifier(bool allow_query_parameter_ = false) : allow_query_parameter(allow_query_parameter_) {} + explicit ParserIdentifier(bool allow_query_parameter_ = false) : allow_query_parameter(allow_query_parameter_) {} protected: const char * getName() const override { return "identifier"; } bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override; @@ -59,7 +59,7 @@ protected: class ParserCompoundIdentifier : public IParserBase { public: - ParserCompoundIdentifier(bool table_name_with_optional_uuid_ = false, bool allow_query_parameter_ = false) + explicit ParserCompoundIdentifier(bool table_name_with_optional_uuid_ = false, bool allow_query_parameter_ = false) : table_name_with_optional_uuid(table_name_with_optional_uuid_), allow_query_parameter(allow_query_parameter_) { } @@ -85,7 +85,7 @@ public: using ColumnTransformers = MultiEnum; static constexpr auto AllTransformers = ColumnTransformers{ColumnTransformer::APPLY, ColumnTransformer::EXCEPT, ColumnTransformer::REPLACE}; - ParserColumnsTransformers(ColumnTransformers allowed_transformers_ = AllTransformers, bool is_strict_ = false) + explicit ParserColumnsTransformers(ColumnTransformers allowed_transformers_ = AllTransformers, bool is_strict_ = false) : allowed_transformers(allowed_transformers_) , is_strict(is_strict_) {} @@ -103,7 +103,7 @@ class ParserAsterisk : public IParserBase { public: using ColumnTransformers = ParserColumnsTransformers::ColumnTransformers; - ParserAsterisk(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) + explicit ParserAsterisk(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) : allowed_transformers(allowed_transformers_) {} @@ -129,7 +129,7 @@ class ParserColumnsMatcher : public IParserBase { public: using ColumnTransformers = ParserColumnsTransformers::ColumnTransformers; - ParserColumnsMatcher(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) + explicit ParserColumnsMatcher(ColumnTransformers allowed_transformers_ = ParserColumnsTransformers::AllTransformers) : allowed_transformers(allowed_transformers_) {} @@ -149,7 +149,7 @@ protected: class ParserFunction : 
public IParserBase { public: - ParserFunction(bool allow_function_parameters_ = true, bool is_table_function_ = false) + explicit ParserFunction(bool allow_function_parameters_ = true, bool is_table_function_ = false) : allow_function_parameters(allow_function_parameters_), is_table_function(is_table_function_) { } diff --git a/src/Parsers/New/AST/Identifier.cpp b/src/Parsers/New/AST/Identifier.cpp index a5c41bf9876..3b931d19720 100644 --- a/src/Parsers/New/AST/Identifier.cpp +++ b/src/Parsers/New/AST/Identifier.cpp @@ -142,16 +142,19 @@ antlrcpp::Any ParseTreeVisitor::visitIdentifierOrNull(ClickHouseParser::Identifi antlrcpp::Any ParseTreeVisitor::visitInterval(ClickHouseParser::IntervalContext *) { + asm (""); // prevent symbol removal __builtin_unreachable(); } antlrcpp::Any ParseTreeVisitor::visitKeyword(ClickHouseParser::KeywordContext *) { + asm (""); // prevent symbol removal __builtin_unreachable(); } antlrcpp::Any ParseTreeVisitor::visitKeywordForAlias(ClickHouseParser::KeywordForAliasContext *) { + asm (""); // prevent symbol removal __builtin_unreachable(); } diff --git a/src/Parsers/ParserAlterQuery.cpp b/src/Parsers/ParserAlterQuery.cpp index e5cc4b1b95e..de524342fb4 100644 --- a/src/Parsers/ParserAlterQuery.cpp +++ b/src/Parsers/ParserAlterQuery.cpp @@ -61,6 +61,7 @@ bool ParserAlterCommand::parseImpl(Pos & pos, ASTPtr & node, Expected & expected ParserKeyword s_drop_detached_partition("DROP DETACHED PARTITION"); ParserKeyword s_drop_detached_part("DROP DETACHED PART"); ParserKeyword s_fetch_partition("FETCH PARTITION"); + ParserKeyword s_fetch_part("FETCH PART"); ParserKeyword s_replace_partition("REPLACE PARTITION"); ParserKeyword s_freeze("FREEZE"); ParserKeyword s_unfreeze("UNFREEZE"); @@ -428,6 +429,21 @@ bool ParserAlterCommand::parseImpl(Pos & pos, ASTPtr & node, Expected & expected command->from = ast_from->as().value.get(); command->type = ASTAlterCommand::FETCH_PARTITION; } + else if (s_fetch_part.ignore(pos, expected)) + { + if (!parser_string_literal.parse(pos, command->partition, expected)) + return false; + + if (!s_from.ignore(pos, expected)) + return false; + + ASTPtr ast_from; + if (!parser_string_literal.parse(pos, ast_from, expected)) + return false; + command->from = ast_from->as().value.get(); + command->part = true; + command->type = ASTAlterCommand::FETCH_PARTITION; + } else if (s_freeze.ignore(pos, expected)) { if (s_partition.ignore(pos, expected)) diff --git a/src/Parsers/ParserSystemQuery.cpp b/src/Parsers/ParserSystemQuery.cpp index 491037da9a9..2fc168ea167 100644 --- a/src/Parsers/ParserSystemQuery.cpp +++ b/src/Parsers/ParserSystemQuery.cpp @@ -57,7 +57,35 @@ bool ParserSystemQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Expected & return false; break; } + case Type::RELOAD_MODEL: + { + String cluster_str; + if (ParserKeyword{"ON"}.ignore(pos, expected)) + { + if (!ASTQueryWithOnCluster::parse(pos, cluster_str, expected)) + return false; + } + res->cluster = cluster_str; + ASTPtr ast; + if (ParserStringLiteral{}.parse(pos, ast, expected)) + { + res->target_model = ast->as().value.safeGet(); + } + else + { + ParserIdentifier model_parser; + ASTPtr model; + String target_model; + if (!model_parser.parse(pos, model, expected)) + return false; + + if (!tryGetIdentifierNameInto(model, res->target_model)) + return false; + } + + break; + } case Type::DROP_REPLICA: { ASTPtr ast; diff --git a/src/Processors/DelayedPortsProcessor.cpp b/src/Processors/DelayedPortsProcessor.cpp index ae4ba4659aa..8174619f8ce 100644 --- 
a/src/Processors/DelayedPortsProcessor.cpp +++ b/src/Processors/DelayedPortsProcessor.cpp @@ -8,9 +8,35 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } +InputPorts createInputPorts( + const Block & header, + size_t num_ports, + IProcessor::PortNumbers delayed_ports, + bool assert_main_ports_empty) +{ + if (!assert_main_ports_empty) + return InputPorts(num_ports, header); + + InputPorts res; + std::sort(delayed_ports.begin(), delayed_ports.end()); + size_t next_delayed_port = 0; + for (size_t i = 0; i < num_ports; ++i) + { + if (next_delayed_port < delayed_ports.size() && i == delayed_ports[next_delayed_port]) + { + res.emplace_back(header); + ++next_delayed_port; + } + else + res.emplace_back(Block()); + } + + return res; +} + DelayedPortsProcessor::DelayedPortsProcessor( const Block & header, size_t num_ports, const PortNumbers & delayed_ports, bool assert_main_ports_empty) - : IProcessor(InputPorts(num_ports, header), + : IProcessor(createInputPorts(header, num_ports, delayed_ports, assert_main_ports_empty), OutputPorts((assert_main_ports_empty ? delayed_ports.size() : num_ports), header)) , num_delayed_ports(delayed_ports.size()) { diff --git a/src/Processors/Executors/PipelineExecutor.cpp b/src/Processors/Executors/PipelineExecutor.cpp index a724f22ed31..b1751dfd030 100644 --- a/src/Processors/Executors/PipelineExecutor.cpp +++ b/src/Processors/Executors/PipelineExecutor.cpp @@ -1,14 +1,15 @@ -#include #include #include -#include #include -#include #include -#include #include +#include +#include +#include +#include #include #include +#include #ifndef NDEBUG #include @@ -740,7 +741,7 @@ void PipelineExecutor::executeImpl(size_t num_threads) bool finished_flag = false; - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (!finished_flag) { finish(); @@ -766,9 +767,9 @@ void PipelineExecutor::executeImpl(size_t num_threads) if (thread_group) CurrentThread::attachTo(thread_group); - SCOPE_EXIT( - if (thread_group) - CurrentThread::detachQueryIfNotDetached(); + SCOPE_EXIT_SAFE( + if (thread_group) + CurrentThread::detachQueryIfNotDetached(); ); try diff --git a/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp b/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp index f1626414375..9f1999bc4a3 100644 --- a/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp +++ b/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp @@ -5,7 +5,7 @@ #include #include -#include +#include namespace DB { @@ -72,7 +72,7 @@ static void threadFunction(PullingAsyncPipelineExecutor::Data & data, ThreadGrou if (thread_group) CurrentThread::attachTo(thread_group); - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (thread_group) CurrentThread::detachQueryIfNotDetached(); ); diff --git a/src/Processors/Formats/IInputFormat.cpp b/src/Processors/Formats/IInputFormat.cpp index 069d25564b1..5594e04dc74 100644 --- a/src/Processors/Formats/IInputFormat.cpp +++ b/src/Processors/Formats/IInputFormat.cpp @@ -5,11 +5,6 @@ namespace DB { -namespace ErrorCodes -{ - extern const int LOGICAL_ERROR; -} - IInputFormat::IInputFormat(Block header, ReadBuffer & in_) : ISource(std::move(header)), in(in_) { @@ -18,9 +13,7 @@ IInputFormat::IInputFormat(Block header, ReadBuffer & in_) void IInputFormat::resetParser() { - if (in.hasPendingData()) - throw Exception("Unread data in IInputFormat::resetParser. 
Most likely it's a bug.", ErrorCodes::LOGICAL_ERROR); - + in.ignoreAll(); // those are protected attributes from ISource (I didn't want to propagate resetParser up there) finished = false; got_exception = false; diff --git a/src/Processors/Formats/IRowInputFormat.cpp b/src/Processors/Formats/IRowInputFormat.cpp index 75a9abf6845..52e64a9d90d 100644 --- a/src/Processors/Formats/IRowInputFormat.cpp +++ b/src/Processors/Formats/IRowInputFormat.cpp @@ -190,7 +190,7 @@ Chunk IRowInputFormat::generate() if (num_errors && (params.allow_errors_num > 0 || params.allow_errors_ratio > 0)) { Poco::Logger * log = &Poco::Logger::get("IRowInputFormat"); - LOG_TRACE(log, "Skipped {} rows with errors while reading the input stream", num_errors); + LOG_DEBUG(log, "Skipped {} rows with errors while reading the input stream", num_errors); } readSuffix(); diff --git a/src/Processors/Formats/IRowInputFormat.h b/src/Processors/Formats/IRowInputFormat.h index c802bd3066b..8c600ad7285 100644 --- a/src/Processors/Formats/IRowInputFormat.h +++ b/src/Processors/Formats/IRowInputFormat.h @@ -14,7 +14,7 @@ namespace DB /// Contains extra information about read data. struct RowReadExtension { - /// IRowInputStream.read() output. It contains non zero for columns that actually read from the source and zero otherwise. + /// IRowInputFormat::read output. It contains non zero for columns that actually read from the source and zero otherwise. /// It's used to attach defaults for partially filled rows. std::vector read_columns; }; diff --git a/src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp b/src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp index 4edef1f1365..52d2cf98c25 100644 --- a/src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp +++ b/src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp @@ -24,7 +24,6 @@ namespace ErrorCodes ArrowBlockInputFormat::ArrowBlockInputFormat(ReadBuffer & in_, const Block & header_, bool stream_) : IInputFormat(header_, in_), stream{stream_} { - prepareReader(); } Chunk ArrowBlockInputFormat::generate() @@ -35,12 +34,18 @@ Chunk ArrowBlockInputFormat::generate() if (stream) { + if (!stream_reader) + prepareReader(); + batch_result = stream_reader->Next(); if (batch_result.ok() && !(*batch_result)) return res; } else { + if (!file_reader) + prepareReader(); + if (record_batch_current >= record_batch_total) return res; @@ -71,14 +76,14 @@ void ArrowBlockInputFormat::resetParser() stream_reader.reset(); else file_reader.reset(); - prepareReader(); + record_batch_current = 0; } void ArrowBlockInputFormat::prepareReader() { if (stream) { - auto stream_reader_status = arrow::ipc::RecordBatchStreamReader::Open(asArrowFile(in)); + auto stream_reader_status = arrow::ipc::RecordBatchStreamReader::Open(std::make_unique(in)); if (!stream_reader_status.ok()) throw Exception(ErrorCodes::UNKNOWN_EXCEPTION, "Error while opening a table: {}", stream_reader_status.status().ToString()); @@ -101,7 +106,7 @@ void ArrowBlockInputFormat::prepareReader() record_batch_current = 0; } -void registerInputFormatProcessorArrow(FormatFactory &factory) +void registerInputFormatProcessorArrow(FormatFactory & factory) { factory.registerInputFormatProcessor( "Arrow", @@ -112,7 +117,7 @@ void registerInputFormatProcessorArrow(FormatFactory &factory) { return std::make_shared(buf, sample, false); }); - + factory.markFormatAsColumnOriented("Arrow"); factory.registerInputFormatProcessor( "ArrowStream", [](ReadBuffer & buf, diff --git a/src/Processors/Formats/Impl/ArrowBufferedStreams.cpp 
b/src/Processors/Formats/Impl/ArrowBufferedStreams.cpp index c783e10debb..9582e0c3312 100644 --- a/src/Processors/Formats/Impl/ArrowBufferedStreams.cpp +++ b/src/Processors/Formats/Impl/ArrowBufferedStreams.cpp @@ -55,26 +55,23 @@ arrow::Status RandomAccessFileFromSeekableReadBuffer::Close() arrow::Result RandomAccessFileFromSeekableReadBuffer::Tell() const { - return arrow::Result(in.getPosition()); + return in.getPosition(); } arrow::Result RandomAccessFileFromSeekableReadBuffer::Read(int64_t nbytes, void * out) { - int64_t bytes_read = in.readBig(reinterpret_cast(out), nbytes); - return arrow::Result(bytes_read); + return in.readBig(reinterpret_cast(out), nbytes); } arrow::Result> RandomAccessFileFromSeekableReadBuffer::Read(int64_t nbytes) { - auto buffer_status = arrow::AllocateBuffer(nbytes); - ARROW_RETURN_NOT_OK(buffer_status); + ARROW_ASSIGN_OR_RAISE(auto buffer, arrow::AllocateResizableBuffer(nbytes)) + ARROW_ASSIGN_OR_RAISE(int64_t bytes_read, Read(nbytes, buffer->mutable_data())) - auto shared_buffer = std::shared_ptr(std::move(std::move(*buffer_status))); + if (bytes_read < nbytes) + RETURN_NOT_OK(buffer->Resize(bytes_read)); - size_t n = in.readBig(reinterpret_cast(shared_buffer->mutable_data()), nbytes); - - auto read_buffer = arrow::SliceBuffer(shared_buffer, 0, n); - return arrow::Result>(shared_buffer); + return buffer; } arrow::Status RandomAccessFileFromSeekableReadBuffer::Seek(int64_t position) @@ -83,6 +80,43 @@ arrow::Status RandomAccessFileFromSeekableReadBuffer::Seek(int64_t position) return arrow::Status::OK(); } + +ArrowInputStreamFromReadBuffer::ArrowInputStreamFromReadBuffer(ReadBuffer & in_) : in(in_), is_open{true} +{ +} + +arrow::Result ArrowInputStreamFromReadBuffer::Read(int64_t nbytes, void * out) +{ + return in.readBig(reinterpret_cast(out), nbytes); +} + +arrow::Result> ArrowInputStreamFromReadBuffer::Read(int64_t nbytes) +{ + ARROW_ASSIGN_OR_RAISE(auto buffer, arrow::AllocateResizableBuffer(nbytes)) + ARROW_ASSIGN_OR_RAISE(int64_t bytes_read, Read(nbytes, buffer->mutable_data())) + + if (bytes_read < nbytes) + RETURN_NOT_OK(buffer->Resize(bytes_read)); + + return buffer; +} + +arrow::Status ArrowInputStreamFromReadBuffer::Abort() +{ + return arrow::Status(); +} + +arrow::Result ArrowInputStreamFromReadBuffer::Tell() const +{ + return in.count(); +} + +arrow::Status ArrowInputStreamFromReadBuffer::Close() +{ + is_open = false; + return arrow::Status(); +} + std::shared_ptr asArrowFile(ReadBuffer & in) { if (auto * fd_in = dynamic_cast(&in)) diff --git a/src/Processors/Formats/Impl/ArrowBufferedStreams.h b/src/Processors/Formats/Impl/ArrowBufferedStreams.h index bb94535549c..a10a5bcabdb 100644 --- a/src/Processors/Formats/Impl/ArrowBufferedStreams.h +++ b/src/Processors/Formats/Impl/ArrowBufferedStreams.h @@ -61,6 +61,24 @@ private: ARROW_DISALLOW_COPY_AND_ASSIGN(RandomAccessFileFromSeekableReadBuffer); }; +class ArrowInputStreamFromReadBuffer : public arrow::io::InputStream +{ +public: + explicit ArrowInputStreamFromReadBuffer(ReadBuffer & in); + arrow::Result Read(int64_t nbytes, void* out) override; + arrow::Result> Read(int64_t nbytes) override; + arrow::Status Abort() override; + arrow::Result Tell() const override; + arrow::Status Close() override; + bool closed() const override { return !is_open; } + +private: + ReadBuffer & in; + bool is_open = false; + + ARROW_DISALLOW_COPY_AND_ASSIGN(ArrowInputStreamFromReadBuffer); +}; + std::shared_ptr asArrowFile(ReadBuffer & in); } diff --git a/src/Processors/Formats/Impl/CSVRowInputFormat.cpp 
b/src/Processors/Formats/Impl/CSVRowInputFormat.cpp index 00381ab96d0..4ccc0db4cfe 100644 --- a/src/Processors/Formats/Impl/CSVRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/CSVRowInputFormat.cpp @@ -201,7 +201,10 @@ void CSVRowInputFormat::readPrefix() return; } else + { skipRow(in, format_settings.csv, num_columns); + setupAllColumnsByTableSchema(); + } } else if (!column_mapping->is_set) setupAllColumnsByTableSchema(); diff --git a/src/Processors/Formats/Impl/ConstantExpressionTemplate.cpp b/src/Processors/Formats/Impl/ConstantExpressionTemplate.cpp index caf57ded8b7..288c6ee09ef 100644 --- a/src/Processors/Formats/Impl/ConstantExpressionTemplate.cpp +++ b/src/Processors/Formats/Impl/ConstantExpressionTemplate.cpp @@ -144,9 +144,9 @@ class ReplaceLiteralsVisitor { public: LiteralsInfo replaced_literals; - const Context & context; + ContextPtr context; - explicit ReplaceLiteralsVisitor(const Context & context_) : context(context_) { } + explicit ReplaceLiteralsVisitor(ContextPtr context_) : context(context_) { } void visit(ASTPtr & ast, bool force_nullable) { @@ -293,7 +293,7 @@ private: /// E.g. template of "position('some string', 'other string') != 0" is /// ["position", "(", DataTypeString, ",", DataTypeString, ")", "!=", DataTypeUInt64] ConstantExpressionTemplate::TemplateStructure::TemplateStructure(LiteralsInfo & replaced_literals, TokenIterator expression_begin, TokenIterator expression_end, - ASTPtr & expression, const IDataType & result_type, bool null_as_default_, const Context & context) + ASTPtr & expression, const IDataType & result_type, bool null_as_default_, ContextPtr context) { null_as_default = null_as_default_; @@ -377,7 +377,7 @@ ConstantExpressionTemplate::Cache::getFromCacheOrConstruct(const DataTypePtr & r TokenIterator expression_begin, TokenIterator expression_end, const ASTPtr & expression_, - const Context & context, + ContextPtr context, bool * found_in_cache, const String & salt) { @@ -385,7 +385,7 @@ ConstantExpressionTemplate::Cache::getFromCacheOrConstruct(const DataTypePtr & r ASTPtr expression = expression_->clone(); ReplaceLiteralsVisitor visitor(context); visitor.visit(expression, result_column_type->isNullable() || null_as_default); - ReplaceQueryParameterVisitor param_visitor(context.getQueryParameters()); + ReplaceQueryParameterVisitor param_visitor(context->getQueryParameters()); param_visitor.visit(expression); size_t template_hash = TemplateStructure::getTemplateHash(expression, visitor.replaced_literals, result_column_type, null_as_default, salt); diff --git a/src/Processors/Formats/Impl/ConstantExpressionTemplate.h b/src/Processors/Formats/Impl/ConstantExpressionTemplate.h index 4317cf4a3da..6659243df63 100644 --- a/src/Processors/Formats/Impl/ConstantExpressionTemplate.h +++ b/src/Processors/Formats/Impl/ConstantExpressionTemplate.h @@ -23,7 +23,7 @@ class ConstantExpressionTemplate : boost::noncopyable struct TemplateStructure : boost::noncopyable { TemplateStructure(LiteralsInfo & replaced_literals, TokenIterator expression_begin, TokenIterator expression_end, - ASTPtr & expr, const IDataType & result_type, bool null_as_default_, const Context & context); + ASTPtr & expr, const IDataType & result_type, bool null_as_default_, ContextPtr context); static void addNodesToCastResult(const IDataType & result_column_type, ASTPtr & expr, bool null_as_default); static size_t getTemplateHash(const ASTPtr & expression, const LiteralsInfo & replaced_literals, @@ -59,7 +59,7 @@ public: TokenIterator expression_begin, TokenIterator expression_end, 
const ASTPtr & expression_, - const Context & context, + ContextPtr context, bool * found_in_cache = nullptr, const String & salt = {}); }; diff --git a/src/Processors/Formats/Impl/MarkdownRowOutputFormat.cpp b/src/Processors/Formats/Impl/MarkdownRowOutputFormat.cpp index 5108650ff0d..ee5d4193a45 100644 --- a/src/Processors/Formats/Impl/MarkdownRowOutputFormat.cpp +++ b/src/Processors/Formats/Impl/MarkdownRowOutputFormat.cpp @@ -21,16 +21,13 @@ void MarkdownRowOutputFormat::writePrefix() } writeCString("\n|", out); String left_alignment = ":-|"; - String central_alignment = ":-:|"; String right_alignment = "-:|"; for (size_t i = 0; i < columns; ++i) { - if (isInteger(types[i])) + if (types[i]->shouldAlignRightInPrettyFormats()) writeString(right_alignment, out); - else if (isString(types[i])) - writeString(left_alignment, out); else - writeString(central_alignment, out); + writeString(left_alignment, out); } writeChar('\n', out); } diff --git a/src/Processors/Formats/Impl/MySQLOutputFormat.cpp b/src/Processors/Formats/Impl/MySQLOutputFormat.cpp index 9733d479a77..0f73349c271 100644 --- a/src/Processors/Formats/Impl/MySQLOutputFormat.cpp +++ b/src/Processors/Formats/Impl/MySQLOutputFormat.cpp @@ -40,7 +40,7 @@ void MySQLOutputFormat::initialize() packet_endpoint->sendPacket(getColumnDefinition(column_name, data_types[i]->getTypeId())); } - if (!(context->mysql.client_capabilities & Capability::CLIENT_DEPRECATE_EOF)) + if (!(getContext()->mysql.client_capabilities & Capability::CLIENT_DEPRECATE_EOF)) { packet_endpoint->sendPacket(EOFPacket(0, 0)); } @@ -64,7 +64,7 @@ void MySQLOutputFormat::finalize() { size_t affected_rows = 0; std::string human_readable_info; - if (QueryStatus * process_list_elem = context->getProcessListElement()) + if (QueryStatus * process_list_elem = getContext()->getProcessListElement()) { CurrentThread::finalizePerformanceCounters(); QueryStatusInfo info = process_list_elem->getInfo(); @@ -78,10 +78,11 @@ void MySQLOutputFormat::finalize() const auto & header = getPort(PortKind::Main).getHeader(); if (header.columns() == 0) - packet_endpoint->sendPacket(OKPacket(0x0, context->mysql.client_capabilities, affected_rows, 0, 0, "", human_readable_info), true); - else - if (context->mysql.client_capabilities & CLIENT_DEPRECATE_EOF) - packet_endpoint->sendPacket(OKPacket(0xfe, context->mysql.client_capabilities, affected_rows, 0, 0, "", human_readable_info), true); + packet_endpoint->sendPacket( + OKPacket(0x0, getContext()->mysql.client_capabilities, affected_rows, 0, 0, "", human_readable_info), true); + else if (getContext()->mysql.client_capabilities & CLIENT_DEPRECATE_EOF) + packet_endpoint->sendPacket( + OKPacket(0xfe, getContext()->mysql.client_capabilities, affected_rows, 0, 0, "", human_readable_info), true); else packet_endpoint->sendPacket(EOFPacket(0, 0), true); } diff --git a/src/Processors/Formats/Impl/MySQLOutputFormat.h b/src/Processors/Formats/Impl/MySQLOutputFormat.h index c47bbaadc33..01a892410df 100644 --- a/src/Processors/Formats/Impl/MySQLOutputFormat.h +++ b/src/Processors/Formats/Impl/MySQLOutputFormat.h @@ -15,21 +15,20 @@ namespace DB class IColumn; class IDataType; class WriteBuffer; -class Context; /** A stream for outputting data in a binary line-by-line format. 
*/ -class MySQLOutputFormat final : public IOutputFormat +class MySQLOutputFormat final : public IOutputFormat, WithConstContext { public: MySQLOutputFormat(WriteBuffer & out_, const Block & header_, const FormatSettings & settings_); String getName() const override { return "MySQLOutputFormat"; } - void setContext(const Context & context_) + void setContext(ContextConstPtr context_) { - context = &context_; - packet_endpoint = std::make_unique(out, const_cast(context_.mysql.sequence_id)); /// TODO: fix it + context = context_; + packet_endpoint = std::make_unique(out, const_cast(getContext()->mysql.sequence_id)); /// TODO: fix it } void consume(Chunk) override; @@ -40,10 +39,8 @@ public: void initialize(); private: - bool initialized = false; - const Context * context = nullptr; std::unique_ptr packet_endpoint; FormatSettings format_settings; DataTypes data_types; diff --git a/src/Processors/Formats/Impl/ORCBlockInputFormat.cpp b/src/Processors/Formats/Impl/ORCBlockInputFormat.cpp index 7776a904f1c..6f43addc4ed 100644 --- a/src/Processors/Formats/Impl/ORCBlockInputFormat.cpp +++ b/src/Processors/Formats/Impl/ORCBlockInputFormat.cpp @@ -19,6 +19,13 @@ namespace ErrorCodes extern const int CANNOT_READ_ALL_DATA; } +#define THROW_ARROW_NOT_OK(status) \ + do \ + { \ + if (::arrow::Status _s = (status); !_s.ok()) \ + throw Exception(_s.ToString(), ErrorCodes::BAD_ARGUMENTS); \ + } while (false) + ORCBlockInputFormat::ORCBlockInputFormat(ReadBuffer & in_, Block header_) : IInputFormat(std::move(header_), in_) { } @@ -28,21 +35,26 @@ Chunk ORCBlockInputFormat::generate() Chunk res; const Block & header = getPort().getHeader(); - if (file_reader) + if (!file_reader) + prepareReader(); + + if (stripe_current >= stripe_total) return res; - arrow::Status open_status = arrow::adapters::orc::ORCFileReader::Open(asArrowFile(in), arrow::default_memory_pool(), &file_reader); - if (!open_status.ok()) - throw Exception(open_status.ToString(), ErrorCodes::BAD_ARGUMENTS); + std::shared_ptr batch_result; + arrow::Status batch_status = file_reader->ReadStripe(stripe_current, include_indices, &batch_result); + if (!batch_status.ok()) + throw ParsingException(ErrorCodes::CANNOT_READ_ALL_DATA, + "Error while reading batch of ORC data: {}", batch_status.ToString()); - std::shared_ptr table; - arrow::Status read_status = file_reader->Read(&table); - if (!read_status.ok()) - throw ParsingException{"Error while reading ORC data: " + read_status.ToString(), - ErrorCodes::CANNOT_READ_ALL_DATA}; + auto table_result = arrow::Table::FromRecordBatches({batch_result}); + if (!table_result.ok()) + throw ParsingException(ErrorCodes::CANNOT_READ_ALL_DATA, + "Error while reading batch of ORC data: {}", table_result.status().ToString()); - ArrowColumnToCHColumn::arrowTableToCHChunk(res, table, header, "ORC"); + ++stripe_current; + ArrowColumnToCHColumn::arrowTableToCHChunk(res, *table_result, header, "ORC"); return res; } @@ -51,6 +63,26 @@ void ORCBlockInputFormat::resetParser() IInputFormat::resetParser(); file_reader.reset(); + include_indices.clear(); + stripe_current = 0; +} + +void ORCBlockInputFormat::prepareReader() +{ + THROW_ARROW_NOT_OK(arrow::adapters::orc::ORCFileReader::Open(asArrowFile(in), arrow::default_memory_pool(), &file_reader)); + stripe_total = file_reader->NumberOfStripes(); + stripe_current = 0; + + std::shared_ptr schema; + THROW_ARROW_NOT_OK(file_reader->ReadSchema(&schema)); + + for (int i = 0; i < schema->num_fields(); ++i) + { + if (getPort().getHeader().has(schema->field(i)->name())) + { + 
include_indices.push_back(i+1); + } + } } void registerInputFormatProcessorORC(FormatFactory &factory) @@ -64,6 +96,7 @@ void registerInputFormatProcessorORC(FormatFactory &factory) { return std::make_shared(buf, sample); }); + factory.markFormatAsColumnOriented("ORC"); } } diff --git a/src/Processors/Formats/Impl/ORCBlockInputFormat.h b/src/Processors/Formats/Impl/ORCBlockInputFormat.h index cff42560366..0c78290f3cc 100644 --- a/src/Processors/Formats/Impl/ORCBlockInputFormat.h +++ b/src/Processors/Formats/Impl/ORCBlockInputFormat.h @@ -25,6 +25,15 @@ private: // TODO: check that this class implements every part of its parent std::unique_ptr file_reader; + + int stripe_total = 0; + + int stripe_current = 0; + + // indices of columns to read from ORC file + std::vector include_indices; + + void prepareReader(); }; } diff --git a/src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp b/src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp index 1ad913a1a59..f295fe00299 100644 --- a/src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp +++ b/src/Processors/Formats/Impl/ParallelParsingInputFormat.cpp @@ -2,14 +2,14 @@ #include #include #include -#include +#include namespace DB { void ParallelParsingInputFormat::segmentatorThreadFunction(ThreadGroupStatusPtr thread_group) { - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (thread_group) CurrentThread::detachQueryIfNotDetached(); ); @@ -60,7 +60,7 @@ void ParallelParsingInputFormat::segmentatorThreadFunction(ThreadGroupStatusPtr void ParallelParsingInputFormat::parserThreadFunction(ThreadGroupStatusPtr thread_group, size_t current_ticket_number) { - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (thread_group) CurrentThread::detachQueryIfNotDetached(); ); diff --git a/src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp b/src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp index bb55c71b7ca..162185e75b8 100644 --- a/src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp +++ b/src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp @@ -94,6 +94,7 @@ void registerInputFormatProcessorParquet(FormatFactory &factory) { return std::make_shared(buf, sample); }); + factory.markFormatAsColumnOriented("Parquet"); } } diff --git a/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp b/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp index 41adb6fc612..f89b76342a4 100644 --- a/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/TabSeparatedRowInputFormat.cpp @@ -7,6 +7,7 @@ #include #include #include +#include #include namespace DB @@ -338,8 +339,10 @@ void TabSeparatedRowInputFormat::tryDeserializeField(const DataTypePtr & type, I const auto & index = column_mapping->column_indexes_for_input_fields[file_column]; if (index) { + bool can_be_parsed_as_null = removeLowCardinality(type)->isNullable(); + // check null value for type is not nullable. 
don't cross buffer bound for simplicity, so maybe missing some case - if (!type->isNullable() && !in.eof()) + if (!can_be_parsed_as_null && !in.eof()) { if (*in.position() == '\\' && in.available() >= 2) { diff --git a/src/Processors/Formats/Impl/ValuesBlockInputFormat.cpp b/src/Processors/Formats/Impl/ValuesBlockInputFormat.cpp index c054145016d..701385447b4 100644 --- a/src/Processors/Formats/Impl/ValuesBlockInputFormat.cpp +++ b/src/Processors/Formats/Impl/ValuesBlockInputFormat.cpp @@ -358,7 +358,7 @@ bool ValuesBlockInputFormat::parseExpression(IColumn & column, size_t column_idx TokenIterator(tokens), token_iterator, ast, - *context, + context, &found_in_cache, delimiter); templates[column_idx].emplace(structure); @@ -400,7 +400,7 @@ bool ValuesBlockInputFormat::parseExpression(IColumn & column, size_t column_idx /// Try to evaluate single expression if other parsers don't work buf.position() = const_cast(token_iterator->begin); - std::pair value_raw = evaluateConstantExpression(ast, *context); + std::pair value_raw = evaluateConstantExpression(ast, context); Field & expression_value = value_raw.first; diff --git a/src/Processors/Formats/Impl/ValuesBlockInputFormat.h b/src/Processors/Formats/Impl/ValuesBlockInputFormat.h index 8e7e15c572d..ea5ab9239e0 100644 --- a/src/Processors/Formats/Impl/ValuesBlockInputFormat.h +++ b/src/Processors/Formats/Impl/ValuesBlockInputFormat.h @@ -1,21 +1,19 @@ #pragma once #include -#include -#include #include -#include - +#include #include #include +#include +#include +#include namespace DB { -class Context; class ReadBuffer; - /** Stream to read data in VALUES format (as in INSERT query). */ class ValuesBlockInputFormat final : public IInputFormat @@ -36,7 +34,7 @@ public: void resetParser() override; /// TODO: remove context somehow. - void setContext(const Context & context_) { context = std::make_unique(context_); } + void setContext(ContextConstPtr context_) { context = Context::createCopy(context_); } const BlockMissingValues & getMissingValues() const override { return block_missing_values; } @@ -68,12 +66,11 @@ private: bool skipToNextRow(size_t min_chunk_bytes = 0, int balance = 0); -private: PeekableReadBuffer buf; const RowInputFormatParams params; - std::unique_ptr context; /// pimpl + ContextPtr context; /// pimpl const FormatSettings format_settings; const size_t num_columns; diff --git a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp index 3ef0caefd8f..3eb94ba78b7 100644 --- a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp +++ b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp @@ -24,11 +24,13 @@ FinishAggregatingInOrderAlgorithm::FinishAggregatingInOrderAlgorithm( const Block & header_, size_t num_inputs_, AggregatingTransformParamsPtr params_, - SortDescription description_) + SortDescription description_, + size_t max_block_size_) : header(header_) , num_inputs(num_inputs_) , params(params_) , description(std::move(description_)) + , max_block_size(max_block_size_) { /// Replace column names in description to positions. 
for (auto & column_description : description) @@ -56,6 +58,13 @@ void FinishAggregatingInOrderAlgorithm::consume(Input & input, size_t source_num IMergingAlgorithm::Status FinishAggregatingInOrderAlgorithm::merge() { + if (!inputs_to_update.empty()) + { + Status status(inputs_to_update.back()); + inputs_to_update.pop_back(); + return status; + } + /// Find the input with smallest last row. std::optional best_input; for (size_t i = 0; i < num_inputs; ++i) @@ -94,16 +103,30 @@ IMergingAlgorithm::Status FinishAggregatingInOrderAlgorithm::merge() states[i].to_row = (it == indices.end() ? states[i].num_rows : *it); } - Status status(*best_input); - status.chunk = aggregate(); + addToAggregation(); + + /// At least one chunk should be fully aggregated. + assert(!inputs_to_update.empty()); + Status status(inputs_to_update.back()); + inputs_to_update.pop_back(); + + /// Do not merge blocks, if there are too few rows. + if (accumulated_rows >= max_block_size) + status.chunk = aggregate(); return status; } Chunk FinishAggregatingInOrderAlgorithm::aggregate() { - BlocksList blocks; + auto aggregated = params->aggregator.mergeBlocks(blocks, false); + blocks.clear(); + accumulated_rows = 0; + return {aggregated.getColumns(), aggregated.rows()}; +} +void FinishAggregatingInOrderAlgorithm::addToAggregation() +{ for (size_t i = 0; i < num_inputs; ++i) { const auto & state = states[i]; @@ -112,7 +135,7 @@ Chunk FinishAggregatingInOrderAlgorithm::aggregate() if (state.to_row - state.current_row == state.num_rows) { - blocks.emplace_back(header.cloneWithColumns(states[i].all_columns)); + blocks.emplace_back(header.cloneWithColumns(state.all_columns)); } else { @@ -125,10 +148,11 @@ Chunk FinishAggregatingInOrderAlgorithm::aggregate() } states[i].current_row = states[i].to_row; + accumulated_rows += blocks.back().rows(); + + if (!states[i].isValid()) + inputs_to_update.push_back(i); } - - auto aggregated = params->aggregator.mergeBlocks(blocks, false); - return {aggregated.getColumns(), aggregated.rows()}; } } diff --git a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.h b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.h index 2f9cd5d71a2..119aefb0ab0 100644 --- a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.h +++ b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.h @@ -37,7 +37,8 @@ public: const Block & header_, size_t num_inputs_, AggregatingTransformParamsPtr params_, - SortDescription description_); + SortDescription description_, + size_t max_block_size_); void initialize(Inputs inputs) override; void consume(Input & input, size_t source_num) override; @@ -45,6 +46,7 @@ public: private: Chunk aggregate(); + void addToAggregation(); struct State { @@ -66,8 +68,13 @@ private: size_t num_inputs; AggregatingTransformParamsPtr params; SortDescription description; + size_t max_block_size; + Inputs current_inputs; std::vector states; + std::vector inputs_to_update; + BlocksList blocks; + size_t accumulated_rows = 0; }; } diff --git a/src/Processors/Merges/FinishAggregatingInOrderTransform.h b/src/Processors/Merges/FinishAggregatingInOrderTransform.h index e067b9472d9..4f9e53bd7d5 100644 --- a/src/Processors/Merges/FinishAggregatingInOrderTransform.h +++ b/src/Processors/Merges/FinishAggregatingInOrderTransform.h @@ -16,13 +16,15 @@ public: const Block & header, size_t num_inputs, AggregatingTransformParamsPtr params, - SortDescription description) + SortDescription description, + size_t max_block_size) : IMergingTransform( 
num_inputs, header, header, true, header, num_inputs, params, - std::move(description)) + std::move(description), + max_block_size) { } diff --git a/src/Processors/Pipe.cpp b/src/Processors/Pipe.cpp index 129bebf452a..044975448ad 100644 --- a/src/Processors/Pipe.cpp +++ b/src/Processors/Pipe.cpp @@ -8,6 +8,7 @@ #include #include #include +#include namespace DB { @@ -250,12 +251,53 @@ static Pipes removeEmptyPipes(Pipes pipes) return res; } -Pipe Pipe::unitePipes(Pipes pipes) +/// Calculate common header for pipes. +/// This function is needed only to remove ColumnConst from common header in case if some columns are const, and some not. +/// E.g. if the first header is `x, const y, const z` and the second is `const x, y, const z`, the common header will be `x, y, const z`. +static Block getCommonHeader(const Pipes & pipes) { - return Pipe::unitePipes(std::move(pipes), nullptr); + Block res; + + for (const auto & pipe : pipes) + { + if (const auto & header = pipe.getHeader()) + { + res = header; + break; + } + } + + for (const auto & pipe : pipes) + { + const auto & header = pipe.getHeader(); + for (size_t i = 0; i < res.columns(); ++i) + { + /// We do not check that headers are compatible here. Will do it later. + + if (i >= header.columns()) + break; + + auto & common = res.getByPosition(i).column; + const auto & cur = header.getByPosition(i).column; + + /// Only remove const from common header if it is not const for current pipe. + if (cur && common && !isColumnConst(*cur)) + { + if (const auto * column_const = typeid_cast(common.get())) + common = column_const->getDataColumnPtr(); + } + } + } + + return res; } -Pipe Pipe::unitePipes(Pipes pipes, Processors * collected_processors) +Pipe Pipe::unitePipes(Pipes pipes) +{ + return Pipe::unitePipes(std::move(pipes), nullptr, false); +} + +Pipe Pipe::unitePipes(Pipes pipes, Processors * collected_processors, bool allow_empty_header) { Pipe res; @@ -275,12 +317,14 @@ Pipe Pipe::unitePipes(Pipes pipes, Processors * collected_processors) OutputPortRawPtrs totals; OutputPortRawPtrs extremes; - res.header = pipes.front().header; res.collected_processors = collected_processors; + res.header = getCommonHeader(pipes); for (auto & pipe : pipes) { - assertBlocksHaveEqualStructure(res.header, pipe.header, "Pipe::unitePipes"); + if (!allow_empty_header || pipe.header) + assertCompatibleHeader(pipe.header, res.header, "Pipe::unitePipes"); + res.processors.insert(res.processors.end(), pipe.processors.begin(), pipe.processors.end()); res.output_ports.insert(res.output_ports.end(), pipe.output_ports.begin(), pipe.output_ports.end()); diff --git a/src/Processors/Pipe.h b/src/Processors/Pipe.h index f21f4761977..4ba08787579 100644 --- a/src/Processors/Pipe.h +++ b/src/Processors/Pipe.h @@ -1,8 +1,9 @@ #pragma once + #include -#include #include #include +#include namespace DB { @@ -155,7 +156,7 @@ private: /// This methods are for QueryPipeline. It is allowed to complete graph only there. /// So, we may be sure that Pipe always has output port if not empty. 
bool isCompleted() const { return !empty() && output_ports.empty(); } - static Pipe unitePipes(Pipes pipes, Processors * collected_processors); + static Pipe unitePipes(Pipes pipes, Processors * collected_processors, bool allow_empty_header); void setSinks(const Pipe::ProcessorGetterWithStreamKind & getter); void setOutputFormat(ProcessorPtr output); diff --git a/src/Processors/Port.cpp b/src/Processors/Port.cpp index 7e7ccb1adad..0a6026b27f2 100644 --- a/src/Processors/Port.cpp +++ b/src/Processors/Port.cpp @@ -16,7 +16,7 @@ void connect(OutputPort & output, InputPort & input) auto out_name = output.getProcessor().getName(); auto in_name = input.getProcessor().getName(); - assertBlocksHaveEqualStructure(input.getHeader(), output.getHeader(), " function connect between " + out_name + " and " + in_name); + assertCompatibleHeader(output.getHeader(), input.getHeader(), " function connect between " + out_name + " and " + in_name); input.output_port = &output; output.input_port = &input; diff --git a/src/Processors/QueryPipeline.cpp b/src/Processors/QueryPipeline.cpp index 637a9480034..1b803ec0886 100644 --- a/src/Processors/QueryPipeline.cpp +++ b/src/Processors/QueryPipeline.cpp @@ -211,11 +211,14 @@ void QueryPipeline::setOutputFormat(ProcessorPtr output) QueryPipeline QueryPipeline::unitePipelines( std::vector> pipelines, - const Block & common_header, - const ExpressionActionsSettings & settings, size_t max_threads_limit, Processors * collected_processors) { + if (pipelines.empty()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot unite an empty set of pipelines"); + + Block common_header = pipelines.front()->getHeader(); + /// Should we limit the number of threads for united pipeline. True if all pipelines have max_threads != 0. /// If true, result max_threads will be sum(max_threads). /// Note: it may be > than settings.max_threads, so we should apply this limit again. 
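The `getCommonHeader` helper added to Pipe.cpp above keeps a column constant in the united header only when every pipe provides it as constant; otherwise the `ColumnConst` wrapper is dropped. Below is a minimal standalone sketch of that rule, using a toy `Column`/`Header` pair instead of the real `Block`/`ColumnConst` types (the names here are illustrative, not the actual API):

```cpp
#include <iostream>
#include <string>
#include <vector>

// Simplified stand-ins for Block / ColumnWithTypeAndName, illustration only.
struct Column
{
    std::string name;
    bool is_const = false;
};
using Header = std::vector<Column>;

// A column stays const in the common header only if it is const in every pipe.
Header getCommonHeader(const std::vector<Header> & headers)
{
    Header res;
    for (const auto & header : headers)
        if (!header.empty()) { res = header; break; }   // take the first non-empty header as a base

    for (const auto & header : headers)
        for (size_t i = 0; i < res.size() && i < header.size(); ++i)
            if (!header[i].is_const)
                res[i].is_const = false;                // demote: some pipe has it non-const

    return res;
}

int main()
{
    // The example from the comment in the diff: `x, const y, const z` united with `const x, y, const z`.
    std::vector<Header> headers = {
        {{"x", false}, {"y", true},  {"z", true}},
        {{"x", true},  {"y", false}, {"z", true}},
    };
    for (const auto & col : getCommonHeader(headers))
        std::cout << col.name << (col.is_const ? " const" : "") << '\n';   // prints: x, y, z const
}
```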
@@ -229,20 +232,6 @@ QueryPipeline QueryPipeline::unitePipelines( pipeline.checkInitialized(); pipeline.pipe.collected_processors = collected_processors; - if (!pipeline.isCompleted()) - { - auto actions_dag = ActionsDAG::makeConvertingActions( - pipeline.getHeader().getColumnsWithTypeAndName(), - common_header.getColumnsWithTypeAndName(), - ActionsDAG::MatchColumnsMode::Position); - auto actions = std::make_shared(actions_dag, settings); - - pipeline.addSimpleTransform([&](const Block & header) - { - return std::make_shared(header, actions); - }); - } - pipes.emplace_back(std::move(pipeline.pipe)); max_threads += pipeline.max_threads; @@ -255,7 +244,7 @@ QueryPipeline QueryPipeline::unitePipelines( } QueryPipeline pipeline; - pipeline.init(Pipe::unitePipes(std::move(pipes), collected_processors)); + pipeline.init(Pipe::unitePipes(std::move(pipes), collected_processors, false)); if (will_limit_max_threads) { @@ -267,7 +256,7 @@ QueryPipeline QueryPipeline::unitePipelines( } -void QueryPipeline::addCreatingSetsTransform(const Block & res_header, SubqueryForSet subquery_for_set, const SizeLimits & limits, const Context & context) +void QueryPipeline::addCreatingSetsTransform(const Block & res_header, SubqueryForSet subquery_for_set, const SizeLimits & limits, ContextPtr context) { resize(1); @@ -289,7 +278,9 @@ void QueryPipeline::addCreatingSetsTransform(const Block & res_header, SubqueryF void QueryPipeline::addPipelineBefore(QueryPipeline pipeline) { checkInitializedAndNotCompleted(); - assertBlocksHaveEqualStructure(getHeader(), pipeline.getHeader(), "QueryPipeline"); + if (pipeline.getHeader()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Pipeline for CreatingSets should have empty header. Got: {}", + pipeline.getHeader().dumpStructure()); IProcessor::PortNumbers delayed_streams(pipe.numOutputPorts()); for (size_t i = 0; i < delayed_streams.size(); ++i) @@ -300,7 +291,7 @@ void QueryPipeline::addPipelineBefore(QueryPipeline pipeline) Pipes pipes; pipes.emplace_back(std::move(pipe)); pipes.emplace_back(QueryPipeline::getPipe(std::move(pipeline))); - pipe = Pipe::unitePipes(std::move(pipes), collected_processors); + pipe = Pipe::unitePipes(std::move(pipes), collected_processors, true); auto processor = std::make_shared(getHeader(), pipe.numOutputPorts(), delayed_streams, true); addTransform(std::move(processor)); diff --git a/src/Processors/QueryPipeline.h b/src/Processors/QueryPipeline.h index 8799237384e..ac0777d22c6 100644 --- a/src/Processors/QueryPipeline.h +++ b/src/Processors/QueryPipeline.h @@ -1,19 +1,16 @@ #pragma once -#include -#include -#include #include #include - +#include +#include +#include #include #include namespace DB { -class Context; - class IOutputFormat; class QueryPipelineProcessorsCollector; @@ -90,16 +87,15 @@ public: /// If collector is used, it will collect only newly-added processors, but not processors from pipelines. static QueryPipeline unitePipelines( std::vector> pipelines, - const Block & common_header, - const ExpressionActionsSettings & settings, size_t max_threads_limit = 0, Processors * collected_processors = nullptr); /// Add other pipeline and execute it before current one. - /// Pipeline must have same header. + /// Pipeline must have empty header, it should not generate any chunk. + /// This is used for CreatingSets. 
void addPipelineBefore(QueryPipeline pipeline); - void addCreatingSetsTransform(const Block & res_header, SubqueryForSet subquery_for_set, const SizeLimits & limits, const Context & context); + void addCreatingSetsTransform(const Block & res_header, SubqueryForSet subquery_for_set, const SizeLimits & limits, ContextPtr context); PipelineExecutorPtr execute(); diff --git a/src/Processors/QueryPlan/AddingDelayedSourceStep.cpp b/src/Processors/QueryPlan/AddingDelayedSourceStep.cpp deleted file mode 100644 index 1f02205dad8..00000000000 --- a/src/Processors/QueryPlan/AddingDelayedSourceStep.cpp +++ /dev/null @@ -1,42 +0,0 @@ -#include -#include - -namespace DB -{ - -static ITransformingStep::Traits getTraits() -{ - return ITransformingStep::Traits - { - { - .preserves_distinct_columns = false, - .returns_single_stream = false, - .preserves_number_of_streams = false, - .preserves_sorting = false, - }, - { - .preserves_number_of_rows = false, /// New rows are added from delayed stream - } - }; -} - -AddingDelayedSourceStep::AddingDelayedSourceStep( - const DataStream & input_stream_, - ProcessorPtr source_) - : ITransformingStep(input_stream_, input_stream_.header, getTraits()) - , source(std::move(source_)) -{ -} - -void AddingDelayedSourceStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) -{ - source->setQueryPlanStep(this); - pipeline.addDelayedStream(source); - - /// Now, after adding delayed stream, it has implicit dependency on other port. - /// Here we add resize processor to remove this dependency. - /// Otherwise, if we add MergeSorting + MergingSorted transform to pipeline, we could get `Pipeline stuck` - pipeline.resize(pipeline.getNumStreams(), true); -} - -} diff --git a/src/Processors/QueryPlan/AddingDelayedSourceStep.h b/src/Processors/QueryPlan/AddingDelayedSourceStep.h deleted file mode 100644 index 30565f2002a..00000000000 --- a/src/Processors/QueryPlan/AddingDelayedSourceStep.h +++ /dev/null @@ -1,28 +0,0 @@ -#pragma once -#include -#include - -namespace DB -{ - -class IProcessor; -using ProcessorPtr = std::shared_ptr; - -/// Adds another source to pipeline. Data from this source will be read after data from all other sources. -/// NOTE: tis step is needed because of non-joined data from JOIN. Remove this step after adding JoinStep. 
-class AddingDelayedSourceStep : public ITransformingStep -{ -public: - AddingDelayedSourceStep( - const DataStream & input_stream_, - ProcessorPtr source_); - - String getName() const override { return "AddingDelayedSource"; } - - void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; - -private: - ProcessorPtr source; -}; - -} diff --git a/src/Processors/QueryPlan/AggregatingStep.cpp b/src/Processors/QueryPlan/AggregatingStep.cpp index b8d47fac826..daa6e4981bb 100644 --- a/src/Processors/QueryPlan/AggregatingStep.cpp +++ b/src/Processors/QueryPlan/AggregatingStep.cpp @@ -100,7 +100,8 @@ void AggregatingStep::transformPipeline(QueryPipeline & pipeline, const BuildQue pipeline.getHeader(), pipeline.getNumStreams(), transform_params, - group_by_sort_description); + group_by_sort_description, + max_block_size); pipeline.addTransform(std::move(transform)); aggregating_sorted = collector.detachProcessors(1); diff --git a/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp b/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp index 0ff77770793..9691da4a362 100644 --- a/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp +++ b/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp @@ -13,9 +13,9 @@ BuildQueryPipelineSettings BuildQueryPipelineSettings::fromSettings(const Settin return settings; } -BuildQueryPipelineSettings BuildQueryPipelineSettings::fromContext(const Context & from) +BuildQueryPipelineSettings BuildQueryPipelineSettings::fromContext(ContextPtr from) { - return fromSettings(from.getSettingsRef()); + return fromSettings(from->getSettingsRef()); } } diff --git a/src/Processors/QueryPlan/BuildQueryPipelineSettings.h b/src/Processors/QueryPlan/BuildQueryPipelineSettings.h index 3fd37b6042e..c3282d43778 100644 --- a/src/Processors/QueryPlan/BuildQueryPipelineSettings.h +++ b/src/Processors/QueryPlan/BuildQueryPipelineSettings.h @@ -1,12 +1,13 @@ #pragma once -#include + #include +#include + namespace DB { struct Settings; -class Context; struct BuildQueryPipelineSettings { @@ -15,7 +16,7 @@ struct BuildQueryPipelineSettings const ExpressionActionsSettings & getActionsSettings() const { return actions_settings; } static BuildQueryPipelineSettings fromSettings(const Settings & from); - static BuildQueryPipelineSettings fromContext(const Context & from); + static BuildQueryPipelineSettings fromContext(ContextPtr from); }; } diff --git a/src/Processors/QueryPlan/CreatingSetsStep.cpp b/src/Processors/QueryPlan/CreatingSetsStep.cpp index 73d5f479b98..9ea8e7b237b 100644 --- a/src/Processors/QueryPlan/CreatingSetsStep.cpp +++ b/src/Processors/QueryPlan/CreatingSetsStep.cpp @@ -30,22 +30,21 @@ static ITransformingStep::Traits getTraits() CreatingSetStep::CreatingSetStep( const DataStream & input_stream_, - Block header, String description_, SubqueryForSet subquery_for_set_, SizeLimits network_transfer_limits_, - const Context & context_) - : ITransformingStep(input_stream_, header, getTraits()) + ContextPtr context_) + : ITransformingStep(input_stream_, Block{}, getTraits()) + , WithContext(context_) , description(std::move(description_)) , subquery_for_set(std::move(subquery_for_set_)) , network_transfer_limits(std::move(network_transfer_limits_)) - , context(context_) { } void CreatingSetStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { - pipeline.addCreatingSetsTransform(getOutputStream().header, std::move(subquery_for_set), network_transfer_limits, context); + 
pipeline.addCreatingSetsTransform(getOutputStream().header, std::move(subquery_for_set), network_transfer_limits, getContext()); } void CreatingSetStep::describeActions(FormatSettings & settings) const @@ -70,10 +69,12 @@ CreatingSetsStep::CreatingSetsStep(DataStreams input_streams_) output_stream = input_streams.front(); for (size_t i = 1; i < input_streams.size(); ++i) - assertBlocksHaveEqualStructure(output_stream->header, input_streams[i].header, "CreatingSets"); + if (input_streams[i].header) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Creating set input must have empty header. Got: {}", + input_streams[i].header.dumpStructure()); } -QueryPipelinePtr CreatingSetsStep::updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings & settings) +QueryPipelinePtr CreatingSetsStep::updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) { if (pipelines.empty()) throw Exception("CreatingSetsStep cannot be created with no inputs", ErrorCodes::LOGICAL_ERROR); @@ -82,14 +83,13 @@ QueryPipelinePtr CreatingSetsStep::updatePipeline(QueryPipelines pipelines, cons if (pipelines.size() == 1) return main_pipeline; - std::swap(pipelines.front(), pipelines.back()); - pipelines.pop_back(); + pipelines.erase(pipelines.begin()); QueryPipeline delayed_pipeline; if (pipelines.size() > 1) { QueryPipelineProcessorsCollector collector(delayed_pipeline, this); - delayed_pipeline = QueryPipeline::unitePipelines(std::move(pipelines), output_stream->header, settings.getActionsSettings()); + delayed_pipeline = QueryPipeline::unitePipelines(std::move(pipelines)); processors = collector.detachProcessors(); } else @@ -109,7 +109,7 @@ void CreatingSetsStep::describePipeline(FormatSettings & settings) const } void addCreatingSetsStep( - QueryPlan & query_plan, SubqueriesForSets subqueries_for_sets, const SizeLimits & limits, const Context & context) + QueryPlan & query_plan, SubqueriesForSets subqueries_for_sets, const SizeLimits & limits, ContextPtr context) { DataStreams input_streams; input_streams.emplace_back(query_plan.getCurrentDataStream()); @@ -129,7 +129,6 @@ void addCreatingSetsStep( auto creating_set = std::make_unique( plan->getCurrentDataStream(), - input_streams.front().header, std::move(description), std::move(set), limits, diff --git a/src/Processors/QueryPlan/CreatingSetsStep.h b/src/Processors/QueryPlan/CreatingSetsStep.h index 79ae3ed65f0..c2b452ecdf5 100644 --- a/src/Processors/QueryPlan/CreatingSetsStep.h +++ b/src/Processors/QueryPlan/CreatingSetsStep.h @@ -1,22 +1,23 @@ #pragma once + #include #include #include +#include namespace DB { /// Creates sets for subqueries and JOIN. See CreatingSetsTransform. 
-class CreatingSetStep : public ITransformingStep +class CreatingSetStep : public ITransformingStep, WithContext { public: CreatingSetStep( const DataStream & input_stream_, - Block header, String description_, SubqueryForSet subquery_for_set_, SizeLimits network_transfer_limits_, - const Context & context_); + ContextPtr context_); String getName() const override { return "CreatingSet"; } @@ -28,7 +29,6 @@ private: String description; SubqueryForSet subquery_for_set; SizeLimits network_transfer_limits; - const Context & context; }; class CreatingSetsStep : public IQueryPlanStep @@ -38,7 +38,7 @@ public: String getName() const override { return "CreatingSets"; } - QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings & settings) override; + QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) override; void describePipeline(FormatSettings & settings) const override; @@ -50,6 +50,6 @@ void addCreatingSetsStep( QueryPlan & query_plan, SubqueriesForSets subqueries_for_sets, const SizeLimits & limits, - const Context & context); + ContextPtr context); } diff --git a/src/Processors/QueryPlan/ExpressionStep.cpp b/src/Processors/QueryPlan/ExpressionStep.cpp index f7bb4a9e9c2..c85092edf05 100644 --- a/src/Processors/QueryPlan/ExpressionStep.cpp +++ b/src/Processors/QueryPlan/ExpressionStep.cpp @@ -4,6 +4,8 @@ #include #include #include +#include +#include namespace DB { @@ -108,12 +110,14 @@ void ExpressionStep::describeActions(FormatSettings & settings) const settings.out << '\n'; } -JoinStep::JoinStep(const DataStream & input_stream_, JoinPtr join_) +JoinStep::JoinStep(const DataStream & input_stream_, JoinPtr join_, bool has_non_joined_rows_, size_t max_block_size_) : ITransformingStep( input_stream_, Transform::transformHeader(input_stream_.header, join_), getJoinTraits()) , join(std::move(join_)) + , has_non_joined_rows(has_non_joined_rows_) + , max_block_size(max_block_size_) { } @@ -132,6 +136,21 @@ void JoinStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipel bool on_totals = stream_type == QueryPipeline::StreamType::Totals; return std::make_shared(header, join, on_totals, add_default_totals); }); + + if (has_non_joined_rows) + { + const Block & join_result_sample = pipeline.getHeader(); + auto stream = std::make_shared(*join, join_result_sample, max_block_size); + auto source = std::make_shared(std::move(stream)); + + source->setQueryPlanStep(this); + pipeline.addDelayedStream(source); + + /// Now, after adding delayed stream, it has implicit dependency on other port. + /// Here we add resize processor to remove this dependency. 
+ /// Otherwise, if we add MergeSorting + MergingSorted transform to pipeline, we could get `Pipeline stuck` + pipeline.resize(pipeline.getNumStreams(), true); + } } } diff --git a/src/Processors/QueryPlan/ExpressionStep.h b/src/Processors/QueryPlan/ExpressionStep.h index 71937b6f78f..bcc1b0ef7b6 100644 --- a/src/Processors/QueryPlan/ExpressionStep.h +++ b/src/Processors/QueryPlan/ExpressionStep.h @@ -40,13 +40,17 @@ class JoinStep : public ITransformingStep public: using Transform = JoiningTransform; - explicit JoinStep(const DataStream & input_stream_, JoinPtr join_); + explicit JoinStep(const DataStream & input_stream_, JoinPtr join_, bool has_non_joined_rows_, size_t max_block_size_); String getName() const override { return "Join"; } void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + const JoinPtr & getJoin() const { return join; } + private: JoinPtr join; + bool has_non_joined_rows; + size_t max_block_size; }; } diff --git a/src/Processors/QueryPlan/IQueryPlanStep.h b/src/Processors/QueryPlan/IQueryPlanStep.h index 8211b52a6c4..2974891e2bf 100644 --- a/src/Processors/QueryPlan/IQueryPlanStep.h +++ b/src/Processors/QueryPlan/IQueryPlanStep.h @@ -99,6 +99,9 @@ public: /// Get detailed description of step actions. This is shown in EXPLAIN query with options `actions = 1`. virtual void describeActions(FormatSettings & /*settings*/) const {} + /// Get detailed description of read-from-storage step indexes (if any). Shown in with options `indexes = 1`. + virtual void describeIndexes(FormatSettings & /*settings*/) const {} + /// Get description of processors added in current step. Should be called after updatePipeline(). virtual void describePipeline(FormatSettings & /*settings*/) const {} diff --git a/src/Processors/QueryPlan/Optimizations/Optimizations.h b/src/Processors/QueryPlan/Optimizations/Optimizations.h index f96237fc71a..7e946a71fad 100644 --- a/src/Processors/QueryPlan/Optimizations/Optimizations.h +++ b/src/Processors/QueryPlan/Optimizations/Optimizations.h @@ -1,5 +1,6 @@ #pragma once #include +#include #include namespace DB @@ -23,6 +24,7 @@ struct Optimization using Function = size_t (*)(QueryPlan::Node *, QueryPlan::Nodes &); const Function apply = nullptr; const char * name; + const bool QueryPlanOptimizationSettings::* const is_enabled; }; /// Move ARRAY JOIN up if possible. 
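The Optimizations.h hunk that follows attaches a pointer-to-member flag to every entry of the optimization table, and `optimizeTree` later skips entries whose flag is disabled in the settings. Here is a small sketch of that pointer-to-data-member pattern, with placeholder settings fields and optimization names:

```cpp
#include <array>
#include <iostream>

struct Settings
{
    bool optimize_plan = true;
    bool filter_push_down = false;   // e.g. disabled by the user
};

struct Optimization
{
    const char * name;
    // Pointer to the Settings member that gates this optimization.
    const bool Settings::* is_enabled;
};

int main()
{
    static const std::array<Optimization, 2> optimizations = {{
        {"mergeExpressions", &Settings::optimize_plan},
        {"pushDownFilter",   &Settings::filter_push_down},
    }};

    Settings settings;
    for (const auto & optimization : optimizations)
    {
        if (!(settings.*(optimization.is_enabled)))
            continue;   // same shape of check as in optimizeTree() in this diff
        std::cout << "applying " << optimization.name << '\n';
    }
}
```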
@@ -46,11 +48,11 @@ inline const auto & getOptimizations() { static const std::array optimizations = {{ - {tryLiftUpArrayJoin, "liftUpArrayJoin"}, - {tryPushDownLimit, "pushDownLimit"}, - {trySplitFilter, "splitFilter"}, - {tryMergeExpressions, "mergeExpressions"}, - {tryPushDownFilter, "pushDownFilter"}, + {tryLiftUpArrayJoin, "liftUpArrayJoin", &QueryPlanOptimizationSettings::optimize_plan}, + {tryPushDownLimit, "pushDownLimit", &QueryPlanOptimizationSettings::optimize_plan}, + {trySplitFilter, "splitFilter", &QueryPlanOptimizationSettings::optimize_plan}, + {tryMergeExpressions, "mergeExpressions", &QueryPlanOptimizationSettings::optimize_plan}, + {tryPushDownFilter, "pushDownFilter", &QueryPlanOptimizationSettings::filter_push_down}, }}; return optimizations; diff --git a/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.cpp b/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.cpp index a8791f757f4..1472fb87a89 100644 --- a/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.cpp +++ b/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.cpp @@ -8,13 +8,15 @@ namespace DB QueryPlanOptimizationSettings QueryPlanOptimizationSettings::fromSettings(const Settings & from) { QueryPlanOptimizationSettings settings; + settings.optimize_plan = from.query_plan_enable_optimizations; settings.max_optimizations_to_apply = from.query_plan_max_optimizations_to_apply; + settings.filter_push_down = from.query_plan_filter_push_down; return settings; } -QueryPlanOptimizationSettings QueryPlanOptimizationSettings::fromContext(const Context & from) +QueryPlanOptimizationSettings QueryPlanOptimizationSettings::fromContext(ContextPtr from) { - return fromSettings(from.getSettingsRef()); + return fromSettings(from->getSettingsRef()); } } diff --git a/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h b/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h index 64943ec40a8..b5a37bf69d6 100644 --- a/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h +++ b/src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h @@ -1,12 +1,13 @@ #pragma once +#include + #include namespace DB { struct Settings; -class Context; struct QueryPlanOptimizationSettings { @@ -14,8 +15,14 @@ struct QueryPlanOptimizationSettings /// It helps to avoid infinite optimization loop. size_t max_optimizations_to_apply = 0; + /// If disabled, no optimization applied. + bool optimize_plan = true; + + /// If filter push down optimization is enabled. + bool filter_push_down = true; + static QueryPlanOptimizationSettings fromSettings(const Settings & from); - static QueryPlanOptimizationSettings fromContext(const Context & from); + static QueryPlanOptimizationSettings fromContext(ContextPtr from); }; } diff --git a/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp b/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp index 552720fa1a4..20813e9f548 100644 --- a/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp +++ b/src/Processors/QueryPlan/Optimizations/filterPushDown.cpp @@ -4,6 +4,7 @@ #include #include #include +#include #include #include #include @@ -11,8 +12,10 @@ #include #include #include +#include #include #include +#include #include #include @@ -73,8 +76,8 @@ static size_t tryAddNewFilterStep( child_node->children.emplace_back(&node); /// Expression/Filter -> Aggregating -> Filter -> Something - /// New filter column is added to the end. 
- auto split_filter_column_name = (*split_filter->getIndex().rbegin())->result_name; + /// New filter column is the first one. + auto split_filter_column_name = (*split_filter->getIndex().begin())->result_name; node.step = std::make_unique( node.children.at(0)->step->getOutputStream(), std::move(split_filter), std::move(split_filter_column_name), true); @@ -82,7 +85,7 @@ static size_t tryAddNewFilterStep( return 3; } -static Names getAggregatinKeys(const Aggregator::Params & params) +static Names getAggregatingKeys(const Aggregator::Params & params) { Names keys; keys.reserve(params.keys.size()); @@ -112,17 +115,36 @@ size_t tryPushDownFilter(QueryPlan::Node * parent_node, QueryPlan::Nodes & nodes if (auto * aggregating = typeid_cast(child.get())) { const auto & params = aggregating->getParams(); - Names keys = getAggregatinKeys(params); + Names keys = getAggregatingKeys(params); if (auto updated_steps = tryAddNewFilterStep(parent_node, nodes, keys)) return updated_steps; } + if (typeid_cast(child.get())) + { + /// CreatingSets does not change header. + /// We can push down filter and update header. + /// - Something + /// Filter - CreatingSets - CreatingSet + /// - CreatingSet + auto input_streams = child->getInputStreams(); + input_streams.front() = filter->getOutputStream(); + child = std::make_unique(input_streams); + std::swap(parent, child); + std::swap(parent_node->children, child_node->children); + std::swap(parent_node->children.front(), child_node->children.front()); + /// - Filter - Something + /// CreatingSets - CreatingSet + /// - CreatingSet + return 2; + } + if (auto * totals_having = typeid_cast(child.get())) { /// If totals step has HAVING expression, skip it for now. /// TODO: - /// We can merge HAING expression with current filer. + /// We can merge HAVING expression with current filter. /// Also, we can push down part of HAVING which depend only on aggregation keys. if (totals_having->getActions()) return 0; @@ -168,6 +190,36 @@ size_t tryPushDownFilter(QueryPlan::Node * parent_node, QueryPlan::Nodes & nodes return updated_steps; } + if (auto * join = typeid_cast(child.get())) + { + const auto & table_join = join->getJoin()->getTableJoin(); + /// Push down is for left table only. We need to update JoinStep for push down into right. + /// Only inner and left join are supported. Other types may generate default values for left table keys. + /// So, if we push down a condition like `key != 0`, not all rows may be filtered. + if (table_join.kind() == ASTTableJoin::Kind::Inner || table_join.kind() == ASTTableJoin::Kind::Left) + { + const auto & left_header = join->getInputStreams().front().header; + const auto & res_header = join->getOutputStream().header; + Names allowed_keys; + for (const auto & name : table_join.keyNamesLeft()) + { + /// Skip key if it is renamed. + /// I don't know if it is possible. Just in case. + if (!left_header.has(name) || !res_header.has(name)) + continue; + + /// Skip if type is changed. Push down expression expects equal types. + if (!left_header.getByName(name).type->equals(*res_header.getByName(name).type)) + continue; + + allowed_keys.push_back(name); + } + + if (auto updated_steps = tryAddNewFilterStep(parent_node, nodes, allowed_keys)) + return updated_steps; + } + } + /// TODO. /// We can filter earlier if expression does not depend on WITH FILL columns. /// But we cannot just push down condition, because other column may be filled with defaults.
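For header-preserving steps such as `CreatingSets` above (and `UnionStep` in the next hunk), the push-down does not rewrite the filter expression; it re-creates the child step with the filter's output header and then swaps the filter node with that child in the plan tree. A minimal sketch of that three-swap rewiring with stand-in `Node`/`Step` types (illustrative only; the step re-creation with updated input headers is omitted):

```cpp
#include <memory>
#include <utility>
#include <vector>

struct Step { virtual ~Step() = default; };

struct Node
{
    std::unique_ptr<Step> step;
    std::vector<Node *> children;
};

/// Exchange a parent node (e.g. Filter) with its first child (e.g. CreatingSets/Union):
/// after the call the former child's step sits on top, and the filter hangs below it,
/// attached to the child's first input, mirroring the swaps in tryPushDownFilter().
void swapParentWithFirstChild(Node & parent_node)
{
    Node & child_node = *parent_node.children.front();

    std::swap(parent_node.step, child_node.step);
    std::swap(parent_node.children, child_node.children);
    std::swap(parent_node.children.front(), child_node.children.front());
}

int main()
{
    Node input_a, input_b;
    Node union_node;            /// stands for CreatingSets or Union
    union_node.children = {&input_a, &input_b};

    Node filter_node;           /// stands for the Filter being pushed down
    filter_node.children = {&union_node};

    swapParentWithFirstChild(filter_node);

    /// The steps traded places: `filter_node` now carries the union-like step with
    /// children {union_node, input_b}, while `union_node` carries the filter and
    /// reads only from input_a.
    return filter_node.children.size() == 2 ? 0 : 1;
}
```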
@@ -193,6 +245,48 @@ size_t tryPushDownFilter(QueryPlan::Node * parent_node, QueryPlan::Nodes & nodes return updated_steps; } + if (auto * union_step = typeid_cast(child.get())) + { + /// Union does not change header. + /// We can push down filter and update header. + auto union_input_streams = child->getInputStreams(); + for (auto & input_stream : union_input_streams) + input_stream.header = filter->getOutputStream().header; + + /// - Something + /// Filter - Union - Something + /// - Something + + child = std::make_unique(union_input_streams, union_step->getMaxThreads()); + + std::swap(parent, child); + std::swap(parent_node->children, child_node->children); + std::swap(parent_node->children.front(), child_node->children.front()); + + /// - Filter - Something + /// Union - Something + /// - Something + + for (size_t i = 1; i < parent_node->children.size(); ++i) + { + auto & filter_node = nodes.emplace_back(); + filter_node.children.push_back(parent_node->children[i]); + parent_node->children[i] = &filter_node; + + filter_node.step = std::make_unique( + filter_node.children.front()->step->getOutputStream(), + filter->getExpression()->clone(), + filter->getFilterColumnName(), + filter->removesFilterColumn()); + } + + /// - Filter - Something + /// Union - Filter - Something + /// - Filter - Something + + return 3; + } + return 0; } diff --git a/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp b/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp index 858bde9c660..da9b1e26f68 100644 --- a/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp +++ b/src/Processors/QueryPlan/Optimizations/optimizeTree.cpp @@ -16,6 +16,9 @@ namespace QueryPlanOptimizations void optimizeTree(const QueryPlanOptimizationSettings & settings, QueryPlan::Node & root, QueryPlan::Nodes & nodes) { + if (!settings.optimize_plan) + return; + const auto & optimizations = getOptimizations(); struct Frame @@ -63,6 +66,9 @@ void optimizeTree(const QueryPlanOptimizationSettings & settings, QueryPlan::Nod /// Apply all optimizations. for (const auto & optimization : optimizations) { + if (!(settings.*(optimization.is_enabled))) + continue; + /// Just in case, skip optimization if it is not initialized. if (!optimization.apply) continue; diff --git a/src/Processors/QueryPlan/QueryPlan.cpp b/src/Processors/QueryPlan/QueryPlan.cpp index 974da579d0c..ad3649385fd 100644 --- a/src/Processors/QueryPlan/QueryPlan.cpp +++ b/src/Processors/QueryPlan/QueryPlan.cpp @@ -243,6 +243,9 @@ static void explainStep( if (options.actions) step.describeActions(settings); + + if (options.indexes) + step.describeIndexes(settings); } std::string debugExplainStep(const IQueryPlanStep & step) diff --git a/src/Processors/QueryPlan/QueryPlan.h b/src/Processors/QueryPlan/QueryPlan.h index d5cc2e8f4e8..901d83c3ab8 100644 --- a/src/Processors/QueryPlan/QueryPlan.h +++ b/src/Processors/QueryPlan/QueryPlan.h @@ -1,10 +1,12 @@ #pragma once -#include -#include -#include -#include #include +#include + +#include +#include +#include +#include namespace DB { @@ -17,7 +19,6 @@ using QueryPlanStepPtr = std::unique_ptr; class QueryPipeline; using QueryPipelinePtr = std::unique_ptr; -class Context; class WriteBuffer; class QueryPlan; @@ -65,6 +66,8 @@ public: bool description = true; /// Add detailed information about step actions. bool actions = false; + /// Add information about indexes actions. 
+ bool indexes = false; }; struct ExplainPipelineOptions diff --git a/src/Processors/QueryPlan/ReadFromMergeTree.cpp b/src/Processors/QueryPlan/ReadFromMergeTree.cpp new file mode 100644 index 00000000000..ebf9c9e4121 --- /dev/null +++ b/src/Processors/QueryPlan/ReadFromMergeTree.cpp @@ -0,0 +1,249 @@ +#include +#include +#include +#include +#include +#include +#include +#include + +namespace DB +{ + +ReadFromMergeTree::ReadFromMergeTree( + const MergeTreeData & storage_, + StorageMetadataPtr metadata_snapshot_, + String query_id_, + Names required_columns_, + RangesInDataParts parts_, + IndexStatPtr index_stats_, + PrewhereInfoPtr prewhere_info_, + Names virt_column_names_, + Settings settings_, + size_t num_streams_, + ReadType read_type_) + : ISourceStep(DataStream{.header = MergeTreeBaseSelectProcessor::transformHeader( + metadata_snapshot_->getSampleBlockForColumns(required_columns_, storage_.getVirtuals(), storage_.getStorageID()), + prewhere_info_, + virt_column_names_)}) + , storage(storage_) + , metadata_snapshot(std::move(metadata_snapshot_)) + , query_id(std::move(query_id_)) + , required_columns(std::move(required_columns_)) + , parts(std::move(parts_)) + , index_stats(std::move(index_stats_)) + , prewhere_info(std::move(prewhere_info_)) + , virt_column_names(std::move(virt_column_names_)) + , settings(std::move(settings_)) + , num_streams(num_streams_) + , read_type(read_type_) +{ +} + +Pipe ReadFromMergeTree::readFromPool() +{ + Pipes pipes; + size_t sum_marks = 0; + size_t total_rows = 0; + + for (const auto & part : parts) + { + sum_marks += part.getMarksCount(); + total_rows += part.getRowsCount(); + } + + auto pool = std::make_shared( + num_streams, + sum_marks, + settings.min_marks_for_concurrent_read, + std::move(parts), + storage, + metadata_snapshot, + prewhere_info, + true, + required_columns, + settings.backoff_settings, + settings.preferred_block_size_bytes, + false); + + auto * logger = &Poco::Logger::get(storage.getLogName() + " (SelectExecutor)"); + LOG_DEBUG(logger, "Reading approx. {} rows with {} streams", total_rows, num_streams); + + for (size_t i = 0; i < num_streams; ++i) + { + auto source = std::make_shared( + i, pool, settings.min_marks_for_concurrent_read, settings.max_block_size, + settings.preferred_block_size_bytes, settings.preferred_max_column_in_block_size_bytes, + storage, metadata_snapshot, settings.use_uncompressed_cache, + prewhere_info, settings.reader_settings, virt_column_names); + + if (i == 0) + { + /// Set the approximate number of rows for the first source only + source->addTotalRowsApprox(total_rows); + } + + pipes.emplace_back(std::move(source)); + } + + return Pipe::unitePipes(std::move(pipes)); +} + +template +ProcessorPtr ReadFromMergeTree::createSource(const RangesInDataPart & part) +{ + return std::make_shared( + storage, metadata_snapshot, part.data_part, settings.max_block_size, settings.preferred_block_size_bytes, + settings.preferred_max_column_in_block_size_bytes, required_columns, part.ranges, settings.use_uncompressed_cache, + prewhere_info, true, settings.reader_settings, virt_column_names, part.part_index_in_query); +} + +Pipe ReadFromMergeTree::readInOrder() +{ + Pipes pipes; + for (const auto & part : parts) + { + auto source = read_type == ReadType::InReverseOrder + ? 
createSource(part) + : createSource(part); + + pipes.emplace_back(std::move(source)); + } + + auto pipe = Pipe::unitePipes(std::move(pipes)); + + if (read_type == ReadType::InReverseOrder) + { + pipe.addSimpleTransform([&](const Block & header) + { + return std::make_shared(header); + }); + } + + return pipe; +} + +Pipe ReadFromMergeTree::read() +{ + if (read_type == ReadType::Default && num_streams > 1) + return readFromPool(); + + auto pipe = readInOrder(); + + /// Use ConcatProcessor to concat sources together. + /// It is needed to read in parts order (and so in PK order) if single thread is used. + if (read_type == ReadType::Default && pipe.numOutputPorts() > 1) + pipe.addTransform(std::make_shared(pipe.getHeader(), pipe.numOutputPorts())); + + return pipe; +} + +void ReadFromMergeTree::initializePipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) +{ + Pipe pipe = read(); + + for (const auto & processor : pipe.getProcessors()) + processors.emplace_back(processor); + + // Attach QueryIdHolder if needed + if (!query_id.empty()) + pipe.addQueryIdHolder(std::make_shared(query_id, storage)); + + pipeline.init(std::move(pipe)); +} + +static const char * indexTypeToString(ReadFromMergeTree::IndexType type) +{ + switch (type) + { + case ReadFromMergeTree::IndexType::None: + return "None"; + case ReadFromMergeTree::IndexType::MinMax: + return "MinMax"; + case ReadFromMergeTree::IndexType::Partition: + return "Partition"; + case ReadFromMergeTree::IndexType::PrimaryKey: + return "PrimaryKey"; + case ReadFromMergeTree::IndexType::Skip: + return "Skip"; + } + + __builtin_unreachable(); +} + +static const char * readTypeToString(ReadFromMergeTree::ReadType type) +{ + switch (type) + { + case ReadFromMergeTree::ReadType::Default: + return "Default"; + case ReadFromMergeTree::ReadType::InOrder: + return "InOrder"; + case ReadFromMergeTree::ReadType::InReverseOrder: + return "InReverseOrder"; + } + + __builtin_unreachable(); +} + +void ReadFromMergeTree::describeActions(FormatSettings & format_settings) const +{ + std::string prefix(format_settings.offset, format_settings.indent_char); + format_settings.out << prefix << "ReadType: " << readTypeToString(read_type) << '\n'; + + if (index_stats && !index_stats->empty()) + { + format_settings.out << prefix << "Parts: " << index_stats->back().num_parts_after << '\n'; + format_settings.out << prefix << "Granules: " << index_stats->back().num_granules_after << '\n'; + } +} + +void ReadFromMergeTree::describeIndexes(FormatSettings & format_settings) const +{ + std::string prefix(format_settings.offset, format_settings.indent_char); + if (index_stats && !index_stats->empty()) + { + std::string indent(format_settings.indent, format_settings.indent_char); + + /// Do not print anything if no indexes is applied. 
+ if (index_stats->size() > 1 || index_stats->front().type != IndexType::None) + format_settings.out << prefix << "Indexes:\n"; + + for (size_t i = 0; i < index_stats->size(); ++i) + { + const auto & stat = (*index_stats)[i]; + if (stat.type == IndexType::None) + continue; + + format_settings.out << prefix << indent << indexTypeToString(stat.type) << '\n'; + + if (!stat.name.empty()) + format_settings.out << prefix << indent << indent << "Name: " << stat.name << '\n'; + + if (!stat.description.empty()) + format_settings.out << prefix << indent << indent << "Description: " << stat.description << '\n'; + + if (!stat.used_keys.empty()) + { + format_settings.out << prefix << indent << indent << "Keys: " << stat.name << '\n'; + for (const auto & used_key : stat.used_keys) + format_settings.out << prefix << indent << indent << indent << used_key << '\n'; + } + + if (!stat.condition.empty()) + format_settings.out << prefix << indent << indent << "Condition: " << stat.condition << '\n'; + + format_settings.out << prefix << indent << indent << "Parts: " << stat.num_parts_after; + if (i) + format_settings.out << '/' << (*index_stats)[i - 1].num_parts_after; + format_settings.out << '\n'; + + format_settings.out << prefix << indent << indent << "Granules: " << stat.num_granules_after; + if (i) + format_settings.out << '/' << (*index_stats)[i - 1].num_granules_after; + format_settings.out << '\n'; + } + } +} + +} diff --git a/src/Processors/QueryPlan/ReadFromMergeTree.h b/src/Processors/QueryPlan/ReadFromMergeTree.h new file mode 100644 index 00000000000..1d6a4491588 --- /dev/null +++ b/src/Processors/QueryPlan/ReadFromMergeTree.h @@ -0,0 +1,113 @@ +#pragma once +#include +#include +#include +#include + +namespace DB +{ + +/// This step is created to read from MergeTree* table. +/// For now, it takes a list of parts and creates source from it. +class ReadFromMergeTree final : public ISourceStep +{ +public: + + enum class IndexType + { + None, + MinMax, + Partition, + PrimaryKey, + Skip, + }; + + /// This is a struct with information about applied indexes. + /// Is used for introspection only, in EXPLAIN query. + struct IndexStat + { + IndexType type; + std::string name; + std::string description; + std::string condition; + std::vector used_keys; + size_t num_parts_after; + size_t num_granules_after; + }; + + using IndexStats = std::vector; + using IndexStatPtr = std::unique_ptr; + + /// Part of settings which are needed for reading. + struct Settings + { + UInt64 max_block_size; + size_t preferred_block_size_bytes; + size_t preferred_max_column_in_block_size_bytes; + size_t min_marks_for_concurrent_read; + bool use_uncompressed_cache; + + MergeTreeReaderSettings reader_settings; + MergeTreeReadPool::BackoffSettings backoff_settings; + }; + + enum class ReadType + { + /// By default, read will use MergeTreeReadPool and return pipe with num_streams outputs. + /// If num_streams == 1, will read without pool, in order specified in parts. + Default, + /// Read in sorting key order. + /// Returned pipe will have the number of ports equals to parts.size(). + /// Parameter num_streams_ is ignored in this case. + /// User should add MergingSorted itself if needed. + InOrder, + /// The same as InOrder, but in reverse order. + /// For every part, read ranges and granules from end to begin. Also add ReverseTransform. 
+ InReverseOrder, + }; + + ReadFromMergeTree( + const MergeTreeData & storage_, + StorageMetadataPtr metadata_snapshot_, + String query_id_, + Names required_columns_, + RangesInDataParts parts_, + IndexStatPtr index_stats_, + PrewhereInfoPtr prewhere_info_, + Names virt_column_names_, + Settings settings_, + size_t num_streams_, + ReadType read_type_ + ); + + String getName() const override { return "ReadFromMergeTree"; } + + void initializePipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; + + void describeActions(FormatSettings & format_settings) const override; + void describeIndexes(FormatSettings & format_settings) const override; + +private: + const MergeTreeData & storage; + StorageMetadataPtr metadata_snapshot; + String query_id; + + Names required_columns; + RangesInDataParts parts; + IndexStatPtr index_stats; + PrewhereInfoPtr prewhere_info; + Names virt_column_names; + Settings settings; + + size_t num_streams; + ReadType read_type; + + Pipe read(); + Pipe readFromPool(); + Pipe readInOrder(); + + template + ProcessorPtr createSource(const RangesInDataPart & part); +}; + +} diff --git a/src/Processors/QueryPlan/ReverseRowsStep.cpp b/src/Processors/QueryPlan/ReverseRowsStep.cpp deleted file mode 100644 index 0a2e9f20cd9..00000000000 --- a/src/Processors/QueryPlan/ReverseRowsStep.cpp +++ /dev/null @@ -1,37 +0,0 @@ -#include -#include -#include - -namespace DB -{ - -static ITransformingStep::Traits getTraits() -{ - return ITransformingStep::Traits - { - { - .preserves_distinct_columns = true, - .returns_single_stream = false, - .preserves_number_of_streams = true, - .preserves_sorting = false, - }, - { - .preserves_number_of_rows = true, - } - }; -} - -ReverseRowsStep::ReverseRowsStep(const DataStream & input_stream_) - : ITransformingStep(input_stream_, input_stream_.header, getTraits()) -{ -} - -void ReverseRowsStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) -{ - pipeline.addSimpleTransform([&](const Block & header) - { - return std::make_shared(header); - }); -} - -} diff --git a/src/Processors/QueryPlan/ReverseRowsStep.h b/src/Processors/QueryPlan/ReverseRowsStep.h deleted file mode 100644 index 08d7833d130..00000000000 --- a/src/Processors/QueryPlan/ReverseRowsStep.h +++ /dev/null @@ -1,18 +0,0 @@ -#pragma once -#include - -namespace DB -{ - -/// Reverse rows in chunk. 
-class ReverseRowsStep : public ITransformingStep -{ -public: - explicit ReverseRowsStep(const DataStream & input_stream_); - - String getName() const override { return "ReverseRows"; } - - void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; -}; - -} diff --git a/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.cpp b/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.cpp index e1f72b2bb04..734e6db318d 100644 --- a/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.cpp +++ b/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.cpp @@ -28,7 +28,7 @@ SettingQuotaAndLimitsStep::SettingQuotaAndLimitsStep( StreamLocalLimits & limits_, SizeLimits & leaf_limits_, std::shared_ptr quota_, - std::shared_ptr context_) + ContextPtr context_) : ITransformingStep(input_stream_, input_stream_.header, getTraits()) , context(std::move(context_)) , storage(std::move(storage_)) diff --git a/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.h b/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.h index 167e644e26d..3c73c208b70 100644 --- a/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.h +++ b/src/Processors/QueryPlan/SettingQuotaAndLimitsStep.h @@ -1,4 +1,6 @@ #pragma once + +#include #include #include #include @@ -26,14 +28,14 @@ public: StreamLocalLimits & limits_, SizeLimits & leaf_limits_, std::shared_ptr quota_, - std::shared_ptr context_); + ContextPtr context_); String getName() const override { return "SettingQuotaAndLimits"; } void transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) override; private: - std::shared_ptr context; + ContextPtr context; StoragePtr storage; TableLockHolder table_lock; StreamLocalLimits limits; diff --git a/src/Processors/QueryPlan/UnionStep.cpp b/src/Processors/QueryPlan/UnionStep.cpp index 66fb3ba8593..7403dd0a12a 100644 --- a/src/Processors/QueryPlan/UnionStep.cpp +++ b/src/Processors/QueryPlan/UnionStep.cpp @@ -6,8 +6,25 @@ namespace DB { -UnionStep::UnionStep(DataStreams input_streams_, Block result_header, size_t max_threads_) - : header(std::move(result_header)) +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + +static Block checkHeaders(const DataStreams & input_streams) +{ + if (input_streams.empty()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot unite an empty set of query plan steps"); + + Block res = input_streams.front().header; + for (const auto & stream : input_streams) + assertBlocksHaveEqualStructure(stream.header, res, "UnionStep"); + + return res; +} + +UnionStep::UnionStep(DataStreams input_streams_, size_t max_threads_) + : header(checkHeaders(input_streams_)) , max_threads(max_threads_) { input_streams = std::move(input_streams_); @@ -18,7 +35,7 @@ UnionStep::UnionStep(DataStreams input_streams_, Block result_header, size_t max output_stream = DataStream{.header = header}; } -QueryPipelinePtr UnionStep::updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings & settings) +QueryPipelinePtr UnionStep::updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) { auto pipeline = std::make_unique(); QueryPipelineProcessorsCollector collector(*pipeline, this); @@ -30,7 +47,7 @@ QueryPipelinePtr UnionStep::updatePipeline(QueryPipelines pipelines, const Build return pipeline; } - *pipeline = QueryPipeline::unitePipelines(std::move(pipelines), output_stream->header, settings.getActionsSettings(), max_threads); + *pipeline = QueryPipeline::unitePipelines(std::move(pipelines), max_threads); processors = collector.detachProcessors(); 
return pipeline; diff --git a/src/Processors/QueryPlan/UnionStep.h b/src/Processors/QueryPlan/UnionStep.h index 2d997e0a36d..81bd033d045 100644 --- a/src/Processors/QueryPlan/UnionStep.h +++ b/src/Processors/QueryPlan/UnionStep.h @@ -9,14 +9,16 @@ class UnionStep : public IQueryPlanStep { public: /// max_threads is used to limit the number of threads for result pipeline. - UnionStep(DataStreams input_streams_, Block result_header, size_t max_threads_ = 0); + explicit UnionStep(DataStreams input_streams_, size_t max_threads_ = 0); String getName() const override { return "Union"; } - QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings & settings) override; + QueryPipelinePtr updatePipeline(QueryPipelines pipelines, const BuildQueryPipelineSettings &) override; void describePipeline(FormatSettings & settings) const override; + size_t getMaxThreads() const { return max_threads; } + private: Block header; size_t max_threads; diff --git a/src/Processors/QueryPlan/WindowStep.cpp b/src/Processors/QueryPlan/WindowStep.cpp index 76191eba51a..66c329acb4b 100644 --- a/src/Processors/QueryPlan/WindowStep.cpp +++ b/src/Processors/QueryPlan/WindowStep.cpp @@ -64,6 +64,11 @@ WindowStep::WindowStep(const DataStream & input_stream_, void WindowStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &) { + // This resize is needed for cases such as `over ()` when we don't have a + // sort node, and the input might have multiple streams. The sort node would + // have resized it. + pipeline.resize(1); + pipeline.addSimpleTransform([&](const Block & /*header*/) { return std::make_shared(input_header, diff --git a/src/Processors/Transforms/AggregatingInOrderTransform.cpp b/src/Processors/Transforms/AggregatingInOrderTransform.cpp index d448d31611d..d8b7742cdf4 100644 --- a/src/Processors/Transforms/AggregatingInOrderTransform.cpp +++ b/src/Processors/Transforms/AggregatingInOrderTransform.cpp @@ -1,6 +1,7 @@ #include #include #include +#include namespace DB { @@ -58,6 +59,7 @@ void AggregatingInOrderTransform::consume(Chunk chunk) LOG_TRACE(log, "Aggregating in order"); is_consume_started = true; } + src_rows += rows; src_bytes += chunk.bytes(); @@ -82,58 +84,55 @@ void AggregatingInOrderTransform::consume(Chunk chunk) res_aggregate_columns.resize(params->params.aggregates_size); for (size_t i = 0; i < params->params.keys_size; ++i) - { res_key_columns[i] = res_header.safeGetByPosition(i).type->createColumn(); - } + for (size_t i = 0; i < params->params.aggregates_size; ++i) - { res_aggregate_columns[i] = res_header.safeGetByPosition(i + params->params.keys_size).type->createColumn(); - } + params->aggregator.createStatesAndFillKeyColumnsWithSingleKey(variants, key_columns, key_begin, res_key_columns); + params->aggregator.addArenasToAggregateColumns(variants, res_aggregate_columns); ++cur_block_size; } - ssize_t mid = 0; - ssize_t high = 0; - ssize_t low = -1; + + /// Will split block into segments with the same key while (key_end != rows) { - high = rows; /// Find the first position of new (not current) key in current chunk - while (high - low > 1) - { - mid = (low + high) / 2; - if (!less(res_key_columns, key_columns, cur_block_size - 1, mid, group_by_description)) - low = mid; - else - high = mid; - } - key_end = high; + auto indices = ext::range(key_begin, rows); + auto it = std::upper_bound(indices.begin(), indices.end(), cur_block_size - 1, + [&](size_t lhs_row, size_t rhs_row) + { + return less(res_key_columns, key_columns, lhs_row, 
rhs_row, group_by_description); + }); + + key_end = (it == indices.end() ? rows : *it); + /// Add data to aggr. state if interval is not empty. Empty when haven't found current key in new block. if (key_begin != key_end) - { params->aggregator.executeOnIntervalWithoutKeyImpl(variants.without_key, key_begin, key_end, aggregate_function_instructions.data(), variants.aggregates_pool); - } - low = key_begin = key_end; /// We finalize last key aggregation state if a new key found. - if (key_begin != rows) + if (key_end != rows) { - params->aggregator.fillAggregateColumnsWithSingleKey(variants, res_aggregate_columns); + params->aggregator.addSingleKeyToAggregateColumns(variants, res_aggregate_columns); + /// If res_block_size is reached we have to stop consuming and generate the block. Save the extra rows into new chunk. if (cur_block_size == res_block_size) { Columns source_columns = chunk.detachColumns(); for (auto & source_column : source_columns) - source_column = source_column->cut(key_begin, rows - key_begin); + source_column = source_column->cut(key_end, rows - key_end); - current_chunk = Chunk(source_columns, rows - key_begin); + current_chunk = Chunk(source_columns, rows - key_end); src_rows -= current_chunk.getNumRows(); block_end_reached = true; need_generate = true; cur_block_size = 0; + variants.without_key = nullptr; + /// Arenas cannot be destroyed here, since later, in FinalizingSimpleTransform /// there will be finalizeChunk(), but even after /// finalizeChunk() we cannot destroy arena, since some memory @@ -155,10 +154,13 @@ void AggregatingInOrderTransform::consume(Chunk chunk) } /// We create a new state for the new key and update res_key_columns - params->aggregator.createStatesAndFillKeyColumnsWithSingleKey(variants, key_columns, key_begin, res_key_columns); + params->aggregator.createStatesAndFillKeyColumnsWithSingleKey(variants, key_columns, key_end, res_key_columns); ++cur_block_size; } + + key_begin = key_end; } + block_end_reached = false; } @@ -212,8 +214,8 @@ IProcessor::Status AggregatingInOrderTransform::prepare() { output.push(std::move(to_push_chunk)); output.finish(); - LOG_TRACE(log, "Aggregated. {} to {} rows (from {})", src_rows, res_rows, - formatReadableSizeWithBinarySuffix(src_bytes)); + LOG_DEBUG(log, "Aggregated. {} to {} rows (from {})", + src_rows, res_rows, formatReadableSizeWithBinarySuffix(src_bytes)); return Status::Finished; } if (input.isFinished()) @@ -234,7 +236,10 @@ IProcessor::Status AggregatingInOrderTransform::prepare() void AggregatingInOrderTransform::generate() { if (cur_block_size && is_consume_finished) - params->aggregator.fillAggregateColumnsWithSingleKey(variants, res_aggregate_columns); + { + params->aggregator.addSingleKeyToAggregateColumns(variants, res_aggregate_columns); + variants.without_key = nullptr; + } Block res = res_header.cloneEmpty(); diff --git a/src/Processors/Transforms/AggregatingTransform.cpp b/src/Processors/Transforms/AggregatingTransform.cpp index c6907202d31..3400d06dae3 100644 --- a/src/Processors/Transforms/AggregatingTransform.cpp +++ b/src/Processors/Transforms/AggregatingTransform.cpp @@ -541,7 +541,7 @@ void AggregatingTransform::initGenerate() double elapsed_seconds = watch.elapsedSeconds(); size_t rows = variants.sizeWithoutOverflowRow(); - LOG_TRACE(log, "Aggregated. {} to {} rows (from {}) in {} sec. ({} rows/sec., {}/sec.)", + LOG_DEBUG(log, "Aggregated. {} to {} rows (from {}) in {} sec. 
({} rows/sec., {}/sec.)", src_rows, rows, ReadableSize(src_bytes), elapsed_seconds, src_rows / elapsed_seconds, ReadableSize(src_bytes / elapsed_seconds)); @@ -599,7 +599,7 @@ void AggregatingTransform::initGenerate() pipe = Pipe::unitePipes(std::move(pipes)); } - LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), ReadableSize(files.sum_size_compressed), ReadableSize(files.sum_size_uncompressed)); + LOG_DEBUG(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), ReadableSize(files.sum_size_compressed), ReadableSize(files.sum_size_uncompressed)); addMergingAggregatedMemoryEfficientTransform(pipe, params, temporary_data_merge_threads); diff --git a/src/Processors/Transforms/CreatingSetsTransform.cpp b/src/Processors/Transforms/CreatingSetsTransform.cpp index c5fb4f3a952..a5b5958eff1 100644 --- a/src/Processors/Transforms/CreatingSetsTransform.cpp +++ b/src/Processors/Transforms/CreatingSetsTransform.cpp @@ -25,11 +25,11 @@ CreatingSetsTransform::CreatingSetsTransform( Block out_header_, SubqueryForSet subquery_for_set_, SizeLimits network_transfer_limits_, - const Context & context_) + ContextPtr context_) : IAccumulatingTransform(std::move(in_header_), std::move(out_header_)) + , WithContext(context_) , subquery(std::move(subquery_for_set_)) , network_transfer_limits(std::move(network_transfer_limits_)) - , context(context_) { } @@ -51,7 +51,7 @@ void CreatingSetsTransform::startSubquery() LOG_TRACE(log, "Filling temporary table."); if (subquery.table) - table_out = subquery.table->write({}, subquery.table->getInMemoryMetadataPtr(), context); + table_out = subquery.table->write({}, subquery.table->getInMemoryMetadataPtr(), getContext()); done_with_set = !subquery.set; done_with_join = !subquery.join; diff --git a/src/Processors/Transforms/CreatingSetsTransform.h b/src/Processors/Transforms/CreatingSetsTransform.h index 3452de63ea0..a5787b0a5f5 100644 --- a/src/Processors/Transforms/CreatingSetsTransform.h +++ b/src/Processors/Transforms/CreatingSetsTransform.h @@ -1,9 +1,12 @@ #pragma once -#include -#include -#include -#include + #include +#include +#include +#include +#include + +#include namespace DB { @@ -16,7 +19,7 @@ using ProgressCallback = std::function; /// Don't return any data. Sets are created when Finish status is returned. /// In general, several work() methods need to be called to finish. /// Independent processors is created for each subquery. 
-class CreatingSetsTransform : public IAccumulatingTransform +class CreatingSetsTransform : public IAccumulatingTransform, WithContext { public: CreatingSetsTransform( @@ -24,7 +27,7 @@ public: Block out_header_, SubqueryForSet subquery_for_set_, SizeLimits network_transfer_limits_, - const Context & context_); + ContextPtr context_); String getName() const override { return "CreatingSetsTransform"; } @@ -44,7 +47,6 @@ private: bool done_with_table = true; SizeLimits network_transfer_limits; - const Context & context; size_t rows_to_transfer = 0; size_t bytes_to_transfer = 0; diff --git a/src/Processors/Transforms/MergingAggregatedTransform.cpp b/src/Processors/Transforms/MergingAggregatedTransform.cpp index 1a04f85fd9c..ddc58d830da 100644 --- a/src/Processors/Transforms/MergingAggregatedTransform.cpp +++ b/src/Processors/Transforms/MergingAggregatedTransform.cpp @@ -52,7 +52,7 @@ Chunk MergingAggregatedTransform::generate() if (!generate_started) { generate_started = true; - LOG_TRACE(log, "Read {} blocks of partially aggregated data, total {} rows.", total_input_blocks, total_input_rows); + LOG_DEBUG(log, "Read {} blocks of partially aggregated data, total {} rows.", total_input_blocks, total_input_rows); /// Exception safety. Make iterator valid in case any method below throws. next_block = blocks.begin(); diff --git a/src/Processors/Transforms/WindowTransform.cpp b/src/Processors/Transforms/WindowTransform.cpp index 16d028f0fc1..0521912402c 100644 --- a/src/Processors/Transforms/WindowTransform.cpp +++ b/src/Processors/Transforms/WindowTransform.cpp @@ -257,10 +257,9 @@ WindowTransform::WindowTransform(const Block & input_header_, const IColumn * column = entry.column.get(); APPLY_FOR_TYPES(compareValuesWithOffset) - // Check that the offset type matches the window type. // Convert the offsets to the ORDER BY column type. We can't just check - // that it matches, because e.g. the int literals are always (U)Int64, - // but the column might be Int8 and so on. + // that the type matches, because e.g. the int literals are always + // (U)Int64, but the column might be Int8 and so on. if (window_description.frame.begin_type == WindowFrame::BoundaryType::Offset) { @@ -435,6 +434,9 @@ auto WindowTransform::moveRowNumberNoCheck(const RowNumber & _x, int offset) con assertValid(x); assert(offset <= 0); + // abs(offset) is less than INT_MAX, as checked in the parser, so + // this negation should always work. 
+ assert(offset >= -INT_MAX); if (x.row >= static_cast(-offset)) { x.row -= -offset; @@ -1375,6 +1377,8 @@ struct WindowFunctionRank final : public WindowFunction DataTypePtr getReturnType() const override { return std::make_shared(); } + bool allocatesMemoryInArena() const override { return false; } + void windowInsertResultInto(const WindowTransform * transform, size_t function_index) override { @@ -1395,6 +1399,8 @@ struct WindowFunctionDenseRank final : public WindowFunction DataTypePtr getReturnType() const override { return std::make_shared(); } + bool allocatesMemoryInArena() const override { return false; } + void windowInsertResultInto(const WindowTransform * transform, size_t function_index) override { @@ -1415,6 +1421,8 @@ struct WindowFunctionRowNumber final : public WindowFunction DataTypePtr getReturnType() const override { return std::make_shared(); } + bool allocatesMemoryInArena() const override { return false; } + void windowInsertResultInto(const WindowTransform * transform, size_t function_index) override { @@ -1481,6 +1489,8 @@ struct WindowFunctionLagLeadInFrame final : public WindowFunction DataTypePtr getReturnType() const override { return argument_types[0]; } + bool allocatesMemoryInArena() const override { return false; } + void windowInsertResultInto(const WindowTransform * transform, size_t function_index) override { @@ -1500,6 +1510,12 @@ struct WindowFunctionLagLeadInFrame final : public WindowFunction "The offset for function {} must be nonnegative, {} given", getName(), offset); } + if (offset > INT_MAX) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "The offset for function {} must be less than {}, {} given", + getName(), INT_MAX, offset); + } } const auto [target_row, offset_left] = transform->moveRowNumber( diff --git a/src/Processors/ya.make b/src/Processors/ya.make index 35abfbae756..18f285e60a2 100644 --- a/src/Processors/ya.make +++ b/src/Processors/ya.make @@ -93,7 +93,6 @@ SRCS( Pipe.cpp Port.cpp QueryPipeline.cpp - QueryPlan/AddingDelayedSourceStep.cpp QueryPlan/AggregatingStep.cpp QueryPlan/ArrayJoinStep.cpp QueryPlan/BuildQueryPipelineSettings.cpp @@ -125,9 +124,9 @@ SRCS( QueryPlan/PartialSortingStep.cpp QueryPlan/QueryIdHolder.cpp QueryPlan/QueryPlan.cpp + QueryPlan/ReadFromMergeTree.cpp QueryPlan/ReadFromPreparedSource.cpp QueryPlan/ReadNothingStep.cpp - QueryPlan/ReverseRowsStep.cpp QueryPlan/RollupStep.cpp QueryPlan/SettingQuotaAndLimitsStep.cpp QueryPlan/TotalsHavingStep.cpp diff --git a/src/Server/GRPCServer.cpp b/src/Server/GRPCServer.cpp index 52a2c106488..6f0f2d30123 100644 --- a/src/Server/GRPCServer.cpp +++ b/src/Server/GRPCServer.cpp @@ -521,7 +521,7 @@ namespace Poco::Logger * log = nullptr; std::shared_ptr session; - std::optional query_context; + ContextPtr query_context; std::optional query_scope; String query_text; ASTPtr ast; @@ -651,7 +651,7 @@ namespace } /// Create context. - query_context.emplace(iserver.context()); + query_context = Context::createCopy(iserver.context()); /// Authentication. query_context->setUser(user, password, user_address); @@ -665,11 +665,11 @@ namespace { session = query_context->acquireNamedSession( query_info.session_id(), getSessionTimeout(query_info, iserver.config()), query_info.session_check()); - query_context = session->context; + query_context = Context::createCopy(session->context); query_context->setSessionContext(session->context); } - query_scope.emplace(*query_context); + query_scope.emplace(query_context); /// Set client info. 
ClientInfo & client_info = query_context->getClientInfo(); @@ -741,26 +741,26 @@ namespace output_format = query_context->getDefaultFormat(); /// Set callback to create and fill external tables - query_context->setExternalTablesInitializer([this] (Context & context) + query_context->setExternalTablesInitializer([this] (ContextPtr context) { - if (&context != &*query_context) + if (context != query_context) throw Exception("Unexpected context in external tables initializer", ErrorCodes::LOGICAL_ERROR); createExternalTables(); }); /// Set callbacks to execute function input(). - query_context->setInputInitializer([this] (Context & context, const StoragePtr & input_storage) + query_context->setInputInitializer([this] (ContextPtr context, const StoragePtr & input_storage) { - if (&context != &query_context.value()) + if (context != query_context) throw Exception("Unexpected context in Input initializer", ErrorCodes::LOGICAL_ERROR); input_function_is_used = true; initializeBlockInputStream(input_storage->getInMemoryMetadataPtr()->getSampleBlock()); block_input_stream->readPrefix(); }); - query_context->setInputBlocksReaderCallback([this](Context & context) -> Block + query_context->setInputBlocksReaderCallback([this](ContextPtr context) -> Block { - if (&context != &query_context.value()) + if (context != query_context) throw Exception("Unexpected context in InputBlocksReader", ErrorCodes::LOGICAL_ERROR); auto block = block_input_stream->read(); if (!block) @@ -775,7 +775,7 @@ namespace query_end = insert_query->data; } String query(begin, query_end); - io = ::DB::executeQuery(query, *query_context, false, QueryProcessingStage::Complete, true, true); + io = ::DB::executeQuery(query, query_context, false, QueryProcessingStage::Complete, true, true); } void Call::processInput() @@ -878,10 +878,10 @@ namespace auto table_id = query_context->resolveStorageID(insert_query->table_id, Context::ResolveOrdinary); if (query_context->getSettingsRef().input_format_defaults_for_omitted_fields && table_id) { - StoragePtr storage = DatabaseCatalog::instance().getTable(table_id, *query_context); + StoragePtr storage = DatabaseCatalog::instance().getTable(table_id, query_context); const auto & columns = storage->getInMemoryMetadataPtr()->getColumns(); if (!columns.empty()) - block_input_stream = std::make_shared(block_input_stream, columns, *query_context); + block_input_stream = std::make_shared(block_input_stream, columns, query_context); } } } @@ -903,7 +903,7 @@ namespace StoragePtr storage; if (auto resolved = query_context->tryResolveStorageID(temporary_id, Context::ResolveExternal)) { - storage = DatabaseCatalog::instance().getTable(resolved, *query_context); + storage = DatabaseCatalog::instance().getTable(resolved, query_context); } else { @@ -918,7 +918,7 @@ namespace column.type = DataTypeFactory::instance().get(name_and_type.type()); columns.emplace_back(std::move(column)); } - auto temporary_table = TemporaryTableHolder(*query_context, ColumnsDescription{columns}, {}); + auto temporary_table = TemporaryTableHolder(query_context, ColumnsDescription{columns}, {}); storage = temporary_table.getTable(); query_context->addExternalTable(temporary_id.table_name, std::move(temporary_table)); } @@ -927,17 +927,17 @@ namespace { /// The data will be written directly to the table. 
auto metadata_snapshot = storage->getInMemoryMetadataPtr(); - auto out_stream = storage->write(ASTPtr(), metadata_snapshot, *query_context); + auto out_stream = storage->write(ASTPtr(), metadata_snapshot, query_context); ReadBufferFromMemory data(external_table.data().data(), external_table.data().size()); String format = external_table.format(); if (format.empty()) format = "TabSeparated"; - Context * external_table_context = &*query_context; - std::optional temp_context; + ContextPtr external_table_context = query_context; + ContextPtr temp_context; if (!external_table.settings().empty()) { - temp_context = *query_context; - external_table_context = &*temp_context; + temp_context = Context::createCopy(query_context); + external_table_context = temp_context; SettingsChanges settings_changes; for (const auto & [key, value] : external_table.settings()) settings_changes.push_back({key, value}); diff --git a/src/Server/HTTP/HTTPServer.cpp b/src/Server/HTTP/HTTPServer.cpp index 5554a0ee31d..42e6467d0af 100644 --- a/src/Server/HTTP/HTTPServer.cpp +++ b/src/Server/HTTP/HTTPServer.cpp @@ -6,7 +6,7 @@ namespace DB { HTTPServer::HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory_, UInt16 port_number, Poco::Net::HTTPServerParams::Ptr params) @@ -15,7 +15,7 @@ HTTPServer::HTTPServer( } HTTPServer::HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory_, const Poco::Net::ServerSocket & socket, Poco::Net::HTTPServerParams::Ptr params) @@ -24,7 +24,7 @@ HTTPServer::HTTPServer( } HTTPServer::HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory_, Poco::ThreadPool & thread_pool, const Poco::Net::ServerSocket & socket, diff --git a/src/Server/HTTP/HTTPServer.h b/src/Server/HTTP/HTTPServer.h index 3d2a2ac9cf4..d95bdff0baa 100644 --- a/src/Server/HTTP/HTTPServer.h +++ b/src/Server/HTTP/HTTPServer.h @@ -17,19 +17,19 @@ class HTTPServer : public Poco::Net::TCPServer { public: explicit HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory, UInt16 port_number = 80, Poco::Net::HTTPServerParams::Ptr params = new Poco::Net::HTTPServerParams); HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory, const Poco::Net::ServerSocket & socket, Poco::Net::HTTPServerParams::Ptr params); HTTPServer( - const Context & context, + ContextPtr context, HTTPRequestHandlerFactoryPtr factory, Poco::ThreadPool & thread_pool, const Poco::Net::ServerSocket & socket, diff --git a/src/Server/HTTP/HTTPServerConnection.cpp b/src/Server/HTTP/HTTPServerConnection.cpp index 7a6cd4cab54..19985949005 100644 --- a/src/Server/HTTP/HTTPServerConnection.cpp +++ b/src/Server/HTTP/HTTPServerConnection.cpp @@ -6,11 +6,11 @@ namespace DB { HTTPServerConnection::HTTPServerConnection( - const Context & context_, + ContextPtr context_, const Poco::Net::StreamSocket & socket, Poco::Net::HTTPServerParams::Ptr params_, HTTPRequestHandlerFactoryPtr factory_) - : TCPServerConnection(socket), context(context_), params(params_), factory(factory_), stopped(false) + : TCPServerConnection(socket), context(Context::createCopy(context_)), params(params_), factory(factory_), stopped(false) { poco_check_ptr(factory); } diff --git a/src/Server/HTTP/HTTPServerConnection.h b/src/Server/HTTP/HTTPServerConnection.h index 55b6e921d9f..1c7ae6cd2b7 100644 --- a/src/Server/HTTP/HTTPServerConnection.h +++ b/src/Server/HTTP/HTTPServerConnection.h @@ -14,7 +14,7 
@@ class HTTPServerConnection : public Poco::Net::TCPServerConnection { public: HTTPServerConnection( - const Context & context, + ContextPtr context, const Poco::Net::StreamSocket & socket, Poco::Net::HTTPServerParams::Ptr params, HTTPRequestHandlerFactoryPtr factory); @@ -25,7 +25,7 @@ protected: static void sendErrorResponse(Poco::Net::HTTPServerSession & session, Poco::Net::HTTPResponse::HTTPStatus status); private: - Context context; + ContextPtr context; Poco::Net::HTTPServerParams::Ptr params; HTTPRequestHandlerFactoryPtr factory; bool stopped; diff --git a/src/Server/HTTP/HTTPServerConnectionFactory.cpp b/src/Server/HTTP/HTTPServerConnectionFactory.cpp index 876ccb9096b..0e4fb6cfcec 100644 --- a/src/Server/HTTP/HTTPServerConnectionFactory.cpp +++ b/src/Server/HTTP/HTTPServerConnectionFactory.cpp @@ -5,8 +5,8 @@ namespace DB { HTTPServerConnectionFactory::HTTPServerConnectionFactory( - const Context & context_, Poco::Net::HTTPServerParams::Ptr params_, HTTPRequestHandlerFactoryPtr factory_) - : context(context_), params(params_), factory(factory_) + ContextPtr context_, Poco::Net::HTTPServerParams::Ptr params_, HTTPRequestHandlerFactoryPtr factory_) + : context(Context::createCopy(context_)), params(params_), factory(factory_) { poco_check_ptr(factory); } diff --git a/src/Server/HTTP/HTTPServerConnectionFactory.h b/src/Server/HTTP/HTTPServerConnectionFactory.h index 4f8ca43cbfb..3f11eca0f69 100644 --- a/src/Server/HTTP/HTTPServerConnectionFactory.h +++ b/src/Server/HTTP/HTTPServerConnectionFactory.h @@ -12,12 +12,12 @@ namespace DB class HTTPServerConnectionFactory : public Poco::Net::TCPServerConnectionFactory { public: - HTTPServerConnectionFactory(const Context & context, Poco::Net::HTTPServerParams::Ptr params, HTTPRequestHandlerFactoryPtr factory); + HTTPServerConnectionFactory(ContextPtr context, Poco::Net::HTTPServerParams::Ptr params, HTTPRequestHandlerFactoryPtr factory); Poco::Net::TCPServerConnection * createConnection(const Poco::Net::StreamSocket & socket) override; private: - Context context; + ContextPtr context; Poco::Net::HTTPServerParams::Ptr params; HTTPRequestHandlerFactoryPtr factory; }; diff --git a/src/Server/HTTP/HTTPServerRequest.cpp b/src/Server/HTTP/HTTPServerRequest.cpp index ab8b803c29d..69dc8d4dbda 100644 --- a/src/Server/HTTP/HTTPServerRequest.cpp +++ b/src/Server/HTTP/HTTPServerRequest.cpp @@ -15,8 +15,8 @@ namespace DB { -HTTPServerRequest::HTTPServerRequest(const Context & context, HTTPServerResponse & response, Poco::Net::HTTPServerSession & session) - : max_uri_size(context.getSettingsRef().http_max_uri_size) +HTTPServerRequest::HTTPServerRequest(ContextPtr context, HTTPServerResponse & response, Poco::Net::HTTPServerSession & session) + : max_uri_size(context->getSettingsRef().http_max_uri_size) { response.attachRequest(this); @@ -24,8 +24,8 @@ HTTPServerRequest::HTTPServerRequest(const Context & context, HTTPServerResponse client_address = session.clientAddress(); server_address = session.serverAddress(); - auto receive_timeout = context.getSettingsRef().http_receive_timeout; - auto send_timeout = context.getSettingsRef().http_send_timeout; + auto receive_timeout = context->getSettingsRef().http_receive_timeout; + auto send_timeout = context->getSettingsRef().http_send_timeout; session.socket().setReceiveTimeout(receive_timeout); session.socket().setSendTimeout(send_timeout); diff --git a/src/Server/HTTP/HTTPServerRequest.h b/src/Server/HTTP/HTTPServerRequest.h index a0f022f32ec..a560f907cf0 100644 --- a/src/Server/HTTP/HTTPServerRequest.h +++ 
b/src/Server/HTTP/HTTPServerRequest.h @@ -1,5 +1,6 @@ #pragma once +#include #include #include @@ -8,14 +9,13 @@ namespace DB { -class Context; class HTTPServerResponse; class ReadBufferFromPocoSocket; class HTTPServerRequest : public HTTPRequest { public: - HTTPServerRequest(const Context & context, HTTPServerResponse & response, Poco::Net::HTTPServerSession & session); + HTTPServerRequest(ContextPtr context, HTTPServerResponse & response, Poco::Net::HTTPServerSession & session); /// FIXME: it's a little bit inconvenient interface. The rationale is that all other ReadBuffer's wrap each other /// via unique_ptr - but we can't inherit HTTPServerRequest from ReadBuffer and pass it around, diff --git a/src/Server/HTTP/ReadHeaders.cpp b/src/Server/HTTP/ReadHeaders.cpp index 77ec48c11b1..2fc2de8321a 100644 --- a/src/Server/HTTP/ReadHeaders.cpp +++ b/src/Server/HTTP/ReadHeaders.cpp @@ -51,7 +51,7 @@ void readHeaders( if (name.size() > max_name_length) throw Poco::Net::MessageException("Field name is too long"); if (ch != ':') - throw Poco::Net::MessageException("Field name is invalid or no colon found"); + throw Poco::Net::MessageException(fmt::format("Field name is invalid or no colon found: \"{}\"", name)); } in.ignore(); diff --git a/src/Server/HTTP/WriteBufferFromHTTPServerResponse.cpp b/src/Server/HTTP/WriteBufferFromHTTPServerResponse.cpp index 355af038da9..a4fe3649e6f 100644 --- a/src/Server/HTTP/WriteBufferFromHTTPServerResponse.cpp +++ b/src/Server/HTTP/WriteBufferFromHTTPServerResponse.cpp @@ -196,7 +196,7 @@ void WriteBufferFromHTTPServerResponse::finalize() WriteBufferFromHTTPServerResponse::~WriteBufferFromHTTPServerResponse() { /// FIXME move final flush into the caller - MemoryTracker::LockExceptionInThread lock; + MemoryTracker::LockExceptionInThread lock(VariableContext::Global); finalize(); } diff --git a/src/Server/HTTPHandler.cpp b/src/Server/HTTPHandler.cpp index 6b4981beae0..8aed5d20f74 100644 --- a/src/Server/HTTPHandler.cpp +++ b/src/Server/HTTPHandler.cpp @@ -277,7 +277,7 @@ HTTPHandler::~HTTPHandler() bool HTTPHandler::authenticateUser( - Context & context, + ContextPtr context, HTTPServerRequest & request, HTMLForm & params, HTTPServerResponse & response) @@ -381,7 +381,7 @@ bool HTTPHandler::authenticateUser( /// Set client info. It will be used for quota accounting parameters in 'setUser' method. - ClientInfo & client_info = context.getClientInfo(); + ClientInfo & client_info = context->getClientInfo(); client_info.query_kind = ClientInfo::QueryKind::INITIAL_QUERY; client_info.interface = ClientInfo::Interface::HTTP; @@ -398,7 +398,7 @@ bool HTTPHandler::authenticateUser( try { - context.setUser(*request_credentials, request.clientAddress()); + context->setUser(*request_credentials, request.clientAddress()); } catch (const Authentication::Require & required_credentials) { @@ -430,7 +430,7 @@ bool HTTPHandler::authenticateUser( request_credentials.reset(); if (!quota_key.empty()) - context.setQuotaKey(quota_key); + context->setQuotaKey(quota_key); /// Query sent through HTTP interface is initial. 
client_info.initial_user = client_info.current_user; @@ -441,7 +441,7 @@ bool HTTPHandler::authenticateUser( void HTTPHandler::processQuery( - Context & context, + ContextPtr context, HTTPServerRequest & request, HTMLForm & params, HTTPServerResponse & response, @@ -470,10 +470,10 @@ void HTTPHandler::processQuery( session_timeout = parseSessionTimeout(config, params); std::string session_check = params.get("session_check", ""); - session = context.acquireNamedSession(session_id, session_timeout, session_check == "1"); + session = context->acquireNamedSession(session_id, session_timeout, session_check == "1"); - context = session->context; - context.setSessionContext(session->context); + context->copyFrom(session->context); /// FIXME: maybe move this part to HandleRequest(), copyFrom() is used only here. + context->setSessionContext(session->context); } SCOPE_EXIT({ @@ -489,7 +489,7 @@ void HTTPHandler::processQuery( { std::string opentelemetry_traceparent = request.get("traceparent"); std::string error; - if (!context.getClientInfo().client_trace_context.parseTraceparentHeader( + if (!context->getClientInfo().client_trace_context.parseTraceparentHeader( opentelemetry_traceparent, error)) { throw Exception(ErrorCodes::BAD_REQUEST_PARAMETER, @@ -497,14 +497,14 @@ void HTTPHandler::processQuery( opentelemetry_traceparent, error); } - context.getClientInfo().client_trace_context.tracestate = request.get("tracestate", ""); + context->getClientInfo().client_trace_context.tracestate = request.get("tracestate", ""); } #endif // Set the query id supplied by the user, if any, and also update the OpenTelemetry fields. - context.setCurrentQueryId(params.get("query_id", request.get("X-ClickHouse-Query-Id", ""))); + context->setCurrentQueryId(params.get("query_id", request.get("X-ClickHouse-Query-Id", ""))); - ClientInfo & client_info = context.getClientInfo(); + ClientInfo & client_info = context->getClientInfo(); client_info.initial_query_id = client_info.current_query_id; /// The client can pass a HTTP header indicating supported compression method (gzip or deflate). @@ -570,7 +570,7 @@ void HTTPHandler::processQuery( if (buffer_until_eof) { - const std::string tmp_path(context.getTemporaryVolume()->getDisk()->getPath()); + const std::string tmp_path(context->getTemporaryVolume()->getDisk()->getPath()); const std::string tmp_path_template(tmp_path + "http_buffers/"); auto create_tmp_disk_buffer = [tmp_path_template] (const WriteBufferPtr &) @@ -658,13 +658,13 @@ void HTTPHandler::processQuery( /// In theory if initially readonly = 0, the client can change any setting and then set readonly /// to some other value. - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); /// Only readonly queries are allowed for HTTP GET requests. 
if (request.getMethod() == HTTPServerRequest::HTTP_GET) { if (settings.readonly == 0) - context.setSetting("readonly", 2); + context->setSetting("readonly", 2); } bool has_external_data = startsWith(request.getContentType(), "multipart/form-data"); @@ -707,14 +707,14 @@ void HTTPHandler::processQuery( } if (!database.empty()) - context.setCurrentDatabase(database); + context->setCurrentDatabase(database); if (!default_format.empty()) - context.setDefaultFormat(default_format); + context->setDefaultFormat(default_format); /// For external data we also want settings - context.checkSettingsConstraints(settings_changes); - context.applySettingsChanges(settings_changes); + context->checkSettingsConstraints(settings_changes); + context->applySettingsChanges(settings_changes); const auto & query = getQuery(request, params, context); std::unique_ptr in_param = std::make_unique(query); @@ -737,11 +737,11 @@ void HTTPHandler::processQuery( /// Origin header. used_output.out->addHeaderCORS(settings.add_http_cors_header && !request.get("Origin", "").empty()); - auto append_callback = [&context] (ProgressCallback callback) + auto append_callback = [context] (ProgressCallback callback) { - auto prev = context.getProgressCallback(); + auto prev = context->getProgressCallback(); - context.setProgressCallback([prev, callback] (const Progress & progress) + context->setProgressCallback([prev, callback] (const Progress & progress) { if (prev) prev(progress); @@ -756,12 +756,12 @@ void HTTPHandler::processQuery( if (settings.readonly > 0 && settings.cancel_http_readonly_queries_on_client_close) { - append_callback([&context, &request](const Progress &) + append_callback([context, &request](const Progress &) { /// Assume that at the point this method is called no one is reading data from the socket any more: /// should be true for read-only queries. if (!request.checkPeerConnected()) - context.killCurrentQuery(); + context->killCurrentQuery(); }); } @@ -877,7 +877,7 @@ void HTTPHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse if (!request_context) { // Context should be initialized before anything, for correct memory accounting. - request_context = std::make_unique(server.context()); + request_context = Context::createCopy(server.context()); request_credentials.reset(); } @@ -899,6 +899,7 @@ void HTTPHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse HTMLForm params(request); with_stacktrace = params.getParsed("stacktrace", false); + /// FIXME: maybe this check is already unnecessary. /// Workaround. Poco does not detect 411 Length Required case. if (request.getMethod() == HTTPRequest::HTTP_POST && !request.getChunkedTransferEncoding() && !request.hasContentLength()) { @@ -907,7 +908,7 @@ void HTTPHandler::handleRequest(HTTPServerRequest & request, HTTPServerResponse ErrorCodes::HTTP_LENGTH_REQUIRED); } - processQuery(*request_context, request, params, response, used_output, query_scope); + processQuery(request_context, request, params, response, used_output, query_scope); LOG_DEBUG(log, (request_credentials ? "Authentication in progress..." : "Done processing query")); } catch (...) 
@@ -936,7 +937,7 @@ DynamicQueryHandler::DynamicQueryHandler(IServer & server_, const std::string & { } -bool DynamicQueryHandler::customizeQueryParam(Context & context, const std::string & key, const std::string & value) +bool DynamicQueryHandler::customizeQueryParam(ContextPtr context, const std::string & key, const std::string & value) { if (key == param_name) return true; /// do nothing @@ -945,14 +946,14 @@ bool DynamicQueryHandler::customizeQueryParam(Context & context, const std::stri { /// Save name and values of substitution in dictionary. const String parameter_name = key.substr(strlen("param_")); - context.setQueryParameter(parameter_name, value); + context->setQueryParameter(parameter_name, value); return true; } return false; } -std::string DynamicQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) +std::string DynamicQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) { if (likely(!startsWith(request.getContentType(), "multipart/form-data"))) { @@ -978,25 +979,31 @@ std::string DynamicQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm } PredefinedQueryHandler::PredefinedQueryHandler( - IServer & server_, const NameSet & receive_params_, const std::string & predefined_query_ - , const CompiledRegexPtr & url_regex_, const std::unordered_map & header_name_with_regex_) - : HTTPHandler(server_, "PredefinedQueryHandler"), receive_params(receive_params_), predefined_query(predefined_query_) - , url_regex(url_regex_), header_name_with_capture_regex(header_name_with_regex_) + IServer & server_, + const NameSet & receive_params_, + const std::string & predefined_query_, + const CompiledRegexPtr & url_regex_, + const std::unordered_map & header_name_with_regex_) + : HTTPHandler(server_, "PredefinedQueryHandler") + , receive_params(receive_params_) + , predefined_query(predefined_query_) + , url_regex(url_regex_) + , header_name_with_capture_regex(header_name_with_regex_) { } -bool PredefinedQueryHandler::customizeQueryParam(Context & context, const std::string & key, const std::string & value) +bool PredefinedQueryHandler::customizeQueryParam(ContextPtr context, const std::string & key, const std::string & value) { if (receive_params.count(key)) { - context.setQueryParameter(key, value); + context->setQueryParameter(key, value); return true; } return false; } -void PredefinedQueryHandler::customizeContext(HTTPServerRequest & request, DB::Context & context) +void PredefinedQueryHandler::customizeContext(HTTPServerRequest & request, ContextPtr context) { /// If in the configuration file, the handler's header is regex and contains named capture group /// We will extract regex named capture groups as query parameters @@ -1014,7 +1021,7 @@ void PredefinedQueryHandler::customizeContext(HTTPServerRequest & request, DB::C const auto & capturing_value = matches[capturing_index]; if (capturing_value.data()) - context.setQueryParameter(capturing_name, String(capturing_value.data(), capturing_value.size())); + context->setQueryParameter(capturing_name, String(capturing_value.data(), capturing_value.size())); } } }; @@ -1032,7 +1039,7 @@ void PredefinedQueryHandler::customizeContext(HTTPServerRequest & request, DB::C } } -std::string PredefinedQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) +std::string PredefinedQueryHandler::getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) { if (unlikely(startsWith(request.getContentType(), 
"multipart/form-data"))) { diff --git a/src/Server/HTTPHandler.h b/src/Server/HTTPHandler.h index 0f1d75664bd..4715949cb87 100644 --- a/src/Server/HTTPHandler.h +++ b/src/Server/HTTPHandler.h @@ -18,7 +18,6 @@ namespace Poco { class Logger; } namespace DB { -class Context; class Credentials; class IServer; class WriteBufferFromHTTPServerResponse; @@ -34,11 +33,11 @@ public: void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response) override; /// This method is called right before the query execution. - virtual void customizeContext(HTTPServerRequest & /* request */, Context & /* context */) {} + virtual void customizeContext(HTTPServerRequest & /* request */, ContextPtr /* context */) {} - virtual bool customizeQueryParam(Context & context, const std::string & key, const std::string & value) = 0; + virtual bool customizeQueryParam(ContextPtr context, const std::string & key, const std::string & value) = 0; - virtual std::string getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) = 0; + virtual std::string getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) = 0; private: struct Output @@ -74,7 +73,7 @@ private: // The request_context and the request_credentials instances may outlive a single request/response loop. // This happens only when the authentication mechanism requires more than a single request/response exchange (e.g., SPNEGO). - std::unique_ptr request_context; + ContextPtr request_context; std::unique_ptr request_credentials; // Returns true when the user successfully authenticated, @@ -83,14 +82,14 @@ private: // the request_context and request_credentials instances are preserved. // Throws an exception if authentication failed. bool authenticateUser( - Context & context, + ContextPtr context, HTTPServerRequest & request, HTMLForm & params, HTTPServerResponse & response); /// Also initializes 'used_output'. 
void processQuery( - Context & context, + ContextPtr context, HTTPServerRequest & request, HTMLForm & params, HTTPServerResponse & response, @@ -114,9 +113,9 @@ private: public: explicit DynamicQueryHandler(IServer & server_, const std::string & param_name_ = "query"); - std::string getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) override; + std::string getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) override; - bool customizeQueryParam(Context &context, const std::string &key, const std::string &value) override; + bool customizeQueryParam(ContextPtr context, const std::string &key, const std::string &value) override; }; class PredefinedQueryHandler : public HTTPHandler @@ -131,11 +130,11 @@ public: IServer & server_, const NameSet & receive_params_, const std::string & predefined_query_ , const CompiledRegexPtr & url_regex_, const std::unordered_map & header_name_with_regex_); - virtual void customizeContext(HTTPServerRequest & request, Context & context) override; + virtual void customizeContext(HTTPServerRequest & request, ContextPtr context) override; - std::string getQuery(HTTPServerRequest & request, HTMLForm & params, Context & context) override; + std::string getQuery(HTTPServerRequest & request, HTMLForm & params, ContextPtr context) override; - bool customizeQueryParam(Context & context, const std::string & key, const std::string & value) override; + bool customizeQueryParam(ContextPtr context, const std::string & key, const std::string & value) override; }; } diff --git a/src/Server/IServer.h b/src/Server/IServer.h index 131e7443646..80736fda3ea 100644 --- a/src/Server/IServer.h +++ b/src/Server/IServer.h @@ -1,5 +1,7 @@ #pragma once +#include + namespace Poco { @@ -7,6 +9,7 @@ namespace Util { class LayeredConfiguration; } + class Logger; } @@ -15,8 +18,6 @@ class Logger; namespace DB { -class Context; - class IServer { public: @@ -27,12 +28,12 @@ public: virtual Poco::Logger & logger() const = 0; /// Returns global application's context. - virtual Context & context() const = 0; + virtual ContextPtr context() const = 0; /// Returns true if shutdown signaled. 
virtual bool isCancelled() const = 0; - virtual ~IServer() {} + virtual ~IServer() = default; }; } diff --git a/src/Server/InterserverIOHTTPHandler.cpp b/src/Server/InterserverIOHTTPHandler.cpp index 426e4ca2138..64af8860b23 100644 --- a/src/Server/InterserverIOHTTPHandler.cpp +++ b/src/Server/InterserverIOHTTPHandler.cpp @@ -25,29 +25,26 @@ namespace ErrorCodes std::pair InterserverIOHTTPHandler::checkAuthentication(HTTPServerRequest & request) const { - const auto & config = server.config(); - - if (config.has("interserver_http_credentials.user")) + auto server_credentials = server.context()->getInterserverCredentials(); + if (server_credentials) { if (!request.hasCredentials()) - return {"Server requires HTTP Basic authentication, but client doesn't provide it", false}; + return server_credentials->isValidUser("", ""); + String scheme, info; request.getCredentials(scheme, info); if (scheme != "Basic") return {"Server requires HTTP Basic authentication but client provides another method", false}; - String user = config.getString("interserver_http_credentials.user"); - String password = config.getString("interserver_http_credentials.password", ""); - Poco::Net::HTTPBasicCredentials credentials(info); - if (std::make_pair(user, password) != std::make_pair(credentials.getUsername(), credentials.getPassword())) - return {"Incorrect user or password in HTTP Basic authentication", false}; + return server_credentials->isValidUser(credentials.getUsername(), credentials.getPassword()); } else if (request.hasCredentials()) { return {"Client requires HTTP Basic authentication, but server doesn't provide it", false}; } + return {"", true}; } @@ -62,7 +59,7 @@ void InterserverIOHTTPHandler::processQuery(HTTPServerRequest & request, HTTPSer auto & body = request.getStream(); - auto endpoint = server.context().getInterserverIOHandler().getEndpoint(endpoint_name); + auto endpoint = server.context()->getInterserverIOHandler().getEndpoint(endpoint_name); /// Locked for read while query processing std::shared_lock lock(endpoint->rwlock); if (endpoint->blocker.isCancelled()) diff --git a/src/Server/InterserverIOHTTPHandler.h b/src/Server/InterserverIOHTTPHandler.h index 47892aa678f..c0d776115e1 100644 --- a/src/Server/InterserverIOHTTPHandler.h +++ b/src/Server/InterserverIOHTTPHandler.h @@ -2,10 +2,12 @@ #include #include +#include #include #include +#include namespace CurrentMetrics diff --git a/src/Server/KeeperTCPHandler.cpp b/src/Server/KeeperTCPHandler.cpp index bf725581a29..1dadd3437f7 100644 --- a/src/Server/KeeperTCPHandler.cpp +++ b/src/Server/KeeperTCPHandler.cpp @@ -192,11 +192,11 @@ struct SocketInterruptablePollWrapper KeeperTCPHandler::KeeperTCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_) : Poco::Net::TCPServerConnection(socket_) , server(server_) - , log(&Poco::Logger::get("KeeperTCPHandler")) - , global_context(server.context()) - , nu_keeper_storage_dispatcher(global_context.getKeeperStorageDispatcher()) - , operation_timeout(0, global_context.getConfigRef().getUInt("keeper_server.operation_timeout_ms", Coordination::DEFAULT_OPERATION_TIMEOUT_MS) * 1000) - , session_timeout(0, global_context.getConfigRef().getUInt("keeper_server.session_timeout_ms", Coordination::DEFAULT_SESSION_TIMEOUT_MS) * 1000) + , log(&Poco::Logger::get("NuKeeperTCPHandler")) + , global_context(Context::createCopy(server.context())) + , nu_keeper_storage_dispatcher(global_context->getKeeperStorageDispatcher()) + , operation_timeout(0, 
global_context->getConfigRef().getUInt("test_keeper_server.operation_timeout_ms", Coordination::DEFAULT_OPERATION_TIMEOUT_MS) * 1000) + , session_timeout(0, global_context->getConfigRef().getUInt("test_keeper_server.session_timeout_ms", Coordination::DEFAULT_SESSION_TIMEOUT_MS) * 1000) , poll_wrapper(std::make_unique(socket_)) , responses(std::make_unique()) { @@ -258,8 +258,8 @@ void KeeperTCPHandler::runImpl() { setThreadName("TstKprHandler"); ThreadStatus thread_status; - auto global_receive_timeout = global_context.getSettingsRef().receive_timeout; - auto global_send_timeout = global_context.getSettingsRef().send_timeout; + auto global_receive_timeout = global_context->getSettingsRef().receive_timeout; + auto global_send_timeout = global_context->getSettingsRef().send_timeout; socket().setReceiveTimeout(global_receive_timeout); socket().setSendTimeout(global_send_timeout); diff --git a/src/Server/KeeperTCPHandler.h b/src/Server/KeeperTCPHandler.h index fecaf1cc38f..6c3929198c0 100644 --- a/src/Server/KeeperTCPHandler.h +++ b/src/Server/KeeperTCPHandler.h @@ -37,7 +37,7 @@ public: private: IServer & server; Poco::Logger * log; - Context global_context; + ContextPtr global_context; std::shared_ptr nu_keeper_storage_dispatcher; Poco::Timespan operation_timeout; Poco::Timespan session_timeout; diff --git a/src/Server/KeeperTCPHandlerFactory.h b/src/Server/KeeperTCPHandlerFactory.h index adeb829b4c3..132a8b96c23 100644 --- a/src/Server/KeeperTCPHandlerFactory.h +++ b/src/Server/KeeperTCPHandlerFactory.h @@ -5,6 +5,7 @@ #include #include #include +#include namespace DB { @@ -21,9 +22,9 @@ private: void run() override {} }; public: - KeeperTCPHandlerFactory(IServer & server_) + KeeperTCPHandlerFactory(IServer & server_, bool secure) : server(server_) - , log(&Poco::Logger::get("KeeperTCPHandlerFactory")) + , log(&Poco::Logger::get(std::string{"KeeperTCP"} + (secure ? 
"S" : "") + "HandlerFactory")) { } diff --git a/src/Server/MySQLHandler.cpp b/src/Server/MySQLHandler.cpp index 75c88a6ff93..7b1df092aa1 100644 --- a/src/Server/MySQLHandler.cpp +++ b/src/Server/MySQLHandler.cpp @@ -72,7 +72,7 @@ MySQLHandler::MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & so : Poco::Net::TCPServerConnection(socket_) , server(server_) , log(&Poco::Logger::get("MySQLHandler")) - , connection_context(server.context()) + , connection_context(Context::createCopy(server.context())) , connection_id(connection_id_) , auth_plugin(new MySQLProtocol::Authentication::Native41()) { @@ -89,14 +89,14 @@ void MySQLHandler::run() { setThreadName("MySQLHandler"); ThreadStatus thread_status; - connection_context.makeSessionContext(); - connection_context.getClientInfo().interface = ClientInfo::Interface::MYSQL; - connection_context.setDefaultFormat("MySQLWire"); - connection_context.getClientInfo().connection_id = connection_id; + connection_context->makeSessionContext(); + connection_context->getClientInfo().interface = ClientInfo::Interface::MYSQL; + connection_context->setDefaultFormat("MySQLWire"); + connection_context->getClientInfo().connection_id = connection_id; in = std::make_shared(socket()); out = std::make_shared(socket()); - packet_endpoint = std::make_shared(*in, *out, connection_context.mysql.sequence_id); + packet_endpoint = std::make_shared(*in, *out, connection_context->mysql.sequence_id); try { @@ -108,11 +108,11 @@ void MySQLHandler::run() HandshakeResponse handshake_response; finishHandshake(handshake_response); - connection_context.mysql.client_capabilities = handshake_response.capability_flags; + connection_context->mysql.client_capabilities = handshake_response.capability_flags; if (handshake_response.max_packet_size) - connection_context.mysql.max_packet_size = handshake_response.max_packet_size; - if (!connection_context.mysql.max_packet_size) - connection_context.mysql.max_packet_size = MAX_PACKET_LENGTH; + connection_context->mysql.max_packet_size = handshake_response.max_packet_size; + if (!connection_context->mysql.max_packet_size) + connection_context->mysql.max_packet_size = MAX_PACKET_LENGTH; LOG_TRACE(log, "Capabilities: {}, max_packet_size: {}, character_set: {}, user: {}, auth_response length: {}, database: {}, auth_plugin_name: {}", @@ -133,8 +133,8 @@ void MySQLHandler::run() try { if (!handshake_response.database.empty()) - connection_context.setCurrentDatabase(handshake_response.database); - connection_context.setCurrentQueryId(Poco::format("mysql:%lu", connection_id)); + connection_context->setCurrentDatabase(handshake_response.database); + connection_context->setCurrentQueryId(Poco::format("mysql:%lu", connection_id)); } catch (const Exception & exc) @@ -252,7 +252,7 @@ void MySQLHandler::authenticate(const String & user_name, const String & auth_pl try { // For compatibility with JavaScript MySQL client, Native41 authentication plugin is used when possible (if password is specified using double SHA1). Otherwise SHA256 plugin is used. 
- auto user = connection_context.getAccessControlManager().read(user_name); + auto user = connection_context->getAccessControlManager().read(user_name); const DB::Authentication::Type user_auth_type = user->authentication.getType(); if (user_auth_type == DB::Authentication::SHA256_PASSWORD) { @@ -276,7 +276,7 @@ void MySQLHandler::comInitDB(ReadBuffer & payload) String database; readStringUntilEOF(database, payload); LOG_DEBUG(log, "Setting current database to {}", database); - connection_context.setCurrentDatabase(database); + connection_context->setCurrentDatabase(database); packet_endpoint->sendPacket(OKPacket(0, client_capability_flags, 0, 0, 1), true); } @@ -284,7 +284,7 @@ void MySQLHandler::comFieldList(ReadBuffer & payload) { ComFieldList packet; packet.readPayloadWithUnpacked(payload); - String database = connection_context.getCurrentDatabase(); + String database = connection_context->getCurrentDatabase(); StoragePtr table_ptr = DatabaseCatalog::instance().getTable({database, packet.table}, connection_context); auto metadata_snapshot = table_ptr->getInMemoryMetadataPtr(); for (const NameAndTypePair & column : metadata_snapshot->getColumns().getAll()) @@ -332,11 +332,11 @@ void MySQLHandler::comQuery(ReadBuffer & payload) ReadBufferFromString replacement(replacement_query); - Context query_context = connection_context; + auto query_context = Context::createCopy(connection_context); std::atomic affected_rows {0}; - auto prev = query_context.getProgressCallback(); - query_context.setProgressCallback([&, prev = prev](const Progress & progress) + auto prev = query_context->getProgressCallback(); + query_context->setProgressCallback([&, prev = prev](const Progress & progress) { if (prev) prev(progress); @@ -391,14 +391,14 @@ void MySQLHandlerSSL::finishHandshakeSSL( ReadBufferFromMemory payload(buf, pos); payload.ignore(PACKET_HEADER_SIZE); ssl_request.readPayloadWithUnpacked(payload); - connection_context.mysql.client_capabilities = ssl_request.capability_flags; - connection_context.mysql.max_packet_size = ssl_request.max_packet_size ? ssl_request.max_packet_size : MAX_PACKET_LENGTH; + connection_context->mysql.client_capabilities = ssl_request.capability_flags; + connection_context->mysql.max_packet_size = ssl_request.max_packet_size ? ssl_request.max_packet_size : MAX_PACKET_LENGTH; secure_connection = true; ss = std::make_shared(SecureStreamSocket::attach(socket(), SSLManager::instance().defaultServerContext())); in = std::make_shared(*ss); out = std::make_shared(*ss); - connection_context.mysql.sequence_id = 2; - packet_endpoint = std::make_shared(*in, *out, connection_context.mysql.sequence_id); + connection_context->mysql.sequence_id = 2; + packet_endpoint = std::make_shared(*in, *out, connection_context->mysql.sequence_id); packet_endpoint->receivePacket(packet); /// Reading HandshakeResponse from secure socket. 
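The MySQL handler above now keeps its per-connection state in a `ContextPtr` copied from the server context, and `comQuery` derives a fresh per-query context via `Context::createCopy(connection_context)`. A rough sketch of that global, connection and query layering, using a toy `Context` (the real one carries settings, access control, caches and much more), under the assumption that `createCopy` is a plain value copy behind a shared pointer:

```cpp
#include <iostream>
#include <map>
#include <memory>
#include <string>

/// Toy model of the context layering; illustrative only.
struct Context
{
    std::map<std::string, std::string> settings;
    std::string default_format;

    static std::shared_ptr<Context> createCopy(const std::shared_ptr<Context> & other)
    {
        return std::make_shared<Context>(*other);
    }
};
using ContextPtr = std::shared_ptr<Context>;

int main()
{
    auto global_context = std::make_shared<Context>();
    global_context->settings["max_threads"] = "8";

    /// One copy per client connection (what the handler constructors now do).
    ContextPtr connection_context = Context::createCopy(global_context);
    connection_context->default_format = "MySQLWire";

    /// One copy per query (what comQuery now does), so per-query changes
    /// do not leak back into the connection state.
    ContextPtr query_context = Context::createCopy(connection_context);
    query_context->settings["max_threads"] = "1";

    std::cout << "query:      " << query_context->settings["max_threads"] << "\n";       /// 1
    std::cout << "connection: " << connection_context->settings["max_threads"] << "\n";  /// 8
}
```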
} diff --git a/src/Server/MySQLHandler.h b/src/Server/MySQLHandler.h index 1418d068ffd..f5fb82b5bef 100644 --- a/src/Server/MySQLHandler.h +++ b/src/Server/MySQLHandler.h @@ -56,7 +56,7 @@ private: protected: Poco::Logger * log; - Context connection_context; + ContextPtr connection_context; std::shared_ptr packet_endpoint; diff --git a/src/Server/PostgreSQLHandler.cpp b/src/Server/PostgreSQLHandler.cpp index b3a3bbf2aaa..01887444c65 100644 --- a/src/Server/PostgreSQLHandler.cpp +++ b/src/Server/PostgreSQLHandler.cpp @@ -33,7 +33,7 @@ PostgreSQLHandler::PostgreSQLHandler( std::vector> & auth_methods_) : Poco::Net::TCPServerConnection(socket_) , server(server_) - , connection_context(server.context()) + , connection_context(Context::createCopy(server.context())) , ssl_enabled(ssl_enabled_) , connection_id(connection_id_) , authentication_manager(auth_methods_) @@ -52,9 +52,9 @@ void PostgreSQLHandler::run() { setThreadName("PostgresHandler"); ThreadStatus thread_status; - connection_context.makeSessionContext(); - connection_context.getClientInfo().interface = ClientInfo::Interface::POSTGRESQL; - connection_context.setDefaultFormat("PostgreSQLWire"); + connection_context->makeSessionContext(); + connection_context->getClientInfo().interface = ClientInfo::Interface::POSTGRESQL; + connection_context->setDefaultFormat("PostgreSQLWire"); try { @@ -132,8 +132,8 @@ bool PostgreSQLHandler::startup() try { if (!start_up_msg->database.empty()) - connection_context.setCurrentDatabase(start_up_msg->database); - connection_context.setCurrentQueryId(Poco::format("postgres:%d:%d", connection_id, secret_key)); + connection_context->setCurrentDatabase(start_up_msg->database); + connection_context->setCurrentQueryId(Poco::format("postgres:%d:%d", connection_id, secret_key)); } catch (const Exception & exc) { @@ -213,8 +213,8 @@ void PostgreSQLHandler::sendParameterStatusData(PostgreSQLProtocol::Messaging::S void PostgreSQLHandler::cancelRequest() { - connection_context.setCurrentQueryId(""); - connection_context.setDefaultFormat("Null"); + connection_context->setCurrentQueryId(""); + connection_context->setDefaultFormat("Null"); std::unique_ptr msg = message_transport->receiveWithPayloadSize(8); @@ -268,7 +268,7 @@ void PostgreSQLHandler::processQuery() return; } - const auto & settings = connection_context.getSettingsRef(); + const auto & settings = connection_context->getSettingsRef(); std::vector queries; auto parse_res = splitMultipartQuery(query->query, queries, settings.max_query_size, settings.max_parser_depth); if (!parse_res.second) diff --git a/src/Server/PostgreSQLHandler.h b/src/Server/PostgreSQLHandler.h index 697aa9b6744..cc30c85d8bb 100644 --- a/src/Server/PostgreSQLHandler.h +++ b/src/Server/PostgreSQLHandler.h @@ -37,7 +37,7 @@ private: Poco::Logger * log = &Poco::Logger::get("PostgreSQLHandler"); IServer & server; - Context connection_context; + ContextPtr connection_context; bool ssl_enabled; Int32 connection_id; Int32 secret_key; diff --git a/src/Server/ReplicasStatusHandler.cpp b/src/Server/ReplicasStatusHandler.cpp index 778f9827131..86295cc5170 100644 --- a/src/Server/ReplicasStatusHandler.cpp +++ b/src/Server/ReplicasStatusHandler.cpp @@ -34,7 +34,7 @@ void ReplicasStatusHandler::handleRequest(HTTPServerRequest & request, HTTPServe /// Even if lag is small, output detailed information about the lag. 
bool verbose = params.get("verbose", "") == "1"; - const MergeTreeSettings & settings = context.getReplicatedMergeTreeSettings(); + const MergeTreeSettings & settings = context->getReplicatedMergeTreeSettings(); bool ok = true; WriteBufferFromOwnString message; @@ -73,7 +73,7 @@ void ReplicasStatusHandler::handleRequest(HTTPServerRequest & request, HTTPServe } } - const auto & config = context.getConfigRef(); + const auto & config = context->getConfigRef(); setResponseDefaultHeaders(response, config.getUInt("keep_alive_timeout", 10)); if (!ok) diff --git a/src/Server/ReplicasStatusHandler.h b/src/Server/ReplicasStatusHandler.h index 8a790b13ad6..eda0b15ed6f 100644 --- a/src/Server/ReplicasStatusHandler.h +++ b/src/Server/ReplicasStatusHandler.h @@ -12,7 +12,7 @@ class IServer; class ReplicasStatusHandler : public HTTPRequestHandler { private: - Context & context; + ContextPtr context; public: explicit ReplicasStatusHandler(IServer & server_); diff --git a/src/Server/StaticRequestHandler.cpp b/src/Server/StaticRequestHandler.cpp index 9f959239be9..169d6859b43 100644 --- a/src/Server/StaticRequestHandler.cpp +++ b/src/Server/StaticRequestHandler.cpp @@ -137,7 +137,7 @@ void StaticRequestHandler::writeResponse(WriteBuffer & out) if (startsWith(response_expression, file_prefix)) { - const auto & user_files_absolute_path = Poco::Path(server.context().getUserFilesPath()).makeAbsolute().makeDirectory().toString(); + const auto & user_files_absolute_path = Poco::Path(server.context()->getUserFilesPath()).makeAbsolute().makeDirectory().toString(); const auto & file_name = response_expression.substr(file_prefix.size(), response_expression.size() - file_prefix.size()); const auto & file_path = Poco::Path(user_files_absolute_path, file_name).makeAbsolute().toString(); diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index efda9bbfec3..36bc8d0e391 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -25,6 +25,7 @@ #include #include #include +#include #include #include #include @@ -33,6 +34,7 @@ #include +#include "Core/Protocol.h" #include "TCPHandler.h" #if !defined(ARCADIA_BUILD) @@ -55,6 +57,7 @@ namespace ErrorCodes extern const int SOCKET_TIMEOUT; extern const int UNEXPECTED_PACKET_FROM_CLIENT; extern const int SUPPORT_IS_DISABLED; + extern const int UNKNOWN_PROTOCOL; } TCPHandler::TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, std::string server_display_name_) @@ -62,8 +65,8 @@ TCPHandler::TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket , server(server_) , parse_proxy_protocol(parse_proxy_protocol_) , log(&Poco::Logger::get("TCPHandler")) - , connection_context(server.context()) - , query_context(server.context()) + , connection_context(Context::createCopy(server.context())) + , query_context(Context::createCopy(server.context())) , server_display_name(std::move(server_display_name_)) { } @@ -72,7 +75,8 @@ TCPHandler::~TCPHandler() try { state.reset(); - out->next(); + if (out) + out->next(); } catch (...) { @@ -85,13 +89,13 @@ void TCPHandler::runImpl() setThreadName("TCPHandler"); ThreadStatus thread_status; - connection_context = server.context(); - connection_context.makeSessionContext(); + connection_context = Context::createCopy(server.context()); + connection_context->makeSessionContext(); /// These timeouts can be changed after receiving query. 
- auto global_receive_timeout = connection_context.getSettingsRef().receive_timeout; - auto global_send_timeout = connection_context.getSettingsRef().send_timeout; + auto global_receive_timeout = connection_context->getSettingsRef().receive_timeout; + auto global_send_timeout = connection_context->getSettingsRef().send_timeout; socket().setReceiveTimeout(global_receive_timeout); socket().setSendTimeout(global_send_timeout); @@ -132,7 +136,7 @@ void TCPHandler::runImpl() try { /// We try to send error information to the client. - sendException(e, connection_context.getSettingsRef().calculate_text_stack_trace); + sendException(e, connection_context->getSettingsRef().calculate_text_stack_trace); } catch (...) {} @@ -146,28 +150,30 @@ void TCPHandler::runImpl() { Exception e("Database " + backQuote(default_database) + " doesn't exist", ErrorCodes::UNKNOWN_DATABASE); LOG_ERROR(log, "Code: {}, e.displayText() = {}, Stack trace:\n\n{}", e.code(), e.displayText(), e.getStackTraceString()); - sendException(e, connection_context.getSettingsRef().calculate_text_stack_trace); + sendException(e, connection_context->getSettingsRef().calculate_text_stack_trace); return; } - connection_context.setCurrentDatabase(default_database); + connection_context->setCurrentDatabase(default_database); } - Settings connection_settings = connection_context.getSettings(); + Settings connection_settings = connection_context->getSettings(); + UInt64 idle_connection_timeout = connection_settings.idle_connection_timeout; + UInt64 poll_interval = connection_settings.poll_interval; sendHello(); - connection_context.setProgressCallback([this] (const Progress & value) { return this->updateProgress(value); }); + connection_context->setProgressCallback([this] (const Progress & value) { return this->updateProgress(value); }); while (true) { /// We are waiting for a packet from the client. Thus, every `poll_interval` seconds check whether we need to shut down. { Stopwatch idle_time; - while (!server.isCancelled() && !static_cast(*in).poll( - std::min(connection_settings.poll_interval, connection_settings.idle_connection_timeout) * 1000000)) + UInt64 timeout_ms = std::min(poll_interval, idle_connection_timeout) * 1000000; + while (!server.isCancelled() && !static_cast(*in).poll(timeout_ms)) { - if (idle_time.elapsedSeconds() > connection_settings.idle_connection_timeout) + if (idle_time.elapsedSeconds() > idle_connection_timeout) { LOG_TRACE(log, "Closing idle connection"); return; @@ -180,7 +186,7 @@ void TCPHandler::runImpl() break; /// Set context of request. - query_context = connection_context; + query_context = Context::createCopy(connection_context); Stopwatch watch; state.reset(); @@ -208,12 +214,21 @@ void TCPHandler::runImpl() if (!receivePacket()) continue; + /** If Query received, then settings in query_context has been updated + * So, update some other connection settings, for flexibility. + */ + { + const Settings & settings = query_context->getSettingsRef(); + idle_connection_timeout = settings.idle_connection_timeout; + poll_interval = settings.poll_interval; + } + /** If part_uuids got received in previous packet, trying to read again. 
*/ if (state.empty() && state.part_uuids && !receivePacket()) continue; - query_scope.emplace(*query_context); + query_scope.emplace(query_context); send_exception_with_stack_trace = query_context->getSettingsRef().calculate_text_stack_trace; @@ -228,9 +243,9 @@ void TCPHandler::runImpl() CurrentThread::setFatalErrorCallback([this]{ sendLogs(); }); } - query_context->setExternalTablesInitializer([&connection_settings, this] (Context & context) + query_context->setExternalTablesInitializer([&connection_settings, this] (ContextPtr context) { - if (&context != &*query_context) + if (context != query_context) throw Exception("Unexpected context in external tables initializer", ErrorCodes::LOGICAL_ERROR); /// Get blocks of temporary tables @@ -245,9 +260,9 @@ void TCPHandler::runImpl() }); /// Send structure of columns to client for function input() - query_context->setInputInitializer([this] (Context & context, const StoragePtr & input_storage) + query_context->setInputInitializer([this] (ContextPtr context, const StoragePtr & input_storage) { - if (&context != &query_context.value()) + if (context != query_context) throw Exception("Unexpected context in Input initializer", ErrorCodes::LOGICAL_ERROR); auto metadata_snapshot = input_storage->getInMemoryMetadataPtr(); @@ -265,15 +280,15 @@ void TCPHandler::runImpl() sendData(state.input_header); }); - query_context->setInputBlocksReaderCallback([&connection_settings, this] (Context & context) -> Block + query_context->setInputBlocksReaderCallback([&connection_settings, this] (ContextPtr context) -> Block { - if (&context != &query_context.value()) + if (context != query_context) throw Exception("Unexpected context in InputBlocksReader", ErrorCodes::LOGICAL_ERROR); - size_t poll_interval; + size_t poll_interval_ms; int receive_timeout; - std::tie(poll_interval, receive_timeout) = getReadTimeouts(connection_settings); - if (!readDataNext(poll_interval, receive_timeout)) + std::tie(poll_interval_ms, receive_timeout) = getReadTimeouts(connection_settings); + if (!readDataNext(poll_interval_ms, receive_timeout)) { state.block_in.reset(); state.maybe_compressed_in.reset(); @@ -282,11 +297,21 @@ void TCPHandler::runImpl() return state.block_for_input; }); - customizeContext(*query_context); + customizeContext(query_context); + + /// This callback is needed for requesting read tasks inside pipeline for distributed processing + query_context->setReadTaskCallback([this]() -> String + { + std::lock_guard lock(task_callback_mutex); + sendReadTaskRequestAssumeLocked(); + return receiveReadTaskResponseAssumeLocked(); + }); bool may_have_embedded_data = client_tcp_protocol_version >= DBMS_MIN_REVISION_WITH_CLIENT_SUPPORT_EMBEDDED_DATA; /// Processing Query - state.io = executeQuery(state.query, *query_context, false, state.stage, may_have_embedded_data); + state.io = executeQuery(state.query, query_context, false, state.stage, may_have_embedded_data); + + unknown_packet_in_send_data = query_context->getSettingsRef().unknown_packet_in_send_data; after_check_cancelled.restart(); after_send_progress.restart(); @@ -536,7 +561,7 @@ void TCPHandler::processInsertQuery(const Settings & connection_settings) { if (!table_id.empty()) { - auto storage_ptr = DatabaseCatalog::instance().getTable(table_id, *query_context); + auto storage_ptr = DatabaseCatalog::instance().getTable(table_id, query_context); sendTableColumns(storage_ptr->getInMemoryMetadataPtr()->getColumns()); } } @@ -643,6 +668,8 @@ void TCPHandler::processOrdinaryQueryWithProcessors() Block block; while 
(executor.pull(block, query_context->getSettingsRef().interactive_delay / 1000)) { + std::lock_guard lock(task_callback_mutex); + if (isQueryCancelled()) { /// A packet was received requesting to stop execution of the request. @@ -700,7 +727,7 @@ void TCPHandler::processTablesStatusRequest() TablesStatusResponse response; for (const QualifiedTableName & table_name: request.tables) { - auto resolved_id = connection_context.tryResolveStorageID({table_name.database, table_name.table}); + auto resolved_id = connection_context->tryResolveStorageID({table_name.database, table_name.table}); StoragePtr table = DatabaseCatalog::instance().tryGetTable(resolved_id, connection_context); if (!table) continue; @@ -754,6 +781,13 @@ void TCPHandler::sendPartUUIDs() } } + +void TCPHandler::sendReadTaskRequestAssumeLocked() +{ + writeVarUInt(Protocol::Server::ReadTaskRequest, *out); + out->next(); +} + void TCPHandler::sendProfileInfo(const BlockStreamProfileInfo & info) { writeVarUInt(Protocol::Server::ProfileInfo, *out); @@ -861,7 +895,7 @@ bool TCPHandler::receiveProxyHeader() } LOG_TRACE(log, "Forwarded client address from PROXY header: {}", forwarded_address); - connection_context.getClientInfo().forwarded_for = forwarded_address; + connection_context->getClientInfo().forwarded_for = forwarded_address; return true; } @@ -914,7 +948,7 @@ void TCPHandler::receiveHello() if (user != USER_INTERSERVER_MARKER) { - connection_context.setUser(user, password, socket().peerAddress()); + connection_context->setUser(user, password, socket().peerAddress()); } else { @@ -962,8 +996,6 @@ bool TCPHandler::receivePacket() UInt64 packet_type = 0; readVarUInt(packet_type, *in); -// std::cerr << "Server got packet: " << Protocol::Client::toString(packet_type) << "\n"; - switch (packet_type) { case Protocol::Client::IgnoredPartUUIDs: @@ -1015,6 +1047,34 @@ void TCPHandler::receiveIgnoredPartUUIDs() query_context->getIgnoredPartUUIDs()->add(uuids); } + +String TCPHandler::receiveReadTaskResponseAssumeLocked() +{ + UInt64 packet_type = 0; + readVarUInt(packet_type, *in); + if (packet_type != Protocol::Client::ReadTaskResponse) + { + if (packet_type == Protocol::Client::Cancel) + { + state.is_cancelled = true; + return {}; + } + else + { + throw Exception(fmt::format("Received {} packet after requesting read task", + Protocol::Client::toString(packet_type)), ErrorCodes::UNEXPECTED_PACKET_FROM_CLIENT); + } + } + UInt64 version; + readVarUInt(version, *in); + if (version != DBMS_CLUSTER_PROCESSING_PROTOCOL_VERSION) + throw Exception("Protocol version for distributed processing mismatched", ErrorCodes::UNKNOWN_PROTOCOL); + String response; + readStringBinary(response, *in); + return response; +} + + void TCPHandler::receiveClusterNameAndSalt() { readStringBinary(cluster, *in); @@ -1032,7 +1092,7 @@ void TCPHandler::receiveClusterNameAndSalt() try { /// We try to send error information to the client. - sendException(e, connection_context.getSettingsRef().calculate_text_stack_trace); + sendException(e, connection_context->getSettingsRef().calculate_text_stack_trace); } catch (...) {} @@ -1235,18 +1295,18 @@ bool TCPHandler::receiveData(bool scalar) /// If such a table does not exist, create it. 
if (resolved) { - storage = DatabaseCatalog::instance().getTable(resolved, *query_context); + storage = DatabaseCatalog::instance().getTable(resolved, query_context); } else { NamesAndTypesList columns = block.getNamesAndTypesList(); - auto temporary_table = TemporaryTableHolder(*query_context, ColumnsDescription{columns}, {}); + auto temporary_table = TemporaryTableHolder(query_context, ColumnsDescription{columns}, {}); storage = temporary_table.getTable(); query_context->addExternalTable(temporary_id.table_name, std::move(temporary_table)); } auto metadata_snapshot = storage->getInMemoryMetadataPtr(); /// The data will be written directly to the table. - auto temporary_table_out = storage->write(ASTPtr(), metadata_snapshot, *query_context); + auto temporary_table_out = storage->write(ASTPtr(), metadata_snapshot, query_context); temporary_table_out->write(block); temporary_table_out->writeSuffix(); @@ -1345,7 +1405,7 @@ void TCPHandler::initBlockOutput(const Block & block) *state.maybe_compressed_out, client_tcp_protocol_version, block.cloneEmpty(), - !connection_context.getSettingsRef().low_cardinality_allow_in_native_format); + !connection_context->getSettingsRef().low_cardinality_allow_in_native_format); } } @@ -1358,7 +1418,7 @@ void TCPHandler::initLogsBlockOutput(const Block & block) *out, client_tcp_protocol_version, block.cloneEmpty(), - !connection_context.getSettingsRef().low_cardinality_allow_in_native_format); + !connection_context->getSettingsRef().low_cardinality_allow_in_native_format); } } @@ -1414,6 +1474,14 @@ void TCPHandler::sendData(const Block & block) try { + /// For testing hedged requests + if (unknown_packet_in_send_data) + { + --unknown_packet_in_send_data; + if (unknown_packet_in_send_data == 0) + writeVarUInt(UInt64(-1), *out); + } + writeVarUInt(Protocol::Server::Data, *out); /// Send external table name (empty name is the main table) writeStringBinary("", *out); diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index c3dd8346c8e..ce0a4cee3ff 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -89,7 +89,7 @@ struct QueryState *this = QueryState(); } - bool empty() + bool empty() const { return is_empty; } @@ -113,14 +113,13 @@ public: * because it allows to check the IP ranges of the trusted proxy. * Proxy-forwarded (original client) IP address is used for quota accounting if quota is keyed by forwarded IP. */ - TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, - std::string server_display_name_); + TCPHandler(IServer & server_, const Poco::Net::StreamSocket & socket_, bool parse_proxy_protocol_, std::string server_display_name_); ~TCPHandler() override; void run() override; /// This method is called right before the query execution. - virtual void customizeContext(DB::Context & /*context*/) {} + virtual void customizeContext(ContextPtr /*context*/) {} private: IServer & server; @@ -133,8 +132,10 @@ private: UInt64 client_version_patch = 0; UInt64 client_tcp_protocol_version = 0; - Context connection_context; - std::optional query_context; + ContextPtr connection_context; + ContextPtr query_context; + + size_t unknown_packet_in_send_data = 0; /// Streams for reading/writing from/to client connection socket. std::shared_ptr in; @@ -151,6 +152,7 @@ private: String cluster; String cluster_secret; + std::mutex task_callback_mutex; /// At the moment, only one ongoing query in the connection is supported at a time. 
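The TCPHandler hunks above add a read-task exchange used for distributed processing: under `task_callback_mutex` the server emits a `ReadTaskRequest` packet and expects a `ReadTaskResponse` carrying a protocol version and a task string (or a `Cancel` if the query was aborted). The toy below replays that exchange over an in-memory buffer; the packet ids, the version constant and the task string are illustrative placeholders, and the varint/length-prefixed helpers are simplified stand-ins for ClickHouse's IO functions.

```cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

using Buffer = std::vector<uint8_t>;

void writeVarUInt(uint64_t x, Buffer & out)
{
    while (x >= 0x80) { out.push_back(uint8_t(x) | 0x80); x >>= 7; }
    out.push_back(uint8_t(x));
}

uint64_t readVarUInt(const Buffer & in, size_t & pos)
{
    uint64_t x = 0;
    for (int shift = 0;; shift += 7)
    {
        uint8_t b = in.at(pos++);
        x |= uint64_t(b & 0x7F) << shift;
        if (!(b & 0x80))
            return x;
    }
}

void writeStringBinary(const std::string & s, Buffer & out)
{
    writeVarUInt(s.size(), out);
    out.insert(out.end(), s.begin(), s.end());
}

std::string readStringBinary(const Buffer & in, size_t & pos)
{
    size_t len = readVarUInt(in, pos);
    std::string s(in.begin() + pos, in.begin() + pos + len);
    pos += len;
    return s;
}

/// Illustrative values only; the real packet ids live in Core/Protocol.h.
constexpr uint64_t SERVER_READ_TASK_REQUEST = 100;
constexpr uint64_t CLIENT_READ_TASK_RESPONSE = 101;
constexpr uint64_t CLUSTER_PROCESSING_PROTOCOL_VERSION = 1;

int main()
{
    Buffer to_client, to_server;

    /// Server: ask the client which part of the distributed read to do next.
    writeVarUInt(SERVER_READ_TASK_REQUEST, to_client);

    /// Client: answer with version + task description.
    writeVarUInt(CLIENT_READ_TASK_RESPONSE, to_server);
    writeVarUInt(CLUSTER_PROCESSING_PROTOCOL_VERSION, to_server);
    writeStringBinary("part_0_0_1", to_server);

    /// Server: validate and extract the task, as receiveReadTaskResponseAssumeLocked does.
    size_t pos = 0;
    if (readVarUInt(to_server, pos) != CLIENT_READ_TASK_RESPONSE)
        throw std::runtime_error("unexpected packet after requesting read task");
    if (readVarUInt(to_server, pos) != CLUSTER_PROCESSING_PROTOCOL_VERSION)
        throw std::runtime_error("protocol version mismatch");
    std::cout << "next read task: " << readStringBinary(to_server, pos) << "\n";
}
```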
QueryState state; @@ -170,9 +172,11 @@ private: bool receivePacket(); void receiveQuery(); void receiveIgnoredPartUUIDs(); + String receiveReadTaskResponseAssumeLocked(); bool receiveData(bool scalar); bool readDataNext(const size_t & poll_interval, const int & receive_timeout); void readData(const Settings & connection_settings); + void receiveClusterNameAndSalt(); std::tuple getReadTimeouts(const Settings & connection_settings); [[noreturn]] void receiveUnexpectedData(); @@ -199,12 +203,11 @@ private: void sendLogs(); void sendEndOfStream(); void sendPartUUIDs(); + void sendReadTaskRequestAssumeLocked(); void sendProfileInfo(const BlockStreamProfileInfo & info); void sendTotals(const Block & totals); void sendExtremes(const Block & extremes); - void receiveClusterNameAndSalt(); - /// Creates state.block_in/block_out for blocks read/write, depending on whether compression is enabled. void initBlockInput(); void initBlockOutput(const Block & block); diff --git a/src/Storages/AlterCommands.cpp b/src/Storages/AlterCommands.cpp index 7043a32760b..e3177c167c5 100644 --- a/src/Storages/AlterCommands.cpp +++ b/src/Storages/AlterCommands.cpp @@ -299,7 +299,7 @@ std::optional AlterCommand::parse(const ASTAlterCommand * command_ } -void AlterCommand::apply(StorageInMemoryMetadata & metadata, const Context & context) const +void AlterCommand::apply(StorageInMemoryMetadata & metadata, ContextPtr context) const { if (type == ADD_COLUMN) { @@ -320,7 +320,7 @@ void AlterCommand::apply(StorageInMemoryMetadata & metadata, const Context & con metadata.columns.add(column, after_column, first); /// Slow, because each time a list is copied - if (context.getSettingsRef().flatten_nested) + if (context->getSettingsRef().flatten_nested) metadata.columns.flattenNested(); } else if (type == DROP_COLUMN) @@ -702,7 +702,7 @@ bool AlterCommand::isRemovingProperty() const return to_remove != RemoveProperty::NO_PROPERTY; } -std::optional AlterCommand::tryConvertToMutationCommand(StorageInMemoryMetadata & metadata, const Context & context) const +std::optional AlterCommand::tryConvertToMutationCommand(StorageInMemoryMetadata & metadata, ContextPtr context) const { if (!isRequireMutationStage(metadata)) return {}; @@ -788,7 +788,7 @@ String alterTypeToString(const AlterCommand::Type type) __builtin_unreachable(); } -void AlterCommands::apply(StorageInMemoryMetadata & metadata, const Context & context) const +void AlterCommands::apply(StorageInMemoryMetadata & metadata, ContextPtr context) const { if (!prepared) throw DB::Exception("Alter commands is not prepared. Cannot apply. 
It's a bug", ErrorCodes::LOGICAL_ERROR); @@ -880,7 +880,7 @@ void AlterCommands::prepare(const StorageInMemoryMetadata & metadata) prepared = true; } -void AlterCommands::validate(const StorageInMemoryMetadata & metadata, const Context & context) const +void AlterCommands::validate(const StorageInMemoryMetadata & metadata, ContextPtr context) const { auto all_columns = metadata.columns; /// Default expression for all added/modified columns @@ -907,7 +907,7 @@ void AlterCommands::validate(const StorageInMemoryMetadata & metadata, const Con ErrorCodes::BAD_ARGUMENTS}; if (command.codec) - CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST(command.codec, command.data_type, !context.getSettingsRef().allow_suspicious_codecs); + CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST(command.codec, command.data_type, !context->getSettingsRef().allow_suspicious_codecs); all_columns.add(ColumnDescription(column_name, command.data_type)); } @@ -927,7 +927,7 @@ void AlterCommands::validate(const StorageInMemoryMetadata & metadata, const Con ErrorCodes::NOT_IMPLEMENTED}; if (command.codec) - CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST(command.codec, command.data_type, !context.getSettingsRef().allow_suspicious_codecs); + CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST(command.codec, command.data_type, !context->getSettingsRef().allow_suspicious_codecs); auto column_default = all_columns.getDefault(column_name); if (column_default) { @@ -1172,7 +1172,7 @@ static MutationCommand createMaterializeTTLCommand() return command; } -MutationCommands AlterCommands::getMutationCommands(StorageInMemoryMetadata metadata, bool materialize_ttl, const Context & context) const +MutationCommands AlterCommands::getMutationCommands(StorageInMemoryMetadata metadata, bool materialize_ttl, ContextPtr context) const { MutationCommands result; for (const auto & alter_cmd : *this) diff --git a/src/Storages/AlterCommands.h b/src/Storages/AlterCommands.h index c973b0b6a6f..d6c80bc5ed4 100644 --- a/src/Storages/AlterCommands.h +++ b/src/Storages/AlterCommands.h @@ -128,7 +128,7 @@ struct AlterCommand static std::optional parse(const ASTAlterCommand * command); - void apply(StorageInMemoryMetadata & metadata, const Context & context) const; + void apply(StorageInMemoryMetadata & metadata, ContextPtr context) const; /// Check that alter command require data modification (mutation) to be /// executed. For example, cast from Date to UInt16 type can be executed @@ -151,7 +151,7 @@ struct AlterCommand /// If possible, convert alter command to mutation command. In other case /// return empty optional. Some storages may execute mutations after /// metadata changes. - std::optional tryConvertToMutationCommand(StorageInMemoryMetadata & metadata, const Context & context) const; + std::optional tryConvertToMutationCommand(StorageInMemoryMetadata & metadata, ContextPtr context) const; }; /// Return string representation of AlterCommand::Type @@ -170,7 +170,7 @@ public: /// Checks that all columns exist and dependencies between them. /// This check is lightweight and base only on metadata. /// More accurate check have to be performed with storage->checkAlterIsPossible. - void validate(const StorageInMemoryMetadata & metadata, const Context & context) const; + void validate(const StorageInMemoryMetadata & metadata, ContextPtr context) const; /// Prepare alter commands. 
Set ignore flag to some of them and set some /// parts to commands from storage's metadata (for example, absent default) @@ -178,7 +178,7 @@ public: /// Apply all alter command in sequential order to storage metadata. /// Commands have to be prepared before apply. - void apply(StorageInMemoryMetadata & metadata, const Context & context) const; + void apply(StorageInMemoryMetadata & metadata, ContextPtr context) const; /// At least one command modify settings. bool isSettingsAlter() const; @@ -190,7 +190,7 @@ public: /// alter. If alter can be performed as pure metadata update, than result is /// empty. If some TTL changes happened than, depending on materialize_ttl /// additional mutation command (MATERIALIZE_TTL) will be returned. - MutationCommands getMutationCommands(StorageInMemoryMetadata metadata, bool materialize_ttl, const Context & context) const; + MutationCommands getMutationCommands(StorageInMemoryMetadata metadata, bool materialize_ttl, ContextPtr context) const; }; } diff --git a/src/Storages/ColumnDefault.h b/src/Storages/ColumnDefault.h index 1035bfcc834..38b61415a9a 100644 --- a/src/Storages/ColumnDefault.h +++ b/src/Storages/ColumnDefault.h @@ -1,10 +1,10 @@ #pragma once +#include + #include #include -#include - namespace DB { @@ -18,7 +18,7 @@ enum class ColumnDefaultKind ColumnDefaultKind columnDefaultKindFromString(const std::string & str); -std::string toString(const ColumnDefaultKind kind); +std::string toString(ColumnDefaultKind kind); struct ColumnDefault diff --git a/src/Storages/ColumnsDescription.cpp b/src/Storages/ColumnsDescription.cpp index fc6bb661986..545911f1465 100644 --- a/src/Storages/ColumnsDescription.cpp +++ b/src/Storages/ColumnsDescription.cpp @@ -582,7 +582,7 @@ void ColumnsDescription::removeSubcolumns(const String & name_in_storage, const subcolumns.erase(name_in_storage + "." + subcolumn_name); } -Block validateColumnsDefaultsAndGetSampleBlock(ASTPtr default_expr_list, const NamesAndTypesList & all_columns, const Context & context) +Block validateColumnsDefaultsAndGetSampleBlock(ASTPtr default_expr_list, const NamesAndTypesList & all_columns, ContextPtr context) { for (const auto & child : default_expr_list->children) if (child->as() || child->as() || child->as()) diff --git a/src/Storages/ColumnsDescription.h b/src/Storages/ColumnsDescription.h index 26e30004544..7fff22abf71 100644 --- a/src/Storages/ColumnsDescription.h +++ b/src/Storages/ColumnsDescription.h @@ -1,18 +1,20 @@ #pragma once -#include -#include -#include -#include -#include -#include -#include #include +#include +#include +#include +#include +#include +#include +#include -#include -#include -#include #include +#include +#include +#include + +#include namespace DB @@ -159,5 +161,5 @@ private: /// default expression result can be casted to column_type. Also checks, that we /// don't have strange constructions in default expression like SELECT query or /// arrayJoin function. 
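The `AlterCommands` changes above only swap the context parameter to `ContextPtr`, but the surrounding comments spell out the flow: commands are validated and prepared, applied to the in-memory metadata, and those that also require rewriting data are turned into mutation commands. A toy illustration of that metadata-only versus mutation split, with simplified stand-ins rather than the real `StorageInMemoryMetadata` and `MutationCommand` types:

```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <vector>

struct Metadata { std::map<std::string, std::string> columns; };  /// name -> type

struct AlterCommand
{
    enum Type { ADD_COLUMN, MODIFY_COLUMN } type;
    std::string name;
    std::string data_type;

    /// Every command updates the in-memory metadata.
    void apply(Metadata & metadata) const { metadata.columns[name] = data_type; }

    /// Only commands that rewrite existing data produce a mutation; adding a
    /// new column is a pure metadata change.
    std::optional<std::string> tryConvertToMutationCommand(const Metadata & metadata) const
    {
        if (type == MODIFY_COLUMN && metadata.columns.count(name))
            return "CAST column " + name + " to " + data_type;
        return {};
    }
};

int main()
{
    Metadata metadata{{{"x", "UInt32"}}};
    std::vector<AlterCommand> commands = {
        {AlterCommand::ADD_COLUMN, "y", "String"},
        {AlterCommand::MODIFY_COLUMN, "x", "UInt64"},
    };

    std::vector<std::string> mutations;
    for (const auto & command : commands)
    {
        if (auto mutation = command.tryConvertToMutationCommand(metadata))
            mutations.push_back(*mutation);
        command.apply(metadata);
    }

    for (const auto & [name, type] : metadata.columns)
        std::cout << name << " " << type << "\n";
    for (const auto & m : mutations)
        std::cout << "mutation: " << m << "\n";
}
```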
-Block validateColumnsDefaultsAndGetSampleBlock(ASTPtr default_expr_list, const NamesAndTypesList & all_columns, const Context & context); +Block validateColumnsDefaultsAndGetSampleBlock(ASTPtr default_expr_list, const NamesAndTypesList & all_columns, ContextPtr context); } diff --git a/src/Storages/ConstraintsDescription.cpp b/src/Storages/ConstraintsDescription.cpp index e6315872a66..1e86a17523b 100644 --- a/src/Storages/ConstraintsDescription.cpp +++ b/src/Storages/ConstraintsDescription.cpp @@ -41,7 +41,7 @@ ConstraintsDescription ConstraintsDescription::parse(const String & str) return res; } -ConstraintsExpressions ConstraintsDescription::getExpressions(const DB::Context & context, +ConstraintsExpressions ConstraintsDescription::getExpressions(const DB::ContextPtr context, const DB::NamesAndTypesList & source_columns_) const { ConstraintsExpressions res; diff --git a/src/Storages/ConstraintsDescription.h b/src/Storages/ConstraintsDescription.h index d6d2baefbd2..5e6416822bb 100644 --- a/src/Storages/ConstraintsDescription.h +++ b/src/Storages/ConstraintsDescription.h @@ -19,7 +19,7 @@ struct ConstraintsDescription static ConstraintsDescription parse(const String & str); - ConstraintsExpressions getExpressions(const Context & context, const NamesAndTypesList & source_columns_) const; + ConstraintsExpressions getExpressions(ContextPtr context, const NamesAndTypesList & source_columns_) const; ConstraintsDescription(const ConstraintsDescription & other); ConstraintsDescription & operator=(const ConstraintsDescription & other); diff --git a/src/Storages/Distributed/DirectoryMonitor.cpp b/src/Storages/Distributed/DirectoryMonitor.cpp index fb5e5080314..e3b0b0d581c 100644 --- a/src/Storages/Distributed/DirectoryMonitor.cpp +++ b/src/Storages/Distributed/DirectoryMonitor.cpp @@ -9,6 +9,8 @@ #include #include #include +#include +#include #include #include #include @@ -104,12 +106,14 @@ namespace size_t rows = 0; size_t bytes = 0; - std::string header; + /// dumpStructure() of the header -- obsolete + std::string block_header_string; + Block block_header; }; - DistributedHeader readDistributedHeader(ReadBuffer & in, Poco::Logger * log) + DistributedHeader readDistributedHeader(ReadBufferFromFile & in, Poco::Logger * log) { - DistributedHeader header; + DistributedHeader distributed_header; UInt64 query_size; readVarUInt(query_size, in); @@ -135,17 +139,25 @@ namespace LOG_WARNING(log, "ClickHouse shard version is older than ClickHouse initiator version. 
It may lack support for new features."); } - readStringBinary(header.insert_query, header_buf); - header.insert_settings.read(header_buf); + readStringBinary(distributed_header.insert_query, header_buf); + distributed_header.insert_settings.read(header_buf); if (header_buf.hasPendingData()) - header.client_info.read(header_buf, initiator_revision); + distributed_header.client_info.read(header_buf, initiator_revision); if (header_buf.hasPendingData()) { - readVarUInt(header.rows, header_buf); - readVarUInt(header.bytes, header_buf); - readStringBinary(header.header, header_buf); + readVarUInt(distributed_header.rows, header_buf); + readVarUInt(distributed_header.bytes, header_buf); + readStringBinary(distributed_header.block_header_string, header_buf); + } + + if (header_buf.hasPendingData()) + { + NativeBlockInputStream header_block_in(header_buf, DBMS_TCP_PROTOCOL_VERSION); + distributed_header.block_header = header_block_in.read(); + if (!distributed_header.block_header) + throw Exception(ErrorCodes::CANNOT_READ_ALL_DATA, "Cannot read header from the {} batch", in.getFileName()); } /// Add handling new data here, for example: @@ -155,20 +167,20 @@ namespace /// /// And note that it is safe, because we have checksum and size for header. - return header; + return distributed_header; } if (query_size == DBMS_DISTRIBUTED_SIGNATURE_HEADER_OLD_FORMAT) { - header.insert_settings.read(in, SettingsWriteFormat::BINARY); - readStringBinary(header.insert_query, in); - return header; + distributed_header.insert_settings.read(in, SettingsWriteFormat::BINARY); + readStringBinary(distributed_header.insert_query, in); + return distributed_header; } - header.insert_query.resize(query_size); - in.readStrict(header.insert_query.data(), query_size); + distributed_header.insert_query.resize(query_size); + in.readStrict(distributed_header.insert_query.data(), query_size); - return header; + return distributed_header; } /// remote_error argument is used to decide whether some errors should be @@ -200,35 +212,71 @@ namespace return nullptr; } - void writeRemoteConvert(const DistributedHeader & header, RemoteBlockOutputStream & remote, ReadBufferFromFile & in, Poco::Logger * log) + void writeAndConvert(RemoteBlockOutputStream & remote, ReadBufferFromFile & in) { - if (remote.getHeader() && header.header != remote.getHeader().dumpStructure()) + CompressedReadBuffer decompressing_in(in); + NativeBlockInputStream block_in(decompressing_in, DBMS_TCP_PROTOCOL_VERSION); + block_in.readPrefix(); + + while (Block block = block_in.read()) { - LOG_WARNING(log, - "Structure does not match (remote: {}, local: {}), implicit conversion will be done", - remote.getHeader().dumpStructure(), header.header); - - CompressedReadBuffer decompressing_in(in); - /// Lack of header, requires to read blocks - NativeBlockInputStream block_in(decompressing_in, DBMS_TCP_PROTOCOL_VERSION); - - block_in.readPrefix(); - while (Block block = block_in.read()) - { - ConvertingBlockInputStream convert( - std::make_shared(block), - remote.getHeader(), - ConvertingBlockInputStream::MatchColumnsMode::Name); - auto adopted_block = convert.read(); - remote.write(adopted_block); - } - block_in.readSuffix(); + ConvertingBlockInputStream convert( + std::make_shared(block), + remote.getHeader(), + ConvertingBlockInputStream::MatchColumnsMode::Name); + auto adopted_block = convert.read(); + remote.write(adopted_block); } - else + + block_in.readSuffix(); + } + + void writeRemoteConvert( + const DistributedHeader & distributed_header, + RemoteBlockOutputStream 
& remote, + bool compression_expected, + ReadBufferFromFile & in, + Poco::Logger * log) + { + if (!remote.getHeader()) { CheckingCompressedReadBuffer checking_in(in); remote.writePrepared(checking_in); + return; } + + /// This is old format, that does not have header for the block in the file header, + /// applying ConvertingBlockInputStream in this case is not a big overhead. + /// + /// Anyway we can get header only from the first block, which contain all rows anyway. + if (!distributed_header.block_header) + { + LOG_TRACE(log, "Processing batch {} with old format (no header)", in.getFileName()); + + writeAndConvert(remote, in); + return; + } + + if (!blocksHaveEqualStructure(distributed_header.block_header, remote.getHeader())) + { + LOG_WARNING(log, + "Structure does not match (remote: {}, local: {}), implicit conversion will be done", + remote.getHeader().dumpStructure(), distributed_header.block_header.dumpStructure()); + + writeAndConvert(remote, in); + return; + } + + /// If connection does not use compression, we have to uncompress the data. + if (!compression_expected) + { + writeAndConvert(remote, in); + return; + } + + /// Otherwise write data as it was already prepared (more efficient path). + CheckingCompressedReadBuffer checking_in(in); + remote.writePrepared(checking_in); } } @@ -245,14 +293,14 @@ StorageDistributedDirectoryMonitor::StorageDistributedDirectoryMonitor( , disk(disk_) , relative_path(relative_path_) , path(disk->getPath() + relative_path + '/') - , should_batch_inserts(storage.global_context.getSettingsRef().distributed_directory_monitor_batch_inserts) + , should_batch_inserts(storage.getContext()->getSettingsRef().distributed_directory_monitor_batch_inserts) , dir_fsync(storage.getDistributedSettingsRef().fsync_directories) - , min_batched_block_size_rows(storage.global_context.getSettingsRef().min_insert_block_size_rows) - , min_batched_block_size_bytes(storage.global_context.getSettingsRef().min_insert_block_size_bytes) + , min_batched_block_size_rows(storage.getContext()->getSettingsRef().min_insert_block_size_rows) + , min_batched_block_size_bytes(storage.getContext()->getSettingsRef().min_insert_block_size_bytes) , current_batch_file_path(path + "current_batch.txt") - , default_sleep_time(storage.global_context.getSettingsRef().distributed_directory_monitor_sleep_time_ms.totalMilliseconds()) + , default_sleep_time(storage.getContext()->getSettingsRef().distributed_directory_monitor_sleep_time_ms.totalMilliseconds()) , sleep_time(default_sleep_time) - , max_sleep_time(storage.global_context.getSettingsRef().distributed_directory_monitor_max_sleep_time_ms.totalMilliseconds()) + , max_sleep_time(storage.getContext()->getSettingsRef().distributed_directory_monitor_max_sleep_time_ms.totalMilliseconds()) , log(&Poco::Logger::get(getLoggerName())) , monitor_blocker(monitor_blocker_) , metric_pending_files(CurrentMetrics::DistributedFilesToInsert, 0) @@ -427,7 +475,7 @@ ConnectionPoolPtr StorageDistributedDirectoryMonitor::createPool(const std::stri auto pools = createPoolsForAddresses(name, pool_factory, storage.log); - const auto settings = storage.global_context.getSettings(); + const auto settings = storage.getContext()->getSettings(); return pools.size() == 1 ? 
pools.front() : std::make_shared(pools, settings.load_balancing, settings.distributed_replica_error_half_life.totalSeconds(), @@ -490,21 +538,28 @@ bool StorageDistributedDirectoryMonitor::processFiles(const std::mapgetSettingsRef()); try { CurrentMetrics::Increment metric_increment{CurrentMetrics::DistributedSend}; ReadBufferFromFile in(file_path); - const auto & header = readDistributedHeader(in, log); + const auto & distributed_header = readDistributedHeader(in, log); - auto connection = pool->get(timeouts, &header.insert_settings); + LOG_DEBUG(log, "Started processing `{}` ({} rows, {} bytes)", file_path, + formatReadableQuantity(distributed_header.rows), + formatReadableSizeWithBinarySuffix(distributed_header.bytes)); + + auto connection = pool->get(timeouts, &distributed_header.insert_settings); RemoteBlockOutputStream remote{*connection, timeouts, - header.insert_query, header.insert_settings, header.client_info}; + distributed_header.insert_query, + distributed_header.insert_settings, + distributed_header.client_info}; remote.writePrefix(); - writeRemoteConvert(header, remote, in, log); + bool compression_expected = connection->getCompression() == Protocol::Compression::Enable; + writeRemoteConvert(distributed_header, remote, compression_expected, in, log); remote.writeSuffix(); } catch (const Exception & e) @@ -515,7 +570,7 @@ void StorageDistributedDirectoryMonitor::processFile(const std::string & file_pa auto dir_sync_guard = getDirectorySyncGuard(dir_fsync, disk, relative_path); markAsSend(file_path); - LOG_TRACE(log, "Finished processing `{}`", file_path); + LOG_TRACE(log, "Finished processing `{}` (took {} ms)", file_path, watch.elapsedMilliseconds()); } struct StorageDistributedDirectoryMonitor::BatchHeader @@ -523,20 +578,21 @@ struct StorageDistributedDirectoryMonitor::BatchHeader Settings settings; String query; ClientInfo client_info; - String sample_block_structure; + Block header; - BatchHeader(Settings settings_, String query_, ClientInfo client_info_, String sample_block_structure_) + BatchHeader(Settings settings_, String query_, ClientInfo client_info_, Block header_) : settings(std::move(settings_)) , query(std::move(query_)) , client_info(std::move(client_info_)) - , sample_block_structure(std::move(sample_block_structure_)) + , header(std::move(header_)) { } bool operator==(const BatchHeader & other) const { - return std::tie(settings, query, client_info.query_kind, sample_block_structure) == - std::tie(other.settings, other.query, other.client_info.query_kind, other.sample_block_structure); + return std::tie(settings, query, client_info.query_kind) == + std::tie(other.settings, other.query, other.client_info.query_kind) && + blocksHaveEqualStructure(header, other.header); } struct Hash @@ -545,7 +601,7 @@ struct StorageDistributedDirectoryMonitor::BatchHeader { SipHash hash_state; hash_state.update(batch_header.query.data(), batch_header.query.size()); - hash_state.update(batch_header.sample_block_structure.data(), batch_header.sample_block_structure.size()); + batch_header.header.updateHash(hash_state); return hash_state.get64(); } }; @@ -587,6 +643,12 @@ struct StorageDistributedDirectoryMonitor::Batch CurrentMetrics::Increment metric_increment{CurrentMetrics::DistributedSend}; + Stopwatch watch; + + LOG_DEBUG(parent.log, "Sending a batch of {} files ({} rows, {} bytes).", file_indices.size(), + formatReadableQuantity(total_rows), + formatReadableSizeWithBinarySuffix(total_bytes)); + if (!recovered) { /// For deduplication in Replicated tables to work, in 
case of error @@ -613,7 +675,7 @@ struct StorageDistributedDirectoryMonitor::Batch Poco::File{tmp_file}.renameTo(parent.current_batch_file_path); } - auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(parent.storage.global_context.getSettingsRef()); + auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(parent.storage.getContext()->getSettingsRef()); auto connection = parent.pool->get(timeouts); bool batch_broken = false; @@ -632,16 +694,18 @@ struct StorageDistributedDirectoryMonitor::Batch } ReadBufferFromFile in(file_path->second); - const auto & header = readDistributedHeader(in, parent.log); + const auto & distributed_header = readDistributedHeader(in, parent.log); if (!remote) { remote = std::make_unique(*connection, timeouts, - header.insert_query, header.insert_settings, header.client_info); + distributed_header.insert_query, + distributed_header.insert_settings, + distributed_header.client_info); remote->writePrefix(); } - - writeRemoteConvert(header, *remote, in, parent.log); + bool compression_expected = connection->getCompression() == Protocol::Compression::Enable; + writeRemoteConvert(distributed_header, *remote, compression_expected, in, parent.log); } if (remote) @@ -660,7 +724,7 @@ struct StorageDistributedDirectoryMonitor::Batch if (!batch_broken) { - LOG_TRACE(parent.log, "Sent a batch of {} files.", file_indices.size()); + LOG_TRACE(parent.log, "Sent a batch of {} files (took {} ms).", file_indices.size(), watch.elapsedMilliseconds()); auto dir_sync_guard = getDirectorySyncGuard(dir_fsync, parent.disk, parent.relative_path); for (UInt64 file_index : file_indices) @@ -808,22 +872,27 @@ void StorageDistributedDirectoryMonitor::processFilesWithBatching(const std::map size_t total_rows = 0; size_t total_bytes = 0; - std::string sample_block_structure; - DistributedHeader header; + Block header; + DistributedHeader distributed_header; try { /// Determine metadata of the current file and check if it is not broken. 
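For reference, the rewritten writeRemoteConvert() earlier in these directory-monitor hunks boils down to a short decision ladder. The sketch below is a simplified restatement of that control flow, not the exact code from the hunk: the log messages are omitted and the three "convert" cases are merged, but the helpers (writeAndConvert, blocksHaveEqualStructure, CheckingCompressedReadBuffer) are the ones shown above.

    // Simplified restatement of the new writeRemoteConvert() control flow.
    void writeRemoteConvertSketch(
        const DistributedHeader & distributed_header,
        RemoteBlockOutputStream & remote,
        bool compression_expected,
        ReadBufferFromFile & in)
    {
        if (!remote.getHeader())
        {
            /// Remote side did not send a header: forward the prepared data as-is.
            CheckingCompressedReadBuffer checking_in(in);
            remote.writePrepared(checking_in);
            return;
        }

        /// Fall back to block-by-block conversion whenever the file cannot be
        /// forwarded verbatim: old file format without an embedded block header,
        /// mismatched structure, or an uncompressed connection.
        if (!distributed_header.block_header
            || !blocksHaveEqualStructure(distributed_header.block_header, remote.getHeader())
            || !compression_expected)
        {
            writeAndConvert(remote, in);
            return;
        }

        /// Fast path: stream the pre-compressed blocks without repacking them.
        CheckingCompressedReadBuffer checking_in(in);
        remote.writePrepared(checking_in);
    }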
ReadBufferFromFile in{file_path}; - header = readDistributedHeader(in, log); + distributed_header = readDistributedHeader(in, log); - if (header.rows) + if (distributed_header.rows) { - total_rows += header.rows; - total_bytes += header.bytes; - sample_block_structure = header.header; + total_rows += distributed_header.rows; + total_bytes += distributed_header.bytes; } - else + + if (distributed_header.block_header) + header = distributed_header.block_header; + + if (!total_rows || !header) { + LOG_DEBUG(log, "Processing batch {} with old format (no header/rows)", in.getFileName()); + CompressedReadBuffer decompressing_in(in); NativeBlockInputStream block_in(decompressing_in, DBMS_TCP_PROTOCOL_VERSION); block_in.readPrefix(); @@ -833,8 +902,8 @@ void StorageDistributedDirectoryMonitor::processFilesWithBatching(const std::map total_rows += block.rows(); total_bytes += block.bytes(); - if (sample_block_structure.empty()) - sample_block_structure = block.cloneEmpty().dumpStructure(); + if (!header) + header = block.cloneEmpty(); } block_in.readSuffix(); } @@ -850,7 +919,12 @@ void StorageDistributedDirectoryMonitor::processFilesWithBatching(const std::map throw; } - BatchHeader batch_header(std::move(header.insert_settings), std::move(header.insert_query), std::move(header.client_info), std::move(sample_block_structure)); + BatchHeader batch_header( + std::move(distributed_header.insert_settings), + std::move(distributed_header.insert_query), + std::move(distributed_header.client_info), + std::move(header) + ); Batch & batch = header_to_batch.try_emplace(batch_header, *this, files).first->second; batch.file_indices.push_back(file_idx); diff --git a/src/Storages/Distributed/DistributedBlockOutputStream.cpp b/src/Storages/Distributed/DistributedBlockOutputStream.cpp index f8ba4221842..d05fbae60d9 100644 --- a/src/Storages/Distributed/DistributedBlockOutputStream.cpp +++ b/src/Storages/Distributed/DistributedBlockOutputStream.cpp @@ -58,6 +58,7 @@ namespace ErrorCodes { extern const int LOGICAL_ERROR; extern const int TIMEOUT_EXCEEDED; + extern const int TOO_LARGE_DISTRIBUTED_DEPTH; } static Block adoptBlock(const Block & header, const Block & block, Poco::Logger * log) @@ -86,14 +87,14 @@ static void writeBlockConvert(const BlockOutputStreamPtr & out, const Block & bl DistributedBlockOutputStream::DistributedBlockOutputStream( - const Context & context_, + ContextPtr context_, StorageDistributed & storage_, const StorageMetadataPtr & metadata_snapshot_, const ASTPtr & query_ast_, const ClusterPtr & cluster_, bool insert_sync_, UInt64 insert_timeout_) - : context(context_) + : context(Context::createCopy(context_)) , storage(storage_) , metadata_snapshot(metadata_snapshot_) , query_ast(query_ast_) @@ -103,6 +104,10 @@ DistributedBlockOutputStream::DistributedBlockOutputStream( , insert_timeout(insert_timeout_) , log(&Poco::Logger::get("DistributedBlockOutputStream")) { + const auto & settings = context->getSettingsRef(); + if (settings.max_distributed_depth && context->getClientInfo().distributed_depth > settings.max_distributed_depth) + throw Exception("Maximum distributed depth exceeded", ErrorCodes::TOO_LARGE_DISTRIBUTED_DEPTH); + context->getClientInfo().distributed_depth += 1; } @@ -143,7 +148,7 @@ void DistributedBlockOutputStream::write(const Block & block) void DistributedBlockOutputStream::writeAsync(const Block & block) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); bool random_shard_insert = 
settings.insert_distributed_one_random_shard && !storage.has_sharding_key; if (random_shard_insert) @@ -194,7 +199,7 @@ std::string DistributedBlockOutputStream::getCurrentStateDescription() void DistributedBlockOutputStream::initWritingJobs(const Block & first_block, size_t start, size_t end) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); const auto & addresses_with_failovers = cluster->getShardsAddresses(); const auto & shards_info = cluster->getShardsInfo(); size_t num_shards = end - start; @@ -303,7 +308,7 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep } const Block & shard_block = (num_shards > 1) ? job.current_shard_block : current_block; - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); /// Do not initiate INSERT for empty block. if (shard_block.rows() == 0) @@ -343,7 +348,8 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep if (throttler) job.connection_entry->setThrottler(throttler); - job.stream = std::make_shared(*job.connection_entry, timeouts, query_string, settings, context.getClientInfo()); + job.stream = std::make_shared( + *job.connection_entry, timeouts, query_string, settings, context->getClientInfo()); job.stream->writePrefix(); } @@ -357,7 +363,7 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep if (!job.stream) { /// Forward user settings - job.local_context = std::make_unique(context); + job.local_context = Context::createCopy(context); /// Copying of the query AST is required to avoid race, /// in case of INSERT into multiple local shards. @@ -367,7 +373,7 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep /// to resolve tables (in InterpreterInsertQuery::getTable()) auto copy_query_ast = query_ast->clone(); - InterpreterInsertQuery interp(copy_query_ast, *job.local_context); + InterpreterInsertQuery interp(copy_query_ast, job.local_context); auto block_io = interp.execute(); job.stream = block_io.out; @@ -385,7 +391,7 @@ DistributedBlockOutputStream::runWritingJob(DistributedBlockOutputStream::JobRep void DistributedBlockOutputStream::writeSync(const Block & block) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); const auto & shards_info = cluster->getShardsInfo(); bool random_shard_insert = settings.insert_distributed_one_random_shard && !storage.has_sharding_key; size_t start = 0; @@ -562,7 +568,7 @@ void DistributedBlockOutputStream::writeSplitAsync(const Block & block) void DistributedBlockOutputStream::writeAsyncImpl(const Block & block, size_t shard_id) { const auto & shard_info = cluster->getShardsInfo()[shard_id]; - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); if (shard_info.hasInternalReplication()) { @@ -610,7 +616,7 @@ void DistributedBlockOutputStream::writeToLocal(const Block & block, size_t repe void DistributedBlockOutputStream::writeToShard(const Block & block, const std::vector & dir_names) { - const auto & settings = context.getSettingsRef(); + const auto & settings = context->getSettingsRef(); const auto & distributed_settings = storage.getDistributedSettingsRef(); bool fsync = distributed_settings.fsync_after_insert; @@ -675,11 +681,17 @@ void DistributedBlockOutputStream::writeToShard(const Block & block, const std:: WriteBufferFromOwnString header_buf; 
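One behavioural change in the DistributedBlockOutputStream hunks above is easy to miss: the constructor now guards against unbounded INSERT fan-out. The incremented depth is carried in ClientInfo, which writeToShard() and RemoteBlockOutputStream already serialize, so each downstream shard that re-enters a Distributed table sees a larger value and eventually fails instead of looping through self-referencing Distributed tables. A minimal restatement of that guard (not verbatim from the hunk):

    /// Sketch of the recursion guard added to the constructor.
    const auto & settings = context->getSettingsRef();
    if (settings.max_distributed_depth
        && context->getClientInfo().distributed_depth > settings.max_distributed_depth)
        throw Exception("Maximum distributed depth exceeded", ErrorCodes::TOO_LARGE_DISTRIBUTED_DEPTH);
    /// Each hop forwards ClientInfo with the incremented depth to the next shard.
    context->getClientInfo().distributed_depth += 1;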
writeVarUInt(DBMS_TCP_PROTOCOL_VERSION, header_buf); writeStringBinary(query_string, header_buf); - context.getSettingsRef().write(header_buf); - context.getClientInfo().write(header_buf, DBMS_TCP_PROTOCOL_VERSION); + context->getSettingsRef().write(header_buf); + context->getClientInfo().write(header_buf, DBMS_TCP_PROTOCOL_VERSION); writeVarUInt(block.rows(), header_buf); writeVarUInt(block.bytes(), header_buf); - writeStringBinary(block.cloneEmpty().dumpStructure(), header_buf); + writeStringBinary(block.cloneEmpty().dumpStructure(), header_buf); /// obsolete + /// Write block header separately in the batch header. + /// It is required to check whether conversion is needed. + { + NativeBlockOutputStream header_stream{header_buf, DBMS_TCP_PROTOCOL_VERSION, block.cloneEmpty()}; + header_stream.write(block.cloneEmpty()); + } /// Add new fields here, for example: /// writeVarUInt(my_new_data, header_buf); @@ -724,7 +736,7 @@ void DistributedBlockOutputStream::writeToShard(const Block & block, const std:: Poco::File(first_file_tmp_path).remove(); /// Notify - auto sleep_ms = context.getSettingsRef().distributed_directory_monitor_sleep_time_ms; + auto sleep_ms = context->getSettingsRef().distributed_directory_monitor_sleep_time_ms; for (const auto & dir_name : dir_names) { auto & directory_monitor = storage.requireDirectoryMonitor(disk, dir_name); @@ -732,5 +744,4 @@ } } - } diff --git a/src/Storages/Distributed/DistributedBlockOutputStream.h b/src/Storages/Distributed/DistributedBlockOutputStream.h index ca57ad46fbb..a9425a98ebf 100644 --- a/src/Storages/Distributed/DistributedBlockOutputStream.h +++ b/src/Storages/Distributed/DistributedBlockOutputStream.h @@ -38,7 +38,7 @@ class DistributedBlockOutputStream : public IBlockOutputStream { public: DistributedBlockOutputStream( - const Context & context_, + ContextPtr context_, StorageDistributed & storage_, const StorageMetadataPtr & metadata_snapshot_, const ASTPtr & query_ast_, @@ -83,8 +83,7 @@ private: /// Returns the number of blocks that were written for each cluster node. Used during exception handling.
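Taken together with the readDistributedHeader() changes at the top of this section, the on-disk .bin file header now carries, in order: the protocol revision, the insert query, the settings, the client info, the row and byte counts, the old dumpStructure() string (kept only for compatibility), and finally an empty block serialized in Native format that describes the column structure. Readers of older files simply run out of pending data before reaching the new fields. A rough sketch of the write side under that ordering (the function name is invented, the calls are the ones from the hunks above):

    /// Rough sketch of the extended distributed .bin file header (write side).
    void writeDistributedFileHeaderSketch(
        WriteBuffer & header_buf, const Block & block,
        const String & query_string, ContextPtr context)
    {
        writeVarUInt(DBMS_TCP_PROTOCOL_VERSION, header_buf);
        writeStringBinary(query_string, header_buf);
        context->getSettingsRef().write(header_buf);
        context->getClientInfo().write(header_buf, DBMS_TCP_PROTOCOL_VERSION);
        writeVarUInt(block.rows(), header_buf);
        writeVarUInt(block.bytes(), header_buf);
        writeStringBinary(block.cloneEmpty().dumpStructure(), header_buf); /// obsolete, compatibility only
        /// New: a Native-format empty block; the reader uses it to decide whether
        /// an implicit conversion is needed before sending data to the remote server.
        NativeBlockOutputStream header_stream{header_buf, DBMS_TCP_PROTOCOL_VERSION, block.cloneEmpty()};
        header_stream.write(block.cloneEmpty());
    }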
std::string getCurrentStateDescription(); -private: - const Context & context; + ContextPtr context; StorageDistributed & storage; StorageMetadataPtr metadata_snapshot; ASTPtr query_ast; @@ -115,7 +114,7 @@ private: Block current_shard_block; ConnectionPool::Entry connection_entry; - std::unique_ptr local_context; + ContextPtr local_context; BlockOutputStreamPtr stream; UInt64 blocks_written = 0; diff --git a/src/Storages/HDFS/HDFSCommon.cpp b/src/Storages/HDFS/HDFSCommon.cpp index e5ec8a06139..40f52921008 100644 --- a/src/Storages/HDFS/HDFSCommon.cpp +++ b/src/Storages/HDFS/HDFSCommon.cpp @@ -9,14 +9,15 @@ #include #include + namespace DB { namespace ErrorCodes { -extern const int BAD_ARGUMENTS; -extern const int NETWORK_ERROR; -extern const int EXCESSIVE_ELEMENT_IN_CONFIG; -extern const int NO_ELEMENTS_IN_CONFIG; + extern const int BAD_ARGUMENTS; + extern const int NETWORK_ERROR; + extern const int EXCESSIVE_ELEMENT_IN_CONFIG; + extern const int NO_ELEMENTS_IN_CONFIG; } const String HDFSBuilderWrapper::CONFIG_PREFIX = "hdfs"; diff --git a/src/Storages/HDFS/HDFSCommon.h b/src/Storages/HDFS/HDFSCommon.h index fa1ca88464e..154c253a76b 100644 --- a/src/Storages/HDFS/HDFSCommon.h +++ b/src/Storages/HDFS/HDFSCommon.h @@ -17,6 +17,7 @@ namespace DB { + namespace detail { struct HDFSFsDeleter @@ -28,16 +29,14 @@ namespace detail }; } + struct HDFSFileInfo { hdfsFileInfo * file_info; int length; - HDFSFileInfo() - : file_info(nullptr) - , length(0) - { - } + HDFSFileInfo() : file_info(nullptr) , length(0) {} + HDFSFileInfo(const HDFSFileInfo & other) = delete; HDFSFileInfo(HDFSFileInfo && other) = default; HDFSFileInfo & operator=(const HDFSFileInfo & other) = delete; @@ -49,17 +48,30 @@ struct HDFSFileInfo } }; + class HDFSBuilderWrapper { - hdfsBuilder * hdfs_builder; - String hadoop_kerberos_keytab; - String hadoop_kerberos_principal; - String hadoop_kerberos_kinit_command = "kinit"; - String hadoop_security_kerberos_ticket_cache_path; - static std::mutex kinit_mtx; +friend HDFSBuilderWrapper createHDFSBuilder(const String & uri_str, const Poco::Util::AbstractConfiguration &); - std::vector> config_stor; +static const String CONFIG_PREFIX; + +public: + HDFSBuilderWrapper() : hdfs_builder(hdfsNewBuilder()) {} + + ~HDFSBuilderWrapper() { hdfsFreeBuilder(hdfs_builder); } + + HDFSBuilderWrapper(const HDFSBuilderWrapper &) = delete; + HDFSBuilderWrapper(HDFSBuilderWrapper &&) = default; + + hdfsBuilder * get() { return hdfs_builder; } + +private: + void loadFromConfig(const Poco::Util::AbstractConfiguration & config, const String & config_path, bool isUser = false); + + String getKinitCmd(); + + void runKinit(); // hdfs builder relies on an external config data storage std::pair& keep(const String & k, const String & v) @@ -67,48 +79,24 @@ class HDFSBuilderWrapper return config_stor.emplace_back(std::make_pair(k, v)); } + hdfsBuilder * hdfs_builder; + String hadoop_kerberos_keytab; + String hadoop_kerberos_principal; + String hadoop_kerberos_kinit_command = "kinit"; + String hadoop_security_kerberos_ticket_cache_path; + + static std::mutex kinit_mtx; + std::vector> config_stor; bool need_kinit{false}; - - static const String CONFIG_PREFIX; - -private: - - void loadFromConfig(const Poco::Util::AbstractConfiguration & config, const String & config_path, bool isUser = false); - - String getKinitCmd(); - - void runKinit(); - -public: - - hdfsBuilder * - get() - { - return hdfs_builder; - } - - HDFSBuilderWrapper() - : hdfs_builder(hdfsNewBuilder()) - { - } - - ~HDFSBuilderWrapper() - { - 
hdfsFreeBuilder(hdfs_builder); - - } - - HDFSBuilderWrapper(const HDFSBuilderWrapper &) = delete; - HDFSBuilderWrapper(HDFSBuilderWrapper &&) = default; - - friend HDFSBuilderWrapper createHDFSBuilder(const String & uri_str, const Poco::Util::AbstractConfiguration &); }; using HDFSFSPtr = std::unique_ptr, detail::HDFSFsDeleter>; + // set read/connect timeout, default value in libhdfs3 is about 1 hour, and too large /// TODO Allow to tune from query Settings. HDFSBuilderWrapper createHDFSBuilder(const String & uri_str, const Poco::Util::AbstractConfiguration &); HDFSFSPtr createHDFSFS(hdfsBuilder * builder); + } #endif diff --git a/src/Storages/HDFS/ReadBufferFromHDFS.cpp b/src/Storages/HDFS/ReadBufferFromHDFS.cpp index affb76314b1..29ea46c7590 100644 --- a/src/Storages/HDFS/ReadBufferFromHDFS.cpp +++ b/src/Storages/HDFS/ReadBufferFromHDFS.cpp @@ -8,6 +8,7 @@ namespace DB { + namespace ErrorCodes { extern const int NETWORK_ERROR; @@ -21,34 +22,39 @@ struct ReadBufferFromHDFS::ReadBufferFromHDFSImpl /// HDFS create/open functions are not thread safe static std::mutex hdfs_init_mutex; - std::string hdfs_uri; + String hdfs_uri; + String hdfs_file_path; + hdfsFile fin; HDFSBuilderWrapper builder; HDFSFSPtr fs; - ReadBufferFromHDFSImpl(const std::string & hdfs_name_, + explicit ReadBufferFromHDFSImpl( + const std::string & hdfs_uri_, + const std::string & hdfs_file_path_, const Poco::Util::AbstractConfiguration & config_) - : hdfs_uri(hdfs_name_), - builder(createHDFSBuilder(hdfs_uri, config_)) + : hdfs_uri(hdfs_uri_) + , hdfs_file_path(hdfs_file_path_) + , builder(createHDFSBuilder(hdfs_uri_, config_)) { std::lock_guard lock(hdfs_init_mutex); fs = createHDFSFS(builder.get()); - const size_t begin_of_path = hdfs_uri.find('/', hdfs_uri.find("//") + 2); - const std::string path = hdfs_uri.substr(begin_of_path); - fin = hdfsOpenFile(fs.get(), path.c_str(), O_RDONLY, 0, 0, 0); + fin = hdfsOpenFile(fs.get(), hdfs_file_path.c_str(), O_RDONLY, 0, 0, 0); if (fin == nullptr) - throw Exception("Unable to open HDFS file: " + path + " error: " + std::string(hdfsGetLastError()), - ErrorCodes::CANNOT_OPEN_FILE); + throw Exception(ErrorCodes::CANNOT_OPEN_FILE, + "Unable to open HDFS file: {}. Error: {}", + hdfs_uri + hdfs_file_path, std::string(hdfsGetLastError())); } int read(char * start, size_t size) const { int bytes_read = hdfsRead(fs.get(), fin, start, size); if (bytes_read < 0) - throw Exception("Fail to read HDFS file: " + hdfs_uri + " " + std::string(hdfsGetLastError()), - ErrorCodes::NETWORK_ERROR); + throw Exception(ErrorCodes::NETWORK_ERROR, + "Fail to read from HDFS: {}, file path: {}. 
Error: {}", + hdfs_uri, hdfs_file_path, std::string(hdfsGetLastError())); return bytes_read; } @@ -62,11 +68,13 @@ struct ReadBufferFromHDFS::ReadBufferFromHDFSImpl std::mutex ReadBufferFromHDFS::ReadBufferFromHDFSImpl::hdfs_init_mutex; -ReadBufferFromHDFS::ReadBufferFromHDFS(const std::string & hdfs_name_, - const Poco::Util::AbstractConfiguration & config_, - size_t buf_size_) +ReadBufferFromHDFS::ReadBufferFromHDFS( + const String & hdfs_uri_, + const String & hdfs_file_path_, + const Poco::Util::AbstractConfiguration & config_, + size_t buf_size_) : BufferWithOwnMemory(buf_size_) - , impl(std::make_unique(hdfs_name_, config_)) + , impl(std::make_unique(hdfs_uri_, hdfs_file_path_, config_)) { } diff --git a/src/Storages/HDFS/ReadBufferFromHDFS.h b/src/Storages/HDFS/ReadBufferFromHDFS.h index 8d26c001b2e..bd14e3d3792 100644 --- a/src/Storages/HDFS/ReadBufferFromHDFS.h +++ b/src/Storages/HDFS/ReadBufferFromHDFS.h @@ -7,11 +7,8 @@ #include #include #include - #include - #include - #include @@ -22,13 +19,19 @@ namespace DB */ class ReadBufferFromHDFS : public BufferWithOwnMemory { - struct ReadBufferFromHDFSImpl; - std::unique_ptr impl; +struct ReadBufferFromHDFSImpl; + public: - ReadBufferFromHDFS(const std::string & hdfs_name_, const Poco::Util::AbstractConfiguration &, size_t buf_size_ = DBMS_DEFAULT_BUFFER_SIZE); + ReadBufferFromHDFS(const String & hdfs_uri_, const String & hdfs_file_path_, + const Poco::Util::AbstractConfiguration &, size_t buf_size_ = DBMS_DEFAULT_BUFFER_SIZE); + ~ReadBufferFromHDFS() override; bool nextImpl() override; + +private: + std::unique_ptr impl; }; } + #endif diff --git a/src/Storages/HDFS/StorageHDFS.cpp b/src/Storages/HDFS/StorageHDFS.cpp index e26d3375c33..ad2a63c44b1 100644 --- a/src/Storages/HDFS/StorageHDFS.cpp +++ b/src/Storages/HDFS/StorageHDFS.cpp @@ -40,15 +40,15 @@ StorageHDFS::StorageHDFS(const String & uri_, const String & format_name_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - Context & context_, + ContextPtr context_, const String & compression_method_ = "") : IStorage(table_id_) + , WithContext(context_) , uri(uri_) , format_name(format_name_) - , context(context_) , compression_method(compression_method_) { - context.getRemoteHostFilter().checkURL(Poco::URI(uri)); + context_->getRemoteHostFilter().checkURL(Poco::URI(uri)); StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -59,7 +59,7 @@ StorageHDFS::StorageHDFS(const String & uri_, namespace { -class HDFSSource : public SourceWithProgress +class HDFSSource : public SourceWithProgress, WithContext { public: struct SourcesInfo @@ -90,16 +90,16 @@ public: String format_, String compression_method_, Block sample_block_, - const Context & context_, + ContextPtr context_, UInt64 max_block_size_) : SourceWithProgress(getHeader(sample_block_, source_info_->need_path_column, source_info_->need_file_column)) + , WithContext(context_) , source_info(std::move(source_info_)) , uri(std::move(uri_)) , format(std::move(format_)) , compression_method(compression_method_) , max_block_size(max_block_size_) , sample_block(std::move(sample_block_)) - , context(context_) { } @@ -122,8 +122,8 @@ public: current_path = uri + path; auto compression = chooseCompressionMethod(path, compression_method); - auto read_buf = wrapReadBufferWithCompressionMethod(std::make_unique(current_path, context.getGlobalContext().getConfigRef()), compression); - auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, 
max_block_size); + auto read_buf = wrapReadBufferWithCompressionMethod(std::make_unique(uri, path, getContext()->getGlobalContext()->getConfigRef()), compression); + auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, getContext(), max_block_size); auto input_stream = std::make_shared(input_format); reader = std::make_shared>(input_stream, std::move(read_buf)); @@ -169,7 +169,6 @@ private: UInt64 max_block_size; Block sample_block; - const Context & context; }; class HDFSBlockOutputStream : public IBlockOutputStream @@ -178,11 +177,11 @@ public: HDFSBlockOutputStream(const String & uri, const String & format, const Block & sample_block_, - const Context & context, + ContextPtr context, const CompressionMethod compression_method) : sample_block(sample_block_) { - write_buf = wrapWriteBufferWithCompressionMethod(std::make_unique(uri, context.getGlobalContext().getConfigRef()), compression_method, 3); + write_buf = wrapWriteBufferWithCompressionMethod(std::make_unique(uri, context->getGlobalContext()->getConfigRef()), compression_method, 3); writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, *write_buf, sample_block, context); } @@ -267,21 +266,32 @@ Pipe StorageHDFS::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context_, + ContextPtr context_, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) { - const size_t begin_of_path = uri.find('/', uri.find("//") + 2); + size_t begin_of_path; + /// This uri is checked for correctness in constructor of StorageHDFS and never modified afterwards + auto two_slash = uri.find("//"); + + if (two_slash == std::string::npos) + begin_of_path = uri.find('/'); + else + begin_of_path = uri.find('/', two_slash + 2); + const String path_from_uri = uri.substr(begin_of_path); const String uri_without_path = uri.substr(0, begin_of_path); - HDFSBuilderWrapper builder = createHDFSBuilder(uri_without_path + "/", context_.getGlobalContext().getConfigRef()); + HDFSBuilderWrapper builder = createHDFSBuilder(uri_without_path + "/", context_->getGlobalContext()->getConfigRef()); HDFSFSPtr fs = createHDFSFS(builder.get()); auto sources_info = std::make_shared(); sources_info->uris = LSWithRegexpMatching("/", fs, path_from_uri); + if (sources_info->uris.empty()) + LOG_WARNING(log, "No file in HDFS matches the path: {}", uri); + for (const auto & column : column_names) { if (column == "_path") @@ -302,12 +312,12 @@ Pipe StorageHDFS::read( return Pipe::unitePipes(std::move(pipes)); } -BlockOutputStreamPtr StorageHDFS::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) +BlockOutputStreamPtr StorageHDFS::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { return std::make_shared(uri, format_name, metadata_snapshot->getSampleBlock(), - context, + getContext(), chooseCompressionMethod(uri, compression_method)); } @@ -321,22 +331,22 @@ void registerStorageHDFS(StorageFactory & factory) throw Exception( "Storage HDFS requires 2 or 3 arguments: url, name of used format and optional compression method.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[0], args.local_context); + engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[0], args.getLocalContext()); String url = 
engine_args[0]->as().value.safeGet(); - engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.local_context); + engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.getLocalContext()); String format_name = engine_args[1]->as().value.safeGet(); String compression_method; if (engine_args.size() == 3) { - engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.local_context); + engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.getLocalContext()); compression_method = engine_args[2]->as().value.safeGet(); } else compression_method = "auto"; - return StorageHDFS::create(url, args.table_id, format_name, args.columns, args.constraints, args.context, compression_method); + return StorageHDFS::create(url, args.table_id, format_name, args.columns, args.constraints, args.getContext(), compression_method); }, { .source_access_type = AccessType::HDFS, diff --git a/src/Storages/HDFS/StorageHDFS.h b/src/Storages/HDFS/StorageHDFS.h index 4172bce1cd1..e3f235296ac 100644 --- a/src/Storages/HDFS/StorageHDFS.h +++ b/src/Storages/HDFS/StorageHDFS.h @@ -13,7 +13,7 @@ namespace DB * This class represents table engine for external hdfs files. * Read method is supported for now. */ -class StorageHDFS final : public ext::shared_ptr_helper, public IStorage +class StorageHDFS final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; public: @@ -23,12 +23,12 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; NamesAndTypesList getVirtuals() const override; @@ -38,13 +38,12 @@ protected: const String & format_name_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - Context & context_, + ContextPtr context_, const String & compression_method_); private: - String uri; + const String uri; String format_name; - Context & context; String compression_method; Poco::Logger * log = &Poco::Logger::get("StorageHDFS"); diff --git a/src/Storages/IStorage.cpp b/src/Storages/IStorage.cpp index 39f6d1f632e..f7fb359432e 100644 --- a/src/Storages/IStorage.cpp +++ b/src/Storages/IStorage.cpp @@ -84,7 +84,7 @@ Pipe IStorage::read( const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) @@ -97,7 +97,7 @@ void IStorage::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) @@ -116,12 +116,12 @@ void IStorage::read( } Pipe IStorage::alterPartition( - const StorageMetadataPtr & /* metadata_snapshot */, const PartitionCommands & /* commands */, const Context & /* context */) + const StorageMetadataPtr & /* metadata_snapshot */, const PartitionCommands & /* 
commands */, ContextPtr /* context */) { throw Exception("Partition operations are not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); } -void IStorage::alter(const AlterCommands & params, const Context & context, TableLockHolder &) +void IStorage::alter(const AlterCommands & params, ContextPtr context, TableLockHolder &) { auto table_id = getStorageID(); StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); @@ -131,7 +131,7 @@ void IStorage::alter(const AlterCommands & params, const Context & context, Tabl } -void IStorage::checkAlterIsPossible(const AlterCommands & commands, const Context & /* context */) const +void IStorage::checkAlterIsPossible(const AlterCommands & commands, ContextPtr /* context */) const { for (const auto & command : commands) { @@ -179,7 +179,7 @@ Names IStorage::getAllRegisteredNames() const return result; } -NameDependencies IStorage::getDependentViewsByColumn(const Context & context) const +NameDependencies IStorage::getDependentViewsByColumn(ContextPtr context) const { NameDependencies name_deps; auto dependencies = DatabaseCatalog::instance().getDependencies(storage_id); diff --git a/src/Storages/IStorage.h b/src/Storages/IStorage.h index 4dfd2ca50f3..e48e9e49919 100644 --- a/src/Storages/IStorage.h +++ b/src/Storages/IStorage.h @@ -5,13 +5,15 @@ #include #include #include -#include +#include #include -#include +#include #include -#include #include +#include #include +#include +#include #include #include #include @@ -29,11 +31,10 @@ namespace ErrorCodes extern const int NOT_IMPLEMENTED; } -class Context; - using StorageActionBlockType = size_t; class ASTCreateQuery; +class ASTInsertQuery; struct Settings; @@ -50,6 +51,9 @@ class Pipe; class QueryPlan; using QueryPlanPtr = std::unique_ptr; +class QueryPipeline; +using QueryPipelinePtr = std::unique_ptr; + class IStoragePolicy; using StoragePolicyPtr = std::shared_ptr; @@ -176,7 +180,7 @@ public: Names getAllRegisteredNames() const override; - NameDependencies getDependentViewsByColumn(const Context & context) const; + NameDependencies getDependentViewsByColumn(ContextPtr context) const; protected: /// Returns whether the column is virtual - by default all columns are real. @@ -226,7 +230,7 @@ public: * QueryProcessingStage::Enum required for Distributed over Distributed, * since it cannot return Complete for intermediate queries never. 
*/ - virtual QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const + virtual QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const { return QueryProcessingStage::FetchColumns; } @@ -253,7 +257,7 @@ public: virtual BlockInputStreams watch( const Names & /*column_names*/, const SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum & /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) @@ -285,7 +289,7 @@ public: const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/); @@ -297,7 +301,7 @@ public: const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/); @@ -314,11 +318,24 @@ public: virtual BlockOutputStreamPtr write( const ASTPtr & /*query*/, const StorageMetadataPtr & /*metadata_snapshot*/, - const Context & /*context*/) + ContextPtr /*context*/) { throw Exception("Method write is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); } + /** Writes the data to a table in distributed manner. + * It is supposed that implementation looks into SELECT part of the query and executes distributed + * INSERT SELECT if it is possible with current storage as a receiver and query SELECT part as a producer. + * + * Returns query pipeline if distributed writing is possible, and nullptr otherwise. + */ + virtual QueryPipelinePtr distributedWrite( + const ASTInsertQuery & /*query*/, + ContextPtr /*context*/) + { + return nullptr; + } + /** Delete the table data. Called before deleting the directory with the data. * The method can be called only after detaching table from Context (when no queries are performed with table). * The table is not usable during and after call to this method. @@ -333,7 +350,7 @@ public: virtual void truncate( const ASTPtr & /*query*/, const StorageMetadataPtr & /* metadata_snapshot */, - const Context & /* context */, + ContextPtr /* context */, TableExclusiveLockHolder &) { throw Exception("Truncate is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); @@ -361,12 +378,12 @@ public: /** ALTER tables in the form of column changes that do not affect the change * to Storage or its parameters. Executes under alter lock (lockForAlter). */ - virtual void alter(const AlterCommands & params, const Context & context, TableLockHolder & alter_lock_holder); + virtual void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & alter_lock_holder); /** Checks that alter commands can be applied to storage. For example, columns can be modified, * or primary key can be changes, etc. */ - virtual void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const; + virtual void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const; /** * Checks that mutation commands can be applied to storage. 
@@ -379,7 +396,7 @@ public: virtual Pipe alterPartition( const StorageMetadataPtr & /* metadata_snapshot */, const PartitionCommands & /* commands */, - const Context & /* context */); + ContextPtr /* context */); /// Checks that partition commands can be applied to storage. virtual void checkAlterPartitionIsPossible(const PartitionCommands & commands, const StorageMetadataPtr & metadata_snapshot, const Settings & settings) const; @@ -394,13 +411,13 @@ public: bool /*final*/, bool /*deduplicate*/, const Names & /* deduplicate_by_columns */, - const Context & /*context*/) + ContextPtr /*context*/) { throw Exception("Method optimize is not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); } /// Mutate the table contents - virtual void mutate(const MutationCommands &, const Context &) + virtual void mutate(const MutationCommands &, ContextPtr) { throw Exception("Mutations are not supported by storage " + getName(), ErrorCodes::NOT_IMPLEMENTED); } @@ -444,10 +461,10 @@ public: virtual bool supportsIndexForIn() const { return false; } /// Provides a hint that the storage engine may evaluate the IN-condition by using an index. - virtual bool mayBenefitFromIndexForIn(const ASTPtr & /* left_in_operand */, const Context & /* query_context */, const StorageMetadataPtr & /* metadata_snapshot */) const { return false; } + virtual bool mayBenefitFromIndexForIn(const ASTPtr & /* left_in_operand */, ContextPtr /* query_context */, const StorageMetadataPtr & /* metadata_snapshot */) const { return false; } /// Checks validity of the data - virtual CheckResults checkData(const ASTPtr & /* query */, const Context & /* context */) { throw Exception("Check query is not supported for " + getName() + " storage", ErrorCodes::NOT_IMPLEMENTED); } + virtual CheckResults checkData(const ASTPtr & /* query */, ContextPtr /* context */) { throw Exception("Check query is not supported for " + getName() + " storage", ErrorCodes::NOT_IMPLEMENTED); } /// Checks that table could be dropped right now /// Otherwise - throws an exception with detailed information. @@ -480,7 +497,7 @@ public: virtual std::optional totalRows(const Settings &) const { return {}; } /// Same as above but also take partition predicate into account. 
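The new IStorage::distributedWrite() virtual above gives a storage engine the chance to execute INSERT SELECT in a distributed manner; the default implementation returns nullptr, which keeps the ordinary write() pipeline. The override below is purely illustrative, with an invented engine name and invented helpers, only to show the contract implied by the declaration in the hunk:

    /// Hypothetical member of an invented storage engine, illustrating the contract:
    /// return a pipeline to claim the INSERT SELECT, or nullptr to fall back to write().
    QueryPipelinePtr StorageExampleSharded::distributedWrite(const ASTInsertQuery & query, ContextPtr local_context)
    {
        /// Inspect the SELECT part of the INSERT; only push it down when source and
        /// destination are co-located on the same shards.
        if (!selectIsColocatedWithTarget(query, local_context))        /// hypothetical helper
            return nullptr;                                            /// ordinary write() path is used
        return buildRemoteInsertSelectPipeline(query, local_context);  /// hypothetical helper
    }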
- virtual std::optional totalRowsByPartitionPredicate(const SelectQueryInfo &, const Context &) const { return {}; } + virtual std::optional totalRowsByPartitionPredicate(const SelectQueryInfo &, ContextPtr) const { return {}; } /// If it is possible to quickly determine exact number of bytes for the table on storage: /// - memory (approximated, resident) diff --git a/src/Storages/IndicesDescription.cpp b/src/Storages/IndicesDescription.cpp index dbc95615383..3147ad70696 100644 --- a/src/Storages/IndicesDescription.cpp +++ b/src/Storages/IndicesDescription.cpp @@ -67,7 +67,7 @@ IndexDescription & IndexDescription::operator=(const IndexDescription & other) return *this; } -IndexDescription IndexDescription::getIndexFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, const Context & context) +IndexDescription IndexDescription::getIndexFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, ContextPtr context) { const auto * index_definition = definition_ast->as(); if (!index_definition) @@ -118,7 +118,7 @@ IndexDescription IndexDescription::getIndexFromAST(const ASTPtr & definition_ast return result; } -void IndexDescription::recalculateWithNewColumns(const ColumnsDescription & new_columns, const Context & context) +void IndexDescription::recalculateWithNewColumns(const ColumnsDescription & new_columns, ContextPtr context) { *this = getIndexFromAST(definition_ast, new_columns, context); } @@ -144,7 +144,7 @@ String IndicesDescription::toString() const } -IndicesDescription IndicesDescription::parse(const String & str, const ColumnsDescription & columns, const Context & context) +IndicesDescription IndicesDescription::parse(const String & str, const ColumnsDescription & columns, ContextPtr context) { IndicesDescription result; if (str.empty()) @@ -160,7 +160,7 @@ IndicesDescription IndicesDescription::parse(const String & str, const ColumnsDe } -ExpressionActionsPtr IndicesDescription::getSingleExpressionForIndices(const ColumnsDescription & columns, const Context & context) const +ExpressionActionsPtr IndicesDescription::getSingleExpressionForIndices(const ColumnsDescription & columns, ContextPtr context) const { ASTPtr combined_expr_list = std::make_shared(); for (const auto & index : *this) diff --git a/src/Storages/IndicesDescription.h b/src/Storages/IndicesDescription.h index f383029837e..d9c7efdb75c 100644 --- a/src/Storages/IndicesDescription.h +++ b/src/Storages/IndicesDescription.h @@ -46,7 +46,7 @@ struct IndexDescription size_t granularity; /// Parse index from definition AST - static IndexDescription getIndexFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, const Context & context); + static IndexDescription getIndexFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, ContextPtr context); IndexDescription() = default; @@ -57,7 +57,7 @@ struct IndexDescription /// Recalculate index with new columns because index expression may change /// if something change in columns. 
- void recalculateWithNewColumns(const ColumnsDescription & new_columns, const Context & context); + void recalculateWithNewColumns(const ColumnsDescription & new_columns, ContextPtr context); }; /// All secondary indices in storage @@ -68,10 +68,10 @@ struct IndicesDescription : public std::vector /// Convert description to string String toString() const; /// Parse description from string - static IndicesDescription parse(const String & str, const ColumnsDescription & columns, const Context & context); + static IndicesDescription parse(const String & str, const ColumnsDescription & columns, ContextPtr context); /// Return common expression for all stored indices - ExpressionActionsPtr getSingleExpressionForIndices(const ColumnsDescription & columns, const Context & context) const; + ExpressionActionsPtr getSingleExpressionForIndices(const ColumnsDescription & columns, ContextPtr context) const; }; } diff --git a/src/Storages/Kafka/KafkaBlockInputStream.cpp b/src/Storages/Kafka/KafkaBlockInputStream.cpp index bf985902b4d..5d9b19b1972 100644 --- a/src/Storages/Kafka/KafkaBlockInputStream.cpp +++ b/src/Storages/Kafka/KafkaBlockInputStream.cpp @@ -35,8 +35,8 @@ KafkaBlockInputStream::KafkaBlockInputStream( , max_block_size(max_block_size_) , commit_in_suffix(commit_in_suffix_) , non_virtual_header(metadata_snapshot->getSampleBlockNonMaterialized()) - , virtual_header(metadata_snapshot->getSampleBlockForColumns( - {"_topic", "_key", "_offset", "_partition", "_timestamp", "_timestamp_ms", "_headers.name", "_headers.value"}, storage.getVirtuals(), storage.getStorageID())) + , virtual_header(metadata_snapshot->getSampleBlockForColumns(storage.getVirtualColumnNames(), storage.getVirtuals(), storage.getStorageID())) + , handle_error_mode(storage.getHandleKafkaErrorMode()) { } @@ -78,21 +78,22 @@ Block KafkaBlockInputStream::readImpl() // now it's one-time usage InputStream // one block of the needed size (or with desired flush timeout) is formed in one internal iteration // otherwise external iteration will reuse that and logic will became even more fuzzy - MutableColumns result_columns = non_virtual_header.cloneEmptyColumns(); MutableColumns virtual_columns = virtual_header.cloneEmptyColumns(); + auto put_error_to_stream = handle_error_mode == HandleKafkaErrorMode::STREAM; + auto input_format = FormatFactory::instance().getInputFormat( - storage.getFormatName(), *buffer, non_virtual_header, *context, max_block_size); + storage.getFormatName(), *buffer, non_virtual_header, context, max_block_size); InputPort port(input_format->getPort().getHeader(), input_format.get()); connect(input_format->getPort(), port); port.setNeeded(); + std::optional exception_message; auto read_kafka_message = [&] { size_t new_rows = 0; - while (true) { auto status = input_format->prepare(); @@ -136,7 +137,41 @@ Block KafkaBlockInputStream::readImpl() while (true) { - auto new_rows = buffer->poll() ? read_kafka_message() : 0; + size_t new_rows = 0; + exception_message.reset(); + if (buffer->poll()) + { + try + { + new_rows = read_kafka_message(); + } + catch (Exception & e) + { + if (put_error_to_stream) + { + input_format->resetParser(); + exception_message = e.message(); + for (auto & column : result_columns) + { + // read_kafka_message could already push some rows to result_columns + // before exception, we need to fix it. 
+ auto cur_rows = column->size(); + if (cur_rows > total_rows) + { + column->popBack(cur_rows - total_rows); + } + // all data columns will get default value in case of error + column->insertDefault(); + } + new_rows = 1; + } + else + { + e.addMessage("while parsing Kafka message (topic: {}, partition: {}, offset: {})'", buffer->currentTopic(), buffer->currentPartition(), buffer->currentOffset()); + throw; + } + } + } if (new_rows) { @@ -189,6 +224,20 @@ Block KafkaBlockInputStream::readImpl() } virtual_columns[6]->insert(headers_names); virtual_columns[7]->insert(headers_values); + if (put_error_to_stream) + { + if (exception_message) + { + auto payload = buffer->currentPayload(); + virtual_columns[8]->insert(payload); + virtual_columns[9]->insert(*exception_message); + } + else + { + virtual_columns[8]->insertDefault(); + virtual_columns[9]->insertDefault(); + } + } } total_rows = total_rows + new_rows; diff --git a/src/Storages/Kafka/KafkaBlockInputStream.h b/src/Storages/Kafka/KafkaBlockInputStream.h index 517df6ecaf7..98e4b8982e0 100644 --- a/src/Storages/Kafka/KafkaBlockInputStream.h +++ b/src/Storages/Kafka/KafkaBlockInputStream.h @@ -39,7 +39,7 @@ public: private: StorageKafka & storage; StorageMetadataPtr metadata_snapshot; - const std::shared_ptr context; + ContextPtr context; Names column_names; Poco::Logger * log; UInt64 max_block_size; @@ -51,6 +51,7 @@ private: const Block non_virtual_header; const Block virtual_header; + const HandleKafkaErrorMode handle_error_mode; }; } diff --git a/src/Storages/Kafka/KafkaBlockOutputStream.cpp b/src/Storages/Kafka/KafkaBlockOutputStream.cpp index 2cb0fd98c71..21de27708b4 100644 --- a/src/Storages/Kafka/KafkaBlockOutputStream.cpp +++ b/src/Storages/Kafka/KafkaBlockOutputStream.cpp @@ -9,7 +9,7 @@ namespace DB KafkaBlockOutputStream::KafkaBlockOutputStream( StorageKafka & storage_, const StorageMetadataPtr & metadata_snapshot_, - const std::shared_ptr & context_) + const ContextPtr & context_) : storage(storage_) , metadata_snapshot(metadata_snapshot_) , context(context_) @@ -25,11 +25,11 @@ void KafkaBlockOutputStream::writePrefix() { buffer = storage.createWriteBuffer(getHeader()); - auto format_settings = getFormatSettings(*context); + auto format_settings = getFormatSettings(context); format_settings.protobuf.allow_multiple_rows_without_delimiter = true; child = FormatFactory::instance().getOutputStream(storage.getFormatName(), *buffer, - getHeader(), *context, + getHeader(), context, [this](const Columns & columns, size_t row) { buffer->countRow(columns, row); diff --git a/src/Storages/Kafka/KafkaSettings.h b/src/Storages/Kafka/KafkaSettings.h index 1df10d16339..1010c486abb 100644 --- a/src/Storages/Kafka/KafkaSettings.h +++ b/src/Storages/Kafka/KafkaSettings.h @@ -29,7 +29,8 @@ class ASTStorage; M(Char, kafka_row_delimiter, '\0', "The character to be considered as a delimiter in Kafka message.", 0) \ M(String, kafka_schema, "", "Schema identifier (used by schema-based formats) for Kafka engine", 0) \ M(UInt64, kafka_skip_broken_messages, 0, "Skip at least this number of broken messages from Kafka topic per block", 0) \ - M(Bool, kafka_thread_per_consumer, false, "Provide independent thread for each consumer", 0) + M(Bool, kafka_thread_per_consumer, false, "Provide independent thread for each consumer", 0) \ + M(HandleKafkaErrorMode, kafka_handle_error_mode, HandleKafkaErrorMode::DEFAULT, "How to handle errors for Kafka engine. 
Passible values: default, stream.", 0) \ /** TODO: */ /* https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md */ diff --git a/src/Storages/Kafka/ReadBufferFromKafkaConsumer.h b/src/Storages/Kafka/ReadBufferFromKafkaConsumer.h index 1d889655941..49d3df0e180 100644 --- a/src/Storages/Kafka/ReadBufferFromKafkaConsumer.h +++ b/src/Storages/Kafka/ReadBufferFromKafkaConsumer.h @@ -63,6 +63,7 @@ public: auto currentPartition() const { return current[-1].get_partition(); } auto currentTimestamp() const { return current[-1].get_timestamp(); } const auto & currentHeaderList() const { return current[-1].get_header_list(); } + String currentPayload() const { return current[-1].get_payload(); } private: using Messages = std::vector; diff --git a/src/Storages/Kafka/StorageKafka.cpp b/src/Storages/Kafka/StorageKafka.cpp index 45e4ec538a1..15dd5b553b0 100644 --- a/src/Storages/Kafka/StorageKafka.cpp +++ b/src/Storages/Kafka/StorageKafka.cpp @@ -28,6 +28,7 @@ #include #include #include +#include #include #include #include @@ -169,20 +170,19 @@ namespace } StorageKafka::StorageKafka( - const StorageID & table_id_, - const Context & context_, - const ColumnsDescription & columns_, - std::unique_ptr kafka_settings_) + const StorageID & table_id_, ContextPtr context_, const ColumnsDescription & columns_, std::unique_ptr kafka_settings_) : IStorage(table_id_) - , global_context(context_.getGlobalContext()) + , WithContext(context_->getGlobalContext()) , kafka_settings(std::move(kafka_settings_)) - , topics(parseTopics(global_context.getMacros()->expand(kafka_settings->kafka_topic_list.value))) - , brokers(global_context.getMacros()->expand(kafka_settings->kafka_broker_list.value)) - , group(global_context.getMacros()->expand(kafka_settings->kafka_group_name.value)) - , client_id(kafka_settings->kafka_client_id.value.empty() ? getDefaultClientId(table_id_) : global_context.getMacros()->expand(kafka_settings->kafka_client_id.value)) - , format_name(global_context.getMacros()->expand(kafka_settings->kafka_format.value)) + , topics(parseTopics(getContext()->getMacros()->expand(kafka_settings->kafka_topic_list.value))) + , brokers(getContext()->getMacros()->expand(kafka_settings->kafka_broker_list.value)) + , group(getContext()->getMacros()->expand(kafka_settings->kafka_group_name.value)) + , client_id( + kafka_settings->kafka_client_id.value.empty() ? getDefaultClientId(table_id_) + : getContext()->getMacros()->expand(kafka_settings->kafka_client_id.value)) + , format_name(getContext()->getMacros()->expand(kafka_settings->kafka_format.value)) , row_delimiter(kafka_settings->kafka_row_delimiter.value) - , schema_name(global_context.getMacros()->expand(kafka_settings->kafka_schema.value)) + , schema_name(getContext()->getMacros()->expand(kafka_settings->kafka_schema.value)) , num_consumers(kafka_settings->kafka_num_consumers.value) , log(&Poco::Logger::get("StorageKafka (" + table_id_.table_name + ")")) , semaphore(0, num_consumers) @@ -190,13 +190,18 @@ StorageKafka::StorageKafka( , settings_adjustments(createSettingsAdjustments()) , thread_per_consumer(kafka_settings->kafka_thread_per_consumer.value) { + if (kafka_settings->kafka_handle_error_mode == HandleKafkaErrorMode::STREAM) + { + kafka_settings->input_format_allow_errors_num = 0; + kafka_settings->input_format_allow_errors_ratio = 0; + } StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); setInMemoryMetadata(storage_metadata); auto task_count = thread_per_consumer ? 
num_consumers : 1; for (size_t i = 0; i < task_count; ++i) { - auto task = global_context.getMessageBrokerSchedulePool().createTask(log->name(), [this, i]{ threadFunc(i); }); + auto task = getContext()->getMessageBrokerSchedulePool().createTask(log->name(), [this, i]{ threadFunc(i); }); task->deactivate(); tasks.emplace_back(std::make_shared(std::move(task))); } @@ -255,7 +260,7 @@ Pipe StorageKafka::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /* query_info */, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /* processed_stage */, size_t /* max_block_size */, unsigned /* num_streams */) @@ -266,7 +271,7 @@ Pipe StorageKafka::read( /// Always use all consumers at once, otherwise SELECT may not read messages from all partitions. Pipes pipes; pipes.reserve(num_created_consumers); - auto modified_context = std::make_shared(context); + auto modified_context = Context::createCopy(local_context); modified_context->applySettingsChanges(settings_adjustments); // Claim as many consumers as requested, but don't block @@ -284,9 +289,9 @@ Pipe StorageKafka::read( } -BlockOutputStreamPtr StorageKafka::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageKafka::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - auto modified_context = std::make_shared(context); + auto modified_context = Context::createCopy(local_context); modified_context->applySettingsChanges(settings_adjustments); if (topics.size() > 1) @@ -382,7 +387,7 @@ ProducerBufferPtr StorageKafka::createWriteBuffer(const Block & header) updateConfiguration(conf); auto producer = std::make_shared(conf); - const Settings & settings = global_context.getSettingsRef(); + const Settings & settings = getContext()->getSettingsRef(); size_t poll_timeout = settings.stream_poll_timeout_ms.totalMilliseconds(); return std::make_shared( @@ -438,14 +443,14 @@ size_t StorageKafka::getMaxBlockSize() const { return kafka_settings->kafka_max_block_size.changed ? kafka_settings->kafka_max_block_size.value - : (global_context.getSettingsRef().max_insert_block_size.value / num_consumers); + : (getContext()->getSettingsRef().max_insert_block_size.value / num_consumers); } size_t StorageKafka::getPollMaxBatchSize() const { size_t batch_size = kafka_settings->kafka_poll_max_batch_size.changed ? kafka_settings->kafka_poll_max_batch_size.value - : global_context.getSettingsRef().max_block_size.value; + : getContext()->getSettingsRef().max_block_size.value; return std::min(batch_size,getMaxBlockSize()); } @@ -454,13 +459,13 @@ size_t StorageKafka::getPollTimeoutMillisecond() const { return kafka_settings->kafka_poll_timeout_ms.changed ? kafka_settings->kafka_poll_timeout_ms.totalMilliseconds() - : global_context.getSettingsRef().stream_poll_timeout_ms.totalMilliseconds(); + : getContext()->getSettingsRef().stream_poll_timeout_ms.totalMilliseconds(); } void StorageKafka::updateConfiguration(cppkafka::Configuration & conf) { // Update consumer configuration from the configuration - const auto & config = global_context.getConfigRef(); + const auto & config = getContext()->getConfigRef(); if (config.has(CONFIG_PREFIX)) loadFromConfig(conf, config, CONFIG_PREFIX); @@ -512,7 +517,7 @@ bool StorageKafka::checkDependencies(const StorageID & table_id) // Check the dependencies are ready? 
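Summarising the Kafka error-handling changes above: with kafka_handle_error_mode set to stream, a message that fails to parse no longer aborts the batch. The catch branch in KafkaBlockInputStream::readImpl resets the parser, trims any rows the failed message already pushed, emits one placeholder row of defaults, and records the raw payload and exception text, which later land in the new _raw_message and _error virtual columns; input_format_allow_errors_num and input_format_allow_errors_ratio are zeroed in the constructor so parse errors surface as exceptions rather than being skipped. The fragment below restates that recovery step in simplified form (names from the hunks, bookkeeping trimmed):

    /// Simplified restatement of the per-message recovery in readImpl().
    try
    {
        new_rows = read_kafka_message();
    }
    catch (Exception & e)
    {
        if (handle_error_mode != HandleKafkaErrorMode::STREAM)
        {
            e.addMessage("while parsing Kafka message (topic: {}, partition: {}, offset: {})",
                         buffer->currentTopic(), buffer->currentPartition(), buffer->currentOffset());
            throw;                                /// default mode: the exception propagates as before
        }

        input_format->resetParser();
        exception_message = e.message();          /// later written to the _error virtual column
        for (auto & column : result_columns)
        {
            if (column->size() > total_rows)      /// drop rows pushed before the failure
                column->popBack(column->size() - total_rows);
            column->insertDefault();              /// one placeholder row of default values
        }
        new_rows = 1;                             /// the raw payload goes to _raw_message
    }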
for (const auto & db_tab : dependencies) { - auto table = DatabaseCatalog::instance().tryGetTable(db_tab, global_context); + auto table = DatabaseCatalog::instance().tryGetTable(db_tab, getContext()); if (!table) return false; @@ -581,8 +586,10 @@ void StorageKafka::threadFunc(size_t idx) bool StorageKafka::streamToViews() { + Stopwatch watch; + auto table_id = getStorageID(); - auto table = DatabaseCatalog::instance().getTable(table_id, global_context); + auto table = DatabaseCatalog::instance().getTable(table_id, getContext()); if (!table) throw Exception("Engine table " + table_id.getNameForLogs() + " doesn't exist.", ErrorCodes::LOGICAL_ERROR); auto metadata_snapshot = getInMemoryMetadataPtr(); @@ -593,13 +600,13 @@ bool StorageKafka::streamToViews() size_t block_size = getMaxBlockSize(); - auto kafka_context = std::make_shared(global_context); + auto kafka_context = Context::createCopy(getContext()); kafka_context->makeQueryContext(); kafka_context->applySettingsChanges(settings_adjustments); // Create a stream for each consumer and join them in a union stream // Only insert into dependent views and expect that input blocks contain virtual columns - InterpreterInsertQuery interpreter(insert, *kafka_context, false, true, true); + InterpreterInsertQuery interpreter(insert, kafka_context, false, true, true); auto block_io = interpreter.execute(); // Create a stream for each consumer and join them in a union stream @@ -617,7 +624,7 @@ bool StorageKafka::streamToViews() limits.speed_limits.max_execution_time = kafka_settings->kafka_flush_interval_ms.changed ? kafka_settings->kafka_flush_interval_ms - : global_context.getSettingsRef().stream_flush_interval_ms; + : getContext()->getSettingsRef().stream_flush_interval_ms; limits.timeout_overflow_mode = OverflowMode::BREAK; stream->setLimits(limits); @@ -633,7 +640,11 @@ bool StorageKafka::streamToViews() // We can't cancel during copyData, as it's not aware of commits and other kafka-related stuff. 
// It will be cancelled on underlying layer (kafka buffer) std::atomic stub = {false}; - copyData(*in, *block_io.out, &stub); + size_t rows = 0; + copyData(*in, *block_io.out, [&rows](const Block & block) + { + rows += block.rows(); + }, &stub); bool some_stream_is_stalled = false; for (auto & stream : streams) @@ -642,6 +653,10 @@ bool StorageKafka::streamToViews() stream->as()->commit(); } + UInt64 milliseconds = watch.elapsedMilliseconds(); + LOG_DEBUG(log, "Pushing {} rows to {} took {} ms.", + formatReadableQuantity(rows), table_id.getNameForLogs(), milliseconds); + return some_stream_is_stalled; } @@ -690,14 +705,14 @@ void registerStorageKafka(StorageFactory & factory) engine_args[(ARG_NUM)-1] = \ evaluateConstantExpressionAsLiteral( \ engine_args[(ARG_NUM)-1], \ - args.local_context); \ + args.getLocalContext()); \ } \ if ((EVAL) == 2) \ { \ engine_args[(ARG_NUM)-1] = \ evaluateConstantExpressionOrIdentifierAsLiteral( \ engine_args[(ARG_NUM)-1], \ - args.local_context); \ + args.getLocalContext()); \ } \ kafka_settings->PAR_NAME = \ engine_args[(ARG_NUM)-1]->as().value; \ @@ -752,7 +767,7 @@ void registerStorageKafka(StorageFactory & factory) throw Exception("kafka_poll_max_batch_size can not be lower than 1", ErrorCodes::BAD_ARGUMENTS); } - return StorageKafka::create(args.table_id, args.context, args.columns, std::move(kafka_settings)); + return StorageKafka::create(args.table_id, args.getContext(), args.columns, std::move(kafka_settings)); }; factory.registerStorage("Kafka", creator_fn, StorageFactory::StorageFeatures{ .supports_settings = true, }); @@ -760,7 +775,7 @@ void registerStorageKafka(StorageFactory & factory) NamesAndTypesList StorageKafka::getVirtuals() const { - return NamesAndTypesList{ + auto result = NamesAndTypesList{ {"_topic", std::make_shared()}, {"_key", std::make_shared()}, {"_offset", std::make_shared()}, @@ -770,6 +785,32 @@ NamesAndTypesList StorageKafka::getVirtuals() const {"_headers.name", std::make_shared(std::make_shared())}, {"_headers.value", std::make_shared(std::make_shared())} }; + if (kafka_settings->kafka_handle_error_mode == HandleKafkaErrorMode::STREAM) + { + result.push_back({"_raw_message", std::make_shared()}); + result.push_back({"_error", std::make_shared()}); + } + return result; +} + +Names StorageKafka::getVirtualColumnNames() const +{ + auto result = Names { + "_topic", + "_key", + "_offset", + "_partition", + "_timestamp", + "_timestamp_ms", + "_headers.name", + "_headers.value", + }; + if (kafka_settings->kafka_handle_error_mode == HandleKafkaErrorMode::STREAM) + { + result.push_back({"_raw_message"}); + result.push_back({"_error"}); + } + return result; } } diff --git a/src/Storages/Kafka/StorageKafka.h b/src/Storages/Kafka/StorageKafka.h index 53871990810..b09b2ecd39e 100644 --- a/src/Storages/Kafka/StorageKafka.h +++ b/src/Storages/Kafka/StorageKafka.h @@ -28,7 +28,7 @@ struct StorageKafkaInterceptors; /** Implements a Kafka queue table engine that can be used as a persistent queue / buffer, * or as a basic building block for creating pipelines with a continuous insertion / ETL. 
*/ -class StorageKafka final : public ext::shared_ptr_helper, public IStorage +class StorageKafka final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; friend struct StorageKafkaInterceptors; @@ -45,7 +45,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -53,7 +53,7 @@ public: BlockOutputStreamPtr write( const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, - const Context & context) override; + ContextPtr context) override; void pushReadBuffer(ConsumerBufferPtr buf); ConsumerBufferPtr popReadBuffer(); @@ -64,16 +64,17 @@ public: const auto & getFormatName() const { return format_name; } NamesAndTypesList getVirtuals() const override; + Names getVirtualColumnNames() const; + HandleKafkaErrorMode getHandleKafkaErrorMode() const { return kafka_settings->kafka_handle_error_mode; } protected: StorageKafka( const StorageID & table_id_, - const Context & context_, + ContextPtr context_, const ColumnsDescription & columns_, std::unique_ptr kafka_settings_); private: // Configuration and state - const Context & global_context; std::unique_ptr kafka_settings; const Names topics; const String brokers; @@ -112,6 +113,9 @@ private: std::mutex thread_statuses_mutex; std::list> thread_statuses; + /// Handle error mode + HandleKafkaErrorMode handle_error_mode; + SettingsChanges createSettingsAdjustments(); ConsumerBufferPtr createReadBuffer(const size_t consumer_number); diff --git a/src/Storages/KeyDescription.cpp b/src/Storages/KeyDescription.cpp index ee4a20bfc4f..be327313b4d 100644 --- a/src/Storages/KeyDescription.cpp +++ b/src/Storages/KeyDescription.cpp @@ -66,14 +66,14 @@ KeyDescription & KeyDescription::operator=(const KeyDescription & other) void KeyDescription::recalculateWithNewAST( const ASTPtr & new_ast, const ColumnsDescription & columns, - const Context & context) + ContextPtr context) { *this = getSortingKeyFromAST(new_ast, columns, context, additional_column); } void KeyDescription::recalculateWithNewColumns( const ColumnsDescription & new_columns, - const Context & context) + ContextPtr context) { *this = getSortingKeyFromAST(definition_ast, new_columns, context, additional_column); } @@ -81,7 +81,7 @@ void KeyDescription::recalculateWithNewColumns( KeyDescription KeyDescription::getKeyFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context) + ContextPtr context) { return getSortingKeyFromAST(definition_ast, columns, context, {}); } @@ -89,7 +89,7 @@ KeyDescription KeyDescription::getKeyFromAST( KeyDescription KeyDescription::getSortingKeyFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context, + ContextPtr context, const std::optional & additional_column) { KeyDescription result; diff --git a/src/Storages/KeyDescription.h b/src/Storages/KeyDescription.h index 7d1e7efb55f..194aad4d5b2 100644 --- a/src/Storages/KeyDescription.h +++ b/src/Storages/KeyDescription.h @@ -40,28 +40,28 @@ struct KeyDescription static KeyDescription getKeyFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context); + ContextPtr context); /// Sorting key can contain additional column defined by storage type (like /// Version column in VersionedCollapsingMergeTree). 
static KeyDescription getSortingKeyFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context, + ContextPtr context, const std::optional & additional_column); /// Recalculate all expressions and fields for key with new columns without /// changes in constant fields. Just wrapper for static methods. void recalculateWithNewColumns( const ColumnsDescription & new_columns, - const Context & context); + ContextPtr context); /// Recalculate all expressions and fields for key with new ast without /// changes in constant fields. Just wrapper for static methods. void recalculateWithNewAST( const ASTPtr & new_ast, const ColumnsDescription & columns, - const Context & context); + ContextPtr context); KeyDescription() = default; diff --git a/src/Storages/LiveView/StorageBlocks.h b/src/Storages/LiveView/StorageBlocks.h index 4ad0ffb93ca..f4ba8d7b09c 100644 --- a/src/Storages/LiveView/StorageBlocks.h +++ b/src/Storages/LiveView/StorageBlocks.h @@ -33,13 +33,13 @@ public: bool supportsSampling() const override { return true; } bool supportsFinal() const override { return true; } - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override { return to_stage; } + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override { return to_stage; } Pipe read( const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) override diff --git a/src/Storages/LiveView/StorageLiveView.cpp b/src/Storages/LiveView/StorageLiveView.cpp index bfec7bffc8c..1d81405ec26 100644 --- a/src/Storages/LiveView/StorageLiveView.cpp +++ b/src/Storages/LiveView/StorageLiveView.cpp @@ -56,13 +56,13 @@ namespace ErrorCodes } -static StorageID extractDependentTable(ASTPtr & query, Context & context, const String & table_name, ASTPtr & inner_subquery) +static StorageID extractDependentTable(ASTPtr & query, ContextPtr context, const String & table_name, ASTPtr & inner_subquery) { ASTSelectQuery & select_query = typeid_cast(*query); if (auto db_and_table = getDatabaseAndTable(select_query, 0)) { - String select_database_name = context.getCurrentDatabase(); + String select_database_name = context->getCurrentDatabase(); String select_table_name = db_and_table->table; if (db_and_table->database.empty()) @@ -98,7 +98,7 @@ static StorageID extractDependentTable(ASTPtr & query, Context & context, const } } -MergeableBlocksPtr StorageLiveView::collectMergeableBlocks(const Context & context) +MergeableBlocksPtr StorageLiveView::collectMergeableBlocks(ContextPtr local_context) { ASTPtr mergeable_query = inner_query; @@ -109,7 +109,7 @@ MergeableBlocksPtr StorageLiveView::collectMergeableBlocks(const Context & conte BlocksPtrs new_blocks = std::make_shared>(); BlocksPtr base_blocks = std::make_shared(); - InterpreterSelectQuery interpreter(mergeable_query->clone(), context, SelectQueryOptions(QueryProcessingStage::WithMergeableState), Names()); + InterpreterSelectQuery interpreter(mergeable_query->clone(), local_context, SelectQueryOptions(QueryProcessingStage::WithMergeableState), Names()); auto view_mergeable_stream = std::make_shared(interpreter.execute().getInputStream()); @@ -137,7 +137,7 @@ Pipes 
StorageLiveView::blocksToPipes(BlocksPtrs blocks, Block & sample_block) BlockInputStreamPtr StorageLiveView::completeQuery(Pipes pipes) { //FIXME it's dangerous to create Context on stack - auto block_context = std::make_unique(global_context); + auto block_context = Context::createCopy(getContext()); block_context->makeQueryContext(); auto creator = [&](const StorageID & blocks_id_global) @@ -147,17 +147,17 @@ BlockInputStreamPtr StorageLiveView::completeQuery(Pipes pipes) blocks_id_global, parent_table_metadata->getColumns(), std::move(pipes), QueryProcessingStage::WithMergeableState); }; - block_context->addExternalTable(getBlocksTableName(), TemporaryTableHolder(global_context, creator)); + block_context->addExternalTable(getBlocksTableName(), TemporaryTableHolder(getContext(), creator)); - InterpreterSelectQuery select(getInnerBlocksQuery(), *block_context, StoragePtr(), nullptr, SelectQueryOptions(QueryProcessingStage::Complete)); + InterpreterSelectQuery select(getInnerBlocksQuery(), block_context, StoragePtr(), nullptr, SelectQueryOptions(QueryProcessingStage::Complete)); BlockInputStreamPtr data = std::make_shared(select.execute().getInputStream()); /// Squashing is needed here because the view query can generate a lot of blocks /// even when only one block is inserted into the parent table (e.g. if the query is a GROUP BY /// and two-level aggregation is triggered). data = std::make_shared( - data, global_context.getSettingsRef().min_insert_block_size_rows, - global_context.getSettingsRef().min_insert_block_size_bytes); + data, getContext()->getSettingsRef().min_insert_block_size_rows, + getContext()->getSettingsRef().min_insert_block_size_bytes); return data; } @@ -165,7 +165,7 @@ BlockInputStreamPtr StorageLiveView::completeQuery(Pipes pipes) void StorageLiveView::writeIntoLiveView( StorageLiveView & live_view, const Block & block, - const Context & context) + ContextPtr local_context) { BlockOutputStreamPtr output = std::make_shared(live_view); @@ -190,9 +190,9 @@ void StorageLiveView::writeIntoLiveView( std::lock_guard lock(live_view.mutex); mergeable_blocks = live_view.getMergeableBlocks(); - if (!mergeable_blocks || mergeable_blocks->blocks->size() >= context.getGlobalContext().getSettingsRef().max_live_view_insert_blocks_before_refresh) + if (!mergeable_blocks || mergeable_blocks->blocks->size() >= local_context->getGlobalContext()->getSettingsRef().max_live_view_insert_blocks_before_refresh) { - mergeable_blocks = live_view.collectMergeableBlocks(context); + mergeable_blocks = live_view.collectMergeableBlocks(local_context); live_view.setMergeableBlocks(mergeable_blocks); from = live_view.blocksToPipes(mergeable_blocks->blocks, mergeable_blocks->sample_block); is_block_processed = true; @@ -216,9 +216,9 @@ void StorageLiveView::writeIntoLiveView( blocks_id_global, parent_metadata->getColumns(), std::move(pipes), QueryProcessingStage::FetchColumns); }; - TemporaryTableHolder blocks_storage(context, creator); + TemporaryTableHolder blocks_storage(local_context, creator); - InterpreterSelectQuery select_block(mergeable_query, context, blocks_storage.getTable(), blocks_storage.getTable()->getInMemoryMetadataPtr(), + InterpreterSelectQuery select_block(mergeable_query, local_context, blocks_storage.getTable(), blocks_storage.getTable()->getInMemoryMetadataPtr(), QueryProcessingStage::WithMergeableState); auto data_mergeable_stream = std::make_shared( @@ -246,13 +246,13 @@ void StorageLiveView::writeIntoLiveView( StorageLiveView::StorageLiveView( const StorageID & table_id_, - 
Context & local_context, + ContextPtr context_, const ASTCreateQuery & query, const ColumnsDescription & columns_) : IStorage(table_id_) - , global_context(local_context.getGlobalContext()) + , WithContext(context_->getGlobalContext()) { - live_view_context = std::make_unique(global_context); + live_view_context = Context::createCopy(getContext()); live_view_context->makeQueryContext(); log = &Poco::Logger::get("StorageLiveView (" + table_id_.database_name + "." + table_id_.table_name + ")"); @@ -271,7 +271,7 @@ StorageLiveView::StorageLiveView( inner_query = query.select->list_of_selects->children.at(0); auto inner_query_tmp = inner_query->clone(); - select_table_id = extractDependentTable(inner_query_tmp, global_context, table_id_.table_name, inner_subquery); + select_table_id = extractDependentTable(inner_query_tmp, getContext(), table_id_.table_name, inner_subquery); DatabaseCatalog::instance().addDependency(select_table_id, table_id_); @@ -291,7 +291,7 @@ StorageLiveView::StorageLiveView( blocks_metadata_ptr = std::make_shared(); active_ptr = std::make_shared(true); - periodic_refresh_task = global_context.getSchedulePool().createTask("LieViewPeriodicRefreshTask", [this]{ periodicRefreshTaskFunc(); }); + periodic_refresh_task = getContext()->getSchedulePool().createTask("LieViewPeriodicRefreshTask", [this]{ periodicRefreshTaskFunc(); }); periodic_refresh_task->deactivate(); } @@ -301,7 +301,7 @@ Block StorageLiveView::getHeader() const if (!sample_block) { - sample_block = InterpreterSelectQuery(inner_query->clone(), *live_view_context, SelectQueryOptions(QueryProcessingStage::Complete)).getSampleBlock(); + sample_block = InterpreterSelectQuery(inner_query->clone(), live_view_context, SelectQueryOptions(QueryProcessingStage::Complete)).getSampleBlock(); sample_block.insert({DataTypeUInt64().createColumnConst( sample_block.rows(), 0)->convertToFullColumnIfConst(), std::make_shared(), @@ -318,7 +318,7 @@ Block StorageLiveView::getHeader() const StoragePtr StorageLiveView::getParentStorage() const { - return DatabaseCatalog::instance().getTable(select_table_id, global_context); + return DatabaseCatalog::instance().getTable(select_table_id, getContext()); } ASTPtr StorageLiveView::getInnerBlocksQuery() @@ -330,9 +330,9 @@ ASTPtr StorageLiveView::getInnerBlocksQuery() /// Rewrite inner query with right aliases for JOIN. 
/// It cannot be done in constructor or startup() because InterpreterSelectQuery may access table, /// which is not loaded yet during server startup, so we do it lazily - InterpreterSelectQuery(inner_blocks_query, *live_view_context, SelectQueryOptions().modify().analyze()); // NOLINT + InterpreterSelectQuery(inner_blocks_query, live_view_context, SelectQueryOptions().modify().analyze()); // NOLINT auto table_id = getStorageID(); - extractDependentTable(inner_blocks_query, global_context, table_id.table_name, inner_subquery); + extractDependentTable(inner_blocks_query, getContext(), table_id.table_name, inner_subquery); } return inner_blocks_query->clone(); } @@ -350,7 +350,7 @@ bool StorageLiveView::getNewBlocks() /// called before writeIntoLiveView function is called which can lead to /// the same block added twice to the mergeable_blocks leading to /// inserted data to be duplicated - auto new_mergeable_blocks = collectMergeableBlocks(*live_view_context); + auto new_mergeable_blocks = collectMergeableBlocks(live_view_context); Pipes from = blocksToPipes(new_mergeable_blocks->blocks, new_mergeable_blocks->sample_block); BlockInputStreamPtr data = completeQuery(std::move(from)); @@ -500,7 +500,7 @@ Pipe StorageLiveView::read( const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -525,7 +525,7 @@ Pipe StorageLiveView::read( BlockInputStreams StorageLiveView::watch( const Names & /*column_names*/, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum & processed_stage, size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -546,12 +546,12 @@ BlockInputStreams StorageLiveView::watch( reader = std::make_shared( std::static_pointer_cast(shared_from_this()), blocks_ptr, blocks_metadata_ptr, active_ptr, has_limit, limit, - context.getSettingsRef().live_view_heartbeat_interval.totalSeconds()); + local_context->getSettingsRef().live_view_heartbeat_interval.totalSeconds()); else reader = std::make_shared( std::static_pointer_cast(shared_from_this()), blocks_ptr, blocks_metadata_ptr, active_ptr, has_limit, limit, - context.getSettingsRef().live_view_heartbeat_interval.totalSeconds()); + local_context->getSettingsRef().live_view_heartbeat_interval.totalSeconds()); { std::lock_guard lock(mutex); @@ -578,10 +578,12 @@ void registerStorageLiveView(StorageFactory & factory) { factory.registerStorage("LiveView", [](const StorageFactory::Arguments & args) { - if (!args.attach && !args.local_context.getSettingsRef().allow_experimental_live_view) - throw Exception("Experimental LIVE VIEW feature is not enabled (the setting 'allow_experimental_live_view')", ErrorCodes::SUPPORT_IS_DISABLED); + if (!args.attach && !args.getLocalContext()->getSettingsRef().allow_experimental_live_view) + throw Exception( + "Experimental LIVE VIEW feature is not enabled (the setting 'allow_experimental_live_view')", + ErrorCodes::SUPPORT_IS_DISABLED); - return StorageLiveView::create(args.table_id, args.local_context, args.query, args.columns); + return StorageLiveView::create(args.table_id, args.getLocalContext(), args.query, args.columns); }); } diff --git a/src/Storages/LiveView/StorageLiveView.h b/src/Storages/LiveView/StorageLiveView.h index e30a8f51705..df09316f333 100644 --- a/src/Storages/LiveView/StorageLiveView.h +++ 
b/src/Storages/LiveView/StorageLiveView.h @@ -49,7 +49,7 @@ class Pipe; using Pipes = std::vector; -class StorageLiveView final : public ext::shared_ptr_helper, public IStorage +class StorageLiveView final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; friend class LiveViewBlockInputStream; @@ -142,13 +142,13 @@ public: void startup() override; void shutdown() override; - void refresh(const bool grab_lock = true); + void refresh(bool grab_lock = true); Pipe read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -156,7 +156,7 @@ public: BlockInputStreams watch( const Names & column_names, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum & processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -165,7 +165,7 @@ public: MergeableBlocksPtr getMergeableBlocks() { return mergeable_blocks; } /// Collect mergeable blocks and their sample. Must be called holding mutex - MergeableBlocksPtr collectMergeableBlocks(const Context & context); + MergeableBlocksPtr collectMergeableBlocks(ContextPtr context); /// Complete query using input streams from mergeable blocks BlockInputStreamPtr completeQuery(Pipes pipes); @@ -183,7 +183,7 @@ public: static void writeIntoLiveView( StorageLiveView & live_view, const Block & block, - const Context & context); + ContextPtr context); private: /// TODO move to common struct SelectQueryDescription @@ -191,8 +191,7 @@ private: ASTPtr inner_query; /// stored query : SELECT * FROM ( SELECT a FROM A) ASTPtr inner_subquery; /// stored query's innermost subquery if any ASTPtr inner_blocks_query; /// query over the mergeable blocks to produce final result - Context & global_context; - std::unique_ptr live_view_context; + ContextPtr live_view_context; Poco::Logger * log; @@ -231,7 +230,7 @@ private: StorageLiveView( const StorageID & table_id_, - Context & local_context, + ContextPtr context_, const ASTCreateQuery & query, const ColumnsDescription & columns ); diff --git a/src/Storages/LiveView/TemporaryLiveViewCleaner.cpp b/src/Storages/LiveView/TemporaryLiveViewCleaner.cpp index 143e7460cc3..7294b82f10d 100644 --- a/src/Storages/LiveView/TemporaryLiveViewCleaner.cpp +++ b/src/Storages/LiveView/TemporaryLiveViewCleaner.cpp @@ -1,8 +1,9 @@ #include -#include + #include #include #include +#include namespace DB @@ -15,7 +16,7 @@ namespace ErrorCodes namespace { - void executeDropQuery(const StorageID & storage_id, Context & context) + void executeDropQuery(const StorageID & storage_id, ContextPtr context) { if (!DatabaseCatalog::instance().isTableExist(storage_id, context)) return; @@ -41,45 +42,20 @@ namespace std::unique_ptr TemporaryLiveViewCleaner::the_instance; -void TemporaryLiveViewCleaner::init(Context & global_context_) +void TemporaryLiveViewCleaner::init(ContextPtr global_context_) { if (the_instance) throw Exception("TemporaryLiveViewCleaner already initialized", ErrorCodes::LOGICAL_ERROR); the_instance.reset(new TemporaryLiveViewCleaner(global_context_)); } -void TemporaryLiveViewCleaner::startupIfNecessary() +void TemporaryLiveViewCleaner::startup() { + background_thread_can_start = true; + std::lock_guard lock{mutex}; - if (background_thread_should_exit) - return; if (!views.empty()) - startupIfNecessaryImpl(lock); - else - 
can_start_background_thread = true; -} - -void TemporaryLiveViewCleaner::startupIfNecessaryImpl(const std::lock_guard &) -{ - /// If views.empty() the background thread isn't running or it's going to stop right now. - /// If can_start_background_thread is false, then the thread has not been started previously. - bool background_thread_is_running; - if (can_start_background_thread) - { - background_thread_is_running = !views.empty(); - } - else - { - can_start_background_thread = true; - background_thread_is_running = false; - } - - if (!background_thread_is_running) - { - if (background_thread.joinable()) - background_thread.join(); - background_thread = ThreadFromGlobalPool{&TemporaryLiveViewCleaner::backgroundThreadFunc, this}; - } + startBackgroundThread(); } void TemporaryLiveViewCleaner::shutdown() @@ -87,13 +63,10 @@ void TemporaryLiveViewCleaner::shutdown() the_instance.reset(); } - -TemporaryLiveViewCleaner::TemporaryLiveViewCleaner(Context & global_context_) - : global_context(global_context_) +TemporaryLiveViewCleaner::TemporaryLiveViewCleaner(ContextPtr global_context_) : WithContext(global_context_) { } - TemporaryLiveViewCleaner::~TemporaryLiveViewCleaner() { stopBackgroundThread(); @@ -108,27 +81,29 @@ void TemporaryLiveViewCleaner::addView(const std::shared_ptr & auto current_time = std::chrono::system_clock::now(); auto time_of_next_check = current_time + view->getTimeout(); - std::lock_guard lock{mutex}; - if (background_thread_should_exit) - return; - - if (can_start_background_thread) - startupIfNecessaryImpl(lock); - /// Keep the vector `views` sorted by time of next check. StorageAndTimeOfCheck storage_and_time_of_check{view, time_of_next_check}; + std::lock_guard lock{mutex}; views.insert(std::upper_bound(views.begin(), views.end(), storage_and_time_of_check), storage_and_time_of_check); - background_thread_wake_up.notify_one(); + if (background_thread_can_start) + { + startBackgroundThread(); + background_thread_wake_up.notify_one(); + } } void TemporaryLiveViewCleaner::backgroundThreadFunc() { std::unique_lock lock{mutex}; - while (!background_thread_should_exit && !views.empty()) + while (!background_thread_should_exit) { - background_thread_wake_up.wait_until(lock, views.front().time_of_check); + if (views.empty()) + background_thread_wake_up.wait(lock); + else + background_thread_wake_up.wait_until(lock, views.front().time_of_check); + if (background_thread_should_exit) break; @@ -167,20 +142,24 @@ void TemporaryLiveViewCleaner::backgroundThreadFunc() lock.unlock(); for (const auto & storage_id : storages_to_drop) - executeDropQuery(storage_id, global_context); + executeDropQuery(storage_id, getContext()); lock.lock(); } } +void TemporaryLiveViewCleaner::startBackgroundThread() +{ + if (!background_thread.joinable() && background_thread_can_start && !background_thread_should_exit) + background_thread = ThreadFromGlobalPool{&TemporaryLiveViewCleaner::backgroundThreadFunc, this}; +} + void TemporaryLiveViewCleaner::stopBackgroundThread() { + background_thread_should_exit = true; + background_thread_wake_up.notify_one(); if (background_thread.joinable()) - { - background_thread_should_exit = true; - background_thread_wake_up.notify_one(); background_thread.join(); - } } } diff --git a/src/Storages/LiveView/TemporaryLiveViewCleaner.h b/src/Storages/LiveView/TemporaryLiveViewCleaner.h index 8d57aa9fbfa..9b31bf9c999 100644 --- a/src/Storages/LiveView/TemporaryLiveViewCleaner.h +++ b/src/Storages/LiveView/TemporaryLiveViewCleaner.h @@ -1,17 +1,20 @@ #pragma once 
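The cleaner refactor above collapses the conditional startup logic into a lazy `startBackgroundThread()` guarded by an atomic `background_thread_can_start` flag, with the worker loop switching between `wait` (no views) and `wait_until` (next deadline). A condensed, self-contained sketch of that pattern, using std::thread and illustrative names instead of the ThreadFromGlobalPool-based code:

// Simplified sketch of the lazy-start / cooperative-stop background worker.
#include <algorithm>
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

class PeriodicWorker
{
public:
    ~PeriodicWorker() { stop(); }

    void addItem(std::chrono::system_clock::time_point deadline)
    {
        std::lock_guard lock{mutex};
        items.push_back(deadline);
        if (can_start)
        {
            startThread();          // no-op if the thread is already running
            wake_up.notify_one();
        }
    }

    void startup()
    {
        can_start = true;
        std::lock_guard lock{mutex};
        if (!items.empty())
            startThread();
    }

    void stop()
    {
        should_exit = true;
        wake_up.notify_one();
        if (worker.joinable())
            worker.join();
    }

private:
    void startThread()
    {
        if (!worker.joinable() && can_start && !should_exit)
            worker = std::thread([this] { run(); });
    }

    void run()
    {
        std::unique_lock lock{mutex};
        while (!should_exit)
        {
            if (items.empty())
                wake_up.wait(lock);   // sleep until new work arrives
            else
                wake_up.wait_until(lock, *std::min_element(items.begin(), items.end()));

            if (should_exit)
                break;

            // Drop everything whose deadline has passed (stand-in for dropping expired views).
            auto now = std::chrono::system_clock::now();
            items.erase(std::remove_if(items.begin(), items.end(),
                            [&](const auto & t) { return t <= now; }),
                        items.end());
        }
    }

    std::mutex mutex;
    std::condition_variable wake_up;
    std::vector<std::chrono::system_clock::time_point> items;
    std::thread worker;
    std::atomic<bool> can_start{false};
    std::atomic<bool> should_exit{false};
};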
+#include #include + #include namespace DB { + class StorageLiveView; struct StorageID; /// This class removes temporary live views in the background thread when it's possible. /// There should only a single instance of this class. -class TemporaryLiveViewCleaner +class TemporaryLiveViewCleaner : WithContext { public: static TemporaryLiveViewCleaner & instance() { return *the_instance; } @@ -20,19 +23,19 @@ public: void addView(const std::shared_ptr & view); /// Should be called once. - static void init(Context & global_context_); + static void init(ContextPtr global_context_); static void shutdown(); - void startupIfNecessary(); - void startupIfNecessaryImpl(const std::lock_guard &); + void startup(); private: friend std::unique_ptr::deleter_type; - TemporaryLiveViewCleaner(Context & global_context_); + TemporaryLiveViewCleaner(ContextPtr global_context_); ~TemporaryLiveViewCleaner(); void backgroundThreadFunc(); + void startBackgroundThread(); void stopBackgroundThread(); struct StorageAndTimeOfCheck @@ -43,11 +46,10 @@ private: }; static std::unique_ptr the_instance; - Context & global_context; std::mutex mutex; std::vector views; ThreadFromGlobalPool background_thread; - bool can_start_background_thread = false; + std::atomic background_thread_can_start = false; std::atomic background_thread_should_exit = false; std::condition_variable background_thread_wake_up; }; diff --git a/src/Storages/MergeTree/BackgroundJobsExecutor.cpp b/src/Storages/MergeTree/BackgroundJobsExecutor.cpp index 8e5a0e8a3b8..ae06721b43d 100644 --- a/src/Storages/MergeTree/BackgroundJobsExecutor.cpp +++ b/src/Storages/MergeTree/BackgroundJobsExecutor.cpp @@ -16,10 +16,10 @@ namespace DB { IBackgroundJobExecutor::IBackgroundJobExecutor( - Context & global_context_, + ContextPtr global_context_, const BackgroundTaskSchedulingSettings & sleep_settings_, const std::vector & pools_configs_) - : global_context(global_context_) + : WithContext(global_context_) , sleep_settings(sleep_settings_) , rng(randomSeed()) { @@ -155,7 +155,7 @@ void IBackgroundJobExecutor::start() std::lock_guard lock(scheduling_task_mutex); if (!scheduling_task) { - scheduling_task = global_context.getSchedulePool().createTask( + scheduling_task = getContext()->getSchedulePool().createTask( getBackgroundTaskName(), [this]{ jobExecutingTask(); }); } @@ -187,12 +187,12 @@ IBackgroundJobExecutor::~IBackgroundJobExecutor() BackgroundJobsExecutor::BackgroundJobsExecutor( MergeTreeData & data_, - Context & global_context_) + ContextPtr global_context_) : IBackgroundJobExecutor( global_context_, - global_context_.getBackgroundProcessingTaskSchedulingSettings(), - {PoolConfig{PoolType::MERGE_MUTATE, global_context_.getSettingsRef().background_pool_size, CurrentMetrics::BackgroundPoolTask}, - PoolConfig{PoolType::FETCH, global_context_.getSettingsRef().background_fetches_pool_size, CurrentMetrics::BackgroundFetchesPoolTask}}) + global_context_->getBackgroundProcessingTaskSchedulingSettings(), + {PoolConfig{PoolType::MERGE_MUTATE, global_context_->getSettingsRef().background_pool_size, CurrentMetrics::BackgroundPoolTask}, + PoolConfig{PoolType::FETCH, global_context_->getSettingsRef().background_fetches_pool_size, CurrentMetrics::BackgroundFetchesPoolTask}}) , data(data_) { } @@ -209,11 +209,11 @@ std::optional BackgroundJobsExecutor::getBackgroundJob() BackgroundMovesExecutor::BackgroundMovesExecutor( MergeTreeData & data_, - Context & global_context_) + ContextPtr global_context_) : IBackgroundJobExecutor( global_context_, - 
global_context_.getBackgroundMoveTaskSchedulingSettings(), - {PoolConfig{PoolType::MOVE, global_context_.getSettingsRef().background_move_pool_size, CurrentMetrics::BackgroundMovePoolTask}}) + global_context_->getBackgroundMoveTaskSchedulingSettings(), + {PoolConfig{PoolType::MOVE, global_context_->getSettingsRef().background_move_pool_size, CurrentMetrics::BackgroundMovePoolTask}}) , data(data_) { } diff --git a/src/Storages/MergeTree/BackgroundJobsExecutor.h b/src/Storages/MergeTree/BackgroundJobsExecutor.h index da22c752e1b..e9cefc7a6b0 100644 --- a/src/Storages/MergeTree/BackgroundJobsExecutor.h +++ b/src/Storages/MergeTree/BackgroundJobsExecutor.h @@ -50,11 +50,9 @@ struct JobAndPool /// Consists of two important parts: /// 1) Task in background scheduling pool which receives new jobs from storages and put them into required pool. /// 2) One or more ThreadPool objects, which execute background jobs. -class IBackgroundJobExecutor +class IBackgroundJobExecutor : protected WithContext { protected: - Context & global_context; - /// Configuration for single background ThreadPool struct PoolConfig { @@ -106,7 +104,7 @@ public: protected: IBackgroundJobExecutor( - Context & global_context_, + ContextPtr global_context_, const BackgroundTaskSchedulingSettings & sleep_settings_, const std::vector & pools_configs_); @@ -134,7 +132,7 @@ private: public: BackgroundJobsExecutor( MergeTreeData & data_, - Context & global_context_); + ContextPtr global_context_); protected: String getBackgroundTaskName() const override; @@ -150,7 +148,7 @@ private: public: BackgroundMovesExecutor( MergeTreeData & data_, - Context & global_context_); + ContextPtr global_context_); protected: String getBackgroundTaskName() const override; diff --git a/src/Storages/MergeTree/DataPartsExchange.cpp b/src/Storages/MergeTree/DataPartsExchange.cpp index 862a3088f89..205d57f533e 100644 --- a/src/Storages/MergeTree/DataPartsExchange.cpp +++ b/src/Storages/MergeTree/DataPartsExchange.cpp @@ -481,7 +481,7 @@ MergeTreeData::MutableDataPartPtr Fetcher::fetchPart( auto storage_id = data.getStorageID(); String new_part_path = part_type == "InMemory" ? "memory" : data.getFullPathOnDisk(reservation->getDisk()) + part_name + "/"; - auto entry = data.global_context.getReplicatedFetchList().insert( + auto entry = data.getContext()->getReplicatedFetchList().insert( storage_id.getDatabaseName(), storage_id.getTableName(), part_info.partition_id, part_name, new_part_path, replica_path, uri, to_detached, sum_files_size); diff --git a/src/Storages/MergeTree/IMergeTreeDataPart.cpp b/src/Storages/MergeTree/IMergeTreeDataPart.cpp index c79e754f61a..36032f9208f 100644 --- a/src/Storages/MergeTree/IMergeTreeDataPart.cpp +++ b/src/Storages/MergeTree/IMergeTreeDataPart.cpp @@ -609,9 +609,13 @@ void IMergeTreeDataPart::loadIndex() size_t marks_count = index_granularity.getMarksCount(); + Serializations serializations(key_size); + for (size_t j = 0; j < key_size; ++j) + serializations[j] = primary_key.data_types[j]->getDefaultSerialization(); + for (size_t i = 0; i < marks_count; ++i) //-V756 for (size_t j = 0; j < key_size; ++j) - primary_key.data_types[j]->getDefaultSerialization()->deserializeBinary(*loaded_index[j], *index_file); + serializations[j]->deserializeBinary(*loaded_index[j], *index_file); for (size_t i = 0; i < key_size; ++i) { @@ -1103,13 +1107,13 @@ void IMergeTreeDataPart::remove(bool keep_s3) const { /// Remove each expected file in directory, then remove directory itself. 
- #if !__clang__ + #if !defined(__clang__) # pragma GCC diagnostic push # pragma GCC diagnostic ignored "-Wunused-variable" #endif for (const auto & [file, _] : checksums.files) volume->getDisk()->removeSharedFile(to + "/" + file, keep_s3); - #if !__clang__ + #if !defined(__clang__) # pragma GCC diagnostic pop #endif @@ -1352,6 +1356,24 @@ String IMergeTreeDataPart::getUniqueId() const return id; } + +String IMergeTreeDataPart::getZeroLevelPartBlockID() const +{ + if (info.level != 0) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Trying to get block id for non zero level part {}", name); + + SipHash hash; + checksums.computeTotalChecksumDataOnly(hash); + union + { + char bytes[16]; + UInt64 words[2]; + } hash_value; + hash.get128(hash_value.bytes); + + return info.partition_id + "_" + toString(hash_value.words[0]) + "_" + toString(hash_value.words[1]); +} + bool isCompactPart(const MergeTreeDataPartPtr & data_part) { return (data_part && data_part->getType() == MergeTreeDataPartType::COMPACT); @@ -1368,4 +1390,3 @@ bool isInMemoryPart(const MergeTreeDataPartPtr & data_part) } } - diff --git a/src/Storages/MergeTree/IMergeTreeDataPart.h b/src/Storages/MergeTree/IMergeTreeDataPart.h index 03f6564788a..4e531826c98 100644 --- a/src/Storages/MergeTree/IMergeTreeDataPart.h +++ b/src/Storages/MergeTree/IMergeTreeDataPart.h @@ -164,6 +164,9 @@ public: bool isEmpty() const { return rows_count == 0; } + /// Compute part block id for zero level part. Otherwise throws an exception. + String getZeroLevelPartBlockID() const; + const MergeTreeData & storage; String name; diff --git a/src/Storages/MergeTree/IMergeTreeReader.cpp b/src/Storages/MergeTree/IMergeTreeReader.cpp index 53ab4713267..52d3e7ca9ab 100644 --- a/src/Storages/MergeTree/IMergeTreeReader.cpp +++ b/src/Storages/MergeTree/IMergeTreeReader.cpp @@ -187,12 +187,12 @@ void IMergeTreeReader::evaluateMissingDefaults(Block additional_columns, Columns } auto dag = DB::evaluateMissingDefaults( - additional_columns, columns, metadata_snapshot->getColumns(), storage.global_context); + additional_columns, columns, metadata_snapshot->getColumns(), storage.getContext()); if (dag) { auto actions = std::make_shared< ExpressionActions>(std::move(dag), - ExpressionActionsSettings::fromSettings(storage.global_context.getSettingsRef())); + ExpressionActionsSettings::fromSettings(storage.getContext()->getSettingsRef())); actions->execute(additional_columns); } @@ -270,7 +270,7 @@ void IMergeTreeReader::performRequiredConversions(Columns & res_columns) copy_block.insert({res_columns[pos], getColumnFromPart(*name_and_type).type, name_and_type->name}); } - DB::performRequiredConversions(copy_block, columns, storage.global_context); + DB::performRequiredConversions(copy_block, columns, storage.getContext()); /// Move columns from block. 
name_and_type = columns.begin(); diff --git a/src/Storages/MergeTree/KeyCondition.cpp b/src/Storages/MergeTree/KeyCondition.cpp index 6833d2e2fd4..da36b7008bd 100644 --- a/src/Storages/MergeTree/KeyCondition.cpp +++ b/src/Storages/MergeTree/KeyCondition.cpp @@ -309,11 +309,11 @@ static const std::map inverse_relations = { bool isLogicalOperator(const String & func_name) { - return (func_name == "and" || func_name == "or" || func_name == "not"); + return (func_name == "and" || func_name == "or" || func_name == "not" || func_name == "indexHint"); } /// The node can be one of: -/// - Logical operator (AND, OR, NOT) +/// - Logical operator (AND, OR, NOT and indexHint() - logical NOOP) /// - An "atom" (relational operator, constant, expression) /// - A logical constant expression /// - Any other function @@ -330,7 +330,8 @@ ASTPtr cloneASTWithInversionPushDown(const ASTPtr node, const bool need_inversio const auto result_node = makeASTFunction(func->name); - if (need_inversion) + /// indexHint() is a special case - logical NOOP function + if (result_node->name != "indexHint" && need_inversion) { result_node->name = (result_node->name == "and") ? "or" : "and"; } @@ -370,7 +371,7 @@ inline bool Range::less(const Field & lhs, const Field & rhs) { return applyVisi * For index to work when something like "WHERE Date = toDate(now())" is written. */ Block KeyCondition::getBlockWithConstants( - const ASTPtr & query, const TreeRewriterResultPtr & syntax_analyzer_result, const Context & context) + const ASTPtr & query, const TreeRewriterResultPtr & syntax_analyzer_result, ContextPtr context) { Block result { @@ -387,7 +388,7 @@ Block KeyCondition::getBlockWithConstants( KeyCondition::KeyCondition( const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const Names & key_column_names, const ExpressionActionsPtr & key_expr_, bool single_point_, @@ -556,7 +557,7 @@ static FieldRef applyFunction(const FunctionBasePtr & func, const DataTypePtr & return {field.columns, field.row_idx, result_idx}; } -void KeyCondition::traverseAST(const ASTPtr & node, const Context & context, Block & block_with_constants) +void KeyCondition::traverseAST(const ASTPtr & node, ContextPtr context, Block & block_with_constants) { RPNElement element; @@ -786,7 +787,7 @@ bool KeyCondition::canConstantBeWrappedByFunctions( bool KeyCondition::tryPrepareSetIndex( const ASTs & args, - const Context & context, + ContextPtr context, RPNElement & out, size_t & out_key_column_num) { @@ -938,6 +939,9 @@ public: return func->getMonotonicityForRange(type, left, right); } + Kind getKind() const { return kind; } + const ColumnWithTypeAndName & getConstArg() const { return const_arg; } + private: FunctionBasePtr func; ColumnWithTypeAndName const_arg; @@ -947,7 +951,7 @@ private: bool KeyCondition::isKeyPossiblyWrappedByMonotonicFunctions( const ASTPtr & node, - const Context & context, + ContextPtr context, size_t & out_key_column_num, DataTypePtr & out_key_res_column_type, MonotonicFunctionsChain & out_functions_chain) @@ -962,6 +966,8 @@ bool KeyCondition::isKeyPossiblyWrappedByMonotonicFunctions( { const auto & args = (*it)->arguments->children; auto func_builder = FunctionFactory::instance().tryGet((*it)->name, context); + if (!func_builder) + return false; ColumnsWithTypeAndName arguments; ColumnWithTypeAndName const_arg; FunctionWithOptionalConstArg::Kind kind = FunctionWithOptionalConstArg::Kind::NO_CONST; @@ -1075,7 +1081,7 @@ static void castValueToType(const DataTypePtr & desired_type, Field & src_value, } 
-bool KeyCondition::tryParseAtomFromAST(const ASTPtr & node, const Context & context, Block & block_with_constants, RPNElement & out) +bool KeyCondition::tryParseAtomFromAST(const ASTPtr & node, ContextPtr context, Block & block_with_constants, RPNElement & out) { /** Functions < > = != <= >= in `notIn`, where one argument is a constant, and the other is one of columns of key, * or itself, wrapped in a chain of possibly-monotonic functions, @@ -1274,6 +1280,8 @@ bool KeyCondition::tryParseAtomFromAST(const ASTPtr & node, const Context & cont bool KeyCondition::tryParseLogicalOperatorFromAST(const ASTFunction * func, RPNElement & out) { /// Functions AND, OR, NOT. + /// Also a special function `indexHint` - works as if instead of calling a function there are just parentheses + /// (or, the same thing - calling the function `and` from one argument). const ASTs & args = func->arguments->children; if (func->name == "not") @@ -1285,7 +1293,7 @@ bool KeyCondition::tryParseLogicalOperatorFromAST(const ASTFunction * func, RPNE } else { - if (func->name == "and") + if (func->name == "and" || func->name == "indexHint") out.function = RPNElement::FUNCTION_AND; else if (func->name == "or") out.function = RPNElement::FUNCTION_OR; @@ -1308,6 +1316,235 @@ String KeyCondition::toString() const return res; } +KeyCondition::Description KeyCondition::getDescription() const +{ + /// This code may seem to be too difficult. + /// Here we want to convert RPN back to tree, and also simplify some logical expressions like `and(x, true) -> x`. + Description description; + + /// That's a binary tree. Explicit. + /// Build and optimize it simultaneously. + struct Node + { + enum class Type + { + /// Leaf, which is RPNElement. + Leaf, + /// Leafs, which are logical constants. + True, + False, + /// Binary operators. + And, + Or, + }; + + Type type; + + /// Only for Leaf + const RPNElement * element = nullptr; + /// This means that logical NOT is applied to leaf. + bool negate = false; + + std::unique_ptr left = nullptr; + std::unique_ptr right = nullptr; + }; + + /// The algorithm is the same as in KeyCondition::checkInHyperrectangle + /// We build a pair of trees on stack. For checking if key condition may be true, and if it may be false. + /// We need only `can_be_true` in result. + struct Frame + { + std::unique_ptr can_be_true; + std::unique_ptr can_be_false; + }; + + /// Combine two subtrees using logical operator. + auto combine = [](std::unique_ptr left, std::unique_ptr right, Node::Type type) + { + /// Simplify operators with for one constant condition. 
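The new `KeyCondition::getDescription()` rebuilds a tree from the RPN while folding constant operands (`and(x, true) -> x`, `or(x, false) -> x`, and so on). The standalone sketch below shows the same folding idea on plain string tokens; it is only an illustration, not the ClickHouse types:

// Fold an RPN of boolean leaves into a tree, short-circuiting AND/OR with constants.
#include <cassert>
#include <memory>
#include <string>
#include <utility>
#include <vector>

struct Node
{
    enum class Type { Leaf, True, False, And, Or };
    Type type;
    std::string name;                 // only for Leaf
    std::unique_ptr<Node> left, right;
};

using NodePtr = std::unique_ptr<Node>;

NodePtr combine(NodePtr left, NodePtr right, Node::Type type)
{
    if (type == Node::Type::And)
    {
        if (left->type == Node::Type::False) return left;    // false AND x -> false
        if (right->type == Node::Type::False) return right;  // x AND false -> false
        if (left->type == Node::Type::True) return right;    // true AND x -> x
        if (right->type == Node::Type::True) return left;    // x AND true -> x
    }
    else if (type == Node::Type::Or)
    {
        if (left->type == Node::Type::False) return right;   // false OR x -> x
        if (right->type == Node::Type::False) return left;   // x OR false -> x
        if (left->type == Node::Type::True) return left;     // true OR x -> true
        if (right->type == Node::Type::True) return right;   // x OR true -> true
    }
    auto node = std::make_unique<Node>();
    node->type = type;
    node->left = std::move(left);
    node->right = std::move(right);
    return node;
}

// Tokens: leaf names, "true"/"false", and the operators "and"/"or" in reverse polish order.
NodePtr buildTree(const std::vector<std::string> & rpn)
{
    std::vector<NodePtr> stack;
    for (const auto & token : rpn)
    {
        if (token == "and" || token == "or")
        {
            assert(stack.size() >= 2);
            auto rhs = std::move(stack.back()); stack.pop_back();
            auto lhs = std::move(stack.back()); stack.pop_back();
            stack.push_back(combine(std::move(lhs), std::move(rhs),
                                    token == "and" ? Node::Type::And : Node::Type::Or));
        }
        else
        {
            auto node = std::make_unique<Node>();
            node->type = token == "true" ? Node::Type::True
                       : token == "false" ? Node::Type::False
                       : Node::Type::Leaf;
            if (node->type == Node::Type::Leaf)
                node->name = token;
            stack.push_back(std::move(node));
        }
    }
    assert(stack.size() == 1);
    return std::move(stack.back());
}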
+ + if (type == Node::Type::And) + { + /// false AND right + if (left->type == Node::Type::False) + return left; + + /// left AND false + if (right->type == Node::Type::False) + return right; + + /// true AND right + if (left->type == Node::Type::True) + return right; + + /// left AND true + if (right->type == Node::Type::True) + return left; + } + + if (type == Node::Type::Or) + { + /// false OR right + if (left->type == Node::Type::False) + return right; + + /// left OR false + if (right->type == Node::Type::False) + return left; + + /// true OR right + if (left->type == Node::Type::True) + return left; + + /// left OR true + if (right->type == Node::Type::True) + return right; + } + + return std::make_unique(Node{ + .type = type, + .left = std::move(left), + .right = std::move(right) + }); + }; + + std::vector rpn_stack; + for (const auto & element : rpn) + { + if (element.function == RPNElement::FUNCTION_UNKNOWN) + { + auto can_be_true = std::make_unique(Node{.type = Node::Type::True}); + auto can_be_false = std::make_unique(Node{.type = Node::Type::True}); + rpn_stack.emplace_back(Frame{.can_be_true = std::move(can_be_true), .can_be_false = std::move(can_be_false)}); + } + else if ( + element.function == RPNElement::FUNCTION_IN_RANGE + || element.function == RPNElement::FUNCTION_NOT_IN_RANGE + || element.function == RPNElement::FUNCTION_IN_SET + || element.function == RPNElement::FUNCTION_NOT_IN_SET) + { + auto can_be_true = std::make_unique(Node{.type = Node::Type::Leaf, .element = &element, .negate = false}); + auto can_be_false = std::make_unique(Node{.type = Node::Type::Leaf, .element = &element, .negate = true}); + rpn_stack.emplace_back(Frame{.can_be_true = std::move(can_be_true), .can_be_false = std::move(can_be_false)}); + } + else if (element.function == RPNElement::FUNCTION_NOT) + { + assert(!rpn_stack.empty()); + + std::swap(rpn_stack.back().can_be_true, rpn_stack.back().can_be_false); + } + else if (element.function == RPNElement::FUNCTION_AND) + { + assert(!rpn_stack.empty()); + auto arg1 = std::move(rpn_stack.back()); + + rpn_stack.pop_back(); + + assert(!rpn_stack.empty()); + auto arg2 = std::move(rpn_stack.back()); + + Frame frame; + frame.can_be_true = combine(std::move(arg1.can_be_true), std::move(arg2.can_be_true), Node::Type::And); + frame.can_be_false = combine(std::move(arg1.can_be_false), std::move(arg2.can_be_false), Node::Type::Or); + + rpn_stack.back() = std::move(frame); + } + else if (element.function == RPNElement::FUNCTION_OR) + { + assert(!rpn_stack.empty()); + auto arg1 = std::move(rpn_stack.back()); + + rpn_stack.pop_back(); + + assert(!rpn_stack.empty()); + auto arg2 = std::move(rpn_stack.back()); + + Frame frame; + frame.can_be_true = combine(std::move(arg1.can_be_true), std::move(arg2.can_be_true), Node::Type::Or); + frame.can_be_false = combine(std::move(arg1.can_be_false), std::move(arg2.can_be_false), Node::Type::And); + + rpn_stack.back() = std::move(frame); + } + else if (element.function == RPNElement::ALWAYS_FALSE) + { + auto can_be_true = std::make_unique(Node{.type = Node::Type::False}); + auto can_be_false = std::make_unique(Node{.type = Node::Type::True}); + + rpn_stack.emplace_back(Frame{.can_be_true = std::move(can_be_true), .can_be_false = std::move(can_be_false)}); + } + else if (element.function == RPNElement::ALWAYS_TRUE) + { + auto can_be_true = std::make_unique(Node{.type = Node::Type::True}); + auto can_be_false = std::make_unique(Node{.type = Node::Type::False}); + rpn_stack.emplace_back(Frame{.can_be_true = 
std::move(can_be_true), .can_be_false = std::move(can_be_false)}); + } + else + throw Exception("Unexpected function type in KeyCondition::RPNElement", ErrorCodes::LOGICAL_ERROR); + } + + if (rpn_stack.size() != 1) + throw Exception("Unexpected stack size in KeyCondition::checkInRange", ErrorCodes::LOGICAL_ERROR); + + std::vector key_names(key_columns.size()); + std::vector is_key_used(key_columns.size(), false); + + for (const auto & key : key_columns) + key_names[key.second] = key.first; + + WriteBufferFromOwnString buf; + + std::function describe; + describe = [&describe, &key_names, &is_key_used, &buf](const Node * node) + { + switch (node->type) + { + case Node::Type::Leaf: + { + is_key_used[node->element->key_column] = true; + + /// Note: for condition with double negation, like `not(x not in set)`, + /// we can replace it to `x in set` here. + /// But I won't do it, because `cloneASTWithInversionPushDown` already push down `not`. + /// So, this seem to be impossible for `can_be_true` tree. + if (node->negate) + buf << "not("; + buf << node->element->toString(key_names[node->element->key_column], true); + if (node->negate) + buf << ")"; + break; + } + case Node::Type::True: + buf << "true"; + break; + case Node::Type::False: + buf << "false"; + break; + case Node::Type::And: + buf << "and("; + describe(node->left.get()); + buf << ", "; + describe(node->right.get()); + buf << ")"; + break; + case Node::Type::Or: + buf << "or("; + describe(node->left.get()); + buf << ", "; + describe(node->right.get()); + buf << ")"; + break; + } + }; + + describe(rpn_stack.front().can_be_true.get()); + description.condition = std::move(buf.str()); + + for (size_t i = 0; i < key_names.size(); ++i) + if (is_key_used[i]) + description.used_keys.emplace_back(key_names[i]); + + return description; +} /** Index is the value of key every `index_granularity` rows. * This value is called a "mark". That is, the index consists of marks. @@ -1326,11 +1563,12 @@ String KeyCondition::toString() const * The set of all possible tuples can be considered as an n-dimensional space, where n is the size of the tuple. * A range of tuples specifies some subset of this space. * - * Hyperrectangles (you can also find the term "rail") - * will be the subrange of an n-dimensional space that is a direct product of one-dimensional ranges. - * In this case, the one-dimensional range can be: a period, a segment, an interval, a half-interval, unlimited on the left, unlimited on the right ... + * Hyperrectangles will be the subrange of an n-dimensional space that is a direct product of one-dimensional ranges. + * In this case, the one-dimensional range can be: + * a point, a segment, an open interval, a half-open interval; + * unlimited on the left, unlimited on the right ... * - * The range of tuples can always be represented as a combination of hyperrectangles. + * The range of tuples can always be represented as a combination (union) of hyperrectangles. * For example, the range [ x1 y1 .. x2 y2 ] given x1 != x2 is equal to the union of the following three hyperrectangles: * [x1] x [y1 .. +inf) * (x1 .. x2) x (-inf .. 
+inf) @@ -1732,18 +1970,38 @@ bool KeyCondition::mayBeTrueAfter( return checkInRange(used_key_size, left_key, nullptr, data_types, false, BoolMask::consider_only_can_be_true).can_be_true; } - -String KeyCondition::RPNElement::toString() const +String KeyCondition::RPNElement::toString() const { return toString("column " + std::to_string(key_column), false); } +String KeyCondition::RPNElement::toString(const std::string_view & column_name, bool print_constants) const { - auto print_wrapped_column = [this](WriteBuffer & buf) + auto print_wrapped_column = [this, &column_name, print_constants](WriteBuffer & buf) { for (auto it = monotonic_functions_chain.rbegin(); it != monotonic_functions_chain.rend(); ++it) + { buf << (*it)->getName() << "("; + if (print_constants) + { + if (const auto * func = typeid_cast(it->get())) + { + if (func->getKind() == FunctionWithOptionalConstArg::Kind::LEFT_CONST) + buf << applyVisitor(FieldVisitorToString(), (*func->getConstArg().column)[0]) << ", "; + } + } + } - buf << "column " << key_column; + buf << column_name; for (auto it = monotonic_functions_chain.rbegin(); it != monotonic_functions_chain.rend(); ++it) + { + if (print_constants) + { + if (const auto * func = typeid_cast(it->get())) + { + if (func->getKind() == FunctionWithOptionalConstArg::Kind::RIGHT_CONST) + buf << ", " << applyVisitor(FieldVisitorToString(), (*func->getConstArg().column)[0]); + } + } buf << ")"; + } }; WriteBufferFromOwnString buf; diff --git a/src/Storages/MergeTree/KeyCondition.h b/src/Storages/MergeTree/KeyCondition.h index b8167f406bd..bd51769ad1f 100644 --- a/src/Storages/MergeTree/KeyCondition.h +++ b/src/Storages/MergeTree/KeyCondition.h @@ -229,7 +229,7 @@ public: /// Does not take into account the SAMPLE section. all_columns - the set of all columns of the table. KeyCondition( const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const Names & key_column_names, const ExpressionActionsPtr & key_expr, bool single_point_ = false, @@ -293,6 +293,16 @@ public: String toString() const; + /// Condition description for EXPLAIN query. + struct Description + { + /// Which columns from PK were used, in PK order. + std::vector used_keys; + /// Condition which was applied, mostly human-readable. + std::string condition; + }; + + Description getDescription() const; /** A chain of possibly monotone functions. 
* If the key column is wrapped in functions that can be monotonous in some value ranges @@ -307,7 +317,7 @@ public: const ASTPtr & expr, Block & block_with_constants, Field & out_value, DataTypePtr & out_type); static Block getBlockWithConstants( - const ASTPtr & query, const TreeRewriterResultPtr & syntax_analyzer_result, const Context & context); + const ASTPtr & query, const TreeRewriterResultPtr & syntax_analyzer_result, ContextPtr context); static std::optional applyMonotonicFunctionsChainToRange( Range key_range, @@ -345,6 +355,7 @@ private: : function(function_), range(range_), key_column(key_column_) {} String toString() const; + String toString(const std::string_view & column_name, bool print_constants) const; Function function = FUNCTION_UNKNOWN; @@ -375,8 +386,8 @@ private: bool right_bounded, BoolMask initial_mask) const; - void traverseAST(const ASTPtr & node, const Context & context, Block & block_with_constants); - bool tryParseAtomFromAST(const ASTPtr & node, const Context & context, Block & block_with_constants, RPNElement & out); + void traverseAST(const ASTPtr & node, ContextPtr context, Block & block_with_constants); + bool tryParseAtomFromAST(const ASTPtr & node, ContextPtr context, Block & block_with_constants, RPNElement & out); static bool tryParseLogicalOperatorFromAST(const ASTFunction * func, RPNElement & out); /** Is node the key column @@ -387,7 +398,7 @@ private: */ bool isKeyPossiblyWrappedByMonotonicFunctions( const ASTPtr & node, - const Context & context, + ContextPtr context, size_t & out_key_column_num, DataTypePtr & out_key_res_column_type, MonotonicFunctionsChain & out_functions_chain); @@ -413,7 +424,7 @@ private: /// do it and return true. bool tryPrepareSetIndex( const ASTs & args, - const Context & context, + ContextPtr context, RPNElement & out, size_t & out_key_column_num); diff --git a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp index 6bf164dd824..41ad71c89ce 100644 --- a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp @@ -30,7 +30,7 @@ MergeTreeBaseSelectProcessor::MergeTreeBaseSelectProcessor( const MergeTreeReaderSettings & reader_settings_, bool use_uncompressed_cache_, const Names & virt_column_names_) - : SourceWithProgress(getHeader(std::move(header), prewhere_info_, virt_column_names_)) + : SourceWithProgress(transformHeader(std::move(header), prewhere_info_, virt_column_names_)) , storage(storage_) , metadata_snapshot(metadata_snapshot_) , prewhere_info(prewhere_info_) @@ -370,7 +370,7 @@ void MergeTreeBaseSelectProcessor::executePrewhereActions(Block & block, const P } } -Block MergeTreeBaseSelectProcessor::getHeader( +Block MergeTreeBaseSelectProcessor::transformHeader( Block block, const PrewhereInfoPtr & prewhere_info, const Names & virtual_columns) { executePrewhereActions(block, prewhere_info); diff --git a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.h b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.h index 00ef131ae45..a4c55cbae45 100644 --- a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.h +++ b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.h @@ -33,6 +33,8 @@ public: ~MergeTreeBaseSelectProcessor() override; + static Block transformHeader(Block block, const PrewhereInfoPtr & prewhere_info, const Names & virtual_columns); + static void executePrewhereActions(Block & block, const PrewhereInfoPtr & prewhere_info); protected: @@ -49,8 +51,6 @@ protected: static void 
injectVirtualColumns(Block & block, MergeTreeReadTask * task, const Names & virtual_columns); static void injectVirtualColumns(Chunk & chunk, MergeTreeReadTask * task, const Names & virtual_columns); - static Block getHeader(Block block, const PrewhereInfoPtr & prewhere_info, const Names & virtual_columns); - void initializeRangeReaders(MergeTreeReadTask & task); protected: diff --git a/src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp b/src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp index bb5644567ae..bc91e29d900 100644 --- a/src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp +++ b/src/Storages/MergeTree/MergeTreeBlockOutputStream.cpp @@ -35,12 +35,14 @@ void MergeTreeBlockOutputStream::write(const Block & block) if (!part) continue; - storage.renameTempPartAndAdd(part, &storage.increment); + /// Part can be deduplicated, so increment counters and add to part log only if it's really added + if (storage.renameTempPartAndAdd(part, &storage.increment, nullptr, storage.getDeduplicationLog())) + { + PartLog::addNewPart(storage.getContext(), part, watch.elapsed()); - PartLog::addNewPart(storage.global_context, part, watch.elapsed()); - - /// Initiate async merge - it will be done if it's good time for merge and if there are space in 'background_pool'. - storage.background_executor.triggerTask(); + /// Initiate async merge - it will be done if it's good time for merge and if there are space in 'background_pool'. + storage.background_executor.triggerTask(); + } } } diff --git a/src/Storages/MergeTree/MergeTreeData.cpp b/src/Storages/MergeTree/MergeTreeData.cpp index 71564cb1f54..f28d87bb9be 100644 --- a/src/Storages/MergeTree/MergeTreeData.cpp +++ b/src/Storages/MergeTree/MergeTreeData.cpp @@ -56,6 +56,8 @@ #include #include +#include + #include #include #include @@ -71,6 +73,7 @@ namespace ProfileEvents extern const Event RejectedInserts; extern const Event DelayedInserts; extern const Event DelayedInsertsMilliseconds; + extern const Event DuplicatedInsertedBlocks; } namespace CurrentMetrics @@ -132,7 +135,7 @@ MergeTreeData::MergeTreeData( const StorageID & table_id_, const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr storage_settings_, @@ -140,7 +143,7 @@ MergeTreeData::MergeTreeData( bool attach, BrokenPartCallback broken_part_callback_) : IStorage(table_id_) - , global_context(context_.getGlobalContext()) + , WithContext(context_->getGlobalContext()) , merging_params(merging_params_) , require_part_metadata(require_part_metadata_) , relative_data_path(relative_data_path_) @@ -160,7 +163,7 @@ MergeTreeData::MergeTreeData( /// Check sanity of MergeTreeSettings. Only when table is created. 
if (!attach) - settings->sanityCheck(global_context.getSettingsRef()); + settings->sanityCheck(getContext()->getSettingsRef()); MergeTreeDataFormatVersion min_format_version(0); if (!date_column_name.empty()) @@ -230,7 +233,7 @@ MergeTreeData::MergeTreeData( format_version = min_format_version; auto buf = version_file.second->writeFile(version_file.first); writeIntText(format_version.toUnderType(), *buf); - if (global_context.getSettingsRef().fsync_metadata) + if (getContext()->getSettingsRef().fsync_metadata) buf->sync(); } else @@ -259,7 +262,7 @@ MergeTreeData::MergeTreeData( StoragePolicyPtr MergeTreeData::getStoragePolicy() const { - return global_context.getStoragePolicy(getSettings()->storage_policy); + return getContext()->getStoragePolicy(getSettings()->storage_policy); } static void checkKeyExpression(const ExpressionActions & expr, const Block & sample_block, const String & key_name, bool allow_nullable_key) @@ -316,8 +319,8 @@ void MergeTreeData::checkProperties( { const String & pk_column = new_primary_key.column_names[i]; if (pk_column != sorting_key_column) - throw Exception("Primary key must be a prefix of the sorting key, but in position " - + toString(i) + " its column is " + pk_column + ", not " + sorting_key_column, + throw Exception("Primary key must be a prefix of the sorting key, but the column in the position " + + toString(i) + " is " + sorting_key_column +", not " + pk_column, ErrorCodes::BAD_ARGUMENTS); if (!primary_key_columns_set.emplace(pk_column).second) @@ -354,7 +357,7 @@ void MergeTreeData::checkProperties( if (!added_key_column_expr_list->children.empty()) { - auto syntax = TreeRewriter(global_context).analyze(added_key_column_expr_list, all_columns); + auto syntax = TreeRewriter(getContext()).analyze(added_key_column_expr_list, all_columns); Names used_columns = syntax->requiredSourceColumns(); NamesAndTypesList deleted_columns; @@ -411,7 +414,7 @@ ExpressionActionsPtr getCombinedIndicesExpression( const KeyDescription & key, const IndicesDescription & indices, const ColumnsDescription & columns, - const Context & context) + ContextPtr context) { ASTPtr combined_expr_list = key.expression_list_ast->clone(); @@ -450,12 +453,12 @@ DataTypes MergeTreeData::getMinMaxColumnsTypes(const KeyDescription & partition_ ExpressionActionsPtr MergeTreeData::getPrimaryKeyAndSkipIndicesExpression(const StorageMetadataPtr & metadata_snapshot) const { - return getCombinedIndicesExpression(metadata_snapshot->getPrimaryKey(), metadata_snapshot->getSecondaryIndices(), metadata_snapshot->getColumns(), global_context); + return getCombinedIndicesExpression(metadata_snapshot->getPrimaryKey(), metadata_snapshot->getSecondaryIndices(), metadata_snapshot->getColumns(), getContext()); } ExpressionActionsPtr MergeTreeData::getSortingKeyAndSkipIndicesExpression(const StorageMetadataPtr & metadata_snapshot) const { - return getCombinedIndicesExpression(metadata_snapshot->getSortingKey(), metadata_snapshot->getSecondaryIndices(), metadata_snapshot->getColumns(), global_context); + return getCombinedIndicesExpression(metadata_snapshot->getSortingKey(), metadata_snapshot->getSecondaryIndices(), metadata_snapshot->getColumns(), getContext()); } @@ -683,16 +686,16 @@ void MergeTreeData::MergingParams::check(const StorageInMemoryMetadata & metadat std::optional MergeTreeData::totalRowsByPartitionPredicateImpl( - const SelectQueryInfo & query_info, const Context & context, const DataPartsVector & parts) const + const SelectQueryInfo & query_info, ContextPtr local_context, const 
DataPartsVector & parts) const { auto metadata_snapshot = getInMemoryMetadataPtr(); ASTPtr expression_ast; Block virtual_columns_block = MergeTreeDataSelectExecutor::getSampleBlockWithVirtualPartColumns(); // Generate valid expressions for filtering - bool valid = VirtualColumnUtils::prepareFilterBlockWithQuery(query_info.query, context, virtual_columns_block, expression_ast); + bool valid = VirtualColumnUtils::prepareFilterBlockWithQuery(query_info.query, local_context, virtual_columns_block, expression_ast); - PartitionPruner partition_pruner(metadata_snapshot->getPartitionKey(), query_info, context, true /* strict */); + PartitionPruner partition_pruner(metadata_snapshot->getPartitionKey(), query_info, local_context, true /* strict */); if (partition_pruner.isUseless() && !valid) return {}; @@ -700,7 +703,7 @@ std::optional MergeTreeData::totalRowsByPartitionPredicateImpl( if (valid && expression_ast) { MergeTreeDataSelectExecutor::fillBlockWithVirtualPartColumns(parts, virtual_columns_block); - VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, context, expression_ast); + VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, local_context, expression_ast); part_values = VirtualColumnUtils::extractSingleValueFromBlock(virtual_columns_block, "_part"); if (part_values.empty()) return 0; @@ -765,7 +768,7 @@ void MergeTreeData::loadDataParts(bool skip_sanity_checks) for (const auto & disk_ptr : disks) defined_disk_names.insert(disk_ptr->getName()); - for (const auto & [disk_name, disk] : global_context.getDisksMap()) + for (const auto & [disk_name, disk] : getContext()->getDisksMap()) { if (defined_disk_names.count(disk_name) == 0 && disk->exists(relative_data_path)) { @@ -813,7 +816,7 @@ void MergeTreeData::loadDataParts(bool skip_sanity_checks) if (part_names_with_disks.empty() && parts_from_wal.empty()) { - LOG_DEBUG(log, "There is no data parts"); + LOG_DEBUG(log, "There are no data parts"); return; } @@ -1168,7 +1171,7 @@ void MergeTreeData::removePartsFinally(const MergeTreeData::DataPartsVector & pa /// NOTE: There is no need to log parts deletion somewhere else, all deleting parts pass through this function and pass away auto table_id = getStorageID(); - if (auto part_log = global_context.getPartLog(table_id.database_name)) + if (auto part_log = getContext()->getPartLog(table_id.database_name)) { PartLogElement part_log_elem; @@ -1200,7 +1203,7 @@ void MergeTreeData::clearOldPartsFromFilesystem(bool force) /// This is needed to close files to avoid they reside on disk after being deleted. /// NOTE: we can drop files from cache more selectively but this is good enough. if (!parts_to_remove.empty()) - global_context.dropMMappedFileCache(); + getContext()->dropMMappedFileCache(); } void MergeTreeData::clearPartsFromFilesystem(const DataPartsVector & parts_to_remove) @@ -1210,14 +1213,21 @@ void MergeTreeData::clearPartsFromFilesystem(const DataPartsVector & parts_to_re { /// Parallel parts removal. - size_t num_threads = std::min(size_t(settings->max_part_removal_threads), parts_to_remove.size()); + size_t num_threads = std::min(settings->max_part_removal_threads, parts_to_remove.size()); ThreadPool pool(num_threads); /// NOTE: Under heavy system load you may get "Cannot schedule a task" from ThreadPool. 
for (const DataPartPtr & part : parts_to_remove) { - pool.scheduleOrThrowOnError([&] + pool.scheduleOrThrowOnError([&, thread_group = CurrentThread::getGroup()] { + SCOPE_EXIT_SAFE( + if (thread_group) + CurrentThread::detachQueryIfNotDetached(); + ); + if (thread_group) + CurrentThread::attachTo(thread_group); + LOG_DEBUG(log, "Removing part from filesystem {}", part->name); part->remove(); }); @@ -1302,7 +1312,7 @@ void MergeTreeData::clearEmptyParts() { ASTPtr literal = std::make_shared(part->name); /// If another replica has already started drop, it's ok, no need to throw. - dropPartition(literal, /* detach = */ false, /*drop_part = */ true, global_context, /* throw_if_noop = */ false); + dropPartition(literal, /* detach = */ false, /*drop_part = */ true, getContext(), /* throw_if_noop = */ false); } } } @@ -1325,7 +1335,7 @@ void MergeTreeData::rename(const String & new_table_path, const StorageID & new_ } if (!getStorageID().hasUUID()) - global_context.dropCaches(); + getContext()->dropCaches(); relative_data_path = new_table_path; renameInMemory(new_table_id); @@ -1347,7 +1357,7 @@ void MergeTreeData::dropAllData() /// Tables in atomic databases have UUID and stored in persistent locations. /// No need to drop caches (that are keyed by filesystem path) because collision is not possible. if (!getStorageID().hasUUID()) - global_context.dropCaches(); + getContext()->dropCaches(); LOG_TRACE(log, "dropAllData: removing data from filesystem."); @@ -1472,23 +1482,23 @@ void checkVersionColumnTypesConversion(const IDataType * old_type, const IDataTy } -void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const { /// Check that needed transformations can be applied to the list of columns without considering type conversions. StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); StorageInMemoryMetadata old_metadata = getInMemoryMetadata(); - const auto & settings = context.getSettingsRef(); + const auto & settings = local_context->getSettingsRef(); if (!settings.allow_non_metadata_alters) { - auto mutation_commands = commands.getMutationCommands(new_metadata, settings.materialize_ttl_after_modify, global_context); + auto mutation_commands = commands.getMutationCommands(new_metadata, settings.materialize_ttl_after_modify, getContext()); if (!mutation_commands.empty()) throw Exception(ErrorCodes::ALTER_OF_COLUMN_IS_FORBIDDEN, "The following alter commands: '{}' will modify data on disk, but setting `allow_non_metadata_alters` is disabled", queryToString(mutation_commands.ast())); } - commands.apply(new_metadata, global_context); + commands.apply(new_metadata, getContext()); /// Set of columns that shouldn't be altered. 
NameSet columns_alter_type_forbidden; @@ -1551,13 +1561,13 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const C old_types.emplace(column.name, column.type.get()); NamesAndTypesList columns_to_check_conversion; - auto name_deps = getDependentViewsByColumn(context); + auto name_deps = getDependentViewsByColumn(local_context); for (const AlterCommand & command : commands) { /// Just validate partition expression if (command.partition) { - getPartitionIDFromQuery(command.partition, global_context); + getPartitionIDFromQuery(command.partition, getContext()); } if (command.column_name == merging_params.version_column) @@ -1691,7 +1701,7 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const C if (!columns_to_check_conversion.empty()) { auto old_header = old_metadata.getSampleBlock(); - performRequiredConversions(old_header, columns_to_check_conversion, global_context); + performRequiredConversions(old_header, columns_to_check_conversion, getContext()); } if (old_metadata.hasSettingsChanges()) @@ -1722,7 +1732,7 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, const C } if (setting_name == "storage_policy") - checkStoragePolicy(global_context.getStoragePolicy(new_value.safeGet())); + checkStoragePolicy(getContext()->getStoragePolicy(new_value.safeGet())); } } @@ -1847,7 +1857,7 @@ void MergeTreeData::changeSettings( { if (change.name == "storage_policy") { - StoragePolicyPtr new_storage_policy = global_context.getStoragePolicy(change.value.safeGet()); + StoragePolicyPtr new_storage_policy = getContext()->getStoragePolicy(change.value.safeGet()); StoragePolicyPtr old_storage_policy = getStoragePolicy(); /// StoragePolicy of different version or name is guaranteed to have different pointer @@ -1884,7 +1894,7 @@ void MergeTreeData::changeSettings( MergeTreeSettings copy = *getSettings(); copy.applyChanges(new_changes); - copy.sanityCheck(global_context.getSettingsRef()); + copy.sanityCheck(getContext()->getSettingsRef()); storage_settings.set(std::make_unique(copy)); StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); @@ -2022,7 +2032,7 @@ MergeTreeData::DataPartsVector MergeTreeData::getActivePartsToReplace( } -bool MergeTreeData::renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction) +bool MergeTreeData::renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction, MergeTreeDeduplicationLog * deduplication_log) { if (out_transaction && &out_transaction->data != this) throw Exception("MergeTreeData::Transaction for one table cannot be used with another. 
It is a bug.", @@ -2031,7 +2041,7 @@ bool MergeTreeData::renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrem DataPartsVector covered_parts; { auto lock = lockParts(); - if (!renameTempPartAndReplace(part, increment, out_transaction, lock, &covered_parts)) + if (!renameTempPartAndReplace(part, increment, out_transaction, lock, &covered_parts, deduplication_log)) return false; } if (!covered_parts.empty()) @@ -2044,7 +2054,7 @@ bool MergeTreeData::renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrem bool MergeTreeData::renameTempPartAndReplace( MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction, - std::unique_lock & lock, DataPartsVector * out_covered_parts) + std::unique_lock & lock, DataPartsVector * out_covered_parts, MergeTreeDeduplicationLog * deduplication_log) { if (out_transaction && &out_transaction->data != this) throw Exception("MergeTreeData::Transaction for one table cannot be used with another. It is a bug.", @@ -2099,6 +2109,22 @@ bool MergeTreeData::renameTempPartAndReplace( return false; } + /// Deduplication log used only from non-replicated MergeTree. Replicated + /// tables have their own mechanism. We try to deduplicate at such deep + /// level, because only here we know real part name which is required for + /// deduplication. + if (deduplication_log) + { + String block_id = part->getZeroLevelPartBlockID(); + auto res = deduplication_log->addPart(block_id, part_info); + if (!res.second) + { + ProfileEvents::increment(ProfileEvents::DuplicatedInsertedBlocks); + LOG_INFO(log, "Block with ID {} already exists as part {}; ignoring it", block_id, res.first.getPartName()); + return false; + } + } + /// All checks are passed. Now we can rename the part on disk. /// So, we maintain invariant: if a non-temporary part in filesystem then it is in data_parts /// @@ -2155,7 +2181,7 @@ bool MergeTreeData::renameTempPartAndReplace( } MergeTreeData::DataPartsVector MergeTreeData::renameTempPartAndReplace( - MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction) + MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction, MergeTreeDeduplicationLog * deduplication_log) { if (out_transaction && &out_transaction->data != this) throw Exception("MergeTreeData::Transaction for one table cannot be used with another. It is a bug.", @@ -2164,7 +2190,7 @@ MergeTreeData::DataPartsVector MergeTreeData::renameTempPartAndReplace( DataPartsVector covered_parts; { auto lock = lockParts(); - renameTempPartAndReplace(part, increment, out_transaction, lock, &covered_parts); + renameTempPartAndReplace(part, increment, out_transaction, lock, &covered_parts, deduplication_log); } return covered_parts; } @@ -2521,7 +2547,7 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until) const if (settings->inactive_parts_to_throw_insert > 0 || settings->inactive_parts_to_delay_insert > 0) { size_t inactive_parts_count_in_partition = getMaxInactivePartsCountForPartition(); - if (inactive_parts_count_in_partition >= settings->inactive_parts_to_throw_insert) + if (settings->inactive_parts_to_throw_insert > 0 && inactive_parts_count_in_partition >= settings->inactive_parts_to_throw_insert) { ProfileEvents::increment(ProfileEvents::RejectedInserts); throw Exception( @@ -2537,7 +2563,7 @@ void MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event * until) const ProfileEvents::increment(ProfileEvents::RejectedInserts); throw Exception( ErrorCodes::TOO_MANY_PARTS, - "Too many parts ({}). 
Parts cleaning are processing significantly slower than inserts", + "Too many parts ({}). Merges are processing significantly slower than inserts", parts_count_in_partition); } @@ -2732,7 +2758,8 @@ void MergeTreeData::removePartContributionToColumnSizes(const DataPartPtr & part } } -void MergeTreeData::checkAlterPartitionIsPossible(const PartitionCommands & commands, const StorageMetadataPtr & /*metadata_snapshot*/, const Settings & settings) const +void MergeTreeData::checkAlterPartitionIsPossible( + const PartitionCommands & commands, const StorageMetadataPtr & /*metadata_snapshot*/, const Settings & settings) const { for (const auto & command : commands) { @@ -2752,7 +2779,7 @@ void MergeTreeData::checkAlterPartitionIsPossible(const PartitionCommands & comm else { /// We are able to parse it - getPartitionIDFromQuery(command.partition, global_context); + getPartitionIDFromQuery(command.partition, getContext()); } } } @@ -2760,7 +2787,7 @@ void MergeTreeData::checkAlterPartitionIsPossible(const PartitionCommands & comm void MergeTreeData::checkPartitionCanBeDropped(const ASTPtr & partition) { - const String partition_id = getPartitionIDFromQuery(partition, global_context); + const String partition_id = getPartitionIDFromQuery(partition, getContext()); auto parts_to_remove = getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); UInt64 partition_size = 0; @@ -2769,7 +2796,7 @@ void MergeTreeData::checkPartitionCanBeDropped(const ASTPtr & partition) partition_size += part->getBytesOnDisk(); auto table_id = getStorageID(); - global_context.checkPartitionCanBeDropped(table_id.database_name, table_id.table_name, partition_size); + getContext()->checkPartitionCanBeDropped(table_id.database_name, table_id.table_name, partition_size); } void MergeTreeData::checkPartCanBeDropped(const ASTPtr & part_ast) @@ -2780,17 +2807,17 @@ void MergeTreeData::checkPartCanBeDropped(const ASTPtr & part_ast) throw Exception(ErrorCodes::NO_SUCH_DATA_PART, "No part {} in committed state", part_name); auto table_id = getStorageID(); - global_context.checkPartitionCanBeDropped(table_id.database_name, table_id.table_name, part->getBytesOnDisk()); + getContext()->checkPartitionCanBeDropped(table_id.database_name, table_id.table_name, part->getBytesOnDisk()); } -void MergeTreeData::movePartitionToDisk(const ASTPtr & partition, const String & name, bool moving_part, const Context & context) +void MergeTreeData::movePartitionToDisk(const ASTPtr & partition, const String & name, bool moving_part, ContextPtr local_context) { String partition_id; if (moving_part) partition_id = partition->as().value.safeGet(); else - partition_id = getPartitionIDFromQuery(partition, context); + partition_id = getPartitionIDFromQuery(partition, local_context); DataPartsVector parts; if (moving_part) @@ -2828,14 +2855,14 @@ void MergeTreeData::movePartitionToDisk(const ASTPtr & partition, const String & } -void MergeTreeData::movePartitionToVolume(const ASTPtr & partition, const String & name, bool moving_part, const Context & context) +void MergeTreeData::movePartitionToVolume(const ASTPtr & partition, const String & name, bool moving_part, ContextPtr local_context) { String partition_id; if (moving_part) partition_id = partition->as().value.safeGet(); else - partition_id = getPartitionIDFromQuery(partition, context); + partition_id = getPartitionIDFromQuery(partition, local_context); DataPartsVector parts; if (moving_part) @@ -2853,7 +2880,7 @@ void MergeTreeData::movePartitionToVolume(const ASTPtr & partition, 
const String throw Exception("Volume " + name + " does not exists on policy " + getStoragePolicy()->getName(), ErrorCodes::UNKNOWN_DISK); if (parts.empty()) - throw Exception("Nothing to move", ErrorCodes::NO_SUCH_DATA_PART); + throw Exception("Nothing to move (сheck that the partition exists).", ErrorCodes::NO_SUCH_DATA_PART); parts.erase(std::remove_if(parts.begin(), parts.end(), [&](auto part_ptr) { @@ -2882,7 +2909,12 @@ void MergeTreeData::movePartitionToVolume(const ASTPtr & partition, const String throw Exception("Cannot move parts because moves are manually disabled", ErrorCodes::ABORTED); } -void MergeTreeData::fetchPartition(const ASTPtr & /*partition*/, const StorageMetadataPtr & /*metadata_snapshot*/, const String & /*from*/, const Context & /*query_context*/) +void MergeTreeData::fetchPartition( + const ASTPtr & /*partition*/, + const StorageMetadataPtr & /*metadata_snapshot*/, + const String & /*from*/, + bool /*fetch_part*/, + ContextPtr /*query_context*/) { throw Exception(ErrorCodes::NOT_IMPLEMENTED, "FETCH PARTITION is not supported by storage {}", getName()); } @@ -2890,7 +2922,7 @@ void MergeTreeData::fetchPartition(const ASTPtr & /*partition*/, const StorageMe Pipe MergeTreeData::alterPartition( const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, - const Context & query_context) + ContextPtr query_context) { PartitionCommandsResultInfo result; for (const PartitionCommand & command : commands) @@ -2927,7 +2959,7 @@ Pipe MergeTreeData::alterPartition( case PartitionCommand::MoveDestinationType::TABLE: checkPartitionCanBeDropped(command.partition); - String dest_database = query_context.resolveDatabase(command.to_database); + String dest_database = query_context->resolveDatabase(command.to_database); auto dest_storage = DatabaseCatalog::instance().getTable({dest_database, command.to_table}, query_context); movePartitionToTable(dest_storage, command.partition, query_context); break; @@ -2938,40 +2970,40 @@ Pipe MergeTreeData::alterPartition( case PartitionCommand::REPLACE_PARTITION: { checkPartitionCanBeDropped(command.partition); - String from_database = query_context.resolveDatabase(command.from_database); + String from_database = query_context->resolveDatabase(command.from_database); auto from_storage = DatabaseCatalog::instance().getTable({from_database, command.from_table}, query_context); replacePartitionFrom(from_storage, command.partition, command.replace, query_context); } break; case PartitionCommand::FETCH_PARTITION: - fetchPartition(command.partition, metadata_snapshot, command.from_zookeeper_path, query_context); + fetchPartition(command.partition, metadata_snapshot, command.from_zookeeper_path, command.part, query_context); break; case PartitionCommand::FREEZE_PARTITION: { - auto lock = lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); + auto lock = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); current_command_results = freezePartition(command.partition, metadata_snapshot, command.with_name, query_context, lock); } break; case PartitionCommand::FREEZE_ALL_PARTITIONS: { - auto lock = lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); + auto lock = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); current_command_results = freezeAll(command.with_name, metadata_snapshot, query_context, lock); } break; case 
PartitionCommand::UNFREEZE_PARTITION: { - auto lock = lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); + auto lock = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); current_command_results = unfreezePartition(command.partition, command.with_name, query_context, lock); } break; case PartitionCommand::UNFREEZE_ALL_PARTITIONS: { - auto lock = lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); + auto lock = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); current_command_results = unfreezeAll(command.with_name, query_context, lock); } @@ -2982,13 +3014,13 @@ Pipe MergeTreeData::alterPartition( result.insert(result.end(), current_command_results.begin(), current_command_results.end()); } - if (query_context.getSettingsRef().alter_partition_verbose_result) + if (query_context->getSettingsRef().alter_partition_verbose_result) return convertCommandsResultToSource(result); return {}; } -String MergeTreeData::getPartitionIDFromQuery(const ASTPtr & ast, const Context & context) const +String MergeTreeData::getPartitionIDFromQuery(const ASTPtr & ast, ContextPtr local_context) const { const auto & partition_ast = ast->as(); @@ -3030,7 +3062,12 @@ String MergeTreeData::getPartitionIDFromQuery(const ASTPtr & ast, const Context ReadBufferFromMemory right_paren_buf(")", 1); ConcatReadBuffer buf({&left_paren_buf, &fields_buf, &right_paren_buf}); - auto input_format = FormatFactory::instance().getInput("Values", buf, metadata_snapshot->getPartitionKey().sample_block, context, context.getSettingsRef().max_block_size); + auto input_format = FormatFactory::instance().getInput( + "Values", + buf, + metadata_snapshot->getPartitionKey().sample_block, + local_context, + local_context->getSettingsRef().max_block_size); auto input_stream = std::make_shared(input_format); auto block = input_stream->read(); @@ -3143,7 +3180,7 @@ void MergeTreeData::validateDetachedPartName(const String & name) const ErrorCodes::BAD_DATA_PART_NAME); } -void MergeTreeData::dropDetached(const ASTPtr & partition, bool part, const Context & context) +void MergeTreeData::dropDetached(const ASTPtr & partition, bool part, ContextPtr local_context) { PartsTemporaryRename renamed_parts(*this, "detached/"); @@ -3155,7 +3192,7 @@ void MergeTreeData::dropDetached(const ASTPtr & partition, bool part, const Cont } else { - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); DetachedPartsInfo detached_parts = getDetachedParts(); for (const auto & part_info : detached_parts) if (part_info.valid_name && part_info.partition_id == partition_id @@ -3177,7 +3214,7 @@ void MergeTreeData::dropDetached(const ASTPtr & partition, bool part, const Cont } MergeTreeData::MutableDataPartsVector MergeTreeData::tryLoadPartsToAttach(const ASTPtr & partition, bool attach_part, - const Context & context, PartsTemporaryRename & renamed_parts) + ContextPtr local_context, PartsTemporaryRename & renamed_parts) { const String source_dir = "detached/"; @@ -3196,7 +3233,7 @@ MergeTreeData::MutableDataPartsVector MergeTreeData::tryLoadPartsToAttach(const } else { - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); LOG_DEBUG(log, "Looking for parts for partition {} in {}", partition_id, 
source_dir); ActiveDataPartSet active_parts(format_version); @@ -3422,7 +3459,7 @@ CompressionCodecPtr MergeTreeData::getCompressionCodecForPart(size_t part_size_c if (best_ttl_entry) return CompressionCodecFactory::instance().get(best_ttl_entry->recompression_codec, {}); - return global_context.chooseCompressionCodec( + return getContext()->chooseCompressionCodec( part_size_compressed, static_cast(part_size_compressed) / getTotalActiveSizeInBytes()); } @@ -3583,7 +3620,7 @@ bool MergeTreeData::isPrimaryOrMinMaxKeyColumnPossiblyWrappedInFunctions( } bool MergeTreeData::mayBenefitFromIndexForIn( - const ASTPtr & left_in_operand, const Context &, const StorageMetadataPtr & metadata_snapshot) const + const ASTPtr & left_in_operand, ContextPtr, const StorageMetadataPtr & metadata_snapshot) const { /// Make sure that the left side of the IN operator contain part of the key. /// If there is a tuple on the left side of the IN operator, at least one item of the tuple @@ -3747,7 +3784,7 @@ MergeTreeData::PathsWithDisks MergeTreeData::getRelativeDataPathsWithDisks() con return res; } -MergeTreeData::MatcherFn MergeTreeData::getPartitionMatcher(const ASTPtr & partition_ast, const Context & context) const +MergeTreeData::MatcherFn MergeTreeData::getPartitionMatcher(const ASTPtr & partition_ast, ContextPtr local_context) const { bool prefixed = false; String id; @@ -3763,10 +3800,10 @@ MergeTreeData::MatcherFn MergeTreeData::getPartitionMatcher(const ASTPtr & parti prefixed = true; } else - id = getPartitionIDFromQuery(partition_ast, context); + id = getPartitionIDFromQuery(partition_ast, local_context); } else - id = getPartitionIDFromQuery(partition_ast, context); + id = getPartitionIDFromQuery(partition_ast, local_context); return [prefixed, id](const String & partition_id) { @@ -3781,28 +3818,28 @@ PartitionCommandsResultInfo MergeTreeData::freezePartition( const ASTPtr & partition_ast, const StorageMetadataPtr & metadata_snapshot, const String & with_name, - const Context & context, + ContextPtr local_context, TableLockHolder &) { - return freezePartitionsByMatcher(getPartitionMatcher(partition_ast, context), metadata_snapshot, with_name, context); + return freezePartitionsByMatcher(getPartitionMatcher(partition_ast, local_context), metadata_snapshot, with_name, local_context); } PartitionCommandsResultInfo MergeTreeData::freezeAll( const String & with_name, const StorageMetadataPtr & metadata_snapshot, - const Context & context, + ContextPtr local_context, TableLockHolder &) { - return freezePartitionsByMatcher([] (const String &) { return true; }, metadata_snapshot, with_name, context); + return freezePartitionsByMatcher([] (const String &) { return true; }, metadata_snapshot, with_name, local_context); } PartitionCommandsResultInfo MergeTreeData::freezePartitionsByMatcher( MatcherFn matcher, const StorageMetadataPtr & metadata_snapshot, const String & with_name, - const Context & context) + ContextPtr local_context) { - String clickhouse_path = Poco::Path(context.getPath()).makeAbsolute().toString(); + String clickhouse_path = Poco::Path(local_context->getPath()).makeAbsolute().toString(); String default_shadow_path = clickhouse_path + "shadow/"; Poco::File(default_shadow_path).createDirectories(); auto increment = Increment(default_shadow_path + "increment.txt").get(true); @@ -3856,21 +3893,21 @@ PartitionCommandsResultInfo MergeTreeData::freezePartitionsByMatcher( PartitionCommandsResultInfo MergeTreeData::unfreezePartition( const ASTPtr & partition, const String & backup_name, - const 
Context & context, + ContextPtr local_context, TableLockHolder &) { - return unfreezePartitionsByMatcher(getPartitionMatcher(partition, context), backup_name, context); + return unfreezePartitionsByMatcher(getPartitionMatcher(partition, local_context), backup_name, local_context); } PartitionCommandsResultInfo MergeTreeData::unfreezeAll( const String & backup_name, - const Context & context, + ContextPtr local_context, TableLockHolder &) { - return unfreezePartitionsByMatcher([] (const String &) { return true; }, backup_name, context); + return unfreezePartitionsByMatcher([] (const String &) { return true; }, backup_name, local_context); } -PartitionCommandsResultInfo MergeTreeData::unfreezePartitionsByMatcher(MatcherFn matcher, const String & backup_name, const Context &) +PartitionCommandsResultInfo MergeTreeData::unfreezePartitionsByMatcher(MatcherFn matcher, const String & backup_name, ContextPtr) { auto backup_path = std::filesystem::path("shadow") / escapeForFileName(backup_name) / relative_data_path; @@ -3953,7 +3990,7 @@ void MergeTreeData::writePartLog( try { auto table_id = getStorageID(); - auto part_log = global_context.getPartLog(table_id.database_name); + auto part_log = getContext()->getPartLog(table_id.database_name); if (!part_log) return; @@ -4220,7 +4257,7 @@ NamesAndTypesList MergeTreeData::getVirtuals() const size_t MergeTreeData::getTotalMergesWithTTLInMergeList() const { - return global_context.getMergeList().getMergesWithTTLCount(); + return getContext()->getMergeList().getMergesWithTTLCount(); } void MergeTreeData::addPartContributionToDataVolume(const DataPartPtr & part) diff --git a/src/Storages/MergeTree/MergeTreeData.h b/src/Storages/MergeTree/MergeTreeData.h index 63d776a838c..46c0014d9f7 100644 --- a/src/Storages/MergeTree/MergeTreeData.h +++ b/src/Storages/MergeTree/MergeTreeData.h @@ -54,6 +54,7 @@ struct CurrentlySubmergingEmergingTagger; class ExpressionActions; using ExpressionActionsPtr = std::shared_ptr; using ManyExpressionActions = std::vector; +class MergeTreeDeduplicationLog; namespace ErrorCodes { @@ -110,7 +111,7 @@ namespace ErrorCodes /// - MergeTreeDataWriter /// - MergeTreeDataMergerMutator -class MergeTreeData : public IStorage +class MergeTreeData : public IStorage, public WithContext { public: /// Function to call if the part is suspected to contain corrupt data. @@ -348,7 +349,7 @@ public: MergeTreeData(const StorageID & table_id_, const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr settings_, @@ -374,7 +375,7 @@ public: NamesAndTypesList getVirtuals() const override; - bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context &, const StorageMetadataPtr & metadata_snapshot) const override; + bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr, const StorageMetadataPtr & metadata_snapshot) const override; /// Load the set of data parts from disk. Call once - immediately after the object is created. 
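From here on, `MergeTreeData` no longer stores a `Context &` member; the class declaration above derives from `WithContext`, the constructor initializes it with `context_->getGlobalContext()`, and all call sites reach the global context through `getContext()`. The mixin itself is defined outside this diff; a sketch of the shape implied by how it is used here (construction from a `ContextPtr` plus a `getContext()` accessor), with the member type being an assumption:

```cpp
/// Assumed shape of the WithContext mixin that replaces the removed
/// `Context & global_context` member; the real class may hold a weak pointer instead.
class WithContext
{
public:
    WithContext() = default;
    explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}

    ContextPtr getContext() const { return context; }

private:
    ContextPtr context;
};
```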
void loadDataParts(bool skip_sanity_checks); @@ -397,10 +398,10 @@ public: void validateDetachedPartName(const String & name) const; - void dropDetached(const ASTPtr & partition, bool part, const Context & context); + void dropDetached(const ASTPtr & partition, bool part, ContextPtr context); MutableDataPartsVector tryLoadPartsToAttach(const ASTPtr & partition, bool attach_part, - const Context & context, PartsTemporaryRename & renamed_parts); + ContextPtr context, PartsTemporaryRename & renamed_parts); /// Returns Committed parts DataParts getDataParts() const; @@ -447,18 +448,18 @@ public: /// active set later with out_transaction->commit()). /// Else, commits the part immediately. /// Returns true if part was added. Returns false if part is covered by bigger part. - bool renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrement * increment = nullptr, Transaction * out_transaction = nullptr); + bool renameTempPartAndAdd(MutableDataPartPtr & part, SimpleIncrement * increment = nullptr, Transaction * out_transaction = nullptr, MergeTreeDeduplicationLog * deduplication_log = nullptr); /// The same as renameTempPartAndAdd but the block range of the part can contain existing parts. /// Returns all parts covered by the added part (in ascending order). /// If out_transaction == nullptr, marks covered parts as Outdated. DataPartsVector renameTempPartAndReplace( - MutableDataPartPtr & part, SimpleIncrement * increment = nullptr, Transaction * out_transaction = nullptr); + MutableDataPartPtr & part, SimpleIncrement * increment = nullptr, Transaction * out_transaction = nullptr, MergeTreeDeduplicationLog * deduplication_log = nullptr); /// Low-level version of previous one, doesn't lock mutex bool renameTempPartAndReplace( MutableDataPartPtr & part, SimpleIncrement * increment, Transaction * out_transaction, DataPartsLock & lock, - DataPartsVector * out_covered_parts = nullptr); + DataPartsVector * out_covered_parts = nullptr, MergeTreeDeduplicationLog * deduplication_log = nullptr); /// Remove parts from working set immediately (without wait for background @@ -530,7 +531,7 @@ public: /// - all type conversions can be done. /// - columns corresponding to primary key, indices, sign, sampling expression and date are not affected. /// If something is wrong, throws an exception. - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; /// Checks if the Mutation can be performed. /// (currently no additional checks: always ok) @@ -565,35 +566,34 @@ public: const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, const String & with_name, - const Context & context, + ContextPtr context, TableLockHolder & table_lock_holder); /// Freezes all parts. PartitionCommandsResultInfo freezeAll( const String & with_name, const StorageMetadataPtr & metadata_snapshot, - const Context & context, + ContextPtr context, TableLockHolder & table_lock_holder); /// Unfreezes particular partition. PartitionCommandsResultInfo unfreezePartition( const ASTPtr & partition, const String & backup_name, - const Context & context, + ContextPtr context, TableLockHolder & table_lock_holder); /// Unfreezes all parts. 
PartitionCommandsResultInfo unfreezeAll( const String & backup_name, - const Context & context, + ContextPtr context, TableLockHolder & table_lock_holder); -public: /// Moves partition to specified Disk - void movePartitionToDisk(const ASTPtr & partition, const String & name, bool moving_part, const Context & context); + void movePartitionToDisk(const ASTPtr & partition, const String & name, bool moving_part, ContextPtr context); /// Moves partition to specified Volume - void movePartitionToVolume(const ASTPtr & partition, const String & name, bool moving_part, const Context & context); + void movePartitionToVolume(const ASTPtr & partition, const String & name, bool moving_part, ContextPtr context); void checkPartitionCanBeDropped(const ASTPtr & partition) override; @@ -602,7 +602,7 @@ public: Pipe alterPartition( const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, - const Context & query_context) override; + ContextPtr query_context) override; size_t getColumnCompressedSize(const std::string & name) const { @@ -618,7 +618,7 @@ public: } /// For ATTACH/DETACH/DROP PARTITION. - String getPartitionIDFromQuery(const ASTPtr & ast, const Context & context) const; + String getPartitionIDFromQuery(const ASTPtr & ast, ContextPtr context) const; /// Extracts MergeTreeData of other *MergeTree* storage /// and checks that their structure suitable for ALTER TABLE ATTACH PARTITION FROM @@ -709,12 +709,12 @@ public: /// Choose disk with max available free space /// Reserves 0 bytes - ReservationPtr makeEmptyReservationOnLargestDisk() { return getStoragePolicy()->makeEmptyReservationOnLargestDisk(); } + ReservationPtr makeEmptyReservationOnLargestDisk() const { return getStoragePolicy()->makeEmptyReservationOnLargestDisk(); } Disks getDisksByType(DiskType::Type type) const { return getStoragePolicy()->getDisksByType(type); } /// Return alter conversions for part which must be applied on fly. - AlterConversions getAlterConversionsForPart(const MergeTreeDataPartPtr part) const; + AlterConversions getAlterConversionsForPart(MergeTreeDataPartPtr part) const; /// Returns destination disk or volume for the TTL rule according to current storage policy /// 'is_insert' - is TTL move performed on new data part insert. SpacePtr getDestinationForMoveTTL(const TTLDescription & move_ttl, bool is_insert = false) const; @@ -733,8 +733,6 @@ public: MergeTreeDataFormatVersion format_version; - Context & global_context; - /// Merging params - what additional actions to perform during merge. const MergingParams merging_params; @@ -894,7 +892,7 @@ protected: } std::optional totalRowsByPartitionPredicateImpl( - const SelectQueryInfo & query_info, const Context & context, const DataPartsVector & parts) const; + const SelectQueryInfo & query_info, ContextPtr context, const DataPartsVector & parts) const; static decltype(auto) getStateModifier(DataPartState state) { @@ -960,19 +958,24 @@ protected: /// Common part for |freezePartition()| and |freezeAll()|. 
using MatcherFn = std::function; - PartitionCommandsResultInfo freezePartitionsByMatcher(MatcherFn matcher, const StorageMetadataPtr & metadata_snapshot, const String & with_name, const Context & context); - PartitionCommandsResultInfo unfreezePartitionsByMatcher(MatcherFn matcher, const String & backup_name, const Context & context); + PartitionCommandsResultInfo freezePartitionsByMatcher(MatcherFn matcher, const StorageMetadataPtr & metadata_snapshot, const String & with_name, ContextPtr context); + PartitionCommandsResultInfo unfreezePartitionsByMatcher(MatcherFn matcher, const String & backup_name, ContextPtr context); // Partition helpers bool canReplacePartition(const DataPartPtr & src_part) const; - virtual void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & context, bool throw_if_noop = true) = 0; - virtual PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, const Context & context) = 0; - virtual void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & context) = 0; - virtual void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & context) = 0; + virtual void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr context, bool throw_if_noop = true) = 0; + virtual PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, ContextPtr context) = 0; + virtual void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr context) = 0; + virtual void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr context) = 0; /// Makes sense only for replicated tables - virtual void fetchPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, const String & from, const Context & query_context); + virtual void fetchPartition( + const ASTPtr & partition, + const StorageMetadataPtr & metadata_snapshot, + const String & from, + bool fetch_part, + ContextPtr query_context); void writePartLog( PartLogElement::Type type, @@ -1047,7 +1050,7 @@ private: mutable std::mutex query_id_set_mutex; // Get partition matcher for FREEZE / UNFREEZE queries. - MatcherFn getPartitionMatcher(const ASTPtr & partition, const Context & context) const; + MatcherFn getPartitionMatcher(const ASTPtr & partition, ContextPtr context) const; }; /// RAII struct to record big parts that are submerging or emerging. diff --git a/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp b/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp index 4269aa89ad1..dfebd88abe9 100644 --- a/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp +++ b/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp @@ -651,7 +651,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mergePartsToTempor MergeList::Entry & merge_entry, TableLockHolder &, time_t time_of_merge, - const Context & context, + ContextPtr context, const ReservationPtr & space_reservation, bool deduplicate, const Names & deduplicate_by_columns) @@ -751,7 +751,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mergePartsToTempor /// deadlock is impossible. 
auto compression_codec = data.getCompressionCodecForPart(merge_entry->total_size_bytes_compressed, new_data_part->ttl_infos, time_of_merge); - auto tmp_disk = context.getTemporaryVolume()->getDisk(); + auto tmp_disk = context->getTemporaryVolume()->getDisk(); String rows_sources_file_path; std::unique_ptr rows_sources_uncompressed_write_buf; std::unique_ptr rows_sources_write_buf; @@ -910,7 +910,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mergePartsToTempor { const auto & indices = metadata_snapshot->getSecondaryIndices(); merged_stream = std::make_shared( - merged_stream, indices.getSingleExpressionForIndices(metadata_snapshot->getColumns(), data.global_context)); + merged_stream, indices.getSingleExpressionForIndices(metadata_snapshot->getColumns(), data.getContext())); merged_stream = std::make_shared(merged_stream); } @@ -1099,7 +1099,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mutatePartToTempor const MutationCommands & commands, MergeListEntry & merge_entry, time_t time_of_mutation, - const Context & context, + ContextPtr context, const ReservationPtr & space_reservation, TableLockHolder &) { @@ -1113,12 +1113,12 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mutatePartToTempor const auto & source_part = future_part.parts[0]; auto storage_from_source_part = StorageFromMergeTreeDataPart::create(source_part); - auto context_for_reading = context; - context_for_reading.setSetting("max_streams_to_max_threads_ratio", 1); - context_for_reading.setSetting("max_threads", 1); + auto context_for_reading = Context::createCopy(context); + context_for_reading->setSetting("max_streams_to_max_threads_ratio", 1); + context_for_reading->setSetting("max_threads", 1); /// Allow mutations to work when force_index_by_date or force_primary_key is on. - context_for_reading.setSetting("force_index_by_date", Field(0)); - context_for_reading.setSetting("force_primary_key", Field(0)); + context_for_reading->setSetting("force_index_by_date", Field(0)); + context_for_reading->setSetting("force_primary_key", Field(0)); MutationCommands commands_for_part; for (const auto & command : commands) @@ -1129,7 +1129,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mutatePartToTempor } if (source_part->isStoredOnDisk() && !isStorageTouchedByMutations( - storage_from_source_part, metadata_snapshot, commands_for_part, context_for_reading)) + storage_from_source_part, metadata_snapshot, commands_for_part, Context::createCopy(context_for_reading))) { LOG_TRACE(log, "Part {} doesn't change up to mutation version {}", source_part->name, future_part.part_info.mutation); return data.cloneAndLoadDataPartOnSameDisk(source_part, "tmp_clone_", future_part.part_info, metadata_snapshot); @@ -1690,7 +1690,7 @@ std::set MergeTreeDataMergerMutator::getIndicesToRecalculate( BlockInputStreamPtr & input_stream, const NamesAndTypesList & updated_columns, const StorageMetadataPtr & metadata_snapshot, - const Context & context) + ContextPtr context) { /// Checks if columns used in skipping indexes modified. 
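With contexts passed as shared pointers, the mutation path above no longer copies a `Context` by value to adjust settings for reading the source part; it creates an explicit copy via `Context::createCopy` and overrides the settings on that copy. A minimal sketch of the pattern (the four setting overrides are exactly the ones in the hunk; wrapping them in a helper function is illustrative only):

```cpp
/// Sketch: never mutate the caller's context; apply per-operation overrides to a private copy.
ContextPtr makeContextForMutationReading(ContextPtr context)
{
    auto context_for_reading = Context::createCopy(context);
    context_for_reading->setSetting("max_streams_to_max_threads_ratio", 1);
    context_for_reading->setSetting("max_threads", 1);
    /// Allow mutations to work when force_index_by_date or force_primary_key is on.
    context_for_reading->setSetting("force_index_by_date", Field(0));
    context_for_reading->setSetting("force_primary_key", Field(0));
    return context_for_reading;
}
```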
const auto & index_factory = MergeTreeIndexFactory::instance(); diff --git a/src/Storages/MergeTree/MergeTreeDataMergerMutator.h b/src/Storages/MergeTree/MergeTreeDataMergerMutator.h index 2f3a898ba84..d4dc0ce8499 100644 --- a/src/Storages/MergeTree/MergeTreeDataMergerMutator.h +++ b/src/Storages/MergeTree/MergeTreeDataMergerMutator.h @@ -125,7 +125,7 @@ public: MergeListEntry & merge_entry, TableLockHolder & table_lock_holder, time_t time_of_merge, - const Context & context, + ContextPtr context, const ReservationPtr & space_reservation, bool deduplicate, const Names & deduplicate_by_columns); @@ -137,7 +137,7 @@ public: const MutationCommands & commands, MergeListEntry & merge_entry, time_t time_of_mutation, - const Context & context, + ContextPtr context, const ReservationPtr & space_reservation, TableLockHolder & table_lock_holder); @@ -199,7 +199,7 @@ private: BlockInputStreamPtr & input_stream, const NamesAndTypesList & updated_columns, const StorageMetadataPtr & metadata_snapshot, - const Context & context); + ContextPtr context); /// Override all columns of new part using mutating_stream void mutateAllPartColumns( diff --git a/src/Storages/MergeTree/MergeTreeDataPartInMemory.cpp b/src/Storages/MergeTree/MergeTreeDataPartInMemory.cpp index 96fa411339c..045ab488ada 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartInMemory.cpp +++ b/src/Storages/MergeTree/MergeTreeDataPartInMemory.cpp @@ -88,7 +88,7 @@ void MergeTreeDataPartInMemory::flushToDisk(const String & base_path, const Stri disk->createDirectories(destination_path); - auto compression_codec = storage.global_context.chooseCompressionCodec(0, 0); + auto compression_codec = storage.getContext()->chooseCompressionCodec(0, 0); auto indices = MergeTreeIndexFactory::instance().getMany(metadata_snapshot->getSecondaryIndices()); MergedBlockOutputStream out(new_data_part, metadata_snapshot, columns, indices, compression_codec); out.writePrefix(); diff --git a/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp b/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp index a2f7440b2e3..57e8cca46cd 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp +++ b/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp @@ -3,6 +3,7 @@ #include #include #include +#include namespace DB { @@ -337,7 +338,7 @@ void MergeTreeDataPartWriterWide::writeColumn( serializations[name]->serializeBinaryBulkStatePrefix(serialize_settings, it->second); } - const auto & global_settings = storage.global_context.getSettingsRef(); + const auto & global_settings = storage.getContext()->getSettingsRef(); ISerialization::SerializeBinaryBulkSettings serialize_settings; serialize_settings.getter = createStreamGetter(name_and_type, offset_columns); serialize_settings.low_cardinality_max_dictionary_size = global_settings.low_cardinality_max_dictionary_size; @@ -393,8 +394,9 @@ void MergeTreeDataPartWriterWide::validateColumnOfFixedSize(const String & name, throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot validate column of non fixed type {}", type.getName()); auto disk = data_part->volume->getDisk(); - String mrk_path = fullPath(disk, part_path + name + marks_file_extension); - String bin_path = fullPath(disk, part_path + name + DATA_FILE_EXTENSION); + String escaped_name = escapeForFileName(name); + String mrk_path = fullPath(disk, part_path + escaped_name + marks_file_extension); + String bin_path = fullPath(disk, part_path + escaped_name + DATA_FILE_EXTENSION); DB::ReadBufferFromFile mrk_in(mrk_path); DB::CompressedReadBufferFromFile 
bin_in(bin_path, 0, 0, 0, nullptr); bool must_be_last = false; @@ -501,7 +503,7 @@ void MergeTreeDataPartWriterWide::validateColumnOfFixedSize(const String & name, void MergeTreeDataPartWriterWide::finishDataSerialization(IMergeTreeDataPart::Checksums & checksums, bool sync) { - const auto & global_settings = storage.global_context.getSettingsRef(); + const auto & global_settings = storage.getContext()->getSettingsRef(); ISerialization::SerializeBinaryBulkSettings serialize_settings; serialize_settings.low_cardinality_max_dictionary_size = global_settings.low_cardinality_max_dictionary_size; serialize_settings.low_cardinality_use_single_dictionary_for_part = global_settings.low_cardinality_use_single_dictionary_for_part != 0; diff --git a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp index f3759107912..1340332350f 100644 --- a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp +++ b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp @@ -1,5 +1,5 @@ #include /// For calculations related to sampling coefficients. -#include +#include #include #include @@ -28,7 +28,7 @@ #include #include #include -#include +#include #include #include #include @@ -39,7 +39,6 @@ #include #include #include -#include namespace ProfileEvents { @@ -152,7 +151,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::read( const Names & column_names_to_return, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const UInt64 max_block_size, const unsigned num_streams, const PartitionIdToMaxBlock * max_block_numbers_to_read) const @@ -168,7 +167,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( const Names & column_names_to_return, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const UInt64 max_block_size, const unsigned num_streams, const PartitionIdToMaxBlock * max_block_numbers_to_read) const @@ -238,7 +237,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( metadata_snapshot->check(real_column_names, data.getVirtuals(), data.getStorageID()); - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); const auto & primary_key = metadata_snapshot->getPrimaryKey(); Names primary_key_columns = primary_key.column_names; @@ -280,13 +279,42 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( } } - const Context & query_context = context.hasQueryContext() ? context.getQueryContext() : context; + auto query_context = context->hasQueryContext() ? 
context->getQueryContext() : context; - if (query_context.getSettingsRef().allow_experimental_query_deduplication) - selectPartsToReadWithUUIDFilter(parts, part_values, minmax_idx_condition, minmax_columns_types, partition_pruner, max_block_numbers_to_read, query_context); + PartFilterCounters part_filter_counters; + auto index_stats = std::make_unique(); + + if (query_context->getSettingsRef().allow_experimental_query_deduplication) + selectPartsToReadWithUUIDFilter(parts, part_values, minmax_idx_condition, minmax_columns_types, partition_pruner, max_block_numbers_to_read, query_context, part_filter_counters); else - selectPartsToRead(parts, part_values, minmax_idx_condition, minmax_columns_types, partition_pruner, max_block_numbers_to_read); + selectPartsToRead(parts, part_values, minmax_idx_condition, minmax_columns_types, partition_pruner, max_block_numbers_to_read, part_filter_counters); + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::None, + .num_parts_after = part_filter_counters.num_initial_selected_parts, + .num_granules_after = part_filter_counters.num_initial_selected_granules}); + + if (minmax_idx_condition) + { + auto description = minmax_idx_condition->getDescription(); + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::MinMax, + .condition = std::move(description.condition), + .used_keys = std::move(description.used_keys), + .num_parts_after = part_filter_counters.num_parts_after_minmax, + .num_granules_after = part_filter_counters.num_granules_after_minmax}); + } + + if (partition_pruner) + { + auto description = partition_pruner->getKeyCondition().getDescription(); + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::Partition, + .condition = std::move(description.condition), + .used_keys = std::move(description.used_keys), + .num_parts_after = part_filter_counters.num_parts_after_partition_pruner, + .num_granules_after = part_filter_counters.num_granules_after_partition_pruner}); + } /// Sampling. 
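Part selection above now tallies how many parts and granules survive each pruning stage in a `PartFilterCounters` object and converts the tallies into `ReadFromMergeTree::IndexStat` entries (the `None`, `MinMax` and `Partition` entries are built here; primary-key and skip-index entries follow later in this file). The counter struct is declared elsewhere in the change; a sketch consistent with the members referenced in these hunks, with the field types being an assumption:

```cpp
/// Assumed shape of PartFilterCounters, based only on the members used above.
struct PartFilterCounters
{
    size_t num_initial_selected_parts = 0;
    size_t num_initial_selected_granules = 0;
    size_t num_parts_after_minmax = 0;
    size_t num_granules_after_minmax = 0;
    size_t num_parts_after_partition_pruner = 0;
    size_t num_granules_after_partition_pruner = 0;
};
```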
Names column_names_to_read = real_column_names; @@ -556,7 +584,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( { .min_bytes_to_use_direct_io = settings.min_bytes_to_use_direct_io, .min_bytes_to_use_mmap_io = settings.min_bytes_to_use_mmap_io, - .mmap_cache = context.getMMappedFileCache(), + .mmap_cache = context->getMMappedFileCache(), .max_read_buffer_size = settings.max_read_buffer_size, .save_marks_in_cache = true, .checksum_on_read = settings.checksum_on_read, @@ -568,6 +596,8 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( MergeTreeIndexConditionPtr condition; std::atomic total_granules{0}; std::atomic granules_dropped{0}; + std::atomic total_parts{0}; + std::atomic parts_dropped{0}; DataSkippingIndexAndCondition(MergeTreeIndexPtr index_, MergeTreeIndexConditionPtr condition_) : index(index_) @@ -620,6 +650,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( RangesInDataParts parts_with_ranges(parts.size()); size_t sum_marks = 0; std::atomic sum_marks_pk = 0; + std::atomic sum_parts_pk = 0; std::atomic total_marks_pk = 0; size_t sum_ranges = 0; @@ -642,25 +673,29 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( RangesInDataPart ranges(part, part_index); - total_marks_pk.fetch_add(part->index_granularity.getMarksCount(), std::memory_order_relaxed); + size_t total_marks_count = part->getMarksCount(); + if (total_marks_count && part->index_granularity.hasFinalMark()) + --total_marks_count; + + total_marks_pk.fetch_add(total_marks_count, std::memory_order_relaxed); if (metadata_snapshot->hasPrimaryKey()) ranges.ranges = markRangesFromPKRange(part, metadata_snapshot, key_condition, settings, log); - else - { - size_t total_marks_count = part->getMarksCount(); - if (total_marks_count) - { - if (part->index_granularity.hasFinalMark()) - --total_marks_count; - ranges.ranges = MarkRanges{MarkRange{0, total_marks_count}}; - } - } + else if (total_marks_count) + ranges.ranges = MarkRanges{MarkRange{0, total_marks_count}}; sum_marks_pk.fetch_add(ranges.getMarksCount(), std::memory_order_relaxed); + if (!ranges.ranges.empty()) + sum_parts_pk.fetch_add(1, std::memory_order_relaxed); + for (auto & index_and_condition : useful_indices) { + if (ranges.ranges.empty()) + break; + + index_and_condition.total_parts.fetch_add(1, std::memory_order_relaxed); + size_t total_granules = 0; size_t granules_dropped = 0; ranges.ranges = filterMarksUsingIndex( @@ -672,6 +707,9 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( index_and_condition.total_granules.fetch_add(total_granules, std::memory_order_relaxed); index_and_condition.granules_dropped.fetch_add(granules_dropped, std::memory_order_relaxed); + + if (ranges.ranges.empty()) + index_and_condition.parts_dropped.fetch_add(1, std::memory_order_relaxed); } if (!ranges.ranges.empty()) @@ -704,7 +742,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( for (size_t part_index = 0; part_index < parts.size(); ++part_index) pool.scheduleOrThrowOnError([&, part_index, thread_group = CurrentThread::getGroup()] { - SCOPE_EXIT( + SCOPE_EXIT_SAFE( if (thread_group) CurrentThread::detachQueryIfNotDetached(); ); @@ -737,12 +775,34 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( parts_with_ranges.resize(next_part); } + if (metadata_snapshot->hasPrimaryKey()) + { + auto description = key_condition.getDescription(); + + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::PrimaryKey, + .condition = std::move(description.condition), + .used_keys = 
std::move(description.used_keys), + .num_parts_after = sum_parts_pk.load(std::memory_order_relaxed), + .num_granules_after = sum_marks_pk.load(std::memory_order_relaxed)}); + } + for (const auto & index_and_condition : useful_indices) { const auto & index_name = index_and_condition.index->index.name; LOG_DEBUG(log, "Index {} has dropped {}/{} granules.", backQuote(index_name), index_and_condition.granules_dropped, index_and_condition.total_granules); + + std::string description = index_and_condition.index->index.type + + " GRANULARITY " + std::to_string(index_and_condition.index->index.granularity); + + index_stats->emplace_back(ReadFromMergeTree::IndexStat{ + .type = ReadFromMergeTree::IndexType::Skip, + .name = index_name, + .description = std::move(description), + .num_parts_after = index_and_condition.total_parts - index_and_condition.parts_dropped, + .num_granules_after = index_and_condition.total_granules - index_and_condition.granules_dropped}); } LOG_DEBUG(log, "Selected {}/{} parts by partition key, {} parts by primary key, {}/{} marks by primary key, {} marks to read from {} ranges", @@ -776,7 +836,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( if (data_settings->min_marks_to_honor_max_concurrent_queries > 0 && sum_marks >= data_settings->min_marks_to_honor_max_concurrent_queries) { - query_id = context.getCurrentQueryId(); + query_id = context->getCurrentQueryId(); if (!query_id.empty()) data.insertQueryIdOrThrow(query_id, data_settings->max_concurrent_queries); } @@ -809,6 +869,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( plan = spreadMarkRangesAmongStreamsFinal( std::move(parts_with_ranges), + std::move(index_stats), num_streams, column_names_to_read, metadata_snapshot, @@ -832,6 +893,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( plan = spreadMarkRangesAmongStreamsWithOrder( std::move(parts_with_ranges), + std::move(index_stats), num_streams, column_names_to_read, metadata_snapshot, @@ -849,6 +911,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( { plan = spreadMarkRangesAmongStreams( std::move(parts_with_ranges), + std::move(index_stats), num_streams, column_names_to_read, metadata_snapshot, @@ -960,25 +1023,9 @@ size_t minMarksForConcurrentRead( } -static QueryPlanPtr createPlanFromPipe(Pipe pipe, const String & query_id, const MergeTreeData & data, const std::string & description = "") -{ - auto plan = std::make_unique(); - - std::string storage_name = "MergeTree"; - if (!description.empty()) - storage_name += ' ' + description; - - // Attach QueryIdHolder if needed - if (!query_id.empty()) - pipe.addQueryIdHolder(std::make_shared(query_id, data)); - - auto step = std::make_unique(std::move(pipe), storage_name); - plan->addStep(std::move(step)); - return plan; -} - QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreams( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -1030,75 +1077,32 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreams( if (0 == sum_marks) return {}; + ReadFromMergeTree::Settings step_settings + { + .max_block_size = max_block_size, + .preferred_block_size_bytes = settings.preferred_block_size_bytes, + .preferred_max_column_in_block_size_bytes = settings.preferred_max_column_in_block_size_bytes, + .min_marks_for_concurrent_read = min_marks_for_concurrent_read, + .use_uncompressed_cache = use_uncompressed_cache, + .reader_settings = 
reader_settings, + .backoff_settings = MergeTreeReadPool::BackoffSettings(settings), + }; + if (num_streams > 1) { - /// Parallel query execution. - Pipes res; - /// Reduce the number of num_streams if the data is small. if (sum_marks < num_streams * min_marks_for_concurrent_read && parts.size() < num_streams) num_streams = std::max((sum_marks + min_marks_for_concurrent_read - 1) / min_marks_for_concurrent_read, parts.size()); - - MergeTreeReadPoolPtr pool = std::make_shared( - num_streams, - sum_marks, - min_marks_for_concurrent_read, - std::move(parts), - data, - metadata_snapshot, - query_info.prewhere_info, - true, - column_names, - MergeTreeReadPool::BackoffSettings(settings), - settings.preferred_block_size_bytes, - false); - - /// Let's estimate total number of rows for progress bar. - LOG_TRACE(log, "Reading approx. {} rows with {} streams", total_rows, num_streams); - - for (size_t i = 0; i < num_streams; ++i) - { - auto source = std::make_shared( - i, pool, min_marks_for_concurrent_read, max_block_size, - settings.preferred_block_size_bytes, settings.preferred_max_column_in_block_size_bytes, - data, metadata_snapshot, use_uncompressed_cache, - query_info.prewhere_info, reader_settings, virt_columns); - - if (i == 0) - { - /// Set the approximate number of rows for the first source only - source->addTotalRowsApprox(total_rows); - } - - res.emplace_back(std::move(source)); - } - - return createPlanFromPipe(Pipe::unitePipes(std::move(res)), query_id, data); } - else - { - /// Sequential query execution. - Pipes res; - for (const auto & part : parts) - { - auto source = std::make_shared( - data, metadata_snapshot, part.data_part, max_block_size, settings.preferred_block_size_bytes, - settings.preferred_max_column_in_block_size_bytes, column_names, part.ranges, use_uncompressed_cache, - query_info.prewhere_info, true, reader_settings, virt_columns, part.part_index_in_query); + auto plan = std::make_unique(); + auto step = std::make_unique( + data, metadata_snapshot, query_id, + column_names, std::move(parts), std::move(index_stats), query_info.prewhere_info, virt_columns, + step_settings, num_streams, ReadFromMergeTree::ReadType::Default); - res.emplace_back(std::move(source)); - } - - auto pipe = Pipe::unitePipes(std::move(res)); - - /// Use ConcatProcessor to concat sources together. - /// It is needed to read in parts order (and so in PK order) if single thread is used. - if (pipe.numOutputPorts() > 1) - pipe.addTransform(std::make_shared(pipe.getHeader(), pipe.numOutputPorts())); - - return createPlanFromPipe(std::move(pipe), query_id, data); - } + plan->addStep(std::move(step)); + return plan; } static ActionsDAGPtr createProjection(const Block & header) @@ -1111,6 +1115,7 @@ static ActionsDAGPtr createProjection(const Block & header) QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsWithOrder( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -1218,8 +1223,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsWithOrder( for (size_t i = 0; i < num_streams && !parts.empty(); ++i) { size_t need_marks = min_marks_per_stream; - - Pipes pipes; + RangesInDataParts new_parts; /// Loop over parts. 
/// We will iteratively take part or some subrange of a part from the back @@ -1274,53 +1278,31 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsWithOrder( parts.emplace_back(part); } ranges_to_get_from_part = split_ranges(ranges_to_get_from_part, input_order_info->direction); - - if (input_order_info->direction == 1) - { - pipes.emplace_back(std::make_shared( - data, - metadata_snapshot, - part.data_part, - max_block_size, - settings.preferred_block_size_bytes, - settings.preferred_max_column_in_block_size_bytes, - column_names, - ranges_to_get_from_part, - use_uncompressed_cache, - query_info.prewhere_info, - true, - reader_settings, - virt_columns, - part.part_index_in_query)); - } - else - { - pipes.emplace_back(std::make_shared( - data, - metadata_snapshot, - part.data_part, - max_block_size, - settings.preferred_block_size_bytes, - settings.preferred_max_column_in_block_size_bytes, - column_names, - ranges_to_get_from_part, - use_uncompressed_cache, - query_info.prewhere_info, - true, - reader_settings, - virt_columns, - part.part_index_in_query)); - } + new_parts.emplace_back(part.data_part, part.part_index_in_query, std::move(ranges_to_get_from_part)); } - auto plan = createPlanFromPipe(Pipe::unitePipes(std::move(pipes)), query_id, data, "with order"); - - if (input_order_info->direction != 1) + ReadFromMergeTree::Settings step_settings { - auto reverse_step = std::make_unique(plan->getCurrentDataStream()); - plan->addStep(std::move(reverse_step)); - } + .max_block_size = max_block_size, + .preferred_block_size_bytes = settings.preferred_block_size_bytes, + .preferred_max_column_in_block_size_bytes = settings.preferred_max_column_in_block_size_bytes, + .min_marks_for_concurrent_read = min_marks_for_concurrent_read, + .use_uncompressed_cache = use_uncompressed_cache, + .reader_settings = reader_settings, + .backoff_settings = MergeTreeReadPool::BackoffSettings(settings), + }; + auto read_type = input_order_info->direction == 1 + ? ReadFromMergeTree::ReadType::InOrder + : ReadFromMergeTree::ReadType::InReverseOrder; + + auto plan = std::make_unique(); + auto step = std::make_unique( + data, metadata_snapshot, query_id, + column_names, std::move(new_parts), std::move(index_stats), query_info.prewhere_info, virt_columns, + step_settings, num_streams, read_type); + + plan->addStep(std::move(step)); plans.emplace_back(std::move(plan)); } @@ -1360,8 +1342,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsWithOrder( for (const auto & plan : plans) input_streams.emplace_back(plan->getCurrentDataStream()); - const auto & common_header = plans.front()->getCurrentDataStream().header; - auto union_step = std::make_unique(std::move(input_streams), common_header); + auto union_step = std::make_unique(std::move(input_streams)); auto plan = std::make_unique(); plan->unitePlans(std::move(union_step), std::move(plans)); @@ -1372,6 +1353,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsWithOrder( QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -1413,7 +1395,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( num_streams = settings.max_final_threads; /// If setting do_not_merge_across_partitions_select_final is true than we won't merge parts from different partitions. 
- /// We have all parts in parts vector, where parts with same partition are nerby. + /// We have all parts in parts vector, where parts with same partition are nearby. /// So we will store iterators pointed to the beginning of each partition range (and parts.end()), /// then we will create a pipe for each partition that will run selecting processor and merging processor /// for the parts with this partition. In the end we will unite all the pipes. @@ -1452,7 +1434,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( QueryPlanPtr plan; { - Pipes pipes; + RangesInDataParts new_parts; /// If do_not_merge_across_partitions_select_final is true and there is only one part in partition /// with level > 0 then we won't postprocess this part and if num_streams > 1 we @@ -1471,36 +1453,35 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( { for (auto part_it = parts_to_merge_ranges[range_index]; part_it != parts_to_merge_ranges[range_index + 1]; ++part_it) { - auto source_processor = std::make_shared( - data, - metadata_snapshot, - part_it->data_part, - max_block_size, - settings.preferred_block_size_bytes, - settings.preferred_max_column_in_block_size_bytes, - column_names, - part_it->ranges, - use_uncompressed_cache, - query_info.prewhere_info, - true, - reader_settings, - virt_columns, - part_it->part_index_in_query); - - pipes.emplace_back(std::move(source_processor)); + new_parts.emplace_back(part_it->data_part, part_it->part_index_in_query, part_it->ranges); } } - if (pipes.empty()) + if (new_parts.empty()) continue; - auto pipe = Pipe::unitePipes(std::move(pipes)); + ReadFromMergeTree::Settings step_settings + { + .max_block_size = max_block_size, + .preferred_block_size_bytes = settings.preferred_block_size_bytes, + .preferred_max_column_in_block_size_bytes = settings.preferred_max_column_in_block_size_bytes, + .min_marks_for_concurrent_read = 0, /// this setting is not used for reading in order + .use_uncompressed_cache = use_uncompressed_cache, + .reader_settings = reader_settings, + .backoff_settings = MergeTreeReadPool::BackoffSettings(settings), + }; + + plan = std::make_unique(); + auto step = std::make_unique( + data, metadata_snapshot, query_id, + column_names, std::move(new_parts), std::move(index_stats), query_info.prewhere_info, virt_columns, + step_settings, num_streams, ReadFromMergeTree::ReadType::InOrder); + + plan->addStep(std::move(step)); /// Drop temporary columns, added by 'sorting_key_expr' if (!out_projection) - out_projection = createProjection(pipe.getHeader()); - - plan = createPlanFromPipe(std::move(pipe), query_id, data, "with final"); + out_projection = createProjection(plan->getCurrentDataStream().header); } auto expression_step = std::make_unique( @@ -1547,7 +1528,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( if (!lonely_parts.empty()) { - Pipes pipes; + RangesInDataParts new_parts; size_t num_streams_for_lonely_parts = num_streams * lonely_parts.size(); @@ -1562,41 +1543,28 @@ QueryPlanPtr MergeTreeDataSelectExecutor::spreadMarkRangesAmongStreamsFinal( if (sum_marks_in_lonely_parts < num_streams_for_lonely_parts * min_marks_for_concurrent_read && lonely_parts.size() < num_streams_for_lonely_parts) num_streams_for_lonely_parts = std::max((sum_marks_in_lonely_parts + min_marks_for_concurrent_read - 1) / min_marks_for_concurrent_read, lonely_parts.size()); - - MergeTreeReadPoolPtr pool = std::make_shared( - num_streams_for_lonely_parts, - sum_marks_in_lonely_parts, - 
min_marks_for_concurrent_read, - std::move(lonely_parts), - data, - metadata_snapshot, - query_info.prewhere_info, - true, - column_names, - MergeTreeReadPool::BackoffSettings(settings), - settings.preferred_block_size_bytes, - false); - - LOG_TRACE(log, "Reading approx. {} rows with {} streams", total_rows_in_lonely_parts, num_streams_for_lonely_parts); - - for (size_t i = 0; i < num_streams_for_lonely_parts; ++i) + ReadFromMergeTree::Settings step_settings { - auto source = std::make_shared( - i, pool, min_marks_for_concurrent_read, max_block_size, - settings.preferred_block_size_bytes, settings.preferred_max_column_in_block_size_bytes, - data, metadata_snapshot, use_uncompressed_cache, - query_info.prewhere_info, reader_settings, virt_columns); + .max_block_size = max_block_size, + .preferred_block_size_bytes = settings.preferred_block_size_bytes, + .preferred_max_column_in_block_size_bytes = settings.preferred_max_column_in_block_size_bytes, + .min_marks_for_concurrent_read = min_marks_for_concurrent_read, + .use_uncompressed_cache = use_uncompressed_cache, + .reader_settings = reader_settings, + .backoff_settings = MergeTreeReadPool::BackoffSettings(settings), + }; - pipes.emplace_back(std::move(source)); - } + auto plan = std::make_unique(); + auto step = std::make_unique( + data, metadata_snapshot, query_id, + column_names, std::move(lonely_parts), std::move(index_stats), query_info.prewhere_info, virt_columns, + step_settings, num_streams_for_lonely_parts, ReadFromMergeTree::ReadType::Default); - auto pipe = Pipe::unitePipes(std::move(pipes)); + plan->addStep(std::move(step)); /// Drop temporary columns, added by 'sorting_key_expr' if (!out_projection) - out_projection = createProjection(pipe.getHeader()); - - QueryPlanPtr plan = createPlanFromPipe(std::move(pipe), query_id, data, "with final"); + out_projection = createProjection(plan->getCurrentDataStream().header); auto expression_step = std::make_unique( plan->getCurrentDataStream(), @@ -1897,7 +1865,8 @@ void MergeTreeDataSelectExecutor::selectPartsToRead( const std::optional & minmax_idx_condition, const DataTypes & minmax_columns_types, std::optional & partition_pruner, - const PartitionIdToMaxBlock * max_block_numbers_to_read) + const PartitionIdToMaxBlock * max_block_numbers_to_read, + PartFilterCounters & counters) { auto prev_parts = parts; parts.clear(); @@ -1910,22 +1879,35 @@ void MergeTreeDataSelectExecutor::selectPartsToRead( if (part->isEmpty()) continue; + if (max_block_numbers_to_read) + { + auto blocks_iterator = max_block_numbers_to_read->find(part->info.partition_id); + if (blocks_iterator == max_block_numbers_to_read->end() || part->info.max_block > blocks_iterator->second) + continue; + } + + size_t num_granules = part->getMarksCount(); + if (num_granules && part->index_granularity.hasFinalMark()) + --num_granules; + + counters.num_initial_selected_parts += 1; + counters.num_initial_selected_granules += num_granules; + if (minmax_idx_condition && !minmax_idx_condition->checkInHyperrectangle( part->minmax_idx.hyperrectangle, minmax_columns_types).can_be_true) continue; + counters.num_parts_after_minmax += 1; + counters.num_granules_after_minmax += num_granules; + if (partition_pruner) { if (partition_pruner->canBePruned(part)) continue; } - if (max_block_numbers_to_read) - { - auto blocks_iterator = max_block_numbers_to_read->find(part->info.partition_id); - if (blocks_iterator == max_block_numbers_to_read->end() || part->info.max_block > blocks_iterator->second) - continue; - } + 
counters.num_parts_after_partition_pruner += 1; + counters.num_granules_after_partition_pruner += num_granules; parts.push_back(part); } @@ -1938,16 +1920,14 @@ void MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( const DataTypes & minmax_columns_types, std::optional & partition_pruner, const PartitionIdToMaxBlock * max_block_numbers_to_read, - const Context & query_context) const + ContextPtr query_context, + PartFilterCounters & counters) const { - /// const_cast to add UUIDs to context. Bad practice. - Context & non_const_context = const_cast(query_context); - /// process_parts prepare parts that have to be read for the query, /// returns false if duplicated parts' UUID have been met auto select_parts = [&] (MergeTreeData::DataPartsVector & selected_parts) -> bool { - auto ignored_part_uuids = non_const_context.getIgnoredPartUUIDs(); + auto ignored_part_uuids = query_context->getIgnoredPartUUIDs(); std::unordered_set temp_part_uuids; auto prev_parts = selected_parts; @@ -1961,17 +1941,6 @@ void MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( if (part->isEmpty()) continue; - if (minmax_idx_condition - && !minmax_idx_condition->checkInHyperrectangle(part->minmax_idx.hyperrectangle, minmax_columns_types) - .can_be_true) - continue; - - if (partition_pruner) - { - if (partition_pruner->canBePruned(part)) - continue; - } - if (max_block_numbers_to_read) { auto blocks_iterator = max_block_numbers_to_read->find(part->info.partition_id); @@ -1979,13 +1948,37 @@ void MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( continue; } + /// Skip the part if its uuid is meant to be excluded + if (part->uuid != UUIDHelpers::Nil && ignored_part_uuids->has(part->uuid)) + continue; + + size_t num_granules = part->getMarksCount(); + if (num_granules && part->index_granularity.hasFinalMark()) + --num_granules; + + counters.num_initial_selected_parts += 1; + counters.num_initial_selected_granules += num_granules; + + if (minmax_idx_condition + && !minmax_idx_condition->checkInHyperrectangle(part->minmax_idx.hyperrectangle, minmax_columns_types) + .can_be_true) + continue; + + counters.num_parts_after_minmax += 1; + counters.num_granules_after_minmax += num_granules; + + if (partition_pruner) + { + if (partition_pruner->canBePruned(part)) + continue; + } + + counters.num_parts_after_partition_pruner += 1; + counters.num_granules_after_partition_pruner += num_granules; + /// populate UUIDs and exclude ignored parts if enabled if (part->uuid != UUIDHelpers::Nil) { - /// Skip the part if its uuid is meant to be excluded - if (ignored_part_uuids->has(part->uuid)) - continue; - auto result = temp_part_uuids.insert(part->uuid); if (!result.second) throw Exception("Found a part with the same UUID on the same replica.", ErrorCodes::LOGICAL_ERROR); @@ -1996,12 +1989,12 @@ void MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( if (!temp_part_uuids.empty()) { - auto duplicates = non_const_context.getPartUUIDs()->add(std::vector{temp_part_uuids.begin(), temp_part_uuids.end()}); + auto duplicates = query_context->getPartUUIDs()->add(std::vector{temp_part_uuids.begin(), temp_part_uuids.end()}); if (!duplicates.empty()) { /// on a local replica with prefer_localhost_replica=1 if any duplicates appeared during the first pass, /// adding them to the exclusion, so they will be skipped on second pass - non_const_context.getIgnoredPartUUIDs()->add(duplicates); + query_context->getIgnoredPartUUIDs()->add(duplicates); return false; } } @@ -2017,6 +2010,8 @@ void 
MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( { LOG_DEBUG(log, "Found duplicate uuids locally, will retry part selection without them"); + counters = PartFilterCounters(); + /// Second attempt didn't help, throw an exception if (!select_parts(parts)) throw Exception("Found duplicate UUIDs while processing query.", ErrorCodes::DUPLICATED_PART_UUIDS); diff --git a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h index 0702605a539..4129b3ea2a0 100644 --- a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h +++ b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h @@ -5,6 +5,7 @@ #include #include #include +#include namespace DB @@ -29,7 +30,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, UInt64 max_block_size, unsigned num_streams, const PartitionIdToMaxBlock * max_block_numbers_to_read = nullptr) const; @@ -39,7 +40,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, UInt64 max_block_size, unsigned num_streams, const PartitionIdToMaxBlock * max_block_numbers_to_read = nullptr) const; @@ -57,6 +58,7 @@ private: QueryPlanPtr spreadMarkRangesAmongStreams( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -71,6 +73,7 @@ private: /// out_projection - save projection only with columns, requested to read QueryPlanPtr spreadMarkRangesAmongStreamsWithOrder( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -86,6 +89,7 @@ private: QueryPlanPtr spreadMarkRangesAmongStreamsFinal( RangesInDataParts && parts, + ReadFromMergeTree::IndexStatPtr index_stats, size_t num_streams, const Names & column_names, const StorageMetadataPtr & metadata_snapshot, @@ -123,6 +127,16 @@ private: size_t & granules_dropped, Poco::Logger * log); + struct PartFilterCounters + { + size_t num_initial_selected_parts = 0; + size_t num_initial_selected_granules = 0; + size_t num_parts_after_minmax = 0; + size_t num_granules_after_minmax = 0; + size_t num_parts_after_partition_pruner = 0; + size_t num_granules_after_partition_pruner = 0; + }; + /// Select the parts in which there can be data that satisfy `minmax_idx_condition` and that match the condition on `_part`, /// as well as `max_block_number_to_read`. static void selectPartsToRead( @@ -131,7 +145,8 @@ private: const std::optional & minmax_idx_condition, const DataTypes & minmax_columns_types, std::optional & partition_pruner, - const PartitionIdToMaxBlock * max_block_numbers_to_read); + const PartitionIdToMaxBlock * max_block_numbers_to_read, + PartFilterCounters & counters); /// Same as previous but also skip parts uuids if any to the query context, or skip parts which uuids marked as excluded. 
void selectPartsToReadWithUUIDFilter( @@ -141,7 +156,8 @@ private: const DataTypes & minmax_columns_types, std::optional & partition_pruner, const PartitionIdToMaxBlock * max_block_numbers_to_read, - const Context & query_context) const; + ContextPtr query_context, + PartFilterCounters & counters) const; }; } diff --git a/src/Storages/MergeTree/MergeTreeDataWriter.cpp b/src/Storages/MergeTree/MergeTreeDataWriter.cpp index 3b4cb385a34..79d95eb03ee 100644 --- a/src/Storages/MergeTree/MergeTreeDataWriter.cpp +++ b/src/Storages/MergeTree/MergeTreeDataWriter.cpp @@ -396,7 +396,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataWriter::writeTempPart(BlockWithPa /// This effectively chooses minimal compression method: /// either default lz4 or compression method with zero thresholds on absolute and relative part size. - auto compression_codec = data.global_context.chooseCompressionCodec(0, 0); + auto compression_codec = data.getContext()->chooseCompressionCodec(0, 0); const auto & index_factory = MergeTreeIndexFactory::instance(); MergedBlockOutputStream out(new_data_part, metadata_snapshot, columns, index_factory.getMany(metadata_snapshot->getSecondaryIndices()), compression_codec); diff --git a/src/Storages/MergeTree/MergeTreeDeduplicationLog.cpp b/src/Storages/MergeTree/MergeTreeDeduplicationLog.cpp new file mode 100644 index 00000000000..33960e2e1ff --- /dev/null +++ b/src/Storages/MergeTree/MergeTreeDeduplicationLog.cpp @@ -0,0 +1,311 @@ +#include +#include +#include +#include +#include +#include +#include +#include + +namespace DB +{ + +namespace +{ + +/// Deduplication operation part was dropped or added +enum class MergeTreeDeduplicationOp : uint8_t +{ + ADD = 1, + DROP = 2, +}; + +/// Record for deduplication on disk +struct MergeTreeDeduplicationLogRecord +{ + MergeTreeDeduplicationOp operation; + std::string part_name; + std::string block_id; +}; + +void writeRecord(const MergeTreeDeduplicationLogRecord & record, WriteBuffer & out) +{ + writeIntText(static_cast(record.operation), out); + writeChar('\t', out); + writeString(record.part_name, out); + writeChar('\t', out); + writeString(record.block_id, out); + writeChar('\n', out); + out.next(); +} + +void readRecord(MergeTreeDeduplicationLogRecord & record, ReadBuffer & in) +{ + uint8_t op; + readIntText(op, in); + record.operation = static_cast(op); + assertChar('\t', in); + readString(record.part_name, in); + assertChar('\t', in); + readString(record.block_id, in); + assertChar('\n', in); +} + + +std::string getLogPath(const std::string & prefix, size_t number) +{ + std::filesystem::path path(prefix); + path /= std::filesystem::path(std::string{"deduplication_log_"} + std::to_string(number) + ".txt"); + return path; +} + +size_t getLogNumber(const std::string & path_str) +{ + std::filesystem::path path(path_str); + std::string filename = path.stem(); + Strings filename_parts; + boost::split(filename_parts, filename, boost::is_any_of("_")); + + return parse(filename_parts[2]); +} + +} + +MergeTreeDeduplicationLog::MergeTreeDeduplicationLog( + const std::string & logs_dir_, + size_t deduplication_window_, + const MergeTreeDataFormatVersion & format_version_) + : logs_dir(logs_dir_) + , deduplication_window(deduplication_window_) + , rotate_interval(deduplication_window_ * 2) /// actually it doesn't matter + , format_version(format_version_) + , deduplication_map(deduplication_window) +{ + namespace fs = std::filesystem; + if (deduplication_window != 0 && !fs::exists(logs_dir)) + fs::create_directories(logs_dir); +} + +void 
MergeTreeDeduplicationLog::load() +{ + namespace fs = std::filesystem; + if (!fs::exists(logs_dir)) + return; + + for (const auto & p : fs::directory_iterator(logs_dir)) + { + const auto & path = p.path(); + auto log_number = getLogNumber(path); + existing_logs[log_number] = {path, 0}; + } + + /// We should know which logs are exist even in case + /// of deduplication_window = 0 + if (!existing_logs.empty()) + current_log_number = existing_logs.rbegin()->first; + + if (deduplication_window != 0) + { + /// Order important, we load history from the begging to the end + for (auto & [log_number, desc] : existing_logs) + { + try + { + desc.entries_count = loadSingleLog(desc.path); + } + catch (...) + { + tryLogCurrentException(__PRETTY_FUNCTION__, "Error while loading MergeTree deduplication log on path " + desc.path); + } + } + + /// Start new log, drop previous + rotateAndDropIfNeeded(); + + /// Can happen in case we have unfinished log + if (!current_writer) + current_writer = std::make_unique(existing_logs.rbegin()->second.path, DBMS_DEFAULT_BUFFER_SIZE, O_APPEND | O_CREAT | O_WRONLY); + } +} + +size_t MergeTreeDeduplicationLog::loadSingleLog(const std::string & path) +{ + ReadBufferFromFile read_buf(path); + + size_t total_entries = 0; + while (!read_buf.eof()) + { + MergeTreeDeduplicationLogRecord record; + readRecord(record, read_buf); + if (record.operation == MergeTreeDeduplicationOp::DROP) + deduplication_map.erase(record.block_id); + else + deduplication_map.insert(record.block_id, MergeTreePartInfo::fromPartName(record.part_name, format_version)); + total_entries++; + } + return total_entries; +} + +void MergeTreeDeduplicationLog::rotate() +{ + /// We don't deduplicate anything so we don't need any writers + if (deduplication_window == 0) + return; + + current_log_number++; + auto new_path = getLogPath(logs_dir, current_log_number); + MergeTreeDeduplicationLogNameDescription log_description{new_path, 0}; + existing_logs.emplace(current_log_number, log_description); + + if (current_writer) + current_writer->sync(); + + current_writer = std::make_unique(log_description.path, DBMS_DEFAULT_BUFFER_SIZE, O_APPEND | O_CREAT | O_WRONLY); +} + +void MergeTreeDeduplicationLog::dropOutdatedLogs() +{ + size_t current_sum = 0; + size_t remove_from_value = 0; + /// Go from end to the beginning + for (auto itr = existing_logs.rbegin(); itr != existing_logs.rend(); ++itr) + { + if (current_sum > deduplication_window) + { + /// We have more logs than required, all older files (including current) can be dropped + remove_from_value = itr->first; + break; + } + + auto & description = itr->second; + current_sum += description.entries_count; + } + + /// If we found some logs to drop + if (remove_from_value != 0) + { + /// Go from the beginning to the end and drop all outdated logs + for (auto itr = existing_logs.begin(); itr != existing_logs.end();) + { + size_t number = itr->first; + std::filesystem::remove(itr->second.path); + itr = existing_logs.erase(itr); + if (remove_from_value == number) + break; + } + } + +} + +void MergeTreeDeduplicationLog::rotateAndDropIfNeeded() +{ + /// If we don't have logs at all or already have enough records in current + if (existing_logs.empty() || existing_logs[current_log_number].entries_count >= rotate_interval) + { + rotate(); + dropOutdatedLogs(); + } +} + +std::pair MergeTreeDeduplicationLog::addPart(const std::string & block_id, const MergeTreePartInfo & part_info) +{ + std::lock_guard lock(state_mutex); + + /// We support zero case because user may want to 
disable deduplication with + /// ALTER MODIFY SETTING query. It's much more simpler to handle zero case + /// here then destroy whole object, check for null pointer from different + /// threads and so on. + if (deduplication_window == 0) + return std::make_pair(part_info, true); + + /// If we already have this block let's deduplicate it + if (deduplication_map.contains(block_id)) + { + auto info = deduplication_map.get(block_id); + return std::make_pair(info, false); + } + + assert(current_writer != nullptr); + + /// Create new record + MergeTreeDeduplicationLogRecord record; + record.operation = MergeTreeDeduplicationOp::ADD; + record.part_name = part_info.getPartName(); + record.block_id = block_id; + /// Write it to disk + writeRecord(record, *current_writer); + /// We have one more record in current log + existing_logs[current_log_number].entries_count++; + /// Add to deduplication map + deduplication_map.insert(record.block_id, part_info); + /// Rotate and drop old logs if needed + rotateAndDropIfNeeded(); + + return std::make_pair(part_info, true); +} + +void MergeTreeDeduplicationLog::dropPart(const MergeTreePartInfo & drop_part_info) +{ + std::lock_guard lock(state_mutex); + + /// We support zero case because user may want to disable deduplication with + /// ALTER MODIFY SETTING query. It's much more simpler to handle zero case + /// here then destroy whole object, check for null pointer from different + /// threads and so on. + if (deduplication_window == 0) + return; + + assert(current_writer != nullptr); + + for (auto itr = deduplication_map.begin(); itr != deduplication_map.end(); /* no increment here, we erasing from map */) + { + const auto & part_info = itr->value; + /// Part is covered by dropped part, let's remove it from + /// deduplication history + if (drop_part_info.contains(part_info)) + { + /// Create drop record + MergeTreeDeduplicationLogRecord record; + record.operation = MergeTreeDeduplicationOp::DROP; + record.part_name = part_info.getPartName(); + record.block_id = itr->key; + /// Write it to disk + writeRecord(record, *current_writer); + /// We have one more record on disk + existing_logs[current_log_number].entries_count++; + + /// Increment itr before erase, otherwise it will invalidated + ++itr; + /// Remove block_id from in-memory table + deduplication_map.erase(record.block_id); + + /// Rotate and drop old logs if needed + rotateAndDropIfNeeded(); + } + else + { + ++itr; + } + } +} + +void MergeTreeDeduplicationLog::setDeduplicationWindowSize(size_t deduplication_window_) +{ + std::lock_guard lock(state_mutex); + + deduplication_window = deduplication_window_; + rotate_interval = deduplication_window * 2; + + /// If settings was set for the first time with ALTER MODIFY SETTING query + if (deduplication_window != 0 && !std::filesystem::exists(logs_dir)) + std::filesystem::create_directories(logs_dir); + + deduplication_map.setMaxSize(deduplication_window); + rotateAndDropIfNeeded(); + + /// Can happen in case we have unfinished log + if (!current_writer) + current_writer = std::make_unique(existing_logs.rbegin()->second.path, DBMS_DEFAULT_BUFFER_SIZE, O_APPEND | O_CREAT | O_WRONLY); +} + +} diff --git a/src/Storages/MergeTree/MergeTreeDeduplicationLog.h b/src/Storages/MergeTree/MergeTreeDeduplicationLog.h new file mode 100644 index 00000000000..281a76050a2 --- /dev/null +++ b/src/Storages/MergeTree/MergeTreeDeduplicationLog.h @@ -0,0 +1,192 @@ +#pragma once +#include +#include +#include +#include +#include +#include +#include +#include +#include + 
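The deduplication log above only remembers the last `deduplication_window` block IDs: `addPart` refuses a block that is still present in the in-memory map, and the oldest entries fall out in FIFO order as new ones arrive (with the on-disk logs rotated and dropped to match). The following is a simplified, self-contained model of just that window semantics; the class name and block IDs are invented for illustration, and the persistent ADD/DROP records and log rotation are deliberately omitted.

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>

/// Remembers the last `max_size` block IDs in insertion order; the oldest ID is evicted first.
class DeduplicationWindow
{
public:
    explicit DeduplicationWindow(size_t max_size_) : max_size(max_size_) {}

    /// Returns false if the block ID is still inside the window (i.e. the insert is a duplicate).
    bool tryInsert(const std::string & block_id)
    {
        if (index.count(block_id))
            return false;

        if (queue.size() == max_size)
        {
            index.erase(queue.front());
            queue.pop_front();
        }

        queue.push_back(block_id);
        index.emplace(block_id, std::prev(queue.end()));
        return true;
    }

private:
    std::list<std::string> queue;   /// insertion order (FIFO)
    std::unordered_map<std::string, std::list<std::string>::iterator> index;   /// fast lookup by block ID
    size_t max_size;
};

int main()
{
    DeduplicationWindow window(2);
    assert(window.tryInsert("88_hash_a"));    /// new block: accepted
    assert(!window.tryInsert("88_hash_a"));   /// same block again: deduplicated
    assert(window.tryInsert("88_hash_b"));
    assert(window.tryInsert("88_hash_c"));    /// window of 2 is full, "88_hash_a" is evicted
    assert(window.tryInsert("88_hash_a"));    /// no longer inside the window, accepted again
    return 0;
}
```

This is the behaviour users opt into with a non-zero window size: duplicates are only detected as long as the original block ID has not yet been pushed out of the window.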
+namespace DB +{ + +/// Description of dedupliction log +struct MergeTreeDeduplicationLogNameDescription +{ + /// Path to log + std::string path; + + /// How many entries we have in log + size_t entries_count; +}; + +/// Simple string-key HashTable with fixed size based on STL containers. +/// Preserves order using linked list and remove elements +/// on overflow in FIFO order. +template +class LimitedOrderedHashMap +{ +private: + struct ListNode + { + std::string key; + V value; + }; + using Queue = std::list; + using IndexMap = std::unordered_map; + + Queue queue; + IndexMap map; + size_t max_size; +public: + using iterator = typename Queue::iterator; + using const_iterator = typename Queue::const_iterator; + using reverse_iterator = typename Queue::reverse_iterator; + using const_reverse_iterator = typename Queue::const_reverse_iterator; + + explicit LimitedOrderedHashMap(size_t max_size_) + : max_size(max_size_) + {} + + bool contains(const std::string & key) const + { + return map.find(key) != map.end(); + } + + V get(const std::string & key) const + { + return map.at(key)->value; + } + + size_t size() const + { + return queue.size(); + } + + void setMaxSize(size_t max_size_) + { + max_size = max_size_; + while (size() > max_size) + { + map.erase(queue.front().key); + queue.pop_front(); + } + } + + bool erase(const std::string & key) + { + auto it = map.find(key); + if (it == map.end()) + return false; + + auto queue_itr = it->second; + map.erase(it); + queue.erase(queue_itr); + + return true; + } + + bool insert(const std::string & key, const V & value) + { + auto it = map.find(key); + if (it != map.end()) + return false; + + if (size() == max_size) + { + map.erase(queue.front().key); + queue.pop_front(); + } + + ListNode elem{key, value}; + auto itr = queue.insert(queue.end(), elem); + map.emplace(itr->key, itr); + return true; + } + + void clear() + { + map.clear(); + queue.clear(); + } + + iterator begin() { return queue.begin(); } + const_iterator begin() const { return queue.cbegin(); } + iterator end() { return queue.end(); } + const_iterator end() const { return queue.cend(); } + + reverse_iterator rbegin() { return queue.rbegin(); } + const_reverse_iterator rbegin() const { return queue.crbegin(); } + reverse_iterator rend() { return queue.rend(); } + const_reverse_iterator rend() const { return queue.crend(); } +}; + +/// Fixed-size log for deduplication in non-replicated MergeTree. +/// Stores records on disk for zero-level parts in human-readable format: +/// operation part_name partition_id_check_sum +/// 1 88_18_18_0 88_10619499460461868496_9553701830997749308 +/// 2 77_14_14_0 77_15147918179036854170_6725063583757244937 +/// 2 77_15_15_0 77_14977227047908934259_8047656067364802772 +/// 1 77_20_20_0 77_15147918179036854170_6725063583757244937 +/// Also stores them in memory in hash table with limited size. +class MergeTreeDeduplicationLog +{ +public: + MergeTreeDeduplicationLog( + const std::string & logs_dir_, + size_t deduplication_window_, + const MergeTreeDataFormatVersion & format_version_); + + /// Add part into in-memory hash table and to disk + /// Return true and part info if insertion was successful. 
+ /// Otherwise, in case of duplicate, return false and previous part name with same hash (useful for logging) + std::pair addPart(const std::string & block_id, const MergeTreePartInfo & part); + + /// Remove all covered parts from in memory table and add DROP records to the disk + void dropPart(const MergeTreePartInfo & drop_part_info); + + /// Load history from disk. Ignores broken logs. + void load(); + + void setDeduplicationWindowSize(size_t deduplication_window_); +private: + const std::string logs_dir; + /// Size of deduplication window + size_t deduplication_window; + + /// How often we create new logs. Not very important, + /// default value equals deduplication_window * 2 + size_t rotate_interval; + const MergeTreeDataFormatVersion format_version; + + /// Current log number. Always growing number. + size_t current_log_number = 0; + + /// All existing logs in order of their numbers + std::map existing_logs; + + /// In memory hash-table + LimitedOrderedHashMap deduplication_map; + + /// Writer to the current log file + std::unique_ptr current_writer; + + /// Overall mutex because we can have a lot of cocurrent inserts + std::mutex state_mutex; + + /// Start new log + void rotate(); + + /// Remove all old logs with non-needed records for deduplication_window + void dropOutdatedLogs(); + + /// Execute both previous methods if needed + void rotateAndDropIfNeeded(); + + /// Load single log from disk. In case of corruption throws exceptions + size_t loadSingleLog(const std::string & path); +}; + +} diff --git a/src/Storages/MergeTree/MergeTreeIndexBloomFilter.cpp b/src/Storages/MergeTree/MergeTreeIndexBloomFilter.cpp index a98ba16978d..c37d710ec8f 100644 --- a/src/Storages/MergeTree/MergeTreeIndexBloomFilter.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexBloomFilter.cpp @@ -67,7 +67,7 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexBloomFilter::createIndexAggregator() c return std::make_shared(bits_per_row, hash_functions, index.column_names); } -MergeTreeIndexConditionPtr MergeTreeIndexBloomFilter::createIndexCondition(const SelectQueryInfo & query_info, const Context & context) const +MergeTreeIndexConditionPtr MergeTreeIndexBloomFilter::createIndexCondition(const SelectQueryInfo & query_info, ContextPtr context) const { return std::make_shared(query_info, context, index.sample_block, hash_functions); } diff --git a/src/Storages/MergeTree/MergeTreeIndexBloomFilter.h b/src/Storages/MergeTree/MergeTreeIndexBloomFilter.h index 1aac2c22aa0..9112f23ee64 100644 --- a/src/Storages/MergeTree/MergeTreeIndexBloomFilter.h +++ b/src/Storages/MergeTree/MergeTreeIndexBloomFilter.h @@ -20,7 +20,7 @@ public: MergeTreeIndexAggregatorPtr createIndexAggregator() const override; - MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query_info, const Context & context) const override; + MergeTreeIndexConditionPtr createIndexCondition(const SelectQueryInfo & query_info, ContextPtr context) const override; bool mayBenefitFromIndexForIn(const ASTPtr & node) const override; diff --git a/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp b/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp index a9915f01645..031129a35f4 100644 --- a/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.cpp @@ -87,11 +87,11 @@ bool maybeTrueOnBloomFilter(const IColumn * hash_column, const BloomFilterPtr & } MergeTreeIndexConditionBloomFilter::MergeTreeIndexConditionBloomFilter( - const SelectQueryInfo & info_, const 
Context & context_, const Block & header_, size_t hash_functions_) - : header(header_), context(context_), query_info(info_), hash_functions(hash_functions_) + const SelectQueryInfo & info_, ContextPtr context_, const Block & header_, size_t hash_functions_) + : WithContext(context_), header(header_), query_info(info_), hash_functions(hash_functions_) { - auto atom_from_ast = [this](auto & node, auto &, auto & constants, auto & out) { return traverseAtomAST(node, constants, out); }; - rpn = std::move(RPNBuilder(info_, context, atom_from_ast).extractRPN()); + auto atom_from_ast = [this](auto & node, auto, auto & constants, auto & out) { return traverseAtomAST(node, constants, out); }; + rpn = std::move(RPNBuilder(info_, getContext(), atom_from_ast).extractRPN()); } bool MergeTreeIndexConditionBloomFilter::alwaysUnknownOrTrue() const diff --git a/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h b/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h index 0b02e64d43c..61e796fb6f7 100644 --- a/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h +++ b/src/Storages/MergeTree/MergeTreeIndexConditionBloomFilter.h @@ -13,7 +13,7 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -class MergeTreeIndexConditionBloomFilter final : public IMergeTreeIndexCondition +class MergeTreeIndexConditionBloomFilter final : public IMergeTreeIndexCondition, WithContext { public: struct RPNElement @@ -42,7 +42,7 @@ public: std::vector> predicate; }; - MergeTreeIndexConditionBloomFilter(const SelectQueryInfo & info_, const Context & context_, const Block & header_, size_t hash_functions_); + MergeTreeIndexConditionBloomFilter(const SelectQueryInfo & info_, ContextPtr context_, const Block & header_, size_t hash_functions_); bool alwaysUnknownOrTrue() const override; @@ -56,7 +56,6 @@ public: private: const Block & header; - const Context & context; const SelectQueryInfo & query_info; const size_t hash_functions; std::vector rpn; diff --git a/src/Storages/MergeTree/MergeTreeIndexFullText.cpp b/src/Storages/MergeTree/MergeTreeIndexFullText.cpp index 419a417c3e8..10136cd1069 100644 --- a/src/Storages/MergeTree/MergeTreeIndexFullText.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexFullText.cpp @@ -166,7 +166,7 @@ void MergeTreeIndexAggregatorFullText::update(const Block & block, size_t * pos, MergeTreeConditionFullText::MergeTreeConditionFullText( const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const Block & index_sample_block, const BloomFilterParameters & params_, TokenExtractorPtr token_extactor_) @@ -179,7 +179,7 @@ MergeTreeConditionFullText::MergeTreeConditionFullText( rpn = std::move( RPNBuilder( query_info, context, - [this] (const ASTPtr & node, const Context & /* context */, Block & block_with_constants, RPNElement & out) -> bool + [this] (const ASTPtr & node, ContextPtr /* context */, Block & block_with_constants, RPNElement & out) -> bool { return this->atomFromAST(node, block_with_constants, out); }).extractRPN()); @@ -566,7 +566,7 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexFullText::createIndexAggregator() cons } MergeTreeIndexConditionPtr MergeTreeIndexFullText::createIndexCondition( - const SelectQueryInfo & query, const Context & context) const + const SelectQueryInfo & query, ContextPtr context) const { return std::make_shared(query, context, index.sample_block, params, token_extractor.get()); }; diff --git a/src/Storages/MergeTree/MergeTreeIndexFullText.h b/src/Storages/MergeTree/MergeTreeIndexFullText.h index 
d861751c7df..1385621f97f 100644 --- a/src/Storages/MergeTree/MergeTreeIndexFullText.h +++ b/src/Storages/MergeTree/MergeTreeIndexFullText.h @@ -87,7 +87,7 @@ class MergeTreeConditionFullText final : public IMergeTreeIndexCondition public: MergeTreeConditionFullText( const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, const Block & index_sample_block, const BloomFilterParameters & params_, TokenExtractorPtr token_extactor_); @@ -208,7 +208,7 @@ public: MergeTreeIndexAggregatorPtr createIndexAggregator() const override; MergeTreeIndexConditionPtr createIndexCondition( - const SelectQueryInfo & query, const Context & context) const override; + const SelectQueryInfo & query, ContextPtr context) const override; bool mayBenefitFromIndexForIn(const ASTPtr & node) const override; diff --git a/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp b/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp index e8b526d1426..099d561cf80 100644 --- a/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexMinMax.cpp @@ -138,7 +138,7 @@ void MergeTreeIndexAggregatorMinMax::update(const Block & block, size_t * pos, s MergeTreeIndexConditionMinMax::MergeTreeIndexConditionMinMax( const IndexDescription & index, const SelectQueryInfo & query, - const Context & context) + ContextPtr context) : index_data_types(index.data_types) , condition(query, context, index.column_names, index.expression) { @@ -175,7 +175,7 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexMinMax::createIndexAggregator() const } MergeTreeIndexConditionPtr MergeTreeIndexMinMax::createIndexCondition( - const SelectQueryInfo & query, const Context & context) const + const SelectQueryInfo & query, ContextPtr context) const { return std::make_shared(index, query, context); }; diff --git a/src/Storages/MergeTree/MergeTreeIndexMinMax.h b/src/Storages/MergeTree/MergeTreeIndexMinMax.h index 8d782d9a7dc..97b9b874484 100644 --- a/src/Storages/MergeTree/MergeTreeIndexMinMax.h +++ b/src/Storages/MergeTree/MergeTreeIndexMinMax.h @@ -52,7 +52,7 @@ public: MergeTreeIndexConditionMinMax( const IndexDescription & index, const SelectQueryInfo & query, - const Context & context); + ContextPtr context); bool alwaysUnknownOrTrue() const override; @@ -78,7 +78,7 @@ public: MergeTreeIndexAggregatorPtr createIndexAggregator() const override; MergeTreeIndexConditionPtr createIndexCondition( - const SelectQueryInfo & query, const Context & context) const override; + const SelectQueryInfo & query, ContextPtr context) const override; bool mayBenefitFromIndexForIn(const ASTPtr & node) const override; }; diff --git a/src/Storages/MergeTree/MergeTreeIndexSet.cpp b/src/Storages/MergeTree/MergeTreeIndexSet.cpp index 4ab6ae01c8c..ff875b185e9 100644 --- a/src/Storages/MergeTree/MergeTreeIndexSet.cpp +++ b/src/Storages/MergeTree/MergeTreeIndexSet.cpp @@ -241,7 +241,7 @@ MergeTreeIndexConditionSet::MergeTreeIndexConditionSet( const Block & index_sample_block_, size_t max_rows_, const SelectQueryInfo & query, - const Context & context) + ContextPtr context) : index_name(index_name_) , max_rows(max_rows_) , index_sample_block(index_sample_block_) @@ -299,6 +299,10 @@ bool MergeTreeIndexConditionSet::mayBeTrueOnGranule(MergeTreeIndexGranulePtr idx auto column = result.getByName(expression_ast->getColumnName()).column->convertToFullColumnIfConst()->convertToFullColumnIfLowCardinality(); + + if (column->onlyNull()) + return false; + const auto * col_uint8 = typeid_cast(column.get()); const NullMap * null_map = nullptr; @@ -388,7 
+392,7 @@ bool MergeTreeIndexConditionSet::operatorFromAST(ASTPtr & node) func->name = "__bitSwapLastTwo"; } - else if (func->name == "and") + else if (func->name == "and" || func->name == "indexHint") { auto last_arg = args.back(); args.pop_back(); @@ -444,7 +448,7 @@ bool MergeTreeIndexConditionSet::checkASTUseless(const ASTPtr & node, bool atomi const ASTs & args = func->arguments->children; - if (func->name == "and") + if (func->name == "and" || func->name == "indexHint") return checkASTUseless(args[0], atomic) && checkASTUseless(args[1], atomic); else if (func->name == "or") return checkASTUseless(args[0], atomic) || checkASTUseless(args[1], atomic); @@ -474,7 +478,7 @@ MergeTreeIndexAggregatorPtr MergeTreeIndexSet::createIndexAggregator() const } MergeTreeIndexConditionPtr MergeTreeIndexSet::createIndexCondition( - const SelectQueryInfo & query, const Context & context) const + const SelectQueryInfo & query, ContextPtr context) const { return std::make_shared(index.name, index.sample_block, max_rows, query, context); }; diff --git a/src/Storages/MergeTree/MergeTreeIndexSet.h b/src/Storages/MergeTree/MergeTreeIndexSet.h index 90389264d53..28afe4f714d 100644 --- a/src/Storages/MergeTree/MergeTreeIndexSet.h +++ b/src/Storages/MergeTree/MergeTreeIndexSet.h @@ -87,7 +87,7 @@ public: const Block & index_sample_block_, size_t max_rows_, const SelectQueryInfo & query, - const Context & context); + ContextPtr context); bool alwaysUnknownOrTrue() const override; @@ -129,7 +129,7 @@ public: MergeTreeIndexAggregatorPtr createIndexAggregator() const override; MergeTreeIndexConditionPtr createIndexCondition( - const SelectQueryInfo & query, const Context & context) const override; + const SelectQueryInfo & query, ContextPtr context) const override; bool mayBenefitFromIndexForIn(const ASTPtr & node) const override; diff --git a/src/Storages/MergeTree/MergeTreeIndices.h b/src/Storages/MergeTree/MergeTreeIndices.h index c7b9dfb123e..674daeb480d 100644 --- a/src/Storages/MergeTree/MergeTreeIndices.h +++ b/src/Storages/MergeTree/MergeTreeIndices.h @@ -84,7 +84,7 @@ struct IMergeTreeIndex virtual MergeTreeIndexAggregatorPtr createIndexAggregator() const = 0; virtual MergeTreeIndexConditionPtr createIndexCondition( - const SelectQueryInfo & query_info, const Context & context) const = 0; + const SelectQueryInfo & query_info, ContextPtr context) const = 0; Names getColumnsRequiredForIndexCalc() const { return index.expression->getRequiredColumns(); } diff --git a/src/Storages/MergeTree/MergeTreePartsMover.cpp b/src/Storages/MergeTree/MergeTreePartsMover.cpp index cb21f50f9a0..f9e3883d5e2 100644 --- a/src/Storages/MergeTree/MergeTreePartsMover.cpp +++ b/src/Storages/MergeTree/MergeTreePartsMover.cpp @@ -182,7 +182,7 @@ bool MergeTreePartsMover::selectPartsForMove( if (!parts_to_move.empty()) { - LOG_TRACE(log, "Selected {} parts to move according to storage policy rules and {} parts according to TTL rules, {} total", parts_to_move_by_policy_rules, parts_to_move_by_ttl_rules, ReadableSize(parts_to_move_total_size_bytes)); + LOG_DEBUG(log, "Selected {} parts to move according to storage policy rules and {} parts according to TTL rules, {} total", parts_to_move_by_policy_rules, parts_to_move_by_ttl_rules, ReadableSize(parts_to_move_total_size_bytes)); return true; } else diff --git a/src/Storages/MergeTree/MergeTreeRangeReader.cpp b/src/Storages/MergeTree/MergeTreeRangeReader.cpp index e72039f7172..d373c004d10 100644 --- a/src/Storages/MergeTree/MergeTreeRangeReader.cpp +++ 
b/src/Storages/MergeTree/MergeTreeRangeReader.cpp @@ -486,9 +486,13 @@ void MergeTreeRangeReader::ReadResult::setFilter(const ColumnPtr & new_filter) ConstantFilterDescription const_description(*new_filter); if (const_description.always_true) + { setFilterConstTrue(); + } else if (const_description.always_false) + { clear(); + } else { FilterDescription filter_description(*new_filter); diff --git a/src/Storages/MergeTree/MergeTreeReadPool.h b/src/Storages/MergeTree/MergeTreeReadPool.h index 366e9a2381a..9949bdf86f8 100644 --- a/src/Storages/MergeTree/MergeTreeReadPool.h +++ b/src/Storages/MergeTree/MergeTreeReadPool.h @@ -100,7 +100,7 @@ private: const MergeTreeData & data; StorageMetadataPtr metadata_snapshot; - Names column_names; + const Names column_names; bool do_not_steal_tasks; bool predict_block_size_bytes; std::vector per_part_column_name_set; diff --git a/src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp b/src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp index ee0a77ba3cf..e9527efaa4a 100644 --- a/src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp @@ -44,12 +44,11 @@ MergeTreeReverseSelectProcessor::MergeTreeReverseSelectProcessor( for (const auto & range : all_mark_ranges) total_marks_count += range.end - range.begin; - size_t total_rows = data_part->index_granularity.getTotalRows(); + size_t total_rows = data_part->index_granularity.getRowsCountInRanges(all_mark_ranges); if (!quiet) - LOG_TRACE(log, "Reading {} ranges in reverse order from part {}, approx. {}, up to {} rows starting from {}", + LOG_DEBUG(log, "Reading {} ranges in reverse order from part {}, approx. {} rows starting from {}", all_mark_ranges.size(), data_part->name, total_rows, - data_part->index_granularity.getRowsCountInRanges(all_mark_ranges), data_part->index_granularity.getMarkStartingRow(all_mark_ranges.front().begin)); addTotalRowsApprox(total_rows); @@ -63,9 +62,9 @@ MergeTreeReverseSelectProcessor::MergeTreeReverseSelectProcessor( column_name_set = NameSet{column_names.begin(), column_names.end()}; if (use_uncompressed_cache) - owned_uncompressed_cache = storage.global_context.getUncompressedCache(); + owned_uncompressed_cache = storage.getContext()->getUncompressedCache(); - owned_mark_cache = storage.global_context.getMarkCache(); + owned_mark_cache = storage.getContext()->getMarkCache(); reader = data_part->getReader(task_columns.columns, metadata_snapshot, all_mark_ranges, owned_uncompressed_cache.get(), diff --git a/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp b/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp index 65f9b1eba3b..980afa170e9 100644 --- a/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp @@ -47,7 +47,7 @@ MergeTreeSelectProcessor::MergeTreeSelectProcessor( size_t total_rows = data_part->index_granularity.getRowsCountInRanges(all_mark_ranges); if (!quiet) - LOG_TRACE(log, "Reading {} ranges from part {}, approx. {} rows starting from {}", + LOG_DEBUG(log, "Reading {} ranges from part {}, approx. 
{} rows starting from {}", all_mark_ranges.size(), data_part->name, total_rows, data_part->index_granularity.getMarkStartingRow(all_mark_ranges.front().begin)); @@ -87,9 +87,9 @@ try if (!reader) { if (use_uncompressed_cache) - owned_uncompressed_cache = storage.global_context.getUncompressedCache(); + owned_uncompressed_cache = storage.getContext()->getUncompressedCache(); - owned_mark_cache = storage.global_context.getMarkCache(); + owned_mark_cache = storage.getContext()->getMarkCache(); reader = data_part->getReader(task_columns.columns, metadata_snapshot, all_mark_ranges, owned_uncompressed_cache.get(), owned_mark_cache.get(), reader_settings); diff --git a/src/Storages/MergeTree/MergeTreeSequentialSource.cpp b/src/Storages/MergeTree/MergeTreeSequentialSource.cpp index edd63aadd29..e82b1966461 100644 --- a/src/Storages/MergeTree/MergeTreeSequentialSource.cpp +++ b/src/Storages/MergeTree/MergeTreeSequentialSource.cpp @@ -23,16 +23,16 @@ MergeTreeSequentialSource::MergeTreeSequentialSource( , data_part(std::move(data_part_)) , columns_to_read(std::move(columns_to_read_)) , read_with_direct_io(read_with_direct_io_) - , mark_cache(storage.global_context.getMarkCache()) + , mark_cache(storage.getContext()->getMarkCache()) { if (!quiet) { /// Print column name but don't pollute logs in case of many columns. if (columns_to_read.size() == 1) - LOG_TRACE(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part, column {}", + LOG_DEBUG(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part, column {}", data_part->getMarksCount(), data_part->name, data_part->rows_count, columns_to_read.front()); else - LOG_TRACE(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part", + LOG_DEBUG(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part", data_part->getMarksCount(), data_part->name, data_part->rows_count); } diff --git a/src/Storages/MergeTree/MergeTreeSettings.h b/src/Storages/MergeTree/MergeTreeSettings.h index 7a1ef8aeed6..f422f00f4dc 100644 --- a/src/Storages/MergeTree/MergeTreeSettings.h +++ b/src/Storages/MergeTree/MergeTreeSettings.h @@ -2,6 +2,7 @@ #include #include +#include namespace Poco::Util @@ -54,6 +55,7 @@ struct Settings; M(UInt64, write_ahead_log_bytes_to_fsync, 100ULL * 1024 * 1024, "Amount of bytes, accumulated in WAL to do fsync.", 0) \ M(UInt64, write_ahead_log_interval_ms_to_fsync, 100, "Interval in milliseconds after which fsync for WAL is being done.", 0) \ M(Bool, in_memory_parts_insert_sync, false, "If true insert of part with in-memory format will wait for fsync of WAL", 0) \ + M(UInt64, non_replicated_deduplication_window, 0, "How many last blocks of hashes should be kept on disk (0 - disabled).", 0) \ \ /** Inserts settings. 
*/ \ M(UInt64, parts_to_delay_insert, 150, "If table contains at least that many active parts in single partition, artificially slow down insert into table.", 0) \ diff --git a/src/Storages/MergeTree/MergeTreeThreadSelectBlockInputProcessor.cpp b/src/Storages/MergeTree/MergeTreeThreadSelectBlockInputProcessor.cpp index f57247e39ab..ba9216ac1b0 100644 --- a/src/Storages/MergeTree/MergeTreeThreadSelectBlockInputProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeThreadSelectBlockInputProcessor.cpp @@ -71,8 +71,8 @@ bool MergeTreeThreadSelectBlockInputProcessor::getNewTask() auto rest_mark_ranges = pool->getRestMarks(*task->data_part, task->mark_ranges[0]); if (use_uncompressed_cache) - owned_uncompressed_cache = storage.global_context.getUncompressedCache(); - owned_mark_cache = storage.global_context.getMarkCache(); + owned_uncompressed_cache = storage.getContext()->getUncompressedCache(); + owned_mark_cache = storage.getContext()->getMarkCache(); reader = task->data_part->getReader(task->columns, metadata_snapshot, rest_mark_ranges, owned_uncompressed_cache.get(), owned_mark_cache.get(), reader_settings, diff --git a/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp b/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp index f0f178cb71c..3e2e77d5de7 100644 --- a/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp +++ b/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp @@ -29,7 +29,7 @@ static constexpr auto threshold = 2; MergeTreeWhereOptimizer::MergeTreeWhereOptimizer( SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, std::unordered_map column_sizes_, const StorageMetadataPtr & metadata_snapshot, const Names & queried_columns_, @@ -339,6 +339,10 @@ bool MergeTreeWhereOptimizer::cannotBeMoved(const ASTPtr & ptr, bool is_final) c if ("globalIn" == function_ptr->name || "globalNotIn" == function_ptr->name) return true; + + /// indexHint is a special function that it does not make sense to transfer to PREWHERE + if ("indexHint" == function_ptr->name) + return true; } else if (auto opt_name = IdentifierSemantic::getColumnName(ptr)) { diff --git a/src/Storages/MergeTree/MergeTreeWhereOptimizer.h b/src/Storages/MergeTree/MergeTreeWhereOptimizer.h index 8fd973e9ba3..0559fdee2ae 100644 --- a/src/Storages/MergeTree/MergeTreeWhereOptimizer.h +++ b/src/Storages/MergeTree/MergeTreeWhereOptimizer.h @@ -1,12 +1,15 @@ #pragma once -#include -#include -#include -#include #include +#include #include +#include + +#include +#include +#include + namespace Poco { class Logger; } @@ -32,7 +35,7 @@ class MergeTreeWhereOptimizer : private boost::noncopyable public: MergeTreeWhereOptimizer( SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, std::unordered_map column_sizes_, const StorageMetadataPtr & metadata_snapshot, const Names & queried_columns_, diff --git a/src/Storages/MergeTree/MergeTreeWriteAheadLog.cpp b/src/Storages/MergeTree/MergeTreeWriteAheadLog.cpp index 4ca20572e90..4c92d4f6136 100644 --- a/src/Storages/MergeTree/MergeTreeWriteAheadLog.cpp +++ b/src/Storages/MergeTree/MergeTreeWriteAheadLog.cpp @@ -30,7 +30,7 @@ MergeTreeWriteAheadLog::MergeTreeWriteAheadLog( , disk(disk_) , name(name_) , path(storage.getRelativeDataPath() + name_) - , pool(storage.global_context.getSchedulePool()) + , pool(storage.getContext()->getSchedulePool()) { init(); sync_task = pool.createTask("MergeTreeWriteAheadLog::sync", [this] diff --git a/src/Storages/MergeTree/MergedBlockOutputStream.cpp b/src/Storages/MergeTree/MergedBlockOutputStream.cpp index 
6988d48b18c..ab364e0e5aa 100644 --- a/src/Storages/MergeTree/MergedBlockOutputStream.cpp +++ b/src/Storages/MergeTree/MergedBlockOutputStream.cpp @@ -26,7 +26,7 @@ MergedBlockOutputStream::MergedBlockOutputStream( , default_codec(default_codec_) { MergeTreeWriterSettings writer_settings( - storage.global_context.getSettings(), + storage.getContext()->getSettings(), storage.getSettings(), data_part->index_granularity_info.is_adaptive, /* rewrite_primary_key = */ true, diff --git a/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp b/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp index 41479f104f3..298c550d496 100644 --- a/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp +++ b/src/Storages/MergeTree/MergedColumnOnlyOutputStream.cpp @@ -21,7 +21,7 @@ MergedColumnOnlyOutputStream::MergedColumnOnlyOutputStream( : IMergedBlockOutputStream(data_part, metadata_snapshot_) , header(header_) { - const auto & global_settings = data_part->storage.global_context.getSettings(); + const auto & global_settings = data_part->storage.getContext()->getSettings(); const auto & storage_settings = data_part->storage.getSettings(); MergeTreeWriterSettings writer_settings( diff --git a/src/Storages/MergeTree/PartitionPruner.h b/src/Storages/MergeTree/PartitionPruner.h index 3cb7552c427..a4035087b89 100644 --- a/src/Storages/MergeTree/PartitionPruner.h +++ b/src/Storages/MergeTree/PartitionPruner.h @@ -21,7 +21,7 @@ private: using DataPartPtr = std::shared_ptr; public: - PartitionPruner(const KeyDescription & partition_key_, const SelectQueryInfo & query_info, const Context & context, bool strict) + PartitionPruner(const KeyDescription & partition_key_, const SelectQueryInfo & query_info, ContextPtr context, bool strict) : partition_key(partition_key_) , partition_condition( query_info, context, partition_key.column_names, partition_key.expression, true /* single_point */, strict) @@ -32,6 +32,8 @@ public: bool canBePruned(const DataPartPtr & part); bool isUseless() const { return useless; } + + const KeyCondition & getKeyCondition() const { return partition_condition; } }; } diff --git a/src/Storages/MergeTree/RPNBuilder.h b/src/Storages/MergeTree/RPNBuilder.h index 292a120d28a..d63781db67d 100644 --- a/src/Storages/MergeTree/RPNBuilder.h +++ b/src/Storages/MergeTree/RPNBuilder.h @@ -1,35 +1,34 @@ #pragma once -#include #include #include #include -#include #include -#include +#include #include +#include +#include namespace DB { -class Context; /// Builds reverse polish notation template -class RPNBuilder +class RPNBuilder : WithContext { public: using RPN = std::vector; using AtomFromASTFunc = std::function< - bool(const ASTPtr & node, const Context & context, Block & block_with_constants, RPNElement & out)>; + bool(const ASTPtr & node, ContextPtr context, Block & block_with_constants, RPNElement & out)>; - RPNBuilder(const SelectQueryInfo & query_info, const Context & context_, const AtomFromASTFunc & atomFromAST_) - : context(context_), atomFromAST(atomFromAST_) + RPNBuilder(const SelectQueryInfo & query_info, ContextPtr context_, const AtomFromASTFunc & atomFromAST_) + : WithContext(context_), atomFromAST(atomFromAST_) { /** Evaluation of expressions that depend only on constants. * For the index to be used, if it is written, for example `WHERE Date = toDate(now())`. 
*/ - block_with_constants = KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context); + block_with_constants = KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, getContext()); /// Transform WHERE section to Reverse Polish notation const ASTSelectQuery & select = typeid_cast(*query_info.query); @@ -80,7 +79,7 @@ private: } } - if (!atomFromAST(node, context, block_with_constants, element)) + if (!atomFromAST(node, getContext(), block_with_constants, element)) { element.function = RPNElement::FUNCTION_UNKNOWN; } @@ -91,6 +90,8 @@ private: bool operatorFromAST(const ASTFunction * func, RPNElement & out) { /// Functions AND, OR, NOT. + /// Also a special function `indexHint` - works as if instead of calling a function there are just parentheses + /// (or, the same thing - calling the function `and` from one argument). const ASTs & args = typeid_cast(*func->arguments).children; if (func->name == "not") @@ -102,7 +103,7 @@ private: } else { - if (func->name == "and") + if (func->name == "and" || func->name == "indexHint") out.function = RPNElement::FUNCTION_AND; else if (func->name == "or") out.function = RPNElement::FUNCTION_OR; @@ -113,7 +114,6 @@ private: return true; } - const Context & context; const AtomFromASTFunc & atomFromAST; Block block_with_constants; RPN rpn; diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.cpp index 529e3d2ab49..df4f9124980 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.cpp @@ -155,18 +155,9 @@ void ReplicatedMergeTreeBlockOutputStream::write(const Block & block) if (deduplicate) { - SipHash hash; - part->checksums.computeTotalChecksumDataOnly(hash); - union - { - char bytes[16]; - UInt64 words[2]; - } hash_value; - hash.get128(hash_value.bytes); - /// We add the hash from the data and partition identifier to deduplication ID. /// That is, do not insert the same data to the same partition twice. - block_id = part->info.partition_id + "_" + toString(hash_value.words[0]) + "_" + toString(hash_value.words[1]); + block_id = part->getZeroLevelPartBlockID(); LOG_DEBUG(log, "Wrote block with ID '{}', {} rows", block_id, current_block.block.rows()); } @@ -181,11 +172,11 @@ void ReplicatedMergeTreeBlockOutputStream::write(const Block & block) /// Set a special error code if the block is duplicate int error = (deduplicate && last_block_is_duplicate) ? ErrorCodes::INSERT_WAS_DEDUPLICATED : 0; - PartLog::addNewPart(storage.global_context, part, watch.elapsed(), ExecutionStatus(error)); + PartLog::addNewPart(storage.getContext(), part, watch.elapsed(), ExecutionStatus(error)); } catch (...) { - PartLog::addNewPart(storage.global_context, part, watch.elapsed(), ExecutionStatus::fromCurrentException(__PRETTY_FUNCTION__)); + PartLog::addNewPart(storage.getContext(), part, watch.elapsed(), ExecutionStatus::fromCurrentException(__PRETTY_FUNCTION__)); throw; } } @@ -209,11 +200,11 @@ void ReplicatedMergeTreeBlockOutputStream::writeExistingPart(MergeTreeData::Muta try { commitPart(zookeeper, part, ""); - PartLog::addNewPart(storage.global_context, part, watch.elapsed()); + PartLog::addNewPart(storage.getContext(), part, watch.elapsed()); } catch (...) 
{ - PartLog::addNewPart(storage.global_context, part, watch.elapsed(), ExecutionStatus::fromCurrentException(__PRETTY_FUNCTION__)); + PartLog::addNewPart(storage.getContext(), part, watch.elapsed(), ExecutionStatus::fromCurrentException(__PRETTY_FUNCTION__)); throw; } } @@ -350,13 +341,28 @@ void ReplicatedMergeTreeBlockOutputStream::commitPart( /// If it exists on our replica, ignore it. if (storage.getActiveContainingPart(existing_part_name)) { - LOG_INFO(log, "Block with ID {} already exists locally as part {}; ignoring it.", block_id, existing_part_name); part->is_duplicate = true; last_block_is_duplicate = true; ProfileEvents::increment(ProfileEvents::DuplicatedInsertedBlocks); + if (quorum) + { + LOG_INFO(log, "Block with ID {} already exists locally as part {}; ignoring it, but checking quorum.", block_id, existing_part_name); + + std::string quorum_path; + if (quorum_parallel) + quorum_path = storage.zookeeper_path + "/quorum/parallel/" + existing_part_name; + else + quorum_path = storage.zookeeper_path + "/quorum/status"; + + waitForQuorum(zookeeper, existing_part_name, quorum_path, quorum_info.is_active_node_value); + } + else + { + LOG_INFO(log, "Block with ID {} already exists locally as part {}; ignoring it.", block_id, existing_part_name); + } + return; } - LOG_INFO(log, "Block with ID {} already exists on other replicas as part {}; will write it locally with that name.", block_id, existing_part_name); @@ -495,50 +501,7 @@ void ReplicatedMergeTreeBlockOutputStream::commitPart( storage.updateQuorum(part->name, false); } - /// We are waiting for quorum to be satisfied. - LOG_TRACE(log, "Waiting for quorum"); - - try - { - while (true) - { - zkutil::EventPtr event = std::make_shared(); - - std::string value; - /// `get` instead of `exists` so that `watch` does not leak if the node is no longer there. - if (!zookeeper->tryGet(quorum_info.status_path, value, nullptr, event)) - break; - - LOG_TRACE(log, "Quorum node {} still exists, will wait for updates", quorum_info.status_path); - - ReplicatedMergeTreeQuorumEntry quorum_entry(value); - - /// If the node has time to disappear, and then appear again for the next insert. - if (quorum_entry.part_name != part->name) - break; - - if (!event->tryWait(quorum_timeout_ms)) - throw Exception("Timeout while waiting for quorum", ErrorCodes::TIMEOUT_EXCEEDED); - - LOG_TRACE(log, "Quorum {} updated, will check quorum node still exists", quorum_info.status_path); - } - - /// And what if it is possible that the current replica at this time has ceased to be active - /// and the quorum is marked as failed and deleted? - String value; - if (!zookeeper->tryGet(storage.replica_path + "/is_active", value, nullptr) - || value != quorum_info.is_active_node_value) - throw Exception("Replica become inactive while waiting for quorum", ErrorCodes::NO_ACTIVE_REPLICAS); - } - catch (...) - { - /// We do not know whether or not data has been inserted - /// - whether other replicas have time to download the part and mark the quorum as done. - throw Exception("Unknown status, client must retry. 
Reason: " + getCurrentExceptionMessage(false), - ErrorCodes::UNKNOWN_STATUS_OF_INSERT); - } - - LOG_TRACE(log, "Quorum satisfied"); + waitForQuorum(zookeeper, part->name, quorum_info.status_path, quorum_info.is_active_node_value); } } @@ -550,4 +513,57 @@ void ReplicatedMergeTreeBlockOutputStream::writePrefix() } +void ReplicatedMergeTreeBlockOutputStream::waitForQuorum( + zkutil::ZooKeeperPtr & zookeeper, + const std::string & part_name, + const std::string & quorum_path, + const std::string & is_active_node_value) const +{ + /// We are waiting for quorum to be satisfied. + LOG_TRACE(log, "Waiting for quorum"); + + try + { + while (true) + { + zkutil::EventPtr event = std::make_shared(); + + std::string value; + /// `get` instead of `exists` so that `watch` does not leak if the node is no longer there. + if (!zookeeper->tryGet(quorum_path, value, nullptr, event)) + break; + + LOG_TRACE(log, "Quorum node {} still exists, will wait for updates", quorum_path); + + ReplicatedMergeTreeQuorumEntry quorum_entry(value); + + /// If the node has time to disappear, and then appear again for the next insert. + if (quorum_entry.part_name != part_name) + break; + + if (!event->tryWait(quorum_timeout_ms)) + throw Exception("Timeout while waiting for quorum", ErrorCodes::TIMEOUT_EXCEEDED); + + LOG_TRACE(log, "Quorum {} updated, will check quorum node still exists", quorum_path); + } + + /// And what if it is possible that the current replica at this time has ceased to be active + /// and the quorum is marked as failed and deleted? + String value; + if (!zookeeper->tryGet(storage.replica_path + "/is_active", value, nullptr) + || value != is_active_node_value) + throw Exception("Replica become inactive while waiting for quorum", ErrorCodes::NO_ACTIVE_REPLICAS); + } + catch (...) + { + /// We do not know whether or not data has been inserted + /// - whether other replicas have time to download the part and mark the quorum as done. + throw Exception("Unknown status, client must retry. Reason: " + getCurrentExceptionMessage(false), + ErrorCodes::UNKNOWN_STATUS_OF_INSERT); + } + + LOG_TRACE(log, "Quorum satisfied"); +} + + } diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.h b/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.h index 860b0c4ed12..6ea16491d64 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.h +++ b/src/Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.h @@ -63,6 +63,12 @@ private: /// Rename temporary part and commit to ZooKeeper. void commitPart(zkutil::ZooKeeperPtr & zookeeper, MergeTreeData::MutableDataPartPtr & part, const String & block_id); + /// Wait for quorum to be satisfied on path (quorum_path) form part (part_name) + /// Also checks that replica still alive. 
+ void waitForQuorum( + zkutil::ZooKeeperPtr & zookeeper, const std::string & part_name, + const std::string & quorum_path, const std::string & is_active_node_value) const; + StorageReplicatedMergeTree & storage; StorageMetadataPtr metadata_snapshot; size_t quorum; diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.cpp index 701cb2fa1ed..502c6215a9a 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.cpp @@ -24,7 +24,7 @@ ReplicatedMergeTreeCleanupThread::ReplicatedMergeTreeCleanupThread(StorageReplic , log_name(storage.getStorageID().getFullTableName() + " (ReplicatedMergeTreeCleanupThread)") , log(&Poco::Logger::get(log_name)) { - task = storage.global_context.getSchedulePool().createTask(log_name, [this]{ run(); }); + task = storage.getContext()->getSchedulePool().createTask(log_name, [this]{ run(); }); } void ReplicatedMergeTreeCleanupThread::run() @@ -342,6 +342,15 @@ void ReplicatedMergeTreeCleanupThread::clearOldBlocks() timed_blocks.begin(), timed_blocks.end(), block_threshold, NodeWithStat::greaterByTime); auto first_outdated_block = std::min(first_outdated_block_fixed_threshold, first_outdated_block_time_threshold); + auto num_nodes_to_delete = timed_blocks.end() - first_outdated_block; + if (!num_nodes_to_delete) + return; + + auto last_outdated_block = timed_blocks.end() - 1; + LOG_TRACE(log, "Will clear {} old blocks from {} (ctime {}) to {} (ctime {})", num_nodes_to_delete, + first_outdated_block->node, first_outdated_block->ctime, + last_outdated_block->node, last_outdated_block->ctime); + zkutil::AsyncResponses try_remove_futures; for (auto it = first_outdated_block; it != timed_blocks.end(); ++it) { @@ -372,9 +381,7 @@ void ReplicatedMergeTreeCleanupThread::clearOldBlocks() first_outdated_block++; } - auto num_nodes_to_delete = timed_blocks.end() - first_outdated_block; - if (num_nodes_to_delete) - LOG_TRACE(log, "Cleared {} old blocks from ZooKeeper", num_nodes_to_delete); + LOG_TRACE(log, "Cleared {} old blocks from ZooKeeper", num_nodes_to_delete); } diff --git a/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp b/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp index 95883c65abb..09b2a23767c 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreePartCheckThread.cpp @@ -28,7 +28,7 @@ ReplicatedMergeTreePartCheckThread::ReplicatedMergeTreePartCheckThread(StorageRe , log_name(storage.getStorageID().getFullTableName() + " (ReplicatedMergeTreePartCheckThread)") , log(&Poco::Logger::get(log_name)) { - task = storage.global_context.getSchedulePool().createTask(log_name, [this] { run(); }); + task = storage.getContext()->getSchedulePool().createTask(log_name, [this] { run(); }); task->schedule(); } diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeRestartingThread.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeRestartingThread.cpp index b3cb7c92def..ca6ea3103d1 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeRestartingThread.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeRestartingThread.cpp @@ -47,7 +47,7 @@ ReplicatedMergeTreeRestartingThread::ReplicatedMergeTreeRestartingThread(Storage const auto storage_settings = storage.getSettings(); check_period_ms = storage_settings->zookeeper_session_expiration_check_period.totalSeconds() * 1000; - task = 
storage.global_context.getSchedulePool().createTask(log_name, [this]{ run(); }); + task = storage.getContext()->getSchedulePool().createTask(log_name, [this]{ run(); }); } void ReplicatedMergeTreeRestartingThread::run() diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp b/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp index ac1c92849d5..de72ad1168b 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp +++ b/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp @@ -205,7 +205,7 @@ void ReplicatedMergeTreeTableMetadata::checkImmutableFieldsEquals(const Replicat } -void ReplicatedMergeTreeTableMetadata::checkEquals(const ReplicatedMergeTreeTableMetadata & from_zk, const ColumnsDescription & columns, const Context & context) const +void ReplicatedMergeTreeTableMetadata::checkEquals(const ReplicatedMergeTreeTableMetadata & from_zk, const ColumnsDescription & columns, ContextPtr context) const { checkImmutableFieldsEquals(from_zk); diff --git a/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.h b/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.h index c1c34637664..f398547e992 100644 --- a/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.h +++ b/src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.h @@ -63,7 +63,7 @@ struct ReplicatedMergeTreeTableMetadata } }; - void checkEquals(const ReplicatedMergeTreeTableMetadata & from_zk, const ColumnsDescription & columns, const Context & context) const; + void checkEquals(const ReplicatedMergeTreeTableMetadata & from_zk, const ColumnsDescription & columns, ContextPtr context) const; Diff checkAndFindDiff(const ReplicatedMergeTreeTableMetadata & from_zk) const; diff --git a/src/Storages/MergeTree/StorageFromMergeTreeDataPart.h b/src/Storages/MergeTree/StorageFromMergeTreeDataPart.h index b7579a3b7ea..9f1a28a1522 100644 --- a/src/Storages/MergeTree/StorageFromMergeTreeDataPart.h +++ b/src/Storages/MergeTree/StorageFromMergeTreeDataPart.h @@ -26,7 +26,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) override @@ -42,7 +42,7 @@ public: bool supportsIndexForIn() const override { return true; } bool mayBenefitFromIndexForIn( - const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & metadata_snapshot) const override + const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & metadata_snapshot) const override { return part->storage.mayBenefitFromIndexForIn(left_in_operand, query_context, metadata_snapshot); } @@ -57,7 +57,7 @@ public: return part->info.partition_id; } - String getPartitionIDFromQuery(const ASTPtr & ast, const Context & context) const + String getPartitionIDFromQuery(const ASTPtr & ast, ContextPtr context) const { return part->storage.getPartitionIDFromQuery(ast, context); } diff --git a/src/Storages/MergeTree/registerStorageMergeTree.cpp b/src/Storages/MergeTree/registerStorageMergeTree.cpp index 6dd005736f0..862747abcb9 100644 --- a/src/Storages/MergeTree/registerStorageMergeTree.cpp +++ b/src/Storages/MergeTree/registerStorageMergeTree.cpp @@ -184,9 +184,9 @@ appendGraphitePattern(const Poco::Util::AbstractConfiguration & config, const St patterns.emplace_back(pattern); } -static void setGraphitePatternsFromConfig(const Context & context, const String & config_element, Graphite::Params & params) 
+static void setGraphitePatternsFromConfig(ContextPtr context, const String & config_element, Graphite::Params & params) { - const auto & config = context.getConfigRef(); + const auto & config = context->getConfigRef(); if (!config.has(config_element)) throw Exception("No '" + config_element + "' element in configuration file", ErrorCodes::NO_ELEMENTS_IN_CONFIG); @@ -429,7 +429,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) /// Do not try evaluate array or tuple, because it's array or tuple of column identifiers. if (arg_func->name == "array" || arg_func->name == "tuple") continue; - Field value = evaluateConstantExpression(arg, args.local_context).first; + Field value = evaluateConstantExpression(arg, args.getLocalContext()).first; arg = std::make_shared(value); } } @@ -481,9 +481,9 @@ static StoragePtr create(const StorageFactory::Arguments & args) { /// Try use default values if arguments are not specified. /// Note: {uuid} macro works for ON CLUSTER queries when database engine is Atomic. - zookeeper_path = args.context.getConfigRef().getString("default_replica_path", "/clickhouse/tables/{uuid}/{shard}"); + zookeeper_path = args.getContext()->getConfigRef().getString("default_replica_path", "/clickhouse/tables/{uuid}/{shard}"); /// TODO maybe use hostname if {replica} is not defined? - replica_name = args.context.getConfigRef().getString("default_replica_name", "{replica}"); + replica_name = args.getContext()->getConfigRef().getString("default_replica_name", "{replica}"); /// Modify query, so default values will be written to metadata assert(arg_num == 0); @@ -503,8 +503,8 @@ static StoragePtr create(const StorageFactory::Arguments & args) throw Exception("Expected two string literal arguments: zookeeper_path and replica_name", ErrorCodes::BAD_ARGUMENTS); /// Allow implicit {uuid} macros only for zookeeper_path in ON CLUSTER queries - bool is_on_cluster = args.local_context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; - bool is_replicated_database = args.local_context.getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY && + bool is_on_cluster = args.getLocalContext()->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; + bool is_replicated_database = args.getLocalContext()->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY && DatabaseCatalog::instance().getDatabase(args.table_id.database_name)->getEngineName() == "Replicated"; bool allow_uuid_macro = is_on_cluster || is_replicated_database || args.query.attach; @@ -521,11 +521,11 @@ static StoragePtr create(const StorageFactory::Arguments & args) info.table_id = args.table_id; if (!allow_uuid_macro) info.table_id.uuid = UUIDHelpers::Nil; - zookeeper_path = args.context.getMacros()->expand(zookeeper_path, info); + zookeeper_path = args.getContext()->getMacros()->expand(zookeeper_path, info); info.level = 0; info.table_id.uuid = UUIDHelpers::Nil; - replica_name = args.context.getMacros()->expand(replica_name, info); + replica_name = args.getContext()->getMacros()->expand(replica_name, info); } ast_zk_path->value = zookeeper_path; @@ -537,11 +537,11 @@ static StoragePtr create(const StorageFactory::Arguments & args) info.table_id = args.table_id; if (!allow_uuid_macro) info.table_id.uuid = UUIDHelpers::Nil; - zookeeper_path = args.context.getMacros()->expand(zookeeper_path, info); + zookeeper_path = args.getContext()->getMacros()->expand(zookeeper_path, info); info.level = 0; info.table_id.uuid = UUIDHelpers::Nil; - replica_name = 
args.context.getMacros()->expand(replica_name, info); + replica_name = args.getContext()->getMacros()->expand(replica_name, info); /// We do not allow renaming table with these macros in metadata, because zookeeper_path will be broken after RENAME TABLE. /// NOTE: it may happen if table was created by older version of ClickHouse (< 20.10) and macros was not unfolded on table creation @@ -600,7 +600,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) throw Exception(error_msg, ErrorCodes::BAD_ARGUMENTS); --arg_cnt; - setGraphitePatternsFromConfig(args.context, graphite_config_name, merging_params.graphite_params); + setGraphitePatternsFromConfig(args.getContext(), graphite_config_name, merging_params.graphite_params); } else if (merging_params.mode == MergeTreeData::MergingParams::VersionedCollapsing) { @@ -629,9 +629,9 @@ static StoragePtr create(const StorageFactory::Arguments & args) std::unique_ptr storage_settings; if (replicated) - storage_settings = std::make_unique(args.context.getReplicatedMergeTreeSettings()); + storage_settings = std::make_unique(args.getContext()->getReplicatedMergeTreeSettings()); else - storage_settings = std::make_unique(args.context.getMergeTreeSettings()); + storage_settings = std::make_unique(args.getContext()->getMergeTreeSettings()); if (is_extended_storage_def) { @@ -642,7 +642,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) /// Partition key may be undefined, but despite this we store it's empty /// value in partition_key structure. MergeTree checks this case and use /// single default partition with name "all". - metadata.partition_key = KeyDescription::getKeyFromAST(partition_by_key, metadata.columns, args.context); + metadata.partition_key = KeyDescription::getKeyFromAST(partition_by_key, metadata.columns, args.getContext()); /// PRIMARY KEY without ORDER BY is allowed and considered as ORDER BY. if (!args.storage_def->order_by && args.storage_def->primary_key) @@ -660,33 +660,33 @@ static StoragePtr create(const StorageFactory::Arguments & args) /// before storage creation. After that storage will just copy this /// column if sorting key will be changed. metadata.sorting_key = KeyDescription::getSortingKeyFromAST( - args.storage_def->order_by->ptr(), metadata.columns, args.context, merging_param_key_arg); + args.storage_def->order_by->ptr(), metadata.columns, args.getContext(), merging_param_key_arg); /// If primary key explicitly defined, than get it from AST if (args.storage_def->primary_key) { - metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->primary_key->ptr(), metadata.columns, args.context); + metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->primary_key->ptr(), metadata.columns, args.getContext()); } else /// Otherwise we don't have explicit primary key and copy it from order by { - metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->order_by->ptr(), metadata.columns, args.context); + metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->order_by->ptr(), metadata.columns, args.getContext()); /// and set it's definition_ast to nullptr (so isPrimaryKeyDefined() /// will return false but hasPrimaryKey() will return true. 
metadata.primary_key.definition_ast = nullptr; } if (args.storage_def->sample_by) - metadata.sampling_key = KeyDescription::getKeyFromAST(args.storage_def->sample_by->ptr(), metadata.columns, args.context); + metadata.sampling_key = KeyDescription::getKeyFromAST(args.storage_def->sample_by->ptr(), metadata.columns, args.getContext()); if (args.storage_def->ttl_table) { metadata.table_ttl = TTLTableDescription::getTTLForTableFromAST( - args.storage_def->ttl_table->ptr(), metadata.columns, args.context, metadata.primary_key); + args.storage_def->ttl_table->ptr(), metadata.columns, args.getContext(), metadata.primary_key); } if (args.query.columns_list && args.query.columns_list->indices) for (auto & index : args.query.columns_list->indices->children) - metadata.secondary_indices.push_back(IndexDescription::getIndexFromAST(index, args.columns, args.context)); + metadata.secondary_indices.push_back(IndexDescription::getIndexFromAST(index, args.columns, args.getContext())); if (args.query.columns_list && args.query.columns_list->constraints) for (auto & constraint : args.query.columns_list->constraints->children) @@ -695,7 +695,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) auto column_ttl_asts = args.columns.getColumnTTLs(); for (const auto & [name, ast] : column_ttl_asts) { - auto new_ttl_entry = TTLDescription::getTTLFromAST(ast, args.columns, args.context, metadata.primary_key); + auto new_ttl_entry = TTLDescription::getTTLFromAST(ast, args.columns, args.getContext(), metadata.primary_key); metadata.column_ttls_by_name[name] = new_ttl_entry; } @@ -716,7 +716,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) auto partition_by_ast = makeASTFunction("toYYYYMM", std::make_shared(date_column_name)); - metadata.partition_key = KeyDescription::getKeyFromAST(partition_by_ast, metadata.columns, args.context); + metadata.partition_key = KeyDescription::getKeyFromAST(partition_by_ast, metadata.columns, args.getContext()); ++arg_num; @@ -724,7 +724,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) /// If there is an expression for sampling if (arg_cnt - arg_num == 3) { - metadata.sampling_key = KeyDescription::getKeyFromAST(engine_args[arg_num], metadata.columns, args.context); + metadata.sampling_key = KeyDescription::getKeyFromAST(engine_args[arg_num], metadata.columns, args.getContext()); ++arg_num; } @@ -734,10 +734,10 @@ static StoragePtr create(const StorageFactory::Arguments & args) /// before storage creation. After that storage will just copy this /// column if sorting key will be changed. metadata.sorting_key - = KeyDescription::getSortingKeyFromAST(engine_args[arg_num], metadata.columns, args.context, merging_param_key_arg); + = KeyDescription::getSortingKeyFromAST(engine_args[arg_num], metadata.columns, args.getContext(), merging_param_key_arg); /// In old syntax primary_key always equals to sorting key. 
- metadata.primary_key = KeyDescription::getKeyFromAST(engine_args[arg_num], metadata.columns, args.context); + metadata.primary_key = KeyDescription::getKeyFromAST(engine_args[arg_num], metadata.columns, args.getContext()); /// But it's not explicitly defined, so we evaluate definition to /// nullptr metadata.primary_key.definition_ast = nullptr; @@ -777,7 +777,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) args.table_id, args.relative_data_path, metadata, - args.context, + args.getContext(), date_column_name, merging_params, std::move(storage_settings), @@ -789,7 +789,7 @@ static StoragePtr create(const StorageFactory::Arguments & args) args.relative_data_path, metadata, args.attach, - args.context, + args.getContext(), date_column_name, merging_params, std::move(storage_settings), diff --git a/src/Storages/PartitionCommands.cpp b/src/Storages/PartitionCommands.cpp index e51a64d5d81..f09f60887e8 100644 --- a/src/Storages/PartitionCommands.cpp +++ b/src/Storages/PartitionCommands.cpp @@ -82,6 +82,7 @@ std::optional PartitionCommand::parse(const ASTAlterCommand * res.type = FETCH_PARTITION; res.partition = command_ast->partition; res.from_zookeeper_path = command_ast->from; + res.part = command_ast->part; return res; } else if (command_ast->type == ASTAlterCommand::FREEZE_PARTITION) @@ -140,7 +141,10 @@ std::string PartitionCommand::typeToString() const else return "DROP DETACHED PARTITION"; case PartitionCommand::Type::FETCH_PARTITION: - return "FETCH PARTITION"; + if (part) + return "FETCH PART"; + else + return "FETCH PARTITION"; case PartitionCommand::Type::FREEZE_ALL_PARTITIONS: return "FREEZE ALL"; case PartitionCommand::Type::FREEZE_PARTITION: diff --git a/src/Storages/PostgreSQL/PostgreSQLConnection.cpp b/src/Storages/PostgreSQL/PostgreSQLConnection.cpp index 61caba8ac81..53cf5159c5a 100644 --- a/src/Storages/PostgreSQL/PostgreSQLConnection.cpp +++ b/src/Storages/PostgreSQL/PostgreSQLConnection.cpp @@ -8,10 +8,10 @@ #include -namespace DB +namespace postgres { -PostgreSQLConnection::PostgreSQLConnection( +Connection::Connection( const String & connection_str_, const String & address_) : connection_str(connection_str_) @@ -20,14 +20,14 @@ PostgreSQLConnection::PostgreSQLConnection( } -PostgreSQLConnection::ConnectionPtr PostgreSQLConnection::get() +pqxx::ConnectionPtr Connection::get() { connectIfNeeded(); return connection; } -PostgreSQLConnection::ConnectionPtr PostgreSQLConnection::tryGet() +pqxx::ConnectionPtr Connection::tryGet() { if (tryConnectIfNeeded()) return connection; @@ -35,7 +35,7 @@ PostgreSQLConnection::ConnectionPtr PostgreSQLConnection::tryGet() } -void PostgreSQLConnection::connectIfNeeded() +void Connection::connectIfNeeded() { if (!connection || !connection->is_open()) { @@ -45,7 +45,7 @@ void PostgreSQLConnection::connectIfNeeded() } -bool PostgreSQLConnection::tryConnectIfNeeded() +bool Connection::tryConnectIfNeeded() { try { diff --git a/src/Storages/PostgreSQL/PostgreSQLConnection.h b/src/Storages/PostgreSQL/PostgreSQLConnection.h index c8e1c3dcc91..488f45a068d 100644 --- a/src/Storages/PostgreSQL/PostgreSQLConnection.h +++ b/src/Storages/PostgreSQL/PostgreSQLConnection.h @@ -10,24 +10,27 @@ #include -namespace DB +namespace pqxx +{ + using ConnectionPtr = std::shared_ptr; +} + +namespace postgres { -class PostgreSQLConnection +class Connection { -using ConnectionPtr = std::shared_ptr; - public: - PostgreSQLConnection( + Connection( const String & connection_str_, const String & address_); - PostgreSQLConnection(const 
PostgreSQLConnection & other) = delete; + Connection(const Connection & other) = delete; - ConnectionPtr get(); + pqxx::ConnectionPtr get(); - ConnectionPtr tryGet(); + pqxx::ConnectionPtr tryGet(); bool isConnected() { return tryConnectIfNeeded(); } @@ -38,40 +41,40 @@ private: const std::string & getAddress() { return address; } - ConnectionPtr connection; + pqxx::ConnectionPtr connection; std::string connection_str, address; }; -using PostgreSQLConnectionPtr = std::shared_ptr<PostgreSQLConnection>; +using ConnectionPtr = std::shared_ptr<Connection>; -class PostgreSQLConnectionHolder +class ConnectionHolder { -using Pool = ConcurrentBoundedQueue<PostgreSQLConnectionPtr>; +using Pool = ConcurrentBoundedQueue<ConnectionPtr>; static constexpr inline auto POSTGRESQL_POOL_WAIT_MS = 50; public: - PostgreSQLConnectionHolder(PostgreSQLConnectionPtr connection_, Pool & pool_) + ConnectionHolder(ConnectionPtr connection_, Pool & pool_) : connection(std::move(connection_)) , pool(pool_) { } - PostgreSQLConnectionHolder(const PostgreSQLConnectionHolder & other) = delete; + ConnectionHolder(const ConnectionHolder & other) = delete; - ~PostgreSQLConnectionHolder() { pool.tryPush(connection, POSTGRESQL_POOL_WAIT_MS); } + ~ConnectionHolder() { pool.tryPush(connection, POSTGRESQL_POOL_WAIT_MS); } pqxx::connection & conn() const { return *connection->get(); } bool isConnected() { return connection->isConnected(); } private: - PostgreSQLConnectionPtr connection; + ConnectionPtr connection; Pool & pool; }; -using PostgreSQLConnectionHolderPtr = std::shared_ptr<PostgreSQLConnectionHolder>; +using ConnectionHolderPtr = std::shared_ptr<ConnectionHolder>; } diff --git a/src/Storages/PostgreSQL/PostgreSQLConnectionPool.cpp b/src/Storages/PostgreSQL/PostgreSQLConnectionPool.cpp index 659877b6b49..42c716dcf14 100644 --- a/src/Storages/PostgreSQL/PostgreSQLConnectionPool.cpp +++ b/src/Storages/PostgreSQL/PostgreSQLConnectionPool.cpp @@ -10,10 +10,10 @@ #include -namespace DB +namespace postgres { -PostgreSQLConnectionPool::PostgreSQLConnectionPool( +ConnectionPool::ConnectionPool( std::string dbname, std::string host, UInt16 port, @@ -37,7 +37,7 @@ PostgreSQLConnectionPool::PostgreSQLConnectionPool( } -PostgreSQLConnectionPool::PostgreSQLConnectionPool(const PostgreSQLConnectionPool & other) +ConnectionPool::ConnectionPool(const ConnectionPool & other) : pool(std::make_shared<Pool>(other.pool_size)) , connection_str(other.connection_str) , address(other.address) @@ -49,46 +49,46 @@ PostgreSQLConnectionPool::PostgreSQLConnectionPoo } -void PostgreSQLConnectionPool::initialize() +void ConnectionPool::initialize() { /// No connection is made, just fill pool with non-connected connection objects. 
for (size_t i = 0; i < pool_size; ++i) - pool->push(std::make_shared(connection_str, address)); + pool->push(std::make_shared(connection_str, address)); } -std::string PostgreSQLConnectionPool::formatConnectionString( +std::string ConnectionPool::formatConnectionString( std::string dbname, std::string host, UInt16 port, std::string user, std::string password) { - WriteBufferFromOwnString out; - out << "dbname=" << quote << dbname - << " host=" << quote << host + DB::WriteBufferFromOwnString out; + out << "dbname=" << DB::quote << dbname + << " host=" << DB::quote << host << " port=" << port - << " user=" << quote << user - << " password=" << quote << password; + << " user=" << DB::quote << user + << " password=" << DB::quote << password; return out.str(); } -PostgreSQLConnectionHolderPtr PostgreSQLConnectionPool::get() +ConnectionHolderPtr ConnectionPool::get() { - PostgreSQLConnectionPtr connection; + ConnectionPtr connection; /// Always blocks by default. if (block_on_empty_pool) { /// pop to ConcurrentBoundedQueue will block until it is non-empty. pool->pop(connection); - return std::make_shared(connection, *pool); + return std::make_shared(connection, *pool); } if (pool->tryPop(connection, pool_wait_timeout)) { - return std::make_shared(connection, *pool); + return std::make_shared(connection, *pool); } - connection = std::make_shared(connection_str, address); - return std::make_shared(connection, *pool); + connection = std::make_shared(connection_str, address); + return std::make_shared(connection, *pool); } } diff --git a/src/Storages/PostgreSQL/PostgreSQLConnectionPool.h b/src/Storages/PostgreSQL/PostgreSQLConnectionPool.h index a66254f81af..f1239fc78b5 100644 --- a/src/Storages/PostgreSQL/PostgreSQLConnectionPool.h +++ b/src/Storages/PostgreSQL/PostgreSQLConnectionPool.h @@ -8,42 +8,41 @@ #include "PostgreSQLConnection.h" -namespace DB +namespace postgres { -class PostgreSQLReplicaConnection; - +class PoolWithFailover; /// Connection pool size is defined by user with setting `postgresql_connection_pool_size` (default 16). /// If pool is empty, it will block until there are available connections. /// If setting `connection_pool_wait_timeout` is defined, it will not block on empty pool and will /// wait until the timeout and then create a new connection. 
(only for storage/db engine) -class PostgreSQLConnectionPool +class ConnectionPool { -friend class PostgreSQLReplicaConnection; +friend class PoolWithFailover; static constexpr inline auto POSTGRESQL_POOL_DEFAULT_SIZE = 16; public: - PostgreSQLConnectionPool( - std::string dbname, - std::string host, - UInt16 port, - std::string user, - std::string password, - size_t pool_size_ = POSTGRESQL_POOL_DEFAULT_SIZE, - int64_t pool_wait_timeout_ = -1); + ConnectionPool( + std::string dbname, + std::string host, + UInt16 port, + std::string user, + std::string password, + size_t pool_size_ = POSTGRESQL_POOL_DEFAULT_SIZE, + int64_t pool_wait_timeout_ = -1); - PostgreSQLConnectionPool(const PostgreSQLConnectionPool & other); + ConnectionPool(const ConnectionPool & other); - PostgreSQLConnectionPool operator =(const PostgreSQLConnectionPool &) = delete; + ConnectionPool operator =(const ConnectionPool &) = delete; - PostgreSQLConnectionHolderPtr get(); + ConnectionHolderPtr get(); private: - using Pool = ConcurrentBoundedQueue; + using Pool = ConcurrentBoundedQueue; using PoolPtr = std::shared_ptr; static std::string formatConnectionString( @@ -58,7 +57,7 @@ private: bool block_on_empty_pool; }; -using PostgreSQLConnectionPoolPtr = std::shared_ptr; +using ConnectionPoolPtr = std::shared_ptr; } diff --git a/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.cpp b/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.cpp new file mode 100644 index 00000000000..6230bb4bc3b --- /dev/null +++ b/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.cpp @@ -0,0 +1,110 @@ +#include "PostgreSQLPoolWithFailover.h" +#include "PostgreSQLConnection.h" +#include +#include +#include + + +namespace DB +{ +namespace ErrorCodes +{ + extern const int POSTGRESQL_CONNECTION_FAILURE; +} +} + +namespace postgres +{ + +PoolWithFailover::PoolWithFailover( + const Poco::Util::AbstractConfiguration & config, + const std::string & config_prefix, + const size_t max_tries_) + : max_tries(max_tries_) +{ + auto db = config.getString(config_prefix + ".db", ""); + auto host = config.getString(config_prefix + ".host", ""); + auto port = config.getUInt(config_prefix + ".port", 0); + auto user = config.getString(config_prefix + ".user", ""); + auto password = config.getString(config_prefix + ".password", ""); + + if (config.has(config_prefix + ".replica")) + { + Poco::Util::AbstractConfiguration::Keys config_keys; + config.keys(config_prefix, config_keys); + + for (const auto & config_key : config_keys) + { + if (config_key.starts_with("replica")) + { + std::string replica_name = config_prefix + "." 
+ config_key; + size_t priority = config.getInt(replica_name + ".priority", 0); + + auto replica_host = config.getString(replica_name + ".host", host); + auto replica_port = config.getUInt(replica_name + ".port", port); + auto replica_user = config.getString(replica_name + ".user", user); + auto replica_password = config.getString(replica_name + ".password", password); + + replicas_with_priority[priority].emplace_back(std::make_shared(db, replica_host, replica_port, replica_user, replica_password)); + } + } + } + else + { + replicas_with_priority[0].emplace_back(std::make_shared(db, host, port, user, password)); + } +} + + +PoolWithFailover::PoolWithFailover( + const std::string & database, + const RemoteDescription & addresses, + const std::string & user, + const std::string & password, + size_t pool_size, + int64_t pool_wait_timeout, + size_t max_tries_) + : max_tries(max_tries_) +{ + /// Replicas have the same priority, but traversed replicas are moved to the end of the queue. + for (const auto & [host, port] : addresses) + { + LOG_DEBUG(&Poco::Logger::get("PostgreSQLPoolWithFailover"), "Adding address host: {}, port: {} to connection pool", host, port); + replicas_with_priority[0].emplace_back(std::make_shared(database, host, port, user, password, pool_size, pool_wait_timeout)); + } +} + + +PoolWithFailover::PoolWithFailover(const PoolWithFailover & other) + : replicas_with_priority(other.replicas_with_priority) + , max_tries(other.max_tries) +{ +} + + +ConnectionHolderPtr PoolWithFailover::get() +{ + std::lock_guard lock(mutex); + + for (size_t try_idx = 0; try_idx < max_tries; ++try_idx) + { + for (auto & priority : replicas_with_priority) + { + auto & replicas = priority.second; + for (size_t i = 0; i < replicas.size(); ++i) + { + auto connection = replicas[i]->get(); + if (connection->isConnected()) + { + /// Move all traversed replicas to the end. 
+ std::rotate(replicas.begin(), replicas.begin() + i + 1, replicas.end()); + return connection; + } + } + } + } + + throw DB::Exception(DB::ErrorCodes::POSTGRESQL_CONNECTION_FAILURE, "Unable to connect to any of the replicas"); +} + +} diff --git a/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.h b/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.h new file mode 100644 index 00000000000..8f6027c2c0d --- /dev/null +++ b/src/Storages/PostgreSQL/PostgreSQLPoolWithFailover.h @@ -0,0 +1,51 @@ +#pragma once + +#include +#include +#include "PostgreSQLConnectionPool.h" + + +namespace postgres +{ + +class PoolWithFailover +{ + +using RemoteDescription = std::vector>; + +public: + static constexpr inline auto POSTGRESQL_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES = 5; + static constexpr inline auto POSTGRESQL_POOL_DEFAULT_SIZE = 16; + + PoolWithFailover( + const Poco::Util::AbstractConfiguration & config, + const std::string & config_prefix, + const size_t max_tries_ = POSTGRESQL_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); + + PoolWithFailover( + const std::string & database, + const RemoteDescription & addresses, + const std::string & user, + const std::string & password, + size_t pool_size = POSTGRESQL_POOL_DEFAULT_SIZE, + int64_t pool_wait_timeout = -1, + size_t max_tries_ = POSTGRESQL_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES); + + PoolWithFailover(const PoolWithFailover & other); + + ConnectionHolderPtr get(); + + +private: + /// Highest priority is 0, the bigger the number in map, the less the priority + using Replicas = std::vector; + using ReplicasWithPriority = std::map; + + ReplicasWithPriority replicas_with_priority; + size_t max_tries; + std::mutex mutex; +}; + +using PoolWithFailoverPtr = std::shared_ptr; + +} diff --git a/src/Storages/PostgreSQL/PostgreSQLReplicaConnection.cpp b/src/Storages/PostgreSQL/PostgreSQLReplicaConnection.cpp deleted file mode 100644 index 30e06d4c17c..00000000000 --- a/src/Storages/PostgreSQL/PostgreSQLReplicaConnection.cpp +++ /dev/null @@ -1,79 +0,0 @@ -#include "PostgreSQLReplicaConnection.h" -#include "PostgreSQLConnection.h" -#include - - -namespace DB -{ - -namespace ErrorCodes -{ - extern const int POSTGRESQL_CONNECTION_FAILURE; -} - - -PostgreSQLReplicaConnection::PostgreSQLReplicaConnection( - const Poco::Util::AbstractConfiguration & config, - const String & config_prefix, - const size_t num_retries_) - : num_retries(num_retries_) -{ - auto db = config.getString(config_prefix + ".db", ""); - auto host = config.getString(config_prefix + ".host", ""); - auto port = config.getUInt(config_prefix + ".port", 0); - auto user = config.getString(config_prefix + ".user", ""); - auto password = config.getString(config_prefix + ".password", ""); - - if (config.has(config_prefix + ".replica")) - { - Poco::Util::AbstractConfiguration::Keys config_keys; - config.keys(config_prefix, config_keys); - - for (const auto & config_key : config_keys) - { - if (config_key.starts_with("replica")) - { - std::string replica_name = config_prefix + "." 
+ config_key; - size_t priority = config.getInt(replica_name + ".priority", 0); - - auto replica_host = config.getString(replica_name + ".host", host); - auto replica_port = config.getUInt(replica_name + ".port", port); - auto replica_user = config.getString(replica_name + ".user", user); - auto replica_password = config.getString(replica_name + ".password", password); - - replicas[priority] = std::make_shared(db, replica_host, replica_port, replica_user, replica_password); - } - } - } - else - { - replicas[0] = std::make_shared(db, host, port, user, password); - } -} - - -PostgreSQLReplicaConnection::PostgreSQLReplicaConnection(const PostgreSQLReplicaConnection & other) - : replicas(other.replicas) - , num_retries(other.num_retries) -{ -} - - -PostgreSQLConnectionHolderPtr PostgreSQLReplicaConnection::get() -{ - std::lock_guard lock(mutex); - - for (size_t i = 0; i < num_retries; ++i) - { - for (auto & replica : replicas) - { - auto connection = replica.second->get(); - if (connection->isConnected()) - return connection; - } - } - - throw Exception(ErrorCodes::POSTGRESQL_CONNECTION_FAILURE, "Unable to connect to any of the replicas"); -} - -} diff --git a/src/Storages/RabbitMQ/RabbitMQBlockInputStream.cpp b/src/Storages/RabbitMQ/RabbitMQBlockInputStream.cpp index c5c43440228..6c3d3a53c21 100644 --- a/src/Storages/RabbitMQ/RabbitMQBlockInputStream.cpp +++ b/src/Storages/RabbitMQ/RabbitMQBlockInputStream.cpp @@ -16,7 +16,7 @@ namespace DB RabbitMQBlockInputStream::RabbitMQBlockInputStream( StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, - std::shared_ptr context_, + ContextPtr context_, const Names & columns, size_t max_block_size_, bool ack_in_suffix_) @@ -91,7 +91,7 @@ Block RabbitMQBlockInputStream::readImpl() MutableColumns virtual_columns = virtual_header.cloneEmptyColumns(); auto input_format = FormatFactory::instance().getInputFormat( - storage.getFormatName(), *buffer, non_virtual_header, *context, max_block_size); + storage.getFormatName(), *buffer, non_virtual_header, context, max_block_size); InputPort port(input_format->getPort().getHeader(), input_format.get()); connect(input_format->getPort(), port); diff --git a/src/Storages/RabbitMQ/RabbitMQBlockInputStream.h b/src/Storages/RabbitMQ/RabbitMQBlockInputStream.h index 8b93ca4e911..5ce1c96bf33 100644 --- a/src/Storages/RabbitMQ/RabbitMQBlockInputStream.h +++ b/src/Storages/RabbitMQ/RabbitMQBlockInputStream.h @@ -15,7 +15,7 @@ public: RabbitMQBlockInputStream( StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, - std::shared_ptr context_, + ContextPtr context_, const Names & columns, size_t max_block_size_, bool ack_in_suffix = true); @@ -38,7 +38,7 @@ public: private: StorageRabbitMQ & storage; StorageMetadataPtr metadata_snapshot; - std::shared_ptr context; + ContextPtr context; Names column_names; const size_t max_block_size; bool ack_in_suffix; diff --git a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp index a987fff3c64..3c837cb95b1 100644 --- a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp +++ b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp @@ -11,7 +11,7 @@ namespace DB RabbitMQBlockOutputStream::RabbitMQBlockOutputStream( StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, - const Context & context_) + ContextPtr context_) : storage(storage_) , metadata_snapshot(metadata_snapshot_) , context(context_) diff --git a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.h 
b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.h index 7e5c22f9f39..3941875ea86 100644 --- a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.h +++ b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.h @@ -11,7 +11,7 @@ class RabbitMQBlockOutputStream : public IBlockOutputStream { public: - explicit RabbitMQBlockOutputStream(StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, const Context & context_); + explicit RabbitMQBlockOutputStream(StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, ContextPtr context_); Block getHeader() const override; @@ -22,7 +22,7 @@ public: private: StorageRabbitMQ & storage; StorageMetadataPtr metadata_snapshot; - const Context & context; + ContextPtr context; ProducerBufferPtr buffer; BlockOutputStreamPtr child; }; diff --git a/src/Storages/RabbitMQ/StorageRabbitMQ.cpp b/src/Storages/RabbitMQ/StorageRabbitMQ.cpp index 0ecf85e5c3d..55629f2a205 100644 --- a/src/Storages/RabbitMQ/StorageRabbitMQ.cpp +++ b/src/Storages/RabbitMQ/StorageRabbitMQ.cpp @@ -70,31 +70,31 @@ namespace ExchangeType StorageRabbitMQ::StorageRabbitMQ( const StorageID & table_id_, - const Context & context_, + ContextPtr context_, const ColumnsDescription & columns_, std::unique_ptr rabbitmq_settings_) : IStorage(table_id_) - , global_context(context_.getGlobalContext()) + , WithContext(context_->getGlobalContext()) , rabbitmq_settings(std::move(rabbitmq_settings_)) - , exchange_name(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_exchange_name.value)) - , format_name(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_format.value)) - , exchange_type(defineExchangeType(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_exchange_type.value))) - , routing_keys(parseRoutingKeys(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_routing_key_list.value))) + , exchange_name(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_exchange_name.value)) + , format_name(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_format.value)) + , exchange_type(defineExchangeType(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_exchange_type.value))) + , routing_keys(parseRoutingKeys(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_routing_key_list.value))) , row_delimiter(rabbitmq_settings->rabbitmq_row_delimiter.value) - , schema_name(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_schema.value)) + , schema_name(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_schema.value)) , num_consumers(rabbitmq_settings->rabbitmq_num_consumers.value) , num_queues(rabbitmq_settings->rabbitmq_num_queues.value) - , queue_base(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_queue_base.value)) - , deadletter_exchange(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_deadletter_exchange.value)) + , queue_base(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_queue_base.value)) + , deadletter_exchange(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_deadletter_exchange.value)) , persistent(rabbitmq_settings->rabbitmq_persistent.value) , hash_exchange(num_consumers > 1 || num_queues > 1) , log(&Poco::Logger::get("StorageRabbitMQ (" + table_id_.table_name + ")")) - , address(global_context.getMacros()->expand(rabbitmq_settings->rabbitmq_host_port.value)) + , address(getContext()->getMacros()->expand(rabbitmq_settings->rabbitmq_host_port.value)) , parsed_address(parseAddress(address, 5672)) , 
login_password(std::make_pair( - global_context.getConfigRef().getString("rabbitmq.username"), - global_context.getConfigRef().getString("rabbitmq.password"))) - , vhost(global_context.getConfigRef().getString("rabbitmq.vhost", "/")) + getContext()->getConfigRef().getString("rabbitmq.username"), + getContext()->getConfigRef().getString("rabbitmq.password"))) + , vhost(getContext()->getConfigRef().getString("rabbitmq.vhost", "/")) , semaphore(0, num_consumers) , unique_strbase(getRandomName()) , queue_size(std::max(QUEUE_SIZE, static_cast(getMaxBlockSize()))) @@ -106,18 +106,18 @@ StorageRabbitMQ::StorageRabbitMQ( storage_metadata.setColumns(columns_); setInMemoryMetadata(storage_metadata); - rabbitmq_context = addSettings(global_context); + rabbitmq_context = addSettings(getContext()); rabbitmq_context->makeQueryContext(); /// One looping task for all consumers as they share the same connection == the same handler == the same event loop event_handler->updateLoopState(Loop::STOP); - looping_task = global_context.getMessageBrokerSchedulePool().createTask("RabbitMQLoopingTask", [this]{ loopingFunc(); }); + looping_task = getContext()->getMessageBrokerSchedulePool().createTask("RabbitMQLoopingTask", [this]{ loopingFunc(); }); looping_task->deactivate(); - streaming_task = global_context.getMessageBrokerSchedulePool().createTask("RabbitMQStreamingTask", [this]{ streamingToViewsFunc(); }); + streaming_task = getContext()->getMessageBrokerSchedulePool().createTask("RabbitMQStreamingTask", [this]{ streamingToViewsFunc(); }); streaming_task->deactivate(); - connection_task = global_context.getMessageBrokerSchedulePool().createTask("RabbitMQConnectionTask", [this]{ connectionFunc(); }); + connection_task = getContext()->getMessageBrokerSchedulePool().createTask("RabbitMQConnectionTask", [this]{ connectionFunc(); }); connection_task->deactivate(); if (queue_base.empty()) @@ -188,9 +188,9 @@ String StorageRabbitMQ::getTableBasedName(String name, const StorageID & table_i } -std::shared_ptr StorageRabbitMQ::addSettings(const Context & context) const +std::shared_ptr StorageRabbitMQ::addSettings(ContextPtr local_context) const { - auto modified_context = std::make_shared(context); + auto modified_context = Context::createCopy(local_context); modified_context->setSetting("input_format_skip_unknown_fields", true); modified_context->setSetting("input_format_allow_errors_ratio", 0.); modified_context->setSetting("input_format_allow_errors_num", rabbitmq_settings->rabbitmq_skip_broken_messages.value); @@ -253,7 +253,7 @@ size_t StorageRabbitMQ::getMaxBlockSize() const { return rabbitmq_settings->rabbitmq_max_block_size.changed ? 
rabbitmq_settings->rabbitmq_max_block_size.value - : (global_context.getSettingsRef().max_insert_block_size.value / num_consumers); + : (getContext()->getSettingsRef().max_insert_block_size.value / num_consumers); } @@ -562,7 +562,7 @@ Pipe StorageRabbitMQ::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /* query_info */, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /* processed_stage */, size_t /* max_block_size */, unsigned /* num_streams */) @@ -574,7 +574,7 @@ Pipe StorageRabbitMQ::read( return {}; auto sample_block = metadata_snapshot->getSampleBlockForColumns(column_names, getVirtuals(), getStorageID()); - auto modified_context = addSettings(context); + auto modified_context = addSettings(local_context); auto block_size = getMaxBlockSize(); if (!event_handler->connectionRunning()) @@ -607,9 +607,9 @@ Pipe StorageRabbitMQ::read( } -BlockOutputStreamPtr StorageRabbitMQ::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageRabbitMQ::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - return std::make_shared(*this, metadata_snapshot, context); + return std::make_shared(*this, metadata_snapshot, local_context); } @@ -712,7 +712,7 @@ ConsumerBufferPtr StorageRabbitMQ::createReadBuffer() ProducerBufferPtr StorageRabbitMQ::createWriteBuffer() { return std::make_shared( - parsed_address, global_context, login_password, vhost, routing_keys, exchange_name, exchange_type, + parsed_address, getContext(), login_password, vhost, routing_keys, exchange_name, exchange_type, producer_id.fetch_add(1), persistent, wait_confirm, log, row_delimiter ? std::optional{row_delimiter} : std::nullopt, 1, 1024); } @@ -728,7 +728,7 @@ bool StorageRabbitMQ::checkDependencies(const StorageID & table_id) // Check the dependencies are ready? for (const auto & db_tab : dependencies) { - auto table = DatabaseCatalog::instance().tryGetTable(db_tab, global_context); + auto table = DatabaseCatalog::instance().tryGetTable(db_tab, getContext()); if (!table) return false; @@ -798,7 +798,7 @@ void StorageRabbitMQ::streamingToViewsFunc() bool StorageRabbitMQ::streamToViews() { auto table_id = getStorageID(); - auto table = DatabaseCatalog::instance().getTable(table_id, global_context); + auto table = DatabaseCatalog::instance().getTable(table_id, getContext()); if (!table) throw Exception("Engine table " + table_id.getNameForLogs() + " doesn't exist.", ErrorCodes::LOGICAL_ERROR); @@ -807,7 +807,7 @@ bool StorageRabbitMQ::streamToViews() insert->table_id = table_id; // Only insert into dependent views and expect that input blocks contain virtual columns - InterpreterInsertQuery interpreter(insert, *rabbitmq_context, false, true, true); + InterpreterInsertQuery interpreter(insert, rabbitmq_context, false, true, true); auto block_io = interpreter.execute(); auto metadata_snapshot = getInMemoryMetadataPtr(); @@ -831,7 +831,7 @@ bool StorageRabbitMQ::streamToViews() limits.speed_limits.max_execution_time = rabbitmq_settings->rabbitmq_flush_interval_ms.changed ? 
rabbitmq_settings->rabbitmq_flush_interval_ms - : global_context.getSettingsRef().stream_flush_interval_ms; + : getContext()->getSettingsRef().stream_flush_interval_ms; limits.timeout_overflow_mode = OverflowMode::BREAK; @@ -990,7 +990,7 @@ void registerStorageRabbitMQ(StorageFactory & factory) #undef CHECK_RABBITMQ_STORAGE_ARGUMENT - return StorageRabbitMQ::create(args.table_id, args.context, args.columns, std::move(rabbitmq_settings)); + return StorageRabbitMQ::create(args.table_id, args.getContext(), args.columns, std::move(rabbitmq_settings)); }; factory.registerStorage("RabbitMQ", creator_fn, StorageFactory::StorageFeatures{ .supports_settings = true, }); diff --git a/src/Storages/RabbitMQ/StorageRabbitMQ.h b/src/Storages/RabbitMQ/StorageRabbitMQ.h index 9f573ea4a3e..eeda6b9fdca 100644 --- a/src/Storages/RabbitMQ/StorageRabbitMQ.h +++ b/src/Storages/RabbitMQ/StorageRabbitMQ.h @@ -19,11 +19,9 @@ namespace DB { -class Context; - using ChannelPtr = std::shared_ptr; -class StorageRabbitMQ final: public ext::shared_ptr_helper, public IStorage +class StorageRabbitMQ final: public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; @@ -40,7 +38,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -48,7 +46,7 @@ public: BlockOutputStreamPtr write( const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, - const Context & context) override; + ContextPtr context) override; void pushReadBuffer(ConsumerBufferPtr buf); ConsumerBufferPtr popReadBuffer(); @@ -59,7 +57,7 @@ public: const String & getFormatName() const { return format_name; } NamesAndTypesList getVirtuals() const override; - const String getExchange() const { return exchange_name; } + String getExchange() const { return exchange_name; } void unbindExchange(); bool exchangeRemoved() { return exchange_removed.load(); } @@ -69,13 +67,12 @@ public: protected: StorageRabbitMQ( const StorageID & table_id_, - const Context & context_, + ContextPtr context_, const ColumnsDescription & columns_, std::unique_ptr rabbitmq_settings_); private: - const Context & global_context; - std::shared_ptr rabbitmq_context; + ContextPtr rabbitmq_context; std::unique_ptr rabbitmq_settings; const String exchange_name; @@ -139,7 +136,7 @@ private: static AMQP::ExchangeType defineExchangeType(String exchange_type_); static String getTableBasedName(String name, const StorageID & table_id); - std::shared_ptr addSettings(const Context & context) const; + std::shared_ptr addSettings(ContextPtr context) const; size_t getMaxBlockSize() const; void deactivateTask(BackgroundSchedulePool::TaskHolder & task, bool wait, bool stop_loop); diff --git a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp index ebee27faf17..b9af60eb66f 100644 --- a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp +++ b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp @@ -27,7 +27,7 @@ namespace ErrorCodes WriteBufferToRabbitMQProducer::WriteBufferToRabbitMQProducer( std::pair & parsed_address_, - const Context & global_context, + ContextPtr global_context, const std::pair & login_password_, const String & vhost_, const Names & routing_keys_, @@ -72,7 +72,7 @@ WriteBufferToRabbitMQProducer::WriteBufferToRabbitMQProducer( ErrorCodes::CANNOT_CONNECT_RABBITMQ); 
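The recurring change in the RabbitMQ hunks above, and throughout the rest of this patch, is the migration from `const Context &` members to a shared `ContextPtr` plus the `WithContext` mixin, with per-query tweaks made on a copy obtained from `Context::createCopy()`. The sketch below uses deliberately simplified, hypothetical stand-ins for these types (the real declarations live in `Interpreters/Context_fwd.h` / `Interpreters/Context.h` and differ in detail); it only illustrates the pattern the diff migrates to, not ClickHouse's actual API.

```cpp
#include <memory>

// Hypothetical, stripped-down stand-ins; not ClickHouse's actual definitions.
struct Context
{
    static std::shared_ptr<Context> createCopy(const std::shared_ptr<const Context> & other)
    {
        return std::make_shared<Context>(*other);   // a copy the caller may mutate
    }
    void setSetting(const char * /*name*/, bool /*value*/) {}
};

using ContextPtr = std::shared_ptr<const Context>;
using ContextMutablePtr = std::shared_ptr<Context>;

class WithContext
{
public:
    explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}
    ContextPtr getContext() const { return context; }
private:
    ContextPtr context;
};

// A storage that used to hold `const Context & global_context` now inherits
// WithContext and reaches the context through getContext(); per-query overrides
// copy the context first, as StorageRabbitMQ::addSettings() does above.
class StorageExample : public WithContext
{
public:
    explicit StorageExample(ContextPtr context_) : WithContext(std::move(context_)) {}

    ContextMutablePtr makeModifiedContext() const
    {
        auto modified = Context::createCopy(getContext());
        modified->setSetting("input_format_skip_unknown_fields", true);
        return modified;
    }
};
```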
} - writing_task = global_context.getSchedulePool().createTask("RabbitMQWritingTask", [this]{ writingFunc(); }); + writing_task = global_context->getSchedulePool().createTask("RabbitMQWritingTask", [this]{ writingFunc(); }); writing_task->deactivate(); if (exchange_type == AMQP::ExchangeType::headers) diff --git a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h index e88f5e10e74..452cc38d751 100644 --- a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h +++ b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h @@ -20,7 +20,7 @@ class WriteBufferToRabbitMQProducer : public WriteBuffer public: WriteBufferToRabbitMQProducer( std::pair & parsed_address_, - const Context & global_context, + ContextPtr global_context, const std::pair & login_password_, const String & vhost_, const Names & routing_keys_, diff --git a/src/Storages/ReadInOrderOptimizer.cpp b/src/Storages/ReadInOrderOptimizer.cpp index 2b751329208..3bb7034b588 100644 --- a/src/Storages/ReadInOrderOptimizer.cpp +++ b/src/Storages/ReadInOrderOptimizer.cpp @@ -34,7 +34,7 @@ ReadInOrderOptimizer::ReadInOrderOptimizer( forbidden_columns = syntax_result->getArrayJoinSourceNameSet(); } -InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & metadata_snapshot, const Context & context) const +InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & metadata_snapshot, ContextPtr context) const { Names sorting_key_columns = metadata_snapshot->getSortingKeyColumns(); if (!metadata_snapshot->hasSortingKey()) @@ -44,7 +44,7 @@ InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & int read_direction = required_sort_description.at(0).direction; size_t prefix_size = std::min(required_sort_description.size(), sorting_key_columns.size()); - auto aliase_columns = metadata_snapshot->getColumns().getAliases(); + auto aliased_columns = metadata_snapshot->getColumns().getAliases(); for (size_t i = 0; i < prefix_size; ++i) { @@ -55,13 +55,18 @@ InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & /// or in some simple cases when order key element is wrapped into monotonic function. auto apply_order_judge = [&] (const ExpressionActions::Actions & actions, const String & sort_column) { + /// If required order depend on collation, it cannot be matched with primary key order. + /// Because primary keys cannot have collations. + if (required_sort_description[i].collator) + return false; + int current_direction = required_sort_description[i].direction; - /// For the path: order by (sort_column, ...) + /// For the path: order by (sort_column, ...) if (sort_column == sorting_key_columns[i] && current_direction == read_direction) { return true; } - /// For the path: order by (function(sort_column), ...) + /// For the path: order by (function(sort_column), ...) /// Allow only one simple monotonic functions with one argument /// Why not allow multi monotonic functions? else @@ -125,7 +130,7 @@ InputOrderInfoPtr ReadInOrderOptimizer::getInputOrder(const StorageMetadataPtr & /// currently we only support alias column without any function wrapper /// ie: `order by aliased_column` can have this optimization, but `order by function(aliased_column)` can not. /// This suits most cases. 
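The ReadInOrderOptimizer hunk above adds an early exit for collated sort elements, while the comment here limits the alias optimization to a bare `ORDER BY aliased_column`. A toy mirror of the collator guard follows, with a hypothetical stripped-down sort element in place of the real `SortColumnDescription`: an element that carries a collator can never be served by reading in primary-key order, because key columns cannot declare collations.

```cpp
#include <memory>
#include <string>

// Hypothetical simplified sort element; the real type is SortColumnDescription.
struct SortElement
{
    std::string column_name;
    int direction = 1;
    std::shared_ptr<void> collator;  // non-null when the query used COLLATE
};

// Mirrors the new guard: a collated comparison differs from the key's plain
// type order, so the read-in-order optimization must bail out for this element.
static bool matchesSortingKey(const SortElement & elem, const std::string & key_column, int read_direction)
{
    if (elem.collator)
        return false;
    return elem.column_name == key_column && elem.direction == read_direction;
}
```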
- if (context.getSettingsRef().optimize_respect_aliases && aliase_columns.contains(required_sort_description[i].column_name)) + if (context->getSettingsRef().optimize_respect_aliases && aliased_columns.contains(required_sort_description[i].column_name)) { auto column_expr = metadata_snapshot->getColumns().get(required_sort_description[i].column_name).default_desc.expression->clone(); replaceAliasColumnsInQuery(column_expr, metadata_snapshot->getColumns(), forbidden_columns, context); diff --git a/src/Storages/ReadInOrderOptimizer.h b/src/Storages/ReadInOrderOptimizer.h index 3676f4cc88c..0af1121db32 100644 --- a/src/Storages/ReadInOrderOptimizer.h +++ b/src/Storages/ReadInOrderOptimizer.h @@ -22,7 +22,7 @@ public: const SortDescription & required_sort_description, const TreeRewriterResultPtr & syntax_result); - InputOrderInfoPtr getInputOrder(const StorageMetadataPtr & metadata_snapshot, const Context & context) const; + InputOrderInfoPtr getInputOrder(const StorageMetadataPtr & metadata_snapshot, ContextPtr context) const; private: /// Actions for every element of order expression to analyze functions for monotonicity diff --git a/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp b/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp index 9b0a0c36b45..9173c23ec5a 100644 --- a/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp +++ b/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp @@ -245,12 +245,12 @@ StorageEmbeddedRocksDB::StorageEmbeddedRocksDB(const StorageID & table_id_, const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, bool attach, - Context & context_, + ContextPtr context_, const String & primary_key_) : IStorage(table_id_), primary_key{primary_key_} { setInMemoryMetadata(metadata_); - rocksdb_dir = context_.getPath() + relative_data_path_; + rocksdb_dir = context_->getPath() + relative_data_path_; if (!attach) { Poco::File(rocksdb_dir).createDirectories(); @@ -258,7 +258,7 @@ StorageEmbeddedRocksDB::StorageEmbeddedRocksDB(const StorageID & table_id_, initDb(); } -void StorageEmbeddedRocksDB::truncate(const ASTPtr &, const StorageMetadataPtr & , const Context &, TableExclusiveLockHolder &) +void StorageEmbeddedRocksDB::truncate(const ASTPtr &, const StorageMetadataPtr & , ContextPtr, TableExclusiveLockHolder &) { rocksdb_ptr->Close(); Poco::File(rocksdb_dir).remove(true); @@ -284,7 +284,7 @@ Pipe StorageEmbeddedRocksDB::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) @@ -331,7 +331,7 @@ Pipe StorageEmbeddedRocksDB::read( } BlockOutputStreamPtr StorageEmbeddedRocksDB::write( - const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) + const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { return std::make_shared(*this, metadata_snapshot); } @@ -352,13 +352,13 @@ static StoragePtr create(const StorageFactory::Arguments & args) if (!args.storage_def->primary_key) throw Exception("StorageEmbeddedRocksDB must require one column in primary key", ErrorCodes::BAD_ARGUMENTS); - metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->primary_key->ptr(), metadata.columns, args.context); + metadata.primary_key = KeyDescription::getKeyFromAST(args.storage_def->primary_key->ptr(), metadata.columns, args.getContext()); auto primary_key_names = 
metadata.getColumnsRequiredForPrimaryKey(); if (primary_key_names.size() != 1) { throw Exception("StorageEmbeddedRocksDB must require one column in primary key", ErrorCodes::BAD_ARGUMENTS); } - return StorageEmbeddedRocksDB::create(args.table_id, args.relative_data_path, metadata, args.attach, args.context, primary_key_names[0]); + return StorageEmbeddedRocksDB::create(args.table_id, args.relative_data_path, metadata, args.attach, args.getContext(), primary_key_names[0]); } diff --git a/src/Storages/RocksDB/StorageEmbeddedRocksDB.h b/src/Storages/RocksDB/StorageEmbeddedRocksDB.h index f1a8c4713eb..64255392c35 100644 --- a/src/Storages/RocksDB/StorageEmbeddedRocksDB.h +++ b/src/Storages/RocksDB/StorageEmbeddedRocksDB.h @@ -29,18 +29,18 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; - void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; + void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override; bool supportsParallelInsert() const override { return true; } bool supportsIndexForIn() const override { return true; } bool mayBenefitFromIndexForIn( - const ASTPtr & node, const Context & /*query_context*/, const StorageMetadataPtr & /*metadata_snapshot*/) const override + const ASTPtr & node, ContextPtr /*query_context*/, const StorageMetadataPtr & /*metadata_snapshot*/) const override { return node->getColumnName() == primary_key; } @@ -53,7 +53,7 @@ protected: const String & relative_data_path_, const StorageInMemoryMetadata & metadata, bool attach, - Context & context_, + ContextPtr context_, const String & primary_key_); private: diff --git a/src/Storages/SelectQueryDescription.cpp b/src/Storages/SelectQueryDescription.cpp index c11e6bd74f8..05747a9a260 100644 --- a/src/Storages/SelectQueryDescription.cpp +++ b/src/Storages/SelectQueryDescription.cpp @@ -44,11 +44,11 @@ SelectQueryDescription & SelectQueryDescription::SelectQueryDescription::operato namespace { -StorageID extractDependentTableFromSelectQuery(ASTSelectQuery & query, const Context & context, bool add_default_db = true) +StorageID extractDependentTableFromSelectQuery(ASTSelectQuery & query, ContextPtr context, bool add_default_db = true) { if (add_default_db) { - AddDefaultDatabaseVisitor visitor(context.getCurrentDatabase(), false, nullptr); + AddDefaultDatabaseVisitor visitor(context->getCurrentDatabase(), false, nullptr); visitor.visit(query); } @@ -114,7 +114,7 @@ static bool isSingleSelect(const ASTPtr & select, ASTPtr & res) return isSingleSelect(new_inner_query, res); } -SelectQueryDescription SelectQueryDescription::getSelectQueryFromASTForMatView(const ASTPtr & select, const Context & context) +SelectQueryDescription SelectQueryDescription::getSelectQueryFromASTForMatView(const ASTPtr & select, ContextPtr context) { ASTPtr new_inner_query; diff --git a/src/Storages/SelectQueryDescription.h b/src/Storages/SelectQueryDescription.h index ce3ca44c147..28a0a186a07 100644 --- 
a/src/Storages/SelectQueryDescription.h +++ b/src/Storages/SelectQueryDescription.h @@ -1,5 +1,6 @@ #pragma once +#include #include namespace DB @@ -17,7 +18,7 @@ struct SelectQueryDescription /// Parse description from select query for materialized view. Also /// validates query. - static SelectQueryDescription getSelectQueryFromASTForMatView(const ASTPtr & select, const Context & context); + static SelectQueryDescription getSelectQueryFromASTForMatView(const ASTPtr & select, ContextPtr context); SelectQueryDescription() = default; SelectQueryDescription(const SelectQueryDescription & other); diff --git a/src/Storages/SelectQueryInfo.h b/src/Storages/SelectQueryInfo.h index fea9a7bad68..b4ac07c612a 100644 --- a/src/Storages/SelectQueryInfo.h +++ b/src/Storages/SelectQueryInfo.h @@ -119,9 +119,13 @@ struct SelectQueryInfo ASTPtr query; ASTPtr view_query; /// Optimized VIEW query - /// For optimize_skip_unused_shards. - /// Can be modified in getQueryProcessingStage() + /// Cluster for the query. ClusterPtr cluster; + /// Optimized cluster for the query. + /// In case of optimize_skip_unused_shards it may differs from original cluster. + /// + /// Configured in StorageDistributed::getQueryProcessingStage() + ClusterPtr optimized_cluster; TreeRewriterResultPtr syntax_analyzer_result; @@ -134,6 +138,8 @@ struct SelectQueryInfo /// Prepared sets are used for indices by storage engine. /// Example: x IN (1, 2, 3) PreparedSets sets; + + ClusterPtr getCluster() const { return !optimized_cluster ? cluster : optimized_cluster; } }; } diff --git a/src/Storages/StorageBuffer.cpp b/src/Storages/StorageBuffer.cpp index 6dc32f4c880..afe37d0bcbe 100644 --- a/src/Storages/StorageBuffer.cpp +++ b/src/Storages/StorageBuffer.cpp @@ -40,6 +40,11 @@ namespace ProfileEvents extern const Event StorageBufferPassedTimeMaxThreshold; extern const Event StorageBufferPassedRowsMaxThreshold; extern const Event StorageBufferPassedBytesMaxThreshold; + extern const Event StorageBufferPassedTimeFlushThreshold; + extern const Event StorageBufferPassedRowsFlushThreshold; + extern const Event StorageBufferPassedBytesFlushThreshold; + extern const Event StorageBufferLayerLockReadersWaitMilliseconds; + extern const Event StorageBufferLayerLockWritersWaitMilliseconds; } namespace CurrentMetrics @@ -63,25 +68,57 @@ namespace ErrorCodes } +std::unique_lock StorageBuffer::Buffer::lockForReading() const +{ + return lockImpl(/* read= */true); +} +std::unique_lock StorageBuffer::Buffer::lockForWriting() const +{ + return lockImpl(/* read= */false); +} +std::unique_lock StorageBuffer::Buffer::tryLock() const +{ + std::unique_lock lock(mutex, std::try_to_lock); + return lock; +} +std::unique_lock StorageBuffer::Buffer::lockImpl(bool read) const +{ + std::unique_lock lock(mutex, std::defer_lock); + + Stopwatch watch(CLOCK_MONOTONIC_COARSE); + lock.lock(); + UInt64 elapsed = watch.elapsedMilliseconds(); + + if (read) + ProfileEvents::increment(ProfileEvents::StorageBufferLayerLockReadersWaitMilliseconds, elapsed); + else + ProfileEvents::increment(ProfileEvents::StorageBufferLayerLockWritersWaitMilliseconds, elapsed); + + return lock; +} + + StorageBuffer::StorageBuffer( const StorageID & table_id_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, + ContextPtr context_, size_t num_shards_, const Thresholds & min_thresholds_, const Thresholds & max_thresholds_, + const Thresholds & flush_thresholds_, const StorageID & destination_id_, bool allow_materialized_) : 
IStorage(table_id_) - , buffer_context(context_.getBufferContext()) + , WithContext(context_->getBufferContext()) , num_shards(num_shards_), buffers(num_shards_) , min_thresholds(min_thresholds_) , max_thresholds(max_thresholds_) + , flush_thresholds(flush_thresholds_) , destination_id(destination_id_) , allow_materialized(allow_materialized_) , log(&Poco::Logger::get("StorageBuffer (" + table_id_.getFullTableName() + ")")) - , bg_pool(buffer_context.getBufferFlushSchedulePool()) + , bg_pool(getContext()->getBufferFlushSchedulePool()) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -111,7 +148,7 @@ protected: return res; has_been_read = true; - std::lock_guard lock(buffer.mutex); + std::unique_lock lock(buffer.lockForReading()); if (!buffer.data.rows()) return res; @@ -141,16 +178,16 @@ private: }; -QueryProcessingStage::Enum StorageBuffer::getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const +QueryProcessingStage::Enum StorageBuffer::getQueryProcessingStage(ContextPtr local_context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const { if (destination_id) { - auto destination = DatabaseCatalog::instance().getTable(destination_id, context); + auto destination = DatabaseCatalog::instance().getTable(destination_id, local_context); if (destination.get() == this) throw Exception("Destination table is myself. Read will cause infinite loop.", ErrorCodes::INFINITE_LOOP); - return destination->getQueryProcessingStage(context, to_stage, query_info); + return destination->getQueryProcessingStage(local_context, to_stage, query_info); } return QueryProcessingStage::FetchColumns; @@ -161,16 +198,16 @@ Pipe StorageBuffer::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); return plan.convertToPipe( - QueryPlanOptimizationSettings::fromContext(context), - BuildQueryPipelineSettings::fromContext(context)); + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } void StorageBuffer::read( @@ -178,19 +215,19 @@ void StorageBuffer::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) { if (destination_id) { - auto destination = DatabaseCatalog::instance().getTable(destination_id, context); + auto destination = DatabaseCatalog::instance().getTable(destination_id, local_context); if (destination.get() == this) throw Exception("Destination table is myself. 
Read will cause infinite loop.", ErrorCodes::INFINITE_LOOP); - auto destination_lock = destination->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto destination_lock = destination->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto destination_metadata_snapshot = destination->getInMemoryMetadataPtr(); @@ -205,12 +242,12 @@ void StorageBuffer::read( if (dst_has_same_structure) { if (query_info.order_optimizer) - query_info.input_order_info = query_info.order_optimizer->getInputOrder(destination_metadata_snapshot, context); + query_info.input_order_info = query_info.order_optimizer->getInputOrder(destination_metadata_snapshot, local_context); /// The destination table has the same structure of the requested columns and we can simply read blocks from there. destination->read( query_plan, column_names, destination_metadata_snapshot, query_info, - context, processed_stage, max_block_size, num_streams); + local_context, processed_stage, max_block_size, num_streams); } else { @@ -245,7 +282,7 @@ void StorageBuffer::read( { destination->read( query_plan, columns_intersection, destination_metadata_snapshot, query_info, - context, processed_stage, max_block_size, num_streams); + local_context, processed_stage, max_block_size, num_streams); if (query_plan.isInitialized()) { @@ -254,7 +291,7 @@ void StorageBuffer::read( query_plan.getCurrentDataStream().header, header_after_adding_defaults.getNamesAndTypesList(), metadata_snapshot->getColumns(), - context); + local_context); auto adding_missed = std::make_unique( query_plan.getCurrentDataStream(), @@ -317,7 +354,7 @@ void StorageBuffer::read( if (processed_stage > QueryProcessingStage::FetchColumns) { auto interpreter = InterpreterSelectQuery( - query_info.query, context, std::move(pipe_from_buffers), + query_info.query, local_context, std::move(pipe_from_buffers), SelectQueryOptions(processed_stage)); interpreter.buildQueryPlan(buffers_plan); } @@ -391,7 +428,7 @@ void StorageBuffer::read( plans.emplace_back(std::make_unique(std::move(buffers_plan))); query_plan = QueryPlan(); - auto union_step = std::make_unique(std::move(input_streams), result_header); + auto union_step = std::make_unique(std::move(input_streams)); union_step->setStepDescription("Unite sources from Buffer table"); query_plan.unitePlans(std::move(union_step), std::move(plans)); } @@ -495,7 +532,7 @@ public: StoragePtr destination; if (storage.destination_id) { - destination = DatabaseCatalog::instance().tryGetTable(storage.destination_id, storage.buffer_context); + destination = DatabaseCatalog::instance().tryGetTable(storage.destination_id, storage.getContext()); if (destination.get() == &storage) throw Exception("Destination table is myself. 
Write will cause infinite loop.", ErrorCodes::INFINITE_LOOP); } @@ -510,7 +547,7 @@ public: { if (storage.destination_id) { - LOG_TRACE(storage.log, "Writing block with {} rows, {} bytes directly.", rows, bytes); + LOG_DEBUG(storage.log, "Writing block with {} rows, {} bytes directly.", rows, bytes); storage.writeBlockToDestination(block, destination); } return; @@ -528,7 +565,7 @@ public: for (size_t try_no = 0; try_no < storage.num_shards; ++try_no) { - std::unique_lock lock(storage.buffers[shard_num].mutex, std::try_to_lock); + std::unique_lock lock(storage.buffers[shard_num].tryLock()); if (lock.owns_lock()) { @@ -548,7 +585,7 @@ public: if (!least_busy_buffer) { least_busy_buffer = &storage.buffers[start_shard_num]; - least_busy_lock = std::unique_lock(least_busy_buffer->mutex); + least_busy_lock = least_busy_buffer->lockForWriting(); } insertIntoBuffer(block, *least_busy_buffer); least_busy_lock.unlock(); @@ -570,7 +607,7 @@ private: { buffer.data = sorted_block.cloneEmpty(); } - else if (storage.checkThresholds(buffer, current_time, sorted_block.rows(), sorted_block.bytes())) + else if (storage.checkThresholds(buffer, /* direct= */true, current_time, sorted_block.rows(), sorted_block.bytes())) { /** If, after inserting the buffer, the constraints are exceeded, then we will reset the buffer. * This also protects against unlimited consumption of RAM, since if it is impossible to write to the table, @@ -588,14 +625,14 @@ private: }; -BlockOutputStreamPtr StorageBuffer::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) +BlockOutputStreamPtr StorageBuffer::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { return std::make_shared(*this, metadata_snapshot); } bool StorageBuffer::mayBenefitFromIndexForIn( - const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & /*metadata_snapshot*/) const + const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & /*metadata_snapshot*/) const { if (!destination_id) return false; @@ -611,7 +648,7 @@ bool StorageBuffer::mayBenefitFromIndexForIn( void StorageBuffer::startup() { - if (buffer_context.getSettingsRef().readonly) + if (getContext()->getSettingsRef().readonly) { LOG_WARNING(log, "Storage {} is run with readonly settings, it will not be able to insert data. Set appropriate buffer_profile to fix this.", getName()); } @@ -630,7 +667,7 @@ void StorageBuffer::shutdown() try { - optimize(nullptr /*query*/, getInMemoryMetadataPtr(), {} /*partition*/, false /*final*/, false /*deduplicate*/, {}, buffer_context); + optimize(nullptr /*query*/, getInMemoryMetadataPtr(), {} /*partition*/, false /*final*/, false /*deduplicate*/, {}, getContext()); } catch (...) 
{ @@ -656,7 +693,7 @@ bool StorageBuffer::optimize( bool final, bool deduplicate, const Names & /* deduplicate_by_columns */, - const Context & /*context*/) + ContextPtr /*context*/) { if (partition) throw Exception("Partition cannot be specified when optimizing table of type Buffer", ErrorCodes::NOT_IMPLEMENTED); @@ -675,13 +712,13 @@ bool StorageBuffer::supportsPrewhere() const { if (!destination_id) return false; - auto dest = DatabaseCatalog::instance().tryGetTable(destination_id, buffer_context); + auto dest = DatabaseCatalog::instance().tryGetTable(destination_id, getContext()); if (dest && dest.get() != this) return dest->supportsPrewhere(); return false; } -bool StorageBuffer::checkThresholds(const Buffer & buffer, time_t current_time, size_t additional_rows, size_t additional_bytes) const +bool StorageBuffer::checkThresholds(const Buffer & buffer, bool direct, time_t current_time, size_t additional_rows, size_t additional_bytes) const { time_t time_passed = 0; if (buffer.first_write_time) @@ -690,11 +727,11 @@ bool StorageBuffer::checkThresholds(const Buffer & buffer, time_t current_time, size_t rows = buffer.data.rows() + additional_rows; size_t bytes = buffer.data.bytes() + additional_bytes; - return checkThresholdsImpl(rows, bytes, time_passed); + return checkThresholdsImpl(direct, rows, bytes, time_passed); } -bool StorageBuffer::checkThresholdsImpl(size_t rows, size_t bytes, time_t time_passed) const +bool StorageBuffer::checkThresholdsImpl(bool direct, size_t rows, size_t bytes, time_t time_passed) const { if (time_passed > min_thresholds.time && rows > min_thresholds.rows && bytes > min_thresholds.bytes) { @@ -720,6 +757,27 @@ bool StorageBuffer::checkThresholdsImpl(size_t rows, size_t bytes, time_t time_p return true; } + if (!direct) + { + if (flush_thresholds.time && time_passed > flush_thresholds.time) + { + ProfileEvents::increment(ProfileEvents::StorageBufferPassedTimeFlushThreshold); + return true; + } + + if (flush_thresholds.rows && rows > flush_thresholds.rows) + { + ProfileEvents::increment(ProfileEvents::StorageBufferPassedRowsFlushThreshold); + return true; + } + + if (flush_thresholds.bytes && bytes > flush_thresholds.bytes) + { + ProfileEvents::increment(ProfileEvents::StorageBufferPassedBytesFlushThreshold); + return true; + } + } + return false; } @@ -740,9 +798,9 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc size_t bytes = 0; time_t time_passed = 0; - std::unique_lock lock(buffer.mutex, std::defer_lock); + std::optional> lock; if (!locked) - lock.lock(); + lock.emplace(buffer.lockForReading()); block_to_write = buffer.data.cloneEmpty(); @@ -753,7 +811,7 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc if (check_thresholds) { - if (!checkThresholdsImpl(rows, bytes, time_passed)) + if (!checkThresholdsImpl(/* direct= */false, rows, bytes, time_passed)) return; } else @@ -772,7 +830,7 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc if (!destination_id) { - LOG_TRACE(log, "Flushing buffer with {} rows (discarded), {} bytes, age {} seconds {}.", rows, bytes, time_passed, (check_thresholds ? "(bg)" : "(direct)")); + LOG_DEBUG(log, "Flushing buffer with {} rows (discarded), {} bytes, age {} seconds {}.", rows, bytes, time_passed, (check_thresholds ? 
"(bg)" : "(direct)")); return; } @@ -786,7 +844,7 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc Stopwatch watch; try { - writeBlockToDestination(block_to_write, DatabaseCatalog::instance().tryGetTable(destination_id, buffer_context)); + writeBlockToDestination(block_to_write, DatabaseCatalog::instance().tryGetTable(destination_id, getContext())); if (reset_block_structure) buffer.data.clear(); } @@ -809,7 +867,7 @@ void StorageBuffer::flushBuffer(Buffer & buffer, bool check_thresholds, bool loc } UInt64 milliseconds = watch.elapsedMilliseconds(); - LOG_TRACE(log, "Flushing buffer with {} rows, {} bytes, age {} seconds, took {} ms {}.", rows, bytes, time_passed, milliseconds, (check_thresholds ? "(bg)" : "(direct)")); + LOG_DEBUG(log, "Flushing buffer with {} rows, {} bytes, age {} seconds, took {} ms {}.", rows, bytes, time_passed, milliseconds, (check_thresholds ? "(bg)" : "(direct)")); } @@ -868,8 +926,8 @@ void StorageBuffer::writeBlockToDestination(const Block & block, StoragePtr tabl for (const auto & column : block_to_write) list_of_columns->children.push_back(std::make_shared(column.name)); - auto insert_context = Context(buffer_context); - insert_context.makeQueryContext(); + auto insert_context = Context::createCopy(getContext()); + insert_context->makeQueryContext(); InterpreterInsertQuery interpreter{insert, insert_context, allow_materialized}; @@ -910,7 +968,7 @@ void StorageBuffer::reschedule() /// try_to_lock is also ok for background flush, since if there is /// INSERT contended, then the reschedule will be done after /// INSERT will be done. - std::unique_lock lock(buffer.mutex, std::try_to_lock); + std::unique_lock lock(buffer.tryLock()); if (lock.owns_lock()) { min_first_write_time = buffer.first_write_time; @@ -930,9 +988,9 @@ void StorageBuffer::reschedule() flush_handle->scheduleAfter(std::min(min, max) * 1000); } -void StorageBuffer::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void StorageBuffer::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const { - auto name_deps = getDependentViewsByColumn(context); + auto name_deps = getDependentViewsByColumn(local_context); for (const auto & command : commands) { if (command.type != AlterCommand::Type::ADD_COLUMN && command.type != AlterCommand::Type::MODIFY_COLUMN @@ -957,7 +1015,7 @@ void StorageBuffer::checkAlterIsPossible(const AlterCommands & commands, const C std::optional StorageBuffer::totalRows(const Settings & settings) const { std::optional underlying_rows; - auto underlying = DatabaseCatalog::instance().tryGetTable(destination_id, buffer_context); + auto underlying = DatabaseCatalog::instance().tryGetTable(destination_id, getContext()); if (underlying) underlying_rows = underlying->totalRows(settings); @@ -967,7 +1025,7 @@ std::optional StorageBuffer::totalRows(const Settings & settings) const UInt64 rows = 0; for (const auto & buffer : buffers) { - std::lock_guard lock(buffer.mutex); + const auto lock(buffer.lockForReading()); rows += buffer.data.rows(); } return rows + *underlying_rows; @@ -978,26 +1036,26 @@ std::optional StorageBuffer::totalBytes(const Settings & /*settings*/) c UInt64 bytes = 0; for (const auto & buffer : buffers) { - std::lock_guard lock(buffer.mutex); + const auto lock(buffer.lockForReading()); bytes += buffer.data.allocatedBytes(); } return bytes; } -void StorageBuffer::alter(const AlterCommands & params, const Context & context, TableLockHolder &) +void StorageBuffer::alter(const 
AlterCommands & params, ContextPtr local_context, TableLockHolder &) { auto table_id = getStorageID(); - checkAlterIsPossible(params, context); + checkAlterIsPossible(params, local_context); auto metadata_snapshot = getInMemoryMetadataPtr(); /// Flush all buffers to storages, so that no non-empty blocks of the old /// structure remain. Structure of empty blocks will be updated during first /// insert. - optimize({} /*query*/, metadata_snapshot, {} /*partition_id*/, false /*final*/, false /*deduplicate*/, {}, context); + optimize({} /*query*/, metadata_snapshot, {} /*partition_id*/, false /*final*/, false /*deduplicate*/, {}, local_context); StorageInMemoryMetadata new_metadata = *metadata_snapshot; - params.apply(new_metadata, context); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata); + params.apply(new_metadata, local_context); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata); setInMemoryMetadata(new_metadata); } @@ -1008,25 +1066,26 @@ void registerStorageBuffer(StorageFactory & factory) * * db, table - in which table to put data from buffer. * num_buckets - level of parallelism. - * min_time, max_time, min_rows, max_rows, min_bytes, max_bytes - conditions for flushing the buffer. + * min_time, max_time, min_rows, max_rows, min_bytes, max_bytes - conditions for flushing the buffer, + * flush_time, flush_rows, flush_bytes - conditions for flushing. */ factory.registerStorage("Buffer", [](const StorageFactory::Arguments & args) { ASTs & engine_args = args.engine_args; - if (engine_args.size() != 9) - throw Exception("Storage Buffer requires 9 parameters: " - " destination_database, destination_table, num_buckets, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes.", + if (engine_args.size() < 9 || engine_args.size() > 12) + throw Exception("Storage Buffer requires from 9 to 12 parameters: " + " destination_database, destination_table, num_buckets, min_time, max_time, min_rows, max_rows, min_bytes, max_bytes[, flush_time, flush_rows, flush_bytes].", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); // Table and database name arguments accept expressions, evaluate them. - engine_args[0] = evaluateConstantExpressionForDatabaseName(engine_args[0], args.local_context); - engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.local_context); + engine_args[0] = evaluateConstantExpressionForDatabaseName(engine_args[0], args.getLocalContext()); + engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.getLocalContext()); // After we evaluated all expressions, check that all arguments are // literals. 
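The registration code around this point extends the Buffer engine from 9 to up to 12 arguments; the optional trailing `flush_time`, `flush_rows`, `flush_bytes` feed the new third set of thresholds. Below is a minimal sketch of how the three tiers interact, following the `checkThresholdsImpl` hunk above (simplified and assumed: the real function also increments a dedicated ProfileEvents counter per branch, and only the background flush path passes `direct = false`).

```cpp
#include <cstddef>
#include <ctime>

struct Thresholds { time_t time = 0; size_t rows = 0; size_t bytes = 0; };

// Simplified mirror of StorageBuffer::checkThresholdsImpl() after this change.
// `direct` is true on the INSERT path, where the flush_* thresholds are skipped,
// so they can only trigger a background flush and never delay the INSERT itself.
static bool shouldFlush(const Thresholds & min, const Thresholds & max, const Thresholds & flush,
                        bool direct, size_t rows, size_t bytes, time_t time_passed)
{
    if (time_passed > min.time && rows > min.rows && bytes > min.bytes)
        return true;                      // all minimal thresholds exceeded

    if (time_passed > max.time || rows > max.rows || bytes > max.bytes)
        return true;                      // any maximal threshold exceeded

    if (!direct)
    {
        if ((flush.time && time_passed > flush.time)
            || (flush.rows && rows > flush.rows)
            || (flush.bytes && bytes > flush.bytes))
            return true;                  // any configured flush threshold exceeded
    }

    return false;
}
```

This matches the header comment added later in the patch: flush-only thresholds let a buffer layer drain in the background without forcing a synchronous write on the INSERT that crosses them.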
- for (size_t i = 0; i < 9; i++) + for (size_t i = 0; i < engine_args.size(); i++) { if (!typeid_cast(engine_args[i].get())) { @@ -1036,23 +1095,35 @@ void registerStorageBuffer(StorageFactory & factory) } } - String destination_database = engine_args[0]->as().value.safeGet(); - String destination_table = engine_args[1]->as().value.safeGet(); + size_t i = 0; - UInt64 num_buckets = applyVisitor(FieldVisitorConvertToNumber(), engine_args[2]->as().value); + String destination_database = engine_args[i++]->as().value.safeGet(); + String destination_table = engine_args[i++]->as().value.safeGet(); - Int64 min_time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[3]->as().value); - Int64 max_time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[4]->as().value); - UInt64 min_rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[5]->as().value); - UInt64 max_rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[6]->as().value); - UInt64 min_bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[7]->as().value); - UInt64 max_bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[8]->as().value); + UInt64 num_buckets = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + + StorageBuffer::Thresholds min; + StorageBuffer::Thresholds max; + StorageBuffer::Thresholds flush; + + min.time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + max.time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + min.rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + max.rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + min.bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + max.bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + if (engine_args.size() > i) + flush.time = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + if (engine_args.size() > i) + flush.rows = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); + if (engine_args.size() > i) + flush.bytes = applyVisitor(FieldVisitorConvertToNumber(), engine_args[i++]->as().value); /// If destination_id is not set, do not write data from the buffer, but simply empty the buffer. 
StorageID destination_id = StorageID::createEmpty(); if (!destination_table.empty()) { - destination_id.database_name = args.context.resolveDatabase(destination_database); + destination_id.database_name = args.getContext()->resolveDatabase(destination_database); destination_id.table_name = destination_table; } @@ -1060,12 +1131,11 @@ void registerStorageBuffer(StorageFactory & factory) args.table_id, args.columns, args.constraints, - args.context, + args.getContext(), num_buckets, - StorageBuffer::Thresholds{min_time, min_rows, min_bytes}, - StorageBuffer::Thresholds{max_time, max_rows, max_bytes}, + min, max, flush, destination_id, - static_cast(args.local_context.getSettingsRef().insert_allow_materialized_columns)); + static_cast(args.getLocalContext()->getSettingsRef().insert_allow_materialized_columns)); }, { .supports_parallel_insert = true, diff --git a/src/Storages/StorageBuffer.h b/src/Storages/StorageBuffer.h index f6904ddb0e4..1747c024a74 100644 --- a/src/Storages/StorageBuffer.h +++ b/src/Storages/StorageBuffer.h @@ -1,15 +1,17 @@ #pragma once -#include -#include -#include -#include -#include #include -#include +#include #include +#include +#include + #include +#include +#include +#include + namespace Poco { class Logger; } @@ -33,33 +35,36 @@ namespace DB * Thresholds can be exceeded. For example, if max_rows = 1 000 000, the buffer already had 500 000 rows, * and a part of 800 000 rows is added, then there will be 1 300 000 rows in the buffer, and then such a block will be written to the subordinate table. * + * There are also separate thresholds for flush, those thresholds are checked only for non-direct flush. + * This maybe useful if you do not want to add extra latency for INSERT queries, + * so you can set max_rows=1e6 and flush_rows=500e3, then each 500e3 rows buffer will be flushed in background only. + * * When you destroy a Buffer table, all remaining data is flushed to the subordinate table. * The data in the buffer is not replicated, not logged to disk, not indexed. With a rough restart of the server, the data is lost. */ -class StorageBuffer final : public ext::shared_ptr_helper, public IStorage +class StorageBuffer final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; friend class BufferSource; friend class BufferBlockOutputStream; public: - /// Thresholds. struct Thresholds { - time_t time; /// The number of seconds from the insertion of the first row into the block. - size_t rows; /// The number of rows in the block. - size_t bytes; /// The number of (uncompressed) bytes in the block. + time_t time = 0; /// The number of seconds from the insertion of the first row into the block. + size_t rows = 0; /// The number of rows in the block. + size_t bytes = 0; /// The number of (uncompressed) bytes in the block. 
}; std::string getName() const override { return "Buffer"; } - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; Pipe read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -69,7 +74,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -78,7 +83,7 @@ public: bool supportsSubcolumns() const override { return true; } - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void startup() override; /// Flush all buffers into the subordinate table and stop background thread. @@ -90,19 +95,19 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) override; + ContextPtr context) override; bool supportsSampling() const override { return true; } bool supportsPrewhere() const override; bool supportsFinal() const override { return true; } bool supportsIndexForIn() const override { return true; } - bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & metadata_snapshot) const override; + bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & metadata_snapshot) const override; - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; /// The structure of the subordinate table is not checked and does not change. - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; std::optional totalRows(const Settings & settings) const override; std::optional totalBytes(const Settings & settings) const override; @@ -112,13 +117,19 @@ public: private: - const Context & buffer_context; - struct Buffer { time_t first_write_time = 0; Block data; + + std::unique_lock lockForReading() const; + std::unique_lock lockForWriting() const; + std::unique_lock tryLock() const; + + private: mutable std::mutex mutex; + + std::unique_lock lockImpl(bool read) const; }; /// There are `num_shards` of independent buffers. @@ -127,6 +138,7 @@ private: const Thresholds min_thresholds; const Thresholds max_thresholds; + const Thresholds flush_thresholds; StorageID destination_id; bool allow_materialized; @@ -145,8 +157,8 @@ private: /// are exceeded. If reset_block_structure is set - clears inner block /// structure inside buffer (useful in OPTIMIZE and ALTER). 
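The `Buffer` struct above now hides its mutex behind `lockForReading()` / `lockForWriting()` / `tryLock()`. As the StorageBuffer.cpp hunk earlier shows, `lockImpl()` exists to measure how long readers and writers wait for a layer lock and to account that time to the two new ProfileEvents. A stand-alone sketch of the same idea, assuming plain `std::chrono` and atomics in place of `Stopwatch` and `ProfileEvents::increment`:

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <mutex>

// Illustrative counters; the real code reports into ProfileEvents.
static std::atomic<uint64_t> reader_wait_ms{0};
static std::atomic<uint64_t> writer_wait_ms{0};

// Simplified stand-in for Buffer::lockImpl(): time the wait for the per-layer
// mutex and charge it to the reader or writer wait counter.
static std::unique_lock<std::mutex> lockTimed(std::mutex & m, bool read)
{
    std::unique_lock<std::mutex> lock(m, std::defer_lock);

    const auto start = std::chrono::steady_clock::now();
    lock.lock();
    const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();

    auto & counter = read ? reader_wait_ms : writer_wait_ms;
    counter += static_cast<uint64_t>(elapsed);

    return lock;
}
```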
void flushBuffer(Buffer & buffer, bool check_thresholds, bool locked = false, bool reset_block_structure = false); - bool checkThresholds(const Buffer & buffer, time_t current_time, size_t additional_rows = 0, size_t additional_bytes = 0) const; - bool checkThresholdsImpl(size_t rows, size_t bytes, time_t time_passed) const; + bool checkThresholds(const Buffer & buffer, bool direct, time_t current_time, size_t additional_rows = 0, size_t additional_bytes = 0) const; + bool checkThresholdsImpl(bool direct, size_t rows, size_t bytes, time_t time_passed) const; /// `table` argument is passed, as it is sometimes evaluated beforehand. It must match the `destination`. void writeBlockToDestination(const Block & block, StoragePtr table); @@ -165,10 +177,11 @@ protected: const StorageID & table_id_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, + ContextPtr context_, size_t num_shards_, const Thresholds & min_thresholds_, const Thresholds & max_thresholds_, + const Thresholds & flush_thresholds_, const StorageID & destination_id, bool allow_materialized_); }; diff --git a/src/Storages/StorageDictionary.cpp b/src/Storages/StorageDictionary.cpp index 36241cd5582..e2cab153092 100644 --- a/src/Storages/StorageDictionary.cpp +++ b/src/Storages/StorageDictionary.cpp @@ -128,12 +128,12 @@ Pipe StorageDictionary::read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t max_block_size, const unsigned /*threads*/) { - auto dictionary = context.getExternalDictionariesLoader().getDictionary(dictionary_name, context); + auto dictionary = context->getExternalDictionariesLoader().getDictionary(dictionary_name, context); auto stream = dictionary->getBlockInputStream(column_names, max_block_size); /// TODO: update dictionary interface for processors. 
return Pipe(std::make_shared(stream)); @@ -148,13 +148,12 @@ void registerStorageDictionary(StorageFactory & factory) throw Exception("Storage Dictionary requires single parameter: name of dictionary", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - args.engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(args.engine_args[0], args.local_context); + args.engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(args.engine_args[0], args.getLocalContext()); String dictionary_name = args.engine_args[0]->as().value.safeGet(); if (!args.attach) { - const auto & context = args.context; - const auto & dictionary = context.getExternalDictionariesLoader().getDictionary(dictionary_name, context); + const auto & dictionary = args.getContext()->getExternalDictionariesLoader().getDictionary(dictionary_name, args.getContext()); const DictionaryStructure & dictionary_structure = dictionary->getStructure(); checkNamesAndTypesCompatibleWithDictionary(dictionary_name, args.columns, dictionary_structure); } diff --git a/src/Storages/StorageDictionary.h b/src/Storages/StorageDictionary.h index 563def8672b..9e8564d4349 100644 --- a/src/Storages/StorageDictionary.h +++ b/src/Storages/StorageDictionary.h @@ -23,7 +23,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned threads) override; diff --git a/src/Storages/StorageDistributed.cpp b/src/Storages/StorageDistributed.cpp index 28156458342..3d96796a79b 100644 --- a/src/Storages/StorageDistributed.cpp +++ b/src/Storages/StorageDistributed.cpp @@ -1,8 +1,11 @@ #include #include + #include +#include + #include #include #include @@ -31,9 +34,7 @@ #include #include #include - -#include -#include +#include #include #include @@ -42,6 +43,7 @@ #include #include #include +#include #include #include #include @@ -51,8 +53,12 @@ #include #include +#include +#include #include #include +#include +#include #include #include @@ -60,6 +66,7 @@ #include #include #include +#include #include @@ -75,6 +82,8 @@ const UInt64 FORCE_OPTIMIZE_SKIP_UNUSED_SHARDS_HAS_SHARDING_KEY = 1; const UInt64 FORCE_OPTIMIZE_SKIP_UNUSED_SHARDS_ALWAYS = 2; const UInt64 DISTRIBUTED_GROUP_BY_NO_MERGE_AFTER_AGGREGATION = 2; + +const UInt64 PARALLEL_DISTRIBUTED_INSERT_SELECT_ALL = 2; } namespace ProfileEvents @@ -212,7 +221,7 @@ std::string makeFormattedListOfShards(const ClusterPtr & cluster) return buf.str(); } -ExpressionActionsPtr buildShardingKeyExpression(const ASTPtr & sharding_key, const Context & context, const NamesAndTypesList & columns, bool project) +ExpressionActionsPtr buildShardingKeyExpression(const ASTPtr & sharding_key, ContextPtr context, const NamesAndTypesList & columns, bool project) { ASTPtr query = sharding_key; auto syntax_result = TreeRewriter(context).analyze(query, columns); @@ -260,7 +269,7 @@ public: void replaceConstantExpressions( ASTPtr & node, - const Context & context, + ContextPtr context, const NamesAndTypesList & columns, ConstStoragePtr storage, const StorageMetadataPtr & metadata_snapshot) @@ -380,7 +389,7 @@ StorageDistributed::StorageDistributed( const String & remote_database_, const String & remote_table_, const String & cluster_name_, - const Context & context_, + ContextPtr context_, const ASTPtr & sharding_key_, const String & storage_policy_name_, const String & relative_data_path_, @@ -388,12 +397,12 @@ StorageDistributed::StorageDistributed( bool 
attach_, ClusterPtr owned_cluster_) : IStorage(id_) + , WithContext(context_->getGlobalContext()) , remote_database(remote_database_) , remote_table(remote_table_) - , global_context(context_.getGlobalContext()) , log(&Poco::Logger::get("StorageDistributed (" + id_.table_name + ")")) , owned_cluster(std::move(owned_cluster_)) - , cluster_name(global_context.getMacros()->expand(cluster_name_)) + , cluster_name(getContext()->getMacros()->expand(cluster_name_)) , has_sharding_key(sharding_key_) , relative_data_path(relative_data_path_) , distributed_settings(distributed_settings_) @@ -406,14 +415,14 @@ StorageDistributed::StorageDistributed( if (sharding_key_) { - sharding_key_expr = buildShardingKeyExpression(sharding_key_, global_context, storage_metadata.getColumns().getAllPhysical(), false); + sharding_key_expr = buildShardingKeyExpression(sharding_key_, getContext(), storage_metadata.getColumns().getAllPhysical(), false); sharding_key_column_name = sharding_key_->getColumnName(); sharding_key_is_deterministic = isExpressionActionsDeterministics(sharding_key_expr); } if (!relative_data_path.empty()) { - storage_policy = global_context.getStoragePolicy(storage_policy_name_); + storage_policy = getContext()->getStoragePolicy(storage_policy_name_); data_volume = storage_policy->getVolume(0); if (storage_policy->getVolumes().size() > 1) LOG_WARNING(log, "Storage policy for Distributed table has multiple volumes. " @@ -423,7 +432,7 @@ StorageDistributed::StorageDistributed( /// Sanity check. Skip check if the table is already created to allow the server to start. if (!attach_ && !cluster_name.empty()) { - size_t num_local_shards = global_context.getCluster(cluster_name)->getLocalShardCount(); + size_t num_local_shards = getContext()->getCluster(cluster_name)->getLocalShardCount(); if (num_local_shards && remote_database == id_.database_name && remote_table == id_.table_name) throw Exception("Distributed table " + id_.table_name + " looks at itself", ErrorCodes::INFINITE_LOOP); } @@ -436,22 +445,23 @@ StorageDistributed::StorageDistributed( const ConstraintsDescription & constraints_, ASTPtr remote_table_function_ptr_, const String & cluster_name_, - const Context & context_, + ContextPtr context_, const ASTPtr & sharding_key_, const String & storage_policy_name_, const String & relative_data_path_, const DistributedSettings & distributed_settings_, bool attach, ClusterPtr owned_cluster_) - : StorageDistributed(id_, columns_, constraints_, String{}, String{}, cluster_name_, context_, sharding_key_, storage_policy_name_, relative_data_path_, distributed_settings_, attach, std::move(owned_cluster_)) + : StorageDistributed(id_, columns_, constraints_, String{}, String{}, cluster_name_, context_, sharding_key_, + storage_policy_name_, relative_data_path_, distributed_settings_, attach, std::move(owned_cluster_)) { remote_table_function_ptr = std::move(remote_table_function_ptr_); } QueryProcessingStage::Enum StorageDistributed::getQueryProcessingStage( - const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const + ContextPtr local_context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const { - const auto & settings = context.getSettingsRef(); + const auto & settings = local_context->getSettingsRef(); auto metadata_snapshot = getInMemoryMetadataPtr(); ClusterPtr cluster = getCluster(); @@ -459,18 +469,20 @@ QueryProcessingStage::Enum StorageDistributed::getQueryProcessingStage( /// Always calculate optimized cluster here, to avoid 
conditions during read() /// (Anyway it will be calculated in the read()) - if (settings.optimize_skip_unused_shards) + if (getClusterQueriedNodes(settings, cluster) > 1 && settings.optimize_skip_unused_shards) { - ClusterPtr optimized_cluster = getOptimizedCluster(context, metadata_snapshot, query_info.query); + ClusterPtr optimized_cluster = getOptimizedCluster(local_context, metadata_snapshot, query_info.query); if (optimized_cluster) { - LOG_DEBUG(log, "Skipping irrelevant shards - the query will be sent to the following shards of the cluster (shard numbers): {}", makeFormattedListOfShards(optimized_cluster)); + LOG_DEBUG(log, "Skipping irrelevant shards - the query will be sent to the following shards of the cluster (shard numbers): {}", + makeFormattedListOfShards(optimized_cluster)); cluster = optimized_cluster; - query_info.cluster = cluster; + query_info.optimized_cluster = cluster; } else { - LOG_DEBUG(log, "Unable to figure out irrelevant shards from WHERE/PREWHERE clauses - the query will be sent to all shards of the cluster{}", has_sharding_key ? "" : " (no sharding key)"); + LOG_DEBUG(log, "Unable to figure out irrelevant shards from WHERE/PREWHERE clauses - the query will be sent to all shards of the cluster{}", + has_sharding_key ? "" : " (no sharding key)"); } } @@ -513,16 +525,16 @@ Pipe StorageDistributed::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); return plan.convertToPipe( - QueryPlanOptimizationSettings::fromContext(context), - BuildQueryPipelineSettings::fromContext(context)); + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } void StorageDistributed::read( @@ -530,7 +542,7 @@ void StorageDistributed::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -539,10 +551,10 @@ void StorageDistributed::read( query_info.query, remote_database, remote_table, remote_table_function_ptr); Block header = - InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); + InterpreterSelectQuery(query_info.query, local_context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); /// Return directly (with correct header) if no shard to query. - if (query_info.cluster->getShardsInfo().empty()) + if (query_info.getCluster()->getShardsInfo().empty()) { Pipe pipe(std::make_shared(header)); auto read_from_pipe = std::make_unique(std::move(pipe)); @@ -552,7 +564,7 @@ void StorageDistributed::read( return; } - const Scalars & scalars = context.hasQueryContext() ? context.getQueryContext().getScalars() : Scalars{}; + const Scalars & scalars = local_context->hasQueryContext() ? 
local_context->getQueryContext()->getScalars() : Scalars{}; bool has_virtual_shard_num_column = std::find(column_names.begin(), column_names.end(), "_shard_num") != column_names.end(); if (has_virtual_shard_num_column && !isVirtualColumn("_shard_num", metadata_snapshot)) @@ -560,12 +572,19 @@ void StorageDistributed::read( ClusterProxy::SelectStreamFactory select_stream_factory = remote_table_function_ptr ? ClusterProxy::SelectStreamFactory( - header, processed_stage, remote_table_function_ptr, scalars, has_virtual_shard_num_column, context.getExternalTables()) + header, processed_stage, remote_table_function_ptr, scalars, has_virtual_shard_num_column, local_context->getExternalTables()) : ClusterProxy::SelectStreamFactory( - header, processed_stage, StorageID{remote_database, remote_table}, scalars, has_virtual_shard_num_column, context.getExternalTables()); + header, + processed_stage, + StorageID{remote_database, remote_table}, + scalars, + has_virtual_shard_num_column, + local_context->getExternalTables()); ClusterProxy::executeQuery(query_plan, select_stream_factory, log, - modified_query_ast, context, query_info); + modified_query_ast, local_context, query_info, + sharding_key_expr, sharding_key_column_name, + getCluster()); /// This is a bug, it is possible only when there is no shards to query, and this is handled earlier. if (!query_plan.isInitialized()) @@ -573,10 +592,10 @@ void StorageDistributed::read( } -BlockOutputStreamPtr StorageDistributed::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageDistributed::write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { auto cluster = getCluster(); - const auto & settings = context.getSettingsRef(); + const auto & settings = local_context->getSettingsRef(); /// Ban an attempt to make async insert into the table belonging to DatabaseMemory if (!storage_policy && !owned_cluster && !settings.insert_distributed_sync && !settings.insert_shard_id) @@ -606,16 +625,94 @@ BlockOutputStreamPtr StorageDistributed::write(const ASTPtr &, const StorageMeta /// DistributedBlockOutputStream will not own cluster, but will own ConnectionPools of the cluster return std::make_shared( - context, *this, metadata_snapshot, + local_context, *this, metadata_snapshot, createInsertToRemoteTableQuery( remote_database, remote_table, metadata_snapshot->getSampleBlockNonMaterialized()), cluster, insert_sync, timeout); } -void StorageDistributed::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +QueryPipelinePtr StorageDistributed::distributedWrite(const ASTInsertQuery & query, ContextPtr local_context) { - auto name_deps = getDependentViewsByColumn(context); + const Settings & settings = local_context->getSettingsRef(); + std::shared_ptr storage_src; + auto & select = query.select->as(); + auto new_query = std::dynamic_pointer_cast(query.clone()); + if (select.list_of_selects->children.size() == 1) + { + if (auto * select_query = select.list_of_selects->children.at(0)->as()) + { + JoinedTables joined_tables(Context::createCopy(local_context), *select_query); + + if (joined_tables.tablesCount() == 1) + { + storage_src = std::dynamic_pointer_cast(joined_tables.getLeftTableStorage()); + if (storage_src) + { + const auto select_with_union_query = std::make_shared(); + select_with_union_query->list_of_selects = std::make_shared(); + + auto new_select_query = std::dynamic_pointer_cast(select_query->clone()); + 
select_with_union_query->list_of_selects->children.push_back(new_select_query); + + new_select_query->replaceDatabaseAndTable(storage_src->getRemoteDatabaseName(), storage_src->getRemoteTableName()); + + new_query->select = select_with_union_query; + } + } + } + } + + if (!storage_src || storage_src->getClusterName() != getClusterName()) + { + return nullptr; + } + + if (settings.parallel_distributed_insert_select == PARALLEL_DISTRIBUTED_INSERT_SELECT_ALL) + { + new_query->table_id = StorageID(getRemoteDatabaseName(), getRemoteTableName()); + } + + const auto & cluster = getCluster(); + const auto & shards_info = cluster->getShardsInfo(); + + std::vector> pipelines; + + String new_query_str = queryToString(new_query); + for (size_t shard_index : ext::range(0, shards_info.size())) + { + const auto & shard_info = shards_info[shard_index]; + if (shard_info.isLocal()) + { + InterpreterInsertQuery interpreter(new_query, local_context); + pipelines.emplace_back(std::make_unique(interpreter.execute().pipeline)); + } + else + { + auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(settings); + auto connections = shard_info.pool->getMany(timeouts, &settings, PoolMode::GET_ONE); + if (connections.empty() || connections.front().isNull()) + throw Exception( + "Expected exactly one connection for shard " + toString(shard_info.shard_num), ErrorCodes::LOGICAL_ERROR); + + /// INSERT SELECT query returns empty block + auto in_stream = std::make_shared(std::move(connections), new_query_str, Block{}, local_context); + pipelines.emplace_back(std::make_unique()); + pipelines.back()->init(Pipe(std::make_shared(std::move(in_stream)))); + pipelines.back()->setSinks([](const Block & header, QueryPipeline::StreamType) -> ProcessorPtr + { + return std::make_shared(header); + }); + } + } + + return std::make_unique(QueryPipeline::unitePipelines(std::move(pipelines))); +} + + +void StorageDistributed::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const +{ + auto name_deps = getDependentViewsByColumn(local_context); for (const auto & command : commands) { if (command.type != AlterCommand::Type::ADD_COLUMN @@ -640,21 +737,21 @@ void StorageDistributed::checkAlterIsPossible(const AlterCommands & commands, co } } -void StorageDistributed::alter(const AlterCommands & params, const Context & context, TableLockHolder &) +void StorageDistributed::alter(const AlterCommands & params, ContextPtr local_context, TableLockHolder &) { auto table_id = getStorageID(); - checkAlterIsPossible(params, context); + checkAlterIsPossible(params, local_context); StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); - params.apply(new_metadata, context); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata); + params.apply(new_metadata, local_context); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata); setInMemoryMetadata(new_metadata); } void StorageDistributed::startup() { - if (remote_database.empty() && !remote_table_function_ptr) + if (remote_database.empty() && !remote_table_function_ptr && !getCluster()->maybeCrossReplication()) LOG_WARNING(log, "Name of remote database is empty. 
Default database will be used implicitly."); if (!storage_policy) @@ -721,7 +818,7 @@ Strings StorageDistributed::getDataPaths() const return paths; } -void StorageDistributed::truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) +void StorageDistributed::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) { std::lock_guard lock(cluster_nodes_mutex); @@ -788,7 +885,7 @@ StorageDistributedDirectoryMonitor& StorageDistributed::requireDirectoryMonitor( *this, disk, relative_data_path + name, node_data.connection_pool, monitors_blocker, - global_context.getDistributedSchedulePool()); + getContext()->getDistributedSchedulePool()); } return *node_data.directory_monitor; } @@ -818,19 +915,20 @@ size_t StorageDistributed::getShardCount() const ClusterPtr StorageDistributed::getCluster() const { - return owned_cluster ? owned_cluster : global_context.getCluster(cluster_name); + return owned_cluster ? owned_cluster : getContext()->getCluster(cluster_name); } -ClusterPtr StorageDistributed::getOptimizedCluster(const Context & context, const StorageMetadataPtr & metadata_snapshot, const ASTPtr & query_ptr) const +ClusterPtr StorageDistributed::getOptimizedCluster( + ContextPtr local_context, const StorageMetadataPtr & metadata_snapshot, const ASTPtr & query_ptr) const { ClusterPtr cluster = getCluster(); - const Settings & settings = context.getSettingsRef(); + const Settings & settings = local_context->getSettingsRef(); bool sharding_key_is_usable = settings.allow_nondeterministic_optimize_skip_unused_shards || sharding_key_is_deterministic; if (has_sharding_key && sharding_key_is_usable) { - ClusterPtr optimized = skipUnusedShards(cluster, query_ptr, metadata_snapshot, context); + ClusterPtr optimized = skipUnusedShards(cluster, query_ptr, metadata_snapshot, local_context); if (optimized) return optimized; } @@ -852,7 +950,7 @@ ClusterPtr StorageDistributed::getOptimizedCluster(const Context & context, cons throw Exception(exception_message.str(), ErrorCodes::UNABLE_TO_SKIP_UNUSED_SHARDS); } - return cluster; + return {}; } IColumn::Selector StorageDistributed::createSelector(const ClusterPtr cluster, const ColumnWithTypeAndName & result) @@ -887,7 +985,7 @@ ClusterPtr StorageDistributed::skipUnusedShards( ClusterPtr cluster, const ASTPtr & query_ptr, const StorageMetadataPtr & metadata_snapshot, - const Context & context) const + ContextPtr local_context) const { const auto & select = query_ptr->as(); @@ -906,9 +1004,9 @@ ClusterPtr StorageDistributed::skipUnusedShards( condition_ast = select.prewhere() ? 
select.prewhere()->clone() : select.where()->clone(); } - replaceConstantExpressions(condition_ast, context, metadata_snapshot->getColumns().getAll(), shared_from_this(), metadata_snapshot); + replaceConstantExpressions(condition_ast, local_context, metadata_snapshot->getColumns().getAll(), shared_from_this(), metadata_snapshot); - size_t limit = context.getSettingsRef().optimize_skip_unused_shards_limit; + size_t limit = local_context->getSettingsRef().optimize_skip_unused_shards_limit; if (!limit || limit > SSIZE_MAX) { throw Exception("optimize_skip_unused_shards_limit out of range (0, {}]", ErrorCodes::ARGUMENT_OUT_OF_BOUND, SSIZE_MAX); @@ -922,7 +1020,7 @@ ClusterPtr StorageDistributed::skipUnusedShards( LOG_TRACE(log, "Number of values for sharding key exceeds optimize_skip_unused_shards_limit={}, " "try to increase it, but note that this may increase query processing time.", - context.getSettingsRef().optimize_skip_unused_shards_limit); + local_context->getSettingsRef().optimize_skip_unused_shards_limit); return nullptr; } @@ -955,10 +1053,10 @@ ActionLock StorageDistributed::getActionLock(StorageActionBlockType type) return {}; } -void StorageDistributed::flushClusterNodesAllData(const Context & context) +void StorageDistributed::flushClusterNodesAllData(ContextPtr local_context) { /// Sync SYSTEM FLUSH DISTRIBUTED with TRUNCATE - auto table_lock = lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto table_lock = lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); std::vector> directory_monitors; @@ -1033,7 +1131,7 @@ void StorageDistributed::delayInsertOrThrowIfNeeded() const !distributed_settings.bytes_to_delay_insert) return; - UInt64 total_bytes = *totalBytes(global_context.getSettingsRef()); + UInt64 total_bytes = *totalBytes(getContext()->getSettingsRef()); if (distributed_settings.bytes_to_throw_insert && total_bytes > distributed_settings.bytes_to_throw_insert) { @@ -1054,12 +1152,12 @@ void StorageDistributed::delayInsertOrThrowIfNeeded() const do { delayed_ms += step_ms; std::this_thread::sleep_for(std::chrono::milliseconds(step_ms)); - } while (*totalBytes(global_context.getSettingsRef()) > distributed_settings.bytes_to_delay_insert && delayed_ms < distributed_settings.max_delay_to_insert*1000); + } while (*totalBytes(getContext()->getSettingsRef()) > distributed_settings.bytes_to_delay_insert && delayed_ms < distributed_settings.max_delay_to_insert*1000); ProfileEvents::increment(ProfileEvents::DistributedDelayedInserts); ProfileEvents::increment(ProfileEvents::DistributedDelayedInsertsMilliseconds, delayed_ms); - UInt64 new_total_bytes = *totalBytes(global_context.getSettingsRef()); + UInt64 new_total_bytes = *totalBytes(getContext()->getSettingsRef()); LOG_INFO(log, "Too many bytes pending for async INSERT: was {}, now {}, INSERT was delayed to {} ms", formatReadableSizeWithBinarySuffix(total_bytes), formatReadableSizeWithBinarySuffix(new_total_bytes), @@ -1109,8 +1207,8 @@ void registerStorageDistributed(StorageFactory & factory) String cluster_name = getClusterNameAndMakeLiteral(engine_args[0]); - engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.local_context); - engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.local_context); + engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.getLocalContext()); + engine_args[2] = 
evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.getLocalContext()); String remote_database = engine_args[1]->as().value.safeGet(); String remote_table = engine_args[2]->as().value.safeGet(); @@ -1121,7 +1219,7 @@ void registerStorageDistributed(StorageFactory & factory) /// Check that sharding_key exists in the table and has numeric type. if (sharding_key) { - auto sharding_expr = buildShardingKeyExpression(sharding_key, args.context, args.columns.getAllPhysical(), true); + auto sharding_expr = buildShardingKeyExpression(sharding_key, args.getContext(), args.columns.getAllPhysical(), true); const Block & block = sharding_expr->getSampleBlock(); if (block.columns() != 1) @@ -1155,7 +1253,7 @@ void registerStorageDistributed(StorageFactory & factory) return StorageDistributed::create( args.table_id, args.columns, args.constraints, remote_database, remote_table, cluster_name, - args.context, + args.getContext(), sharding_key, storage_policy, args.relative_data_path, diff --git a/src/Storages/StorageDistributed.h b/src/Storages/StorageDistributed.h index 5904124505a..886a8e032de 100644 --- a/src/Storages/StorageDistributed.h +++ b/src/Storages/StorageDistributed.h @@ -36,7 +36,7 @@ using ExpressionActionsPtr = std::shared_ptr; * You can pass one address, not several. * In this case, the table can be considered remote, rather than distributed. */ -class StorageDistributed final : public ext::shared_ptr_helper, public IStorage +class StorageDistributed final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; friend class DistributedBlockOutputStream; @@ -55,13 +55,13 @@ public: bool isRemote() const override { return true; } - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; Pipe read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -71,7 +71,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t /*max_block_size*/, unsigned /*num_streams*/) override; @@ -79,18 +79,20 @@ public: bool supportsParallelInsert() const override { return true; } std::optional totalBytes(const Settings &) const override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; + + QueryPipelinePtr distributedWrite(const ASTInsertQuery & query, ContextPtr context) override; /// Removes temporary data in local filesystem. 
- void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; /// in the sub-tables, you need to manually add and delete columns /// the structure of the sub-table is not checked - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; void startup() override; void shutdown() override; @@ -111,7 +113,7 @@ public: ClusterPtr getCluster() const; /// Used by InterpreterSystemQuery - void flushClusterNodesAllData(const Context & context); + void flushClusterNodesAllData(ContextPtr context); /// Used by ClusterCopier size_t getShardCount() const; @@ -124,7 +126,7 @@ private: const String & remote_database_, const String & remote_table_, const String & cluster_name_, - const Context & context_, + ContextPtr context_, const ASTPtr & sharding_key_, const String & storage_policy_name_, const String & relative_data_path_, @@ -138,7 +140,7 @@ private: const ConstraintsDescription & constraints_, ASTPtr remote_table_function_ptr_, const String & cluster_name_, - const Context & context_, + ContextPtr context_, const ASTPtr & sharding_key_, const String & storage_policy_name_, const String & relative_data_path_, @@ -163,12 +165,13 @@ private: /// Used by StorageSystemDistributionQueue std::vector getDirectoryMonitorsStatuses() const; - static IColumn::Selector createSelector(const ClusterPtr cluster, const ColumnWithTypeAndName & result); + static IColumn::Selector createSelector(ClusterPtr cluster, const ColumnWithTypeAndName & result); /// Apply the following settings: /// - optimize_skip_unused_shards /// - force_optimize_skip_unused_shards - ClusterPtr getOptimizedCluster(const Context &, const StorageMetadataPtr & metadata_snapshot, const ASTPtr & query_ptr) const; - ClusterPtr skipUnusedShards(ClusterPtr cluster, const ASTPtr & query_ptr, const StorageMetadataPtr & metadata_snapshot, const Context & context) const; + ClusterPtr getOptimizedCluster(ContextPtr, const StorageMetadataPtr & metadata_snapshot, const ASTPtr & query_ptr) const; + ClusterPtr + skipUnusedShards(ClusterPtr cluster, const ASTPtr & query_ptr, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) const; size_t getRandomShardIndex(const Cluster::ShardsInfo & shards); @@ -176,12 +179,10 @@ private: void delayInsertOrThrowIfNeeded() const; -private: String remote_database; String remote_table; ASTPtr remote_table_function_ptr; - const Context & global_context; Poco::Logger * log; /// Used to implement TableFunctionRemote. 
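The StorageDistributed hunks above replace the stored `const Context & global_context` member with the `WithContext` mixin and switch every member function from `const Context &` to `ContextPtr`. The following is only an illustrative sketch of that pattern, not part of the patch: `StorageExample` and `flushAllData` are hypothetical names, and ClickHouse's internal headers (IStorage, WithContext, Cluster) are assumed.

class StorageExample : public IStorage, WithContext
{
public:
    StorageExample(const StorageID & id_, const String & cluster_name_, ContextPtr context_)
        : IStorage(id_)
        , WithContext(context_->getGlobalContext())                       /// keep only the global context
        , cluster_name(getContext()->getMacros()->expand(cluster_name_))  /// global facilities via getContext()
    {
    }

    ClusterPtr getCluster() const
    {
        /// Was: global_context.getCluster(cluster_name)
        return getContext()->getCluster(cluster_name);
    }

    void flushAllData(ContextPtr local_context)
    {
        /// Query-scoped data (settings, query id) comes from the ContextPtr
        /// passed into each call rather than from a stored reference.
        auto lock = lockForShare(local_context->getCurrentQueryId(),
                                 local_context->getSettingsRef().lock_acquire_timeout);
    }

private:
    String cluster_name;
};

In short, the global context is the only one a storage may keep, while per-query contexts are always handed in as arguments, which is what the `local_context` renames throughout the diff reflect.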
diff --git a/src/Storages/StorageExternalDistributed.cpp b/src/Storages/StorageExternalDistributed.cpp new file mode 100644 index 00000000000..3489de0161a --- /dev/null +++ b/src/Storages/StorageExternalDistributed.cpp @@ -0,0 +1,189 @@ +#include "StorageExternalDistributed.h" + +#if USE_MYSQL || USE_LIBPQXX + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; + extern const int BAD_ARGUMENTS; +} + +StorageExternalDistributed::StorageExternalDistributed( + const StorageID & table_id_, + ExternalStorageEngine table_engine, + const String & cluster_description, + const String & remote_database, + const String & remote_table, + const String & username, + const String & password, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + ContextPtr context) + : IStorage(table_id_) +{ + StorageInMemoryMetadata storage_metadata; + storage_metadata.setColumns(columns_); + storage_metadata.setConstraints(constraints_); + setInMemoryMetadata(storage_metadata); + + size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements; + std::vector shards_descriptions = parseRemoteDescription(cluster_description, 0, cluster_description.size(), ',', max_addresses); + std::vector> addresses; + + /// For each shard pass replicas description into storage, replicas are managed by storage's PoolWithFailover. + for (const auto & shard_description : shards_descriptions) + { + StoragePtr shard; + + switch (table_engine) + { +#if USE_MYSQL + case ExternalStorageEngine::MySQL: + { + addresses = parseRemoteDescriptionForExternalDatabase(shard_description, max_addresses, 3306); + + mysqlxx::PoolWithFailover pool( + remote_database, + addresses, + username, password); + + shard = StorageMySQL::create( + table_id_, + std::move(pool), + remote_database, + remote_table, + /* replace_query = */ false, + /* on_duplicate_clause = */ "", + columns_, constraints_, + context); + break; + } +#endif +#if USE_LIBPQXX + + case ExternalStorageEngine::PostgreSQL: + { + addresses = parseRemoteDescriptionForExternalDatabase(shard_description, max_addresses, 5432); + + postgres::PoolWithFailover pool( + remote_database, + addresses, + username, password, + context->getSettingsRef().postgresql_connection_pool_size, + context->getSettingsRef().postgresql_connection_pool_wait_timeout); + + shard = StoragePostgreSQL::create( + table_id_, + std::move(pool), + remote_table, + columns_, constraints_, + context); + break; + } +#endif + default: + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Unsupported table engine. 
Supported engines are: MySQL, PostgreSQL"); + } + + shards.emplace(std::move(shard)); + } +} + + +Pipe StorageExternalDistributed::read( + const Names & column_names, + const StorageMetadataPtr & metadata_snapshot, + SelectQueryInfo & query_info, + ContextPtr context, + QueryProcessingStage::Enum processed_stage, + size_t max_block_size, + unsigned num_streams) +{ + Pipes pipes; + for (const auto & shard : shards) + { + pipes.emplace_back(shard->read( + column_names, + metadata_snapshot, + query_info, + context, + processed_stage, + max_block_size, + num_streams + )); + } + + return Pipe::unitePipes(std::move(pipes)); +} + + +void registerStorageExternalDistributed(StorageFactory & factory) +{ + factory.registerStorage("ExternalDistributed", [](const StorageFactory::Arguments & args) + { + ASTs & engine_args = args.engine_args; + + if (engine_args.size() != 6) + throw Exception( + "Storage MySQLiDistributed requires 5 parameters: ExternalDistributed('engine_name', 'cluster_description', database, table, 'user', 'password').", + ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + + for (auto & engine_arg : engine_args) + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); + + const String & engine_name = engine_args[0]->as().value.safeGet(); + const String & cluster_description = engine_args[1]->as().value.safeGet(); + const String & remote_database = engine_args[2]->as().value.safeGet(); + const String & remote_table = engine_args[3]->as().value.safeGet(); + const String & username = engine_args[4]->as().value.safeGet(); + const String & password = engine_args[5]->as().value.safeGet(); + + StorageExternalDistributed::ExternalStorageEngine table_engine; + if (engine_name == "MySQL") + table_engine = StorageExternalDistributed::ExternalStorageEngine::MySQL; + else if (engine_name == "PostgreSQL") + table_engine = StorageExternalDistributed::ExternalStorageEngine::PostgreSQL; + else + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "External storage engine {} is not supported for StorageExternalDistributed. Supported engines are: MySQL, PostgreSQL", + engine_name); + + return StorageExternalDistributed::create( + args.table_id, + table_engine, + cluster_description, + remote_database, + remote_table, + username, + password, + args.columns, + args.constraints, + args.getContext()); + }, + { + .source_access_type = AccessType::MYSQL, + }); +} + +} + +#endif diff --git a/src/Storages/StorageExternalDistributed.h b/src/Storages/StorageExternalDistributed.h new file mode 100644 index 00000000000..71022f5eaa3 --- /dev/null +++ b/src/Storages/StorageExternalDistributed.h @@ -0,0 +1,65 @@ +#pragma once + +#if !defined(ARCADIA_BUILD) +#include "config_core.h" +#endif + +#if USE_MYSQL || USE_LIBPQXX + +#include +#include +#include + + +namespace DB +{ + +/// Storages MySQL and PostgreSQL use ConnectionPoolWithFailover and support multiple replicas. +/// This class unites multiple storages with replicas into multiple shards with replicas. +/// A query to external database is passed to one replica on each shard, the result is united. +/// Replicas on each shard have the same priority, traversed replicas are moved to the end of the queue. 
+/// TODO: try `load_balancing` setting for replicas priorities same way as for table function `remote` +class StorageExternalDistributed final : public ext::shared_ptr_helper, public DB::IStorage +{ + friend struct ext::shared_ptr_helper; + +public: + enum class ExternalStorageEngine + { + MySQL, + PostgreSQL, + Default + }; + + std::string getName() const override { return "ExternalDistributed"; } + + Pipe read( + const Names & column_names, + const StorageMetadataPtr & /*metadata_snapshot*/, + SelectQueryInfo & query_info, + ContextPtr context, + QueryProcessingStage::Enum processed_stage, + size_t max_block_size, + unsigned num_streams) override; + +protected: + StorageExternalDistributed( + const StorageID & table_id_, + ExternalStorageEngine table_engine, + const String & cluster_description, + const String & remote_database_, + const String & remote_table_, + const String & username, + const String & password, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + ContextPtr context_); + +private: + using Shards = std::unordered_set; + Shards shards; +}; + +} + +#endif diff --git a/src/Storages/StorageFactory.cpp b/src/Storages/StorageFactory.cpp index 7aaec9b7e76..3a57c8ab4f6 100644 --- a/src/Storages/StorageFactory.cpp +++ b/src/Storages/StorageFactory.cpp @@ -31,6 +31,23 @@ static void checkAllTypesAreAllowedInTable(const NamesAndTypesList & names_and_t } +ContextPtr StorageFactory::Arguments::getContext() const +{ + auto ptr = context.lock(); + if (!ptr) + throw Exception("Context has expired", ErrorCodes::LOGICAL_ERROR); + return ptr; +} + +ContextPtr StorageFactory::Arguments::getLocalContext() const +{ + auto ptr = local_context.lock(); + if (!ptr) + throw Exception("Context has expired", ErrorCodes::LOGICAL_ERROR); + return ptr; +} + + void StorageFactory::registerStorage(const std::string & name, CreatorFn creator_fn, StorageFeatures features) { if (!storages.emplace(name, Creator{std::move(creator_fn), features}).second) @@ -42,8 +59,8 @@ void StorageFactory::registerStorage(const std::string & name, CreatorFn creator StoragePtr StorageFactory::get( const ASTCreateQuery & query, const String & relative_data_path, - Context & local_context, - Context & context, + ContextPtr local_context, + ContextPtr context, const ColumnsDescription & columns, const ConstraintsDescription & constraints, bool has_force_restore_data_flag) const @@ -179,7 +196,7 @@ StoragePtr StorageFactory::get( .attach = query.attach, .has_force_restore_data_flag = has_force_restore_data_flag }; - assert(&arguments.context == &arguments.context.getGlobalContext()); + assert(arguments.getContext() == arguments.getContext()->getGlobalContext()); auto res = storages.at(name).creator_fn(arguments); if (!empty_engine_args.empty()) @@ -191,8 +208,8 @@ StoragePtr StorageFactory::get( storage_def->engine->arguments->children = empty_engine_args; } - if (local_context.hasQueryContext() && context.getSettingsRef().log_queries) - local_context.getQueryContext().addQueryFactoriesInfo(Context::QueryLogFactories::Storage, name); + if (local_context->hasQueryContext() && context->getSettingsRef().log_queries) + local_context->getQueryContext()->addQueryFactoriesInfo(Context::QueryLogFactories::Storage, name); return res; } diff --git a/src/Storages/StorageFactory.h b/src/Storages/StorageFactory.h index 18dd24e10db..43f6a6d6f7d 100644 --- a/src/Storages/StorageFactory.h +++ b/src/Storages/StorageFactory.h @@ -39,12 +39,15 @@ public: /// Relative to from server config (possibly of some of 
some for *MergeTree) const String & relative_data_path; const StorageID & table_id; - Context & local_context; - Context & context; + ContextWeakPtr local_context; + ContextWeakPtr context; const ColumnsDescription & columns; const ConstraintsDescription & constraints; bool attach; bool has_force_restore_data_flag; + + ContextPtr getContext() const; + ContextPtr getLocalContext() const; }; /// Analog of the IStorage::supports*() helpers @@ -76,8 +79,8 @@ public: StoragePtr get( const ASTCreateQuery & query, const String & relative_data_path, - Context & local_context, - Context & context, + ContextPtr local_context, + ContextPtr context, const ColumnsDescription & columns, const ConstraintsDescription & constraints, bool has_force_restore_data_flag) const; diff --git a/src/Storages/StorageFile.cpp b/src/Storages/StorageFile.cpp index 5524569e1f0..14b91d29805 100644 --- a/src/Storages/StorageFile.cpp +++ b/src/Storages/StorageFile.cpp @@ -22,6 +22,8 @@ #include #include #include +#include +#include #include #include @@ -114,9 +116,9 @@ std::string getTablePath(const std::string & table_dir_path, const std::string & } /// Both db_dir_path and table_path must be converted to absolute paths (in particular, path cannot contain '..'). -void checkCreationIsAllowed(const Context & context_global, const std::string & db_dir_path, const std::string & table_path) +void checkCreationIsAllowed(ContextPtr context_global, const std::string & db_dir_path, const std::string & table_path) { - if (context_global.getApplicationType() != Context::ApplicationType::SERVER) + if (context_global->getApplicationType() != Context::ApplicationType::SERVER) return; /// "/dev/null" is allowed for perf testing @@ -129,7 +131,7 @@ void checkCreationIsAllowed(const Context & context_global, const std::string & } } -Strings StorageFile::getPathsList(const String & table_path, const String & user_files_path, const Context & context) +Strings StorageFile::getPathsList(const String & table_path, const String & user_files_path, ContextPtr context) { String user_files_absolute_path = Poco::Path(user_files_path).makeAbsolute().makeDirectory().toString(); Poco::Path poco_path = Poco::Path(table_path); @@ -149,10 +151,15 @@ Strings StorageFile::getPathsList(const String & table_path, const String & user return paths; } +bool StorageFile::isColumnOriented() const +{ + return format_name != "Distributed" && FormatFactory::instance().checkIfFormatIsColumnOriented(format_name); +} + StorageFile::StorageFile(int table_fd_, CommonArguments args) : StorageFile(args) { - if (args.context.getApplicationType() == Context::ApplicationType::SERVER) + if (args.getContext()->getApplicationType() == Context::ApplicationType::SERVER) throw Exception("Using file descriptor as source of storage isn't allowed for server daemons", ErrorCodes::DATABASE_ACCESS_DENIED); if (args.format_name == "Distributed") throw Exception("Distributed format is allowed only with explicit file path", ErrorCodes::INCORRECT_FILE_NAME); @@ -170,7 +177,7 @@ StorageFile::StorageFile(const std::string & table_path_, const std::string & us : StorageFile(args) { is_db_table = false; - paths = getPathsList(table_path_, user_files_path, args.context); + paths = getPathsList(table_path_, user_files_path, args.getContext()); if (args.format_name == "Distributed") { @@ -207,7 +214,7 @@ StorageFile::StorageFile(CommonArguments args) , format_name(args.format_name) , format_settings(args.format_settings) , compression_method(args.compression_method) - , 
base_path(args.context.getPath()) + , base_path(args.getContext()->getPath()) { StorageInMemoryMetadata storage_metadata; if (args.format_name != "Distributed") @@ -218,15 +225,17 @@ StorageFile::StorageFile(CommonArguments args) } -static std::chrono::seconds getLockTimeout(const Context & context) +static std::chrono::seconds getLockTimeout(ContextPtr context) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); Int64 lock_timeout = settings.lock_acquire_timeout.totalSeconds(); if (settings.max_execution_time.totalSeconds() != 0 && settings.max_execution_time.totalSeconds() < lock_timeout) lock_timeout = settings.max_execution_time.totalSeconds(); return std::chrono::seconds{lock_timeout}; } +using StorageFilePtr = std::shared_ptr; + class StorageFileSource : public SourceWithProgress { @@ -257,14 +266,26 @@ public: return header; } + static Block getBlockForSource( + const StorageFilePtr & storage, + const StorageMetadataPtr & metadata_snapshot, + const ColumnsDescription & columns_description, + const FilesInfoPtr & files_info) + { + if (storage->isColumnOriented()) + return metadata_snapshot->getSampleBlockForColumns(columns_description.getNamesOfPhysical(), storage->getVirtuals(), storage->getStorageID()); + else + return getHeader(metadata_snapshot, files_info->need_path_column, files_info->need_file_column); + } + StorageFileSource( std::shared_ptr storage_, const StorageMetadataPtr & metadata_snapshot_, - const Context & context_, + ContextPtr context_, UInt64 max_block_size_, FilesInfoPtr files_info_, ColumnsDescription columns_description_) - : SourceWithProgress(getHeader(metadata_snapshot_, files_info_->need_path_column, files_info_->need_file_column)) + : SourceWithProgress(getBlockForSource(storage_, metadata_snapshot_, columns_description_, files_info_)) , storage(std::move(storage_)) , metadata_snapshot(metadata_snapshot_) , files_info(std::move(files_info_)) @@ -344,8 +365,16 @@ public: } read_buf = wrapReadBufferWithCompressionMethod(std::move(nested_buffer), method); + + auto get_block_for_format = [&]() -> Block + { + if (storage->isColumnOriented()) + return metadata_snapshot->getSampleBlockForColumns(columns_description.getNamesOfPhysical()); + return metadata_snapshot->getSampleBlock(); + }; + auto format = FormatFactory::instance().getInput( - storage->format_name, *read_buf, metadata_snapshot->getSampleBlock(), context, max_block_size, storage->format_settings); + storage->format_name, *read_buf, get_block_for_format(), context, max_block_size, storage->format_settings); reader = std::make_shared(format); @@ -403,7 +432,7 @@ private: ColumnsDescription columns_description; - const Context & context; /// TODO Untangle potential issues with context lifetime. + ContextPtr context; /// TODO Untangle potential issues with context lifetime. 
UInt64 max_block_size; bool finished_generate = false; @@ -412,12 +441,11 @@ private: std::unique_lock unique_lock; }; - Pipe StorageFile::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) @@ -429,7 +457,7 @@ Pipe StorageFile::read( else if (paths.size() == 1 && !Poco::File(paths[0]).exists()) { - if (context.getSettingsRef().engine_file_empty_if_not_exists) + if (context->getSettingsRef().engine_file_empty_if_not_exists) return Pipe(std::make_shared(metadata_snapshot->getSampleBlockForColumns(column_names, getVirtuals(), getStorageID()))); else throw Exception("File " + paths[0] + " doesn't exist", ErrorCodes::FILE_DOESNT_EXIST); @@ -457,9 +485,16 @@ Pipe StorageFile::read( for (size_t i = 0; i < num_streams; ++i) { + const auto get_columns_for_format = [&]() -> ColumnsDescription + { + if (isColumnOriented()) + return ColumnsDescription{ + metadata_snapshot->getSampleBlockForColumns(column_names, getVirtuals(), getStorageID()).getNamesAndTypesList()}; + else + return metadata_snapshot->getColumns(); + }; pipes.emplace_back(std::make_shared( - this_ptr, metadata_snapshot, context, max_block_size, files_info, - metadata_snapshot->getColumns())); + this_ptr, metadata_snapshot, context, max_block_size, files_info, get_columns_for_format())); } return Pipe::unitePipes(std::move(pipes)); @@ -474,7 +509,7 @@ public: const StorageMetadataPtr & metadata_snapshot_, std::unique_lock && lock_, const CompressionMethod compression_method, - const Context & context, + ContextPtr context, const std::optional & format_settings, int & flags) : storage(storage_) @@ -549,7 +584,7 @@ private: BlockOutputStreamPtr StorageFile::write( const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, - const Context & context) + ContextPtr context) { if (format_name == "Distributed") throw Exception("Method write is not implemented for Distributed format", ErrorCodes::NOT_IMPLEMENTED); @@ -557,7 +592,7 @@ BlockOutputStreamPtr StorageFile::write( int flags = 0; std::string path; - if (context.getSettingsRef().engine_file_truncate_on_insert) + if (context->getSettingsRef().engine_file_truncate_on_insert) flags |= O_TRUNC; if (!paths.empty()) @@ -610,7 +645,7 @@ void StorageFile::rename(const String & new_path_to_table_data, const StorageID void StorageFile::truncate( const ASTPtr & /*query*/, const StorageMetadataPtr & /* metadata_snapshot */, - const Context & /* context */, + ContextPtr /* context */, TableExclusiveLockHolder &) { if (paths.size() != 1) @@ -643,11 +678,15 @@ void registerStorageFile(StorageFactory & factory) "File", [](const StorageFactory::Arguments & factory_args) { - StorageFile::CommonArguments storage_args{ - .table_id = factory_args.table_id, - .columns = factory_args.columns, - .constraints = factory_args.constraints, - .context = factory_args.context + StorageFile::CommonArguments storage_args + { + WithContext(factory_args.getContext()), + factory_args.table_id, + {}, + {}, + {}, + factory_args.columns, + factory_args.constraints, }; ASTs & engine_args_ast = factory_args.engine_args; @@ -657,7 +696,7 @@ void registerStorageFile(StorageFactory & factory) "Storage File requires from 1 to 3 arguments: name of used format, source and compression_method.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - engine_args_ast[0] = 
evaluateConstantExpressionOrIdentifierAsLiteral(engine_args_ast[0], factory_args.local_context); + engine_args_ast[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args_ast[0], factory_args.getLocalContext()); storage_args.format_name = engine_args_ast[0]->as().value.safeGet(); // Use format settings from global server context + settings from @@ -669,7 +708,7 @@ void registerStorageFile(StorageFactory & factory) // Apply changed settings from global context, but ignore the // unknown ones, because we only have the format settings here. - const auto & changes = factory_args.context.getSettingsRef().changes(); + const auto & changes = factory_args.getContext()->getSettingsRef().changes(); for (const auto & change : changes) { if (user_format_settings.has(change.name)) @@ -683,12 +722,12 @@ void registerStorageFile(StorageFactory & factory) factory_args.storage_def->settings->changes); storage_args.format_settings = getFormatSettings( - factory_args.context, user_format_settings); + factory_args.getContext(), user_format_settings); } else { storage_args.format_settings = getFormatSettings( - factory_args.context); + factory_args.getContext()); } if (engine_args_ast.size() == 1) /// Table in database @@ -725,7 +764,7 @@ void registerStorageFile(StorageFactory & factory) if (engine_args_ast.size() == 3) { - engine_args_ast[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args_ast[2], factory_args.local_context); + engine_args_ast[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args_ast[2], factory_args.getLocalContext()); storage_args.compression_method = engine_args_ast[2]->as().value.safeGet(); } else @@ -734,7 +773,7 @@ void registerStorageFile(StorageFactory & factory) if (0 <= source_fd) /// File descriptor return StorageFile::create(source_fd, storage_args); else /// User's file - return StorageFile::create(source_path, factory_args.context.getUserFilesPath(), storage_args); + return StorageFile::create(source_path, factory_args.getContext()->getUserFilesPath(), storage_args); }, storage_features); } diff --git a/src/Storages/StorageFile.h b/src/Storages/StorageFile.h index c316412f808..a277dda7cc0 100644 --- a/src/Storages/StorageFile.h +++ b/src/Storages/StorageFile.h @@ -28,7 +28,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -36,12 +36,12 @@ public: BlockOutputStreamPtr write( const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, - const Context & context) override; + ContextPtr context) override; void truncate( const ASTPtr & /*query*/, const StorageMetadataPtr & /* metadata_snapshot */, - const Context & /* context */, + ContextPtr /* context */, TableExclusiveLockHolder &) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; @@ -49,7 +49,7 @@ public: bool storesDataOnDisk() const override; Strings getDataPaths() const override; - struct CommonArguments + struct CommonArguments : public WithContext { StorageID table_id; std::string format_name; @@ -57,12 +57,17 @@ public: std::string compression_method; const ColumnsDescription & columns; const ConstraintsDescription & constraints; - const Context & context; }; NamesAndTypesList getVirtuals() const override; - static Strings getPathsList(const String & table_path, const String & user_files_path, const Context & 
context); + static Strings getPathsList(const String & table_path, const String & user_files_path, ContextPtr context); + + /// Check if the format is column-oriented. + /// Is is useful because column oriented formats could effectively skip unknown columns + /// So we can create a header of only required columns in read method and ask + /// format to read only them. Note: this hack cannot be done with ordinary formats like TSV. + bool isColumnOriented() const; protected: friend class StorageFileSource; diff --git a/src/Storages/StorageGenerateRandom.cpp b/src/Storages/StorageGenerateRandom.cpp index f06daa3a2bd..bc158c38f37 100644 --- a/src/Storages/StorageGenerateRandom.cpp +++ b/src/Storages/StorageGenerateRandom.cpp @@ -65,7 +65,7 @@ ColumnPtr fillColumnWithRandomData( UInt64 max_array_length, UInt64 max_string_length, pcg64 & rng, - const Context & context) + ContextPtr context) { TypeIndex idx = type->getTypeId(); @@ -339,7 +339,7 @@ ColumnPtr fillColumnWithRandomData( class GenerateSource : public SourceWithProgress { public: - GenerateSource(UInt64 block_size_, UInt64 max_array_length_, UInt64 max_string_length_, UInt64 random_seed_, Block block_header_, const Context & context_) + GenerateSource(UInt64 block_size_, UInt64 max_array_length_, UInt64 max_string_length_, UInt64 random_seed_, Block block_header_, ContextPtr context_) : SourceWithProgress(Nested::flatten(prepareBlockToFill(block_header_))) , block_size(block_size_), max_array_length(max_array_length_), max_string_length(max_string_length_) , block_to_fill(std::move(block_header_)), rng(random_seed_), context(context_) {} @@ -367,7 +367,7 @@ private: pcg64 rng; - const Context & context; + ContextPtr context; static Block & prepareBlockToFill(Block & block) { @@ -442,7 +442,7 @@ Pipe StorageGenerateRandom::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) diff --git a/src/Storages/StorageGenerateRandom.h b/src/Storages/StorageGenerateRandom.h index 965c5b3a9d3..d9c2acb782b 100644 --- a/src/Storages/StorageGenerateRandom.h +++ b/src/Storages/StorageGenerateRandom.h @@ -19,7 +19,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageInMemoryMetadata.cpp b/src/Storages/StorageInMemoryMetadata.cpp index 871ff38c07f..2f4a24a5c60 100644 --- a/src/Storages/StorageInMemoryMetadata.cpp +++ b/src/Storages/StorageInMemoryMetadata.cpp @@ -291,9 +291,10 @@ Block StorageInMemoryMetadata::getSampleBlockForColumns( { Block res; - std::unordered_map columns_map; - auto all_columns = getColumns().getAllWithSubcolumns(); + std::unordered_map columns_map; + columns_map.reserve(all_columns.size()); + for (const auto & elem : all_columns) columns_map.emplace(elem.name, elem.type); @@ -306,15 +307,11 @@ Block StorageInMemoryMetadata::getSampleBlockForColumns( { auto it = columns_map.find(name); if (it != columns_map.end()) - { res.insert({it->second->createColumn(), it->second, it->first}); - } else - { throw Exception( - "Column " + backQuote(name) + " not found in table " + storage_id.getNameForLogs(), + "Column " + backQuote(name) + " not found in table " + (storage_id.empty() ? 
"" : storage_id.getNameForLogs()), ErrorCodes::NOT_FOUND_COLUMN_IN_BLOCK); - } } return res; diff --git a/src/Storages/StorageInMemoryMetadata.h b/src/Storages/StorageInMemoryMetadata.h index 038416aff7d..00fb944c0b5 100644 --- a/src/Storages/StorageInMemoryMetadata.h +++ b/src/Storages/StorageInMemoryMetadata.h @@ -85,9 +85,10 @@ struct StorageInMemoryMetadata /// Returns combined set of columns const ColumnsDescription & getColumns() const; - /// Returns secondary indices + /// Returns secondary indices const IndicesDescription & getSecondaryIndices() const; + /// Has at least one non primary index bool hasSecondaryIndices() const; @@ -146,8 +147,7 @@ struct StorageInMemoryMetadata /// Storage metadata. StorageID required only for more clear exception /// message. Block getSampleBlockForColumns( - const Names & column_names, const NamesAndTypesList & virtuals, const StorageID & storage_id) const; - + const Names & column_names, const NamesAndTypesList & virtuals = {}, const StorageID & storage_id = StorageID::createEmpty()) const; /// Returns structure with partition key. const KeyDescription & getPartitionKey() const; /// Returns ASTExpressionList of partition key expression for storage or nullptr if there is none. diff --git a/src/Storages/StorageInput.cpp b/src/Storages/StorageInput.cpp index 1f881bccf07..63b440aff08 100644 --- a/src/Storages/StorageInput.cpp +++ b/src/Storages/StorageInput.cpp @@ -27,17 +27,14 @@ StorageInput::StorageInput(const StorageID & table_id, const ColumnsDescription } -class StorageInputSource : public SourceWithProgress +class StorageInputSource : public SourceWithProgress, WithContext { public: - StorageInputSource(Context & context_, Block sample_block) - : SourceWithProgress(std::move(sample_block)), context(context_) - { - } + StorageInputSource(ContextPtr context_, Block sample_block) : SourceWithProgress(std::move(sample_block)), WithContext(context_) {} Chunk generate() override { - auto block = context.getInputBlocksReaderCallback()(context); + auto block = getContext()->getInputBlocksReaderCallback()(getContext()); if (!block) return {}; @@ -46,9 +43,6 @@ public: } String getName() const override { return "Input"; } - -private: - Context & context; }; @@ -62,18 +56,18 @@ Pipe StorageInput::read( const Names & /*column_names*/, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) { Pipes pipes; - Context & query_context = const_cast(context).getQueryContext(); + auto query_context = context->getQueryContext(); /// It is TCP request if we have callbacks for input(). - if (query_context.getInputBlocksReaderCallback()) + if (query_context->getInputBlocksReaderCallback()) { /// Send structure to the client. 
- query_context.initializeInput(shared_from_this()); + query_context->initializeInput(shared_from_this()); return Pipe(std::make_shared(query_context, metadata_snapshot->getSampleBlock())); } diff --git a/src/Storages/StorageInput.h b/src/Storages/StorageInput.h index 3cb64993d45..106c22db385 100644 --- a/src/Storages/StorageInput.h +++ b/src/Storages/StorageInput.h @@ -21,7 +21,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageJoin.cpp b/src/Storages/StorageJoin.cpp index a449cebba51..983b9213a35 100644 --- a/src/Storages/StorageJoin.cpp +++ b/src/Storages/StorageJoin.cpp @@ -68,7 +68,7 @@ StorageJoin::StorageJoin( void StorageJoin::truncate( - const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder&) + const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder&) { disk->removeRecursive(path); disk->createDirectories(path); @@ -146,7 +146,7 @@ void registerStorageJoin(StorageFactory & factory) ASTs & engine_args = args.engine_args; - const auto & settings = args.context.getSettingsRef(); + const auto & settings = args.getContext()->getSettingsRef(); auto join_use_nulls = settings.join_use_nulls; auto max_rows_in_join = settings.max_rows_in_join; @@ -186,7 +186,7 @@ void registerStorageJoin(StorageFactory & factory) } } - DiskPtr disk = args.context.getDisk(disk_name); + DiskPtr disk = args.getContext()->getDisk(disk_name); if (engine_args.size() < 3) throw Exception( @@ -492,7 +492,7 @@ Pipe StorageJoin::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned /*num_streams*/) diff --git a/src/Storages/StorageJoin.h b/src/Storages/StorageJoin.h index 5f0f9f92404..4baac53c69c 100644 --- a/src/Storages/StorageJoin.h +++ b/src/Storages/StorageJoin.h @@ -27,7 +27,7 @@ class StorageJoin final : public ext::shared_ptr_helper, public Sto public: String getName() const override { return "Join"; } - void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override; /// Return instance of HashJoin holding lock that protects from insertions to StorageJoin. /// HashJoin relies on structure of hash table that's why we need to return it with locked mutex. 
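The comment above describes returning the HashJoin together with the lock that protects it. As a rough sketch of that idea in plain standard C++ (hypothetical names, not ClickHouse's actual RWLock/HashJoin types): the accessor hands back the table and a shared lock in one object, so readers hold the lock for as long as they use the result while insertions, which take the lock exclusively, are blocked.

#include <memory>
#include <shared_mutex>

struct HashTableData { int rows = 0; };

/// The returned object keeps the shared lock alive for as long as the caller
/// holds it, so concurrent insertions cannot change the table layout underneath.
struct LockedJoin
{
    std::shared_lock<std::shared_mutex> lock;
    std::shared_ptr<const HashTableData> data;
};

class JoinHolder
{
public:
    LockedJoin getJoinLocked() const
    {
        return LockedJoin{std::shared_lock<std::shared_mutex>(rwlock), data};
    }

private:
    mutable std::shared_mutex rwlock;
    std::shared_ptr<const HashTableData> data = std::make_shared<HashTableData>();
};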
@@ -45,7 +45,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageLog.cpp b/src/Storages/StorageLog.cpp index ddb0cadc148..8ed68e0b44d 100644 --- a/src/Storages/StorageLog.cpp +++ b/src/Storages/StorageLog.cpp @@ -357,6 +357,11 @@ void LogBlockOutputStream::writeSuffix() streams.clear(); done = true; + + /// unlock should be done from the same thread as lock, and dtor may be + /// called from different thread, so it should be done here (at least in + /// case of no exceptions occurred) + lock.unlock(); } @@ -586,7 +591,7 @@ void StorageLog::rename(const String & new_path_to_table_data, const StorageID & renameInMemory(new_table_id); } -void StorageLog::truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) +void StorageLog::truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) { files.clear(); file_count = 0; @@ -628,9 +633,9 @@ const StorageLog::Marks & StorageLog::getMarksWithRealRowCount(const StorageMeta } -static std::chrono::seconds getLockTimeout(const Context & context) +static std::chrono::seconds getLockTimeout(ContextPtr context) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); Int64 lock_timeout = settings.lock_acquire_timeout.totalSeconds(); if (settings.max_execution_time.totalSeconds() != 0 && settings.max_execution_time.totalSeconds() < lock_timeout) lock_timeout = settings.max_execution_time.totalSeconds(); @@ -642,7 +647,7 @@ Pipe StorageLog::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) @@ -667,7 +672,7 @@ Pipe StorageLog::read( if (num_streams > marks_size) num_streams = marks_size; - size_t max_read_buffer_size = context.getSettingsRef().max_read_buffer_size; + size_t max_read_buffer_size = context->getSettingsRef().max_read_buffer_size; for (size_t stream = 0; stream < num_streams; ++stream) { @@ -690,7 +695,7 @@ Pipe StorageLog::read( return Pipe::unitePipes(std::move(pipes)); } -BlockOutputStreamPtr StorageLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) { auto lock_timeout = getLockTimeout(context); loadMarks(lock_timeout); @@ -702,7 +707,7 @@ BlockOutputStreamPtr StorageLog::write(const ASTPtr & /*query*/, const StorageMe return std::make_shared(*this, metadata_snapshot, std::move(lock)); } -CheckResults StorageLog::checkData(const ASTPtr & /* query */, const Context & context) +CheckResults StorageLog::checkData(const ASTPtr & /* query */, ContextPtr context) { std::shared_lock lock(rwlock, getLockTimeout(context)); if (!lock) @@ -726,11 +731,11 @@ void registerStorageLog(StorageFactory & factory) ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); String disk_name = getDiskName(*args.storage_def); - DiskPtr disk = args.context.getDisk(disk_name); + DiskPtr disk = args.getContext()->getDisk(disk_name); return StorageLog::create( disk, 
args.relative_data_path, args.table_id, args.columns, args.constraints, - args.attach, args.context.getSettings().max_compress_block_size); + args.attach, args.getContext()->getSettings().max_compress_block_size); }, features); } diff --git a/src/Storages/StorageLog.h b/src/Storages/StorageLog.h index acb03658182..4fbaf53529f 100644 --- a/src/Storages/StorageLog.h +++ b/src/Storages/StorageLog.h @@ -29,18 +29,18 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; - CheckResults checkData(const ASTPtr & /* query */, const Context & /* context */) override; + CheckResults checkData(const ASTPtr & /* query */, ContextPtr /* context */) override; - void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override; bool storesDataOnDisk() const override { return true; } Strings getDataPaths() const override { return {DB::fullPath(disk, table_path)}; } @@ -86,7 +86,7 @@ private: DiskPtr disk; String table_path; - mutable std::shared_timed_mutex rwlock; + std::shared_timed_mutex rwlock; Files files; diff --git a/src/Storages/StorageMaterializeMySQL.cpp b/src/Storages/StorageMaterializeMySQL.cpp index e59f1e22958..8e6f2e1ad63 100644 --- a/src/Storages/StorageMaterializeMySQL.cpp +++ b/src/Storages/StorageMaterializeMySQL.cpp @@ -39,7 +39,7 @@ Pipe StorageMaterializeMySQL::read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned int num_streams) @@ -48,7 +48,7 @@ Pipe StorageMaterializeMySQL::read( rethrowSyncExceptionIfNeed(database); NameSet column_names_set = NameSet(column_names.begin(), column_names.end()); - auto lock = nested_storage->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock = nested_storage->lockForShare(context->getCurrentQueryId(), context->getSettingsRef().lock_acquire_timeout); const StorageMetadataPtr & nested_metadata = nested_storage->getInMemoryMetadataPtr(); Block nested_header = nested_metadata->getSampleBlock(); @@ -92,7 +92,7 @@ Pipe StorageMaterializeMySQL::read( { Block pipe_header = pipe.getHeader(); auto syntax = TreeRewriter(context).analyze(expressions, pipe_header.getNamesAndTypesList()); - ExpressionActionsPtr expression_actions = ExpressionAnalyzer(expressions, syntax, context).getActions(true); + ExpressionActionsPtr expression_actions = ExpressionAnalyzer(expressions, syntax, context).getActions(true /* add_aliases */, false /* project_result */); pipe.addSimpleTransform([&](const Block & header) { diff --git a/src/Storages/StorageMaterializeMySQL.h b/src/Storages/StorageMaterializeMySQL.h index f787470e2d2..8a4b88cbbb4 100644 --- a/src/Storages/StorageMaterializeMySQL.h +++ 
b/src/Storages/StorageMaterializeMySQL.h @@ -26,9 +26,9 @@ public: Pipe read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr &, const StorageMetadataPtr &, const Context &) override { throwNotAllowed(); } + BlockOutputStreamPtr write(const ASTPtr &, const StorageMetadataPtr &, ContextPtr) override { throwNotAllowed(); } NamesAndTypesList getVirtuals() const override; ColumnSizeByName getColumnSizes() const override; diff --git a/src/Storages/StorageMaterializedView.cpp b/src/Storages/StorageMaterializedView.cpp index c89187a46e2..89b8bc72526 100644 --- a/src/Storages/StorageMaterializedView.cpp +++ b/src/Storages/StorageMaterializedView.cpp @@ -50,11 +50,11 @@ static inline String generateInnerTableName(const StorageID & view_id) StorageMaterializedView::StorageMaterializedView( const StorageID & table_id_, - Context & local_context, + ContextPtr local_context, const ASTCreateQuery & query, const ColumnsDescription & columns_, bool attach_) - : IStorage(table_id_), global_context(local_context.getGlobalContext()) + : IStorage(table_id_), WithContext(local_context->getGlobalContext()) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -76,10 +76,15 @@ StorageMaterializedView::StorageMaterializedView( storage_metadata.setSelectQuery(select); setInMemoryMetadata(storage_metadata); + bool point_to_itself_by_uuid = has_inner_table && query.to_inner_uuid != UUIDHelpers::Nil + && query.to_inner_uuid == table_id_.uuid; + bool point_to_itself_by_name = !has_inner_table && query.to_table_id.database_name == table_id_.database_name + && query.to_table_id.table_name == table_id_.table_name; + if (point_to_itself_by_uuid || point_to_itself_by_name) + throw Exception(ErrorCodes::BAD_ARGUMENTS, "Materialized view {} cannot point to itself", table_id_.getFullTableName()); + if (!has_inner_table) { - if (query.to_table_id.database_name == table_id_.database_name && query.to_table_id.table_name == table_id_.table_name) - throw Exception(ErrorCodes::BAD_ARGUMENTS, "Materialized view {} cannot point to itself", table_id_.getFullTableName()); target_table_id = query.to_table_id; } else if (attach_) @@ -90,7 +95,7 @@ StorageMaterializedView::StorageMaterializedView( else { /// We will create a query to create an internal table. 
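The change running through all of these storage files is the same: `const Context &` parameters become `ContextPtr`, member access switches from `context.` to `context->`, and registration code uses `args.getContext()` / `args.getLocalContext()` instead of raw references. A minimal sketch of the new calling convention, assuming `ContextPtr` is a shared-pointer alias to an immutable `Context`; the `Settings` stand-in and `readLockTimeout` below are invented for illustration only, not ClickHouse code:

#include <cstdint>
#include <memory>

/// Simplified stand-ins; the real Context and Settings live in src/Interpreters.
struct Settings
{
    int64_t lock_acquire_timeout = 120;
};

class Context
{
public:
    const Settings & getSettingsRef() const { return settings; }
private:
    Settings settings;
};

using ContextPtr = std::shared_ptr<const Context>;

/// Old style:  void read(const Context & context)  { ... context.getSettingsRef() ... }
/// New style:  the shared pointer is passed by value and dereferenced with ->.
int64_t readLockTimeout(ContextPtr context)
{
    return context->getSettingsRef().lock_acquire_timeout;
}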
- auto create_context = Context(local_context); + auto create_context = Context::createCopy(local_context); auto manual_create_query = std::make_shared(); manual_create_query->database = getStorageID().database_name; manual_create_query->table = generateInnerTableName(getStorageID()); @@ -106,32 +111,33 @@ StorageMaterializedView::StorageMaterializedView( create_interpreter.setInternal(true); create_interpreter.execute(); - target_table_id = DatabaseCatalog::instance().getTable({manual_create_query->database, manual_create_query->table}, global_context)->getStorageID(); + target_table_id = DatabaseCatalog::instance().getTable({manual_create_query->database, manual_create_query->table}, getContext())->getStorageID(); } if (!select.select_table_id.empty()) DatabaseCatalog::instance().addDependency(select.select_table_id, getStorageID()); } -QueryProcessingStage::Enum StorageMaterializedView::getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const +QueryProcessingStage::Enum StorageMaterializedView::getQueryProcessingStage( + ContextPtr local_context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const { - return getTargetTable()->getQueryProcessingStage(context, to_stage, query_info); + return getTargetTable()->getQueryProcessingStage(local_context, to_stage, query_info); } Pipe StorageMaterializedView::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); return plan.convertToPipe( - QueryPlanOptimizationSettings::fromContext(context), - BuildQueryPipelineSettings::fromContext(context)); + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } void StorageMaterializedView::read( @@ -139,23 +145,23 @@ void StorageMaterializedView::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { auto storage = getTargetTable(); - auto lock = storage->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock = storage->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto target_metadata_snapshot = storage->getInMemoryMetadataPtr(); if (query_info.order_optimizer) - query_info.input_order_info = query_info.order_optimizer->getInputOrder(target_metadata_snapshot, context); + query_info.input_order_info = query_info.order_optimizer->getInputOrder(target_metadata_snapshot, local_context); - storage->read(query_plan, column_names, target_metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); + storage->read(query_plan, column_names, target_metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); if (query_plan.isInitialized()) { - auto mv_header = getHeaderForProcessingStage(*this, column_names, metadata_snapshot, query_info, 
context, processed_stage); + auto mv_header = getHeaderForProcessingStage(*this, column_names, metadata_snapshot, query_info, local_context, processed_stage); auto target_header = query_plan.getCurrentDataStream().header; if (!blocksHaveEqualStructure(mv_header, target_header)) { @@ -185,20 +191,20 @@ void StorageMaterializedView::read( } } -BlockOutputStreamPtr StorageMaterializedView::write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) +BlockOutputStreamPtr StorageMaterializedView::write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr local_context) { auto storage = getTargetTable(); - auto lock = storage->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock = storage->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto metadata_snapshot = storage->getInMemoryMetadataPtr(); - auto stream = storage->write(query, metadata_snapshot, context); + auto stream = storage->write(query, metadata_snapshot, local_context); stream->addTableLock(lock); return stream; } -static void executeDropQuery(ASTDropQuery::Kind kind, const Context & global_context, const Context & current_context, const StorageID & target_table_id, bool no_delay) +static void executeDropQuery(ASTDropQuery::Kind kind, ContextPtr global_context, ContextPtr current_context, const StorageID & target_table_id, bool no_delay) { if (DatabaseCatalog::instance().tryGetTable(target_table_id, current_context)) { @@ -214,13 +220,13 @@ static void executeDropQuery(ASTDropQuery::Kind kind, const Context & global_con /// to avoid "Not enough privileges" error if current user has only DROP VIEW ON mat_view_name privilege /// and not allowed to drop inner table explicitly. Allowing to drop inner table without explicit grant /// looks like expected behaviour and we have tests for it. 
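The hunk that follows replaces the by-value `Context(global_context)` copy with `Context::createCopy(global_context)` and then marks the copy as a secondary query before running the internal DROP, so the user only needs privileges on the materialized view itself. A much-simplified, hypothetical model of that copy-then-mutate idiom; `MiniContext` and its members are invented names, not ClickHouse API:

#include <memory>
#include <string>

/// Heavily simplified context used only to illustrate createCopy: instead of copy-constructing
/// a value, callers get a fresh heap-allocated copy behind its own shared pointer, which they
/// may mutate (e.g. mark the query as secondary) without touching the context they started from.
class MiniContext
{
public:
    MiniContext() = default;

    static std::shared_ptr<MiniContext> createCopy(const std::shared_ptr<const MiniContext> & other)
    {
        return std::make_shared<MiniContext>(*other);
    }

    void setQueryKind(std::string kind) { query_kind = std::move(kind); }
    const std::string & getQueryKind() const { return query_kind; }

private:
    std::string query_kind = "initial_query";
};

/// Usage mirroring the hunk below (names illustrative only):
///   auto drop_context = MiniContext::createCopy(global_context);
///   drop_context->setQueryKind("secondary_query");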
- auto drop_context = Context(global_context); - drop_context.getClientInfo().query_kind = ClientInfo::QueryKind::SECONDARY_QUERY; - if (auto txn = current_context.getZooKeeperMetadataTransaction()) + auto drop_context = Context::createCopy(global_context); + drop_context->getClientInfo().query_kind = ClientInfo::QueryKind::SECONDARY_QUERY; + if (auto txn = current_context->getZooKeeperMetadataTransaction()) { /// For Replicated database - drop_context.setQueryContext(const_cast(current_context)); - drop_context.initZooKeeperMetadataTransaction(txn, true); + drop_context->setQueryContext(current_context); + drop_context->initZooKeeperMetadataTransaction(txn, true); } InterpreterDropQuery drop_interpreter(ast_drop_query, drop_context); drop_interpreter.execute(); @@ -235,19 +241,19 @@ void StorageMaterializedView::drop() if (!select_query.select_table_id.empty()) DatabaseCatalog::instance().removeDependency(select_query.select_table_id, table_id); - dropInnerTable(true, global_context); + dropInnerTable(true, getContext()); } -void StorageMaterializedView::dropInnerTable(bool no_delay, const Context & context) +void StorageMaterializedView::dropInnerTable(bool no_delay, ContextPtr local_context) { if (has_inner_table && tryGetTargetTable()) - executeDropQuery(ASTDropQuery::Kind::Drop, global_context, context, target_table_id, no_delay); + executeDropQuery(ASTDropQuery::Kind::Drop, getContext(), local_context, target_table_id, no_delay); } -void StorageMaterializedView::truncate(const ASTPtr &, const StorageMetadataPtr &, const Context & context, TableExclusiveLockHolder &) +void StorageMaterializedView::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr local_context, TableExclusiveLockHolder &) { if (has_inner_table) - executeDropQuery(ASTDropQuery::Kind::Truncate, global_context, context, target_table_id, true); + executeDropQuery(ASTDropQuery::Kind::Truncate, getContext(), local_context, target_table_id, true); } void StorageMaterializedView::checkStatementCanBeForwarded() const @@ -265,26 +271,26 @@ bool StorageMaterializedView::optimize( bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) + ContextPtr local_context) { checkStatementCanBeForwarded(); auto storage_ptr = getTargetTable(); auto metadata_snapshot = storage_ptr->getInMemoryMetadataPtr(); - return getTargetTable()->optimize(query, metadata_snapshot, partition, final, deduplicate, deduplicate_by_columns, context); + return getTargetTable()->optimize(query, metadata_snapshot, partition, final, deduplicate, deduplicate_by_columns, local_context); } void StorageMaterializedView::alter( const AlterCommands & params, - const Context & context, + ContextPtr local_context, TableLockHolder &) { auto table_id = getStorageID(); StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); StorageInMemoryMetadata old_metadata = getInMemoryMetadata(); - params.apply(new_metadata, context); + params.apply(new_metadata, local_context); /// start modify query - if (context.getSettingsRef().allow_experimental_alter_materialized_view_structure) + if (local_context->getSettingsRef().allow_experimental_alter_materialized_view_structure) { const auto & new_select = new_metadata.select; const auto & old_select = old_metadata.getSelectQuery(); @@ -295,14 +301,14 @@ void StorageMaterializedView::alter( } /// end modify query - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata); + 
DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata); setInMemoryMetadata(new_metadata); } -void StorageMaterializedView::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void StorageMaterializedView::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const { - const auto & settings = context.getSettingsRef(); + const auto & settings = local_context->getSettingsRef(); if (settings.allow_experimental_alter_materialized_view_structure) { for (const auto & command : commands) @@ -332,10 +338,10 @@ void StorageMaterializedView::checkMutationIsPossible(const MutationCommands & c } Pipe StorageMaterializedView::alterPartition( - const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, const Context & context) + const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, ContextPtr local_context) { checkStatementCanBeForwarded(); - return getTargetTable()->alterPartition(metadata_snapshot, commands, context); + return getTargetTable()->alterPartition(metadata_snapshot, commands, local_context); } void StorageMaterializedView::checkAlterPartitionIsPossible( @@ -345,10 +351,10 @@ void StorageMaterializedView::checkAlterPartitionIsPossible( getTargetTable()->checkAlterPartitionIsPossible(commands, metadata_snapshot, settings); } -void StorageMaterializedView::mutate(const MutationCommands & commands, const Context & context) +void StorageMaterializedView::mutate(const MutationCommands & commands, ContextPtr local_context) { checkStatementCanBeForwarded(); - getTargetTable()->mutate(commands, context); + getTargetTable()->mutate(commands, local_context); } void StorageMaterializedView::renameInMemory(const StorageID & new_table_id) @@ -375,7 +381,7 @@ void StorageMaterializedView::renameInMemory(const StorageID & new_table_id) elem.to = to; rename->elements.emplace_back(elem); - InterpreterRenameQuery(rename, global_context).execute(); + InterpreterRenameQuery(rename, getContext()).execute(); target_table_id.table_name = new_target_table_name; } @@ -397,13 +403,13 @@ void StorageMaterializedView::shutdown() StoragePtr StorageMaterializedView::getTargetTable() const { checkStackSize(); - return DatabaseCatalog::instance().getTable(target_table_id, global_context); + return DatabaseCatalog::instance().getTable(target_table_id, getContext()); } StoragePtr StorageMaterializedView::tryGetTargetTable() const { checkStackSize(); - return DatabaseCatalog::instance().tryGetTable(target_table_id, global_context); + return DatabaseCatalog::instance().tryGetTable(target_table_id, getContext()); } Strings StorageMaterializedView::getDataPaths() const @@ -424,7 +430,7 @@ void registerStorageMaterializedView(StorageFactory & factory) { /// Pass local_context here to convey setting for inner table return StorageMaterializedView::create( - args.table_id, args.local_context, args.query, + args.table_id, args.getLocalContext(), args.query, args.columns, args.attach); }); } diff --git a/src/Storages/StorageMaterializedView.h b/src/Storages/StorageMaterializedView.h index a5dc089d68e..cda8112a8c3 100644 --- a/src/Storages/StorageMaterializedView.h +++ b/src/Storages/StorageMaterializedView.h @@ -12,7 +12,7 @@ namespace DB { -class StorageMaterializedView final : public ext::shared_ptr_helper, public IStorage +class StorageMaterializedView final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; 
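Instead of keeping its own `Context & global_context` member (removed further down in this header), the storage now derives from `WithContext` and calls `getContext()`. A rough sketch of what such a mixin provides; the real class is defined elsewhere in the ClickHouse tree and its exact storage strategy (weak vs. strong pointer) is not visible in this patch, so the details below are assumptions:

#include <memory>
#include <utility>

struct Context;                                   /// opaque for this sketch
using ContextPtr = std::shared_ptr<const Context>;

/// The base keeps the global context once; derived storages call getContext()
/// wherever they previously read a `Context & global_context` member.
class WithContextSketch
{
public:
    explicit WithContextSketch(ContextPtr context_) : context(std::move(context_)) {}
    ContextPtr getContext() const { return context; }

private:
    ContextPtr context;
};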
public: @@ -27,19 +27,19 @@ public: bool supportsIndexForIn() const override { return getTargetTable()->supportsIndexForIn(); } bool supportsParallelInsert() const override { return getTargetTable()->supportsParallelInsert(); } bool supportsSubcolumns() const override { return getTargetTable()->supportsSubcolumns(); } - bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & /* metadata_snapshot */) const override + bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & /* metadata_snapshot */) const override { auto target_table = getTargetTable(); auto metadata_snapshot = target_table->getInMemoryMetadataPtr(); return target_table->mayBenefitFromIndexForIn(left_in_operand, query_context, metadata_snapshot); } - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void drop() override; - void dropInnerTable(bool no_delay, const Context & context); + void dropInnerTable(bool no_delay, ContextPtr context); - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) override; bool optimize( const ASTPtr & query, @@ -48,25 +48,25 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) override; + ContextPtr context) override; - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; void checkMutationIsPossible(const MutationCommands & commands, const Settings & settings) const override; - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; - Pipe alterPartition(const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, const Context & context) override; + Pipe alterPartition(const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, ContextPtr context) override; void checkAlterPartitionIsPossible(const PartitionCommands & commands, const StorageMetadataPtr & metadata_snapshot, const Settings & settings) const override; - void mutate(const MutationCommands & commands, const Context & context) override; + void mutate(const MutationCommands & commands, ContextPtr context) override; void renameInMemory(const StorageID & new_table_id) override; void shutdown() override; - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; StoragePtr getTargetTable() const; StoragePtr tryGetTargetTable() const; @@ -77,7 +77,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -87,7 +87,7 @@ 
public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -98,7 +98,6 @@ private: /// Will be initialized in constructor StorageID target_table_id = StorageID::createEmpty(); - Context & global_context; bool has_inner_table = false; void checkStatementCanBeForwarded() const; @@ -106,7 +105,7 @@ private: protected: StorageMaterializedView( const StorageID & table_id_, - Context & local_context, + ContextPtr local_context, const ASTCreateQuery & query, const ColumnsDescription & columns_, bool attach_); diff --git a/src/Storages/StorageMemory.cpp b/src/Storages/StorageMemory.cpp index d98cd4212e9..4cae7367606 100644 --- a/src/Storages/StorageMemory.cpp +++ b/src/Storages/StorageMemory.cpp @@ -26,10 +26,6 @@ class MemorySource : public SourceWithProgress { using InitializerFunc = std::function &)>; public: - /// Blocks are stored in std::list which may be appended in another thread. - /// We use pointer to the beginning of the list and its current size. - /// We don't need synchronisation in this reader, because while we hold SharedLock on storage, - /// only new elements can be added to the back of the list, so our iterators remain valid MemorySource( Names column_names_, @@ -59,7 +55,7 @@ protected: size_t current_index = getAndIncrementExecutionIndex(); - if (current_index >= data->size()) + if (!data || current_index >= data->size()) { return {}; } @@ -182,7 +178,7 @@ Pipe StorageMemory::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned num_streams) @@ -230,7 +226,7 @@ Pipe StorageMemory::read( } -BlockOutputStreamPtr StorageMemory::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) +BlockOutputStreamPtr StorageMemory::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { return std::make_shared(*this, metadata_snapshot); } @@ -258,7 +254,7 @@ void StorageMemory::checkMutationIsPossible(const MutationCommands & /*commands* /// Some validation will be added } -void StorageMemory::mutate(const MutationCommands & commands, const Context & context) +void StorageMemory::mutate(const MutationCommands & commands, ContextPtr context) { std::lock_guard lock(mutex); auto metadata_snapshot = getInMemoryMetadataPtr(); @@ -320,7 +316,7 @@ void StorageMemory::mutate(const MutationCommands & commands, const Context & co void StorageMemory::truncate( - const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) + const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) { data.set(std::make_unique()); total_size_bytes.store(0, std::memory_order_relaxed); diff --git a/src/Storages/StorageMemory.h b/src/Storages/StorageMemory.h index b7fa4d7b222..1118474deee 100644 --- a/src/Storages/StorageMemory.h +++ b/src/Storages/StorageMemory.h @@ -34,7 +34,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -47,14 +47,14 @@ public: bool 
hasEvenlyDistributedRead() const override { return true; } - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) override; void drop() override; void checkMutationIsPossible(const MutationCommands & commands, const Settings & settings) const override; - void mutate(const MutationCommands & commands, const Context & context) override; + void mutate(const MutationCommands & commands, ContextPtr context) override; - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) override; std::optional totalRows(const Settings &) const override; std::optional totalBytes(const Settings &) const override; @@ -97,7 +97,7 @@ public: void delayReadForGlobalSubqueries() { delay_read_for_global_subqueries = true; } private: - /// MultiVersion data storage, so that we can copy the list of blocks to readers. + /// MultiVersion data storage, so that we can copy the vector of blocks to readers. MultiVersion data; diff --git a/src/Storages/StorageMerge.cpp b/src/Storages/StorageMerge.cpp index b8aaa52f92c..6ad7b0bce6e 100644 --- a/src/Storages/StorageMerge.cpp +++ b/src/Storages/StorageMerge.cpp @@ -9,6 +9,7 @@ #include #include #include +#include #include #include #include @@ -43,12 +44,15 @@ namespace ErrorCodes namespace { -void modifySelect(ASTSelectQuery & select, const TreeRewriterResult & rewriter_result) +TreeRewriterResult modifySelect(ASTSelectQuery & select, const TreeRewriterResult & rewriter_result, ContextPtr context) { + + TreeRewriterResult new_rewriter_result = rewriter_result; if (removeJoin(select)) { /// Also remove GROUP BY cause ExpressionAnalyzer would check if it has all aggregate columns but joined columns would be missed. select.setExpression(ASTSelectQuery::Expression::GROUP_BY, {}); + new_rewriter_result.aggregates.clear(); /// Replace select list to remove joined columns auto select_list = std::make_shared(); @@ -57,12 +61,40 @@ void modifySelect(ASTSelectQuery & select, const TreeRewriterResult & rewriter_r select.setExpression(ASTSelectQuery::Expression::SELECT, select_list); - /// TODO: keep WHERE/PREWHERE. We have to remove joined columns and their expressions but keep others. 
- select.setExpression(ASTSelectQuery::Expression::WHERE, {}); - select.setExpression(ASTSelectQuery::Expression::PREWHERE, {}); + const DB::IdentifierMembershipCollector membership_collector{select, context}; + + /// Remove unknown identifiers from where, leave only ones from left table + auto replace_where = [&membership_collector](ASTSelectQuery & query, ASTSelectQuery::Expression expr) + { + auto where = query.getExpression(expr, false); + if (!where) + return; + + const size_t left_table_pos = 0; + /// Test each argument of `and` function and select ones related to only left table + std::shared_ptr new_conj = makeASTFunction("and"); + for (const auto & node : collectConjunctions(where)) + { + if (membership_collector.getIdentsMembership(node) == left_table_pos) + new_conj->arguments->children.push_back(std::move(node)); + } + + if (new_conj->arguments->children.empty()) + /// No identifiers from left table + query.setExpression(expr, {}); + else if (new_conj->arguments->children.size() == 1) + /// Only one expression, lift from `and` + query.setExpression(expr, std::move(new_conj->arguments->children[0])); + else + /// Set new expression + query.setExpression(expr, std::move(new_conj)); + }; + replace_where(select,ASTSelectQuery::Expression::WHERE); + replace_where(select,ASTSelectQuery::Expression::PREWHERE); select.setExpression(ASTSelectQuery::Expression::HAVING, {}); select.setExpression(ASTSelectQuery::Expression::ORDER_BY, {}); } + return new_rewriter_result; } } @@ -72,11 +104,11 @@ StorageMerge::StorageMerge( const ColumnsDescription & columns_, const String & source_database_, const Strings & source_tables_, - const Context & context_) + ContextPtr context_) : IStorage(table_id_) + , WithContext(context_->getGlobalContext()) , source_database(source_database_) , source_tables(std::in_place, source_tables_.begin(), source_tables_.end()) - , global_context(context_.getGlobalContext()) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -88,11 +120,11 @@ StorageMerge::StorageMerge( const ColumnsDescription & columns_, const String & source_database_, const String & source_table_regexp_, - const Context & context_) + ContextPtr context_) : IStorage(table_id_) + , WithContext(context_->getGlobalContext()) , source_database(source_database_) , source_table_regexp(source_table_regexp_) - , global_context(context_.getGlobalContext()) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -102,7 +134,7 @@ StorageMerge::StorageMerge( template StoragePtr StorageMerge::getFirstTable(F && predicate) const { - auto iterator = getDatabaseIterator(global_context); + auto iterator = getDatabaseIterator(getContext()); while (iterator->isValid()) { @@ -124,11 +156,10 @@ bool StorageMerge::isRemote() const } -bool StorageMerge::mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & /*metadata_snapshot*/) const +bool StorageMerge::mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & /*metadata_snapshot*/) const { /// It's beneficial if it is true for at least one table. 
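The `modifySelect()` change above stops discarding WHERE/PREWHERE wholesale: it splits the condition into AND-ed conjuncts via `collectConjunctions()` and keeps only those whose identifiers the `IdentifierMembershipCollector` attributes to the left (first) table. A stand-alone illustration of that filtering step, using plain strings instead of ASTs; the helper below is hypothetical:

#include <cstddef>
#include <functional>
#include <optional>
#include <string>
#include <vector>

/// Split WHERE into conjuncts, keep only the ones that reference the left table,
/// and rebuild the clause, or drop it entirely if nothing survives.
std::optional<std::string> keepLeftTableConjuncts(
    const std::vector<std::string> & conjuncts,
    const std::function<bool(const std::string &)> & belongs_to_left_table)
{
    std::vector<std::string> kept;
    for (const auto & conjunct : conjuncts)
        if (belongs_to_left_table(conjunct))
            kept.push_back(conjunct);

    if (kept.empty())
        return std::nullopt;          /// no usable condition: the clause is removed
    if (kept.size() == 1)
        return kept.front();          /// single conjunct: no enclosing AND needed

    std::string rebuilt = kept.front();
    for (size_t i = 1; i < kept.size(); ++i)
        rebuilt += " AND " + kept[i];
    return rebuilt;                   /// several conjuncts: re-assembled conjunction
}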
- StorageListWithLocks selected_tables = getSelectedTables( - query_context.getCurrentQueryId(), query_context.getSettingsRef()); + StorageListWithLocks selected_tables = getSelectedTables(query_context); size_t i = 0; for (const auto & table : selected_tables) @@ -148,10 +179,9 @@ bool StorageMerge::mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, cons } -QueryProcessingStage::Enum StorageMerge::getQueryProcessingStage(const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const +QueryProcessingStage::Enum +StorageMerge::getQueryProcessingStage(ContextPtr local_context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & query_info) const { - ASTPtr modified_query = query_info.query->clone(); - auto & modified_select = modified_query->as(); /// In case of JOIN the first stage (which includes JOIN) /// should be done on the initiator always. /// @@ -159,12 +189,12 @@ QueryProcessingStage::Enum StorageMerge::getQueryProcessingStage(const Context & /// (see modifySelect()/removeJoin()) /// /// And for this we need to return FetchColumns. - if (removeJoin(modified_select)) + if (const auto * select = query_info.query->as(); select && hasJoin(*select)) return QueryProcessingStage::FetchColumns; auto stage_in_source_tables = QueryProcessingStage::FetchColumns; - DatabaseTablesIteratorPtr iterator = getDatabaseIterator(context); + DatabaseTablesIteratorPtr iterator = getDatabaseIterator(local_context); size_t selected_table_size = 0; @@ -174,7 +204,7 @@ QueryProcessingStage::Enum StorageMerge::getQueryProcessingStage(const Context & if (table && table.get() != this) { ++selected_table_size; - stage_in_source_tables = std::max(stage_in_source_tables, table->getQueryProcessingStage(context, to_stage, query_info)); + stage_in_source_tables = std::max(stage_in_source_tables, table->getQueryProcessingStage(local_context, to_stage, query_info)); } iterator->next(); @@ -188,7 +218,7 @@ Pipe StorageMerge::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, unsigned num_streams) @@ -210,17 +240,16 @@ Pipe StorageMerge::read( /** Just in case, turn off optimization "transfer to PREWHERE", * since there is no certainty that it works when one of table is MergeTree and other is not. */ - auto modified_context = std::make_shared(context); + auto modified_context = Context::createCopy(local_context); modified_context->setSetting("optimize_move_to_prewhere", false); /// What will be result structure depending on query processed stage in source tables? - Block header = getHeaderForProcessingStage(*this, column_names, metadata_snapshot, query_info, context, processed_stage); + Block header = getHeaderForProcessingStage(*this, column_names, metadata_snapshot, query_info, local_context, processed_stage); /** First we make list of selected tables to find out its size. * This is necessary to correctly pass the recommended number of threads to each table. */ - StorageListWithLocks selected_tables - = getSelectedTables(query_info, has_table_virtual_column, context.getCurrentQueryId(), context.getSettingsRef()); + StorageListWithLocks selected_tables = getSelectedTables(local_context, query_info.query, has_table_virtual_column); if (selected_tables.empty()) /// FIXME: do we support sampling in this case? 
@@ -228,7 +257,8 @@ Pipe StorageMerge::read( {}, query_info, processed_stage, max_block_size, header, {}, real_column_names, modified_context, 0, has_table_virtual_column); size_t tables_count = selected_tables.size(); - Float64 num_streams_multiplier = std::min(unsigned(tables_count), std::max(1U, unsigned(context.getSettingsRef().max_streams_multiplier_for_merge_tables))); + Float64 num_streams_multiplier + = std::min(unsigned(tables_count), std::max(1U, unsigned(local_context->getSettingsRef().max_streams_multiplier_for_merge_tables))); num_streams *= num_streams_multiplier; size_t remaining_streams = num_streams; @@ -239,7 +269,7 @@ Pipe StorageMerge::read( { auto storage_ptr = std::get<0>(*it); auto storage_metadata_snapshot = storage_ptr->getInMemoryMetadataPtr(); - auto current_info = query_info.order_optimizer->getInputOrder(storage_metadata_snapshot, context); + auto current_info = query_info.order_optimizer->getInputOrder(storage_metadata_snapshot, local_context); if (it == selected_tables.begin()) input_sorting_info = current_info; else if (!current_info || (input_sorting_info && *current_info != *input_sorting_info)) @@ -293,7 +323,7 @@ Pipe StorageMerge::createSources( const Block & header, const StorageWithLockAndName & storage_with_lock, Names & real_column_names, - const std::shared_ptr & modified_context, + ContextPtr modified_context, size_t streams_num, bool has_table_virtual_column, bool concat_streams) @@ -304,7 +334,8 @@ Pipe StorageMerge::createSources( /// Original query could contain JOIN but we need only the first joined table and its columns. auto & modified_select = modified_query_info.query->as(); - modifySelect(modified_select, *query_info.syntax_analyzer_result); + auto new_analyzer_res = modifySelect(modified_select, *query_info.syntax_analyzer_result, modified_context); + modified_query_info.syntax_analyzer_result = std::make_shared(std::move(new_analyzer_res)); VirtualColumnUtils::rewriteEntityInAst(modified_query_info.query, "_table", table_name); @@ -313,7 +344,7 @@ Pipe StorageMerge::createSources( if (!storage) { pipe = QueryPipeline::getPipe(InterpreterSelectQuery( - modified_query_info.query, *modified_context, + modified_query_info.query, modified_context, std::make_shared(header), SelectQueryOptions(processed_stage).analyze()).execute().pipeline); @@ -321,15 +352,21 @@ Pipe StorageMerge::createSources( return pipe; } - auto storage_stage = storage->getQueryProcessingStage(*modified_context, QueryProcessingStage::Complete, modified_query_info); + auto storage_stage = storage->getQueryProcessingStage(modified_context, QueryProcessingStage::Complete, modified_query_info); if (processed_stage <= storage_stage) { /// If there are only virtual columns in query, you must request at least one other column. 
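When the branch below finds that only virtual columns were requested, it still has to read something physical, so it falls back to the cheapest real column. A hedged sketch of that selection; the actual cost metric used by `ExpressionActions::getSmallestColumn` is an assumption here:

#include <cstddef>
#include <limits>
#include <string>
#include <utility>
#include <vector>

/// Pick the physical column with the smallest estimated value size,
/// e.g. so that a query asking only for _table still reads something cheap.
std::string pickSmallestColumn(const std::vector<std::pair<std::string, size_t>> & columns_with_sizes)
{
    std::string best;
    size_t best_size = std::numeric_limits<size_t>::max();
    for (const auto & [name, size] : columns_with_sizes)
    {
        if (size < best_size)
        {
            best_size = size;
            best = name;
        }
    }
    return best;    /// empty if the table has no physical columns at all
}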
if (real_column_names.empty()) real_column_names.push_back(ExpressionActions::getSmallestColumn(metadata_snapshot->getColumns().getAllPhysical())); - - pipe = storage->read(real_column_names, metadata_snapshot, modified_query_info, *modified_context, processed_stage, max_block_size, UInt32(streams_num)); + pipe = storage->read( + real_column_names, + metadata_snapshot, + modified_query_info, + modified_context, + processed_stage, + max_block_size, + UInt32(streams_num)); } else if (processed_stage > storage_stage) { @@ -339,7 +376,7 @@ Pipe StorageMerge::createSources( modified_context->setSetting("max_threads", streams_num); modified_context->setSetting("max_streams_to_max_threads_ratio", 1); - InterpreterSelectQuery interpreter{modified_query_info.query, *modified_context, SelectQueryOptions(processed_stage)}; + InterpreterSelectQuery interpreter{modified_query_info.query, modified_context, SelectQueryOptions(processed_stage)}; pipe = QueryPipeline::getPipe(interpreter.execute().pipeline); @@ -368,7 +405,7 @@ Pipe StorageMerge::createSources( auto adding_column_dag = ActionsDAG::makeAddingColumnActions(std::move(column)); auto adding_column_actions = std::make_shared( std::move(adding_column_dag), - ExpressionActionsSettings::fromContext(*modified_context)); + ExpressionActionsSettings::fromContext(modified_context)); pipe.addSimpleTransform([&](const Block & stream_header) { @@ -378,7 +415,7 @@ Pipe StorageMerge::createSources( /// Subordinary tables could have different but convertible types, like numeric types of different width. /// We must return streams with structure equals to structure of Merge table. - convertingSourceStream(header, metadata_snapshot, *modified_context, modified_query_info.query, pipe, processed_stage); + convertingSourceStream(header, metadata_snapshot, modified_context, modified_query_info.query, pipe, processed_stage); pipe.addTableLock(struct_lock); pipe.addStorageHolder(storage); @@ -388,34 +425,20 @@ Pipe StorageMerge::createSources( return pipe; } - -StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables(const String & query_id, const Settings & settings) const -{ - StorageListWithLocks selected_tables; - auto iterator = getDatabaseIterator(global_context); - - while (iterator->isValid()) - { - const auto & table = iterator->table(); - if (table && table.get() != this) - selected_tables.emplace_back( - table, table->lockForShare(query_id, settings.lock_acquire_timeout), iterator->name()); - - iterator->next(); - } - - return selected_tables; -} - - StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables( - const SelectQueryInfo & query_info, bool has_virtual_column, const String & query_id, const Settings & settings) const + ContextPtr query_context, + const ASTPtr & query /* = nullptr */, + bool filter_by_virtual_column /* = false */) const { - const ASTPtr & query = query_info.query; - StorageListWithLocks selected_tables; - DatabaseTablesIteratorPtr iterator = getDatabaseIterator(global_context); + assert(!filter_by_virtual_column || query); - auto virtual_column = ColumnString::create(); + const Settings & settings = query_context->getSettingsRef(); + StorageListWithLocks selected_tables; + DatabaseTablesIteratorPtr iterator = getDatabaseIterator(getContext()); + + MutableColumnPtr table_name_virtual_column; + if (filter_by_virtual_column) + table_name_virtual_column = ColumnString::create(); while (iterator->isValid()) { @@ -428,18 +451,20 @@ StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables( if (storage.get() 
!= this) { - selected_tables.emplace_back( - storage, storage->lockForShare(query_id, settings.lock_acquire_timeout), iterator->name()); - virtual_column->insert(iterator->name()); + auto table_lock = storage->lockForShare(query_context->getCurrentQueryId(), settings.lock_acquire_timeout); + selected_tables.emplace_back(storage, std::move(table_lock), iterator->name()); + if (filter_by_virtual_column) + table_name_virtual_column->insert(iterator->name()); } iterator->next(); } - if (has_virtual_column) + if (filter_by_virtual_column) { - Block virtual_columns_block = Block{ColumnWithTypeAndName(std::move(virtual_column), std::make_shared(), "_table")}; - VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, global_context); + /// Filter names of selected tables if there is a condition on "_table" virtual column in WHERE clause + Block virtual_columns_block = Block{ColumnWithTypeAndName(std::move(table_name_virtual_column), std::make_shared(), "_table")}; + VirtualColumnUtils::filterBlockWithQuery(query, virtual_columns_block, query_context); auto values = VirtualColumnUtils::extractSingleValueFromBlock(virtual_columns_block, "_table"); /// Remove unused tables from the list @@ -449,8 +474,7 @@ StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables( return selected_tables; } - -DatabaseTablesIteratorPtr StorageMerge::getDatabaseIterator(const Context & context) const +DatabaseTablesIteratorPtr StorageMerge::getDatabaseIterator(ContextPtr local_context) const { try { @@ -472,13 +496,13 @@ DatabaseTablesIteratorPtr StorageMerge::getDatabaseIterator(const Context & cont return source_table_regexp->match(table_name_); }; - return database->getTablesIterator(context, table_name_match); + return database->getTablesIterator(local_context, table_name_match); } -void StorageMerge::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void StorageMerge::checkAlterIsPossible(const AlterCommands & commands, ContextPtr local_context) const { - auto name_deps = getDependentViewsByColumn(context); + auto name_deps = getDependentViewsByColumn(local_context); for (const auto & command : commands) { if (command.type != AlterCommand::Type::ADD_COLUMN && command.type != AlterCommand::Type::MODIFY_COLUMN @@ -501,20 +525,20 @@ void StorageMerge::checkAlterIsPossible(const AlterCommands & commands, const Co } void StorageMerge::alter( - const AlterCommands & params, const Context & context, TableLockHolder &) + const AlterCommands & params, ContextPtr local_context, TableLockHolder &) { auto table_id = getStorageID(); StorageInMemoryMetadata storage_metadata = getInMemoryMetadata(); - params.apply(storage_metadata, context); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, storage_metadata); + params.apply(storage_metadata, local_context); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, storage_metadata); setInMemoryMetadata(storage_metadata); } void StorageMerge::convertingSourceStream( const Block & header, const StorageMetadataPtr & metadata_snapshot, - const Context & context, + ContextPtr local_context, ASTPtr & query, Pipe & pipe, QueryProcessingStage::Enum processed_stage) @@ -525,7 +549,7 @@ void StorageMerge::convertingSourceStream( pipe.getHeader().getColumnsWithTypeAndName(), header.getColumnsWithTypeAndName(), ActionsDAG::MatchColumnsMode::Name); - auto convert_actions = std::make_shared(convert_actions_dag, 
ExpressionActionsSettings::fromContext(context)); + auto convert_actions = std::make_shared(convert_actions_dag, ExpressionActionsSettings::fromContext(local_context)); pipe.addSimpleTransform([&](const Block & stream_header) { @@ -549,8 +573,8 @@ void StorageMerge::convertingSourceStream( NamesAndTypesList source_columns = metadata_snapshot->getSampleBlock().getNamesAndTypesList(); auto virtual_column = *getVirtuals().tryGetByName("_table"); source_columns.emplace_back(NameAndTypePair{virtual_column.name, virtual_column.type}); - auto syntax_result = TreeRewriter(context).analyze(where_expression, source_columns); - ExpressionActionsPtr actions = ExpressionAnalyzer{where_expression, syntax_result, context}.getActions(false, false); + auto syntax_result = TreeRewriter(local_context).analyze(where_expression, source_columns); + ExpressionActionsPtr actions = ExpressionAnalyzer{where_expression, syntax_result, local_context}.getActions(false, false); Names required_columns = actions->getRequiredColumns(); for (const auto & required_column : required_columns) @@ -587,15 +611,15 @@ void registerStorageMerge(StorageFactory & factory) " - name of source database and regexp for table names.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - engine_args[0] = evaluateConstantExpressionForDatabaseName(engine_args[0], args.local_context); - engine_args[1] = evaluateConstantExpressionAsLiteral(engine_args[1], args.local_context); + engine_args[0] = evaluateConstantExpressionForDatabaseName(engine_args[0], args.getLocalContext()); + engine_args[1] = evaluateConstantExpressionAsLiteral(engine_args[1], args.getLocalContext()); String source_database = engine_args[0]->as().value.safeGet(); String table_name_regexp = engine_args[1]->as().value.safeGet(); return StorageMerge::create( args.table_id, args.columns, - source_database, table_name_regexp, args.context); + source_database, table_name_regexp, args.getContext()); }); } diff --git a/src/Storages/StorageMerge.h b/src/Storages/StorageMerge.h index ea8667aa186..ff016952686 100644 --- a/src/Storages/StorageMerge.h +++ b/src/Storages/StorageMerge.h @@ -12,7 +12,7 @@ namespace DB /** A table that represents the union of an arbitrary number of other tables. * All tables must have the same structure. 
*/ -class StorageMerge final : public ext::shared_ptr_helper, public IStorage +class StorageMerge final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; public: @@ -27,44 +27,41 @@ public: bool supportsIndexForIn() const override { return true; } bool supportsSubcolumns() const override { return true; } - QueryProcessingStage::Enum getQueryProcessingStage(const Context &, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum /*to_stage*/, SelectQueryInfo &) const override; Pipe read( const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; /// you need to add and remove columns in the sub-tables manually /// the structure of sub-tables is not checked - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; bool mayBenefitFromIndexForIn( - const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & metadata_snapshot) const override; + const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & metadata_snapshot) const override; private: String source_database; std::optional> source_tables; std::optional source_table_regexp; - const Context & global_context; using StorageWithLockAndName = std::tuple; using StorageListWithLocks = std::list; - StorageListWithLocks getSelectedTables(const String & query_id, const Settings & settings) const; - StorageMerge::StorageListWithLocks getSelectedTables( - const SelectQueryInfo & query_info, bool has_virtual_column, const String & query_id, const Settings & settings) const; + ContextPtr query_context, const ASTPtr & query = nullptr, bool filter_by_virtual_column = false) const; template StoragePtr getFirstTable(F && predicate) const; - DatabaseTablesIteratorPtr getDatabaseIterator(const Context & context) const; + DatabaseTablesIteratorPtr getDatabaseIterator(ContextPtr context) const; NamesAndTypesList getVirtuals() const override; ColumnSizeByName getColumnSizes() const override; @@ -75,31 +72,31 @@ protected: const ColumnsDescription & columns_, const String & source_database_, const Strings & source_tables_, - const Context & context_); + ContextPtr context_); StorageMerge( const StorageID & table_id_, const ColumnsDescription & columns_, const String & source_database_, const String & source_table_regexp_, - const Context & context_); + ContextPtr context_); Pipe createSources( const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, const QueryProcessingStage::Enum & processed_stage, - const UInt64 max_block_size, + UInt64 max_block_size, const Block & header, const StorageWithLockAndName & storage_with_lock, Names & real_column_names, - const std::shared_ptr & modified_context, + ContextPtr modified_context, size_t streams_num, bool has_table_virtual_column, bool concat_streams = false); void convertingSourceStream( const Block & header, const 
StorageMetadataPtr & metadata_snapshot, - const Context & context, ASTPtr & query, + ContextPtr context, ASTPtr & query, Pipe & pipe, QueryProcessingStage::Enum processed_stage); }; diff --git a/src/Storages/StorageMergeTree.cpp b/src/Storages/StorageMergeTree.cpp index 10790057ac9..4d7f7d8c887 100644 --- a/src/Storages/StorageMergeTree.cpp +++ b/src/Storages/StorageMergeTree.cpp @@ -63,7 +63,7 @@ StorageMergeTree::StorageMergeTree( const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, bool attach, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr storage_settings_, @@ -80,9 +80,9 @@ StorageMergeTree::StorageMergeTree( attach) , reader(*this) , writer(*this) - , merger_mutator(*this, global_context.getSettingsRef().background_pool_size) - , background_executor(*this, global_context) - , background_moves_executor(*this, global_context) + , merger_mutator(*this, getContext()->getSettingsRef().background_pool_size) + , background_executor(*this, getContext()) + , background_moves_executor(*this, getContext()) { loadDataParts(has_force_restore_data_flag); @@ -93,6 +93,8 @@ StorageMergeTree::StorageMergeTree( increment.set(getMaxBlockNumber()); loadMutations(); + + loadDeduplicationLog(); } @@ -180,12 +182,12 @@ void StorageMergeTree::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) { - if (auto plan = reader.read(column_names, metadata_snapshot, query_info, context, max_block_size, num_streams)) + if (auto plan = reader.read(column_names, metadata_snapshot, query_info, local_context, max_block_size, num_streams)) query_plan = std::move(*plan); } @@ -193,16 +195,16 @@ Pipe StorageMergeTree::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); return plan.convertToPipe( - QueryPlanOptimizationSettings::fromContext(context), - BuildQueryPipelineSettings::fromContext(context)); + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } std::optional StorageMergeTree::totalRows(const Settings &) const @@ -210,10 +212,10 @@ std::optional StorageMergeTree::totalRows(const Settings &) const return getTotalActiveSizeInRows(); } -std::optional StorageMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, const Context & context) const +std::optional StorageMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr local_context) const { auto parts = getDataPartsVector({DataPartState::Committed}); - return totalRowsByPartitionPredicateImpl(query_info, context, parts); + return totalRowsByPartitionPredicateImpl(query_info, local_context, parts); } std::optional StorageMergeTree::totalBytes(const Settings &) const @@ -221,18 +223,18 @@ std::optional StorageMergeTree::totalBytes(const Settings &) const return 
getTotalActiveSizeInBytes(); } -BlockOutputStreamPtr StorageMergeTree::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr +StorageMergeTree::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - - const auto & settings = context.getSettingsRef(); + const auto & settings = local_context->getSettingsRef(); return std::make_shared( - *this, metadata_snapshot, settings.max_partitions_per_insert_block, context.getSettingsRef().optimize_on_insert); + *this, metadata_snapshot, settings.max_partitions_per_insert_block, local_context->getSettingsRef().optimize_on_insert); } void StorageMergeTree::checkTableCanBeDropped() const { auto table_id = getStorageID(); - global_context.checkTableCanBeDropped(table_id.database_name, table_id.table_name, getTotalActiveSizeInBytes()); + getContext()->checkTableCanBeDropped(table_id.database_name, table_id.table_name, getTotalActiveSizeInBytes()); } void StorageMergeTree::drop() @@ -241,7 +243,7 @@ void StorageMergeTree::drop() dropAllData(); } -void StorageMergeTree::truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) +void StorageMergeTree::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) { { /// Asks to complete merges and does not allow them to start. @@ -261,24 +263,25 @@ void StorageMergeTree::truncate(const ASTPtr &, const StorageMetadataPtr &, cons void StorageMergeTree::alter( const AlterCommands & commands, - const Context & context, + ContextPtr local_context, TableLockHolder & table_lock_holder) { auto table_id = getStorageID(); + auto old_storage_settings = getSettings(); StorageInMemoryMetadata new_metadata = getInMemoryMetadata(); StorageInMemoryMetadata old_metadata = getInMemoryMetadata(); - auto maybe_mutation_commands = commands.getMutationCommands(new_metadata, context.getSettingsRef().materialize_ttl_after_modify, context); + auto maybe_mutation_commands = commands.getMutationCommands(new_metadata, local_context->getSettingsRef().materialize_ttl_after_modify, local_context); String mutation_file_name; Int64 mutation_version = -1; - commands.apply(new_metadata, context); + commands.apply(new_metadata, local_context); /// This alter can be performed at new_metadata level only if (commands.isSettingsAlter()) { changeSettings(new_metadata.settings_changes, table_lock_holder); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata); } else { @@ -288,7 +291,7 @@ void StorageMergeTree::alter( /// Reinitialize primary key because primary key column types might have changed. 
setProperties(new_metadata, old_metadata); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(context, table_id, new_metadata); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata); if (!maybe_mutation_commands.empty()) mutation_version = startMutation(maybe_mutation_commands, mutation_file_name); @@ -299,6 +302,21 @@ void StorageMergeTree::alter( if (!maybe_mutation_commands.empty()) waitForMutation(mutation_version, mutation_file_name); } + + { + /// Some additional changes in settings + auto new_storage_settings = getSettings(); + + if (old_storage_settings->non_replicated_deduplication_window != new_storage_settings->non_replicated_deduplication_window) + { + /// We cannot place this check into settings sanityCheck because it depends on format_version. + /// sanityCheck must work even without storage. + if (new_storage_settings->non_replicated_deduplication_window != 0 && format_version < MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING) + throw Exception("Deduplication for non-replicated MergeTree in old syntax is not supported", ErrorCodes::BAD_ARGUMENTS); + + deduplication_log->setDeduplicationWindowSize(new_storage_settings->non_replicated_deduplication_window); + } + } } @@ -461,12 +479,12 @@ void StorageMergeTree::waitForMutation(Int64 version, const String & file_name) LOG_INFO(log, "Mutation {} done", file_name); } -void StorageMergeTree::mutate(const MutationCommands & commands, const Context & query_context) +void StorageMergeTree::mutate(const MutationCommands & commands, ContextPtr query_context) { String mutation_file_name; Int64 version = startMutation(commands, mutation_file_name); - if (query_context.getSettingsRef().mutations_sync > 0) + if (query_context->getSettingsRef().mutations_sync > 0) waitForMutation(version, mutation_file_name); } @@ -600,7 +618,7 @@ CancellationCode StorageMergeTree::killMutation(const String & mutation_id) if (!to_kill) return CancellationCode::NotFound; - global_context.getMergeList().cancelPartMutations({}, to_kill->block_number); + getContext()->getMergeList().cancelPartMutations({}, to_kill->block_number); to_kill->removeFile(); LOG_TRACE(log, "Cancelled part mutations and removed mutation file {}", mutation_id); { @@ -614,6 +632,16 @@ CancellationCode StorageMergeTree::killMutation(const String & mutation_id) return CancellationCode::CancelSent; } +void StorageMergeTree::loadDeduplicationLog() +{ + auto settings = getSettings(); + if (settings->non_replicated_deduplication_window != 0 && format_version < MERGE_TREE_DATA_MIN_FORMAT_VERSION_WITH_CUSTOM_PARTITIONING) + throw Exception("Deduplication for non-replicated MergeTree in old syntax is not supported", ErrorCodes::BAD_ARGUMENTS); + + std::string path = getDataPaths()[0] + "/deduplication_logs"; + deduplication_log = std::make_unique(path, settings->non_replicated_deduplication_window, format_version); + deduplication_log->load(); +} void StorageMergeTree::loadMutations() { @@ -740,7 +768,7 @@ std::shared_ptr StorageMergeTree::se /// Account TTL merge here to avoid exceeding the max_number_of_merges_with_ttl_in_pool limit if (isTTLMergeType(future_part.merge_type)) - global_context.getMergeList().bookMergeWithTTL(); + getContext()->getMergeList().bookMergeWithTTL(); merging_tagger = std::make_unique(future_part, MergeTreeDataMergerMutator::estimateNeededDiskSpace(future_part.parts), *this, metadata_snapshot, false); return std::make_shared(future_part, std::move(merging_tagger),
MutationCommands{}); @@ -784,7 +812,7 @@ bool StorageMergeTree::mergeSelectedParts( MutableDataPartPtr new_part; auto table_id = getStorageID(); - auto merge_list_entry = global_context.getMergeList().insert(table_id.database_name, table_id.table_name, future_part); + auto merge_list_entry = getContext()->getMergeList().insert(table_id.database_name, table_id.table_name, future_part); auto write_part_log = [&] (const ExecutionStatus & execution_status) { @@ -802,7 +830,7 @@ bool StorageMergeTree::mergeSelectedParts( { new_part = merger_mutator.mergePartsToTemporaryPart( future_part, metadata_snapshot, *(merge_list_entry), table_lock_holder, time(nullptr), - global_context, merge_mutate_entry.tagger->reserved_space, deduplicate, deduplicate_by_columns); + getContext(), merge_mutate_entry.tagger->reserved_space, deduplicate, deduplicate_by_columns); merger_mutator.renameMergedTemporaryPart(new_part, future_part.parts, nullptr); write_part_log({}); @@ -826,7 +854,7 @@ std::shared_ptr StorageMergeTree::se const StorageMetadataPtr & metadata_snapshot, String * /* disable_reason */, TableLockHolder & /* table_lock_holder */) { std::lock_guard lock(currently_processing_in_background_mutex); - size_t max_ast_elements = global_context.getSettingsRef().max_expanded_ast_elements; + size_t max_ast_elements = getContext()->getSettingsRef().max_expanded_ast_elements; FutureMergedMutatedPart future_part; if (storage_settings.get()->assign_part_uuids) @@ -881,7 +909,7 @@ std::shared_ptr StorageMergeTree::se if (!commands_for_size_validation.empty()) { MutationsInterpreter interpreter( - shared_from_this(), metadata_snapshot, commands_for_size_validation, global_context, false); + shared_from_this(), metadata_snapshot, commands_for_size_validation, getContext(), false); commands_size += interpreter.evaluateCommandsSize(); } @@ -911,7 +939,7 @@ bool StorageMergeTree::mutateSelectedPart(const StorageMetadataPtr & metadata_sn auto & future_part = merge_mutate_entry.future_part; auto table_id = getStorageID(); - auto merge_list_entry = global_context.getMergeList().insert(table_id.database_name, table_id.table_name, future_part); + auto merge_list_entry = getContext()->getMergeList().insert(table_id.database_name, table_id.table_name, future_part); Stopwatch stopwatch; MutableDataPartPtr new_part; @@ -931,7 +959,7 @@ bool StorageMergeTree::mutateSelectedPart(const StorageMetadataPtr & metadata_sn { new_part = merger_mutator.mutatePartToTemporaryPart( future_part, metadata_snapshot, merge_mutate_entry.commands, *(merge_list_entry), - time(nullptr), global_context, merge_mutate_entry.tagger->reserved_space, table_lock_holder); + time(nullptr), getContext(), merge_mutate_entry.tagger->reserved_space, table_lock_holder); renameTempPartAndReplace(new_part); @@ -1056,7 +1084,7 @@ bool StorageMergeTree::optimize( bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) + ContextPtr local_context) { if (deduplicate) { @@ -1077,14 +1105,21 @@ bool StorageMergeTree::optimize( for (const String & partition_id : partition_ids) { - if (!merge(true, partition_id, true, deduplicate, deduplicate_by_columns, &disable_reason, context.getSettingsRef().optimize_skip_merged_partitions)) + if (!merge( + true, + partition_id, + true, + deduplicate, + deduplicate_by_columns, + &disable_reason, + local_context->getSettingsRef().optimize_skip_merged_partitions)) { constexpr const char * message = "Cannot OPTIMIZE table: {}"; if (disable_reason.empty()) disable_reason = "unknown reason"; 
LOG_INFO(log, message, disable_reason); - if (context.getSettingsRef().optimize_throw_if_noop) + if (local_context->getSettingsRef().optimize_throw_if_noop) throw Exception(ErrorCodes::CANNOT_ASSIGN_OPTIMIZE, message, disable_reason); return false; } @@ -1094,16 +1129,23 @@ bool StorageMergeTree::optimize( { String partition_id; if (partition) - partition_id = getPartitionIDFromQuery(partition, context); + partition_id = getPartitionIDFromQuery(partition, local_context); - if (!merge(true, partition_id, final, deduplicate, deduplicate_by_columns, &disable_reason, context.getSettingsRef().optimize_skip_merged_partitions)) + if (!merge( + true, + partition_id, + final, + deduplicate, + deduplicate_by_columns, + &disable_reason, + local_context->getSettingsRef().optimize_skip_merged_partitions)) { constexpr const char * message = "Cannot OPTIMIZE table: {}"; if (disable_reason.empty()) disable_reason = "unknown reason"; LOG_INFO(log, message, disable_reason); - if (context.getSettingsRef().optimize_throw_if_noop) + if (local_context->getSettingsRef().optimize_throw_if_noop) throw Exception(ErrorCodes::CANNOT_ASSIGN_OPTIMIZE, message, disable_reason); return false; } @@ -1171,7 +1213,7 @@ MergeTreeDataPartPtr StorageMergeTree::outdatePart(const String & part_name, boo } } -void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & context, bool throw_if_noop) +void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr local_context, bool throw_if_noop) { { MergeTreeData::DataPartsVector parts_to_remove; @@ -1191,7 +1233,7 @@ void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool /// Asks to complete merges and does not allow them to start. /// This protects against "revival" of data for a removed partition after completion of merge. auto merge_blocker = stopMergesAndWait(); - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); parts_to_remove = getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); /// TODO should we throw an exception if parts_to_remove is empty? @@ -1209,6 +1251,12 @@ void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool } } + if (deduplication_log) + { + for (const auto & part : parts_to_remove) + deduplication_log->dropPart(part->info); + } + if (detach) LOG_INFO(log, "Detached {} parts.", parts_to_remove.size()); else @@ -1221,11 +1269,11 @@ void StorageMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool PartitionCommandsResultInfo StorageMergeTree::attachPartition( const ASTPtr & partition, const StorageMetadataPtr & /* metadata_snapshot */, - bool attach_part, const Context & context) + bool attach_part, ContextPtr local_context) { PartitionCommandsResultInfo results; PartsTemporaryRename renamed_parts(*this, "detached/"); - MutableDataPartsVector loaded_parts = tryLoadPartsToAttach(partition, attach_part, context, renamed_parts); + MutableDataPartsVector loaded_parts = tryLoadPartsToAttach(partition, attach_part, local_context, renamed_parts); for (size_t i = 0; i < loaded_parts.size(); ++i) { @@ -1244,20 +1292,20 @@ PartitionCommandsResultInfo StorageMergeTree::attachPartition( } /// New parts with other data may appear in place of deleted parts. 
- context.dropCaches(); + local_context->dropCaches(); return results; } -void StorageMergeTree::replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & context) +void StorageMergeTree::replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr local_context) { - auto lock1 = lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); - auto lock2 = source_table->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock1 = lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); + auto lock2 = source_table->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto source_metadata_snapshot = source_table->getInMemoryMetadataPtr(); auto my_metadata_snapshot = getInMemoryMetadataPtr(); Stopwatch watch; MergeTreeData & src_data = checkStructureAndGetMergeTreeData(source_table, source_metadata_snapshot, my_metadata_snapshot); - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); DataPartsVector src_parts = src_data.getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); MutableDataPartsVector dst_parts; @@ -1312,19 +1360,19 @@ void StorageMergeTree::replacePartitionFrom(const StoragePtr & source_table, con removePartsInRangeFromWorkingSet(drop_range, true, false, data_parts_lock); } - PartLog::addNewParts(global_context, dst_parts, watch.elapsed()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed()); } catch (...) { - PartLog::addNewParts(global_context, dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } } -void StorageMergeTree::movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & context) +void StorageMergeTree::movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr local_context) { - auto lock1 = lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); - auto lock2 = dest_table->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock1 = lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); + auto lock2 = dest_table->lockForShare(local_context->getCurrentQueryId(), local_context->getSettingsRef().lock_acquire_timeout); auto dest_table_storage = std::dynamic_pointer_cast(dest_table); if (!dest_table_storage) @@ -1341,7 +1389,7 @@ void StorageMergeTree::movePartitionToTable(const StoragePtr & dest_table, const Stopwatch watch; MergeTreeData & src_data = dest_table_storage->checkStructureAndGetMergeTreeData(*this, metadata_snapshot, dest_metadata_snapshot); - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, local_context); DataPartsVector src_parts = src_data.getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); MutableDataPartsVector dst_parts; @@ -1388,11 +1436,11 @@ void StorageMergeTree::movePartitionToTable(const StoragePtr & dest_table, const clearOldMutations(true); clearOldPartsFromFilesystem(); - PartLog::addNewParts(global_context, dst_parts, watch.elapsed()); 
+ PartLog::addNewParts(getContext(), dst_parts, watch.elapsed()); } catch (...) { - PartLog::addNewParts(global_context, dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } } @@ -1417,13 +1465,13 @@ void StorageMergeTree::onActionLockRemove(StorageActionBlockType action_type) background_moves_executor.triggerTask(); } -CheckResults StorageMergeTree::checkData(const ASTPtr & query, const Context & context) +CheckResults StorageMergeTree::checkData(const ASTPtr & query, ContextPtr local_context) { CheckResults results; DataPartsVector data_parts; if (const auto & check_query = query->as(); check_query.partition) { - String partition_id = getPartitionIDFromQuery(check_query.partition, context); + String partition_id = getPartitionIDFromQuery(check_query.partition, local_context); data_parts = getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); } else diff --git a/src/Storages/StorageMergeTree.h b/src/Storages/StorageMergeTree.h index 246ce151a02..2a50cb33912 100644 --- a/src/Storages/StorageMergeTree.h +++ b/src/Storages/StorageMergeTree.h @@ -12,6 +12,8 @@ #include #include #include +#include + #include #include #include @@ -40,7 +42,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -50,16 +52,16 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; std::optional totalRows(const Settings &) const override; - std::optional totalRowsByPartitionPredicate(const SelectQueryInfo &, const Context &) const override; + std::optional totalRowsByPartitionPredicate(const SelectQueryInfo &, ContextPtr) const override; std::optional totalBytes(const Settings &) const override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; /** Perform the next step in combining the parts. */ @@ -70,9 +72,9 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) override; + ContextPtr context) override; - void mutate(const MutationCommands & commands, const Context & context) override; + void mutate(const MutationCommands & commands, ContextPtr context) override; /// Return introspection information about currently processing or recently processed mutations. 
std::vector getMutationsStatus() const override; @@ -80,9 +82,9 @@ public: CancellationCode killMutation(const String & mutation_id) override; void drop() override; - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) override; - void alter(const AlterCommands & commands, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & commands, ContextPtr context, TableLockHolder & table_lock_holder) override; void checkTableCanBeDropped() const override; @@ -90,9 +92,11 @@ public: void onActionLockRemove(StorageActionBlockType action_type) override; - CheckResults checkData(const ASTPtr & query, const Context & context) override; + CheckResults checkData(const ASTPtr & query, ContextPtr context) override; std::optional getDataProcessingJob() override; + + MergeTreeDeduplicationLog * getDeduplicationLog() { return deduplication_log.get(); } private: /// Mutex and condvar for synchronous mutations wait @@ -105,6 +109,8 @@ private: BackgroundJobsExecutor background_executor; BackgroundMovesExecutor background_moves_executor; + std::unique_ptr deduplication_log; + /// For block numbers. SimpleIncrement increment; @@ -128,6 +134,10 @@ private: void loadMutations(); + /// Load and initialize the deduplication log. Even if the deduplication setting + /// equals zero, an object with a deduplication window of zero is created. + void loadDeduplicationLog(); + /** Determines what parts should be merged and merges it. * If aggressive - when selects parts don't takes into account their ratio size and novelty (used for OPTIMIZE query). * Returns true if merge is finished successfully. @@ -201,11 +211,11 @@ private: void clearOldMutations(bool truncate = false); // Partition helpers - void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & context, bool throw_if_noop) override; - PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, const Context & context) override; + void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr context, bool throw_if_noop) override; + PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, ContextPtr context) override; - void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & context) override; - void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & context) override; + void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr context) override; + void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr context) override; bool partIsAssignedToBackgroundOperation(const DataPartPtr & part) const override; /// Update mutation entries after part mutation execution. May reset old /// errors if mutation was successful. 
Otherwise update last_failed* fields @@ -239,7 +249,7 @@ protected: const String & relative_data_path_, const StorageInMemoryMetadata & metadata, bool attach, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr settings_, diff --git a/src/Storages/StorageMongoDB.cpp b/src/Storages/StorageMongoDB.cpp index 09fd413af75..2b0200f3643 100644 --- a/src/Storages/StorageMongoDB.cpp +++ b/src/Storages/StorageMongoDB.cpp @@ -74,7 +74,7 @@ Pipe StorageMongoDB::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned) @@ -106,7 +106,7 @@ void registerStorageMongoDB(StorageFactory & factory) ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & engine_arg : engine_args) - engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.local_context); + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); /// 27017 is the default MongoDB port. auto parsed_host_port = parseAddress(engine_args[0]->as().value.safeGet(), 27017); diff --git a/src/Storages/StorageMongoDB.h b/src/Storages/StorageMongoDB.h index 589ab276539..5e96d1543a2 100644 --- a/src/Storages/StorageMongoDB.h +++ b/src/Storages/StorageMongoDB.h @@ -35,7 +35,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageMySQL.cpp b/src/Storages/StorageMySQL.cpp index caac7c5d95e..35eb85e41d2 100644 --- a/src/Storages/StorageMySQL.cpp +++ b/src/Storages/StorageMySQL.cpp @@ -18,6 +18,7 @@ #include #include #include +#include namespace DB @@ -41,21 +42,21 @@ static String backQuoteMySQL(const String & x) StorageMySQL::StorageMySQL( const StorageID & table_id_, - mysqlxx::Pool && pool_, + mysqlxx::PoolWithFailover && pool_, const std::string & remote_database_name_, const std::string & remote_table_name_, const bool replace_query_, const std::string & on_duplicate_clause_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_) + ContextPtr context_) : IStorage(table_id_) + , WithContext(context_->getGlobalContext()) , remote_database_name(remote_database_name_) , remote_table_name(remote_table_name_) , replace_query{replace_query_} , on_duplicate_clause{on_duplicate_clause_} - , pool(std::move(pool_)) - , global_context(context_.getGlobalContext()) + , pool(std::make_shared(pool_)) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -68,9 +69,9 @@ Pipe StorageMySQL::read( const Names & column_names_, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info_, - const Context & context_, + ContextPtr context_, QueryProcessingStage::Enum /*processed_stage*/, - size_t max_block_size_, + size_t /*max_block_size*/, unsigned) { metadata_snapshot->check(column_names_, getVirtuals(), getStorageID()); @@ -94,9 +95,10 @@ Pipe StorageMySQL::read( sample_block.insert({ column_data.type, column_data.name }); } - /// TODO: rewrite MySQLBlockInputStream + + StreamSettings mysql_input_stream_settings(context_->getSettingsRef(), true, false); return Pipe(std::make_shared( - std::make_shared(pool, 
query, sample_block, max_block_size_, /* auto_close = */ true))); + std::make_shared(pool, query, sample_block, mysql_input_stream_settings))); } @@ -144,10 +146,12 @@ public: { WriteBufferFromOwnString sqlbuf; sqlbuf << (storage.replace_query ? "REPLACE" : "INSERT") << " INTO "; - sqlbuf << backQuoteMySQL(remote_database_name) << "." << backQuoteMySQL(remote_table_name); + if (!remote_database_name.empty()) + sqlbuf << backQuoteMySQL(remote_database_name) << "."; + sqlbuf << backQuoteMySQL(remote_table_name); sqlbuf << " (" << dumpNamesWithBackQuote(block) << ") VALUES "; - auto writer = FormatFactory::instance().getOutputStream("Values", sqlbuf, metadata_snapshot->getSampleBlock(), storage.global_context); + auto writer = FormatFactory::instance().getOutputStream("Values", sqlbuf, metadata_snapshot->getSampleBlock(), storage.getContext()); writer->write(block); if (!storage.on_duplicate_clause.empty()) @@ -211,9 +215,15 @@ private: }; -BlockOutputStreamPtr StorageMySQL::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageMySQL::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - return std::make_shared(*this, metadata_snapshot, remote_database_name, remote_table_name, pool.get(), context.getSettingsRef().mysql_max_rows_to_insert); + return std::make_shared( + *this, + metadata_snapshot, + remote_database_name, + remote_table_name, + pool->get(), + local_context->getSettingsRef().mysql_max_rows_to_insert); } void registerStorageMySQL(StorageFactory & factory) @@ -224,21 +234,22 @@ void registerStorageMySQL(StorageFactory & factory) if (engine_args.size() < 5 || engine_args.size() > 7) throw Exception( - "Storage MySQL requires 5-7 parameters: MySQL('host:port', database, table, 'user', 'password'[, replace_query, 'on_duplicate_clause']).", + "Storage MySQL requires 5-7 parameters: MySQL('host:port' (or 'addresses_pattern'), database, table, 'user', 'password'[, replace_query, 'on_duplicate_clause']).", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & engine_arg : engine_args) - engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.local_context); + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); /// 3306 is the default MySQL port. 
- auto parsed_host_port = parseAddress(engine_args[0]->as().value.safeGet(), 3306); - + const String & host_port = engine_args[0]->as().value.safeGet(); const String & remote_database = engine_args[1]->as().value.safeGet(); const String & remote_table = engine_args[2]->as().value.safeGet(); const String & username = engine_args[3]->as().value.safeGet(); const String & password = engine_args[4]->as().value.safeGet(); + size_t max_addresses = args.getContext()->getSettingsRef().glob_expansion_max_elements; - mysqlxx::Pool pool(remote_database, parsed_host_port.first, username, password, parsed_host_port.second); + auto addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 3306); + mysqlxx::PoolWithFailover pool(remote_database, addresses, username, password); bool replace_query = false; std::string on_duplicate_clause; @@ -261,7 +272,7 @@ void registerStorageMySQL(StorageFactory & factory) on_duplicate_clause, args.columns, args.constraints, - args.context); + args.getContext()); }, { .source_access_type = AccessType::MYSQL, diff --git a/src/Storages/StorageMySQL.h b/src/Storages/StorageMySQL.h index 645f3600eee..a68c06c1abe 100644 --- a/src/Storages/StorageMySQL.h +++ b/src/Storages/StorageMySQL.h @@ -1,15 +1,15 @@ #pragma once #if !defined(ARCADIA_BUILD) -# include "config_core.h" +#include "config_core.h" #endif #if USE_MYSQL -# include +#include -# include -# include +#include +#include namespace DB @@ -19,20 +19,20 @@ namespace DB * Use ENGINE = mysql(host_port, database_name, table_name, user_name, password) * Read only. */ -class StorageMySQL final : public ext::shared_ptr_helper, public IStorage +class StorageMySQL final : public ext::shared_ptr_helper, public IStorage, WithContext { friend struct ext::shared_ptr_helper; public: StorageMySQL( const StorageID & table_id_, - mysqlxx::Pool && pool_, + mysqlxx::PoolWithFailover && pool_, const std::string & remote_database_name_, const std::string & remote_table_name_, - const bool replace_query_, + bool replace_query_, const std::string & on_duplicate_clause_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_); + ContextPtr context_); std::string getName() const override { return "MySQL"; } @@ -40,12 +40,12 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; private: friend class StorageMySQLBlockOutputStream; @@ -55,8 +55,7 @@ private: bool replace_query; std::string on_duplicate_clause; - mysqlxx::Pool pool; - const Context & global_context; + mysqlxx::PoolWithFailoverPtr pool; }; } diff --git a/src/Storages/StorageNull.cpp b/src/Storages/StorageNull.cpp index ed9a7fffc63..46f88bbc7ac 100644 --- a/src/Storages/StorageNull.cpp +++ b/src/Storages/StorageNull.cpp @@ -36,7 +36,7 @@ void registerStorageNull(StorageFactory & factory) }); } -void StorageNull::checkAlterIsPossible(const AlterCommands & commands, const Context & context) const +void StorageNull::checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const { auto name_deps = 
getDependentViewsByColumn(context); for (const auto & command : commands) @@ -61,7 +61,7 @@ void StorageNull::checkAlterIsPossible(const AlterCommands & commands, const Con } -void StorageNull::alter(const AlterCommands & params, const Context & context, TableLockHolder &) +void StorageNull::alter(const AlterCommands & params, ContextPtr context, TableLockHolder &) { auto table_id = getStorageID(); diff --git a/src/Storages/StorageNull.h b/src/Storages/StorageNull.h index 943c056a588..7fe65eb25dc 100644 --- a/src/Storages/StorageNull.h +++ b/src/Storages/StorageNull.h @@ -25,7 +25,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo &, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processing_stage*/, size_t, unsigned) override @@ -36,14 +36,14 @@ public: bool supportsParallelInsert() const override { return true; } - BlockOutputStreamPtr write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &) override + BlockOutputStreamPtr write(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr) override { return std::make_shared(metadata_snapshot->getSampleBlock()); } - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override; + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override; - void alter(const AlterCommands & params, const Context & context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & table_lock_holder) override; std::optional totalRows(const Settings &) const override { diff --git a/src/Storages/StoragePostgreSQL.cpp b/src/Storages/StoragePostgreSQL.cpp index e1b927027f9..a99568c0036 100644 --- a/src/Storages/StoragePostgreSQL.cpp +++ b/src/Storages/StoragePostgreSQL.cpp @@ -26,6 +26,7 @@ #include #include #include +#include #include #include @@ -41,17 +42,17 @@ namespace ErrorCodes StoragePostgreSQL::StoragePostgreSQL( const StorageID & table_id_, + const postgres::PoolWithFailover & pool_, const String & remote_table_name_, - PostgreSQLConnectionPoolPtr connection_pool_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, + ContextPtr context_, const String & remote_table_schema_) : IStorage(table_id_) , remote_table_name(remote_table_name_) , remote_table_schema(remote_table_schema_) , global_context(context_) - , connection_pool(std::move(connection_pool_)) + , pool(std::make_shared(pool_)) { StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); @@ -64,7 +65,7 @@ Pipe StoragePostgreSQL::read( const Names & column_names_, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info_, - const Context & context_, + ContextPtr context_, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size_, unsigned) @@ -88,7 +89,7 @@ Pipe StoragePostgreSQL::read( } return Pipe(std::make_shared( - std::make_shared(connection_pool->get(), query, sample_block, max_block_size_))); + std::make_shared(pool->get(), query, sample_block, max_block_size_))); } @@ -97,7 +98,7 @@ class PostgreSQLBlockOutputStream : public IBlockOutputStream public: explicit PostgreSQLBlockOutputStream( const StorageMetadataPtr & metadata_snapshot_, - PostgreSQLConnectionHolderPtr connection_, + postgres::ConnectionHolderPtr connection_, const std::string & remote_table_name_) : metadata_snapshot(metadata_snapshot_) , 
connection(std::move(connection_)) @@ -276,7 +277,7 @@ public: private: StorageMetadataPtr metadata_snapshot; - PostgreSQLConnectionHolderPtr connection; + postgres::ConnectionHolderPtr connection; std::string remote_table_name; std::unique_ptr work; @@ -285,9 +286,9 @@ private: BlockOutputStreamPtr StoragePostgreSQL::write( - const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /* context */) + const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /* context */) { - return std::make_shared(metadata_snapshot, connection_pool->get(), remote_table_name); + return std::make_shared(metadata_snapshot, pool->get(), remote_table_name); } @@ -303,26 +304,33 @@ void registerStoragePostgreSQL(StorageFactory & factory) ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & engine_arg : engine_args) - engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.local_context); + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); - auto parsed_host_port = parseAddress(engine_args[0]->as().value.safeGet(), 5432); + auto host_port = engine_args[0]->as().value.safeGet(); + /// Split into replicas if needed. + size_t max_addresses = args.getContext()->getSettingsRef().glob_expansion_max_elements; + auto addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 5432); + + const String & remote_database = engine_args[1]->as().value.safeGet(); const String & remote_table = engine_args[2]->as().value.safeGet(); + const String & username = engine_args[3]->as().value.safeGet(); + const String & password = engine_args[4]->as().value.safeGet(); String remote_table_schema; if (engine_args.size() == 6) remote_table_schema = engine_args[5]->as().value.safeGet(); - auto connection_pool = std::make_shared( - engine_args[1]->as().value.safeGet(), - parsed_host_port.first, - parsed_host_port.second, - engine_args[3]->as().value.safeGet(), - engine_args[4]->as().value.safeGet(), - args.context.getSettingsRef().postgresql_connection_pool_size, - args.context.getSettingsRef().postgresql_connection_pool_wait_timeout); + postgres::PoolWithFailover pool( + remote_database, + addresses, + username, + password, + args.getContext()->getSettingsRef().postgresql_connection_pool_size, + args.getContext()->getSettingsRef().postgresql_connection_pool_wait_timeout); return StoragePostgreSQL::create( - args.table_id, remote_table, connection_pool, args.columns, args.constraints, args.context, remote_table_schema); + args.table_id, pool, remote_table, + args.columns, args.constraints, args.getContext(), remote_table_schema); }, { .source_access_type = AccessType::POSTGRES, diff --git a/src/Storages/StoragePostgreSQL.h b/src/Storages/StoragePostgreSQL.h index fb80352f58d..e4ab59f7a06 100644 --- a/src/Storages/StoragePostgreSQL.h +++ b/src/Storages/StoragePostgreSQL.h @@ -9,8 +9,7 @@ #include #include #include -#include -#include +#include namespace DB @@ -23,11 +22,11 @@ class StoragePostgreSQL final : public ext::shared_ptr_helper public: StoragePostgreSQL( const StorageID & table_id_, + const postgres::PoolWithFailover & pool_, const String & remote_table_name_, - PostgreSQLConnectionPoolPtr connection_pool_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, + ContextPtr context_, const std::string & remote_table_schema_ = ""); String getName() const override { return "PostgreSQL"; } @@ -36,20 +35,20 @@ public: const Names & 
column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; private: friend class PostgreSQLBlockOutputStream; String remote_table_name; String remote_table_schema; - Context global_context; - PostgreSQLConnectionPoolPtr connection_pool; + ContextPtr global_context; + postgres::PoolWithFailoverPtr pool; }; } diff --git a/src/Storages/StorageProxy.h b/src/Storages/StorageProxy.h index 0349319d8fa..2c3e9d610b0 100644 --- a/src/Storages/StorageProxy.h +++ b/src/Storages/StorageProxy.h @@ -11,7 +11,7 @@ class StorageProxy : public IStorage { public: - StorageProxy(const StorageID & table_id_) : IStorage(table_id_) {} + explicit StorageProxy(const StorageID & table_id_) : IStorage(table_id_) {} virtual StoragePtr getNested() const = 0; @@ -32,7 +32,7 @@ public: NamesAndTypesList getVirtuals() const override { return getNested()->getVirtuals(); } QueryProcessingStage::Enum getQueryProcessingStage( - const Context & context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & ast) const override + ContextPtr context, QueryProcessingStage::Enum to_stage, SelectQueryInfo & ast) const override { return getNested()->getQueryProcessingStage(context, to_stage, ast); } @@ -40,7 +40,7 @@ public: BlockInputStreams watch( const Names & column_names, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum & processed_stage, size_t max_block_size, unsigned num_streams) override @@ -52,7 +52,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override @@ -60,10 +60,7 @@ public: return getNested()->read(column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); } - BlockOutputStreamPtr write( - const ASTPtr & query, - const StorageMetadataPtr & metadata_snapshot, - const Context & context) override + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) override { return getNested()->write(query, metadata_snapshot, context); } @@ -73,7 +70,7 @@ public: void truncate( const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, - const Context & context, + ContextPtr context, TableExclusiveLockHolder & lock) override { getNested()->truncate(query, metadata_snapshot, context, lock); @@ -91,13 +88,13 @@ public: IStorage::renameInMemory(new_table_id); } - void alter(const AlterCommands & params, const Context & context, TableLockHolder & alter_lock_holder) override + void alter(const AlterCommands & params, ContextPtr context, TableLockHolder & alter_lock_holder) override { getNested()->alter(params, context, alter_lock_holder); IStorage::setInMemoryMetadata(getNested()->getInMemoryMetadata()); } - void checkAlterIsPossible(const AlterCommands & commands, const Context & context) const override + void checkAlterIsPossible(const AlterCommands & commands, ContextPtr context) const override { 
getNested()->checkAlterIsPossible(commands, context); } @@ -105,7 +102,7 @@ public: Pipe alterPartition( const StorageMetadataPtr & metadata_snapshot, const PartitionCommands & commands, - const Context & context) override + ContextPtr context) override { return getNested()->alterPartition(metadata_snapshot, commands, context); } @@ -122,12 +119,12 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & context) override + ContextPtr context) override { return getNested()->optimize(query, metadata_snapshot, partition, final, deduplicate, deduplicate_by_columns, context); } - void mutate(const MutationCommands & commands, const Context & context) override { getNested()->mutate(commands, context); } + void mutate(const MutationCommands & commands, ContextPtr context) override { getNested()->mutate(commands, context); } CancellationCode killMutation(const String & mutation_id) override { return getNested()->killMutation(mutation_id); } @@ -137,12 +134,12 @@ public: ActionLock getActionLock(StorageActionBlockType action_type) override { return getNested()->getActionLock(action_type); } bool supportsIndexForIn() const override { return getNested()->supportsIndexForIn(); } - bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, const Context & query_context, const StorageMetadataPtr & metadata_snapshot) const override + bool mayBenefitFromIndexForIn(const ASTPtr & left_in_operand, ContextPtr query_context, const StorageMetadataPtr & metadata_snapshot) const override { return getNested()->mayBenefitFromIndexForIn(left_in_operand, query_context, metadata_snapshot); } - CheckResults checkData(const ASTPtr & query , const Context & context) override { return getNested()->checkData(query, context); } + CheckResults checkData(const ASTPtr & query , ContextPtr context) override { return getNested()->checkData(query, context); } void checkTableCanBeDropped() const override { getNested()->checkTableCanBeDropped(); } void checkPartitionCanBeDropped(const ASTPtr & partition) override { getNested()->checkPartitionCanBeDropped(partition); } bool storesDataOnDisk() const override { return getNested()->storesDataOnDisk(); } diff --git a/src/Storages/StorageReplicatedMergeTree.cpp b/src/Storages/StorageReplicatedMergeTree.cpp index f9d63132a1b..3b4a1ec4e16 100644 --- a/src/Storages/StorageReplicatedMergeTree.cpp +++ b/src/Storages/StorageReplicatedMergeTree.cpp @@ -26,6 +26,7 @@ #include #include #include +#include #include @@ -51,6 +52,7 @@ #include #include #include +#include #include #include @@ -59,7 +61,7 @@ #include #include -#include "Storages/MergeTree/MergeTreeReaderCompact.h" +#include #include #include @@ -128,6 +130,7 @@ namespace ErrorCodes extern const int UNKNOWN_POLICY; extern const int NO_SUCH_DATA_PART; extern const int INTERSERVER_SCHEME_DOESNT_MATCH; + extern const int DUPLICATE_DATA_PART; } namespace ActionLocks @@ -161,11 +164,11 @@ void StorageReplicatedMergeTree::setZooKeeper() std::lock_guard lock(current_zookeeper_mutex); if (zookeeper_name == default_zookeeper_name) { - current_zookeeper = global_context.getZooKeeper(); + current_zookeeper = getContext()->getZooKeeper(); } else { - current_zookeeper = global_context.getAuxiliaryZooKeeper(zookeeper_name); + current_zookeeper = getContext()->getAuxiliaryZooKeeper(zookeeper_name); } } @@ -229,7 +232,7 @@ StorageReplicatedMergeTree::StorageReplicatedMergeTree( const StorageID & table_id_, const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, - Context & context_, + 
ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr settings_, @@ -251,34 +254,34 @@ StorageReplicatedMergeTree::StorageReplicatedMergeTree( , replica_path(zookeeper_path + "/replicas/" + replica_name_) , reader(*this) , writer(*this) - , merger_mutator(*this, global_context.getSettingsRef().background_pool_size) + , merger_mutator(*this, getContext()->getSettingsRef().background_pool_size) , merge_strategy_picker(*this) , queue(*this, merge_strategy_picker) , fetcher(*this) - , background_executor(*this, global_context) - , background_moves_executor(*this, global_context) + , background_executor(*this, getContext()) + , background_moves_executor(*this, getContext()) , cleanup_thread(*this) , part_check_thread(*this) , restarting_thread(*this) , allow_renaming(allow_renaming_) - , replicated_fetches_pool_size(global_context.getSettingsRef().background_fetches_pool_size) + , replicated_fetches_pool_size(getContext()->getSettingsRef().background_fetches_pool_size) { - queue_updating_task = global_context.getSchedulePool().createTask( + queue_updating_task = getContext()->getSchedulePool().createTask( getStorageID().getFullTableName() + " (StorageReplicatedMergeTree::queueUpdatingTask)", [this]{ queueUpdatingTask(); }); - mutations_updating_task = global_context.getSchedulePool().createTask( + mutations_updating_task = getContext()->getSchedulePool().createTask( getStorageID().getFullTableName() + " (StorageReplicatedMergeTree::mutationsUpdatingTask)", [this]{ mutationsUpdatingTask(); }); - merge_selecting_task = global_context.getSchedulePool().createTask( + merge_selecting_task = getContext()->getSchedulePool().createTask( getStorageID().getFullTableName() + " (StorageReplicatedMergeTree::mergeSelectingTask)", [this] { mergeSelectingTask(); }); /// Will be activated if we win leader election. merge_selecting_task->deactivate(); - mutations_finalizing_task = global_context.getSchedulePool().createTask( + mutations_finalizing_task = getContext()->getSchedulePool().createTask( getStorageID().getFullTableName() + " (StorageReplicatedMergeTree::mutationsFinalizingTask)", [this] { mutationsFinalizingTask(); }); - if (global_context.hasZooKeeper() || global_context.hasAuxiliaryZooKeeper(zookeeper_name)) + if (getContext()->hasZooKeeper() || getContext()->hasAuxiliaryZooKeeper(zookeeper_name)) { /// It's possible for getZooKeeper() to timeout if zookeeper host(s) can't /// be reached. In such cases Poco::Exception is thrown after a connection @@ -297,11 +300,11 @@ StorageReplicatedMergeTree::StorageReplicatedMergeTree( { if (zookeeper_name == default_zookeeper_name) { - current_zookeeper = global_context.getZooKeeper(); + current_zookeeper = getContext()->getZooKeeper(); } else { - current_zookeeper = global_context.getAuxiliaryZooKeeper(zookeeper_name); + current_zookeeper = getContext()->getAuxiliaryZooKeeper(zookeeper_name); } } catch (...) 
@@ -455,12 +458,12 @@ void StorageReplicatedMergeTree::waitMutationToFinishOnReplicas( if (replicas.empty()) return; - zkutil::EventPtr wait_event = std::make_shared(); std::set inactive_replicas; for (const String & replica : replicas) { LOG_DEBUG(log, "Waiting for {} to apply mutation {}", replica, mutation_id); + zkutil::EventPtr wait_event = std::make_shared(); while (!partial_shutdown_called) { @@ -484,9 +487,8 @@ void StorageReplicatedMergeTree::waitMutationToFinishOnReplicas( String mutation_pointer = zookeeper_path + "/replicas/" + replica + "/mutation_pointer"; std::string mutation_pointer_value; - Coordination::Stat get_stat; /// Replica could be removed - if (!zookeeper->tryGet(mutation_pointer, mutation_pointer_value, &get_stat, wait_event)) + if (!zookeeper->tryGet(mutation_pointer, mutation_pointer_value, nullptr, wait_event)) { LOG_WARNING(log, "Replica {} was removed", replica); break; @@ -496,8 +498,10 @@ void StorageReplicatedMergeTree::waitMutationToFinishOnReplicas( /// Replica can become inactive, so wait with timeout and recheck it if (wait_event->tryWait(1000)) - break; + continue; + /// Here we check mutation for errors or kill on local replica. If they happen on this replica + /// they will happen on each replica, so we can check only in-memory info. auto mutation_status = queue.getIncompleteMutationsStatus(mutation_id); if (!mutation_status || !mutation_status->latest_fail_reason.empty()) break; @@ -514,6 +518,8 @@ void StorageReplicatedMergeTree::waitMutationToFinishOnReplicas( std::set mutation_ids; mutation_ids.insert(mutation_id); + /// Here we check mutation for errors or kill on local replica. If they happen on this replica + /// they will happen on each replica, so we can check only in-memory info. auto mutation_status = queue.getIncompleteMutationsStatus(mutation_id, &mutation_ids); checkMutationStatus(mutation_status, mutation_ids); @@ -579,42 +585,24 @@ bool StorageReplicatedMergeTree::createTableIfNotExists(const StorageMetadataPtr /// This is Ok because another replica is definitely going to drop the table. 
LOG_WARNING(log, "Removing leftovers from table {} (this might take several minutes)", zookeeper_path); + String drop_lock_path = zookeeper_path + "/dropped/lock"; + Coordination::Error code = zookeeper->tryCreate(drop_lock_path, "", zkutil::CreateMode::Ephemeral); - Strings children; - Coordination::Error code = zookeeper->tryGetChildren(zookeeper_path, children); - if (code == Coordination::Error::ZNONODE) + if (code == Coordination::Error::ZNONODE || code == Coordination::Error::ZNODEEXISTS) { - LOG_WARNING(log, "Table {} is already finished removing by another replica right now", replica_path); + LOG_WARNING(log, "The leftovers from table {} were removed by another replica", zookeeper_path); + } + else if (code != Coordination::Error::ZOK) + { + throw Coordination::Exception(code, drop_lock_path); } else { - for (const auto & child : children) - if (child != "dropped") - zookeeper->tryRemoveRecursive(zookeeper_path + "/" + child); - - Coordination::Requests ops; - Coordination::Responses responses; - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/dropped", -1)); - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path, -1)); - code = zookeeper->tryMulti(ops, responses); - - if (code == Coordination::Error::ZNONODE) + auto metadata_drop_lock = zkutil::EphemeralNodeHolder::existing(drop_lock_path, *zookeeper); + if (!removeTableNodesFromZooKeeper(zookeeper, zookeeper_path, metadata_drop_lock, log)) { - LOG_WARNING(log, "Table {} is already finished removing by another replica right now", replica_path); - } - else if (code == Coordination::Error::ZNOTEMPTY) - { - throw Exception(fmt::format( - "The old table was not completely removed from ZooKeeper, {} still exists and may contain some garbage. But it should never happen according to the logic of operations (it's a bug).", zookeeper_path), ErrorCodes::LOGICAL_ERROR); - } - else if (code != Coordination::Error::ZOK) - { - /// It is still possible that ZooKeeper session is expired or server is killed in the middle of the delete operation. - zkutil::KeeperMultiException::check(code, ops, responses); - } - else - { - LOG_WARNING(log, "The leftovers from table {} was successfully removed from ZooKeeper", zookeeper_path); + /// Someone is recursively removing table right now, we cannot create new table until old one is removed + continue; } } } @@ -627,10 +615,6 @@ bool StorageReplicatedMergeTree::createTableIfNotExists(const StorageMetadataPtr Coordination::Requests ops; ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path, "", zkutil::CreateMode::Persistent)); - /// Check that the table is not being dropped right now. - ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/dropped", "", zkutil::CreateMode::Persistent)); - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/dropped", -1)); - ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/metadata", metadata_str, zkutil::CreateMode::Persistent)); ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/columns", metadata_snapshot->getColumns().toString(), @@ -773,9 +757,9 @@ void StorageReplicatedMergeTree::drop() /// and calling StorageReplicatedMergeTree::getZooKeeper()/getAuxiliaryZooKeeper() won't suffice. 
zkutil::ZooKeeperPtr zookeeper; if (zookeeper_name == default_zookeeper_name) - zookeeper = global_context.getZooKeeper(); + zookeeper = getContext()->getZooKeeper(); else - zookeeper = global_context.getAuxiliaryZooKeeper(zookeeper_name); + zookeeper = getContext()->getAuxiliaryZooKeeper(zookeeper_name); /// If probably there is metadata in ZooKeeper, we don't allow to drop the table. if (!zookeeper) @@ -818,10 +802,18 @@ void StorageReplicatedMergeTree::dropReplica(zkutil::ZooKeeperPtr zookeeper, con * because table creation is executed in single transaction that will conflict with remaining nodes. */ + /// The /dropped node works like a lock that protects against concurrent removal of the old table and creation of a new one. + /// But recursive removal may fail in the middle of the operation, leaving some garbage in zookeeper_path, so + /// we clean it up on table creation if the /dropped node exists. The creating thread may then remove a /dropped node created by + /// the removing thread, which causes a race condition if the removing thread has not finished yet. + /// To avoid this we also create an ephemeral child before starting the recursive removal. + /// (The existence of a child node does not allow the parent node to be removed). Coordination::Requests ops; Coordination::Responses responses; + String drop_lock_path = zookeeper_path + "/dropped/lock"; ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/replicas", -1)); ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/dropped", "", zkutil::CreateMode::Persistent)); + ops.emplace_back(zkutil::makeCreateRequest(drop_lock_path, "", zkutil::CreateMode::Ephemeral)); Coordination::Error code = zookeeper->tryMulti(ops, responses); if (code == Coordination::Error::ZNONODE || code == Coordination::Error::ZNODEEXISTS) @@ -838,48 +830,57 @@ void StorageReplicatedMergeTree::dropReplica(zkutil::ZooKeeperPtr zookeeper, con } else { + auto metadata_drop_lock = zkutil::EphemeralNodeHolder::existing(drop_lock_path, *zookeeper); LOG_INFO(logger, "Removing table {} (this might take several minutes)", zookeeper_path); - - Strings children; - code = zookeeper->tryGetChildren(zookeeper_path, children); - if (code == Coordination::Error::ZNONODE) - { - LOG_WARNING(logger, "Table {} is already finished removing by another replica right now", remote_replica_path); - } - else - { - for (const auto & child : children) - if (child != "dropped") - zookeeper->tryRemoveRecursive(zookeeper_path + "/" + child); - - ops.clear(); - responses.clear(); - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/dropped", -1)); - ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path, -1)); - code = zookeeper->tryMulti(ops, responses); - - if (code == Coordination::Error::ZNONODE) - { - LOG_WARNING(logger, "Table {} is already finished removing by another replica right now", remote_replica_path); - } - else if (code == Coordination::Error::ZNOTEMPTY) - { - LOG_ERROR(logger, "Table was not completely removed from ZooKeeper, {} still exists and may contain some garbage.", - zookeeper_path); - } - else if (code != Coordination::Error::ZOK) - { - /// It is still possible that ZooKeeper session is expired or server is killed in the middle of the delete operation. 
- zkutil::KeeperMultiException::check(code, ops, responses); - } - else - { - LOG_INFO(logger, "Table {} was successfully removed from ZooKeeper", zookeeper_path); - } - } + removeTableNodesFromZooKeeper(zookeeper, zookeeper_path, metadata_drop_lock, logger); } } +bool StorageReplicatedMergeTree::removeTableNodesFromZooKeeper(zkutil::ZooKeeperPtr zookeeper, + const String & zookeeper_path, const zkutil::EphemeralNodeHolder::Ptr & metadata_drop_lock, Poco::Logger * logger) +{ + bool completely_removed = false; + Strings children; + Coordination::Error code = zookeeper->tryGetChildren(zookeeper_path, children); + if (code == Coordination::Error::ZNONODE) + throw Exception(ErrorCodes::LOGICAL_ERROR, "There is a race condition between creation and removal of replicated table. It's a bug"); + + + for (const auto & child : children) + if (child != "dropped") + zookeeper->tryRemoveRecursive(zookeeper_path + "/" + child); + + Coordination::Requests ops; + Coordination::Responses responses; + ops.emplace_back(zkutil::makeRemoveRequest(metadata_drop_lock->getPath(), -1)); + ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path + "/dropped", -1)); + ops.emplace_back(zkutil::makeRemoveRequest(zookeeper_path, -1)); + code = zookeeper->tryMulti(ops, responses); + + if (code == Coordination::Error::ZNONODE) + { + throw Exception(ErrorCodes::LOGICAL_ERROR, "There is a race condition between creation and removal of replicated table. It's a bug"); + } + else if (code == Coordination::Error::ZNOTEMPTY) + { + LOG_ERROR(logger, "Table was not completely removed from ZooKeeper, {} still exists and may contain some garbage, " + "but someone is removing it right now.", zookeeper_path); + } + else if (code != Coordination::Error::ZOK) + { + /// It is still possible that ZooKeeper session is expired or server is killed in the middle of the delete operation. + zkutil::KeeperMultiException::check(code, ops, responses); + } + else + { + metadata_drop_lock->setAlreadyRemoved(); + completely_removed = true; + LOG_INFO(logger, "Table {} was successfully removed from ZooKeeper", zookeeper_path); + } + + return completely_removed; +} + /** Verify that list of columns and table storage_settings_ptr match those specified in ZK (/metadata). * If not, throw an exception. @@ -893,7 +894,7 @@ void StorageReplicatedMergeTree::checkTableStructure(const String & zookeeper_pr Coordination::Stat metadata_stat; String metadata_str = zookeeper->get(zookeeper_prefix + "/metadata", &metadata_stat); auto metadata_from_zk = ReplicatedMergeTreeTableMetadata::parse(metadata_str); - old_metadata.checkEquals(metadata_from_zk, metadata_snapshot->getColumns(), global_context); + old_metadata.checkEquals(metadata_from_zk, metadata_snapshot->getColumns(), getContext()); Coordination::Stat columns_stat; auto columns_from_zk = ColumnsDescription::parse(zookeeper->get(zookeeper_prefix + "/columns", &columns_stat)); @@ -938,7 +939,7 @@ void StorageReplicatedMergeTree::setTableStructure( auto & sorting_key = new_metadata.sorting_key; auto & primary_key = new_metadata.primary_key; - sorting_key.recalculateWithNewAST(order_by_ast, new_metadata.columns, global_context); + sorting_key.recalculateWithNewAST(order_by_ast, new_metadata.columns, getContext()); if (primary_key.definition_ast == nullptr) { @@ -946,18 +947,18 @@ void StorageReplicatedMergeTree::setTableStructure( /// save the old ORDER BY expression as the new primary key. 
auto old_sorting_key_ast = old_metadata.getSortingKey().definition_ast; primary_key = KeyDescription::getKeyFromAST( - old_sorting_key_ast, new_metadata.columns, global_context); + old_sorting_key_ast, new_metadata.columns, getContext()); } } if (metadata_diff.sampling_expression_changed) { auto sample_by_ast = parse_key_expr(metadata_diff.new_sampling_expression); - new_metadata.sampling_key.recalculateWithNewAST(sample_by_ast, new_metadata.columns, global_context); + new_metadata.sampling_key.recalculateWithNewAST(sample_by_ast, new_metadata.columns, getContext()); } if (metadata_diff.skip_indices_changed) - new_metadata.secondary_indices = IndicesDescription::parse(metadata_diff.new_skip_indices, new_columns, global_context); + new_metadata.secondary_indices = IndicesDescription::parse(metadata_diff.new_skip_indices, new_columns, getContext()); if (metadata_diff.constraints_changed) new_metadata.constraints = ConstraintsDescription::parse(metadata_diff.new_constraints); @@ -969,7 +970,7 @@ void StorageReplicatedMergeTree::setTableStructure( ParserTTLExpressionList parser; auto ttl_for_table_ast = parseQuery(parser, metadata_diff.new_ttl_table, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH); new_metadata.table_ttl = TTLTableDescription::getTTLForTableFromAST( - ttl_for_table_ast, new_metadata.columns, global_context, new_metadata.primary_key); + ttl_for_table_ast, new_metadata.columns, getContext(), new_metadata.primary_key); } else /// TTL was removed { @@ -982,39 +983,39 @@ void StorageReplicatedMergeTree::setTableStructure( new_metadata.column_ttls_by_name.clear(); for (const auto & [name, ast] : new_metadata.columns.getColumnTTLs()) { - auto new_ttl_entry = TTLDescription::getTTLFromAST(ast, new_metadata.columns, global_context, new_metadata.primary_key); + auto new_ttl_entry = TTLDescription::getTTLFromAST(ast, new_metadata.columns, getContext(), new_metadata.primary_key); new_metadata.column_ttls_by_name[name] = new_ttl_entry; } if (new_metadata.partition_key.definition_ast != nullptr) - new_metadata.partition_key.recalculateWithNewColumns(new_metadata.columns, global_context); + new_metadata.partition_key.recalculateWithNewColumns(new_metadata.columns, getContext()); if (!metadata_diff.sorting_key_changed) /// otherwise already updated - new_metadata.sorting_key.recalculateWithNewColumns(new_metadata.columns, global_context); + new_metadata.sorting_key.recalculateWithNewColumns(new_metadata.columns, getContext()); /// Primary key is special, it exists even if not defined if (new_metadata.primary_key.definition_ast != nullptr) { - new_metadata.primary_key.recalculateWithNewColumns(new_metadata.columns, global_context); + new_metadata.primary_key.recalculateWithNewColumns(new_metadata.columns, getContext()); } else { - new_metadata.primary_key = KeyDescription::getKeyFromAST(new_metadata.sorting_key.definition_ast, new_metadata.columns, global_context); + new_metadata.primary_key = KeyDescription::getKeyFromAST(new_metadata.sorting_key.definition_ast, new_metadata.columns, getContext()); new_metadata.primary_key.definition_ast = nullptr; } if (!metadata_diff.sampling_expression_changed && new_metadata.sampling_key.definition_ast != nullptr) - new_metadata.sampling_key.recalculateWithNewColumns(new_metadata.columns, global_context); + new_metadata.sampling_key.recalculateWithNewColumns(new_metadata.columns, getContext()); if (!metadata_diff.skip_indices_changed) /// otherwise already updated { for (auto & index : new_metadata.secondary_indices) - 
index.recalculateWithNewColumns(new_metadata.columns, global_context); + index.recalculateWithNewColumns(new_metadata.columns, getContext()); } if (!metadata_diff.ttl_table_changed && new_metadata.table_ttl.definition_ast != nullptr) new_metadata.table_ttl = TTLTableDescription::getTTLForTableFromAST( - new_metadata.table_ttl.definition_ast, new_metadata.columns, global_context, new_metadata.primary_key); + new_metadata.table_ttl.definition_ast, new_metadata.columns, getContext(), new_metadata.primary_key); /// Even if the primary/sorting/partition keys didn't change we must reinitialize it /// because primary/partition key column types might have changed. @@ -1022,7 +1023,7 @@ void StorageReplicatedMergeTree::setTableStructure( setProperties(new_metadata, old_metadata); auto table_id = getStorageID(); - DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(global_context, table_id, new_metadata); + DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(getContext(), table_id, new_metadata); } @@ -1649,13 +1650,12 @@ bool StorageReplicatedMergeTree::tryExecuteMerge(const LogEntry & entry) /// Account TTL merge if (isTTLMergeType(future_merged_part.merge_type)) - global_context.getMergeList().bookMergeWithTTL(); + getContext()->getMergeList().bookMergeWithTTL(); auto table_id = getStorageID(); /// Add merge to list - MergeList::EntryPtr merge_entry = global_context.getMergeList().insert( - table_id.database_name, table_id.table_name, future_merged_part); + MergeList::EntryPtr merge_entry = getContext()->getMergeList().insert(table_id.database_name, table_id.table_name, future_merged_part); Transaction transaction(*this); MutableDataPartPtr part; @@ -1673,7 +1673,7 @@ bool StorageReplicatedMergeTree::tryExecuteMerge(const LogEntry & entry) { part = merger_mutator.mergePartsToTemporaryPart( future_merged_part, metadata_snapshot, *merge_entry, - table_lock, entry.create_time, global_context, reserved_space, entry.deduplicate, entry.deduplicate_by_columns); + table_lock, entry.create_time, getContext(), reserved_space, entry.deduplicate, entry.deduplicate_by_columns); merger_mutator.renameMergedTemporaryPart(part, parts, &transaction); @@ -1793,7 +1793,7 @@ bool StorageReplicatedMergeTree::tryExecutePartMutation(const StorageReplicatedM future_mutated_part.type = source_part->getType(); auto table_id = getStorageID(); - MergeList::EntryPtr merge_entry = global_context.getMergeList().insert( + MergeList::EntryPtr merge_entry = getContext()->getMergeList().insert( table_id.database_name, table_id.table_name, future_mutated_part); Stopwatch stopwatch; @@ -1809,7 +1809,7 @@ bool StorageReplicatedMergeTree::tryExecutePartMutation(const StorageReplicatedM { new_part = merger_mutator.mutatePartToTemporaryPart( future_mutated_part, metadata_snapshot, commands, *merge_entry, - entry.create_time, global_context, reserved_space, table_lock); + entry.create_time, getContext(), reserved_space, table_lock); renameTempPartAndReplace(new_part, nullptr, &transaction); try @@ -2159,12 +2159,20 @@ bool StorageReplicatedMergeTree::executeReplaceRange(const LogEntry & entry) struct PartDescription { - PartDescription(size_t index_, const String & src_part_name_, const String & new_part_name_, const String & checksum_hex_, - MergeTreeDataFormatVersion format_version) - : index(index_), - src_part_name(src_part_name_), src_part_info(MergeTreePartInfo::fromPartName(src_part_name_, format_version)), - new_part_name(new_part_name_), 
new_part_info(MergeTreePartInfo::fromPartName(new_part_name_, format_version)), - checksum_hex(checksum_hex_) {} + PartDescription( + size_t index_, + const String & src_part_name_, + const String & new_part_name_, + const String & checksum_hex_, + MergeTreeDataFormatVersion format_version) + : index(index_) + , src_part_name(src_part_name_) + , src_part_info(MergeTreePartInfo::fromPartName(src_part_name_, format_version)) + , new_part_name(new_part_name_) + , new_part_info(MergeTreePartInfo::fromPartName(new_part_name_, format_version)) + , checksum_hex(checksum_hex_) + { + } size_t index; // in log entry arrays String src_part_name; @@ -2239,7 +2247,7 @@ bool StorageReplicatedMergeTree::executeReplaceRange(const LogEntry & entry) auto clone_data_parts_from_source_table = [&] () -> size_t { - source_table = DatabaseCatalog::instance().tryGetTable(source_table_id, global_context); + source_table = DatabaseCatalog::instance().tryGetTable(source_table_id, getContext()); if (!source_table) { LOG_DEBUG(log, "Can't use {} as source table for REPLACE PARTITION command. It does not exist.", source_table_id.getNameForLogs()); @@ -2395,17 +2403,17 @@ bool StorageReplicatedMergeTree::executeReplaceRange(const LogEntry & entry) { String source_replica_path = zookeeper_path + "/replicas/" + part_desc->replica; ReplicatedMergeTreeAddress address(getZooKeeper()->get(source_replica_path + "/host")); - auto timeouts = getFetchPartHTTPTimeouts(global_context); + auto timeouts = getFetchPartHTTPTimeouts(getContext()); - auto [user, password] = global_context.getInterserverCredentials(); - String interserver_scheme = global_context.getInterserverScheme(); + auto credentials = getContext()->getInterserverCredentials(); + String interserver_scheme = getContext()->getInterserverScheme(); if (interserver_scheme != address.scheme) throw Exception("Interserver schemas are different '" + interserver_scheme + "' != '" + address.scheme + "', can't fetch part from " + address.host, ErrorCodes::LOGICAL_ERROR); part_desc->res_part = fetcher.fetchPart( metadata_snapshot, part_desc->found_new_part_name, source_replica_path, - address.host, address.replication_port, timeouts, user, password, interserver_scheme, false, TMP_PREFIX + "fetch_"); + address.host, address.replication_port, timeouts, credentials->getUser(), credentials->getPassword(), interserver_scheme, false, TMP_PREFIX + "fetch_"); /// TODO: check columns_version of fetched part @@ -2454,11 +2462,11 @@ bool StorageReplicatedMergeTree::executeReplaceRange(const LogEntry & entry) parts_to_remove = removePartsInRangeFromWorkingSet(drop_range, true, false, data_parts_lock); } - PartLog::addNewParts(global_context, res_parts, watch.elapsed()); + PartLog::addNewParts(getContext(), res_parts, watch.elapsed()); } catch (...) { - PartLog::addNewParts(global_context, res_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), res_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } @@ -3293,7 +3301,7 @@ void StorageReplicatedMergeTree::enterLeaderElection() try { leader_election = std::make_shared( - global_context.getSchedulePool(), + getContext()->getSchedulePool(), zookeeper_path + "/leader_election", *current_zookeeper, /// current_zookeeper lives for the lifetime of leader_election, /// since before changing `current_zookeeper`, `leader_election` object is destroyed in `partialShutdown` method. 
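The hunks above and below all follow one mechanical theme: the raw `global_context` member of `Context &` type is replaced by the `ContextPtr` returned from `getContext()`, which the storage inherits from a `WithContext`-style base. The following is a minimal standalone sketch of that pattern only; `Context`, `WithContext`, and `ToyStorage` here are simplified stand-ins for the real ClickHouse classes (the real `Context` lives in src/Interpreters/Context.h and carries settings, caches, ZooKeeper handles, and much more).

```cpp
#include <iostream>
#include <memory>
#include <string>

/// Simplified stand-in for DB::Context.
struct Context
{
    std::string interserver_scheme = "http";
};
using ContextPtr = std::shared_ptr<const Context>;

/// Simplified stand-in for DB::WithContext: stores a ContextPtr and exposes getContext().
class WithContext
{
public:
    explicit WithContext(ContextPtr context_) : context(std::move(context_)) {}
    ContextPtr getContext() const { return context; }

private:
    ContextPtr context;
};

/// A storage-like class no longer keeps a `Context & global_context` member;
/// it asks the base class for the pointer whenever it needs it.
class ToyStorage : public WithContext
{
public:
    explicit ToyStorage(ContextPtr ctx) : WithContext(std::move(ctx)) {}

    void fetch() const
    {
        /// Mirrors the `getContext()->getInterserverScheme()` calls in the patch.
        std::cout << "fetching over " << getContext()->interserver_scheme << "\n";
    }
};

int main()
{
    auto global = std::make_shared<Context>();
    ToyStorage storage(global);
    storage.fetch();
}
```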
@@ -3329,9 +3337,9 @@ void StorageReplicatedMergeTree::exitLeaderElection() leader_election = nullptr; } -ConnectionTimeouts StorageReplicatedMergeTree::getFetchPartHTTPTimeouts(const Context & context) +ConnectionTimeouts StorageReplicatedMergeTree::getFetchPartHTTPTimeouts(ContextPtr local_context) { - auto timeouts = ConnectionTimeouts::getHTTPTimeouts(context); + auto timeouts = ConnectionTimeouts::getHTTPTimeouts(local_context); auto settings = getSettings(); if (settings->replicated_fetches_http_connection_timeout.changed) @@ -3690,7 +3698,7 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora } } - SCOPE_EXIT + SCOPE_EXIT_MEMORY ({ std::lock_guard lock(currently_fetching_parts_mutex); currently_fetching_parts.erase(part_name); @@ -3751,8 +3759,8 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora ReplicatedMergeTreeAddress address; ConnectionTimeouts timeouts; - std::pair user_password; String interserver_scheme; + InterserverCredentialsPtr credentials; std::optional tagger_ptr; std::function get_part; @@ -3766,12 +3774,12 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora else { address.fromString(zookeeper->get(source_replica_path + "/host")); - timeouts = getFetchPartHTTPTimeouts(global_context); + timeouts = getFetchPartHTTPTimeouts(getContext()); - user_password = global_context.getInterserverCredentials(); - interserver_scheme = global_context.getInterserverScheme(); + credentials = getContext()->getInterserverCredentials(); + interserver_scheme = getContext()->getInterserverScheme(); - get_part = [&, address, timeouts, user_password, interserver_scheme]() + get_part = [&, address, timeouts, credentials, interserver_scheme]() { if (interserver_scheme != address.scheme) throw Exception("Interserver schemes are different: '" + interserver_scheme @@ -3785,8 +3793,8 @@ bool StorageReplicatedMergeTree::fetchPart(const String & part_name, const Stora address.host, address.replication_port, timeouts, - user_password.first, - user_password.second, + credentials->getUser(), + credentials->getPassword(), interserver_scheme, to_detached, "", @@ -3898,7 +3906,7 @@ bool StorageReplicatedMergeTree::fetchExistsPart(const String & part_name, const } } - SCOPE_EXIT + SCOPE_EXIT_MEMORY ({ std::lock_guard lock(currently_fetching_parts_mutex); currently_fetching_parts.erase(part_name); @@ -3923,11 +3931,11 @@ bool StorageReplicatedMergeTree::fetchExistsPart(const String & part_name, const std::function get_part; ReplicatedMergeTreeAddress address(zookeeper->get(source_replica_path + "/host")); - auto timeouts = ConnectionTimeouts::getHTTPTimeouts(global_context); - auto user_password = global_context.getInterserverCredentials(); - String interserver_scheme = global_context.getInterserverScheme(); + auto timeouts = ConnectionTimeouts::getHTTPTimeouts(getContext()); + auto credentials = getContext()->getInterserverCredentials(); + String interserver_scheme = getContext()->getInterserverScheme(); - get_part = [&, address, timeouts, user_password, interserver_scheme]() + get_part = [&, address, timeouts, interserver_scheme, credentials]() { if (interserver_scheme != address.scheme) throw Exception("Interserver schemes are different: '" + interserver_scheme @@ -3937,7 +3945,7 @@ bool StorageReplicatedMergeTree::fetchExistsPart(const String & part_name, const return fetcher.fetchPart( metadata_snapshot, part_name, source_replica_path, address.host, address.replication_port, - timeouts, user_password.first, 
user_password.second, interserver_scheme, false, "", nullptr, true, + timeouts, credentials->getUser(), credentials->getPassword(), interserver_scheme, false, "", nullptr, true, replaced_disk); }; @@ -3985,7 +3993,7 @@ void StorageReplicatedMergeTree::startup() InterserverIOEndpointPtr data_parts_exchange_ptr = std::make_shared(*this); [[maybe_unused]] auto prev_ptr = std::atomic_exchange(&data_parts_exchange_endpoint, data_parts_exchange_ptr); assert(prev_ptr == nullptr); - global_context.getInterserverIOHandler().addEndpoint(data_parts_exchange_ptr->getId(replica_path), data_parts_exchange_ptr); + getContext()->getInterserverIOHandler().addEndpoint(data_parts_exchange_ptr->getId(replica_path), data_parts_exchange_ptr); /// In this thread replica will be activated. restarting_thread.start(); @@ -4040,7 +4048,7 @@ void StorageReplicatedMergeTree::shutdown() auto data_parts_exchange_ptr = std::atomic_exchange(&data_parts_exchange_endpoint, InterserverIOEndpointPtr{}); if (data_parts_exchange_ptr) { - global_context.getInterserverIOHandler().removeEndpointIfExists(data_parts_exchange_ptr->getId(replica_path)); + getContext()->getInterserverIOHandler().removeEndpointIfExists(data_parts_exchange_ptr->getId(replica_path)); /// Ask all parts exchange handlers to finish asap. New ones will fail to start data_parts_exchange_ptr->blocker.cancelForever(); /// Wait for all of them @@ -4126,7 +4134,7 @@ void StorageReplicatedMergeTree::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /*processed_stage*/, const size_t max_block_size, const unsigned num_streams) @@ -4136,15 +4144,15 @@ void StorageReplicatedMergeTree::read( * 2. Do not read parts that have not yet been written to the quorum of the replicas. * For this you have to synchronously go to ZooKeeper. 
*/ - if (context.getSettingsRef().select_sequential_consistency) + if (local_context->getSettingsRef().select_sequential_consistency) { auto max_added_blocks = getMaxAddedBlocks(); - if (auto plan = reader.read(column_names, metadata_snapshot, query_info, context, max_block_size, num_streams, &max_added_blocks)) + if (auto plan = reader.read(column_names, metadata_snapshot, query_info, local_context, max_block_size, num_streams, &max_added_blocks)) query_plan = std::move(*plan); return; } - if (auto plan = reader.read(column_names, metadata_snapshot, query_info, context, max_block_size, num_streams)) + if (auto plan = reader.read(column_names, metadata_snapshot, query_info, local_context, max_block_size, num_streams)) query_plan = std::move(*plan); } @@ -4152,16 +4160,16 @@ Pipe StorageReplicatedMergeTree::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) { QueryPlan plan; - read(plan, column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); + read(plan, column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); return plan.convertToPipe( - QueryPlanOptimizationSettings::fromContext(context), - BuildQueryPipelineSettings::fromContext(context)); + QueryPlanOptimizationSettings::fromContext(local_context), + BuildQueryPipelineSettings::fromContext(local_context)); } @@ -4200,11 +4208,11 @@ std::optional StorageReplicatedMergeTree::totalRows(const Settings & set return res; } -std::optional StorageReplicatedMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, const Context & context) const +std::optional StorageReplicatedMergeTree::totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr local_context) const { DataPartsVector parts; - foreachCommittedParts([&](auto & part) { parts.push_back(part); }, context.getSettingsRef().select_sequential_consistency); - return totalRowsByPartitionPredicateImpl(query_info, context, parts); + foreachCommittedParts([&](auto & part) { parts.push_back(part); }, local_context->getSettingsRef().select_sequential_consistency); + return totalRowsByPartitionPredicateImpl(query_info, local_context, parts); } std::optional StorageReplicatedMergeTree::totalBytes(const Settings & settings) const @@ -4222,12 +4230,12 @@ void StorageReplicatedMergeTree::assertNotReadonly() const } -BlockOutputStreamPtr StorageReplicatedMergeTree::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageReplicatedMergeTree::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { const auto storage_settings_ptr = getSettings(); assertNotReadonly(); - const Settings & query_settings = context.getSettingsRef(); + const Settings & query_settings = local_context->getSettingsRef(); bool deduplicate = storage_settings_ptr->replicated_deduplication_window != 0 && query_settings.insert_deduplicate; // TODO: should we also somehow pass list of columns to deduplicate on to the ReplicatedMergeTreeBlockOutputStream ? 
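The `read()` hunks above keep the existing sequential-consistency behaviour: when `select_sequential_consistency` is set, the reader is given `getMaxAddedBlocks()` so that parts not yet confirmed by a quorum of replicas are excluded. The sketch below illustrates only that filtering idea; the part names and the per-partition "max added block" map are invented for the example, whereas in ClickHouse the map is derived from quorum state in ZooKeeper.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Part
{
    std::string partition_id;
    int64_t max_block;  /// highest block number contained in this part
};

int main()
{
    /// Blocks up to these numbers are known to be written to a quorum of replicas.
    std::map<std::string, int64_t> max_added_blocks = {{"2021-04", 17}, {"2021-05", 3}};

    std::vector<Part> parts = {{"2021-04", 12}, {"2021-04", 19}, {"2021-05", 2}};

    for (const auto & part : parts)
    {
        bool visible = part.max_block <= max_added_blocks[part.partition_id];
        std::cout << part.partition_id << "/" << part.max_block
                  << (visible ? " is read" : " is skipped (not yet confirmed by quorum)") << "\n";
    }
}
```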
@@ -4237,7 +4245,7 @@ BlockOutputStreamPtr StorageReplicatedMergeTree::write(const ASTPtr & /*query*/, query_settings.max_partitions_per_insert_block, query_settings.insert_quorum_parallel, deduplicate, - context.getSettingsRef().optimize_on_insert); + local_context->getSettingsRef().optimize_on_insert); } @@ -4248,11 +4256,11 @@ bool StorageReplicatedMergeTree::optimize( bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & query_context) + ContextPtr query_context) { /// NOTE: exclusive lock cannot be used here, since this may lead to deadlock (see comments below), /// but it should be safe to use non-exclusive to avoid dropping parts that may be required for processing queue. - auto table_lock = lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); + auto table_lock = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); assertNotReadonly(); @@ -4267,7 +4275,7 @@ bool StorageReplicatedMergeTree::optimize( auto handle_noop = [&] (const String & message) { - if (query_context.getSettingsRef().optimize_throw_if_noop) + if (query_context->getSettingsRef().optimize_throw_if_noop) throw Exception(message, ErrorCodes::CANNOT_ASSIGN_OPTIMIZE); return false; }; @@ -4301,7 +4309,7 @@ bool StorageReplicatedMergeTree::optimize( future_merged_part.uuid = UUIDHelpers::generateV4(); SelectPartsDecision select_decision = merger_mutator.selectAllPartsToMergeWithinPartition( - future_merged_part, disk_space, can_merge, partition_id, true, metadata_snapshot, nullptr, query_context.getSettingsRef().optimize_skip_merged_partitions); + future_merged_part, disk_space, can_merge, partition_id, true, metadata_snapshot, nullptr, query_context->getSettingsRef().optimize_skip_merged_partitions); if (select_decision != SelectPartsDecision::SELECTED) break; @@ -4352,7 +4360,7 @@ bool StorageReplicatedMergeTree::optimize( UInt64 disk_space = getStoragePolicy()->getMaxUnreservedFreeSpace(); String partition_id = getPartitionIDFromQuery(partition, query_context); select_decision = merger_mutator.selectAllPartsToMergeWithinPartition( - future_merged_part, disk_space, can_merge, partition_id, final, metadata_snapshot, &disable_reason, query_context.getSettingsRef().optimize_skip_merged_partitions); + future_merged_part, disk_space, can_merge, partition_id, final, metadata_snapshot, &disable_reason, query_context->getSettingsRef().optimize_skip_merged_partitions); } /// If there is nothing to merge then we treat this merge as successful (needed for optimize final optimization) @@ -4390,7 +4398,7 @@ bool StorageReplicatedMergeTree::optimize( } } - if (query_context.getSettingsRef().replication_alter_partitions_sync != 0) + if (query_context->getSettingsRef().replication_alter_partitions_sync != 0) { /// NOTE Table lock must not be held while waiting. Some combination of R-W-R locks from different threads will yield to deadlock. 
for (auto & merge_entry : merge_entries) @@ -4435,7 +4443,7 @@ bool StorageReplicatedMergeTree::executeMetadataAlter(const StorageReplicatedMer std::set StorageReplicatedMergeTree::getPartitionIdsAffectedByCommands( - const MutationCommands & commands, const Context & query_context) const + const MutationCommands & commands, ContextPtr query_context) const { std::set affected_partition_ids; @@ -4457,7 +4465,7 @@ std::set StorageReplicatedMergeTree::getPartitionIdsAffectedByCommands( PartitionBlockNumbersHolder StorageReplicatedMergeTree::allocateBlockNumbersInAffectedPartitions( - const MutationCommands & commands, const Context & query_context, const zkutil::ZooKeeperPtr & zookeeper) const + const MutationCommands & commands, ContextPtr query_context, const zkutil::ZooKeeperPtr & zookeeper) const { const std::set mutation_affected_partition_ids = getPartitionIdsAffectedByCommands(commands, query_context); @@ -4489,7 +4497,7 @@ PartitionBlockNumbersHolder StorageReplicatedMergeTree::allocateBlockNumbersInAf void StorageReplicatedMergeTree::alter( - const AlterCommands & commands, const Context & query_context, TableLockHolder & table_lock_holder) + const AlterCommands & commands, ContextPtr query_context, TableLockHolder & table_lock_holder) { assertNotReadonly(); @@ -4596,7 +4604,7 @@ void StorageReplicatedMergeTree::alter( alter_entry->create_time = time(nullptr); auto maybe_mutation_commands = commands.getMutationCommands( - *current_metadata, query_context.getSettingsRef().materialize_ttl_after_modify, query_context); + *current_metadata, query_context->getSettingsRef().materialize_ttl_after_modify, query_context); alter_entry->have_mutation = !maybe_mutation_commands.empty(); alter_path_idx = ops.size(); @@ -4628,7 +4636,7 @@ void StorageReplicatedMergeTree::alter( zkutil::makeCreateRequest(mutations_path + "/", mutation_entry.toString(), zkutil::CreateMode::PersistentSequential)); } - if (auto txn = query_context.getZooKeeperMetadataTransaction()) + if (auto txn = query_context->getZooKeeperMetadataTransaction()) { txn->moveOpsTo(ops); /// NOTE: IDatabase::alterTable(...) is called when executing ALTER_METADATA queue entry without query context, @@ -4684,12 +4692,12 @@ void StorageReplicatedMergeTree::alter( table_lock_holder.reset(); std::vector unwaited; - if (query_context.getSettingsRef().replication_alter_partitions_sync == 2) + if (query_context->getSettingsRef().replication_alter_partitions_sync == 2) { LOG_DEBUG(log, "Updated shared metadata nodes in ZooKeeper. Waiting for replicas to apply changes."); unwaited = waitForAllReplicasToProcessLogEntry(*alter_entry, false); } - else if (query_context.getSettingsRef().replication_alter_partitions_sync == 1) + else if (query_context->getSettingsRef().replication_alter_partitions_sync == 1) { LOG_DEBUG(log, "Updated shared metadata nodes in ZooKeeper. Waiting for replicas to apply changes."); waitForReplicaToProcessLogEntry(replica_name, *alter_entry); @@ -4701,7 +4709,7 @@ void StorageReplicatedMergeTree::alter( if (mutation_znode) { LOG_DEBUG(log, "Metadata changes applied. 
Will wait for data changes."); - waitMutation(*mutation_znode, query_context.getSettingsRef().replication_alter_partitions_sync); + waitMutation(*mutation_znode, query_context->getSettingsRef().replication_alter_partitions_sync); LOG_DEBUG(log, "Data changes applied."); } } @@ -4765,7 +4773,7 @@ bool StorageReplicatedMergeTree::getFakePartCoveringAllPartsInPartition(const St } -void StorageReplicatedMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & query_context, bool throw_if_noop) +void StorageReplicatedMergeTree::dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr query_context, bool throw_if_noop) { assertNotReadonly(); if (!is_leader) @@ -4790,9 +4798,9 @@ void StorageReplicatedMergeTree::dropPartition(const ASTPtr & partition, bool de if (did_drop) { /// If necessary, wait until the operation is performed on itself or on all replicas. - if (query_context.getSettingsRef().replication_alter_partitions_sync != 0) + if (query_context->getSettingsRef().replication_alter_partitions_sync != 0) { - if (query_context.getSettingsRef().replication_alter_partitions_sync == 1) + if (query_context->getSettingsRef().replication_alter_partitions_sync == 1) waitForReplicaToProcessLogEntry(replica_name, entry); else waitForAllReplicasToProcessLogEntry(entry); @@ -4808,7 +4816,7 @@ void StorageReplicatedMergeTree::dropPartition(const ASTPtr & partition, bool de void StorageReplicatedMergeTree::truncate( - const ASTPtr &, const StorageMetadataPtr &, const Context & query_context, TableExclusiveLockHolder & table_lock) + const ASTPtr &, const StorageMetadataPtr &, ContextPtr query_context, TableExclusiveLockHolder & table_lock) { table_lock.release(); /// Truncate is done asynchronously. @@ -4834,7 +4842,7 @@ PartitionCommandsResultInfo StorageReplicatedMergeTree::attachPartition( const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool attach_part, - const Context & query_context) + ContextPtr query_context) { assertNotReadonly(); @@ -4869,7 +4877,7 @@ PartitionCommandsResultInfo StorageReplicatedMergeTree::attachPartition( void StorageReplicatedMergeTree::checkTableCanBeDropped() const { auto table_id = getStorageID(); - global_context.checkTableCanBeDropped(table_id.database_name, table_id.table_name, getTotalActiveSizeInBytes()); + getContext()->checkTableCanBeDropped(table_id.database_name, table_id.table_name, getTotalActiveSizeInBytes()); } void StorageReplicatedMergeTree::checkTableCanBeRenamed() const @@ -5344,57 +5352,71 @@ void StorageReplicatedMergeTree::getReplicaDelays(time_t & out_absolute_delay, t } } - void StorageReplicatedMergeTree::fetchPartition( const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, const String & from_, - const Context & query_context) + bool fetch_part, + ContextPtr query_context) { Macros::MacroExpansionInfo info; info.expand_special_macros_only = false; info.table_id = getStorageID(); info.table_id.uuid = UUIDHelpers::Nil; - auto expand_from = query_context.getMacros()->expand(from_, info); + auto expand_from = query_context->getMacros()->expand(from_, info); String auxiliary_zookeeper_name = extractZooKeeperName(expand_from); String from = extractZooKeeperPath(expand_from); if (from.empty()) throw Exception("ZooKeeper path should not be empty", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); - String partition_id = getPartitionIDFromQuery(partition, query_context); zkutil::ZooKeeperPtr zookeeper; if (auxiliary_zookeeper_name != default_zookeeper_name) - { - 
zookeeper = global_context.getAuxiliaryZooKeeper(auxiliary_zookeeper_name); - - LOG_INFO(log, "Will fetch partition {} from shard {} (auxiliary zookeeper '{}')", partition_id, from_, auxiliary_zookeeper_name); - } + zookeeper = getContext()->getAuxiliaryZooKeeper(auxiliary_zookeeper_name); else - { zookeeper = getZooKeeper(); - LOG_INFO(log, "Will fetch partition {} from shard {}", partition_id, from_); - } - if (from.back() == '/') from.resize(from.size() - 1); + if (fetch_part) + { + String part_name = partition->as().value.safeGet(); + auto part_path = findReplicaHavingPart(part_name, from, zookeeper); + + if (part_path.empty()) + throw Exception(ErrorCodes::NO_REPLICA_HAS_PART, "Part {} does not exist on any replica", part_name); + /** Let's check that there is no such part in the `detached` directory (where we will write the downloaded parts). + * Unreliable (there is a race condition) - such a part may appear a little later. + */ + if (checkIfDetachedPartExists(part_name)) + throw Exception(ErrorCodes::DUPLICATE_DATA_PART, "Detached part " + part_name + " already exists."); + LOG_INFO(log, "Will fetch part {} from shard {} (zookeeper '{}')", part_name, from_, auxiliary_zookeeper_name); + + try + { + /// part name , metadata, part_path , true, 0, zookeeper + if (!fetchPart(part_name, metadata_snapshot, part_path, true, 0, zookeeper)) + throw Exception(ErrorCodes::UNFINISHED, "Failed to fetch part {} from {}", part_name, from_); + } + catch (const DB::Exception & e) + { + if (e.code() != ErrorCodes::RECEIVED_ERROR_FROM_REMOTE_IO_SERVER && e.code() != ErrorCodes::RECEIVED_ERROR_TOO_MANY_REQUESTS + && e.code() != ErrorCodes::CANNOT_READ_ALL_DATA) + throw; + + LOG_INFO(log, e.displayText()); + } + return; + } + + String partition_id = getPartitionIDFromQuery(partition, query_context); + LOG_INFO(log, "Will fetch partition {} from shard {} (zookeeper '{}')", partition_id, from_, auxiliary_zookeeper_name); /** Let's check that there is no such partition in the `detached` directory (where we will write the downloaded parts). * Unreliable (there is a race condition) - such a partition may appear a little later. */ - Poco::DirectoryIterator dir_end; - for (const std::string & path : getDataPaths()) - { - for (Poco::DirectoryIterator dir_it{path + "detached/"}; dir_it != dir_end; ++dir_it) - { - MergeTreePartInfo part_info; - if (MergeTreePartInfo::tryParsePartName(dir_it.name(), &part_info, format_version) - && part_info.partition_id == partition_id) - throw Exception("Detached partition " + partition_id + " already exists.", ErrorCodes::PARTITION_ALREADY_EXISTS); - } - - } + if (checkIfDetachedPartitionExists(partition_id)) + throw Exception("Detached partition " + partition_id + " already exists.", ErrorCodes::PARTITION_ALREADY_EXISTS); zkutil::Strings replicas; zkutil::Strings active_replicas; @@ -5466,7 +5488,7 @@ void StorageReplicatedMergeTree::fetchPartition( if (try_no) LOG_INFO(log, "Some of parts ({}) are missing. 
Will try to fetch covering parts.", missing_parts.size()); - if (try_no >= query_context.getSettings().max_fetch_partition_retries_count) + if (try_no >= query_context->getSettings().max_fetch_partition_retries_count) throw Exception("Too many retries to fetch parts from " + best_replica_path, ErrorCodes::TOO_MANY_RETRIES_TO_FETCH_PARTS); Strings parts = zookeeper->getChildren(best_replica_path + "/parts"); @@ -5531,7 +5553,7 @@ void StorageReplicatedMergeTree::fetchPartition( } -void StorageReplicatedMergeTree::mutate(const MutationCommands & commands, const Context & query_context) +void StorageReplicatedMergeTree::mutate(const MutationCommands & commands, ContextPtr query_context) { /// Overview of the mutation algorithm. /// @@ -5615,7 +5637,7 @@ void StorageReplicatedMergeTree::mutate(const MutationCommands & commands, const requests.emplace_back(zkutil::makeCreateRequest( mutations_path + "/", mutation_entry.toString(), zkutil::CreateMode::PersistentSequential)); - if (auto txn = query_context.getZooKeeperMetadataTransaction()) + if (auto txn = query_context->getZooKeeperMetadataTransaction()) txn->moveOpsTo(requests); Coordination::Responses responses; @@ -5640,7 +5662,7 @@ void StorageReplicatedMergeTree::mutate(const MutationCommands & commands, const throw Coordination::Exception("Unable to create a mutation znode", rc); } - waitMutation(mutation_entry.znode_name, query_context.getSettingsRef().mutations_sync); + waitMutation(mutation_entry.znode_name, query_context->getSettingsRef().mutations_sync); } void StorageReplicatedMergeTree::waitMutation(const String & znode_name, size_t mutations_sync) const @@ -5684,11 +5706,61 @@ CancellationCode StorageReplicatedMergeTree::killMutation(const String & mutatio { const String & partition_id = pair.first; Int64 block_number = pair.second; - global_context.getMergeList().cancelPartMutations(partition_id, block_number); + getContext()->getMergeList().cancelPartMutations(partition_id, block_number); } return CancellationCode::CancelSent; } +void StorageReplicatedMergeTree::removePartsFromFilesystem(const DataPartsVector & parts) +{ + auto remove_part = [&](const auto & part) + { + LOG_DEBUG(log, "Removing part from filesystem {}", part.name); + try + { + bool keep_s3 = !this->unlockSharedData(part); + part.remove(keep_s3); + } + catch (...) + { + tryLogCurrentException(log, "There is a problem with deleting part " + part.name + " from filesystem"); + } + }; + + const auto settings = getSettings(); + if (settings->max_part_removal_threads > 1 && parts.size() > settings->concurrent_part_removal_threshold) + { + /// Parallel parts removal. + + size_t num_threads = std::min(settings->max_part_removal_threads, parts.size()); + ThreadPool pool(num_threads); + + /// NOTE: Under heavy system load you may get "Cannot schedule a task" from ThreadPool. 
+ for (const DataPartPtr & part : parts) + { + pool.scheduleOrThrowOnError([&, thread_group = CurrentThread::getGroup()] + { + SCOPE_EXIT_SAFE( + if (thread_group) + CurrentThread::detachQueryIfNotDetached(); + ); + if (thread_group) + CurrentThread::attachTo(thread_group); + + remove_part(*part); + }); + } + + pool.wait(); + } + else + { + for (const DataPartPtr & part : parts) + { + remove_part(*part); + } + } +} void StorageReplicatedMergeTree::clearOldPartsAndRemoveFromZK() { @@ -5714,26 +5786,10 @@ void StorageReplicatedMergeTree::clearOldPartsAndRemoveFromZK() } parts.clear(); - auto remove_parts_from_filesystem = [log=log, this] (const DataPartsVector & parts_to_remove) - { - for (const auto & part : parts_to_remove) - { - try - { - bool keep_s3 = !this->unlockSharedData(*part); - part->remove(keep_s3); - } - catch (...) - { - tryLogCurrentException(log, "There is a problem with deleting part " + part->name + " from filesystem"); - } - } - }; - /// Delete duplicate parts from filesystem if (!parts_to_delete_only_from_filesystem.empty()) { - remove_parts_from_filesystem(parts_to_delete_only_from_filesystem); + removePartsFromFilesystem(parts_to_delete_only_from_filesystem); removePartsFinally(parts_to_delete_only_from_filesystem); LOG_DEBUG(log, "Removed {} old duplicate parts", parts_to_delete_only_from_filesystem.size()); @@ -5778,7 +5834,7 @@ void StorageReplicatedMergeTree::clearOldPartsAndRemoveFromZK() /// Remove parts from filesystem and finally from data_parts if (!parts_to_remove_from_filesystem.empty()) { - remove_parts_from_filesystem(parts_to_remove_from_filesystem); + removePartsFromFilesystem(parts_to_remove_from_filesystem); removePartsFinally(parts_to_remove_from_filesystem); LOG_DEBUG(log, "Removed {} old parts", parts_to_remove_from_filesystem.size()); @@ -5983,18 +6039,18 @@ void StorageReplicatedMergeTree::clearBlocksInPartition( } void StorageReplicatedMergeTree::replacePartitionFrom( - const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & context) + const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr query_context) { /// First argument is true, because we possibly will add new data to current table. 
- auto lock1 = lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); - auto lock2 = source_table->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + auto lock1 = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); + auto lock2 = source_table->lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); auto source_metadata_snapshot = source_table->getInMemoryMetadataPtr(); auto metadata_snapshot = getInMemoryMetadataPtr(); Stopwatch watch; MergeTreeData & src_data = checkStructureAndGetMergeTreeData(source_table, source_metadata_snapshot, metadata_snapshot); - String partition_id = getPartitionIDFromQuery(partition, context); + String partition_id = getPartitionIDFromQuery(partition, query_context); DataPartsVector src_all_parts = src_data.getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); DataPartsVector src_parts; @@ -6120,7 +6176,7 @@ void StorageReplicatedMergeTree::replacePartitionFrom( } } - if (auto txn = context.getZooKeeperMetadataTransaction()) + if (auto txn = query_context->getZooKeeperMetadataTransaction()) txn->moveOpsTo(ops); ops.emplace_back(zkutil::makeSetRequest(zookeeper_path + "/log", "", -1)); /// Just update version @@ -6144,11 +6200,11 @@ void StorageReplicatedMergeTree::replacePartitionFrom( parts_to_remove = removePartsInRangeFromWorkingSet(drop_range, true, false, data_parts_lock); } - PartLog::addNewParts(global_context, dst_parts, watch.elapsed()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed()); } catch (...) { - PartLog::addNewParts(global_context, dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } @@ -6166,7 +6222,7 @@ void StorageReplicatedMergeTree::replacePartitionFrom( cleanup_thread.wakeup(); /// If necessary, wait until the operation is performed on all replicas. - if (context.getSettingsRef().replication_alter_partitions_sync > 1) + if (query_context->getSettingsRef().replication_alter_partitions_sync > 1) { lock2.reset(); lock1.reset(); @@ -6174,10 +6230,10 @@ void StorageReplicatedMergeTree::replacePartitionFrom( } } -void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & query_context) +void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr query_context) { - auto lock1 = lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); - auto lock2 = dest_table->lockForShare(query_context.getCurrentQueryId(), query_context.getSettingsRef().lock_acquire_timeout); + auto lock1 = lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); + auto lock2 = dest_table->lockForShare(query_context->getCurrentQueryId(), query_context->getSettingsRef().lock_acquire_timeout); auto dest_table_storage = std::dynamic_pointer_cast(dest_table); if (!dest_table_storage) @@ -6330,11 +6386,11 @@ void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_ta transaction.commit(&lock); } - PartLog::addNewParts(global_context, dst_parts, watch.elapsed()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed()); } catch (...) 
{ - PartLog::addNewParts(global_context, dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); + PartLog::addNewParts(getContext(), dst_parts, watch.elapsed(), ExecutionStatus::fromCurrentException()); throw; } @@ -6349,7 +6405,7 @@ void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_ta parts_to_remove.clear(); cleanup_thread.wakeup(); - if (query_context.getSettingsRef().replication_alter_partitions_sync > 1) + if (query_context->getSettingsRef().replication_alter_partitions_sync > 1) { lock2.reset(); dest_table_storage->waitForAllReplicasToProcessLogEntry(entry); @@ -6366,7 +6422,7 @@ void StorageReplicatedMergeTree::movePartitionToTable(const StoragePtr & dest_ta log_znode_path = dynamic_cast(*op_results.front()).path_created; entry_delete.znode_name = log_znode_path.substr(log_znode_path.find_last_of('/') + 1); - if (query_context.getSettingsRef().replication_alter_partitions_sync > 1) + if (query_context->getSettingsRef().replication_alter_partitions_sync > 1) { lock1.reset(); waitForAllReplicasToProcessLogEntry(entry_delete); @@ -6421,16 +6477,16 @@ void StorageReplicatedMergeTree::getCommitPartOps( ReplicatedMergeTreeAddress StorageReplicatedMergeTree::getReplicatedMergeTreeAddress() const { - auto host_port = global_context.getInterserverIOAddress(); + auto host_port = getContext()->getInterserverIOAddress(); auto table_id = getStorageID(); ReplicatedMergeTreeAddress res; res.host = host_port.first; res.replication_port = host_port.second; - res.queries_port = global_context.getTCPPort(); + res.queries_port = getContext()->getTCPPort(); res.database = table_id.database_name; res.table = table_id.table_name; - res.scheme = global_context.getInterserverScheme(); + res.scheme = getContext()->getInterserverScheme(); return res; } @@ -6591,7 +6647,7 @@ bool StorageReplicatedMergeTree::dropPart( } bool StorageReplicatedMergeTree::dropAllPartsInPartition( - zkutil::ZooKeeper & zookeeper, String & partition_id, LogEntry & entry, const Context & query_context, bool detach) + zkutil::ZooKeeper & zookeeper, String & partition_id, LogEntry & entry, ContextPtr query_context, bool detach) { MergeTreePartInfo drop_range_info; if (!getFakePartCoveringAllPartsInPartition(partition_id, drop_range_info)) @@ -6623,7 +6679,7 @@ bool StorageReplicatedMergeTree::dropAllPartsInPartition( Coordination::Requests ops; ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/log/log-", entry.toString(), zkutil::CreateMode::PersistentSequential)); ops.emplace_back(zkutil::makeSetRequest(zookeeper_path + "/log", "", -1)); /// Just update version. 
- if (auto txn = query_context.getZooKeeperMetadataTransaction()) + if (auto txn = query_context->getZooKeeperMetadataTransaction()) txn->moveOpsTo(ops); Coordination::Responses responses = zookeeper.multi(ops); @@ -6634,13 +6690,13 @@ bool StorageReplicatedMergeTree::dropAllPartsInPartition( } -CheckResults StorageReplicatedMergeTree::checkData(const ASTPtr & query, const Context & context) +CheckResults StorageReplicatedMergeTree::checkData(const ASTPtr & query, ContextPtr local_context) { CheckResults results; DataPartsVector data_parts; if (const auto & check_query = query->as(); check_query.partition) { - String partition_id = getPartitionIDFromQuery(check_query.partition, context); + String partition_id = getPartitionIDFromQuery(check_query.partition, local_context); data_parts = getDataPartsVectorInPartition(MergeTreeDataPartState::Committed, partition_id); } else @@ -6867,4 +6923,46 @@ String StorageReplicatedMergeTree::getSharedDataReplica( return best_replica; } +String StorageReplicatedMergeTree::findReplicaHavingPart( + const String & part_name, const String & zookeeper_path_, zkutil::ZooKeeper::Ptr zookeeper_) +{ + Strings replicas = zookeeper_->getChildren(zookeeper_path_ + "/replicas"); + + /// Select replicas in uniformly random order. + std::shuffle(replicas.begin(), replicas.end(), thread_local_rng); + + for (const String & replica : replicas) + { + if (zookeeper_->exists(zookeeper_path_ + "/replicas/" + replica + "/parts/" + part_name) + && zookeeper_->exists(zookeeper_path_ + "/replicas/" + replica + "/is_active")) + return zookeeper_path_ + "/replicas/" + replica; + } + + return {}; +} + +bool StorageReplicatedMergeTree::checkIfDetachedPartExists(const String & part_name) +{ + Poco::DirectoryIterator dir_end; + for (const std::string & path : getDataPaths()) + for (Poco::DirectoryIterator dir_it{path + "detached/"}; dir_it != dir_end; ++dir_it) + if (dir_it.name() == part_name) + return true; + return false; +} + +bool StorageReplicatedMergeTree::checkIfDetachedPartitionExists(const String & partition_name) +{ + Poco::DirectoryIterator dir_end; + for (const std::string & path : getDataPaths()) + { + for (Poco::DirectoryIterator dir_it{path + "detached/"}; dir_it != dir_end; ++dir_it) + { + MergeTreePartInfo part_info; + if (MergeTreePartInfo::tryParsePartName(dir_it.name(), &part_info, format_version) && part_info.partition_id == partition_name) + return true; + } + } + return false; +} } diff --git a/src/Storages/StorageReplicatedMergeTree.h b/src/Storages/StorageReplicatedMergeTree.h index 0c8aca18c6a..c70556f40df 100644 --- a/src/Storages/StorageReplicatedMergeTree.h +++ b/src/Storages/StorageReplicatedMergeTree.h @@ -96,7 +96,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -106,16 +106,16 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; std::optional totalRows(const Settings & settings) const override; - std::optional totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, const Context & context) const override; + std::optional totalRowsByPartitionPredicate(const SelectQueryInfo & query_info, ContextPtr context) const 
override; std::optional totalBytes(const Settings & settings) const override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; bool optimize( const ASTPtr & query, @@ -124,11 +124,11 @@ public: bool final, bool deduplicate, const Names & deduplicate_by_columns, - const Context & query_context) override; + ContextPtr query_context) override; - void alter(const AlterCommands & commands, const Context & query_context, TableLockHolder & table_lock_holder) override; + void alter(const AlterCommands & commands, ContextPtr query_context, TableLockHolder & table_lock_holder) override; - void mutate(const MutationCommands & commands, const Context & context) override; + void mutate(const MutationCommands & commands, ContextPtr context) override; void waitMutation(const String & znode_name, size_t mutations_sync) const; std::vector getMutationsStatus() const override; CancellationCode killMutation(const String & mutation_id) override; @@ -137,7 +137,7 @@ public: */ void drop() override; - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context & query_context, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr query_context, TableExclusiveLockHolder &) override; void checkTableCanBeRenamed() const override; @@ -197,7 +197,7 @@ public: part_check_thread.enqueuePart(part_name, delay_to_check_seconds); } - CheckResults checkData(const ASTPtr & query, const Context & context) override; + CheckResults checkData(const ASTPtr & query, ContextPtr context) override; /// Checks ability to use granularity bool canUseAdaptiveGranularity() const override; @@ -208,6 +208,10 @@ public: */ static void dropReplica(zkutil::ZooKeeperPtr zookeeper, const String & zookeeper_path, const String & replica, Poco::Logger * logger); + /// Removes table from ZooKeeper after the last replica was dropped + static bool removeTableNodesFromZooKeeper(zkutil::ZooKeeperPtr zookeeper, const String & zookeeper_path, + const zkutil::EphemeralNodeHolder::Ptr & metadata_drop_lock, Poco::Logger * logger); + /// Get job to execute in background pool (merge, mutate, drop range and so on) std::optional getDataProcessingJob() override; @@ -408,6 +412,8 @@ private: /// Just removes part from ZooKeeper using previous method void removePartFromZooKeeper(const String & part_name); + void removePartsFromFilesystem(const DataPartsVector & parts); + /// Quickly removes big set of parts from ZooKeeper (using async multi queries) void removePartsFromZooKeeper(zkutil::ZooKeeperPtr & zookeeper, const Strings & part_names, NameSet * parts_should_be_retried = nullptr); @@ -515,13 +521,16 @@ private: /// Exchange parts. - ConnectionTimeouts getFetchPartHTTPTimeouts(const Context & context); + ConnectionTimeouts getFetchPartHTTPTimeouts(ContextPtr context); /** Returns an empty string if no one has a part. 
*/ String findReplicaHavingPart(const String & part_name, bool active); + static String findReplicaHavingPart(const String & part_name, const String & zookeeper_path_, zkutil::ZooKeeper::Ptr zookeeper_); bool checkReplicaHavePart(const String & replica, const String & part_name); + bool checkIfDetachedPartExists(const String & part_name); + bool checkIfDetachedPartitionExists(const String & partition_name); /** Find replica having specified part or any part that covers it. * If active = true, consider only active replicas. @@ -617,14 +626,19 @@ private: bool dropPart(zkutil::ZooKeeperPtr & zookeeper, String part_name, LogEntry & entry, bool detach, bool throw_if_noop); bool dropAllPartsInPartition( - zkutil::ZooKeeper & zookeeper, String & partition_id, LogEntry & entry, const Context & query_context, bool detach); + zkutil::ZooKeeper & zookeeper, String & partition_id, LogEntry & entry, ContextPtr query_context, bool detach); // Partition helpers - void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, const Context & query_context, bool throw_if_noop) override; - PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, const Context & query_context) override; - void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, const Context & query_context) override; - void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, const Context & query_context) override; - void fetchPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, const String & from, const Context & query_context) override; + void dropPartition(const ASTPtr & partition, bool detach, bool drop_part, ContextPtr query_context, bool throw_if_noop) override; + PartitionCommandsResultInfo attachPartition(const ASTPtr & partition, const StorageMetadataPtr & metadata_snapshot, bool part, ContextPtr query_context) override; + void replacePartitionFrom(const StoragePtr & source_table, const ASTPtr & partition, bool replace, ContextPtr query_context) override; + void movePartitionToTable(const StoragePtr & dest_table, const ASTPtr & partition, ContextPtr query_context) override; + void fetchPartition( + const ASTPtr & partition, + const StorageMetadataPtr & metadata_snapshot, + const String & from, + bool fetch_part, + ContextPtr query_context) override; /// Check granularity of already existing replicated table in zookeeper if it exists /// return true if it's fixed @@ -638,9 +652,9 @@ private: void startBackgroundMovesIfNeeded() override; - std::set getPartitionIdsAffectedByCommands(const MutationCommands & commands, const Context & query_context) const; + std::set getPartitionIdsAffectedByCommands(const MutationCommands & commands, ContextPtr query_context) const; PartitionBlockNumbersHolder allocateBlockNumbersInAffectedPartitions( - const MutationCommands & commands, const Context & query_context, const zkutil::ZooKeeperPtr & zookeeper) const; + const MutationCommands & commands, ContextPtr query_context, const zkutil::ZooKeeperPtr & zookeeper) const; protected: /** If not 'attach', either creates a new table in ZK, or adds a replica to an existing table. 
@@ -652,7 +666,7 @@ protected: const StorageID & table_id_, const String & relative_data_path_, const StorageInMemoryMetadata & metadata_, - Context & context_, + ContextPtr context_, const String & date_column_name, const MergingParams & merging_params_, std::unique_ptr settings_, diff --git a/src/Storages/StorageS3.cpp b/src/Storages/StorageS3.cpp index 6a6a9b8b7b9..8a42caf41b1 100644 --- a/src/Storages/StorageS3.cpp +++ b/src/Storages/StorageS3.cpp @@ -45,244 +45,300 @@ namespace ErrorCodes extern const int UNEXPECTED_EXPRESSION; extern const int S3_ERROR; } - - -namespace +class StorageS3Source::DisclosedGlobIterator::Impl { - class StorageS3Source : public SourceWithProgress + +public: + Impl(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) + : client(client_), globbed_uri(globbed_uri_) { - public: + std::lock_guard lock(mutex); - static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column) + if (globbed_uri.bucket.find_first_of("*?{") != globbed_uri.bucket.npos) + throw Exception("Expression can not have wildcards inside bucket name", ErrorCodes::UNEXPECTED_EXPRESSION); + + const String key_prefix = globbed_uri.key.substr(0, globbed_uri.key.find_first_of("*?{")); + + /// We don't have to list bucket, because there is no asterics. + if (key_prefix.size() == globbed_uri.key.size()) { - if (with_path_column) - sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_path"}); - if (with_file_column) - sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_file"}); - - return sample_block; + buffer.emplace_back(globbed_uri.key); + buffer_iter = buffer.begin(); + is_finished = true; + return; } - StorageS3Source( - bool need_path, - bool need_file, - const String & format, - String name_, - const Block & sample_block, - const Context & context, - const ColumnsDescription & columns, - UInt64 max_block_size, - const CompressionMethod compression_method, - const std::shared_ptr & client, - const String & bucket, - const String & key) - : SourceWithProgress(getHeader(sample_block, need_path, need_file)) - , name(std::move(name_)) - , with_file_column(need_file) - , with_path_column(need_path) - , file_path(bucket + "/" + key) - { - read_buf = wrapReadBufferWithCompressionMethod(std::make_unique(client, bucket, key), compression_method); - auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size); - reader = std::make_shared(input_format); + request.SetBucket(globbed_uri.bucket); + request.SetPrefix(key_prefix); + matcher = std::make_unique(makeRegexpPatternFromGlobs(globbed_uri.key)); + fillInternalBufferAssumeLocked(); + } - if (columns.hasDefaults()) - reader = std::make_shared(reader, columns, context); + String next() + { + std::lock_guard lock(mutex); + return nextAssumeLocked(); + } + +private: + + String nextAssumeLocked() + { + if (buffer_iter != buffer.end()) + { + auto answer = *buffer_iter; + ++buffer_iter; + return answer; } - String getName() const override - { - return name; - } - - Chunk generate() override - { - if (!reader) - return {}; - - if (!initialized) - { - reader->readSuffix(); - initialized = true; - } - - if (auto block = reader->read()) - { - auto columns = block.getColumns(); - UInt64 num_rows = block.rows(); - - if (with_path_column) - columns.push_back(DataTypeString().createColumnConst(num_rows, file_path)->convertToFullColumnIfConst()); - if (with_file_column) - { - size_t last_slash_pos = file_path.find_last_of('/'); - 
columns.push_back(DataTypeString().createColumnConst(num_rows, file_path.substr( - last_slash_pos + 1))->convertToFullColumnIfConst()); - } - - return Chunk(std::move(columns), num_rows); - } - - reader.reset(); - + if (is_finished) return {}; - } - private: - String name; - std::unique_ptr read_buf; - BlockInputStreamPtr reader; - bool initialized = false; - bool with_file_column = false; - bool with_path_column = false; - String file_path; - }; + fillInternalBufferAssumeLocked(); - class StorageS3BlockOutputStream : public IBlockOutputStream + return nextAssumeLocked(); + } + + void fillInternalBufferAssumeLocked() { - public: - StorageS3BlockOutputStream( - const String & format, - const Block & sample_block_, - const Context & context, - const CompressionMethod compression_method, - const std::shared_ptr & client, - const String & bucket, - const String & key, - size_t min_upload_part_size, - size_t max_single_part_upload_size) - : sample_block(sample_block_) - { - write_buf = wrapWriteBufferWithCompressionMethod( - std::make_unique(client, bucket, key, min_upload_part_size, max_single_part_upload_size), compression_method, 3); - writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, *write_buf, sample_block, context); - } + buffer.clear(); - Block getHeader() const override - { - return sample_block; - } + outcome = client.ListObjectsV2(request); + if (!outcome.IsSuccess()) + throw Exception(ErrorCodes::S3_ERROR, "Could not list objects in bucket {} with prefix {}, S3 exception: {}, message: {}", + quoteString(request.GetBucket()), quoteString(request.GetPrefix()), + backQuote(outcome.GetError().GetExceptionName()), quoteString(outcome.GetError().GetMessage())); - void write(const Block & block) override - { - writer->write(block); - } + const auto & result_batch = outcome.GetResult().GetContents(); - void writePrefix() override + buffer.reserve(result_batch.size()); + for (const auto & row : result_batch) { - writer->writePrefix(); + String key = row.GetKey(); + if (re2::RE2::FullMatch(key, *matcher)) + buffer.emplace_back(std::move(key)); } + /// Set iterator only after the whole batch is processed + buffer_iter = buffer.begin(); - void flush() override - { - writer->flush(); - } + request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken()); - void writeSuffix() override - { - writer->writeSuffix(); - writer->flush(); - write_buf->finalize(); - } + /// It returns false when all objects were returned + is_finished = !outcome.GetResult().GetIsTruncated(); + } - private: - Block sample_block; - std::unique_ptr write_buf; - BlockOutputStreamPtr writer; - }; + std::mutex mutex; + Strings buffer; + Strings::iterator buffer_iter; + Aws::S3::S3Client client; + S3::URI globbed_uri; + Aws::S3::Model::ListObjectsV2Request request; + Aws::S3::Model::ListObjectsV2Outcome outcome; + std::unique_ptr matcher; + bool is_finished{false}; +}; + +StorageS3Source::DisclosedGlobIterator::DisclosedGlobIterator(Aws::S3::S3Client & client_, const S3::URI & globbed_uri_) + : pimpl(std::make_shared(client_, globbed_uri_)) {} + +String StorageS3Source::DisclosedGlobIterator::next() +{ + return pimpl->next(); } +Block StorageS3Source::getHeader(Block sample_block, bool with_path_column, bool with_file_column) +{ + if (with_path_column) + sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_path"}); + if (with_file_column) + sample_block.insert({DataTypeString().createColumn(), std::make_shared(), "_file"}); + + return sample_block; +} + 
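The `DisclosedGlobIterator` above expands the glob in the S3 key into a regular expression (via `makeRegexpPatternFromGlobs`) and filters the keys returned by paginated `ListObjectsV2` requests against it. Below is a rough, self-contained sketch of just the glob-to-regexp idea, using `std::regex` instead of re2; `globToRegexp` is a deliberately simplified stand-in (the real helper also handles numeric ranges such as `{1..10}`), and the key names are made up for the example.

```cpp
#include <iostream>
#include <regex>
#include <string>
#include <vector>

/// Very simplified glob-to-regexp conversion: `*` and `?` match within one path
/// segment, `{a,b,c}` becomes an alternation, other regex metacharacters are escaped.
std::string globToRegexp(const std::string & glob)
{
    std::string re;
    for (char c : glob)
    {
        switch (c)
        {
            case '*': re += "[^/]*"; break;
            case '?': re += "[^/]"; break;
            case '{': re += "("; break;
            case '}': re += ")"; break;
            case ',': re += "|"; break;
            case '.': case '+': case '(': case ')':
            case '^': case '$': case '|': case '\\':
                re += '\\'; re += c; break;
            default: re += c;
        }
    }
    return re;
}

int main()
{
    /// Keys as they might come back from paginated ListObjectsV2 calls.
    std::vector<std::string> keys = {"data/part-0.csv", "data/part-1.csv", "data/readme.txt"};
    std::regex matcher(globToRegexp("data/part-{0,1}.csv"));

    for (const auto & key : keys)
        if (std::regex_match(key, matcher))
            std::cout << key << " matches\n";
}
```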
+StorageS3Source::StorageS3Source( + bool need_path, + bool need_file, + const String & format_, + String name_, + const Block & sample_block_, + ContextPtr context_, + const ColumnsDescription & columns_, + UInt64 max_block_size_, + UInt64 s3_max_single_read_retries_, + const String compression_hint_, + const std::shared_ptr & client_, + const String & bucket_, + std::shared_ptr file_iterator_) + : SourceWithProgress(getHeader(sample_block_, need_path, need_file)) + , WithContext(context_) + , name(std::move(name_)) + , bucket(bucket_) + , format(format_) + , columns_desc(columns_) + , max_block_size(max_block_size_) + , s3_max_single_read_retries(s3_max_single_read_retries_) + , compression_hint(compression_hint_) + , client(client_) + , sample_block(sample_block_) + , with_file_column(need_file) + , with_path_column(need_path) + , file_iterator(file_iterator_) +{ + initialize(); +} + + +bool StorageS3Source::initialize() +{ + String current_key = (*file_iterator)(); + if (current_key.empty()) + return false; + + file_path = bucket + "/" + current_key; + + read_buf = wrapReadBufferWithCompressionMethod( + std::make_unique(client, bucket, current_key, s3_max_single_read_retries), chooseCompressionMethod(current_key, compression_hint)); + auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, getContext(), max_block_size); + reader = std::make_shared(input_format); + + if (columns_desc.hasDefaults()) + reader = std::make_shared(reader, columns_desc, getContext()); + + initialized = false; + return true; +} + +String StorageS3Source::getName() const +{ + return name; +} + +Chunk StorageS3Source::generate() +{ + if (!reader) + return {}; + + if (!initialized) + { + reader->readPrefix(); + initialized = true; + } + + if (auto block = reader->read()) + { + auto columns = block.getColumns(); + UInt64 num_rows = block.rows(); + + if (with_path_column) + columns.push_back(DataTypeString().createColumnConst(num_rows, file_path)->convertToFullColumnIfConst()); + if (with_file_column) + { + size_t last_slash_pos = file_path.find_last_of('/'); + columns.push_back(DataTypeString().createColumnConst(num_rows, file_path.substr( + last_slash_pos + 1))->convertToFullColumnIfConst()); + } + + return Chunk(std::move(columns), num_rows); + } + + reader->readSuffix(); + reader.reset(); + read_buf.reset(); + + if (!initialize()) + return {}; + + return generate(); +} + + +class StorageS3BlockOutputStream : public IBlockOutputStream +{ +public: + StorageS3BlockOutputStream( + const String & format, + const Block & sample_block_, + ContextPtr context, + const CompressionMethod compression_method, + const std::shared_ptr & client, + const String & bucket, + const String & key, + size_t min_upload_part_size, + size_t max_single_part_upload_size) + : sample_block(sample_block_) + { + write_buf = wrapWriteBufferWithCompressionMethod( + std::make_unique(client, bucket, key, min_upload_part_size, max_single_part_upload_size), compression_method, 3); + writer = FormatFactory::instance().getOutputStreamParallelIfPossible(format, *write_buf, sample_block, context); + } + + Block getHeader() const override + { + return sample_block; + } + + void write(const Block & block) override + { + writer->write(block); + } + + void writePrefix() override + { + writer->writePrefix(); + } + + void flush() override + { + writer->flush(); + } + + void writeSuffix() override + { + writer->writeSuffix(); + writer->flush(); + write_buf->finalize(); + } + +private: + Block sample_block; + std::unique_ptr 
write_buf; + BlockOutputStreamPtr writer; +}; + + StorageS3::StorageS3( const S3::URI & uri_, const String & access_key_id_, const String & secret_access_key_, const StorageID & table_id_, const String & format_name_, + UInt64 s3_max_single_read_retries_, UInt64 min_upload_part_size_, UInt64 max_single_part_upload_size_, UInt64 max_connections_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, - const String & compression_method_) + ContextPtr context_, + const String & compression_method_, + bool distributed_processing_) : IStorage(table_id_) - , uri(uri_) - , access_key_id(access_key_id_) - , secret_access_key(secret_access_key_) - , max_connections(max_connections_) - , global_context(context_.getGlobalContext()) + , client_auth{uri_, access_key_id_, secret_access_key_, max_connections_, {}, {}} /// Client and settings will be updated later , format_name(format_name_) + , s3_max_single_read_retries(s3_max_single_read_retries_) , min_upload_part_size(min_upload_part_size_) , max_single_part_upload_size(max_single_part_upload_size_) , compression_method(compression_method_) , name(uri_.storage_name) + , distributed_processing(distributed_processing_) { - global_context.getRemoteHostFilter().checkURL(uri_.uri); + context_->getGlobalContext()->getRemoteHostFilter().checkURL(uri_.uri); StorageInMemoryMetadata storage_metadata; storage_metadata.setColumns(columns_); storage_metadata.setConstraints(constraints_); setInMemoryMetadata(storage_metadata); - updateAuthSettings(context_); -} - - -namespace -{ - /* "Recursive" directory listing with matched paths as a result. - * Have the same method in StorageFile. - */ -Strings listFilesWithRegexpMatching(Aws::S3::S3Client & client, const S3::URI & globbed_uri) -{ - if (globbed_uri.bucket.find_first_of("*?{") != globbed_uri.bucket.npos) - { - throw Exception("Expression can not have wildcards inside bucket name", ErrorCodes::UNEXPECTED_EXPRESSION); - } - - const String key_prefix = globbed_uri.key.substr(0, globbed_uri.key.find_first_of("*?{")); - if (key_prefix.size() == globbed_uri.key.size()) - { - return {globbed_uri.key}; - } - - Aws::S3::Model::ListObjectsV2Request request; - request.SetBucket(globbed_uri.bucket); - request.SetPrefix(key_prefix); - - re2::RE2 matcher(makeRegexpPatternFromGlobs(globbed_uri.key)); - Strings result; - Aws::S3::Model::ListObjectsV2Outcome outcome; - int page = 0; - do - { - ++page; - outcome = client.ListObjectsV2(request); - if (!outcome.IsSuccess()) - { - if (page > 1) - throw Exception(ErrorCodes::S3_ERROR, "Could not list objects in bucket {} with prefix {}, page {}, S3 exception: {}, message: {}", - quoteString(request.GetBucket()), quoteString(request.GetPrefix()), page, - backQuote(outcome.GetError().GetExceptionName()), quoteString(outcome.GetError().GetMessage())); - - throw Exception(ErrorCodes::S3_ERROR, "Could not list objects in bucket {} with prefix {}, S3 exception: {}, message: {}", - quoteString(request.GetBucket()), quoteString(request.GetPrefix()), - backQuote(outcome.GetError().GetExceptionName()), quoteString(outcome.GetError().GetMessage())); - } - - for (const auto & row : outcome.GetResult().GetContents()) - { - String key = row.GetKey(); - if (re2::RE2::FullMatch(key, matcher)) - result.emplace_back(std::move(key)); - } - - request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken()); - } - while (outcome.GetResult().GetIsTruncated()); - - return result; -} - + updateClientAndAuthSettings(context_, client_auth); 
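+    /// At this point client_auth.client has been created from either the credentials passed
+    /// in explicitly or, when access_key_id is empty, from the per-endpoint S3 settings in the
+    /// server configuration; read() and write() call the same helper again, so configuration
+    /// changes are picked up on a per-query basis.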
} @@ -290,12 +346,12 @@ Pipe StorageS3::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) { - updateAuthSettings(context); + updateClientAndAuthSettings(local_context, client_auth); Pipes pipes; bool need_path_column = false; @@ -308,73 +364,93 @@ Pipe StorageS3::read( need_file_column = true; } - for (const String & key : listFilesWithRegexpMatching(*client, uri)) + std::shared_ptr iterator_wrapper{nullptr}; + if (distributed_processing) + { + iterator_wrapper = std::make_shared( + [callback = local_context->getReadTaskCallback()]() -> String { + return callback(); + }); + } + else + { + /// Iterate through disclosed globs and make a source for each file + auto glob_iterator = std::make_shared(*client_auth.client, client_auth.uri); + iterator_wrapper = std::make_shared([glob_iterator]() + { + return glob_iterator->next(); + }); + } + + for (size_t i = 0; i < num_streams; ++i) + { pipes.emplace_back(std::make_shared( need_path_column, need_file_column, format_name, getName(), metadata_snapshot->getSampleBlock(), - context, + local_context, metadata_snapshot->getColumns(), max_block_size, - chooseCompressionMethod(uri.key, compression_method), - client, - uri.bucket, - key)); - + s3_max_single_read_retries, + compression_method, + client_auth.client, + client_auth.uri.bucket, + iterator_wrapper)); + } auto pipe = Pipe::unitePipes(std::move(pipes)); - // It's possible to have many buckets read from s3, resize(num_streams) might open too many handles at the same time. - // Using narrowPipe instead. + narrowPipe(pipe, num_streams); return pipe; } -BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { - updateAuthSettings(context); + updateClientAndAuthSettings(local_context, client_auth); return std::make_shared( format_name, metadata_snapshot->getSampleBlock(), - global_context, - chooseCompressionMethod(uri.key, compression_method), - client, - uri.bucket, - uri.key, + local_context, + chooseCompressionMethod(client_auth.uri.key, compression_method), + client_auth.client, + client_auth.uri.bucket, + client_auth.uri.key, min_upload_part_size, max_single_part_upload_size); } -void StorageS3::updateAuthSettings(const Context & context) +void StorageS3::updateClientAndAuthSettings(ContextPtr ctx, StorageS3::ClientAuthentificaiton & upd) { - auto settings = context.getStorageS3Settings().getSettings(uri.uri.toString()); - if (client && (!access_key_id.empty() || settings == auth_settings)) + auto settings = ctx->getStorageS3Settings().getSettings(upd.uri.uri.toString()); + if (upd.client && (!upd.access_key_id.empty() || settings == upd.auth_settings)) return; - Aws::Auth::AWSCredentials credentials(access_key_id, secret_access_key); + Aws::Auth::AWSCredentials credentials(upd.access_key_id, upd.secret_access_key); HeaderCollection headers; - if (access_key_id.empty()) + if (upd.access_key_id.empty()) { credentials = Aws::Auth::AWSCredentials(settings.access_key_id, settings.secret_access_key); headers = settings.headers; } S3::PocoHTTPClientConfiguration client_configuration = S3::ClientFactory::instance().createClientConfiguration( - context.getRemoteHostFilter(), 
context.getGlobalContext().getSettingsRef().s3_max_redirects); + ctx->getRemoteHostFilter(), ctx->getGlobalContext()->getSettingsRef().s3_max_redirects); - client_configuration.endpointOverride = uri.endpoint; - client_configuration.maxConnections = max_connections; + client_configuration.endpointOverride = upd.uri.endpoint; + client_configuration.maxConnections = upd.max_connections; - client = S3::ClientFactory::instance().create( + upd.client = S3::ClientFactory::instance().create( client_configuration, - uri.is_virtual_hosted_style, + upd.uri.is_virtual_hosted_style, credentials.GetAWSAccessKeyId(), credentials.GetAWSSecretKey(), settings.server_side_encryption_customer_key_base64, std::move(headers), - settings.use_environment_credentials.value_or(global_context.getConfigRef().getBool("s3.use_environment_credentials", false))); + settings.use_environment_credentials.value_or(ctx->getConfigRef().getBool("s3.use_environment_credentials", false)), + settings.use_insecure_imds_request.value_or(ctx->getConfigRef().getBool("s3.use_insecure_imds_request", false))); - auth_settings = std::move(settings); + upd.auth_settings = std::move(settings); } void registerStorageS3Impl(const String & name, StorageFactory & factory) @@ -385,10 +461,11 @@ void registerStorageS3Impl(const String & name, StorageFactory & factory) if (engine_args.size() < 2 || engine_args.size() > 5) throw Exception( - "Storage S3 requires 2 to 5 arguments: url, [access_key_id, secret_access_key], name of used format and [compression_method].", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + "Storage S3 requires 2 to 5 arguments: url, [access_key_id, secret_access_key], name of used format and [compression_method].", + ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & engine_arg : engine_args) - engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.local_context); + engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext()); String url = engine_args[0]->as().value.safeGet(); Poco::URI uri (url); @@ -402,9 +479,10 @@ void registerStorageS3Impl(const String & name, StorageFactory & factory) secret_access_key = engine_args[2]->as().value.safeGet(); } - UInt64 min_upload_part_size = args.local_context.getSettingsRef().s3_min_upload_part_size; - UInt64 max_single_part_upload_size = args.local_context.getSettingsRef().s3_max_single_part_upload_size; - UInt64 max_connections = args.local_context.getSettingsRef().s3_max_connections; + UInt64 s3_max_single_read_retries = args.getLocalContext()->getSettingsRef().s3_max_single_read_retries; + UInt64 min_upload_part_size = args.getLocalContext()->getSettingsRef().s3_min_upload_part_size; + UInt64 max_single_part_upload_size = args.getLocalContext()->getSettingsRef().s3_max_single_part_upload_size; + UInt64 max_connections = args.getLocalContext()->getSettingsRef().s3_max_connections; String compression_method; String format_name; @@ -425,12 +503,13 @@ void registerStorageS3Impl(const String & name, StorageFactory & factory) secret_access_key, args.table_id, format_name, + s3_max_single_read_retries, min_upload_part_size, max_single_part_upload_size, max_connections, args.columns, args.constraints, - args.context, + args.getContext(), compression_method ); }, diff --git a/src/Storages/StorageS3.h b/src/Storages/StorageS3.h index 46d8c9276a2..b068f82cfb1 100644 --- a/src/Storages/StorageS3.h +++ b/src/Storages/StorageS3.h @@ -4,11 +4,20 @@ #if USE_AWS_S3 +#include + +#include + #include #include + +#include #include 
#include #include +#include +#include +#include namespace Aws::S3 { @@ -18,12 +27,74 @@ namespace Aws::S3 namespace DB { +class StorageS3SequentialSource; +class StorageS3Source : public SourceWithProgress, WithContext +{ +public: + class DisclosedGlobIterator + { + public: + DisclosedGlobIterator(Aws::S3::S3Client &, const S3::URI &); + String next(); + private: + class Impl; + /// shared_ptr to have copy constructor + std::shared_ptr pimpl; + }; + + using IteratorWrapper = std::function; + + static Block getHeader(Block sample_block, bool with_path_column, bool with_file_column); + + StorageS3Source( + bool need_path, + bool need_file, + const String & format, + String name_, + const Block & sample_block, + ContextPtr context_, + const ColumnsDescription & columns_, + UInt64 max_block_size_, + UInt64 s3_max_single_read_retries_, + const String compression_hint_, + const std::shared_ptr & client_, + const String & bucket, + std::shared_ptr file_iterator_); + + String getName() const override; + + Chunk generate() override; + +private: + String name; + String bucket; + String file_path; + String format; + ColumnsDescription columns_desc; + UInt64 max_block_size; + UInt64 s3_max_single_read_retries; + String compression_hint; + std::shared_ptr client; + Block sample_block; + + + std::unique_ptr read_buf; + BlockInputStreamPtr reader; + bool initialized = false; + bool with_file_column = false; + bool with_path_column = false; + std::shared_ptr file_iterator; + + /// Recreate ReadBuffer and BlockInputStream for each file. + bool initialize(); +}; + /** * This class represents table engine for external S3 urls. * It sends HTTP GET to server when select is called and * HTTP PUT when insert is called. */ -class StorageS3 : public ext::shared_ptr_helper, public IStorage +class StorageS3 : public ext::shared_ptr_helper, public IStorage, WithContext { public: StorageS3(const S3::URI & uri, @@ -31,13 +102,15 @@ public: const String & secret_access_key, const StorageID & table_id_, const String & format_name_, + UInt64 s3_max_single_read_retries_, UInt64 min_upload_part_size_, UInt64 max_single_part_upload_size_, UInt64 max_connections_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - const Context & context_, - const String & compression_method_ = ""); + ContextPtr context_, + const String & compression_method_ = "", + bool distributed_processing_ = false); String getName() const override { @@ -48,31 +121,41 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; NamesAndTypesList getVirtuals() const override; private: - const S3::URI uri; - const String access_key_id; - const String secret_access_key; - const UInt64 max_connections; - const Context & global_context; + + friend class StorageS3Cluster; + friend class TableFunctionS3Cluster; + + struct ClientAuthentificaiton + { + const S3::URI uri; + const String access_key_id; + const String secret_access_key; + const UInt64 max_connections; + std::shared_ptr client; + S3AuthSettings auth_settings; + }; + + ClientAuthentificaiton client_auth; 
String format_name; + UInt64 s3_max_single_read_retries; size_t min_upload_part_size; size_t max_single_part_upload_size; String compression_method; - std::shared_ptr client; String name; - S3AuthSettings auth_settings; + const bool distributed_processing; - void updateAuthSettings(const Context & context); + static void updateClientAndAuthSettings(ContextPtr, ClientAuthentificaiton &); }; } diff --git a/src/Storages/StorageS3Cluster.cpp b/src/Storages/StorageS3Cluster.cpp new file mode 100644 index 00000000000..8afc0e44023 --- /dev/null +++ b/src/Storages/StorageS3Cluster.cpp @@ -0,0 +1,166 @@ +#include "Storages/StorageS3Cluster.h" + +#if !defined(ARCADIA_BUILD) +#include +#endif + +#if USE_AWS_S3 + +#include "Common/Exception.h" +#include +#include "Client/Connection.h" +#include "Core/QueryProcessingStage.h" +#include +#include "DataStreams/RemoteBlockInputStream.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "Processors/Sources/SourceWithProgress.h" +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include +#include +#include +#include +#include + +namespace DB +{ + + +StorageS3Cluster::StorageS3Cluster( + const String & filename_, + const String & access_key_id_, + const String & secret_access_key_, + const StorageID & table_id_, + String cluster_name_, + const String & format_name_, + UInt64 max_connections_, + const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, + ContextPtr context_, + const String & compression_method_) + : IStorage(table_id_) + , client_auth{S3::URI{Poco::URI{filename_}}, access_key_id_, secret_access_key_, max_connections_, {}, {}} + , filename(filename_) + , cluster_name(cluster_name_) + , format_name(format_name_) + , compression_method(compression_method_) +{ + StorageInMemoryMetadata storage_metadata; + storage_metadata.setColumns(columns_); + storage_metadata.setConstraints(constraints_); + setInMemoryMetadata(storage_metadata); + StorageS3::updateClientAndAuthSettings(context_, client_auth); +} + +/// The code executes on initiator +Pipe StorageS3Cluster::read( + const Names & column_names, + const StorageMetadataPtr & metadata_snapshot, + SelectQueryInfo & query_info, + ContextPtr context, + QueryProcessingStage::Enum processed_stage, + size_t /*max_block_size*/, + unsigned /*num_streams*/) +{ + StorageS3::updateClientAndAuthSettings(context, client_auth); + + auto cluster = context->getCluster(cluster_name)->getClusterWithReplicasAsShards(context->getSettings()); + S3::URI s3_uri(Poco::URI{filename}); + StorageS3::updateClientAndAuthSettings(context, client_auth); + + auto iterator = std::make_shared(*client_auth.client, client_auth.uri); + auto callback = std::make_shared([iterator]() mutable -> String + { + return iterator->next(); + }); + + /// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*) + Block header = + InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze()).getSampleBlock(); + + const Scalars & scalars = context->hasQueryContext() ? 
context->getQueryContext()->getScalars() : Scalars{}; + + Pipes pipes; + connections.reserve(cluster->getShardCount()); + + const bool add_agg_info = processed_stage == QueryProcessingStage::WithMergeableState; + + for (const auto & replicas : cluster->getShardsAddresses()) + { + /// There will be only one replica, because we consider each replica as a shard + for (const auto & node : replicas) + { + connections.emplace_back(std::make_shared( + node.host_name, node.port, context->getGlobalContext()->getCurrentDatabase(), + node.user, node.password, node.cluster, node.cluster_secret, + "S3ClusterInititiator", + node.compression, + node.secure + )); + + /// For unknown reason global context is passed to IStorage::read() method + /// So, task_identifier is passed as constructor argument. It is more obvious. + auto remote_query_executor = std::make_shared( + *connections.back(), queryToString(query_info.query), header, context, + /*throttler=*/nullptr, scalars, Tables(), processed_stage, callback); + + pipes.emplace_back(std::make_shared(remote_query_executor, add_agg_info, false)); + } + } + + metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); + return Pipe::unitePipes(std::move(pipes)); +} + +QueryProcessingStage::Enum StorageS3Cluster::getQueryProcessingStage( + ContextPtr context, QueryProcessingStage::Enum to_stage, SelectQueryInfo &) const +{ + /// Initiator executes query on remote node. + if (context->getClientInfo().query_kind == ClientInfo::QueryKind::INITIAL_QUERY) + if (to_stage >= QueryProcessingStage::Enum::WithMergeableState) + return QueryProcessingStage::Enum::WithMergeableState; + + /// Follower just reads the data. + return QueryProcessingStage::Enum::FetchColumns; +} + + +NamesAndTypesList StorageS3Cluster::getVirtuals() const +{ + return NamesAndTypesList{ + {"_path", std::make_shared()}, + {"_file", std::make_shared()} + }; +} + + +} + +#endif diff --git a/src/Storages/StorageS3Cluster.h b/src/Storages/StorageS3Cluster.h new file mode 100644 index 00000000000..c98840d62fc --- /dev/null +++ b/src/Storages/StorageS3Cluster.h @@ -0,0 +1,63 @@ +#pragma once + +#if !defined(ARCADIA_BUILD) +#include +#endif + +#if USE_AWS_S3 + +#include "Client/Connection.h" +#include +#include +#include + +#include +#include +#include "ext/shared_ptr_helper.h" + +namespace DB +{ + +class Context; + +struct ClientAuthentificationBuilder +{ + String access_key_id; + String secret_access_key; + UInt64 max_connections; +}; + +class StorageS3Cluster : public ext::shared_ptr_helper, public IStorage +{ + friend struct ext::shared_ptr_helper; +public: + std::string getName() const override { return "S3Cluster"; } + + Pipe read(const Names &, const StorageMetadataPtr &, SelectQueryInfo &, + ContextPtr, QueryProcessingStage::Enum, size_t /*max_block_size*/, unsigned /*num_streams*/) override; + + QueryProcessingStage::Enum getQueryProcessingStage(ContextPtr, QueryProcessingStage::Enum, SelectQueryInfo &) const override; + + NamesAndTypesList getVirtuals() const override; + +protected: + StorageS3Cluster( + const String & filename_, const String & access_key_id_, const String & secret_access_key_, const StorageID & table_id_, + String cluster_name_, const String & format_name_, UInt64 max_connections_, const ColumnsDescription & columns_, + const ConstraintsDescription & constraints_, ContextPtr context_, const String & compression_method_); + +private: + /// Connections from initiator to other nodes + std::vector> connections; + StorageS3::ClientAuthentificaiton client_auth; + + 
String filename; + String cluster_name; + String format_name; + String compression_method; +}; + + +} + +#endif diff --git a/src/Storages/StorageS3Settings.cpp b/src/Storages/StorageS3Settings.cpp index 6d97e6fae95..8aafc12a688 100644 --- a/src/Storages/StorageS3Settings.cpp +++ b/src/Storages/StorageS3Settings.cpp @@ -36,6 +36,11 @@ void StorageS3Settings::loadFromConfig(const String & config_elem, const Poco::U { use_environment_credentials = config.getBool(config_elem + "." + key + ".use_environment_credentials"); } + std::optional use_insecure_imds_request; + if (config.has(config_elem + "." + key + ".use_insecure_imds_request")) + { + use_insecure_imds_request = config.getBool(config_elem + "." + key + ".use_insecure_imds_request"); + } HeaderCollection headers; Poco::Util::AbstractConfiguration::Keys subconfig_keys; @@ -52,7 +57,7 @@ void StorageS3Settings::loadFromConfig(const String & config_elem, const Poco::U } } - settings.emplace(endpoint, S3AuthSettings{std::move(access_key_id), std::move(secret_access_key), std::move(server_side_encryption_customer_key_base64), std::move(headers), use_environment_credentials}); + settings.emplace(endpoint, S3AuthSettings{std::move(access_key_id), std::move(secret_access_key), std::move(server_side_encryption_customer_key_base64), std::move(headers), use_environment_credentials, use_insecure_imds_request}); } } } diff --git a/src/Storages/StorageS3Settings.h b/src/Storages/StorageS3Settings.h index 29c6c3bb415..66e776dbea2 100644 --- a/src/Storages/StorageS3Settings.h +++ b/src/Storages/StorageS3Settings.h @@ -33,12 +33,14 @@ struct S3AuthSettings HeaderCollection headers; std::optional use_environment_credentials; + std::optional use_insecure_imds_request; inline bool operator==(const S3AuthSettings & other) const { return access_key_id == other.access_key_id && secret_access_key == other.secret_access_key && server_side_encryption_customer_key_base64 == other.server_side_encryption_customer_key_base64 && headers == other.headers - && use_environment_credentials == other.use_environment_credentials; + && use_environment_credentials == other.use_environment_credentials + && use_insecure_imds_request == other.use_insecure_imds_request; } }; diff --git a/src/Storages/StorageSet.cpp b/src/Storages/StorageSet.cpp index d64042f0c1e..34bbfed874f 100644 --- a/src/Storages/StorageSet.cpp +++ b/src/Storages/StorageSet.cpp @@ -99,7 +99,7 @@ void SetOrJoinBlockOutputStream::writeSuffix() } -BlockOutputStreamPtr StorageSetOrJoinBase::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) +BlockOutputStreamPtr StorageSetOrJoinBase::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr /*context*/) { UInt64 id = ++increment; return std::make_shared(*this, metadata_snapshot, path, path + "tmp/", toString(id) + ".bin", persistent); @@ -156,7 +156,7 @@ size_t StorageSet::getSize() const { return set->getTotalRowCount(); } std::optional StorageSet::totalRows(const Settings &) const { return set->getTotalRowCount(); } std::optional StorageSet::totalBytes(const Settings &) const { return set->getTotalByteCount(); } -void StorageSet::truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) +void StorageSet::truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) { disk->removeRecursive(path); disk->createDirectories(path); @@ -246,7 +246,7 @@ void 
registerStorageSet(StorageFactory & factory) if (has_settings) set_settings.loadFromQuery(*args.storage_def); - DiskPtr disk = args.context.getDisk(set_settings.disk); + DiskPtr disk = args.getContext()->getDisk(set_settings.disk); return StorageSet::create(disk, args.relative_data_path, args.table_id, args.columns, args.constraints, set_settings.persistent); }, StorageFactory::StorageFeatures{ .supports_settings = true, }); } diff --git a/src/Storages/StorageSet.h b/src/Storages/StorageSet.h index 9b9078f7dd5..b87dcf21a23 100644 --- a/src/Storages/StorageSet.h +++ b/src/Storages/StorageSet.h @@ -23,7 +23,7 @@ class StorageSetOrJoinBase : public IStorage public: void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; bool storesDataOnDisk() const override { return true; } Strings getDataPaths() const override { return {path}; } @@ -72,7 +72,7 @@ public: /// Access the insides. SetPtr & getSet() { return set; } - void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override; std::optional totalRows(const Settings & settings) const override; std::optional totalBytes(const Settings & settings) const override; diff --git a/src/Storages/StorageStripeLog.cpp b/src/Storages/StorageStripeLog.cpp index db4fbff78cd..d845dfb71f2 100644 --- a/src/Storages/StorageStripeLog.cpp +++ b/src/Storages/StorageStripeLog.cpp @@ -228,6 +228,11 @@ public: storage.file_checker.save(); done = true; + + /// unlock should be done from the same thread as lock, and dtor may be + /// called from different thread, so it should be done here (at least in + /// case of no exceptions occurred) + lock.unlock(); } private: @@ -302,9 +307,9 @@ void StorageStripeLog::rename(const String & new_path_to_table_data, const Stora } -static std::chrono::seconds getLockTimeout(const Context & context) +static std::chrono::seconds getLockTimeout(ContextPtr context) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); Int64 lock_timeout = settings.lock_acquire_timeout.totalSeconds(); if (settings.max_execution_time.totalSeconds() != 0 && settings.max_execution_time.totalSeconds() < lock_timeout) lock_timeout = settings.max_execution_time.totalSeconds(); @@ -316,7 +321,7 @@ Pipe StorageStripeLog::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, unsigned num_streams) @@ -353,7 +358,7 @@ Pipe StorageStripeLog::read( std::advance(end, (stream + 1) * size / num_streams); pipes.emplace_back(std::make_shared( - *this, metadata_snapshot, column_names, context.getSettingsRef().max_read_buffer_size, index, begin, end)); + *this, metadata_snapshot, column_names, context->getSettingsRef().max_read_buffer_size, index, begin, end)); } /// We do not keep read lock directly at the time of reading, because we read ranges of data that do not change. 
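The explicit lock.unlock() added to writeSuffix() above deserves a short illustration (a simplified sketch under assumed names, not part of this patch): std::shared_timed_mutex must be unlocked by the thread that locked it, while the stream's destructor — and therefore ~unique_lock() — may run on a different thread, so the success path has to release the lock inside writeSuffix() itself.

#include <chrono>
#include <mutex>
#include <shared_mutex>
#include <stdexcept>

/// Names are hypothetical; only the locking pattern mirrors the code above.
struct LogOutputStreamSketch
{
    std::unique_lock<std::shared_timed_mutex> lock;   /// acquired with a timeout on the writer thread

    LogOutputStreamSketch(std::shared_timed_mutex & rwlock, std::chrono::seconds timeout)
        : lock(rwlock, timeout)
    {
        if (!lock)
            throw std::runtime_error("Lock timeout exceeded");
    }

    void writeSuffix()
    {
        /// ... flush data and save file checksums ...

        /// Release on the writer thread: letting ~unique_lock() do it from another
        /// thread would be undefined behaviour for std::shared_timed_mutex.
        lock.unlock();
    }
};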
@@ -362,7 +367,7 @@ Pipe StorageStripeLog::read( } -BlockOutputStreamPtr StorageStripeLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageStripeLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) { std::unique_lock lock(rwlock, getLockTimeout(context)); if (!lock) @@ -372,7 +377,7 @@ BlockOutputStreamPtr StorageStripeLog::write(const ASTPtr & /*query*/, const Sto } -CheckResults StorageStripeLog::checkData(const ASTPtr & /* query */, const Context & context) +CheckResults StorageStripeLog::checkData(const ASTPtr & /* query */, ContextPtr context) { std::shared_lock lock(rwlock, getLockTimeout(context)); if (!lock) @@ -381,7 +386,7 @@ CheckResults StorageStripeLog::checkData(const ASTPtr & /* query */, const Conte return file_checker.check(); } -void StorageStripeLog::truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder &) +void StorageStripeLog::truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder &) { disk->clearDirectory(table_path); file_checker = FileChecker{disk, table_path + "sizes.json"}; @@ -402,11 +407,11 @@ void registerStorageStripeLog(StorageFactory & factory) ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); String disk_name = getDiskName(*args.storage_def); - DiskPtr disk = args.context.getDisk(disk_name); + DiskPtr disk = args.getContext()->getDisk(disk_name); return StorageStripeLog::create( disk, args.relative_data_path, args.table_id, args.columns, args.constraints, - args.attach, args.context.getSettings().max_compress_block_size); + args.attach, args.getContext()->getSettings().max_compress_block_size); }, features); } diff --git a/src/Storages/StorageStripeLog.h b/src/Storages/StorageStripeLog.h index 5782e2526d3..7fad94870dc 100644 --- a/src/Storages/StorageStripeLog.h +++ b/src/Storages/StorageStripeLog.h @@ -29,21 +29,21 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; - CheckResults checkData(const ASTPtr & /* query */, const Context & /* context */) override; + CheckResults checkData(const ASTPtr & /* query */, ContextPtr /* context */) override; bool storesDataOnDisk() const override { return true; } Strings getDataPaths() const override { return {DB::fullPath(disk, table_path)}; } - void truncate(const ASTPtr &, const StorageMetadataPtr &, const Context &, TableExclusiveLockHolder&) override; + void truncate(const ASTPtr &, const StorageMetadataPtr &, ContextPtr, TableExclusiveLockHolder&) override; protected: StorageStripeLog( @@ -68,7 +68,7 @@ private: size_t max_compress_block_size; FileChecker file_checker; - mutable std::shared_timed_mutex rwlock; + std::shared_timed_mutex rwlock; Poco::Logger * log; }; diff --git a/src/Storages/StorageTableFunction.h b/src/Storages/StorageTableFunction.h index adb54d65bb4..7d909165d5f 100644 --- a/src/Storages/StorageTableFunction.h +++ 
b/src/Storages/StorageTableFunction.h @@ -73,7 +73,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override @@ -96,7 +96,7 @@ public: ActionsDAG::MatchColumnsMode::Name); auto convert_actions = std::make_shared( convert_actions_dag, - ExpressionActionsSettings::fromSettings(context.getSettingsRef())); + ExpressionActionsSettings::fromSettings(context->getSettingsRef())); pipe.addSimpleTransform([&](const Block & header) { @@ -109,7 +109,7 @@ public: BlockOutputStreamPtr write( const ASTPtr & query, const StorageMetadataPtr & metadata_snapshot, - const Context & context) override + ContextPtr context) override { auto storage = getNested(); auto cached_structure = metadata_snapshot->getSampleBlock(); diff --git a/src/Storages/StorageTinyLog.cpp b/src/Storages/StorageTinyLog.cpp index 6ce41dac614..41c2961e929 100644 --- a/src/Storages/StorageTinyLog.cpp +++ b/src/Storages/StorageTinyLog.cpp @@ -358,6 +358,9 @@ void TinyLogBlockOutputStream::writeSuffix() storage.file_checker.update(file); storage.file_checker.save(); + /// unlock should be done from the same thread as lock, and dtor may be + /// called from different thread, so it should be done here (at least in + /// case of no exceptions occurred) lock.unlock(); } @@ -462,9 +465,9 @@ void StorageTinyLog::rename(const String & new_path_to_table_data, const Storage } -static std::chrono::seconds getLockTimeout(const Context & context) +static std::chrono::seconds getLockTimeout(ContextPtr context) { - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); Int64 lock_timeout = settings.lock_acquire_timeout.totalSeconds(); if (settings.max_execution_time.totalSeconds() != 0 && settings.max_execution_time.totalSeconds() < lock_timeout) lock_timeout = settings.max_execution_time.totalSeconds(); @@ -476,7 +479,7 @@ Pipe StorageTinyLog::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t max_block_size, const unsigned /*num_streams*/) @@ -487,7 +490,7 @@ Pipe StorageTinyLog::read( // When reading, we lock the entire storage, because we only have one file // per column and can't modify it concurrently. 
- const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); std::shared_lock lock{rwlock, getLockTimeout(context)}; if (!lock) @@ -503,13 +506,13 @@ Pipe StorageTinyLog::read( } -BlockOutputStreamPtr StorageTinyLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageTinyLog::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) { return std::make_shared(*this, metadata_snapshot, std::unique_lock{rwlock, getLockTimeout(context)}); } -CheckResults StorageTinyLog::checkData(const ASTPtr & /* query */, const Context & context) +CheckResults StorageTinyLog::checkData(const ASTPtr & /* query */, ContextPtr context) { std::shared_lock lock(rwlock, getLockTimeout(context)); if (!lock) @@ -519,7 +522,7 @@ CheckResults StorageTinyLog::checkData(const ASTPtr & /* query */, const Context } void StorageTinyLog::truncate( - const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) + const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) { disk->clearDirectory(table_path); @@ -545,11 +548,11 @@ void registerStorageTinyLog(StorageFactory & factory) ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); String disk_name = getDiskName(*args.storage_def); - DiskPtr disk = args.context.getDisk(disk_name); + DiskPtr disk = args.getContext()->getDisk(disk_name); return StorageTinyLog::create( disk, args.relative_data_path, args.table_id, args.columns, args.constraints, - args.attach, args.context.getSettings().max_compress_block_size); + args.attach, args.getContext()->getSettings().max_compress_block_size); }, features); } diff --git a/src/Storages/StorageTinyLog.h b/src/Storages/StorageTinyLog.h index 1187f7f905d..01652169b62 100644 --- a/src/Storages/StorageTinyLog.h +++ b/src/Storages/StorageTinyLog.h @@ -28,22 +28,22 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; void rename(const String & new_path_to_table_data, const StorageID & new_table_id) override; - CheckResults checkData(const ASTPtr & /* query */, const Context & /* context */) override; + CheckResults checkData(const ASTPtr & /* query */, ContextPtr /* context */) override; bool storesDataOnDisk() const override { return true; } Strings getDataPaths() const override { return {DB::fullPath(disk, table_path)}; } bool supportsSubcolumns() const override { return true; } - void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, const Context &, TableExclusiveLockHolder &) override; + void truncate(const ASTPtr &, const StorageMetadataPtr & metadata_snapshot, ContextPtr, TableExclusiveLockHolder &) override; protected: StorageTinyLog( diff --git a/src/Storages/StorageURL.cpp b/src/Storages/StorageURL.cpp index 2d3879340dc..8b6d7839de0 100644 --- a/src/Storages/StorageURL.cpp +++ b/src/Storages/StorageURL.cpp @@ -33,7 +33,7 @@ namespace ErrorCodes IStorageURLBase::IStorageURLBase( const 
Poco::URI & uri_, - const Context & /*context_*/, + ContextPtr /*context_*/, const StorageID & table_id_, const String & format_name_, const std::optional & format_settings_, @@ -64,7 +64,7 @@ namespace const std::optional & format_settings, String name_, const Block & sample_block, - const Context & context, + ContextPtr context, const ColumnsDescription & columns, UInt64 max_block_size, const ConnectionTimeouts & timeouts, @@ -96,11 +96,11 @@ namespace method, std::move(callback), timeouts, - context.getSettingsRef().max_http_get_redirects, + context->getSettingsRef().max_http_get_redirects, Poco::Net::HTTPBasicCredentials{}, DBMS_DEFAULT_BUFFER_SIZE, header, - context.getRemoteHostFilter()), + context->getRemoteHostFilter()), compression_method); auto input_format = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size, format_settings); @@ -144,7 +144,7 @@ StorageURLBlockOutputStream::StorageURLBlockOutputStream(const Poco::URI & uri, const String & format, const std::optional & format_settings, const Block & sample_block_, - const Context & context, + ContextPtr context, const ConnectionTimeouts & timeouts, const CompressionMethod compression_method) : sample_block(sample_block_) @@ -184,7 +184,7 @@ std::vector> IStorageURLBase::getReadURIPara const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, const SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum & /*processed_stage*/, size_t /*max_block_size*/) const { @@ -195,7 +195,7 @@ std::function IStorageURLBase::getReadPOSTDataCallback( const Names & /*column_names*/, const StorageMetadataPtr & /*metadata_snapshot*/, const SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum & /*processed_stage*/, size_t /*max_block_size*/) const { @@ -207,13 +207,13 @@ Pipe IStorageURLBase::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned /*num_streams*/) { auto request_uri = uri; - auto params = getReadURIParams(column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size); + auto params = getReadURIParams(column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size); for (const auto & [param, value] : params) request_uri.addQueryParameter(param, value); @@ -222,19 +222,19 @@ Pipe IStorageURLBase::read( getReadMethod(), getReadPOSTDataCallback( column_names, metadata_snapshot, query_info, - context, processed_stage, max_block_size), + local_context, processed_stage, max_block_size), format_name, format_settings, getName(), getHeaderBlock(column_names, metadata_snapshot), - context, + local_context, metadata_snapshot->getColumns(), max_block_size, - ConnectionTimeouts::getHTTPTimeouts(context), + ConnectionTimeouts::getHTTPTimeouts(local_context), chooseCompressionMethod(request_uri.getPath(), compression_method))); } -BlockOutputStreamPtr IStorageURLBase::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr IStorageURLBase::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr context) { return std::make_shared(uri, format_name, format_settings, metadata_snapshot->getSampleBlock(), context, @@ -248,12 +248,12 
@@ StorageURL::StorageURL(const Poco::URI & uri_, const std::optional & format_settings_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - Context & context_, + ContextPtr context_, const String & compression_method_) : IStorageURLBase(uri_, context_, table_id_, format_name_, format_settings_, columns_, constraints_, compression_method_) { - context_.getRemoteHostFilter().checkURL(uri); + context_->getRemoteHostFilter().checkURL(uri); } void registerStorageURL(StorageFactory & factory) @@ -266,19 +266,19 @@ void registerStorageURL(StorageFactory & factory) throw Exception( "Storage URL requires 2 or 3 arguments: url, name of used format and optional compression method.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); - engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[0], args.local_context); + engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[0], args.getLocalContext()); String url = engine_args[0]->as().value.safeGet(); Poco::URI uri(url); - engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.local_context); + engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.getLocalContext()); String format_name = engine_args[1]->as().value.safeGet(); String compression_method; if (engine_args.size() == 3) { - engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.local_context); + engine_args[2] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[2], args.getLocalContext()); compression_method = engine_args[2]->as().value.safeGet(); } else @@ -296,7 +296,7 @@ void registerStorageURL(StorageFactory & factory) // Apply changed settings from global context, but ignore the // unknown ones, because we only have the format settings here. - const auto & changes = args.context.getSettingsRef().changes(); + const auto & changes = args.getContext()->getSettingsRef().changes(); for (const auto & change : changes) { if (user_format_settings.has(change.name)) @@ -308,12 +308,12 @@ void registerStorageURL(StorageFactory & factory) // Apply changes from SETTINGS clause, with validation. 
user_format_settings.applyChanges(args.storage_def->settings->changes); - format_settings = getFormatSettings(args.context, + format_settings = getFormatSettings(args.getContext(), user_format_settings); } else { - format_settings = getFormatSettings(args.context); + format_settings = getFormatSettings(args.getContext()); } return StorageURL::create( @@ -321,7 +321,7 @@ void registerStorageURL(StorageFactory & factory) args.table_id, format_name, format_settings, - args.columns, args.constraints, args.context, + args.columns, args.constraints, args.getContext(), compression_method); }, { diff --git a/src/Storages/StorageURL.h b/src/Storages/StorageURL.h index 2b2384b1043..6fc1a6006ec 100644 --- a/src/Storages/StorageURL.h +++ b/src/Storages/StorageURL.h @@ -26,17 +26,17 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; protected: IStorageURLBase( const Poco::URI & uri_, - const Context & context_, + ContextPtr context_, const StorageID & id_, const String & format_name_, const std::optional & format_settings_, @@ -60,7 +60,7 @@ private: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum & processed_stage, size_t max_block_size) const; @@ -68,7 +68,7 @@ private: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum & processed_stage, size_t max_block_size) const; @@ -83,9 +83,9 @@ public: const String & format, const std::optional & format_settings, const Block & sample_block_, - const Context & context, + ContextPtr context, const ConnectionTimeouts & timeouts, - const CompressionMethod compression_method); + CompressionMethod compression_method); Block getHeader() const override { @@ -112,7 +112,7 @@ public: const std::optional & format_settings_, const ColumnsDescription & columns_, const ConstraintsDescription & constraints_, - Context & context_, + ContextPtr context_, const String & compression_method_); String getName() const override diff --git a/src/Storages/StorageValues.cpp b/src/Storages/StorageValues.cpp index 500deac5f25..ace5ca3667c 100644 --- a/src/Storages/StorageValues.cpp +++ b/src/Storages/StorageValues.cpp @@ -24,7 +24,7 @@ Pipe StorageValues::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) diff --git a/src/Storages/StorageValues.h b/src/Storages/StorageValues.h index 5729f245149..6ae33ed70f1 100644 --- a/src/Storages/StorageValues.h +++ b/src/Storages/StorageValues.h @@ -19,7 +19,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t 
max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageView.cpp b/src/Storages/StorageView.cpp index bcaf63152c1..75bd4b2967f 100644 --- a/src/Storages/StorageView.cpp +++ b/src/Storages/StorageView.cpp @@ -54,7 +54,7 @@ Pipe StorageView::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, const size_t max_block_size, const unsigned num_streams) @@ -71,7 +71,7 @@ void StorageView::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) diff --git a/src/Storages/StorageView.h b/src/Storages/StorageView.h index 6f894ce2775..fa11472218d 100644 --- a/src/Storages/StorageView.h +++ b/src/Storages/StorageView.h @@ -25,7 +25,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -35,7 +35,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/StorageXDBC.cpp b/src/Storages/StorageXDBC.cpp index f2f8cdb23f5..f94696c716b 100644 --- a/src/Storages/StorageXDBC.cpp +++ b/src/Storages/StorageXDBC.cpp @@ -14,6 +14,8 @@ #include #include #include +#include + namespace DB { @@ -28,7 +30,7 @@ StorageXDBC::StorageXDBC( const std::string & remote_database_name_, const std::string & remote_table_name_, const ColumnsDescription & columns_, - const Context & context_, + ContextPtr context_, const BridgeHelperPtr bridge_helper_) /// Please add support for constraints as soon as StorageODBC or JDBC will support insertion. 
: IStorageURLBase(Poco::URI(), @@ -53,27 +55,21 @@ std::string StorageXDBC::getReadMethod() const } std::vector> StorageXDBC::getReadURIParams( - const Names & column_names, - const StorageMetadataPtr & metadata_snapshot, + const Names & /* column_names */, + const StorageMetadataPtr & /* metadata_snapshot */, const SelectQueryInfo & /*query_info*/, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum & /*processed_stage*/, size_t max_block_size) const { - NamesAndTypesList cols; - for (const String & name : column_names) - { - auto column_data = metadata_snapshot->getColumns().getPhysical(name); - cols.emplace_back(column_data.name, column_data.type); - } - return bridge_helper->getURLParams(cols.toString(), max_block_size); + return bridge_helper->getURLParams(max_block_size); } std::function StorageXDBC::getReadPOSTDataCallback( - const Names & /*column_names*/, + const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum & /*processed_stage*/, size_t /*max_block_size*/) const { @@ -82,16 +78,30 @@ std::function StorageXDBC::getReadPOSTDataCallback( bridge_helper->getIdentifierQuotingStyle(), remote_database_name, remote_table_name, - context); + local_context); - return [query](std::ostream & os) { os << "query=" << query; }; + NamesAndTypesList cols; + for (const String & name : column_names) + { + auto column_data = metadata_snapshot->getColumns().getPhysical(name); + cols.emplace_back(column_data.name, column_data.type); + } + + auto write_body_callback = [query, cols](std::ostream & os) + { + os << "sample_block=" << escapeForFileName(cols.toString()); + os << "&"; + os << "query=" << escapeForFileName(query); + }; + + return write_body_callback; } Pipe StorageXDBC::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr local_context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) @@ -99,35 +109,32 @@ Pipe StorageXDBC::read( metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); bridge_helper->startBridgeSync(); - return IStorageURLBase::read(column_names, metadata_snapshot, query_info, context, processed_stage, max_block_size, num_streams); + return IStorageURLBase::read(column_names, metadata_snapshot, query_info, local_context, processed_stage, max_block_size, num_streams); } -BlockOutputStreamPtr StorageXDBC::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & context) +BlockOutputStreamPtr StorageXDBC::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, ContextPtr local_context) { bridge_helper->startBridgeSync(); - NamesAndTypesList cols; Poco::URI request_uri = uri; request_uri.setPath("/write"); - for (const String & name : metadata_snapshot->getSampleBlock().getNames()) - { - auto column_data = metadata_snapshot->getColumns().getPhysical(name); - cols.emplace_back(column_data.name, column_data.type); - } - auto url_params = bridge_helper->getURLParams(cols.toString(), 65536); + + auto url_params = bridge_helper->getURLParams(65536); for (const auto & [param, value] : url_params) request_uri.addQueryParameter(param, value); + request_uri.addQueryParameter("db_name", remote_database_name); request_uri.addQueryParameter("table_name", remote_table_name); request_uri.addQueryParameter("format_name", 
format_name); + request_uri.addQueryParameter("sample_block", metadata_snapshot->getSampleBlock().getNamesAndTypesList().toString()); return std::make_shared( request_uri, format_name, - getFormatSettings(context), + getFormatSettings(local_context), metadata_snapshot->getSampleBlock(), - context, - ConnectionTimeouts::getHTTPTimeouts(context), + local_context, + ConnectionTimeouts::getHTTPTimeouts(local_context), chooseCompressionMethod(uri.toString(), compression_method)); } @@ -155,16 +162,16 @@ namespace ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (size_t i = 0; i < 3; ++i) - engine_args[i] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[i], args.local_context); + engine_args[i] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[i], args.getLocalContext()); - BridgeHelperPtr bridge_helper = std::make_shared>(args.context, - args.context.getSettingsRef().http_receive_timeout.value, + BridgeHelperPtr bridge_helper = std::make_shared>(args.getContext(), + args.getContext()->getSettingsRef().http_receive_timeout.value, engine_args[0]->as().value.safeGet()); return std::make_shared(args.table_id, engine_args[1]->as().value.safeGet(), engine_args[2]->as().value.safeGet(), args.columns, - args.context, + args.getContext(), bridge_helper); }, diff --git a/src/Storages/StorageXDBC.h b/src/Storages/StorageXDBC.h index 8524a03503a..064912fda92 100644 --- a/src/Storages/StorageXDBC.h +++ b/src/Storages/StorageXDBC.h @@ -1,7 +1,7 @@ #pragma once #include -#include +#include namespace DB @@ -19,7 +19,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; @@ -29,10 +29,10 @@ public: const std::string & remote_database_name, const std::string & remote_table_name, const ColumnsDescription & columns_, - const Context & context_, + ContextPtr context_, BridgeHelperPtr bridge_helper_); - BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, const Context & context) override; + BlockOutputStreamPtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override; std::string getName() const override; private: @@ -49,7 +49,7 @@ private: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum & processed_stage, size_t max_block_size) const override; @@ -57,7 +57,7 @@ private: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, const SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum & processed_stage, size_t max_block_size) const override; diff --git a/src/Storages/System/IStorageSystemOneBlock.h b/src/Storages/System/IStorageSystemOneBlock.h index d83a71c2592..fdc966130ad 100644 --- a/src/Storages/System/IStorageSystemOneBlock.h +++ b/src/Storages/System/IStorageSystemOneBlock.h @@ -12,13 +12,22 @@ namespace DB class Context; -/** Base class for system tables whose all columns have String type. +/** IStorageSystemOneBlock is base class for system tables whose all columns can be synchronously fetched. + * + * Client class need to provide static method static NamesAndTypesList getNamesAndTypes() that will return list of column names and + * their types. 
IStorageSystemOneBlock during read will create result columns in same order as result of getNamesAndTypes + * and pass it with fillData method. + * + * Client also must override fillData and fill result columns. + * + * If subclass want to support virtual columns, it should override getVirtuals method of IStorage interface. + * IStorageSystemOneBlock will add virtuals columns at the end of result columns of fillData method. */ template class IStorageSystemOneBlock : public IStorage { protected: - virtual void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const = 0; + virtual void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const = 0; public: #if defined(ARCADIA_BUILD) @@ -36,14 +45,15 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, size_t /*max_block_size*/, unsigned /*num_streams*/) override { - metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); + auto virtuals_names_and_types = getVirtuals(); + metadata_snapshot->check(column_names, virtuals_names_and_types, getStorageID()); - Block sample_block = metadata_snapshot->getSampleBlock(); + Block sample_block = metadata_snapshot->getSampleBlockWithVirtuals(virtuals_names_and_types); MutableColumns res_columns = sample_block.cloneEmptyColumns(); fillData(res_columns, context, query_info); diff --git a/src/Storages/System/StorageSystemAggregateFunctionCombinators.cpp b/src/Storages/System/StorageSystemAggregateFunctionCombinators.cpp index c0dd5cc85d3..c2d82c6cd7c 100644 --- a/src/Storages/System/StorageSystemAggregateFunctionCombinators.cpp +++ b/src/Storages/System/StorageSystemAggregateFunctionCombinators.cpp @@ -12,13 +12,13 @@ NamesAndTypesList StorageSystemAggregateFunctionCombinators::getNamesAndTypes() }; } -void StorageSystemAggregateFunctionCombinators::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemAggregateFunctionCombinators::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { const auto & combinators = AggregateFunctionCombinatorFactory::instance().getAllAggregateFunctionCombinators(); for (const auto & pair : combinators) { - res_columns[0]->insert(pair.first); - res_columns[1]->insert(pair.second->isForInternalUsageOnly()); + res_columns[0]->insert(pair.name); + res_columns[1]->insert(pair.combinator_ptr->isForInternalUsageOnly()); } } diff --git a/src/Storages/System/StorageSystemAggregateFunctionCombinators.h b/src/Storages/System/StorageSystemAggregateFunctionCombinators.h index 8d204020160..a978bfbface 100644 --- a/src/Storages/System/StorageSystemAggregateFunctionCombinators.h +++ b/src/Storages/System/StorageSystemAggregateFunctionCombinators.h @@ -12,7 +12,7 @@ class StorageSystemAggregateFunctionCombinators final : public ext::shared_ptr_h { friend struct ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; public: diff --git a/src/Storages/System/StorageSystemAsynchronousMetrics.cpp b/src/Storages/System/StorageSystemAsynchronousMetrics.cpp index 8dabac4fb49..70e12440678 
100644 --- a/src/Storages/System/StorageSystemAsynchronousMetrics.cpp +++ b/src/Storages/System/StorageSystemAsynchronousMetrics.cpp @@ -21,7 +21,7 @@ StorageSystemAsynchronousMetrics::StorageSystemAsynchronousMetrics(const Storage { } -void StorageSystemAsynchronousMetrics::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemAsynchronousMetrics::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { auto async_metrics_values = async_metrics.getValues(); for (const auto & name_value : async_metrics_values) diff --git a/src/Storages/System/StorageSystemAsynchronousMetrics.h b/src/Storages/System/StorageSystemAsynchronousMetrics.h index a2a92d248d8..eee029bbe51 100644 --- a/src/Storages/System/StorageSystemAsynchronousMetrics.h +++ b/src/Storages/System/StorageSystemAsynchronousMetrics.h @@ -33,7 +33,7 @@ protected: #endif StorageSystemAsynchronousMetrics(const StorageID & table_id_, const AsynchronousMetrics & async_metrics_); - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemBuildOptions.cpp b/src/Storages/System/StorageSystemBuildOptions.cpp index a63afcf4ce5..01a60a0235c 100644 --- a/src/Storages/System/StorageSystemBuildOptions.cpp +++ b/src/Storages/System/StorageSystemBuildOptions.cpp @@ -16,7 +16,7 @@ NamesAndTypesList StorageSystemBuildOptions::getNamesAndTypes() }; } -void StorageSystemBuildOptions::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemBuildOptions::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { #if !defined(ARCADIA_BUILD) for (auto * it = auto_config_build; *it; it += 2) diff --git a/src/Storages/System/StorageSystemBuildOptions.h b/src/Storages/System/StorageSystemBuildOptions.h index afd27f00bcc..8a22a3dcb45 100644 --- a/src/Storages/System/StorageSystemBuildOptions.h +++ b/src/Storages/System/StorageSystemBuildOptions.h @@ -16,7 +16,7 @@ class StorageSystemBuildOptions final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemClusters.cpp b/src/Storages/System/StorageSystemClusters.cpp index e20ce233190..8a3227aafdb 100644 --- a/src/Storages/System/StorageSystemClusters.cpp +++ b/src/Storages/System/StorageSystemClusters.cpp @@ -29,9 +29,9 @@ NamesAndTypesList StorageSystemClusters::getNamesAndTypes() } -void StorageSystemClusters::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemClusters::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - for (const auto & name_and_cluster : context.getClusters().getContainer()) + for (const auto & name_and_cluster : context->getClusters().getContainer()) writeCluster(res_columns, name_and_cluster); const auto databases = DatabaseCatalog::instance().getDatabases(); diff --git a/src/Storages/System/StorageSystemClusters.h b/src/Storages/System/StorageSystemClusters.h index 
4f2a843999f..81aefaff1c4 100644 --- a/src/Storages/System/StorageSystemClusters.h +++ b/src/Storages/System/StorageSystemClusters.h @@ -28,7 +28,7 @@ protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; using NameAndCluster = std::pair>; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; static void writeCluster(MutableColumns & res_columns, const NameAndCluster & name_and_cluster); }; diff --git a/src/Storages/System/StorageSystemCollations.cpp b/src/Storages/System/StorageSystemCollations.cpp index a870a7c7c78..c9343ccd146 100644 --- a/src/Storages/System/StorageSystemCollations.cpp +++ b/src/Storages/System/StorageSystemCollations.cpp @@ -13,7 +13,7 @@ NamesAndTypesList StorageSystemCollations::getNamesAndTypes() }; } -void StorageSystemCollations::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemCollations::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { for (const auto & [locale, lang]: AvailableCollationLocales::instance().getAvailableCollations()) { diff --git a/src/Storages/System/StorageSystemCollations.h b/src/Storages/System/StorageSystemCollations.h index 133acd937a1..454fd968511 100644 --- a/src/Storages/System/StorageSystemCollations.h +++ b/src/Storages/System/StorageSystemCollations.h @@ -10,7 +10,7 @@ class StorageSystemCollations final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; public: diff --git a/src/Storages/System/StorageSystemColumns.cpp b/src/Storages/System/StorageSystemColumns.cpp index 6726d502071..8f65147bb11 100644 --- a/src/Storages/System/StorageSystemColumns.cpp +++ b/src/Storages/System/StorageSystemColumns.cpp @@ -65,12 +65,12 @@ public: ColumnPtr databases_, ColumnPtr tables_, Storages storages_, - const Context & context) + ContextPtr context) : SourceWithProgress(header_) , columns_mask(std::move(columns_mask_)), max_block_size(max_block_size_) , databases(std::move(databases_)), tables(std::move(tables_)), storages(std::move(storages_)) - , total_tables(tables->size()), access(context.getAccess()) - , query_id(context.getCurrentQueryId()), lock_acquire_timeout(context.getSettingsRef().lock_acquire_timeout) + , total_tables(tables->size()), access(context->getAccess()) + , query_id(context->getCurrentQueryId()), lock_acquire_timeout(context->getSettingsRef().lock_acquire_timeout) { } @@ -243,7 +243,7 @@ Pipe StorageSystemColumns::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t max_block_size, const unsigned /*num_streams*/) @@ -289,9 +289,9 @@ Pipe StorageSystemColumns::read( } Tables external_tables; - if (context.hasSessionContext()) + if (context->hasSessionContext()) { - external_tables = context.getSessionContext().getExternalTables(); + external_tables = context->getSessionContext()->getExternalTables(); if (!external_tables.empty()) database_column_mut->insertDefault(); /// Empty database for 
external tables. } diff --git a/src/Storages/System/StorageSystemColumns.h b/src/Storages/System/StorageSystemColumns.h index c4f35485612..5cd8c5b38fd 100644 --- a/src/Storages/System/StorageSystemColumns.h +++ b/src/Storages/System/StorageSystemColumns.h @@ -21,7 +21,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemContributors.cpp b/src/Storages/System/StorageSystemContributors.cpp index cd0f31975cc..ed28be2a4ab 100644 --- a/src/Storages/System/StorageSystemContributors.cpp +++ b/src/Storages/System/StorageSystemContributors.cpp @@ -16,7 +16,7 @@ NamesAndTypesList StorageSystemContributors::getNamesAndTypes() }; } -void StorageSystemContributors::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemContributors::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { std::vector contributors; for (auto * it = auto_contributors; *it; ++it) diff --git a/src/Storages/System/StorageSystemContributors.generated.cpp b/src/Storages/System/StorageSystemContributors.generated.cpp index 46ead225102..b8741e6951c 100644 --- a/src/Storages/System/StorageSystemContributors.generated.cpp +++ b/src/Storages/System/StorageSystemContributors.generated.cpp @@ -17,6 +17,7 @@ const char * auto_contributors[] { "Aleksei Semiglazov", "Aleksey", "Aleksey Akulovich", + "Alex", "Alex Bocharov", "Alex Karo", "Alex Krash", @@ -144,6 +145,7 @@ const char * auto_contributors[] { "Chao Wang", "Chen Yufei", "Chienlung Cheung", + "Christian", "Ciprian Hacman", "Clement Rodriguez", "Clément Rodriguez", @@ -175,6 +177,7 @@ const char * auto_contributors[] { "Dmitry Belyavtsev", "Dmitry Bilunov", "Dmitry Galuza", + "Dmitry Krylov", "Dmitry Luhtionov", "Dmitry Moskowski", "Dmitry Muzyka", @@ -185,6 +188,7 @@ const char * auto_contributors[] { "Dongdong Yang", "DoomzD", "Dr. 
Strange Looker", + "Egor O'Sten", "Ekaterina", "Eldar Zaitov", "Elena Baskakova", @@ -286,6 +290,7 @@ const char * auto_contributors[] { "Jochen Schalanda", "John", "John Hummel", + "John Skopis", "Jonatas Freitas", "Kang Liu", "Karl Pietrzak", @@ -395,6 +400,7 @@ const char * auto_contributors[] { "NeZeD [Mac Pro]", "Neeke Gao", "Neng Liu", + "Nickolay Yastrebov", "Nico Mandery", "Nico Piderman", "Nicolae Vartolomei", @@ -472,6 +478,7 @@ const char * auto_contributors[] { "Sami Kerola", "Samuel Chou", "Saulius Valatka", + "Serg Kulakov", "Serge Rider", "Sergei Bocharov", "Sergei Semin", @@ -606,6 +613,7 @@ const char * auto_contributors[] { "abyss7", "achimbab", "achulkov2", + "adevyatova", "ageraab", "akazz", "akonyaev", @@ -631,6 +639,7 @@ const char * auto_contributors[] { "artpaul", "asiana21", "avasiliev", + "avogar", "avsharapov", "awesomeleo", "benamazing", @@ -647,6 +656,8 @@ const char * auto_contributors[] { "centos7", "champtar", "chang.chen", + "changvvb", + "chasingegg", "chengy8934", "chenqi", "chenxing-xc", @@ -769,6 +780,7 @@ const char * auto_contributors[] { "maxim-babenko", "maxkuzn", "maxulan", + "mehanizm", "melin", "memo", "meo", @@ -831,6 +843,7 @@ const char * auto_contributors[] { "shangshujie", "shedx", "simon-says", + "songenjie", "spff", "spongedc", "spyros87", diff --git a/src/Storages/System/StorageSystemContributors.h b/src/Storages/System/StorageSystemContributors.h index 0fd77f78655..c4a50263c5b 100644 --- a/src/Storages/System/StorageSystemContributors.h +++ b/src/Storages/System/StorageSystemContributors.h @@ -16,7 +16,7 @@ class StorageSystemContributors final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemCurrentRoles.cpp b/src/Storages/System/StorageSystemCurrentRoles.cpp index b0667f2f3ca..a5b3566f5f7 100644 --- a/src/Storages/System/StorageSystemCurrentRoles.cpp +++ b/src/Storages/System/StorageSystemCurrentRoles.cpp @@ -22,10 +22,10 @@ NamesAndTypesList StorageSystemCurrentRoles::getNamesAndTypes() } -void StorageSystemCurrentRoles::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemCurrentRoles::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - auto roles_info = context.getRolesInfo(); - auto user = context.getUser(); + auto roles_info = context->getRolesInfo(); + auto user = context->getUser(); if (!roles_info || !user) return; diff --git a/src/Storages/System/StorageSystemCurrentRoles.h b/src/Storages/System/StorageSystemCurrentRoles.h index 807db661371..77ab95547fa 100644 --- a/src/Storages/System/StorageSystemCurrentRoles.h +++ b/src/Storages/System/StorageSystemCurrentRoles.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemDDLWorkerQueue.cpp b/src/Storages/System/StorageSystemDDLWorkerQueue.cpp index 04321544f5d..98b15bfa6e2 100644 --- 
a/src/Storages/System/StorageSystemDDLWorkerQueue.cpp +++ b/src/Storages/System/StorageSystemDDLWorkerQueue.cpp @@ -95,7 +95,7 @@ NamesAndTypesList StorageSystemDDLWorkerQueue::getNamesAndTypes() }; } -static String clusterNameFromDDLQuery(const Context & context, const DDLLogEntry & entry) +static String clusterNameFromDDLQuery(ContextPtr context, const DDLLogEntry & entry) { const char * begin = entry.query.data(); const char * end = begin + entry.query.size(); @@ -104,15 +104,15 @@ static String clusterNameFromDDLQuery(const Context & context, const DDLLogEntry String cluster_name; ParserQuery parser_query(end); String description; - query = parseQuery(parser_query, begin, end, description, 0, context.getSettingsRef().max_parser_depth); + query = parseQuery(parser_query, begin, end, description, 0, context->getSettingsRef().max_parser_depth); if (query && (query_on_cluster = dynamic_cast(query.get()))) cluster_name = query_on_cluster->cluster; return cluster_name; } -void StorageSystemDDLWorkerQueue::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemDDLWorkerQueue::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - zkutil::ZooKeeperPtr zookeeper = context.getZooKeeper(); + zkutil::ZooKeeperPtr zookeeper = context->getZooKeeper(); Coordination::Error zk_exception_code = Coordination::Error::ZOK; String ddl_zookeeper_path = config.getString("distributed_ddl.path", "/clickhouse/task_queue/ddl/"); String ddl_query_path; @@ -130,7 +130,7 @@ void StorageSystemDDLWorkerQueue::fillData(MutableColumns & res_columns, const C if (code != Coordination::Error::ZOK && code != Coordination::Error::ZNONODE) zk_exception_code = code; - const auto & clusters = context.getClusters(); + const auto & clusters = context->getClusters(); for (const auto & name_and_cluster : clusters.getContainer()) { const ClusterPtr & cluster = name_and_cluster.second; diff --git a/src/Storages/System/StorageSystemDDLWorkerQueue.h b/src/Storages/System/StorageSystemDDLWorkerQueue.h index 9326d4dcb26..d1afa2d546a 100644 --- a/src/Storages/System/StorageSystemDDLWorkerQueue.h +++ b/src/Storages/System/StorageSystemDDLWorkerQueue.h @@ -22,7 +22,7 @@ class StorageSystemDDLWorkerQueue final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemDatabases.cpp b/src/Storages/System/StorageSystemDatabases.cpp index 88ac987014d..e09e47d8baf 100644 --- a/src/Storages/System/StorageSystemDatabases.cpp +++ b/src/Storages/System/StorageSystemDatabases.cpp @@ -20,9 +20,9 @@ NamesAndTypesList StorageSystemDatabases::getNamesAndTypes() }; } -void StorageSystemDatabases::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemDatabases::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_DATABASES); const auto databases = DatabaseCatalog::instance().getDatabases(); @@ -36,7 +36,7 @@ void StorageSystemDatabases::fillData(MutableColumns & res_columns, const 
Contex res_columns[0]->insert(database_name); res_columns[1]->insert(database->getEngineName()); - res_columns[2]->insert(context.getPath() + database->getDataPath()); + res_columns[2]->insert(context->getPath() + database->getDataPath()); res_columns[3]->insert(database->getMetadataPath()); res_columns[4]->insert(database->getUUID()); } diff --git a/src/Storages/System/StorageSystemDatabases.h b/src/Storages/System/StorageSystemDatabases.h index fe517c0f651..33f91fee837 100644 --- a/src/Storages/System/StorageSystemDatabases.h +++ b/src/Storages/System/StorageSystemDatabases.h @@ -26,7 +26,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemDetachedParts.cpp b/src/Storages/System/StorageSystemDetachedParts.cpp index f96566026b1..56644620f97 100644 --- a/src/Storages/System/StorageSystemDetachedParts.cpp +++ b/src/Storages/System/StorageSystemDetachedParts.cpp @@ -33,7 +33,7 @@ Pipe StorageSystemDetachedParts::read( const Names & /* column_names */, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) diff --git a/src/Storages/System/StorageSystemDetachedParts.h b/src/Storages/System/StorageSystemDetachedParts.h index 4c6970dadd6..18a6f5576d6 100644 --- a/src/Storages/System/StorageSystemDetachedParts.h +++ b/src/Storages/System/StorageSystemDetachedParts.h @@ -26,7 +26,7 @@ protected: const Names & /* column_names */, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) override; diff --git a/src/Storages/System/StorageSystemDictionaries.cpp b/src/Storages/System/StorageSystemDictionaries.cpp index 378905b7dc0..c76dba9df58 100644 --- a/src/Storages/System/StorageSystemDictionaries.cpp +++ b/src/Storages/System/StorageSystemDictionaries.cpp @@ -50,12 +50,19 @@ NamesAndTypesList StorageSystemDictionaries::getNamesAndTypes() }; } -void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & /*query_info*/) const +NamesAndTypesList StorageSystemDictionaries::getVirtuals() const { - const auto access = context.getAccess(); + return { + {"key", std::make_shared()} + }; +} + +void StorageSystemDictionaries::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & /*query_info*/) const +{ + const auto access = context->getAccess(); const bool check_access_for_dictionaries = !access->isGranted(AccessType::SHOW_DICTIONARIES); - const auto & external_dictionaries = context.getExternalDictionariesLoader(); + const auto & external_dictionaries = context->getExternalDictionariesLoader(); for (const auto & load_result : external_dictionaries.getLoadResults()) { const auto dict_ptr = std::dynamic_pointer_cast(load_result.object); @@ -128,6 +135,9 @@ void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Con else res_columns[i++]->insertDefault(); + /// Start fill virtual columns + + 
res_columns[i++]->insert(dictionary_structure.getKeyDescription()); } } diff --git a/src/Storages/System/StorageSystemDictionaries.h b/src/Storages/System/StorageSystemDictionaries.h index 5139ce3c5f6..aa65a946127 100644 --- a/src/Storages/System/StorageSystemDictionaries.h +++ b/src/Storages/System/StorageSystemDictionaries.h @@ -18,10 +18,12 @@ public: static NamesAndTypesList getNamesAndTypes(); + NamesAndTypesList getVirtuals() const override; + protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemDisks.cpp b/src/Storages/System/StorageSystemDisks.cpp index b04d24cc705..5d7628acb2a 100644 --- a/src/Storages/System/StorageSystemDisks.cpp +++ b/src/Storages/System/StorageSystemDisks.cpp @@ -30,7 +30,7 @@ Pipe StorageSystemDisks::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -44,7 +44,7 @@ Pipe StorageSystemDisks::read( MutableColumnPtr col_keep = ColumnUInt64::create(); MutableColumnPtr col_type = ColumnString::create(); - for (const auto & [disk_name, disk_ptr] : context.getDisksMap()) + for (const auto & [disk_name, disk_ptr] : context->getDisksMap()) { col_name->insert(disk_name); col_path->insert(disk_ptr->getPath()); diff --git a/src/Storages/System/StorageSystemDisks.h b/src/Storages/System/StorageSystemDisks.h index cff05242019..fa0f6fe4b8a 100644 --- a/src/Storages/System/StorageSystemDisks.h +++ b/src/Storages/System/StorageSystemDisks.h @@ -24,7 +24,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemDistributionQueue.cpp b/src/Storages/System/StorageSystemDistributionQueue.cpp index db649e7e1ba..9c0f8818011 100644 --- a/src/Storages/System/StorageSystemDistributionQueue.cpp +++ b/src/Storages/System/StorageSystemDistributionQueue.cpp @@ -103,9 +103,9 @@ NamesAndTypesList StorageSystemDistributionQueue::getNamesAndTypes() } -void StorageSystemDistributionQueue::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const +void StorageSystemDistributionQueue::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES); std::map> tables; diff --git a/src/Storages/System/StorageSystemDistributionQueue.h b/src/Storages/System/StorageSystemDistributionQueue.h index 88e7fa45cf5..9314418d242 100644 --- a/src/Storages/System/StorageSystemDistributionQueue.h +++ b/src/Storages/System/StorageSystemDistributionQueue.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void 
fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemEnabledRoles.cpp b/src/Storages/System/StorageSystemEnabledRoles.cpp index 27a42ca6f8b..99370dd647d 100644 --- a/src/Storages/System/StorageSystemEnabledRoles.cpp +++ b/src/Storages/System/StorageSystemEnabledRoles.cpp @@ -23,10 +23,10 @@ NamesAndTypesList StorageSystemEnabledRoles::getNamesAndTypes() } -void StorageSystemEnabledRoles::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemEnabledRoles::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - auto roles_info = context.getRolesInfo(); - auto user = context.getUser(); + auto roles_info = context->getRolesInfo(); + auto user = context->getUser(); if (!roles_info || !user) return; diff --git a/src/Storages/System/StorageSystemEnabledRoles.h b/src/Storages/System/StorageSystemEnabledRoles.h index 18df31c646a..13b0533b790 100644 --- a/src/Storages/System/StorageSystemEnabledRoles.h +++ b/src/Storages/System/StorageSystemEnabledRoles.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemErrors.cpp b/src/Storages/System/StorageSystemErrors.cpp index 09d0aaddb3d..d08ffd730ac 100644 --- a/src/Storages/System/StorageSystemErrors.cpp +++ b/src/Storages/System/StorageSystemErrors.cpp @@ -23,11 +23,11 @@ NamesAndTypesList StorageSystemErrors::getNamesAndTypes() } -void StorageSystemErrors::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemErrors::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { auto add_row = [&](std::string_view name, size_t code, const auto & error, bool remote) { - if (error.count || context.getSettingsRef().system_events_show_zero_values) + if (error.count || context->getSettingsRef().system_events_show_zero_values) { size_t col_num = 0; res_columns[col_num++]->insert(name); diff --git a/src/Storages/System/StorageSystemErrors.h b/src/Storages/System/StorageSystemErrors.h index 569a7a998b7..ff3af11d251 100644 --- a/src/Storages/System/StorageSystemErrors.h +++ b/src/Storages/System/StorageSystemErrors.h @@ -25,7 +25,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemEvents.cpp b/src/Storages/System/StorageSystemEvents.cpp index ddb00659473..be2d3f8d49e 100644 --- a/src/Storages/System/StorageSystemEvents.cpp +++ b/src/Storages/System/StorageSystemEvents.cpp @@ -16,13 +16,13 @@ NamesAndTypesList StorageSystemEvents::getNamesAndTypes() }; } -void StorageSystemEvents::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemEvents::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { for (size_t i = 0, end = ProfileEvents::end(); i < end; ++i) { 
UInt64 value = ProfileEvents::global_counters[i]; - if (0 != value || context.getSettingsRef().system_events_show_zero_values) + if (0 != value || context->getSettingsRef().system_events_show_zero_values) { res_columns[0]->insert(ProfileEvents::getName(ProfileEvents::Event(i))); res_columns[1]->insert(value); diff --git a/src/Storages/System/StorageSystemEvents.h b/src/Storages/System/StorageSystemEvents.h index f1687e42233..6071cb7b2b3 100644 --- a/src/Storages/System/StorageSystemEvents.h +++ b/src/Storages/System/StorageSystemEvents.h @@ -22,7 +22,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemFormats.cpp b/src/Storages/System/StorageSystemFormats.cpp index 7048ab98a0d..86e0212a523 100644 --- a/src/Storages/System/StorageSystemFormats.cpp +++ b/src/Storages/System/StorageSystemFormats.cpp @@ -15,7 +15,7 @@ NamesAndTypesList StorageSystemFormats::getNamesAndTypes() }; } -void StorageSystemFormats::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemFormats::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { const auto & formats = FormatFactory::instance().getAllFormats(); for (const auto & pair : formats) diff --git a/src/Storages/System/StorageSystemFormats.h b/src/Storages/System/StorageSystemFormats.h index f90839e44e9..ed65cd2af88 100644 --- a/src/Storages/System/StorageSystemFormats.h +++ b/src/Storages/System/StorageSystemFormats.h @@ -9,7 +9,7 @@ class StorageSystemFormats final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; public: diff --git a/src/Storages/System/StorageSystemFunctions.cpp b/src/Storages/System/StorageSystemFunctions.cpp index e46b7007dc2..973bf493cd1 100644 --- a/src/Storages/System/StorageSystemFunctions.cpp +++ b/src/Storages/System/StorageSystemFunctions.cpp @@ -34,7 +34,7 @@ NamesAndTypesList StorageSystemFunctions::getNamesAndTypes() }; } -void StorageSystemFunctions::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemFunctions::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { const auto & functions_factory = FunctionFactory::instance(); const auto & function_names = functions_factory.getAllRegisteredNames(); diff --git a/src/Storages/System/StorageSystemFunctions.h b/src/Storages/System/StorageSystemFunctions.h index f62d731f288..62942721995 100644 --- a/src/Storages/System/StorageSystemFunctions.h +++ b/src/Storages/System/StorageSystemFunctions.h @@ -24,7 +24,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemGrants.cpp 
b/src/Storages/System/StorageSystemGrants.cpp index 0c06ad99b22..1ba5e6d96a4 100644 --- a/src/Storages/System/StorageSystemGrants.cpp +++ b/src/Storages/System/StorageSystemGrants.cpp @@ -35,10 +35,10 @@ NamesAndTypesList StorageSystemGrants::getNamesAndTypes() } -void StorageSystemGrants::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemGrants::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_USERS | AccessType::SHOW_ROLES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_USERS | AccessType::SHOW_ROLES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); boost::range::push_back(ids, access_control.findAll()); diff --git a/src/Storages/System/StorageSystemGrants.h b/src/Storages/System/StorageSystemGrants.h index 39c38deed85..8c8a0f9f7bf 100644 --- a/src/Storages/System/StorageSystemGrants.h +++ b/src/Storages/System/StorageSystemGrants.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemGraphite.cpp b/src/Storages/System/StorageSystemGraphite.cpp index 93bc16785b2..dd592600d18 100644 --- a/src/Storages/System/StorageSystemGraphite.cpp +++ b/src/Storages/System/StorageSystemGraphite.cpp @@ -25,7 +25,7 @@ NamesAndTypesList StorageSystemGraphite::getNamesAndTypes() /* * Looking for (Replicated)*GraphiteMergeTree and get all configuration parameters for them */ -static StorageSystemGraphite::Configs getConfigs(const Context & context) +static StorageSystemGraphite::Configs getConfigs(ContextPtr context) { const Databases databases = DatabaseCatalog::instance().getDatabases(); StorageSystemGraphite::Configs graphite_configs; @@ -73,7 +73,7 @@ static StorageSystemGraphite::Configs getConfigs(const Context & context) return graphite_configs; } -void StorageSystemGraphite::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemGraphite::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { Configs graphite_configs = getConfigs(context); diff --git a/src/Storages/System/StorageSystemGraphite.h b/src/Storages/System/StorageSystemGraphite.h index 703db41dc39..256ad50e472 100644 --- a/src/Storages/System/StorageSystemGraphite.h +++ b/src/Storages/System/StorageSystemGraphite.h @@ -32,7 +32,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemLicenses.cpp b/src/Storages/System/StorageSystemLicenses.cpp index 894c861de29..6f880f03e10 100644 --- a/src/Storages/System/StorageSystemLicenses.cpp +++ b/src/Storages/System/StorageSystemLicenses.cpp @@ -18,7 +18,7 @@ NamesAndTypesList StorageSystemLicenses::getNamesAndTypes() }; } -void StorageSystemLicenses::fillData(MutableColumns & res_columns, 
const Context &, const SelectQueryInfo &) const +void StorageSystemLicenses::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { for (const auto * it = library_licenses; *it; it += 4) { diff --git a/src/Storages/System/StorageSystemLicenses.h b/src/Storages/System/StorageSystemLicenses.h index cee48abacab..43bb1c20c22 100644 --- a/src/Storages/System/StorageSystemLicenses.h +++ b/src/Storages/System/StorageSystemLicenses.h @@ -17,7 +17,7 @@ class StorageSystemLicenses final : { friend struct ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemMacros.cpp b/src/Storages/System/StorageSystemMacros.cpp index 8e6420add8b..576fbc69039 100644 --- a/src/Storages/System/StorageSystemMacros.cpp +++ b/src/Storages/System/StorageSystemMacros.cpp @@ -14,9 +14,9 @@ NamesAndTypesList StorageSystemMacros::getNamesAndTypes() }; } -void StorageSystemMacros::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemMacros::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - auto macros = context.getMacros(); + auto macros = context->getMacros(); for (const auto & macro : macros->getMacroMap()) { diff --git a/src/Storages/System/StorageSystemMacros.h b/src/Storages/System/StorageSystemMacros.h index 52336bd6f69..298aa488265 100644 --- a/src/Storages/System/StorageSystemMacros.h +++ b/src/Storages/System/StorageSystemMacros.h @@ -24,7 +24,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemMergeTreeSettings.cpp b/src/Storages/System/StorageSystemMergeTreeSettings.cpp index 19cbf76f252..626319af63f 100644 --- a/src/Storages/System/StorageSystemMergeTreeSettings.cpp +++ b/src/Storages/System/StorageSystemMergeTreeSettings.cpp @@ -20,9 +20,9 @@ NamesAndTypesList SystemMergeTreeSettings::getNamesAndTypes() } template -void SystemMergeTreeSettings::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void SystemMergeTreeSettings::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const auto & settings = replicated ? context.getReplicatedMergeTreeSettings().all() : context.getMergeTreeSettings().all(); + const auto & settings = replicated ? 
context->getReplicatedMergeTreeSettings().all() : context->getMergeTreeSettings().all(); for (const auto & setting : settings) { res_columns[0]->insert(setting.getName()); diff --git a/src/Storages/System/StorageSystemMergeTreeSettings.h b/src/Storages/System/StorageSystemMergeTreeSettings.h index 9f61fa6f780..b02b191fb69 100644 --- a/src/Storages/System/StorageSystemMergeTreeSettings.h +++ b/src/Storages/System/StorageSystemMergeTreeSettings.h @@ -28,7 +28,7 @@ public: protected: using IStorageSystemOneBlock>::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemMerges.cpp b/src/Storages/System/StorageSystemMerges.cpp index b61324818e4..b29836206d0 100644 --- a/src/Storages/System/StorageSystemMerges.cpp +++ b/src/Storages/System/StorageSystemMerges.cpp @@ -36,12 +36,12 @@ NamesAndTypesList StorageSystemMerges::getNamesAndTypes() } -void StorageSystemMerges::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemMerges::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_tables = !access->isGranted(AccessType::SHOW_TABLES); - for (const auto & merge : context.getMergeList().get()) + for (const auto & merge : context->getMergeList().get()) { if (check_access_for_tables && !access->isGranted(AccessType::SHOW_TABLES, merge.database, merge.table)) continue; diff --git a/src/Storages/System/StorageSystemMerges.h b/src/Storages/System/StorageSystemMerges.h index 81c03c4e397..5898bf62825 100644 --- a/src/Storages/System/StorageSystemMerges.h +++ b/src/Storages/System/StorageSystemMerges.h @@ -24,7 +24,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemMetrics.cpp b/src/Storages/System/StorageSystemMetrics.cpp index b2332c52817..6007c8a7c71 100644 --- a/src/Storages/System/StorageSystemMetrics.cpp +++ b/src/Storages/System/StorageSystemMetrics.cpp @@ -17,7 +17,7 @@ NamesAndTypesList StorageSystemMetrics::getNamesAndTypes() }; } -void StorageSystemMetrics::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemMetrics::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { for (size_t i = 0, end = CurrentMetrics::end(); i < end; ++i) { diff --git a/src/Storages/System/StorageSystemMetrics.h b/src/Storages/System/StorageSystemMetrics.h index c47bcea656f..af5d32ec46b 100644 --- a/src/Storages/System/StorageSystemMetrics.h +++ b/src/Storages/System/StorageSystemMetrics.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git 
a/src/Storages/System/StorageSystemModels.cpp b/src/Storages/System/StorageSystemModels.cpp index 9fae9803b96..3df48e830bb 100644 --- a/src/Storages/System/StorageSystemModels.cpp +++ b/src/Storages/System/StorageSystemModels.cpp @@ -25,9 +25,9 @@ NamesAndTypesList StorageSystemModels::getNamesAndTypes() }; } -void StorageSystemModels::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemModels::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const auto & external_models_loader = context.getExternalModelsLoader(); + const auto & external_models_loader = context->getExternalModelsLoader(); auto load_results = external_models_loader.getLoadResults(); for (const auto & load_result : load_results) diff --git a/src/Storages/System/StorageSystemModels.h b/src/Storages/System/StorageSystemModels.h index cee5200e7de..832a9d550db 100644 --- a/src/Storages/System/StorageSystemModels.h +++ b/src/Storages/System/StorageSystemModels.h @@ -21,7 +21,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemMutations.cpp b/src/Storages/System/StorageSystemMutations.cpp index f66f57ef5d1..fa521c632b8 100644 --- a/src/Storages/System/StorageSystemMutations.cpp +++ b/src/Storages/System/StorageSystemMutations.cpp @@ -35,9 +35,9 @@ NamesAndTypesList StorageSystemMutations::getNamesAndTypes() } -void StorageSystemMutations::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const +void StorageSystemMutations::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES); /// Collect a set of *MergeTree tables. 
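The hunks above and below repeatedly apply the same two changes: `const Context &` parameters become `ContextPtr` (a shared pointer, so member access switches from `.` to `->`), and `IStorageSystemOneBlock::read()` now appends the columns returned by `getVirtuals()` after the regular columns before calling `fillData()`. The following sketch is not part of this diff; it only illustrates the subclass contract described in the `IStorageSystemOneBlock.h` header comment, and the storage class name and its columns are invented for the example.

```cpp
#include <Storages/System/IStorageSystemOneBlock.h>
#include <DataTypes/DataTypeString.h>
#include <ext/shared_ptr_helper.h>

namespace DB
{

/// Hypothetical system table; not present in the ClickHouse source tree.
class StorageSystemExample final : public ext::shared_ptr_helper<StorageSystemExample>,
                                   public IStorageSystemOneBlock<StorageSystemExample>
{
    friend struct ext::shared_ptr_helper<StorageSystemExample>;
public:
    std::string getName() const override { return "SystemExample"; }

    /// The base class creates the result columns in exactly this order before calling fillData().
    static NamesAndTypesList getNamesAndTypes()
    {
        return {
            {"name", std::make_shared<DataTypeString>()},
            {"path", std::make_shared<DataTypeString>()},
        };
    }

    /// Optional: after this change, read() builds the sample block with
    /// getSampleBlockWithVirtuals(), so virtual columns are appended after the regular ones.
    NamesAndTypesList getVirtuals() const override
    {
        return {{"query_id", std::make_shared<DataTypeString>()}};
    }

protected:
    using IStorageSystemOneBlock::IStorageSystemOneBlock;

    /// Note the ContextPtr parameter and the '->' member access introduced by this refactoring.
    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override
    {
        size_t i = 0;
        res_columns[i++]->insert(String{"example"});             /// regular columns first
        res_columns[i++]->insert(context->getPath());
        res_columns[i++]->insert(context->getCurrentQueryId());  /// then virtual columns, in getVirtuals() order
    }
};

}
```

In the real tree such a class would additionally be registered in `attachSystemTables()`, which is outside the scope of this diff.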
diff --git a/src/Storages/System/StorageSystemMutations.h b/src/Storages/System/StorageSystemMutations.h index f7bc5f6f33c..1f41ff6051b 100644 --- a/src/Storages/System/StorageSystemMutations.h +++ b/src/Storages/System/StorageSystemMutations.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemNumbers.cpp b/src/Storages/System/StorageSystemNumbers.cpp index 677e0c02400..f8a0e94bf98 100644 --- a/src/Storages/System/StorageSystemNumbers.cpp +++ b/src/Storages/System/StorageSystemNumbers.cpp @@ -126,7 +126,7 @@ Pipe StorageSystemNumbers::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo &, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) diff --git a/src/Storages/System/StorageSystemNumbers.h b/src/Storages/System/StorageSystemNumbers.h index d12c28c1ce2..708ace7a4cd 100644 --- a/src/Storages/System/StorageSystemNumbers.h +++ b/src/Storages/System/StorageSystemNumbers.h @@ -33,7 +33,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemOne.cpp b/src/Storages/System/StorageSystemOne.cpp index c456b22e97b..7c28f897121 100644 --- a/src/Storages/System/StorageSystemOne.cpp +++ b/src/Storages/System/StorageSystemOne.cpp @@ -24,7 +24,7 @@ Pipe StorageSystemOne::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo &, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) diff --git a/src/Storages/System/StorageSystemOne.h b/src/Storages/System/StorageSystemOne.h index 8228ce465e0..a14d5e15726 100644 --- a/src/Storages/System/StorageSystemOne.h +++ b/src/Storages/System/StorageSystemOne.h @@ -25,13 +25,13 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; protected: - StorageSystemOne(const StorageID & table_id_); + explicit StorageSystemOne(const StorageID & table_id_); }; } diff --git a/src/Storages/System/StorageSystemPartsBase.cpp b/src/Storages/System/StorageSystemPartsBase.cpp index 02627a3ba03..f1c82aa4c63 100644 --- a/src/Storages/System/StorageSystemPartsBase.cpp +++ b/src/Storages/System/StorageSystemPartsBase.cpp @@ -62,8 +62,8 @@ StoragesInfo::getParts(MergeTreeData::DataPartStateVector & state, bool has_stat return data->getDataPartsVector({State::Committed}, &state); } -StoragesInfoStream::StoragesInfoStream(const SelectQueryInfo & query_info, const Context & context) - : query_id(context.getCurrentQueryId()), settings(context.getSettings()) +StoragesInfoStream::StoragesInfoStream(const SelectQueryInfo & query_info, ContextPtr context) + : query_id(context->getCurrentQueryId()), 
settings(context->getSettings()) { /// Will apply WHERE to subset of columns and then add more columns. /// This is kind of complicated, but we use WHERE to do less work. @@ -74,7 +74,7 @@ StoragesInfoStream::StoragesInfoStream(const SelectQueryInfo & query_info, const MutableColumnPtr engine_column_mut = ColumnString::create(); MutableColumnPtr active_column_mut = ColumnUInt8::create(); - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_tables = !access->isGranted(AccessType::SHOW_TABLES); { @@ -234,7 +234,7 @@ Pipe StorageSystemPartsBase::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) diff --git a/src/Storages/System/StorageSystemPartsBase.h b/src/Storages/System/StorageSystemPartsBase.h index 3f63d75e2b6..33f82d04252 100644 --- a/src/Storages/System/StorageSystemPartsBase.h +++ b/src/Storages/System/StorageSystemPartsBase.h @@ -31,7 +31,7 @@ struct StoragesInfo class StoragesInfoStream { public: - StoragesInfoStream(const SelectQueryInfo & query_info, const Context & context); + StoragesInfoStream(const SelectQueryInfo & query_info, ContextPtr context); StoragesInfo next(); private: @@ -59,7 +59,7 @@ public: const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemPrivileges.cpp b/src/Storages/System/StorageSystemPrivileges.cpp index 5dda0caf201..ca369efe43a 100644 --- a/src/Storages/System/StorageSystemPrivileges.cpp +++ b/src/Storages/System/StorageSystemPrivileges.cpp @@ -74,7 +74,7 @@ NamesAndTypesList StorageSystemPrivileges::getNamesAndTypes() } -void StorageSystemPrivileges::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemPrivileges::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { size_t column_index = 0; auto & column_access_type = assert_cast(*res_columns[column_index++]).getData(); diff --git a/src/Storages/System/StorageSystemPrivileges.h b/src/Storages/System/StorageSystemPrivileges.h index 8540e3d7ec3..618e1c91597 100644 --- a/src/Storages/System/StorageSystemPrivileges.h +++ b/src/Storages/System/StorageSystemPrivileges.h @@ -19,7 +19,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemProcesses.cpp b/src/Storages/System/StorageSystemProcesses.cpp index e6266503095..785b4c0df11 100644 --- a/src/Storages/System/StorageSystemProcesses.cpp +++ b/src/Storages/System/StorageSystemProcesses.cpp @@ -70,9 +70,9 @@ NamesAndTypesList StorageSystemProcesses::getNamesAndTypes() } -void StorageSystemProcesses::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemProcesses::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - ProcessList::Info 
info = context.getProcessList().getInfo(true, true, true); + ProcessList::Info info = context->getProcessList().getInfo(true, true, true); for (const auto & process : info) { diff --git a/src/Storages/System/StorageSystemProcesses.h b/src/Storages/System/StorageSystemProcesses.h index 62c568970e7..4f876348a4b 100644 --- a/src/Storages/System/StorageSystemProcesses.h +++ b/src/Storages/System/StorageSystemProcesses.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemQuotaLimits.cpp b/src/Storages/System/StorageSystemQuotaLimits.cpp index c6e99cc9203..63a419e213c 100644 --- a/src/Storages/System/StorageSystemQuotaLimits.cpp +++ b/src/Storages/System/StorageSystemQuotaLimits.cpp @@ -69,10 +69,10 @@ NamesAndTypesList StorageSystemQuotaLimits::getNamesAndTypes() } -void StorageSystemQuotaLimits::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemQuotaLimits::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_QUOTAS); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_QUOTAS); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemQuotaLimits.h b/src/Storages/System/StorageSystemQuotaLimits.h index e9ae7fc09d0..8f496734e0f 100644 --- a/src/Storages/System/StorageSystemQuotaLimits.h +++ b/src/Storages/System/StorageSystemQuotaLimits.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemQuotaUsage.cpp b/src/Storages/System/StorageSystemQuotaUsage.cpp index 6d6e22e7be6..a25a130bf6c 100644 --- a/src/Storages/System/StorageSystemQuotaUsage.cpp +++ b/src/Storages/System/StorageSystemQuotaUsage.cpp @@ -81,10 +81,10 @@ NamesAndTypesList StorageSystemQuotaUsage::getNamesAndTypesImpl(bool add_column_ } -void StorageSystemQuotaUsage::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemQuotaUsage::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_QUOTAS); - auto usage = context.getQuotaUsage(); + context->checkAccess(AccessType::SHOW_QUOTAS); + auto usage = context->getQuotaUsage(); if (!usage) return; @@ -94,7 +94,7 @@ void StorageSystemQuotaUsage::fillData(MutableColumns & res_columns, const Conte void StorageSystemQuotaUsage::fillDataImpl( MutableColumns & res_columns, - const Context & context, + ContextPtr context, bool add_column_is_current, const std::vector & quotas_usage) { @@ -128,7 +128,7 @@ void StorageSystemQuotaUsage::fillDataImpl( std::optional current_quota_id; if (add_column_is_current) { - if (auto current_usage = context.getQuotaUsage()) + if (auto 
current_usage = context->getQuotaUsage()) current_quota_id = current_usage->quota_id; } diff --git a/src/Storages/System/StorageSystemQuotaUsage.h b/src/Storages/System/StorageSystemQuotaUsage.h index abb9505eb5a..806c3eb3f4a 100644 --- a/src/Storages/System/StorageSystemQuotaUsage.h +++ b/src/Storages/System/StorageSystemQuotaUsage.h @@ -20,12 +20,12 @@ public: static NamesAndTypesList getNamesAndTypes(); static NamesAndTypesList getNamesAndTypesImpl(bool add_column_is_current); - static void fillDataImpl(MutableColumns & res_columns, const Context & context, bool add_column_is_current, const std::vector & quotas_usage); + static void fillDataImpl(MutableColumns & res_columns, ContextPtr context, bool add_column_is_current, const std::vector & quotas_usage); protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemQuotas.cpp b/src/Storages/System/StorageSystemQuotas.cpp index fab6384e6a8..4bba082f66e 100644 --- a/src/Storages/System/StorageSystemQuotas.cpp +++ b/src/Storages/System/StorageSystemQuotas.cpp @@ -52,10 +52,10 @@ NamesAndTypesList StorageSystemQuotas::getNamesAndTypes() } -void StorageSystemQuotas::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemQuotas::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_QUOTAS); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_QUOTAS); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemQuotas.h b/src/Storages/System/StorageSystemQuotas.h index 8d1da53d641..fb74ea9b05f 100644 --- a/src/Storages/System/StorageSystemQuotas.h +++ b/src/Storages/System/StorageSystemQuotas.h @@ -19,7 +19,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemQuotasUsage.cpp b/src/Storages/System/StorageSystemQuotasUsage.cpp index 5c6879cd143..363562bce19 100644 --- a/src/Storages/System/StorageSystemQuotasUsage.cpp +++ b/src/Storages/System/StorageSystemQuotasUsage.cpp @@ -13,10 +13,10 @@ NamesAndTypesList StorageSystemQuotasUsage::getNamesAndTypes() return StorageSystemQuotaUsage::getNamesAndTypesImpl(/* add_column_is_current = */ true); } -void StorageSystemQuotasUsage::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemQuotasUsage::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_QUOTAS); - auto all_quotas_usage = context.getAccessControlManager().getAllQuotasUsage(); + context->checkAccess(AccessType::SHOW_QUOTAS); + auto all_quotas_usage = context->getAccessControlManager().getAllQuotasUsage(); StorageSystemQuotaUsage::fillDataImpl(res_columns, context, 
/* add_column_is_current = */ true, all_quotas_usage); } } diff --git a/src/Storages/System/StorageSystemQuotasUsage.h b/src/Storages/System/StorageSystemQuotasUsage.h index d4fd93b577d..1f29ea9b886 100644 --- a/src/Storages/System/StorageSystemQuotasUsage.h +++ b/src/Storages/System/StorageSystemQuotasUsage.h @@ -20,7 +20,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemReplicas.cpp b/src/Storages/System/StorageSystemReplicas.cpp index 0af67ab6986..fc33c6b421b 100644 --- a/src/Storages/System/StorageSystemReplicas.cpp +++ b/src/Storages/System/StorageSystemReplicas.cpp @@ -60,14 +60,14 @@ Pipe StorageSystemReplicas::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) { metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES); /// We collect a set of replicated tables. diff --git a/src/Storages/System/StorageSystemReplicas.h b/src/Storages/System/StorageSystemReplicas.h index d9e364a28c0..2352d7ccdf2 100644 --- a/src/Storages/System/StorageSystemReplicas.h +++ b/src/Storages/System/StorageSystemReplicas.h @@ -22,7 +22,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemReplicatedFetches.cpp b/src/Storages/System/StorageSystemReplicatedFetches.cpp index 53bec5aa42f..453568e3b86 100644 --- a/src/Storages/System/StorageSystemReplicatedFetches.cpp +++ b/src/Storages/System/StorageSystemReplicatedFetches.cpp @@ -30,12 +30,12 @@ NamesAndTypesList StorageSystemReplicatedFetches::getNamesAndTypes() }; } -void StorageSystemReplicatedFetches::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemReplicatedFetches::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_tables = !access->isGranted(AccessType::SHOW_TABLES); - for (const auto & fetch : context.getReplicatedFetchList().get()) + for (const auto & fetch : context->getReplicatedFetchList().get()) { if (check_access_for_tables && !access->isGranted(AccessType::SHOW_TABLES, fetch.database, fetch.table)) continue; diff --git a/src/Storages/System/StorageSystemReplicatedFetches.h b/src/Storages/System/StorageSystemReplicatedFetches.h index 34081923e4f..ed25e75eb70 100644 --- a/src/Storages/System/StorageSystemReplicatedFetches.h +++ b/src/Storages/System/StorageSystemReplicatedFetches.h @@ -22,7 +22,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const 
SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemReplicationQueue.cpp b/src/Storages/System/StorageSystemReplicationQueue.cpp index 9cd5e8b8ff3..8acd192eac4 100644 --- a/src/Storages/System/StorageSystemReplicationQueue.cpp +++ b/src/Storages/System/StorageSystemReplicationQueue.cpp @@ -47,9 +47,9 @@ NamesAndTypesList StorageSystemReplicationQueue::getNamesAndTypes() } -void StorageSystemReplicationQueue::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const +void StorageSystemReplicationQueue::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const { - const auto access = context.getAccess(); + const auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES); std::map> replicated_tables; diff --git a/src/Storages/System/StorageSystemReplicationQueue.h b/src/Storages/System/StorageSystemReplicationQueue.h index 36841fb9be9..f85f23a2b20 100644 --- a/src/Storages/System/StorageSystemReplicationQueue.h +++ b/src/Storages/System/StorageSystemReplicationQueue.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemRoleGrants.cpp b/src/Storages/System/StorageSystemRoleGrants.cpp index cf0fad8f8ce..32984afcfc5 100644 --- a/src/Storages/System/StorageSystemRoleGrants.cpp +++ b/src/Storages/System/StorageSystemRoleGrants.cpp @@ -31,10 +31,10 @@ NamesAndTypesList StorageSystemRoleGrants::getNamesAndTypes() } -void StorageSystemRoleGrants::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemRoleGrants::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_USERS | AccessType::SHOW_ROLES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_USERS | AccessType::SHOW_ROLES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); boost::range::push_back(ids, access_control.findAll()); diff --git a/src/Storages/System/StorageSystemRoleGrants.h b/src/Storages/System/StorageSystemRoleGrants.h index 0a02303abc3..a290dcf320d 100644 --- a/src/Storages/System/StorageSystemRoleGrants.h +++ b/src/Storages/System/StorageSystemRoleGrants.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemRoles.cpp b/src/Storages/System/StorageSystemRoles.cpp index c560bc2bc6e..65ae74887a7 100644 --- a/src/Storages/System/StorageSystemRoles.cpp +++ b/src/Storages/System/StorageSystemRoles.cpp @@ -23,10 +23,10 @@ NamesAndTypesList StorageSystemRoles::getNamesAndTypes() } -void 
StorageSystemRoles::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemRoles::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_ROLES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_ROLES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemRoles.h b/src/Storages/System/StorageSystemRoles.h index fb44194baff..38c7ed05f1e 100644 --- a/src/Storages/System/StorageSystemRoles.h +++ b/src/Storages/System/StorageSystemRoles.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemRowPolicies.cpp b/src/Storages/System/StorageSystemRowPolicies.cpp index 9b11a781d6f..f9d6b14957e 100644 --- a/src/Storages/System/StorageSystemRowPolicies.cpp +++ b/src/Storages/System/StorageSystemRowPolicies.cpp @@ -52,10 +52,10 @@ NamesAndTypesList StorageSystemRowPolicies::getNamesAndTypes() } -void StorageSystemRowPolicies::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemRowPolicies::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_ROW_POLICIES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_ROW_POLICIES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemRowPolicies.h b/src/Storages/System/StorageSystemRowPolicies.h index b81020b421c..3b9ebfcc25a 100644 --- a/src/Storages/System/StorageSystemRowPolicies.h +++ b/src/Storages/System/StorageSystemRowPolicies.h @@ -20,7 +20,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemSettings.cpp b/src/Storages/System/StorageSystemSettings.cpp index 07a2d450b12..1aca7e45190 100644 --- a/src/Storages/System/StorageSystemSettings.cpp +++ b/src/Storages/System/StorageSystemSettings.cpp @@ -26,10 +26,10 @@ NamesAndTypesList StorageSystemSettings::getNamesAndTypes() #pragma GCC optimize("-fno-var-tracking-assignments") #endif -void StorageSystemSettings::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemSettings::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const Settings & settings = context.getSettingsRef(); - auto settings_constraints = context.getSettingsConstraints(); + const Settings & settings = context->getSettingsRef(); + auto settings_constraints = context->getSettingsConstraints(); for (const auto & setting : settings.all()) { 
const auto & setting_name = setting.getName(); diff --git a/src/Storages/System/StorageSystemSettings.h b/src/Storages/System/StorageSystemSettings.h index 6cb5e18e1d7..d93c09d3f80 100644 --- a/src/Storages/System/StorageSystemSettings.h +++ b/src/Storages/System/StorageSystemSettings.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/System/StorageSystemSettingsProfileElements.cpp b/src/Storages/System/StorageSystemSettingsProfileElements.cpp index cf47416e188..fa824091238 100644 --- a/src/Storages/System/StorageSystemSettingsProfileElements.cpp +++ b/src/Storages/System/StorageSystemSettingsProfileElements.cpp @@ -37,10 +37,10 @@ NamesAndTypesList StorageSystemSettingsProfileElements::getNamesAndTypes() } -void StorageSystemSettingsProfileElements::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemSettingsProfileElements::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_SETTINGS_PROFILES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_SETTINGS_PROFILES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); boost::range::push_back(ids, access_control.findAll()); boost::range::push_back(ids, access_control.findAll()); diff --git a/src/Storages/System/StorageSystemSettingsProfileElements.h b/src/Storages/System/StorageSystemSettingsProfileElements.h index 2dc79fed0e7..2262ea96dde 100644 --- a/src/Storages/System/StorageSystemSettingsProfileElements.h +++ b/src/Storages/System/StorageSystemSettingsProfileElements.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemSettingsProfiles.cpp b/src/Storages/System/StorageSystemSettingsProfiles.cpp index a678290d447..c726f54a324 100644 --- a/src/Storages/System/StorageSystemSettingsProfiles.cpp +++ b/src/Storages/System/StorageSystemSettingsProfiles.cpp @@ -30,10 +30,10 @@ NamesAndTypesList StorageSystemSettingsProfiles::getNamesAndTypes() } -void StorageSystemSettingsProfiles::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemSettingsProfiles::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_SETTINGS_PROFILES); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_SETTINGS_PROFILES); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemSettingsProfiles.h b/src/Storages/System/StorageSystemSettingsProfiles.h index c6b887c99df..580430dc28b 100644 --- a/src/Storages/System/StorageSystemSettingsProfiles.h +++ 
b/src/Storages/System/StorageSystemSettingsProfiles.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemStackTrace.cpp b/src/Storages/System/StorageSystemStackTrace.cpp index e74d56108ad..a6651aff8be 100644 --- a/src/Storages/System/StorageSystemStackTrace.cpp +++ b/src/Storages/System/StorageSystemStackTrace.cpp @@ -183,7 +183,7 @@ NamesAndTypesList StorageSystemStackTrace::getNamesAndTypes() } -void StorageSystemStackTrace::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemStackTrace::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { /// It shouldn't be possible to do concurrent reads from this table. std::lock_guard lock(mutex); diff --git a/src/Storages/System/StorageSystemStackTrace.h b/src/Storages/System/StorageSystemStackTrace.h index 582618d2ecd..7f10e309775 100644 --- a/src/Storages/System/StorageSystemStackTrace.h +++ b/src/Storages/System/StorageSystemStackTrace.h @@ -31,7 +31,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; mutable std::mutex mutex; diff --git a/src/Storages/System/StorageSystemStoragePolicies.cpp b/src/Storages/System/StorageSystemStoragePolicies.cpp index 7a10b986c11..48dfadd2b3c 100644 --- a/src/Storages/System/StorageSystemStoragePolicies.cpp +++ b/src/Storages/System/StorageSystemStoragePolicies.cpp @@ -39,7 +39,7 @@ Pipe StorageSystemStoragePolicies::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & /*query_info*/, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t /*max_block_size*/, const unsigned /*num_streams*/) @@ -55,7 +55,7 @@ Pipe StorageSystemStoragePolicies::read( MutableColumnPtr col_move_factor = ColumnFloat32::create(); MutableColumnPtr col_prefer_not_to_merge = ColumnUInt8::create(); - for (const auto & [policy_name, policy_ptr] : context.getPoliciesMap()) + for (const auto & [policy_name, policy_ptr] : context->getPoliciesMap()) { const auto & volumes = policy_ptr->getVolumes(); for (size_t i = 0; i != volumes.size(); ++i) diff --git a/src/Storages/System/StorageSystemStoragePolicies.h b/src/Storages/System/StorageSystemStoragePolicies.h index afd5e672d66..70053ebc1bc 100644 --- a/src/Storages/System/StorageSystemStoragePolicies.h +++ b/src/Storages/System/StorageSystemStoragePolicies.h @@ -24,7 +24,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemTableEngines.cpp b/src/Storages/System/StorageSystemTableEngines.cpp index 3f06faf6736..bc33cd9189c 100644 --- a/src/Storages/System/StorageSystemTableEngines.cpp +++ b/src/Storages/System/StorageSystemTableEngines.cpp @@ -20,7 +20,7 
@@ NamesAndTypesList StorageSystemTableEngines::getNamesAndTypes() }; } -void StorageSystemTableEngines::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemTableEngines::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { for (const auto & pair : StorageFactory::instance().getAllStorages()) { diff --git a/src/Storages/System/StorageSystemTableEngines.h b/src/Storages/System/StorageSystemTableEngines.h index 1c080c3040b..37f7f354073 100644 --- a/src/Storages/System/StorageSystemTableEngines.h +++ b/src/Storages/System/StorageSystemTableEngines.h @@ -12,7 +12,7 @@ class StorageSystemTableEngines final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemTableFunctions.cpp b/src/Storages/System/StorageSystemTableFunctions.cpp index 65b1dc41879..2824e1726e9 100644 --- a/src/Storages/System/StorageSystemTableFunctions.cpp +++ b/src/Storages/System/StorageSystemTableFunctions.cpp @@ -9,7 +9,7 @@ NamesAndTypesList StorageSystemTableFunctions::getNamesAndTypes() return {{"name", std::make_shared()}}; } -void StorageSystemTableFunctions::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemTableFunctions::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { const auto & functions_names = TableFunctionFactory::instance().getAllRegisteredNames(); for (const auto & function_name : functions_names) diff --git a/src/Storages/System/StorageSystemTableFunctions.h b/src/Storages/System/StorageSystemTableFunctions.h index 95e025b9881..a5db5450d20 100644 --- a/src/Storages/System/StorageSystemTableFunctions.h +++ b/src/Storages/System/StorageSystemTableFunctions.h @@ -14,7 +14,7 @@ protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; public: diff --git a/src/Storages/System/StorageSystemTables.cpp b/src/Storages/System/StorageSystemTables.cpp index 4599dad2e3d..9602339f381 100644 --- a/src/Storages/System/StorageSystemTables.cpp +++ b/src/Storages/System/StorageSystemTables.cpp @@ -62,7 +62,7 @@ StorageSystemTables::StorageSystemTables(const StorageID & table_id_) } -static ColumnPtr getFilteredDatabases(const SelectQueryInfo & query_info, const Context & context) +static ColumnPtr getFilteredDatabases(const SelectQueryInfo & query_info, ContextPtr context) { MutableColumnPtr column = ColumnString::create(); @@ -104,12 +104,12 @@ public: Block header, UInt64 max_block_size_, ColumnPtr databases_, - const Context & context_) + ContextPtr context_) : SourceWithProgress(std::move(header)) , columns_mask(std::move(columns_mask_)) , max_block_size(max_block_size_) , databases(std::move(databases_)) - , context(context_) {} + , context(Context::createCopy(context_)) {} String getName() const override { return "Tables"; } @@ -121,7 +121,7 @@ protected: MutableColumns res_columns = getPort().getHeader().cloneEmptyColumns(); - const auto access = context.getAccess(); + const 
auto access = context->getAccess(); const bool check_access_for_databases = !access->isGranted(AccessType::SHOW_TABLES); size_t rows_count = 0; @@ -148,9 +148,9 @@ protected: /// This is for temporary tables. They are output in single block regardless to max_block_size. if (database_idx >= databases->size()) { - if (context.hasSessionContext()) + if (context->hasSessionContext()) { - Tables external_tables = context.getSessionContext().getExternalTables(); + Tables external_tables = context->getSessionContext()->getExternalTables(); for (auto & table : external_tables) { @@ -278,7 +278,7 @@ protected: } try { - lock = table->lockForShare(context.getCurrentQueryId(), context.getSettingsRef().lock_acquire_timeout); + lock = table->lockForShare(context->getCurrentQueryId(), context->getSettingsRef().lock_acquire_timeout); } catch (const Exception & e) { @@ -355,7 +355,7 @@ protected: { ASTPtr ast = database->tryGetCreateTableQuery(table_name, context); - if (ast && !context.getSettingsRef().show_table_uuid_in_table_create_query_if_not_nil) + if (ast && !context->getSettingsRef().show_table_uuid_in_table_create_query_if_not_nil) { auto & create = ast->as(); create.uuid = UUIDHelpers::Nil; @@ -442,7 +442,7 @@ protected: if (columns_mask[src_index++]) { assert(table != nullptr); - auto total_rows = table->totalRows(context.getSettingsRef()); + auto total_rows = table->totalRows(context->getSettingsRef()); if (total_rows) res_columns[res_index++]->insert(*total_rows); else @@ -452,7 +452,7 @@ protected: if (columns_mask[src_index++]) { assert(table != nullptr); - auto total_bytes = table->totalBytes(context.getSettingsRef()); + auto total_bytes = table->totalBytes(context->getSettingsRef()); if (total_bytes) res_columns[res_index++]->insert(*total_bytes); else @@ -490,7 +490,7 @@ private: ColumnPtr databases; size_t database_idx = 0; DatabaseTablesIteratorPtr tables_it; - const Context context; + ContextPtr context; bool done = false; DatabasePtr database; std::string database_name; @@ -501,7 +501,7 @@ Pipe StorageSystemTables::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum /*processed_stage*/, const size_t max_block_size, const unsigned /*num_streams*/) diff --git a/src/Storages/System/StorageSystemTables.h b/src/Storages/System/StorageSystemTables.h index 2e0b3386f8c..da5e236b33f 100644 --- a/src/Storages/System/StorageSystemTables.h +++ b/src/Storages/System/StorageSystemTables.h @@ -22,7 +22,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemTimeZones.cpp b/src/Storages/System/StorageSystemTimeZones.cpp index e5523f54caf..dc3711812a6 100644 --- a/src/Storages/System/StorageSystemTimeZones.cpp +++ b/src/Storages/System/StorageSystemTimeZones.cpp @@ -15,7 +15,7 @@ NamesAndTypesList StorageSystemTimeZones::getNamesAndTypes() }; } -void StorageSystemTimeZones::fillData(MutableColumns & res_columns, const Context &, const SelectQueryInfo &) const +void StorageSystemTimeZones::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const { for (auto * it = auto_time_zones; *it; ++it) res_columns[0]->insert(String(*it)); diff --git a/src/Storages/System/StorageSystemTimeZones.h 
b/src/Storages/System/StorageSystemTimeZones.h index b7544ecb16d..0f68b2de293 100644 --- a/src/Storages/System/StorageSystemTimeZones.h +++ b/src/Storages/System/StorageSystemTimeZones.h @@ -17,7 +17,7 @@ class StorageSystemTimeZones final : public ext::shared_ptr_helper; protected: - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; using IStorageSystemOneBlock::IStorageSystemOneBlock; diff --git a/src/Storages/System/StorageSystemUserDirectories.cpp b/src/Storages/System/StorageSystemUserDirectories.cpp index 519f0c0dcb0..7858af25365 100644 --- a/src/Storages/System/StorageSystemUserDirectories.cpp +++ b/src/Storages/System/StorageSystemUserDirectories.cpp @@ -22,9 +22,9 @@ NamesAndTypesList StorageSystemUserDirectories::getNamesAndTypes() } -void StorageSystemUserDirectories::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemUserDirectories::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - const auto & access_control = context.getAccessControlManager(); + const auto & access_control = context->getAccessControlManager(); auto storages = access_control.getStorages(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemUserDirectories.h b/src/Storages/System/StorageSystemUserDirectories.h index 902c890fe29..0ddb0ad49d8 100644 --- a/src/Storages/System/StorageSystemUserDirectories.h +++ b/src/Storages/System/StorageSystemUserDirectories.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemUsers.cpp b/src/Storages/System/StorageSystemUsers.cpp index eaebf759a85..e60f1372df9 100644 --- a/src/Storages/System/StorageSystemUsers.cpp +++ b/src/Storages/System/StorageSystemUsers.cpp @@ -55,10 +55,10 @@ NamesAndTypesList StorageSystemUsers::getNamesAndTypes() } -void StorageSystemUsers::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const +void StorageSystemUsers::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const { - context.checkAccess(AccessType::SHOW_USERS); - const auto & access_control = context.getAccessControlManager(); + context->checkAccess(AccessType::SHOW_USERS); + const auto & access_control = context->getAccessControlManager(); std::vector ids = access_control.findAll(); size_t column_index = 0; diff --git a/src/Storages/System/StorageSystemUsers.h b/src/Storages/System/StorageSystemUsers.h index 707ea94591d..3c463a23db9 100644 --- a/src/Storages/System/StorageSystemUsers.h +++ b/src/Storages/System/StorageSystemUsers.h @@ -18,7 +18,7 @@ public: protected: friend struct ext::shared_ptr_helper; using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const override; }; } diff --git a/src/Storages/System/StorageSystemZeros.cpp 
b/src/Storages/System/StorageSystemZeros.cpp index ed5ab93369a..d1456d72685 100644 --- a/src/Storages/System/StorageSystemZeros.cpp +++ b/src/Storages/System/StorageSystemZeros.cpp @@ -94,7 +94,7 @@ Pipe StorageSystemZeros::read( const Names & column_names, const StorageMetadataPtr & metadata_snapshot, SelectQueryInfo &, - const Context & /*context*/, + ContextPtr /*context*/, QueryProcessingStage::Enum /*processed_stage*/, size_t max_block_size, unsigned num_streams) diff --git a/src/Storages/System/StorageSystemZeros.h b/src/Storages/System/StorageSystemZeros.h index 04733f550c1..2ccdcf9c944 100644 --- a/src/Storages/System/StorageSystemZeros.h +++ b/src/Storages/System/StorageSystemZeros.h @@ -24,7 +24,7 @@ public: const Names & column_names, const StorageMetadataPtr & /*metadata_snapshot*/, SelectQueryInfo & query_info, - const Context & context, + ContextPtr context, QueryProcessingStage::Enum processed_stage, size_t max_block_size, unsigned num_streams) override; diff --git a/src/Storages/System/StorageSystemZooKeeper.cpp b/src/Storages/System/StorageSystemZooKeeper.cpp index 8fa5ccbd630..1a8aac3b277 100644 --- a/src/Storages/System/StorageSystemZooKeeper.cpp +++ b/src/Storages/System/StorageSystemZooKeeper.cpp @@ -63,7 +63,7 @@ static String pathCorrected(const String & path) } -static bool extractPathImpl(const IAST & elem, Paths & res, const Context & context) +static bool extractPathImpl(const IAST & elem, Paths & res, ContextPtr context) { const auto * function = elem.as(); if (!function) @@ -94,8 +94,8 @@ static bool extractPathImpl(const IAST & elem, Paths & res, const Context & cont { auto interpreter_subquery = interpretSubquery(value, context, {}, {}); auto stream = interpreter_subquery->execute().getInputStream(); - SizeLimits limites(context.getSettingsRef().max_rows_in_set, context.getSettingsRef().max_bytes_in_set, OverflowMode::THROW); - Set set(limites, true, context.getSettingsRef().transform_null_in); + SizeLimits limites(context->getSettingsRef().max_rows_in_set, context->getSettingsRef().max_bytes_in_set, OverflowMode::THROW); + Set set(limites, true, context->getSettingsRef().transform_null_in); set.setHeader(stream->getHeader()); stream->readPrefix(); @@ -165,7 +165,7 @@ static bool extractPathImpl(const IAST & elem, Paths & res, const Context & cont /** Retrieve from the query a condition of the form `path = 'path'`, from conjunctions in the WHERE clause. */ -static Paths extractPath(const ASTPtr & query, const Context & context) +static Paths extractPath(const ASTPtr & query, ContextPtr context) { const auto & select = query->as(); if (!select.where()) @@ -176,13 +176,13 @@ static Paths extractPath(const ASTPtr & query, const Context & context) } -void StorageSystemZooKeeper::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const +void StorageSystemZooKeeper::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const { const Paths & paths = extractPath(query_info.query, context); if (paths.empty()) throw Exception("SELECT from system.zookeeper table must contain condition like path = 'path' or path IN ('path1','path2'...) 
or path IN (subquery) in WHERE clause.", ErrorCodes::BAD_ARGUMENTS); - zkutil::ZooKeeperPtr zookeeper = context.getZooKeeper(); + zkutil::ZooKeeperPtr zookeeper = context->getZooKeeper(); std::unordered_set paths_corrected; for (const auto & path : paths) diff --git a/src/Storages/System/StorageSystemZooKeeper.h b/src/Storages/System/StorageSystemZooKeeper.h index 06611f61dae..226ca79facf 100644 --- a/src/Storages/System/StorageSystemZooKeeper.h +++ b/src/Storages/System/StorageSystemZooKeeper.h @@ -23,7 +23,7 @@ public: protected: using IStorageSystemOneBlock::IStorageSystemOneBlock; - void fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const override; + void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override; }; } diff --git a/src/Storages/TTLDescription.cpp b/src/Storages/TTLDescription.cpp index 41c20b2714b..95ea4f07f18 100644 --- a/src/Storages/TTLDescription.cpp +++ b/src/Storages/TTLDescription.cpp @@ -162,7 +162,7 @@ TTLDescription & TTLDescription::operator=(const TTLDescription & other) TTLDescription TTLDescription::getTTLFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context, + ContextPtr context, const KeyDescription & primary_key) { TTLDescription result; @@ -289,7 +289,7 @@ TTLDescription TTLDescription::getTTLFromAST( { result.recompression_codec = CompressionCodecFactory::instance().validateCodecAndGetPreprocessedAST( - ttl_element->recompression_codec, {}, !context.getSettingsRef().allow_suspicious_codecs); + ttl_element->recompression_codec, {}, !context->getSettingsRef().allow_suspicious_codecs); } } @@ -330,7 +330,7 @@ TTLTableDescription & TTLTableDescription::operator=(const TTLTableDescription & TTLTableDescription TTLTableDescription::getTTLForTableFromAST( const ASTPtr & definition_ast, const ColumnsDescription & columns, - const Context & context, + ContextPtr context, const KeyDescription & primary_key) { TTLTableDescription result; diff --git a/src/Storages/TTLDescription.h b/src/Storages/TTLDescription.h index a2340ad6bcd..6288098b3c5 100644 --- a/src/Storages/TTLDescription.h +++ b/src/Storages/TTLDescription.h @@ -80,7 +80,7 @@ struct TTLDescription /// Parse TTL structure from definition. Able to parse both column and table /// TTLs. 
- static TTLDescription getTTLFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, const Context & context, const KeyDescription & primary_key); + static TTLDescription getTTLFromAST(const ASTPtr & definition_ast, const ColumnsDescription & columns, ContextPtr context, const KeyDescription & primary_key); TTLDescription() = default; TTLDescription(const TTLDescription & other); @@ -117,7 +117,7 @@ struct TTLTableDescription TTLTableDescription & operator=(const TTLTableDescription & other); static TTLTableDescription getTTLForTableFromAST( - const ASTPtr & definition_ast, const ColumnsDescription & columns, const Context & context, const KeyDescription & primary_key); + const ASTPtr & definition_ast, const ColumnsDescription & columns, ContextPtr context, const KeyDescription & primary_key); }; } diff --git a/src/Storages/VirtualColumnUtils.cpp b/src/Storages/VirtualColumnUtils.cpp index 0002d7c9c28..a6a68f598c7 100644 --- a/src/Storages/VirtualColumnUtils.cpp +++ b/src/Storages/VirtualColumnUtils.cpp @@ -122,7 +122,7 @@ void rewriteEntityInAst(ASTPtr ast, const String & column_name, const Field & va } } -bool prepareFilterBlockWithQuery(const ASTPtr & query, const Context & context, Block block, ASTPtr & expression_ast) +bool prepareFilterBlockWithQuery(const ASTPtr & query, ContextPtr context, Block block, ASTPtr & expression_ast) { bool unmodified = true; const auto & select = query->as(); @@ -167,7 +167,7 @@ bool prepareFilterBlockWithQuery(const ASTPtr & query, const Context & context, return unmodified; } -void filterBlockWithQuery(const ASTPtr & query, Block & block, const Context & context, ASTPtr expression_ast) +void filterBlockWithQuery(const ASTPtr & query, Block & block, ContextPtr context, ASTPtr expression_ast) { if (!expression_ast) prepareFilterBlockWithQuery(query, context, block, expression_ast); @@ -191,10 +191,15 @@ void filterBlockWithQuery(const ASTPtr & query, Block & block, const Context & c ConstantFilterDescription constant_filter(*filter_column); if (constant_filter.always_true) + { return; + } if (constant_filter.always_false) + { block = block.cloneEmpty(); + return; + } FilterDescription filter(*filter_column); diff --git a/src/Storages/VirtualColumnUtils.h b/src/Storages/VirtualColumnUtils.h index 78e3d62472e..15783f6e79f 100644 --- a/src/Storages/VirtualColumnUtils.h +++ b/src/Storages/VirtualColumnUtils.h @@ -1,16 +1,16 @@ #pragma once -#include - #include +#include #include #include +#include + namespace DB { -class Context; class NamesAndTypesList; @@ -27,12 +27,12 @@ void rewriteEntityInAst(ASTPtr ast, const String & column_name, const Field & va /// Prepare `expression_ast` to filter block. Returns true if `expression_ast` is not trimmed, that is, /// `block` provides all needed columns for `expression_ast`, else return false. -bool prepareFilterBlockWithQuery(const ASTPtr & query, const Context & context, Block block, ASTPtr & expression_ast); +bool prepareFilterBlockWithQuery(const ASTPtr & query, ContextPtr context, Block block, ASTPtr & expression_ast); /// Leave in the block only the rows that fit under the WHERE clause and the PREWHERE clause of the query. /// Only elements of the outer conjunction are considered, depending only on the columns present in the block. /// If `expression_ast` is passed, use it to filter block. 
-void filterBlockWithQuery(const ASTPtr & query, Block & block, const Context & context, ASTPtr expression_ast = {}); +void filterBlockWithQuery(const ASTPtr & query, Block & block, ContextPtr context, ASTPtr expression_ast = {}); /// Extract from the input stream a set of `name` column values template diff --git a/src/Storages/getStructureOfRemoteTable.cpp b/src/Storages/getStructureOfRemoteTable.cpp index de5f3924ca9..fb828b8f744 100644 --- a/src/Storages/getStructureOfRemoteTable.cpp +++ b/src/Storages/getStructureOfRemoteTable.cpp @@ -29,7 +29,7 @@ ColumnsDescription getStructureOfRemoteTableInShard( const Cluster & cluster, const Cluster::ShardInfo & shard_info, const StorageID & table_id, - const Context & context, + ContextPtr context, const ASTPtr & table_func_ptr) { String query; @@ -59,7 +59,7 @@ ColumnsDescription getStructureOfRemoteTableInShard( ColumnsDescription res; - auto new_context = ClusterProxy::updateSettingsForCluster(cluster, context, context.getSettingsRef()); + auto new_context = ClusterProxy::updateSettingsForCluster(cluster, context, context->getSettingsRef()); /// Expect only needed columns from the result of DESC TABLE. NOTE 'comment' column is ignored for compatibility reasons. Block sample_block @@ -71,7 +71,7 @@ ColumnsDescription getStructureOfRemoteTableInShard( }; /// Execute remote query without restrictions (because it's not real user query, but part of implementation) - auto input = std::make_shared(shard_info.pool, query, sample_block, *new_context); + auto input = std::make_shared(shard_info.pool, query, sample_block, new_context); input->setPoolMode(PoolMode::GET_ONE); if (!table_func_ptr) input->setMainTable(table_id); @@ -104,7 +104,7 @@ ColumnsDescription getStructureOfRemoteTableInShard( column.default_desc.kind = columnDefaultKindFromString(kind_name); String expr_str = (*default_expr)[i].get(); column.default_desc.expression = parseQuery( - expr_parser, expr_str.data(), expr_str.data() + expr_str.size(), "default expression", 0, context.getSettingsRef().max_parser_depth); + expr_parser, expr_str.data(), expr_str.data() + expr_str.size(), "default expression", 0, context->getSettingsRef().max_parser_depth); } res.add(column); @@ -117,7 +117,7 @@ ColumnsDescription getStructureOfRemoteTableInShard( ColumnsDescription getStructureOfRemoteTable( const Cluster & cluster, const StorageID & table_id, - const Context & context, + ContextPtr context, const ASTPtr & table_func_ptr) { const auto & shards_info = cluster.getShardsInfo(); diff --git a/src/Storages/getStructureOfRemoteTable.h b/src/Storages/getStructureOfRemoteTable.h index af418144cb0..3f77236c756 100644 --- a/src/Storages/getStructureOfRemoteTable.h +++ b/src/Storages/getStructureOfRemoteTable.h @@ -16,7 +16,7 @@ struct StorageID; ColumnsDescription getStructureOfRemoteTable( const Cluster & cluster, const StorageID & table_id, - const Context & context, + ContextPtr context, const ASTPtr & table_func_ptr = nullptr); } diff --git a/src/Storages/registerStorages.cpp b/src/Storages/registerStorages.cpp index 0022ee6bd4f..7100afa6909 100644 --- a/src/Storages/registerStorages.cpp +++ b/src/Storages/registerStorages.cpp @@ -62,6 +62,10 @@ void registerStorageEmbeddedRocksDB(StorageFactory & factory); void registerStoragePostgreSQL(StorageFactory & factory); #endif +#if USE_MYSQL || USE_LIBPQXX +void registerStorageExternalDistributed(StorageFactory & factory); +#endif + void registerStorages() { auto & factory = StorageFactory::instance(); @@ -118,6 +122,10 @@ void registerStorages() #if 
USE_LIBPQXX registerStoragePostgreSQL(factory); #endif + + #if USE_MYSQL || USE_LIBPQXX + registerStorageExternalDistributed(factory); + #endif } } diff --git a/src/Storages/tests/gtest_background_executor.cpp b/src/Storages/tests/gtest_background_executor.cpp index 0ddf2d9ea2a..283cdf3fbf8 100644 --- a/src/Storages/tests/gtest_background_executor.cpp +++ b/src/Storages/tests/gtest_background_executor.cpp @@ -17,9 +17,9 @@ static std::atomic counter{0}; class TestJobExecutor : public IBackgroundJobExecutor { public: - explicit TestJobExecutor(Context & context) + explicit TestJobExecutor(ContextPtr local_context) :IBackgroundJobExecutor( - context, + local_context, BackgroundTaskSchedulingSettings{}, {PoolConfig{PoolType::MERGE_MUTATE, 4, CurrentMetrics::BackgroundPoolTask}}) {} @@ -43,7 +43,7 @@ TEST(BackgroundExecutor, TestMetric) const auto & context_holder = getContext(); std::vector executors; for (size_t i = 0; i < 100; ++i) - executors.emplace_back(std::make_unique(const_cast(context_holder.context))); + executors.emplace_back(std::make_unique(context_holder.context)); for (size_t i = 0; i < 100; ++i) executors[i]->start(); diff --git a/src/Storages/tests/gtest_storage_log.cpp b/src/Storages/tests/gtest_storage_log.cpp index cbb894c7420..41c1b6ac75a 100644 --- a/src/Storages/tests/gtest_storage_log.cpp +++ b/src/Storages/tests/gtest_storage_log.cpp @@ -19,7 +19,7 @@ #include #include -#if !__clang__ +#if !defined(__clang__) # pragma GCC diagnostic push # pragma GCC diagnostic ignored "-Wsuggest-override" #endif @@ -70,7 +70,7 @@ using DiskImplementations = testing::Types; TYPED_TEST_SUITE(StorageLogTest, DiskImplementations); // Returns data written to table in Values format. -std::string writeData(int rows, DB::StoragePtr & table, const DB::Context & context) +std::string writeData(int rows, DB::StoragePtr & table, const DB::ContextPtr context) { using namespace DB; auto metadata_snapshot = table->getInMemoryMetadataPtr(); @@ -108,7 +108,7 @@ std::string writeData(int rows, DB::StoragePtr & table, const DB::Context & cont } // Returns all table data in Values format. 
-std::string readData(DB::StoragePtr & table, const DB::Context & context) +std::string readData(DB::StoragePtr & table, const DB::ContextPtr context) { using namespace DB; auto metadata_snapshot = table->getInMemoryMetadataPtr(); diff --git a/src/Storages/tests/gtest_transform_query_for_external_database.cpp b/src/Storages/tests/gtest_transform_query_for_external_database.cpp index d40c62fef60..d774fd144cf 100644 --- a/src/Storages/tests/gtest_transform_query_for_external_database.cpp +++ b/src/Storages/tests/gtest_transform_query_for_external_database.cpp @@ -22,7 +22,7 @@ struct State { State(const State&) = delete; - Context context; + ContextPtr context; static const State & instance() { @@ -74,7 +74,7 @@ private: }; explicit State() - : context(getContext().context) + : context(Context::createCopy(getContext().context)) { tryRegisterFunctions(); DatabasePtr database = std::make_shared("test", context); @@ -88,7 +88,7 @@ private: StorageMemory::create(StorageID(db_name, table_name), ColumnsDescription{getColumns()}, ConstraintsDescription{})); } DatabaseCatalog::instance().attachDatabase(database->getDatabaseName(), database); - context.setCurrentDatabase("test"); + context->setCurrentDatabase("test"); } }; diff --git a/src/Storages/transformQueryForExternalDatabase.cpp b/src/Storages/transformQueryForExternalDatabase.cpp index 59d357f72e6..b3fe788d874 100644 --- a/src/Storages/transformQueryForExternalDatabase.cpp +++ b/src/Storages/transformQueryForExternalDatabase.cpp @@ -88,7 +88,7 @@ public: } }; -void replaceConstantExpressions(ASTPtr & node, const Context & context, const NamesAndTypesList & all_columns) +void replaceConstantExpressions(ASTPtr & node, ContextPtr context, const NamesAndTypesList & all_columns) { auto syntax_result = TreeRewriter(context).analyze(node, all_columns); Block block_with_constants = KeyCondition::getBlockWithConstants(node, syntax_result, context); @@ -239,7 +239,7 @@ String transformQueryForExternalDatabase( IdentifierQuotingStyle identifier_quoting_style, const String & database, const String & table, - const Context & context) + ContextPtr context) { auto clone_query = query_info.query->clone(); const Names used_columns = query_info.syntax_analyzer_result->requiredSourceColumns(); diff --git a/src/Storages/transformQueryForExternalDatabase.h b/src/Storages/transformQueryForExternalDatabase.h index c760c628970..215afab8b30 100644 --- a/src/Storages/transformQueryForExternalDatabase.h +++ b/src/Storages/transformQueryForExternalDatabase.h @@ -4,13 +4,13 @@ #include #include #include +#include namespace DB { class IAST; -class Context; /** For given ClickHouse query, * creates another query in a form of @@ -29,6 +29,6 @@ String transformQueryForExternalDatabase( IdentifierQuotingStyle identifier_quoting_style, const String & database, const String & table, - const Context & context); + ContextPtr context); } diff --git a/src/Storages/ya.make b/src/Storages/ya.make index e3e1807c566..ba294b05857 100644 --- a/src/Storages/ya.make +++ b/src/Storages/ya.make @@ -57,6 +57,7 @@ SRCS( MergeTree/MergeTreeDataPartWriterWide.cpp MergeTree/MergeTreeDataSelectExecutor.cpp MergeTree/MergeTreeDataWriter.cpp + MergeTree/MergeTreeDeduplicationLog.cpp MergeTree/MergeTreeIndexAggregatorBloomFilter.cpp MergeTree/MergeTreeIndexBloomFilter.cpp MergeTree/MergeTreeIndexConditionBloomFilter.cpp @@ -117,6 +118,7 @@ SRCS( StorageBuffer.cpp StorageDictionary.cpp StorageDistributed.cpp + StorageExternalDistributed.cpp StorageFactory.cpp StorageFile.cpp StorageGenerateRandom.cpp 
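The storage-layer hunks above and the table-function hunks that follow all apply the same mechanical migration: interfaces stop taking `const Context &` and instead accept a shared `ContextPtr`, so member access changes from `context.` to `context->`, long-lived objects keep their own copy via `Context::createCopy(...)` (as in the `TablesSource` change in StorageSystemTables.cpp, where the member `const Context context;` becomes `ContextPtr context;`), and some argument structs pick the context up through a `WithContext` base (as in `StorageFile::CommonArguments`). The sketch below only illustrates that calling convention with simplified stand-in types; it is not the real `DB::Context` API.

```cpp
// Minimal sketch of the calling-convention change shown in this diff, using
// simplified stand-in types (this is NOT the real DB::Context API).
#include <cstddef>
#include <memory>

struct Settings
{
    std::size_t max_parser_depth = 1000;
};

class Context
{
public:
    const Settings & getSettingsRef() const { return settings; }

    // Stand-in for Context::createCopy(): produce an owned copy that the
    // caller can keep beyond the lifetime of the original pointer.
    static std::shared_ptr<Context> createCopy(const std::shared_ptr<const Context> & other)
    {
        return std::make_shared<Context>(*other);
    }

private:
    Settings settings;
};

/// After the refactoring, interfaces pass a shared pointer instead of `const Context &`.
using ContextPtr = std::shared_ptr<const Context>;

// Pattern used by the TablesSource change in StorageSystemTables.cpp:
// the stored context becomes a ContextPtr initialised with Context::createCopy(),
// and all access goes through `->` instead of `.`.
class ExampleSource
{
public:
    explicit ExampleSource(ContextPtr context_)
        : context(Context::createCopy(context_))
    {
    }

    std::size_t maxParserDepth() const
    {
        return context->getSettingsRef().max_parser_depth;  // was context.getSettingsRef()
    }

private:
    ContextPtr context;  // was `const Context context;` before the refactoring
};

int main()
{
    ContextPtr query_context = std::make_shared<Context>();
    ExampleSource source(query_context);
    return source.maxParserDepth() == 1000 ? 0 : 1;
}
```

The practical effect visible throughout these hunks is that objects which may outlive the caller's reference (background sources, storages, bridge helpers) now hold a shared `ContextPtr` produced by `Context::createCopy` rather than a by-value `const Context` copy or a bare reference, while short-lived call sites simply switch from `.` to `->`.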
diff --git a/src/TableFunctions/ITableFunction.cpp b/src/TableFunctions/ITableFunction.cpp index b637838c6da..218d86fe4a2 100644 --- a/src/TableFunctions/ITableFunction.cpp +++ b/src/TableFunctions/ITableFunction.cpp @@ -14,11 +14,11 @@ namespace ProfileEvents namespace DB { -StoragePtr ITableFunction::execute(const ASTPtr & ast_function, const Context & context, const std::string & table_name, +StoragePtr ITableFunction::execute(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const { ProfileEvents::increment(ProfileEvents::TableFunctionExecute); - context.checkAccess(AccessType::CREATE_TEMPORARY_TABLE | StorageFactory::instance().getSourceAccessType(getStorageTypeName())); + context->checkAccess(AccessType::CREATE_TEMPORARY_TABLE | StorageFactory::instance().getSourceAccessType(getStorageTypeName())); if (cached_columns.empty()) return executeImpl(ast_function, context, table_name, std::move(cached_columns)); @@ -26,12 +26,12 @@ StoragePtr ITableFunction::execute(const ASTPtr & ast_function, const Context & /// We have table structure, so it's CREATE AS table_function(). /// We should use global context here because there will be no query context on server startup /// and because storage lifetime is bigger than query context lifetime. - const Context & global_context = context.getGlobalContext(); + auto global_context = context->getGlobalContext(); if (hasStaticStructure() && cached_columns == getActualTableStructure(context)) return executeImpl(ast_function, global_context, table_name, std::move(cached_columns)); auto this_table_function = shared_from_this(); - auto get_storage = [=, &global_context]() -> StoragePtr + auto get_storage = [=]() -> StoragePtr { return this_table_function->executeImpl(ast_function, global_context, table_name, cached_columns); }; diff --git a/src/TableFunctions/ITableFunction.h b/src/TableFunctions/ITableFunction.h index eb5e1618b3c..56147ffd598 100644 --- a/src/TableFunctions/ITableFunction.h +++ b/src/TableFunctions/ITableFunction.h @@ -47,18 +47,20 @@ public: /// Returns false if storage returned by table function supports type conversion (e.g. StorageDistributed) virtual bool needStructureConversion() const { return true; } - virtual void parseArguments(const ASTPtr & /*ast_function*/, const Context & /*context*/) {} + virtual void parseArguments(const ASTPtr & /*ast_function*/, ContextPtr /*context*/) {} /// Returns actual table structure probably requested from remote server, may fail - virtual ColumnsDescription getActualTableStructure(const Context & /*context*/) const = 0; + virtual ColumnsDescription getActualTableStructure(ContextPtr /*context*/) const = 0; /// Create storage according to the query. 
- StoragePtr execute(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns_ = {}) const; + StoragePtr + execute(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns_ = {}) const; virtual ~ITableFunction() = default; private: - virtual StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const = 0; + virtual StoragePtr executeImpl( + const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const = 0; virtual const char * getStorageTypeName() const = 0; }; diff --git a/src/TableFunctions/ITableFunctionFileLike.cpp b/src/TableFunctions/ITableFunctionFileLike.cpp index 1349c166474..44a917a0f00 100644 --- a/src/TableFunctions/ITableFunctionFileLike.cpp +++ b/src/TableFunctions/ITableFunctionFileLike.cpp @@ -26,7 +26,7 @@ namespace ErrorCodes extern const int BAD_ARGUMENTS; } -void ITableFunctionFileLike::parseArguments(const ASTPtr & ast_function, const Context & context) +void ITableFunctionFileLike::parseArguments(const ASTPtr & ast_function, ContextPtr context) { /// Parse args ASTs & args_func = ast_function->children; @@ -64,20 +64,20 @@ void ITableFunctionFileLike::parseArguments(const ASTPtr & ast_function, const C compression_method = args[3]->as().value.safeGet(); } -StoragePtr ITableFunctionFileLike::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr ITableFunctionFileLike::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); - StoragePtr storage = getStorage(filename, format, columns, const_cast(context), table_name, compression_method); + StoragePtr storage = getStorage(filename, format, columns, context, table_name, compression_method); storage->startup(); return storage; } -ColumnsDescription ITableFunctionFileLike::getActualTableStructure(const Context & context) const +ColumnsDescription ITableFunctionFileLike::getActualTableStructure(ContextPtr context) const { if (structure.empty()) { assert(getName() == "file" && format == "Distributed"); - Strings paths = StorageFile::getPathsList(filename, context.getUserFilesPath(), context); + Strings paths = StorageFile::getPathsList(filename, context->getUserFilesPath(), context); if (paths.empty()) throw Exception("Cannot get table structure from file, because no files match specified name", ErrorCodes::INCORRECT_FILE_NAME); auto read_stream = StorageDistributedDirectoryMonitor::createStreamFromFile(paths[0]); diff --git a/src/TableFunctions/ITableFunctionFileLike.h b/src/TableFunctions/ITableFunctionFileLike.h index f1c648ac0aa..7c96ce610b3 100644 --- a/src/TableFunctions/ITableFunctionFileLike.h +++ b/src/TableFunctions/ITableFunctionFileLike.h @@ -13,15 +13,15 @@ class Context; class ITableFunctionFileLike : public ITableFunction { private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; virtual StoragePtr getStorage( - const String & source, const 
String & format, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method) const = 0; - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; bool hasStaticStructure() const override { return true; } diff --git a/src/TableFunctions/ITableFunctionXDBC.cpp b/src/TableFunctions/ITableFunctionXDBC.cpp index 21c78d199db..51431a1e3a6 100644 --- a/src/TableFunctions/ITableFunctionXDBC.cpp +++ b/src/TableFunctions/ITableFunctionXDBC.cpp @@ -28,7 +28,7 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -void ITableFunctionXDBC::parseArguments(const ASTPtr & ast_function, const Context & context) +void ITableFunctionXDBC::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto & args_func = ast_function->as(); @@ -57,17 +57,17 @@ void ITableFunctionXDBC::parseArguments(const ASTPtr & ast_function, const Conte } } -void ITableFunctionXDBC::startBridgeIfNot(const Context & context) const +void ITableFunctionXDBC::startBridgeIfNot(ContextPtr context) const { if (!helper) { /// Have to const_cast, because bridges store their commands inside context - helper = createBridgeHelper(const_cast(context), context.getSettingsRef().http_receive_timeout.value, connection_string); + helper = createBridgeHelper(context, context->getSettingsRef().http_receive_timeout.value, connection_string); helper->startBridgeSync(); } } -ColumnsDescription ITableFunctionXDBC::getActualTableStructure(const Context & context) const +ColumnsDescription ITableFunctionXDBC::getActualTableStructure(ContextPtr context) const { startBridgeIfNot(context); @@ -78,7 +78,7 @@ ColumnsDescription ITableFunctionXDBC::getActualTableStructure(const Context & c columns_info_uri.addQueryParameter("schema", schema_name); columns_info_uri.addQueryParameter("table", remote_table_name); - const auto use_nulls = context.getSettingsRef().external_table_functions_use_nulls; + const auto use_nulls = context->getSettingsRef().external_table_functions_use_nulls; columns_info_uri.addQueryParameter("external_table_functions_use_nulls", Poco::NumberFormatter::format(use_nulls)); @@ -91,7 +91,7 @@ ColumnsDescription ITableFunctionXDBC::getActualTableStructure(const Context & c return ColumnsDescription{columns}; } -StoragePtr ITableFunctionXDBC::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr ITableFunctionXDBC::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { startBridgeIfNot(context); auto columns = getActualTableStructure(context); diff --git a/src/TableFunctions/ITableFunctionXDBC.h b/src/TableFunctions/ITableFunctionXDBC.h index f3ff64c2f2d..a58d574513f 100644 --- a/src/TableFunctions/ITableFunctionXDBC.h +++ b/src/TableFunctions/ITableFunctionXDBC.h @@ -3,7 +3,7 @@ #include #include #include -#include +#include #if !defined(ARCADIA_BUILD) # include @@ -18,18 +18,18 @@ namespace DB class ITableFunctionXDBC : public ITableFunction { private: - StoragePtr executeImpl(const 
ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; /* A factory method to create bridge helper, that will assist in remote interaction */ - virtual BridgeHelperPtr createBridgeHelper(Context & context, + virtual BridgeHelperPtr createBridgeHelper(ContextPtr context, const Poco::Timespan & http_timeout_, const std::string & connection_string_) const = 0; - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; - void startBridgeIfNot(const Context & context) const; + void startBridgeIfNot(ContextPtr context) const; String connection_string; String schema_name; @@ -47,7 +47,7 @@ public: } private: - BridgeHelperPtr createBridgeHelper(Context & context, + BridgeHelperPtr createBridgeHelper(ContextPtr context, const Poco::Timespan & http_timeout_, const std::string & connection_string_) const override { @@ -67,7 +67,7 @@ public: } private: - BridgeHelperPtr createBridgeHelper(Context & context, + BridgeHelperPtr createBridgeHelper(ContextPtr context, const Poco::Timespan & http_timeout_, const std::string & connection_string_) const override { diff --git a/src/TableFunctions/TableFunctionDictionary.cpp b/src/TableFunctions/TableFunctionDictionary.cpp index 722ffccc07d..46d3183bba9 100644 --- a/src/TableFunctions/TableFunctionDictionary.cpp +++ b/src/TableFunctions/TableFunctionDictionary.cpp @@ -19,7 +19,7 @@ namespace ErrorCodes extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -void TableFunctionDictionary::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionDictionary::parseArguments(const ASTPtr & ast_function, ContextPtr context) { // Parse args ASTs & args_func = ast_function->children; @@ -38,9 +38,9 @@ void TableFunctionDictionary::parseArguments(const ASTPtr & ast_function, const dictionary_name = args[0]->as().value.safeGet(); } -ColumnsDescription TableFunctionDictionary::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionDictionary::getActualTableStructure(ContextPtr context) const { - const ExternalDictionariesLoader & external_loader = context.getExternalDictionariesLoader(); + const ExternalDictionariesLoader & external_loader = context->getExternalDictionariesLoader(); auto dictionary_structure = external_loader.getDictionaryStructure(dictionary_name, context); auto result = ColumnsDescription(StorageDictionary::getNamesAndTypes(dictionary_structure)); @@ -48,7 +48,7 @@ ColumnsDescription TableFunctionDictionary::getActualTableStructure(const Contex } StoragePtr TableFunctionDictionary::executeImpl( - const ASTPtr &, const Context & context, const std::string & table_name, ColumnsDescription) const + const ASTPtr &, ContextPtr context, const std::string & table_name, ColumnsDescription) const { StorageID dict_id(getDatabaseName(), table_name); auto dictionary_table_structure = getActualTableStructure(context); diff --git a/src/TableFunctions/TableFunctionDictionary.h b/src/TableFunctions/TableFunctionDictionary.h index 8c518eb7929..aed435bebfd 100644 --- 
a/src/TableFunctions/TableFunctionDictionary.h +++ b/src/TableFunctions/TableFunctionDictionary.h @@ -20,11 +20,11 @@ public: return name; } - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription) const override; const char * getStorageTypeName() const override { return "Dictionary"; } diff --git a/src/TableFunctions/TableFunctionFactory.cpp b/src/TableFunctions/TableFunctionFactory.cpp index e8f844e8074..15e61354f6d 100644 --- a/src/TableFunctions/TableFunctionFactory.cpp +++ b/src/TableFunctions/TableFunctionFactory.cpp @@ -31,7 +31,7 @@ void TableFunctionFactory::registerFunction(const std::string & name, Value crea TableFunctionPtr TableFunctionFactory::get( const ASTPtr & ast_function, - const Context & context) const + ContextPtr context) const { const auto * table_function = ast_function->as(); auto res = tryGet(table_function->name, context); @@ -50,7 +50,7 @@ TableFunctionPtr TableFunctionFactory::get( TableFunctionPtr TableFunctionFactory::tryGet( const std::string & name_param, - const Context &) const + ContextPtr) const { String name = getAliasToOrName(name_param); TableFunctionPtr res; @@ -70,7 +70,7 @@ TableFunctionPtr TableFunctionFactory::tryGet( if (CurrentThread::isInitialized()) { - const auto * query_context = CurrentThread::get().getQueryContext(); + auto query_context = CurrentThread::get().getQueryContext(); if (query_context && query_context->getSettingsRef().log_queries) query_context->addQueryFactoriesInfo(Context::QueryLogFactories::TableFunction, name); } diff --git a/src/TableFunctions/TableFunctionFactory.h b/src/TableFunctions/TableFunctionFactory.h index 820b5eb1c7b..59b4ffb9fd5 100644 --- a/src/TableFunctions/TableFunctionFactory.h +++ b/src/TableFunctions/TableFunctionFactory.h @@ -41,10 +41,10 @@ public: } /// Throws an exception if not found. - TableFunctionPtr get(const ASTPtr & ast_function, const Context & context) const; + TableFunctionPtr get(const ASTPtr & ast_function, ContextPtr context) const; /// Returns nullptr if not found. - TableFunctionPtr tryGet(const std::string & name, const Context & context) const; + TableFunctionPtr tryGet(const std::string & name, ContextPtr context) const; bool isTableFunctionName(const std::string & name) const; diff --git a/src/TableFunctions/TableFunctionFile.cpp b/src/TableFunctions/TableFunctionFile.cpp index 13ac6dc2145..6ecb5606d56 100644 --- a/src/TableFunctions/TableFunctionFile.cpp +++ b/src/TableFunctions/TableFunctionFile.cpp @@ -12,20 +12,23 @@ namespace DB { StoragePtr TableFunctionFile::getStorage(const String & source, const String & format_, const ColumnsDescription & columns, - Context & global_context, const std::string & table_name, + ContextPtr global_context, const std::string & table_name, const std::string & compression_method_) const { // For `file` table function, we are going to use format settings from the // query context. 
- StorageFile::CommonArguments args{StorageID(getDatabaseName(), table_name), + StorageFile::CommonArguments args + { + WithContext(global_context), + StorageID(getDatabaseName(), table_name), format_, std::nullopt /*format settings*/, compression_method_, columns, ConstraintsDescription{}, - global_context}; + }; - return StorageFile::create(source, global_context.getUserFilesPath(), args); + return StorageFile::create(source, global_context->getUserFilesPath(), args); } void registerTableFunctionFile(TableFunctionFactory & factory) diff --git a/src/TableFunctions/TableFunctionFile.h b/src/TableFunctions/TableFunctionFile.h index 02704e4bf7f..460656a7218 100644 --- a/src/TableFunctions/TableFunctionFile.h +++ b/src/TableFunctions/TableFunctionFile.h @@ -5,7 +5,6 @@ namespace DB { -class Context; /* file(path, format, structure) - creates a temporary storage from file * @@ -23,7 +22,7 @@ public: private: StoragePtr getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const std::string & compression_method_) const override; const char * getStorageTypeName() const override { return "File"; } };} diff --git a/src/TableFunctions/TableFunctionGenerateRandom.cpp b/src/TableFunctions/TableFunctionGenerateRandom.cpp index 15c2c2bfa1f..b19be7bd7a3 100644 --- a/src/TableFunctions/TableFunctionGenerateRandom.cpp +++ b/src/TableFunctions/TableFunctionGenerateRandom.cpp @@ -26,7 +26,7 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -void TableFunctionGenerateRandom::parseArguments(const ASTPtr & ast_function, const Context & /*context*/) +void TableFunctionGenerateRandom::parseArguments(const ASTPtr & ast_function, ContextPtr /*context*/) { ASTs & args_func = ast_function->children; @@ -74,12 +74,12 @@ void TableFunctionGenerateRandom::parseArguments(const ASTPtr & ast_function, co max_array_length = args[3]->as().value.safeGet(); } -ColumnsDescription TableFunctionGenerateRandom::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionGenerateRandom::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionGenerateRandom::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionGenerateRandom::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); auto res = StorageGenerateRandom::create(StorageID(getDatabaseName(), table_name), columns, max_array_length, max_string_length, random_seed); diff --git a/src/TableFunctions/TableFunctionGenerateRandom.h b/src/TableFunctions/TableFunctionGenerateRandom.h index 1d8839ba6d4..bcad11156be 100644 --- a/src/TableFunctions/TableFunctionGenerateRandom.h +++ b/src/TableFunctions/TableFunctionGenerateRandom.h @@ -15,11 +15,11 @@ public: std::string getName() const override { return name; } bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const 
std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "GenerateRandom"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String structure; UInt64 max_string_length = 10; diff --git a/src/TableFunctions/TableFunctionHDFS.cpp b/src/TableFunctions/TableFunctionHDFS.cpp index 700cb93ca06..714c6ea1f59 100644 --- a/src/TableFunctions/TableFunctionHDFS.cpp +++ b/src/TableFunctions/TableFunctionHDFS.cpp @@ -10,7 +10,7 @@ namespace DB { StoragePtr TableFunctionHDFS::getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method_) const { return StorageHDFS::create( diff --git a/src/TableFunctions/TableFunctionHDFS.h b/src/TableFunctions/TableFunctionHDFS.h index 47e040f7593..d9ee9b47868 100644 --- a/src/TableFunctions/TableFunctionHDFS.h +++ b/src/TableFunctions/TableFunctionHDFS.h @@ -26,7 +26,7 @@ public: private: StoragePtr getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method_) const override; const char * getStorageTypeName() const override { return "HDFS"; } }; diff --git a/src/TableFunctions/TableFunctionInput.cpp b/src/TableFunctions/TableFunctionInput.cpp index 41c41835434..677a6ff3ce4 100644 --- a/src/TableFunctions/TableFunctionInput.cpp +++ b/src/TableFunctions/TableFunctionInput.cpp @@ -22,7 +22,7 @@ namespace ErrorCodes extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -void TableFunctionInput::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionInput::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto * function = ast_function->as(); @@ -38,12 +38,12 @@ void TableFunctionInput::parseArguments(const ASTPtr & ast_function, const Conte structure = evaluateConstantExpressionOrIdentifierAsLiteral(args[0], context)->as().value.safeGet(); } -ColumnsDescription TableFunctionInput::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionInput::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionInput::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionInput::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto storage = StorageInput::create(StorageID(getDatabaseName(), table_name), getActualTableStructure(context)); storage->startup(); diff --git a/src/TableFunctions/TableFunctionInput.h b/src/TableFunctions/TableFunctionInput.h index 5809d48a77c..5953693e711 100644 --- a/src/TableFunctions/TableFunctionInput.h +++ b/src/TableFunctions/TableFunctionInput.h @@ -18,11 
+18,11 @@ public: bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Input"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String structure; }; diff --git a/src/TableFunctions/TableFunctionMerge.cpp b/src/TableFunctions/TableFunctionMerge.cpp index c5fb9a7686d..6d10b0d04b6 100644 --- a/src/TableFunctions/TableFunctionMerge.cpp +++ b/src/TableFunctions/TableFunctionMerge.cpp @@ -33,7 +33,7 @@ namespace } -void TableFunctionMerge::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionMerge::parseArguments(const ASTPtr & ast_function, ContextPtr context) { ASTs & args_func = ast_function->children; @@ -57,7 +57,7 @@ void TableFunctionMerge::parseArguments(const ASTPtr & ast_function, const Conte } -const Strings & TableFunctionMerge::getSourceTables(const Context & context) const +const Strings & TableFunctionMerge::getSourceTables(ContextPtr context) const { if (source_tables) return *source_tables; @@ -67,7 +67,7 @@ const Strings & TableFunctionMerge::getSourceTables(const Context & context) con OptimizedRegularExpression re(source_table_regexp); auto table_name_match = [&](const String & table_name_) { return re.match(table_name_); }; - auto access = context.getAccess(); + auto access = context->getAccess(); bool granted_show_on_all_tables = access->isGranted(AccessType::SHOW_TABLES, source_database); bool granted_select_on_all_tables = access->isGranted(AccessType::SELECT, source_database); @@ -91,7 +91,7 @@ const Strings & TableFunctionMerge::getSourceTables(const Context & context) con } -ColumnsDescription TableFunctionMerge::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionMerge::getActualTableStructure(ContextPtr context) const { for (const auto & table_name : getSourceTables(context)) { @@ -104,7 +104,7 @@ ColumnsDescription TableFunctionMerge::getActualTableStructure(const Context & c } -StoragePtr TableFunctionMerge::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionMerge::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto res = StorageMerge::create( StorageID(getDatabaseName(), table_name), diff --git a/src/TableFunctions/TableFunctionMerge.h b/src/TableFunctions/TableFunctionMerge.h index 8f9f4522d17..04027b9d76a 100644 --- a/src/TableFunctions/TableFunctionMerge.h +++ b/src/TableFunctions/TableFunctionMerge.h @@ -16,12 +16,12 @@ public: static constexpr auto name = "merge"; std::string getName() const override { return name; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr 
executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Merge"; } - const Strings & getSourceTables(const Context & context) const; - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + const Strings & getSourceTables(ContextPtr context) const; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String source_database; String source_table_regexp; diff --git a/src/TableFunctions/TableFunctionMySQL.cpp b/src/TableFunctions/TableFunctionMySQL.cpp index d6a62dc68b4..7d3fca58451 100644 --- a/src/TableFunctions/TableFunctionMySQL.cpp +++ b/src/TableFunctions/TableFunctionMySQL.cpp @@ -1,29 +1,30 @@ #if !defined(ARCADIA_BUILD) -# include "config_core.h" +#include "config_core.h" #endif #if USE_MYSQL -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include -# include "registerTableFunctions.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "registerTableFunctions.h" -# include // for fetchTablesColumnsList +#include // for fetchTablesColumnsList +#include namespace DB @@ -37,7 +38,7 @@ namespace ErrorCodes extern const int UNKNOWN_TABLE; } -void TableFunctionMySQL::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionMySQL::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto & args_func = ast_function->as(); @@ -59,6 +60,11 @@ void TableFunctionMySQL::parseArguments(const ASTPtr & ast_function, const Conte user_name = args[3]->as().value.safeGet(); password = args[4]->as().value.safeGet(); + /// Split into replicas if needed. 
3306 is the default MySQL port number + size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements; + auto addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 3306); + pool.emplace(remote_database_name, addresses, user_name, password); + if (args.size() >= 6) replace_query = args[5]->as().value.safeGet() > 0; if (args.size() == 7) @@ -68,33 +74,27 @@ void TableFunctionMySQL::parseArguments(const ASTPtr & ast_function, const Conte throw Exception( "Only one of 'replace_query' and 'on_duplicate_clause' can be specified, or none of them", ErrorCodes::BAD_ARGUMENTS); - - /// 3306 is the default MySQL port number - parsed_host_port = parseAddress(host_port, 3306); } -ColumnsDescription TableFunctionMySQL::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionMySQL::getActualTableStructure(ContextPtr context) const { - assert(!parsed_host_port.first.empty()); - if (!pool) - pool.emplace(remote_database_name, parsed_host_port.first, user_name, password, parsed_host_port.second); - - const auto & settings = context.getSettingsRef(); - const auto tables_and_columns = fetchTablesColumnsList(*pool, remote_database_name, {remote_table_name}, settings.external_table_functions_use_nulls, settings.mysql_datatypes_support_level); + const auto & settings = context->getSettingsRef(); + const auto tables_and_columns = fetchTablesColumnsList(*pool, remote_database_name, {remote_table_name}, settings, settings.mysql_datatypes_support_level); const auto columns = tables_and_columns.find(remote_table_name); if (columns == tables_and_columns.end()) - throw Exception("MySQL table " + backQuoteIfNeed(remote_database_name) + "." + backQuoteIfNeed(remote_table_name) + " doesn't exist.", ErrorCodes::UNKNOWN_TABLE); + throw Exception("MySQL table " + (remote_database_name.empty() ? 
"" : (backQuote(remote_database_name) + ".")) + + backQuote(remote_table_name) + " doesn't exist.", ErrorCodes::UNKNOWN_TABLE); return ColumnsDescription{columns->second}; } -StoragePtr TableFunctionMySQL::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionMySQL::executeImpl( + const ASTPtr & /*ast_function*/, + ContextPtr context, + const std::string & table_name, + ColumnsDescription /*cached_columns*/) const { - assert(!parsed_host_port.first.empty()); - if (!pool) - pool.emplace(remote_database_name, parsed_host_port.first, user_name, password, parsed_host_port.second); - auto columns = getActualTableStructure(context); auto res = StorageMySQL::create( diff --git a/src/TableFunctions/TableFunctionMySQL.h b/src/TableFunctions/TableFunctionMySQL.h index 1fe5a4aa4ac..64c7d56cf2a 100644 --- a/src/TableFunctions/TableFunctionMySQL.h +++ b/src/TableFunctions/TableFunctionMySQL.h @@ -24,13 +24,12 @@ public: return name; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "MySQL"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; - std::pair parsed_host_port; String remote_database_name; String remote_table_name; String user_name; @@ -38,7 +37,7 @@ private: bool replace_query = false; String on_duplicate_clause; - mutable std::optional pool; + mutable std::optional pool; }; } diff --git a/src/TableFunctions/TableFunctionNull.cpp b/src/TableFunctions/TableFunctionNull.cpp index 6abe0319394..334d7c3dcbd 100644 --- a/src/TableFunctions/TableFunctionNull.cpp +++ b/src/TableFunctions/TableFunctionNull.cpp @@ -17,7 +17,7 @@ namespace ErrorCodes extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -void TableFunctionNull::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionNull::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto * function = ast_function->as(); if (!function || !function->arguments) @@ -30,12 +30,12 @@ void TableFunctionNull::parseArguments(const ASTPtr & ast_function, const Contex structure = evaluateConstantExpressionOrIdentifierAsLiteral(arguments[0], context)->as()->value.safeGet(); } -ColumnsDescription TableFunctionNull::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionNull::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionNull::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionNull::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); auto res = StorageNull::create(StorageID(getDatabaseName(), table_name), 
columns, ConstraintsDescription()); diff --git a/src/TableFunctions/TableFunctionNull.h b/src/TableFunctions/TableFunctionNull.h index 4d4cecb0292..6734fb8efb6 100644 --- a/src/TableFunctions/TableFunctionNull.h +++ b/src/TableFunctions/TableFunctionNull.h @@ -17,11 +17,11 @@ public: static constexpr auto name = "null"; std::string getName() const override { return name; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const String & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const String & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Null"; } - void parseArguments(const ASTPtr & ast_function, const Context & context) override; - ColumnsDescription getActualTableStructure(const Context & context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; String structure; }; diff --git a/src/TableFunctions/TableFunctionNumbers.cpp b/src/TableFunctions/TableFunctionNumbers.cpp index 594075b1c82..01ffd2b2e3d 100644 --- a/src/TableFunctions/TableFunctionNumbers.cpp +++ b/src/TableFunctions/TableFunctionNumbers.cpp @@ -23,14 +23,14 @@ namespace ErrorCodes template -ColumnsDescription TableFunctionNumbers::getActualTableStructure(const Context & /*context*/) const +ColumnsDescription TableFunctionNumbers::getActualTableStructure(ContextPtr /*context*/) const { /// NOTE: https://bugs.llvm.org/show_bug.cgi?id=47418 return ColumnsDescription{{{"number", std::make_shared()}}}; } template -StoragePtr TableFunctionNumbers::executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionNumbers::executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { if (const auto * function = ast_function->as()) { @@ -56,7 +56,7 @@ void registerTableFunctionNumbers(TableFunctionFactory & factory) } template -UInt64 TableFunctionNumbers::evaluateArgument(const Context & context, ASTPtr & argument) const +UInt64 TableFunctionNumbers::evaluateArgument(ContextPtr context, ASTPtr & argument) const { const auto & [field, type] = evaluateConstantExpression(argument, context); diff --git a/src/TableFunctions/TableFunctionNumbers.h b/src/TableFunctions/TableFunctionNumbers.h index c27bb7319ba..6cee752390e 100644 --- a/src/TableFunctions/TableFunctionNumbers.h +++ b/src/TableFunctions/TableFunctionNumbers.h @@ -19,12 +19,12 @@ public: std::string getName() const override { return name; } bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "SystemNumbers"; } - UInt64 evaluateArgument(const Context & context, ASTPtr & argument) const; + UInt64 evaluateArgument(ContextPtr context, ASTPtr & argument) const; - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription 
getActualTableStructure(ContextPtr context) const override; }; diff --git a/src/TableFunctions/TableFunctionPostgreSQL.cpp b/src/TableFunctions/TableFunctionPostgreSQL.cpp index f20aae11648..6e7ba1825fc 100644 --- a/src/TableFunctions/TableFunctionPostgreSQL.cpp +++ b/src/TableFunctions/TableFunctionPostgreSQL.cpp @@ -12,6 +12,7 @@ #include #include #include +#include namespace DB @@ -25,21 +26,21 @@ namespace ErrorCodes StoragePtr TableFunctionPostgreSQL::executeImpl(const ASTPtr & /*ast_function*/, - const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const + ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); auto result = std::make_shared( - StorageID(getDatabaseName(), table_name), remote_table_name, - connection_pool, columns, ConstraintsDescription{}, context, remote_table_schema); + StorageID(getDatabaseName(), table_name), *connection_pool, remote_table_name, + columns, ConstraintsDescription{}, context, remote_table_schema); result->startup(); return result; } -ColumnsDescription TableFunctionPostgreSQL::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionPostgreSQL::getActualTableStructure(ContextPtr context) const { - const bool use_nulls = context.getSettingsRef().external_table_functions_use_nulls; + const bool use_nulls = context->getSettingsRef().external_table_functions_use_nulls; auto columns = fetchPostgreSQLTableStructure( connection_pool->get(), remote_table_schema.empty() ? doubleQuoteString(remote_table_name) @@ -50,7 +51,7 @@ ColumnsDescription TableFunctionPostgreSQL::getActualTableStructure(const Contex } -void TableFunctionPostgreSQL::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionPostgreSQL::parseArguments(const ASTPtr & ast_function, ContextPtr context) { const auto & func_args = ast_function->as(); @@ -67,16 +68,19 @@ void TableFunctionPostgreSQL::parseArguments(const ASTPtr & ast_function, const for (auto & arg : args) arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); - auto parsed_host_port = parseAddress(args[0]->as().value.safeGet(), 5432); + /// Split into replicas if needed. 5432 is a default postgresql port. 
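Both the `mysql()` (default port 3306) and the `postgresql()` (default port 5432) table functions now turn their address argument into a list of replica addresses via `parseRemoteDescriptionForExternalDatabase`, limited by the `glob_expansion_max_elements` setting, instead of parsing a single `host:port` pair. The sketch below only illustrates the idea of splitting a description into `(host, port)` replicas with an engine-specific default port; the `|` separator and the `splitIntoReplicas` helper are assumptions made for the illustration, not the actual parser.

```cpp
// Standalone illustration of "split into replicas with a default port" for the
// mysql() (default 3306) and postgresql() (default 5432) table functions.
// The '|' separator and the helper name are assumptions made for this sketch;
// the real parseRemoteDescriptionForExternalDatabase is more general and also
// enforces the glob_expansion_max_elements limit, which is not reproduced here.
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

std::vector<std::pair<std::string, std::uint16_t>>
splitIntoReplicas(const std::string & description, std::uint16_t default_port)
{
    std::vector<std::pair<std::string, std::uint16_t>> replicas;
    std::istringstream stream(description);
    std::string address;
    while (std::getline(stream, address, '|'))              // one entry per replica (assumed separator)
    {
        const auto colon = address.rfind(':');
        if (colon == std::string::npos)
            replicas.emplace_back(address, default_port);   // no explicit port -> engine default
        else
            replicas.emplace_back(address.substr(0, colon),
                                  static_cast<std::uint16_t>(std::stoi(address.substr(colon + 1))));
    }
    return replicas;
}

int main()
{
    for (const auto & [host, port] : splitIntoReplicas("pg-1:5433|pg-2|pg-3:5434", 5432))
        std::cout << host << " -> " << port << '\n';
}
```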
+ const auto & host_port = args[0]->as().value.safeGet(); + size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements; + auto addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 5432); + remote_table_name = args[2]->as().value.safeGet(); if (args.size() == 6) remote_table_schema = args[5]->as().value.safeGet(); - connection_pool = std::make_shared( + connection_pool = std::make_shared( args[1]->as().value.safeGet(), - parsed_host_port.first, - parsed_host_port.second, + addresses, args[3]->as().value.safeGet(), args[4]->as().value.safeGet()); } diff --git a/src/TableFunctions/TableFunctionPostgreSQL.h b/src/TableFunctions/TableFunctionPostgreSQL.h index 601b2a090b2..44f804fbb30 100644 --- a/src/TableFunctions/TableFunctionPostgreSQL.h +++ b/src/TableFunctions/TableFunctionPostgreSQL.h @@ -5,14 +5,12 @@ #if USE_LIBPQXX #include +#include namespace DB { -class PostgreSQLConnectionPool; -using PostgreSQLConnectionPoolPtr = std::shared_ptr; - class TableFunctionPostgreSQL : public ITableFunction { public: @@ -21,17 +19,17 @@ public: private: StoragePtr executeImpl( - const ASTPtr & ast_function, const Context & context, + const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "PostgreSQL"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String connection_str; String remote_table_name, remote_table_schema; - PostgreSQLConnectionPoolPtr connection_pool; + postgres::PoolWithFailoverPtr connection_pool; }; } diff --git a/src/TableFunctions/TableFunctionRemote.cpp b/src/TableFunctions/TableFunctionRemote.cpp index 0e7623c0ac3..ab2458b64f4 100644 --- a/src/TableFunctions/TableFunctionRemote.cpp +++ b/src/TableFunctions/TableFunctionRemote.cpp @@ -28,7 +28,7 @@ namespace ErrorCodes } -void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, ContextPtr context) { ASTs & args_func = ast_function->children; @@ -162,14 +162,14 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Cont { /// Use an existing cluster from the main config if (name != "clusterAllReplicas") - cluster = context.getCluster(cluster_name); + cluster = context->getCluster(cluster_name); else - cluster = context.getCluster(cluster_name)->getClusterWithReplicasAsShards(context.getSettings()); + cluster = context->getCluster(cluster_name)->getClusterWithReplicasAsShards(context->getSettings()); } else { /// Create new cluster from the scratch - size_t max_addresses = context.getSettingsRef().table_function_remote_max_addresses; + size_t max_addresses = context->getSettingsRef().table_function_remote_max_addresses; std::vector shards = parseRemoteDescription(cluster_description, 0, cluster_description.size(), ',', max_addresses); std::vector> names; @@ -180,7 +180,7 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Cont if (names.empty()) throw Exception("Shard list is empty after parsing first argument", ErrorCodes::BAD_ARGUMENTS); - auto maybe_secure_port = context.getTCPPortSecure(); + auto maybe_secure_port = 
context->getTCPPortSecure(); /// Check host and port on affiliation allowed hosts. for (const auto & hosts : names) @@ -189,20 +189,20 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Cont { size_t colon = host.find(':'); if (colon == String::npos) - context.getRemoteHostFilter().checkHostAndPort( + context->getRemoteHostFilter().checkHostAndPort( host, - toString((secure ? (maybe_secure_port ? *maybe_secure_port : DBMS_DEFAULT_SECURE_PORT) : context.getTCPPort()))); + toString((secure ? (maybe_secure_port ? *maybe_secure_port : DBMS_DEFAULT_SECURE_PORT) : context->getTCPPort()))); else - context.getRemoteHostFilter().checkHostAndPort(host.substr(0, colon), host.substr(colon + 1)); + context->getRemoteHostFilter().checkHostAndPort(host.substr(0, colon), host.substr(colon + 1)); } } cluster = std::make_shared( - context.getSettings(), + context->getSettings(), names, username, password, - (secure ? (maybe_secure_port ? *maybe_secure_port : DBMS_DEFAULT_SECURE_PORT) : context.getTCPPort()), + (secure ? (maybe_secure_port ? *maybe_secure_port : DBMS_DEFAULT_SECURE_PORT) : context->getTCPPort()), false, secure); } @@ -214,7 +214,7 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, const Cont remote_table_id.table_name = remote_table; } -StoragePtr TableFunctionRemote::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const +StoragePtr TableFunctionRemote::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const { /// StorageDistributed supports mismatching structure of remote table, so we can use outdated structure for CREATE ... AS remote(...) 
/// without additional conversion in StorageTableFunctionProxy @@ -255,7 +255,7 @@ StoragePtr TableFunctionRemote::executeImpl(const ASTPtr & /*ast_function*/, con return res; } -ColumnsDescription TableFunctionRemote::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionRemote::getActualTableStructure(ContextPtr context) const { assert(cluster); return getStructureOfRemoteTable(*cluster, remote_table_id, context, remote_table_function_ptr); diff --git a/src/TableFunctions/TableFunctionRemote.h b/src/TableFunctions/TableFunctionRemote.h index d485440d604..845c36182dc 100644 --- a/src/TableFunctions/TableFunctionRemote.h +++ b/src/TableFunctions/TableFunctionRemote.h @@ -18,19 +18,19 @@ namespace DB class TableFunctionRemote : public ITableFunction { public: - TableFunctionRemote(const std::string & name_, bool secure_ = false); + explicit TableFunctionRemote(const std::string & name_, bool secure_ = false); std::string getName() const override { return name; } - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; bool needStructureConversion() const override { return false; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Distributed"; } - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; std::string name; bool is_cluster_function; diff --git a/src/TableFunctions/TableFunctionS3.cpp b/src/TableFunctions/TableFunctionS3.cpp index 6dc9230ca46..973899d2101 100644 --- a/src/TableFunctions/TableFunctionS3.cpp +++ b/src/TableFunctions/TableFunctionS3.cpp @@ -17,58 +17,76 @@ namespace DB namespace ErrorCodes { - extern const int LOGICAL_ERROR; extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -void TableFunctionS3::parseArguments(const ASTPtr & ast_function, const Context & context) +void TableFunctionS3::parseArguments(const ASTPtr & ast_function, ContextPtr context) { /// Parse args ASTs & args_func = ast_function->children; + const auto message = fmt::format( + "The signature of table function {} could be the following:\n" \ + " - url, format, structure\n" \ + " - url, format, structure, compression_method\n" \ + " - url, access_key_id, secret_access_key, format, structure\n" \ + " - url, access_key_id, secret_access_key, format, structure, compression_method", + getName()); + if (args_func.size() != 1) - throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::LOGICAL_ERROR); + throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); ASTs & args = args_func.at(0)->children; if (args.size() < 3 || args.size() > 6) - throw Exception("Table function '" + getName() + "' requires 3 to 6 arguments: url, [access_key_id, secret_access_key,] format, structure and [compression_method].", - ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + throw Exception(message, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); for (auto & arg : args) arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); + /// Size -> 
argument indexes + static auto size_to_args = std::map> + { + {3, {{"format", 1}, {"structure", 2}}}, + {4, {{"format", 1}, {"structure", 2}, {"compression_method", 3}}}, + {5, {{"access_key_id", 1}, {"secret_access_key", 2}, {"format", 3}, {"structure", 4}}}, + {6, {{"access_key_id", 1}, {"secret_access_key", 2}, {"format", 3}, {"structure", 4}, {"compression_method", 5}}} + }; + + /// This argument is always the first filename = args[0]->as().value.safeGet(); - if (args.size() < 5) - { - format = args[1]->as().value.safeGet(); - structure = args[2]->as().value.safeGet(); - } - else - { - access_key_id = args[1]->as().value.safeGet(); - secret_access_key = args[2]->as().value.safeGet(); - format = args[3]->as().value.safeGet(); - structure = args[4]->as().value.safeGet(); - } + auto & args_to_idx = size_to_args[args.size()]; - if (args.size() == 4 || args.size() == 6) - compression_method = args.back()->as().value.safeGet(); + if (args_to_idx.contains("format")) + format = args[args_to_idx["format"]]->as().value.safeGet(); + + if (args_to_idx.contains("structure")) + structure = args[args_to_idx["structure"]]->as().value.safeGet(); + + if (args_to_idx.contains("compression_method")) + compression_method = args[args_to_idx["compression_method"]]->as().value.safeGet(); + + if (args_to_idx.contains("access_key_id")) + access_key_id = args[args_to_idx["access_key_id"]]->as().value.safeGet(); + + if (args_to_idx.contains("secret_access_key")) + secret_access_key = args[args_to_idx["secret_access_key"]]->as().value.safeGet(); } -ColumnsDescription TableFunctionS3::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionS3::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionS3::executeImpl(const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionS3::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { Poco::URI uri (filename); S3::URI s3_uri (uri); - UInt64 min_upload_part_size = context.getSettingsRef().s3_min_upload_part_size; - UInt64 max_single_part_upload_size = context.getSettingsRef().s3_max_single_part_upload_size; - UInt64 max_connections = context.getSettingsRef().s3_max_connections; + UInt64 s3_max_single_read_retries = context->getSettingsRef().s3_max_single_read_retries; + UInt64 min_upload_part_size = context->getSettingsRef().s3_min_upload_part_size; + UInt64 max_single_part_upload_size = context->getSettingsRef().s3_max_single_part_upload_size; + UInt64 max_connections = context->getSettingsRef().s3_max_connections; StoragePtr storage = StorageS3::create( s3_uri, @@ -76,12 +94,13 @@ StoragePtr TableFunctionS3::executeImpl(const ASTPtr & /*ast_function*/, const C secret_access_key, StorageID(getDatabaseName(), table_name), format, + s3_max_single_read_retries, min_upload_part_size, max_single_part_upload_size, max_connections, getActualTableStructure(context), ConstraintsDescription{}, - const_cast(context), + context, compression_method); storage->startup(); diff --git a/src/TableFunctions/TableFunctionS3.h b/src/TableFunctions/TableFunctionS3.h index 722fb9eb23c..1835fa3daa9 100644 --- a/src/TableFunctions/TableFunctionS3.h +++ b/src/TableFunctions/TableFunctionS3.h @@ -27,14 +27,14 @@ public: protected: StoragePtr executeImpl( const ASTPtr & ast_function, - const Context & 
context, + ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "S3"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String filename; String format; diff --git a/src/TableFunctions/TableFunctionS3Cluster.cpp b/src/TableFunctions/TableFunctionS3Cluster.cpp new file mode 100644 index 00000000000..16f48c70608 --- /dev/null +++ b/src/TableFunctions/TableFunctionS3Cluster.cpp @@ -0,0 +1,149 @@ +#if !defined(ARCADIA_BUILD) +#include +#endif + +#if USE_AWS_S3 + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "registerTableFunctions.h" + +#include +#include + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; +} + + +void TableFunctionS3Cluster::parseArguments(const ASTPtr & ast_function, ContextPtr context) +{ + /// Parse args + ASTs & args_func = ast_function->children; + + if (args_func.size() != 1) + throw Exception("Table function '" + getName() + "' must have arguments.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + + ASTs & args = args_func.at(0)->children; + + const auto message = fmt::format( + "The signature of table function {} could be the following:\n" \ + " - cluster, url, format, structure\n" \ + " - cluster, url, format, structure, compression_method\n" \ + " - cluster, url, access_key_id, secret_access_key, format, structure\n" \ + " - cluster, url, access_key_id, secret_access_key, format, structure, compression_method", + getName()); + + if (args.size() < 4 || args.size() > 7) + throw Exception(message, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH); + + for (auto & arg : args) + arg = evaluateConstantExpressionOrIdentifierAsLiteral(arg, context); + + /// This arguments are always the first + cluster_name = args[0]->as().value.safeGet(); + filename = args[1]->as().value.safeGet(); + + /// Size -> argument indexes + static auto size_to_args = std::map> + { + {4, {{"format", 2}, {"structure", 3}}}, + {5, {{"format", 2}, {"structure", 3}, {"compression_method", 4}}}, + {6, {{"access_key_id", 2}, {"secret_access_key", 3}, {"format", 4}, {"structure", 5}}}, + {7, {{"access_key_id", 2}, {"secret_access_key", 3}, {"format", 4}, {"structure", 5}, {"compression_method", 6}}} + }; + + auto & args_to_idx = size_to_args[args.size()]; + + if (args_to_idx.contains("format")) + format = args[args_to_idx["format"]]->as().value.safeGet(); + + if (args_to_idx.contains("structure")) + structure = args[args_to_idx["structure"]]->as().value.safeGet(); + + if (args_to_idx.contains("compression_method")) + compression_method = args[args_to_idx["compression_method"]]->as().value.safeGet(); + + if (args_to_idx.contains("access_key_id")) + access_key_id = args[args_to_idx["access_key_id"]]->as().value.safeGet(); + + if (args_to_idx.contains("secret_access_key")) + secret_access_key = args[args_to_idx["secret_access_key"]]->as().value.safeGet(); +} + + +ColumnsDescription TableFunctionS3Cluster::getActualTableStructure(ContextPtr context) const +{ + return parseColumnsListFromString(structure, context); +} + +StoragePtr 
TableFunctionS3Cluster::executeImpl( + const ASTPtr & /*function*/, ContextPtr context, + const std::string & table_name, ColumnsDescription /*cached_columns*/) const +{ + StoragePtr storage; + if (context->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY) + { + /// On a worker node this filename won't contain globs + Poco::URI uri (filename); + S3::URI s3_uri (uri); + /// Actually these parameters are not used + UInt64 s3_max_single_read_retries = context->getSettingsRef().s3_max_single_read_retries; + UInt64 min_upload_part_size = context->getSettingsRef().s3_min_upload_part_size; + UInt64 max_single_part_upload_size = context->getSettingsRef().s3_max_single_part_upload_size; + UInt64 max_connections = context->getSettingsRef().s3_max_connections; + storage = StorageS3::create( + s3_uri, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), + format, + s3_max_single_read_retries, + min_upload_part_size, + max_single_part_upload_size, + max_connections, + getActualTableStructure(context), ConstraintsDescription{}, + context, compression_method, /*distributed_processing=*/true); + } + else + { + storage = StorageS3Cluster::create( + filename, access_key_id, secret_access_key, StorageID(getDatabaseName(), table_name), + cluster_name, format, context->getSettingsRef().s3_max_connections, + getActualTableStructure(context), ConstraintsDescription{}, + context, compression_method); + } + + storage->startup(); + + return storage; +} + + +void registerTableFunctionS3Cluster(TableFunctionFactory & factory) +{ + factory.registerFunction<TableFunctionS3Cluster>(); +} + + +} + +#endif diff --git a/src/TableFunctions/TableFunctionS3Cluster.h b/src/TableFunctions/TableFunctionS3Cluster.h new file mode 100644 index 00000000000..cc857725ce6 --- /dev/null +++ b/src/TableFunctions/TableFunctionS3Cluster.h @@ -0,0 +1,56 @@ +#pragma once + +#include + +#if USE_AWS_S3 + +#include + + +namespace DB +{ + +class Context; + +/** + * s3Cluster(cluster_name, source, [access_key_id, secret_access_key,] format, structure) + * A table function which allows processing many files from S3 on a specific cluster. + * On the initiator it creates a connection to _all_ nodes in the cluster, expands asterisks + * in the S3 file path and dispatches each file dynamically. + * On a worker node it asks the initiator for the next task and processes it. + * This is repeated until all tasks are finished.
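Both `TableFunctionS3::parseArguments` above and the new `TableFunctionS3Cluster::parseArguments` replace the earlier if/else handling of optional credentials with a lookup table keyed by the number of arguments, mapping each supported signature to named argument positions. The sketch below shows that dispatch in isolation; the spelled-out map type is an educated guess rather than a verbatim copy of the original declaration, the sample arguments are made up, and `std::map::find` stands in for the `contains()`/`operator[]` calls used in the diff.

```cpp
// Sketch of the "argument count -> named argument positions" dispatch used by
// the s3 and s3Cluster argument parsers. The map type spelled out below is an
// educated guess for this sketch, and std::map::find stands in for the
// contains()/operator[] combination used in the real code.
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main()
{
    // For s3(...): 3 args are url, format, structure; longer signatures add
    // credentials and/or a compression method (argument 0 is always the URL).
    static const std::map<std::size_t, std::map<std::string, std::size_t>> size_to_args
    {
        {3, {{"format", 1}, {"structure", 2}}},
        {4, {{"format", 1}, {"structure", 2}, {"compression_method", 3}}},
        {5, {{"access_key_id", 1}, {"secret_access_key", 2}, {"format", 3}, {"structure", 4}}},
        {6, {{"access_key_id", 1}, {"secret_access_key", 2}, {"format", 3}, {"structure", 4}, {"compression_method", 5}}},
    };

    // Hypothetical example arguments, not taken from the diff.
    const std::vector<std::string> args{"https://example-bucket.s3.amazonaws.com/data.csv", "CSV", "s String"};

    const auto & args_to_idx = size_to_args.at(args.size());
    if (auto it = args_to_idx.find("format"); it != args_to_idx.end())
        std::cout << "format = " << args.at(it->second) << '\n';
    if (auto it = args_to_idx.find("structure"); it != args_to_idx.end())
        std::cout << "structure = " << args.at(it->second) << '\n';
}
```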
+ */ +class TableFunctionS3Cluster : public ITableFunction +{ +public: + static constexpr auto name = "s3Cluster"; + std::string getName() const override + { + return name; + } + bool hasStaticStructure() const override { return true; } + +protected: + StoragePtr executeImpl( + const ASTPtr & ast_function, + ContextPtr context, + const std::string & table_name, + ColumnsDescription cached_columns) const override; + + const char * getStorageTypeName() const override { return "S3Cluster"; } + + ColumnsDescription getActualTableStructure(ContextPtr) const override; + void parseArguments(const ASTPtr &, ContextPtr) override; + + String cluster_name; + String filename; + String format; + String structure; + String access_key_id; + String secret_access_key; + String compression_method = "auto"; +}; + +} + +#endif diff --git a/src/TableFunctions/TableFunctionURL.cpp b/src/TableFunctions/TableFunctionURL.cpp index 1c0109e892b..a77b9140508 100644 --- a/src/TableFunctions/TableFunctionURL.cpp +++ b/src/TableFunctions/TableFunctionURL.cpp @@ -12,7 +12,7 @@ namespace DB { StoragePtr TableFunctionURL::getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method_) const { Poco::URI uri(source); diff --git a/src/TableFunctions/TableFunctionURL.h b/src/TableFunctions/TableFunctionURL.h index 5eb027e2b8a..fde361e8bbb 100644 --- a/src/TableFunctions/TableFunctionURL.h +++ b/src/TableFunctions/TableFunctionURL.h @@ -21,7 +21,7 @@ public: private: StoragePtr getStorage( - const String & source, const String & format_, const ColumnsDescription & columns, Context & global_context, + const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context, const std::string & table_name, const String & compression_method_) const override; const char * getStorageTypeName() const override { return "URL"; } }; diff --git a/src/TableFunctions/TableFunctionValues.cpp b/src/TableFunctions/TableFunctionValues.cpp index 4127a30892f..c66ebe7322e 100644 --- a/src/TableFunctions/TableFunctionValues.cpp +++ b/src/TableFunctions/TableFunctionValues.cpp @@ -30,7 +30,7 @@ namespace ErrorCodes extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } -static void parseAndInsertValues(MutableColumns & res_columns, const ASTs & args, const Block & sample_block, const Context & context) +static void parseAndInsertValues(MutableColumns & res_columns, const ASTs & args, const Block & sample_block, ContextPtr context) { if (res_columns.size() == 1) /// Parsing arguments as Fields { @@ -68,7 +68,7 @@ static void parseAndInsertValues(MutableColumns & res_columns, const ASTs & args } } -void TableFunctionValues::parseArguments(const ASTPtr & ast_function, const Context & /*context*/) +void TableFunctionValues::parseArguments(const ASTPtr & ast_function, ContextPtr /*context*/) { ASTs & args_func = ast_function->children; @@ -93,12 +93,12 @@ void TableFunctionValues::parseArguments(const ASTPtr & ast_function, const Cont structure = args[0]->as().value.safeGet(); } -ColumnsDescription TableFunctionValues::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionValues::getActualTableStructure(ContextPtr context) const { return parseColumnsListFromString(structure, context); } -StoragePtr TableFunctionValues::executeImpl(const ASTPtr & 
ast_function, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionValues::executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); diff --git a/src/TableFunctions/TableFunctionValues.h b/src/TableFunctions/TableFunctionValues.h index 549fa2de507..058f5f1d2ed 100644 --- a/src/TableFunctions/TableFunctionValues.h +++ b/src/TableFunctions/TableFunctionValues.h @@ -14,11 +14,11 @@ public: std::string getName() const override { return name; } bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "Values"; } - ColumnsDescription getActualTableStructure(const Context & context) const override; - void parseArguments(const ASTPtr & ast_function, const Context & context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; String structure; }; diff --git a/src/TableFunctions/TableFunctionView.cpp b/src/TableFunctions/TableFunctionView.cpp index 62a833dabc4..3f51e0bbc95 100644 --- a/src/TableFunctions/TableFunctionView.cpp +++ b/src/TableFunctions/TableFunctionView.cpp @@ -15,7 +15,7 @@ namespace ErrorCodes extern const int BAD_ARGUMENTS; } -void TableFunctionView::parseArguments(const ASTPtr & ast_function, const Context & /*context*/) +void TableFunctionView::parseArguments(const ASTPtr & ast_function, ContextPtr /*context*/) { const auto * function = ast_function->as(); if (function) @@ -29,7 +29,7 @@ void TableFunctionView::parseArguments(const ASTPtr & ast_function, const Contex throw Exception("Table function '" + getName() + "' requires a query argument.", ErrorCodes::BAD_ARGUMENTS); } -ColumnsDescription TableFunctionView::getActualTableStructure(const Context & context) const +ColumnsDescription TableFunctionView::getActualTableStructure(ContextPtr context) const { assert(create.select); assert(create.children.size() == 1); @@ -39,7 +39,7 @@ ColumnsDescription TableFunctionView::getActualTableStructure(const Context & co } StoragePtr TableFunctionView::executeImpl( - const ASTPtr & /*ast_function*/, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const + const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { auto columns = getActualTableStructure(context); auto res = StorageView::create(StorageID(getDatabaseName(), table_name), create, columns); diff --git a/src/TableFunctions/TableFunctionView.h b/src/TableFunctions/TableFunctionView.h index 0ed66ff712c..9ef634746eb 100644 --- a/src/TableFunctions/TableFunctionView.h +++ b/src/TableFunctions/TableFunctionView.h @@ -17,11 +17,11 @@ public: static constexpr auto name = "view"; std::string getName() const override { return name; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const String & table_name, ColumnsDescription cached_columns) const override; + StoragePtr 
executeImpl(const ASTPtr & ast_function, ContextPtr context, const String & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "View"; } - void parseArguments(const ASTPtr & ast_function, const Context & context) override; - ColumnsDescription getActualTableStructure(const Context & context) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; ASTCreateQuery create; }; diff --git a/src/TableFunctions/TableFunctionZeros.cpp b/src/TableFunctions/TableFunctionZeros.cpp index 9b0c6c6e78b..9fd14eec4af 100644 --- a/src/TableFunctions/TableFunctionZeros.cpp +++ b/src/TableFunctions/TableFunctionZeros.cpp @@ -20,14 +20,14 @@ extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; template -ColumnsDescription TableFunctionZeros::getActualTableStructure(const Context & /*context*/) const +ColumnsDescription TableFunctionZeros::getActualTableStructure(ContextPtr /*context*/) const { /// NOTE: https://bugs.llvm.org/show_bug.cgi?id=47418 return ColumnsDescription{{{"zero", std::make_shared()}}}; } template -StoragePtr TableFunctionZeros::executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const +StoragePtr TableFunctionZeros::executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription /*cached_columns*/) const { if (const auto * function = ast_function->as()) { @@ -53,7 +53,7 @@ void registerTableFunctionZeros(TableFunctionFactory & factory) } template -UInt64 TableFunctionZeros::evaluateArgument(const Context & context, ASTPtr & argument) const +UInt64 TableFunctionZeros::evaluateArgument(ContextPtr context, ASTPtr & argument) const { return evaluateConstantExpressionOrIdentifierAsLiteral(argument, context)->as().value.safeGet(); } diff --git a/src/TableFunctions/TableFunctionZeros.h b/src/TableFunctions/TableFunctionZeros.h index 48a2d8019f6..0407eff2f78 100644 --- a/src/TableFunctions/TableFunctionZeros.h +++ b/src/TableFunctions/TableFunctionZeros.h @@ -19,12 +19,12 @@ public: std::string getName() const override { return name; } bool hasStaticStructure() const override { return true; } private: - StoragePtr executeImpl(const ASTPtr & ast_function, const Context & context, const std::string & table_name, ColumnsDescription cached_columns) const override; + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const override; const char * getStorageTypeName() const override { return "SystemZeros"; } - UInt64 evaluateArgument(const Context & context, ASTPtr & argument) const; + UInt64 evaluateArgument(ContextPtr context, ASTPtr & argument) const; - ColumnsDescription getActualTableStructure(const Context & context) const override; + ColumnsDescription getActualTableStructure(ContextPtr context) const override; }; diff --git a/src/TableFunctions/parseColumnsListForTableFunction.cpp b/src/TableFunctions/parseColumnsListForTableFunction.cpp index 5221d96e086..08e80ef425a 100644 --- a/src/TableFunctions/parseColumnsListForTableFunction.cpp +++ b/src/TableFunctions/parseColumnsListForTableFunction.cpp @@ -14,10 +14,10 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } -ColumnsDescription parseColumnsListFromString(const std::string & structure, const Context & context) +ColumnsDescription 
parseColumnsListFromString(const std::string & structure, ContextPtr context) { ParserColumnDeclarationList parser; - const Settings & settings = context.getSettingsRef(); + const Settings & settings = context->getSettingsRef(); ASTPtr columns_list_raw = parseQuery(parser, structure, "columns declaration list", settings.max_query_size, settings.max_parser_depth); @@ -25,7 +25,7 @@ ColumnsDescription parseColumnsListFromString(const std::string & structure, con if (!columns_list) throw Exception("Could not cast AST to ASTExpressionList", ErrorCodes::LOGICAL_ERROR); - return InterpreterCreateQuery::getColumnsDescription(*columns_list, context, !settings.allow_suspicious_codecs); + return InterpreterCreateQuery::getColumnsDescription(*columns_list, context, false); } } diff --git a/src/TableFunctions/parseColumnsListForTableFunction.h b/src/TableFunctions/parseColumnsListForTableFunction.h index d077d308e37..e0130a2618d 100644 --- a/src/TableFunctions/parseColumnsListForTableFunction.h +++ b/src/TableFunctions/parseColumnsListForTableFunction.h @@ -10,6 +10,6 @@ namespace DB class Context; /// Parses a common argument for table functions such as table structure given in string -ColumnsDescription parseColumnsListFromString(const std::string & structure, const Context & context); +ColumnsDescription parseColumnsListFromString(const std::string & structure, ContextPtr context); } diff --git a/src/TableFunctions/registerTableFunctions.cpp b/src/TableFunctions/registerTableFunctions.cpp index 2e55c16d815..6cf40c4f090 100644 --- a/src/TableFunctions/registerTableFunctions.cpp +++ b/src/TableFunctions/registerTableFunctions.cpp @@ -21,6 +21,7 @@ void registerTableFunctions() #if USE_AWS_S3 registerTableFunctionS3(factory); + registerTableFunctionS3Cluster(factory); registerTableFunctionCOS(factory); #endif diff --git a/src/TableFunctions/registerTableFunctions.h b/src/TableFunctions/registerTableFunctions.h index 2654ab2afc2..c49fafc5f86 100644 --- a/src/TableFunctions/registerTableFunctions.h +++ b/src/TableFunctions/registerTableFunctions.h @@ -21,6 +21,7 @@ void registerTableFunctionGenerate(TableFunctionFactory & factory); #if USE_AWS_S3 void registerTableFunctionS3(TableFunctionFactory & factory); +void registerTableFunctionS3Cluster(TableFunctionFactory & factory); void registerTableFunctionCOS(TableFunctionFactory & factory); #endif diff --git a/src/ya.make b/src/ya.make index 5361c8a5695..6537f67d66f 100644 --- a/src/ya.make +++ b/src/ya.make @@ -5,6 +5,7 @@ LIBRARY() PEERDIR( clickhouse/src/Access clickhouse/src/AggregateFunctions + clickhouse/src/Bridge clickhouse/src/Client clickhouse/src/Columns clickhouse/src/Common diff --git a/tests/clickhouse-test b/tests/clickhouse-test index a44f7972397..5ae894cc55f 100755 --- a/tests/clickhouse-test +++ b/tests/clickhouse-test @@ -116,6 +116,8 @@ def get_db_engine(args, database_name): def run_single_test(args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file, suite_tmp_dir): # print(client_options) + client = f"{args.client} --log_comment='{case_file}'" + start_time = datetime.now() if args.database: database = args.database @@ -130,7 +132,7 @@ def run_single_test(args, ext, server_logs_level, client_options, case_file, std return ''.join(random.choice(alphabet) for _ in range(length)) database = 'test_{suffix}'.format(suffix=random_str()) - clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) + clickhouse_proc_create = Popen(shlex.split(client), 
stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True) try: clickhouse_proc_create.communicate(("CREATE DATABASE " + database + get_db_engine(args, database)), timeout=args.timeout) except TimeoutExpired: @@ -149,7 +151,7 @@ def run_single_test(args, ext, server_logs_level, client_options, case_file, std os.environ["CLICKHOUSE_LOG_COMMENT"] = case_file params = { - 'client': args.client + ' --database=' + database, + 'client': client + ' --database=' + database, 'logs_level': server_logs_level, 'options': client_options, 'test': case_file, @@ -160,7 +162,7 @@ def run_single_test(args, ext, server_logs_level, client_options, case_file, std pattern = '{test} > {stdout} 2> {stderr}' if ext == '.sql': - pattern = "{client} --send_logs_level={logs_level} --testmode --multiquery {options} --log_comment='{test}' < " + pattern + pattern = "{client} --send_logs_level={logs_level} --testmode --multiquery {options} < " + pattern command = pattern.format(**params) @@ -177,7 +179,7 @@ def run_single_test(args, ext, server_logs_level, client_options, case_file, std need_drop_database = not maybe_passed if need_drop_database: - clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) + clickhouse_proc_create = Popen(shlex.split(client), stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True) seconds_left = max(args.timeout - (datetime.now() - start_time).total_seconds(), 20) try: drop_database_query = "DROP DATABASE " + database @@ -406,7 +408,7 @@ def run_tests_array(all_tests_with_params): status += stderr else: counter = 1 - while proc.returncode != 0 and need_retry(stderr): + while need_retry(stderr): proc, stdout, stderr, total_time = run_single_test(args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file, suite_tmp_dir) sleep(2**counter) counter += 1 @@ -704,10 +706,10 @@ def main(args): args.shard = False if args.database and args.database != "test": - clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) + clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True) clickhouse_proc_create.communicate(("CREATE DATABASE IF NOT EXISTS " + args.database + get_db_engine(args, args.database))) - clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) + clickhouse_proc_create = Popen(shlex.split(args.client), stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True) clickhouse_proc_create.communicate(("CREATE DATABASE IF NOT EXISTS test" + get_db_engine(args, 'test'))) def is_test_from_dir(suite_dir, case): diff --git a/tests/config/config.d/database_replicated.xml b/tests/config/config.d/database_replicated.xml index c2e62f9645a..9a3b4d68ea6 100644 --- a/tests/config/config.d/database_replicated.xml +++ b/tests/config/config.d/database_replicated.xml @@ -19,13 +19,15 @@ 1 - 5000 - 10000 + 10000 + 30000 1000 2000 4000 trace false + + 1000000000000000 diff --git a/tests/config/config.d/keeper_port.xml b/tests/config/config.d/keeper_port.xml index c41040f1613..b21df47bc85 100644 --- a/tests/config/config.d/keeper_port.xml +++ b/tests/config/config.d/keeper_port.xml @@ -8,6 +8,8 @@ 30000 false 60000 + + 1000000000000000 diff --git a/tests/config/install.sh b/tests/config/install.sh index 9c4f8caca07..7e01860e241 100755 --- a/tests/config/install.sh +++ b/tests/config/install.sh @@ -71,8 +71,8 @@ if [[ -n 
"$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]] # There is a bug in config reloading, so we cannot override macros using --macros.replica r2 # And we have to copy configs... - mkdir /etc/clickhouse-server1 - mkdir /etc/clickhouse-server2 + mkdir -p /etc/clickhouse-server1 + mkdir -p /etc/clickhouse-server2 chown clickhouse /etc/clickhouse-server1 chown clickhouse /etc/clickhouse-server2 chgrp clickhouse /etc/clickhouse-server1 @@ -84,8 +84,8 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]] sudo -u clickhouse cat /etc/clickhouse-server/config.d/macros.xml | sed "s|r1|r2|" > /etc/clickhouse-server1/config.d/macros.xml sudo -u clickhouse cat /etc/clickhouse-server/config.d/macros.xml | sed "s|s1|s2|" > /etc/clickhouse-server2/config.d/macros.xml - sudo mkdir /var/lib/clickhouse1 - sudo mkdir /var/lib/clickhouse2 + sudo mkdir -p /var/lib/clickhouse1 + sudo mkdir -p /var/lib/clickhouse2 sudo chown clickhouse /var/lib/clickhouse1 sudo chown clickhouse /var/lib/clickhouse2 sudo chgrp clickhouse /var/lib/clickhouse1 diff --git a/tests/fuzz/ast.dict b/tests/fuzz/ast.dict index 8327f276b31..7befb36c840 100644 --- a/tests/fuzz/ast.dict +++ b/tests/fuzz/ast.dict @@ -156,6 +156,7 @@ "extractURLParameterNames" "extractURLParameters" "FETCH PARTITION" +"FETCH PART" "FINAL" "FIRST" "firstSignificantSubdomain" diff --git a/tests/integration/ci-runner.py b/tests/integration/ci-runner.py index 9215cc56a50..a21f3d344ba 100755 --- a/tests/integration/ci-runner.py +++ b/tests/integration/ci-runner.py @@ -15,6 +15,7 @@ MAX_RETRY = 2 SLEEP_BETWEEN_RETRIES = 5 CLICKHOUSE_BINARY_PATH = "/usr/bin/clickhouse" CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH = "/usr/bin/clickhouse-odbc-bridge" +CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH = "/usr/bin/clickhouse-library-bridge" TRIES_COUNT = 10 MAX_TIME_SECONDS = 3600 @@ -238,10 +239,13 @@ class ClickhouseIntegrationTestsRunner: logging.info("All packages installed") os.chmod(CLICKHOUSE_BINARY_PATH, 0o777) os.chmod(CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH, 0o777) + os.chmod(CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH, 0o777) result_path_bin = os.path.join(str(self.base_path()), "clickhouse") - result_path_bridge = os.path.join(str(self.base_path()), "clickhouse-odbc-bridge") + result_path_odbc_bridge = os.path.join(str(self.base_path()), "clickhouse-odbc-bridge") + result_path_library_bridge = os.path.join(str(self.base_path()), "clickhouse-library-bridge") shutil.copy(CLICKHOUSE_BINARY_PATH, result_path_bin) - shutil.copy(CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH, result_path_bridge) + shutil.copy(CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH, result_path_odbc_bridge) + shutil.copy(CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH, result_path_library_bridge) return None, None def _compress_logs(self, path, result_path): @@ -330,12 +334,11 @@ class ClickhouseIntegrationTestsRunner: logging.info("Task timeout exceeded, skipping %s", test) counters["SKIPPED"].append(test) tests_times[test] = 0 - log_name = None - log_path = None - return counters, tests_times, log_name, log_path + return counters, tests_times, [] image_cmd = self._get_runner_image_cmd(repo_path) test_group_str = test_group.replace('/', '_').replace('.', '_') + log_paths = [] for i in range(num_tries): logging.info("Running test group %s for the %s retry", test_group, i) @@ -344,6 +347,7 @@ class ClickhouseIntegrationTestsRunner: output_path = os.path.join(str(self.path()), "test_output_" + test_group_str + "_" + str(i) + ".log") log_name = "integration_run_" + test_group_str + "_" + str(i) + ".txt" log_path 
= os.path.join(str(self.path()), log_name) + log_paths.append(log_path) logging.info("Will wait output inside %s", output_path) test_names = set([]) @@ -386,7 +390,7 @@ class ClickhouseIntegrationTestsRunner: if test not in counters["PASSED"] and test not in counters["ERROR"] and test not in counters["FAILED"]: counters["ERROR"].append(test) - return counters, tests_times, log_name, log_path + return counters, tests_times, log_paths def run_flaky_check(self, repo_path, build_path): pr_info = self.params['pr_info'] @@ -404,12 +408,12 @@ class ClickhouseIntegrationTestsRunner: start = time.time() logging.info("Starting check with retries") final_retry = 0 - log_paths = [] + logs = [] for i in range(TRIES_COUNT): final_retry += 1 logging.info("Running tests for the %s time", i) - counters, tests_times, _, log_path = self.run_test_group(repo_path, "flaky", tests_to_run, 1) - log_paths.append(log_path) + counters, tests_times, log_paths = self.run_test_group(repo_path, "flaky", tests_to_run, 1) + logs += log_paths if counters["FAILED"]: logging.info("Found failed tests: %s", ' '.join(counters["FAILED"])) description_prefix = "Flaky tests found: " @@ -431,7 +435,7 @@ class ClickhouseIntegrationTestsRunner: time.sleep(5) logging.info("Finally all tests done, going to compress test dir") - test_logs = os.path.join(str(self.path()), "./test_dir.tar") + test_logs = os.path.join(str(self.path()), "./test_dir.tar.gz") self._compress_logs("{}/tests/integration".format(repo_path), test_logs) logging.info("Compression finished") @@ -446,7 +450,7 @@ class ClickhouseIntegrationTestsRunner: test_result += [(c + ' (✕' + str(final_retry) + ')', text_state, "{:.2f}".format(tests_times[c])) for c in counters[state]] status_text = description_prefix + ', '.join([str(n).lower().replace('failed', 'fail') + ': ' + str(len(c)) for n, c in counters.items()]) - return result_state, status_text, test_result, [test_logs] + log_paths + return result_state, status_text, test_result, [test_logs] + logs def run_impl(self, repo_path, build_path): if self.flaky_check: @@ -467,8 +471,8 @@ class ClickhouseIntegrationTestsRunner: "FLAKY": [], } tests_times = defaultdict(float) + tests_log_paths = defaultdict(list) - logs = [] items_to_run = list(grouped_tests.items()) logging.info("Total test groups %s", len(items_to_run)) @@ -478,7 +482,7 @@ class ClickhouseIntegrationTestsRunner: for group, tests in items_to_run: logging.info("Running test group %s countaining %s tests", group, len(tests)) - group_counters, group_test_times, _, log_path = self.run_test_group(repo_path, group, tests, MAX_RETRY) + group_counters, group_test_times, log_paths = self.run_test_group(repo_path, group, tests, MAX_RETRY) total_tests = 0 for counter, value in group_counters.items(): logging.info("Tests from group %s stats, %s count %s", group, counter, len(value)) @@ -489,13 +493,14 @@ class ClickhouseIntegrationTestsRunner: for test_name, test_time in group_test_times.items(): tests_times[test_name] = test_time - logs.append(log_path) + tests_log_paths[test_name] = log_paths + if len(counters["FAILED"]) + len(counters["ERROR"]) >= 20: logging.info("Collected more than 20 failed/error tests, stopping") break logging.info("Finally all tests done, going to compress test dir") - test_logs = os.path.join(str(self.path()), "./test_dir.tar") + test_logs = os.path.join(str(self.path()), "./test_dir.tar.gz") self._compress_logs("{}/tests/integration".format(repo_path), test_logs) logging.info("Compression finished") @@ -514,7 +519,7 @@ class 
ClickhouseIntegrationTestsRunner: text_state = "FAIL" else: text_state = state - test_result += [(c, text_state, "{:.2f}".format(tests_times[c])) for c in counters[state]] + test_result += [(c, text_state, "{:.2f}".format(tests_times[c]), tests_log_paths[c]) for c in counters[state]] failed_sum = len(counters['FAILED']) + len(counters['ERROR']) status_text = "fail: {}, passed: {}, flaky: {}".format(failed_sum, len(counters['PASSED']), len(counters['FLAKY'])) @@ -531,7 +536,7 @@ class ClickhouseIntegrationTestsRunner: if '(memory)' in self.params['context_name']: result_state = "success" - return result_state, status_text, test_result, [test_logs] + logs + return result_state, status_text, test_result, [test_logs] def write_results(results_file, status_file, results, status): with open(results_file, 'w') as f: diff --git a/tests/integration/helpers/cluster.py b/tests/integration/helpers/cluster.py index 00ada45398f..69a66a50b6d 100644 --- a/tests/integration/helpers/cluster.py +++ b/tests/integration/helpers/cluster.py @@ -54,6 +54,26 @@ def run_and_check(args, env=None, shell=False, stdout=subprocess.PIPE, stderr=su raise Exception('Command {} return non-zero code {}: {}'.format(args, res.returncode, res.stderr.decode('utf-8'))) +def retry_exception(num, delay, func, exception=Exception, *args, **kwargs): + """ + Retry if `func()` throws, `num` times. + + :param func: func to run + :param num: number of retries + + :throws StopIteration + """ + i = 0 + while i <= num: + try: + func(*args, **kwargs) + time.sleep(delay) + except exception: # pylint: disable=broad-except + i += 1 + continue + return + raise StopIteration('Function did not finished successfully') + def subprocess_check_call(args): # Uncomment for debugging # print('run:', ' ' . join(args)) @@ -75,6 +95,15 @@ def get_odbc_bridge_path(): return '/usr/bin/clickhouse-odbc-bridge' return path +def get_library_bridge_path(): + path = os.environ.get('CLICKHOUSE_TESTS_LIBRARY_BRIDGE_BIN_PATH') + if path is None: + server_path = os.environ.get('CLICKHOUSE_TESTS_SERVER_BIN_PATH') + if server_path is not None: + return os.path.join(os.path.dirname(server_path), 'clickhouse-library-bridge') + else: + return '/usr/bin/clickhouse-library-bridge' + return path def get_docker_compose_path(): compose_path = os.environ.get('DOCKER_COMPOSE_DIR') @@ -98,7 +127,7 @@ class ClickHouseCluster: """ def __init__(self, base_path, name=None, base_config_dir=None, server_bin_path=None, client_bin_path=None, - odbc_bridge_bin_path=None, zookeeper_config_path=None, custom_dockerd_host=None): + odbc_bridge_bin_path=None, library_bridge_bin_path=None, zookeeper_config_path=None, custom_dockerd_host=None): for param in list(os.environ.keys()): print("ENV %40s %s" % (param, os.environ[param])) self.base_dir = p.dirname(base_path) @@ -109,6 +138,7 @@ class ClickHouseCluster: self.server_bin_path = p.realpath( server_bin_path or os.environ.get('CLICKHOUSE_TESTS_SERVER_BIN_PATH', '/usr/bin/clickhouse')) self.odbc_bridge_bin_path = p.realpath(odbc_bridge_bin_path or get_odbc_bridge_path()) + self.library_bridge_bin_path = p.realpath(library_bridge_bin_path or get_library_bridge_path()) self.client_bin_path = p.realpath( client_bin_path or os.environ.get('CLICKHOUSE_TESTS_CLIENT_BIN_PATH', '/usr/bin/clickhouse-client')) self.zookeeper_config_path = p.join(self.base_dir, zookeeper_config_path) if zookeeper_config_path else p.join( @@ -139,7 +169,9 @@ class ClickHouseCluster: self.instances = {} self.with_zookeeper = False self.with_mysql = False + 
self.with_mysql_cluster = False self.with_postgres = False + self.with_postgres_cluster = False self.with_kafka = False self.with_kerberized_kafka = False self.with_rabbitmq = False @@ -180,9 +212,9 @@ class ClickHouseCluster: def add_instance(self, name, base_config_dir=None, main_configs=None, user_configs=None, dictionaries=None, macros=None, - with_zookeeper=False, with_mysql=False, with_kafka=False, with_kerberized_kafka=False, with_rabbitmq=False, + with_zookeeper=False, with_mysql=False, with_mysql_cluster=False, with_kafka=False, with_kerberized_kafka=False, with_rabbitmq=False, clickhouse_path_dir=None, - with_odbc_drivers=False, with_postgres=False, with_hdfs=False, with_kerberized_hdfs=False, with_mongo=False, + with_odbc_drivers=False, with_postgres=False, with_postgres_cluster=False, with_hdfs=False, with_kerberized_hdfs=False, with_mongo=False, with_redis=False, with_minio=False, with_cassandra=False, hostname=None, env_variables=None, image="yandex/clickhouse-integration-test", tag=None, stay_alive=False, ipv4_address=None, ipv6_address=None, with_installed_binary=False, tmpfs=None, @@ -223,6 +255,7 @@ class ClickHouseCluster: with_zookeeper=with_zookeeper, zookeeper_config_path=self.zookeeper_config_path, with_mysql=with_mysql, + with_mysql_cluster=with_mysql_cluster, with_kafka=with_kafka, with_kerberized_kafka=with_kerberized_kafka, with_rabbitmq=with_rabbitmq, @@ -233,6 +266,7 @@ class ClickHouseCluster: with_cassandra=with_cassandra, server_bin_path=self.server_bin_path, odbc_bridge_bin_path=self.odbc_bridge_bin_path, + library_bridge_bin_path=self.library_bridge_bin_path, clickhouse_path_dir=clickhouse_path_dir, with_odbc_drivers=with_odbc_drivers, hostname=hostname, @@ -274,6 +308,14 @@ class ClickHouseCluster: cmds.append(self.base_mysql_cmd) + if with_mysql_cluster and not self.with_mysql_cluster: + self.with_mysql_cluster = True + self.base_cmd.extend(['--file', p.join(docker_compose_yml_dir, 'docker_compose_mysql_cluster.yml')]) + self.base_mysql_cluster_cmd = ['docker-compose', '--project-name', self.project_name, + '--file', p.join(docker_compose_yml_dir, 'docker_compose_mysql_cluster.yml')] + + cmds.append(self.base_mysql_cluster_cmd) + if with_postgres and not self.with_postgres: self.with_postgres = True self.base_cmd.extend(['--file', p.join(docker_compose_yml_dir, 'docker_compose_postgres.yml')]) @@ -281,6 +323,13 @@ class ClickHouseCluster: '--file', p.join(docker_compose_yml_dir, 'docker_compose_postgres.yml')] cmds.append(self.base_postgres_cmd) + if with_postgres_cluster and not self.with_postgres_cluster: + self.with_postgres_cluster = True + self.base_cmd.extend(['--file', p.join(docker_compose_yml_dir, 'docker_compose_postgres.yml')]) + self.base_postgres_cluster_cmd = ['docker-compose', '--project-name', self.project_name, + '--file', p.join(docker_compose_yml_dir, 'docker_compose_postgres_cluster.yml')] + cmds.append(self.base_postgres_cluster_cmd) + if with_odbc_drivers and not self.with_odbc_drivers: self.with_odbc_drivers = True if not self.with_mysql: @@ -449,11 +498,11 @@ class ClickHouseCluster: ["bash", "-c", "echo {} | base64 --decode > {}".format(encodedStr, dest_path)], user='root') - def wait_mysql_to_start(self, timeout=60): + def wait_mysql_to_start(self, timeout=60, port=3308): start = time.time() while time.time() - start < timeout: try: - conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=3308) + conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=port) conn.close() 
print("Mysql Started") return @@ -464,11 +513,11 @@ class ClickHouseCluster: subprocess_call(['docker-compose', 'ps', '--services', '--all']) raise Exception("Cannot wait MySQL container") - def wait_postgres_to_start(self, timeout=60): + def wait_postgres_to_start(self, timeout=60, port=5432): start = time.time() while time.time() - start < timeout: try: - conn_string = "host='localhost' user='postgres' password='mysecretpassword'" + conn_string = "host='localhost' port={} user='postgres' password='mysecretpassword'".format(port) conn = psycopg2.connect(conn_string) conn.close() print("Postgres Started") @@ -603,16 +652,6 @@ class ClickHouseCluster: if self.is_up: return - # Just in case kill unstopped containers from previous launch - try: - print("Trying to kill unstopped containers...") - - if not subprocess_call(['docker-compose', 'kill']): - subprocess_call(['docker-compose', 'down', '--volumes']) - print("Unstopped containers killed") - except: - pass - try: if destroy_dirs and p.exists(self.instances_dir): print(("Removing instances dir %s", self.instances_dir)) @@ -622,9 +661,24 @@ class ClickHouseCluster: print(('Setup directory for instance: {} destroy_dirs: {}'.format(instance.name, destroy_dirs))) instance.create_dir(destroy_dir=destroy_dirs) + # In case of multiple cluster we should not stop compose services. + if destroy_dirs: + # Just in case kill unstopped containers from previous launch + try: + print("Trying to kill unstopped containers...") + subprocess_call(['docker-compose', 'kill']) + subprocess_call(self.base_cmd + ['down', '--volumes', '--remove-orphans']) + print("Unstopped containers killed") + except: + pass + + clickhouse_pull_cmd = self.base_cmd + ['pull'] + print(f"Pulling images for {self.base_cmd}") + retry_exception(10, 5, subprocess_check_call, Exception, clickhouse_pull_cmd) + self.docker_client = docker.from_env(version=self.docker_api_version) - common_opts = ['up', '-d', '--force-recreate'] + common_opts = ['up', '-d'] if self.with_zookeeper and self.base_zookeeper_cmd: print('Setup ZooKeeper') @@ -650,11 +704,25 @@ class ClickHouseCluster: subprocess_check_call(self.base_mysql_cmd + common_opts) self.wait_mysql_to_start(120) + if self.with_mysql_cluster and self.base_mysql_cluster_cmd: + print('Setup MySQL') + subprocess_check_call(self.base_mysql_cluster_cmd + common_opts) + self.wait_mysql_to_start(120, port=3348) + self.wait_mysql_to_start(120, port=3368) + self.wait_mysql_to_start(120, port=3388) + if self.with_postgres and self.base_postgres_cmd: print('Setup Postgres') subprocess_check_call(self.base_postgres_cmd + common_opts) self.wait_postgres_to_start(120) + if self.with_postgres_cluster and self.base_postgres_cluster_cmd: + print('Setup Postgres') + subprocess_check_call(self.base_postgres_cluster_cmd + common_opts) + self.wait_postgres_to_start(120, port=5421) + self.wait_postgres_to_start(120, port=5441) + self.wait_postgres_to_start(120, port=5461) + if self.with_kafka and self.base_kafka_cmd: print('Setup Kafka') subprocess_check_call(self.base_kafka_cmd + common_opts + ['--renew-anon-volumes']) @@ -692,7 +760,7 @@ class ClickHouseCluster: if self.with_redis and self.base_redis_cmd: print('Setup Redis') - subprocess_check_call(self.base_redis_cmd + ['up', '-d', '--force-recreate']) + subprocess_check_call(self.base_redis_cmd + ['up', '-d']) time.sleep(10) if self.with_minio and self.base_minio_cmd: @@ -726,7 +794,7 @@ class ClickHouseCluster: os.environ.pop('SSL_CERT_FILE') if self.with_cassandra and self.base_cassandra_cmd: - 
subprocess_check_call(self.base_cassandra_cmd + ['up', '-d', '--force-recreate']) + subprocess_check_call(self.base_cassandra_cmd + ['up', '-d']) self.wait_cassandra_to_start() clickhouse_start_cmd = self.base_cmd + ['up', '-d', '--no-recreate'] @@ -861,6 +929,7 @@ services: - /etc/passwd:/etc/passwd:ro {binary_volume} {odbc_bridge_volume} + {library_bridge_volume} {odbc_ini_path} {keytab_path} {krb5_conf} @@ -896,9 +965,9 @@ class ClickHouseInstance: def __init__( self, cluster, base_path, name, base_config_dir, custom_main_configs, custom_user_configs, custom_dictionaries, - macros, with_zookeeper, zookeeper_config_path, with_mysql, with_kafka, with_kerberized_kafka, with_rabbitmq, with_kerberized_hdfs, + macros, with_zookeeper, zookeeper_config_path, with_mysql, with_mysql_cluster, with_kafka, with_kerberized_kafka, with_rabbitmq, with_kerberized_hdfs, with_mongo, with_redis, with_minio, - with_cassandra, server_bin_path, odbc_bridge_bin_path, clickhouse_path_dir, with_odbc_drivers, + with_cassandra, server_bin_path, odbc_bridge_bin_path, library_bridge_bin_path, clickhouse_path_dir, with_odbc_drivers, hostname=None, env_variables=None, image="yandex/clickhouse-integration-test", tag="latest", stay_alive=False, ipv4_address=None, ipv6_address=None, with_installed_binary=False, tmpfs=None): @@ -922,8 +991,10 @@ class ClickHouseInstance: self.server_bin_path = server_bin_path self.odbc_bridge_bin_path = odbc_bridge_bin_path + self.library_bridge_bin_path = library_bridge_bin_path self.with_mysql = with_mysql + self.with_mysql_cluster = with_mysql_cluster self.with_kafka = with_kafka self.with_kerberized_kafka = with_kerberized_kafka self.with_rabbitmq = with_rabbitmq @@ -1053,23 +1124,28 @@ class ClickHouseInstance: return self.http_query(sql=sql, data=data, params=params, user=user, password=password, expect_fail_and_get_error=True) - def stop_clickhouse(self, start_wait_sec=5, kill=False): + def stop_clickhouse(self, stop_wait_sec=30, kill=False): if not self.stay_alive: raise Exception("clickhouse can be stopped only with stay_alive=True instance") self.exec_in_container(["bash", "-c", "pkill {} clickhouse".format("-9" if kill else "")], user='root') - time.sleep(start_wait_sec) + deadline = time.time() + stop_wait_sec + while time.time() < deadline: + time.sleep(0.5) + if self.get_process_pid("clickhouse") is None: + break + assert self.get_process_pid("clickhouse") is None, "ClickHouse was not stopped" - def start_clickhouse(self, stop_wait_sec=5): + def start_clickhouse(self, start_wait_sec=30): if not self.stay_alive: raise Exception("clickhouse can be started again only with stay_alive=True instance") self.exec_in_container(["bash", "-c", "{} --daemon".format(CLICKHOUSE_START_COMMAND)], user=str(os.getuid())) # wait start from helpers.test_tools import assert_eq_with_retry - assert_eq_with_retry(self, "select 1", "1", retry_count=int(stop_wait_sec / 0.5), sleep_time=0.5) + assert_eq_with_retry(self, "select 1", "1", retry_count=int(start_wait_sec / 0.5), sleep_time=0.5) - def restart_clickhouse(self, stop_start_wait_sec=5, kill=False): + def restart_clickhouse(self, stop_start_wait_sec=30, kill=False): self.stop_clickhouse(stop_start_wait_sec, kill) self.start_clickhouse(stop_start_wait_sec) @@ -1389,9 +1465,11 @@ class ClickHouseInstance: if not self.with_installed_binary: binary_volume = "- " + self.server_bin_path + ":/usr/bin/clickhouse" odbc_bridge_volume = "- " + self.odbc_bridge_bin_path + ":/usr/bin/clickhouse-odbc-bridge" + library_bridge_volume = "- " + 
self.library_bridge_bin_path + ":/usr/bin/clickhouse-library-bridge" else: binary_volume = "- " + self.server_bin_path + ":/usr/share/clickhouse_fresh" odbc_bridge_volume = "- " + self.odbc_bridge_bin_path + ":/usr/share/clickhouse-odbc-bridge_fresh" + library_bridge_volume = "- " + self.library_bridge_bin_path + ":/usr/share/clickhouse-library-bridge_fresh" with open(self.docker_compose_path, 'w') as docker_compose: docker_compose.write(DOCKER_COMPOSE_TEMPLATE.format( @@ -1401,6 +1479,7 @@ class ClickHouseInstance: hostname=self.hostname, binary_volume=binary_volume, odbc_bridge_volume=odbc_bridge_volume, + library_bridge_volume=library_bridge_volume, instance_config_dir=instance_config_dir, config_d_dir=self.config_d_dir, db_dir=db_dir, diff --git a/tests/integration/runner b/tests/integration/runner index 6dca7663310..e89e10fbc21 100755 --- a/tests/integration/runner +++ b/tests/integration/runner @@ -33,10 +33,15 @@ def check_args_and_update_paths(args): if not os.path.isabs(args.binary): args.binary = os.path.abspath(os.path.join(CURRENT_WORK_DIR, args.binary)) - if not args.bridge_binary: - args.bridge_binary = os.path.join(os.path.dirname(args.binary), 'clickhouse-odbc-bridge') - elif not os.path.isabs(args.bridge_binary): - args.bridge_binary = os.path.abspath(os.path.join(CURRENT_WORK_DIR, args.bridge_binary)) + if not args.odbc_bridge_binary: + args.odbc_bridge_binary = os.path.join(os.path.dirname(args.binary), 'clickhouse-odbc-bridge') + elif not os.path.isabs(args.odbc_bridge_binary): + args.odbc_bridge_binary = os.path.abspath(os.path.join(CURRENT_WORK_DIR, args.odbc_bridge_binary)) + + if not args.library_bridge_binary: + args.library_bridge_binary = os.path.join(os.path.dirname(args.binary), 'clickhouse-library-bridge') + elif not os.path.isabs(args.library_bridge_binary): + args.library_bridge_binary = os.path.abspath(os.path.join(CURRENT_WORK_DIR, args.library_bridge_binary)) if args.base_configs_dir: if not os.path.isabs(args.base_configs_dir): @@ -61,7 +66,7 @@ def check_args_and_update_paths(args): logging.info("base_configs_dir: {}, binary: {}, cases_dir: {} ".format(args.base_configs_dir, args.binary, args.cases_dir)) - for path in [args.binary, args.bridge_binary, args.base_configs_dir, args.cases_dir, CLICKHOUSE_ROOT]: + for path in [args.binary, args.odbc_bridge_binary, args.library_bridge_binary, args.base_configs_dir, args.cases_dir, CLICKHOUSE_ROOT]: if not os.path.exists(path): raise Exception("Path {} doesn't exist".format(path)) @@ -82,7 +87,8 @@ signal.signal(signal.SIGINT, docker_kill_handler_handler) # To run integration tests following artfacts should be sufficient: # - clickhouse binaries (env CLICKHOUSE_TESTS_SERVER_BIN_PATH or --binary arg) # - clickhouse default configs(config.xml, users.xml) from same version as binary (env CLICKHOUSE_TESTS_BASE_CONFIG_DIR or --base-configs-dir arg) -# - odbc bridge binary (env CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH or --bridge-binary arg) +# - odbc bridge binary (env CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH or --odbc-bridge-binary arg) +# - library bridge binary (env CLICKHOUSE_TESTS_LIBRARY_BRIDGE_BIN_PATH or --library-bridge-binary) # - tests/integration directory with all test cases and configs (env CLICKHOUSE_TESTS_INTEGRATION_PATH or --cases-dir) # # 1) --clickhouse-root is only used to determine other paths on default places @@ -98,10 +104,15 @@ if __name__ == "__main__": help="Path to clickhouse binary. 
For example /usr/bin/clickhouse") parser.add_argument( - "--bridge-binary", + "--odbc-bridge-binary", default=os.environ.get("CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH", ""), help="Path to clickhouse-odbc-bridge binary. Defaults to clickhouse-odbc-bridge in the same dir as clickhouse.") + parser.add_argument( + "--library-bridge-binary", + default=os.environ.get("CLICKHOUSE_TESTS_LIBRARY_BRIDGE_BIN_PATH", ""), + help="Path to clickhouse-library-bridge binary. Defaults to clickhouse-library-bridge in the same dir as clickhouse.") + parser.add_argument( "--base-configs-dir", default=os.environ.get("CLICKHOUSE_TESTS_BASE_CONFIG_DIR"), @@ -185,14 +196,17 @@ if __name__ == "__main__": if sys.stdout.isatty() and sys.stdin.isatty(): tty = "-it" - cmd = "docker run {net} {tty} --rm --name {name} --privileged --volume={bridge_bin}:/clickhouse-odbc-bridge --volume={bin}:/clickhouse \ + cmd = "docker run {net} {tty} --rm --name {name} --privileged \ + --volume={odbc_bridge_bin}:/clickhouse-odbc-bridge --volume={bin}:/clickhouse \ + --volume={library_bridge_bin}:/clickhouse-library-bridge --volume={bin}:/clickhouse \ --volume={base_cfg}:/clickhouse-config --volume={cases_dir}:/ClickHouse/tests/integration \ --volume={src_dir}/Server/grpc_protos:/ClickHouse/src/Server/grpc_protos \ --volume={name}_volume:/var/lib/docker {env_tags} -e PYTEST_OPTS='{opts}' {img} {command}".format( net=net, tty=tty, bin=args.binary, - bridge_bin=args.bridge_binary, + odbc_bridge_bin=args.odbc_bridge_binary, + library_bridge_bin=args.library_bridge_binary, base_cfg=args.base_configs_dir, cases_dir=args.cases_dir, src_dir=args.src_dir, diff --git a/programs/server/data/default/.gitignore b/tests/integration/test_catboost_model_config_reload/__init__.py similarity index 100% rename from programs/server/data/default/.gitignore rename to tests/integration/test_catboost_model_config_reload/__init__.py diff --git a/tests/integration/test_catboost_model_config_reload/config/catboost_lib.xml b/tests/integration/test_catboost_model_config_reload/config/catboost_lib.xml new file mode 100644 index 00000000000..745be7cebe6 --- /dev/null +++ b/tests/integration/test_catboost_model_config_reload/config/catboost_lib.xml @@ -0,0 +1,3 @@ + + /etc/clickhouse-server/model/libcatboostmodel.so + diff --git a/tests/integration/test_catboost_model_config_reload/config/models_config.xml b/tests/integration/test_catboost_model_config_reload/config/models_config.xml new file mode 100644 index 00000000000..7e62283a83c --- /dev/null +++ b/tests/integration/test_catboost_model_config_reload/config/models_config.xml @@ -0,0 +1,2 @@ + + diff --git a/tests/integration/test_catboost_model_config_reload/model/libcatboostmodel.so b/tests/integration/test_catboost_model_config_reload/model/libcatboostmodel.so new file mode 100755 index 00000000000..388d9f887b4 Binary files /dev/null and b/tests/integration/test_catboost_model_config_reload/model/libcatboostmodel.so differ diff --git a/tests/integration/test_catboost_model_config_reload/model/model.bin b/tests/integration/test_catboost_model_config_reload/model/model.bin new file mode 100644 index 00000000000..118e099d176 Binary files /dev/null and b/tests/integration/test_catboost_model_config_reload/model/model.bin differ diff --git a/tests/integration/test_catboost_model_config_reload/model/model_config.xml b/tests/integration/test_catboost_model_config_reload/model/model_config.xml new file mode 100644 index 00000000000..af9778097fa --- /dev/null +++ 
b/tests/integration/test_catboost_model_config_reload/model/model_config.xml @@ -0,0 +1,8 @@ + + + catboost + model1 + /etc/clickhouse-server/model/model.bin + 0 + + diff --git a/tests/integration/test_catboost_model_config_reload/model/model_config2.xml b/tests/integration/test_catboost_model_config_reload/model/model_config2.xml new file mode 100644 index 00000000000..b81120ec900 --- /dev/null +++ b/tests/integration/test_catboost_model_config_reload/model/model_config2.xml @@ -0,0 +1,8 @@ + + + catboost + model2 + /etc/clickhouse-server/model/model.bin + 0 + + diff --git a/tests/integration/test_catboost_model_config_reload/test.py b/tests/integration/test_catboost_model_config_reload/test.py new file mode 100644 index 00000000000..34da1cda2d5 --- /dev/null +++ b/tests/integration/test_catboost_model_config_reload/test.py @@ -0,0 +1,58 @@ +import os +import sys +import time + +import pytest + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) + +from helpers.cluster import ClickHouseCluster + +cluster = ClickHouseCluster(__file__) +node = cluster.add_instance('node', stay_alive=True, main_configs=['config/models_config.xml', 'config/catboost_lib.xml']) + + +def copy_file_to_container(local_path, dist_path, container_id): + os.system("docker cp {local} {cont_id}:{dist}".format(local=local_path, cont_id=container_id, dist=dist_path)) + + +config = ''' + /etc/clickhouse-server/model/{model_config} +''' + + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.start() + + copy_file_to_container(os.path.join(SCRIPT_DIR, 'model/.'), '/etc/clickhouse-server/model', node.docker_id) + node.restart_clickhouse() + + yield cluster + + finally: + cluster.shutdown() + + +def change_config(model_config): + node.replace_config("/etc/clickhouse-server/config.d/models_config.xml", config.format(model_config=model_config)) + node.query("SYSTEM RELOAD CONFIG;") + + +def test(started_cluster): + # Set config with the path to the first model. + change_config("model_config.xml") + + node.query("SELECT modelEvaluate('model1', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11);") + + # Change path to the second model in config. + change_config("model_config2.xml") + + # Check that the new model is loaded. + node.query("SELECT modelEvaluate('model2', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11);") + + # Check that the old model was unloaded. 
+ node.query_and_get_error("SELECT modelEvaluate('model1', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11);") + diff --git a/programs/server/metadata/default/.gitignore b/tests/integration/test_catboost_model_reload/__init__.py similarity index 100% rename from programs/server/metadata/default/.gitignore rename to tests/integration/test_catboost_model_reload/__init__.py diff --git a/tests/integration/test_catboost_model_reload/config/catboost_lib.xml b/tests/integration/test_catboost_model_reload/config/catboost_lib.xml new file mode 100644 index 00000000000..745be7cebe6 --- /dev/null +++ b/tests/integration/test_catboost_model_reload/config/catboost_lib.xml @@ -0,0 +1,3 @@ + + /etc/clickhouse-server/model/libcatboostmodel.so + diff --git a/tests/integration/test_catboost_model_reload/config/models_config.xml b/tests/integration/test_catboost_model_reload/config/models_config.xml new file mode 100644 index 00000000000..e84ca8b5285 --- /dev/null +++ b/tests/integration/test_catboost_model_reload/config/models_config.xml @@ -0,0 +1,3 @@ + + /etc/clickhouse-server/model/model_config.xml + diff --git a/tests/integration/test_catboost_model_reload/model/conjunction.cbm b/tests/integration/test_catboost_model_reload/model/conjunction.cbm new file mode 100644 index 00000000000..7b75fb5f886 Binary files /dev/null and b/tests/integration/test_catboost_model_reload/model/conjunction.cbm differ diff --git a/tests/integration/test_catboost_model_reload/model/disjunction.cbm b/tests/integration/test_catboost_model_reload/model/disjunction.cbm new file mode 100644 index 00000000000..8145c24637f Binary files /dev/null and b/tests/integration/test_catboost_model_reload/model/disjunction.cbm differ diff --git a/tests/integration/test_catboost_model_reload/model/libcatboostmodel.so b/tests/integration/test_catboost_model_reload/model/libcatboostmodel.so new file mode 100755 index 00000000000..388d9f887b4 Binary files /dev/null and b/tests/integration/test_catboost_model_reload/model/libcatboostmodel.so differ diff --git a/tests/integration/test_catboost_model_reload/model/model_config.xml b/tests/integration/test_catboost_model_reload/model/model_config.xml new file mode 100644 index 00000000000..7cbda165ce9 --- /dev/null +++ b/tests/integration/test_catboost_model_reload/model/model_config.xml @@ -0,0 +1,8 @@ + + + catboost + model + /etc/clickhouse-server/model/model.cbm + 0 + + diff --git a/tests/integration/test_catboost_model_reload/test.py b/tests/integration/test_catboost_model_reload/test.py new file mode 100644 index 00000000000..8283e6af975 --- /dev/null +++ b/tests/integration/test_catboost_model_reload/test.py @@ -0,0 +1,74 @@ +import os +import sys +import time + +import pytest + +sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) + +from helpers.cluster import ClickHouseCluster + +cluster = ClickHouseCluster(__file__) +node = cluster.add_instance('node', stay_alive=True, main_configs=['config/models_config.xml', 'config/catboost_lib.xml']) + +def copy_file_to_container(local_path, dist_path, container_id): + os.system("docker cp {local} {cont_id}:{dist}".format(local=local_path, cont_id=container_id, dist=dist_path)) + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.start() + + copy_file_to_container(os.path.join(SCRIPT_DIR, 'model/.'), '/etc/clickhouse-server/model', node.docker_id) + node.query("CREATE TABLE binary (x UInt64, y UInt64) ENGINE = TinyLog()") + node.query("INSERT INTO binary 
VALUES (1, 1), (1, 0), (0, 1), (0, 0)") + + node.restart_clickhouse() + + yield cluster + + finally: + cluster.shutdown() + +def test_model_reload(started_cluster): + node.exec_in_container(["bash", "-c", "rm -f /etc/clickhouse-server/model/model.cbm"]) + node.exec_in_container(["bash", "-c", "ln /etc/clickhouse-server/model/conjunction.cbm /etc/clickhouse-server/model/model.cbm"]) + node.query("SYSTEM RELOAD MODEL model") + + result = node.query(""" + WITH modelEvaluate('model', toFloat64(x), toFloat64(y)) as prediction, exp(prediction) / (1 + exp(prediction)) as probability + SELECT if(probability > 0.5, 1, 0) FROM binary; + """) + assert result == '1\n0\n0\n0\n' + + node.exec_in_container(["bash", "-c", "rm /etc/clickhouse-server/model/model.cbm"]) + node.exec_in_container(["bash", "-c", "ln /etc/clickhouse-server/model/disjunction.cbm /etc/clickhouse-server/model/model.cbm"]) + node.query("SYSTEM RELOAD MODEL model") + + result = node.query(""" + WITH modelEvaluate('model', toFloat64(x), toFloat64(y)) as prediction, exp(prediction) / (1 + exp(prediction)) as probability + SELECT if(probability > 0.5, 1, 0) FROM binary; + """) + assert result == '1\n1\n1\n0\n' + +def test_models_reload(started_cluster): + node.exec_in_container(["bash", "-c", "rm -f /etc/clickhouse-server/model/model.cbm"]) + node.exec_in_container(["bash", "-c", "ln /etc/clickhouse-server/model/conjunction.cbm /etc/clickhouse-server/model/model.cbm"]) + node.query("SYSTEM RELOAD MODELS") + + result = node.query(""" + WITH modelEvaluate('model', toFloat64(x), toFloat64(y)) as prediction, exp(prediction) / (1 + exp(prediction)) as probability + SELECT if(probability > 0.5, 1, 0) FROM binary; + """) + assert result == '1\n0\n0\n0\n' + + node.exec_in_container(["bash", "-c", "rm /etc/clickhouse-server/model/model.cbm"]) + node.exec_in_container(["bash", "-c", "ln /etc/clickhouse-server/model/disjunction.cbm /etc/clickhouse-server/model/model.cbm"]) + node.query("SYSTEM RELOAD MODELS") + + result = node.query(""" + WITH modelEvaluate('model', toFloat64(x), toFloat64(y)) as prediction, exp(prediction) / (1 + exp(prediction)) as probability + SELECT if(probability > 0.5, 1, 0) FROM binary; + """) + assert result == '1\n1\n1\n0\n' \ No newline at end of file diff --git a/tests/integration/test_cluster_copier/configs/users.xml b/tests/integration/test_cluster_copier/configs/users.xml index e742d4f05a6..d27ca56eec7 100644 --- a/tests/integration/test_cluster_copier/configs/users.xml +++ b/tests/integration/test_cluster_copier/configs/users.xml @@ -17,6 +17,14 @@ default default + + 12345678 + + ::/0 + + default + default + diff --git a/tests/integration/test_cluster_copier/task_self_copy.xml b/tests/integration/test_cluster_copier/task_self_copy.xml new file mode 100644 index 00000000000..e0e35ccfe99 --- /dev/null +++ b/tests/integration/test_cluster_copier/task_self_copy.xml @@ -0,0 +1,64 @@ + + + 9440 + + + + false + + s0_0_0 + 9000 + dbuser + 12345678 + 0 + + + + + + + false + + s0_0_0 + 9000 + dbuser + 12345678 + 0 + + + + + + 2 + + + 1 + + + + 0 + + + + 3 + 1 + + + + + source_cluster + db1 + source_table + + destination_cluster + db2 + destination_table + + + ENGINE = MergeTree PARTITION BY a ORDER BY a SETTINGS index_granularity = 8192 + + + rand() + + + \ No newline at end of file diff --git a/tests/integration/test_cluster_copier/test.py b/tests/integration/test_cluster_copier/test.py index d87969630cd..57f9d150c8d 100644 --- a/tests/integration/test_cluster_copier/test.py +++ 
b/tests/integration/test_cluster_copier/test.py @@ -251,6 +251,31 @@ class Task_non_partitioned_table: instance = cluster.instances['s1_1_0'] instance.query("DROP TABLE copier_test1_1") +class Task_self_copy: + + def __init__(self, cluster): + self.cluster = cluster + self.zk_task_path = "/clickhouse-copier/task_self_copy" + self.copier_task_config = open(os.path.join(CURRENT_TEST_DIR, 'task_self_copy.xml'), 'r').read() + + def start(self): + instance = cluster.instances['s0_0_0'] + instance.query("CREATE DATABASE db1;") + instance.query( + "CREATE TABLE db1.source_table (`a` Int8, `b` String, `c` Int8) ENGINE = MergeTree PARTITION BY a ORDER BY a SETTINGS index_granularity = 8192") + instance.query("CREATE DATABASE db2;") + instance.query( + "CREATE TABLE db2.destination_table (`a` Int8, `b` String, `c` Int8) ENGINE = MergeTree PARTITION BY a ORDER BY a SETTINGS index_granularity = 8192") + instance.query("INSERT INTO db1.source_table VALUES (1, 'ClickHouse', 1);") + instance.query("INSERT INTO db1.source_table VALUES (2, 'Copier', 2);") + + def check(self): + instance = cluster.instances['s0_0_0'] + assert TSV(instance.query("SELECT * FROM db2.destination_table ORDER BY a")) == TSV(instance.query("SELECT * FROM db1.source_table ORDER BY a")) + instance = cluster.instances['s0_0_0'] + instance.query("DROP DATABASE db1 SYNC") + instance.query("DROP DATABASE db2 SYNC") + def execute_task(task, cmd_options): task.start() @@ -380,9 +405,14 @@ def test_no_index(started_cluster): def test_no_arg(started_cluster): execute_task(Task_no_arg(started_cluster), []) + def test_non_partitioned_table(started_cluster): execute_task(Task_non_partitioned_table(started_cluster), []) + +def test_self_copy(started_cluster): + execute_task(Task_self_copy(started_cluster), []) + if __name__ == '__main__': with contextmanager(started_cluster)() as cluster: for name, instance in list(cluster.instances.items()): diff --git a/tests/integration/test_dictionaries_postgresql/test.py b/tests/integration/test_dictionaries_postgresql/test.py index 10d9f4213e1..5b3b5a5aa45 100644 --- a/tests/integration/test_dictionaries_postgresql/test.py +++ b/tests/integration/test_dictionaries_postgresql/test.py @@ -9,7 +9,7 @@ cluster = ClickHouseCluster(__file__) node1 = cluster.add_instance('node1', main_configs=[ 'configs/config.xml', 'configs/dictionaries/postgres_dict.xml', - 'configs/log_conf.xml'], with_postgres=True) + 'configs/log_conf.xml'], with_postgres=True, with_postgres_cluster=True) postgres_dict_table_template = """ CREATE TABLE IF NOT EXISTS {} ( @@ -62,7 +62,7 @@ def started_cluster(): print("postgres1 connected") create_postgres_db(postgres_conn, 'clickhouse') - postgres_conn = get_postgres_conn(port=5441) + postgres_conn = get_postgres_conn(port=5421) print("postgres2 connected") create_postgres_db(postgres_conn, 'clickhouse') @@ -131,7 +131,7 @@ def test_invalidate_query(started_cluster): def test_dictionary_with_replicas(started_cluster): conn1 = get_postgres_conn(port=5432, database=True) cursor1 = conn1.cursor() - conn2 = get_postgres_conn(port=5441, database=True) + conn2 = get_postgres_conn(port=5421, database=True) cursor2 = conn2.cursor() create_postgres_table(cursor1, 'test1') diff --git a/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/test.py b/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/test.py index f9c10d68fe3..7bce2d50011 100644 --- a/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/test.py +++ 
b/tests/integration/test_fetch_partition_from_auxiliary_zookeeper/test.py @@ -1,5 +1,3 @@ - - import pytest from helpers.client import QueryRuntimeException from helpers.cluster import ClickHouseCluster @@ -18,23 +16,33 @@ def start_cluster(): cluster.shutdown() -def test_fetch_part_from_allowed_zookeeper(start_cluster): +@pytest.mark.parametrize( + ('part', 'date', 'part_name'), + [ + ('PARTITION', '2020-08-27', '2020-08-27'), + ('PART', '2020-08-28', '20200828_0_0_0'), + ] +) +def test_fetch_part_from_allowed_zookeeper(start_cluster, part, date, part_name): node.query( - "CREATE TABLE simple (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/0/simple', 'node') ORDER BY tuple() PARTITION BY date;" + "CREATE TABLE IF NOT EXISTS simple (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/0/simple', 'node') ORDER BY tuple() PARTITION BY date;" ) - node.query("INSERT INTO simple VALUES ('2020-08-27', 1)") + + node.query("""INSERT INTO simple VALUES ('{date}', 1)""".format(date=date)) node.query( - "CREATE TABLE simple2 (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/simple', 'node') ORDER BY tuple() PARTITION BY date;" + "CREATE TABLE IF NOT EXISTS simple2 (date Date, id UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/simple', 'node') ORDER BY tuple() PARTITION BY date;" ) + node.query( - "ALTER TABLE simple2 FETCH PARTITION '2020-08-27' FROM 'zookeeper2:/clickhouse/tables/0/simple';" - ) - node.query("ALTER TABLE simple2 ATTACH PARTITION '2020-08-27';") + """ALTER TABLE simple2 FETCH {part} '{part_name}' FROM 'zookeeper2:/clickhouse/tables/0/simple';""".format( + part=part, part_name=part_name)) + + node.query("""ALTER TABLE simple2 ATTACH {part} '{part_name}';""".format(part=part, part_name=part_name)) with pytest.raises(QueryRuntimeException): node.query( - "ALTER TABLE simple2 FETCH PARTITION '2020-08-27' FROM 'zookeeper:/clickhouse/tables/0/simple';" - ) + """ALTER TABLE simple2 FETCH {part} '{part_name}' FROM 'zookeeper:/clickhouse/tables/0/simple';""".format( + part=part, part_name=part_name)) - assert node.query("SELECT id FROM simple2").strip() == "1" + assert node.query("""SELECT id FROM simple2 where date = '{date}'""".format(date=date)).strip() == "1" diff --git a/tests/integration/test_hedged_requests/configs/users.xml b/tests/integration/test_hedged_requests/configs/users.xml index a3ab176b811..ac42155a18a 100644 --- a/tests/integration/test_hedged_requests/configs/users.xml +++ b/tests/integration/test_hedged_requests/configs/users.xml @@ -5,6 +5,8 @@ in_order 100 2000 + 1 + 1 diff --git a/tests/integration/test_hedged_requests/test.py b/tests/integration/test_hedged_requests/test.py index a1693206ecc..e40b3109c44 100644 --- a/tests/integration/test_hedged_requests/test.py +++ b/tests/integration/test_hedged_requests/test.py @@ -87,7 +87,10 @@ def check_settings(node_name, sleep_in_send_tables_status_ms, sleep_in_send_data def check_changing_replica_events(expected_count): result = NODES['node'].query("SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica'") - assert int(result) == expected_count + + # If server load is high we can see more than expected + # replica change events, but never less than expected + assert int(result) >= expected_count def update_configs(node_1_sleep_in_send_tables_status=0, node_1_sleep_in_send_data=0, diff --git a/tests/integration/test_hedged_requests_parallel/configs/users.xml b/tests/integration/test_hedged_requests_parallel/configs/users.xml index 
3f3578903b4..9600c0c7124 100644 --- a/tests/integration/test_hedged_requests_parallel/configs/users.xml +++ b/tests/integration/test_hedged_requests_parallel/configs/users.xml @@ -6,6 +6,8 @@ 2 100 2000 + 1 + 1 diff --git a/tests/integration/test_hedged_requests_parallel/test.py b/tests/integration/test_hedged_requests_parallel/test.py index 33f70da00ca..7abc2eb1d2a 100644 --- a/tests/integration/test_hedged_requests_parallel/test.py +++ b/tests/integration/test_hedged_requests_parallel/test.py @@ -88,7 +88,10 @@ def check_settings(node_name, sleep_in_send_tables_status_ms, sleep_in_send_data def check_changing_replica_events(expected_count): result = NODES['node'].query("SELECT value FROM system.events WHERE event='HedgedRequestsChangeReplica'") - assert int(result) == expected_count + + # If server load is high we can see more than expected + # replica change events, but never less than expected + assert int(result) >= expected_count def update_configs(node_1_sleep_in_send_tables_status=0, node_1_sleep_in_send_data=0, diff --git a/tests/integration/test_keeper_internal_secure/__init__.py b/tests/integration/test_keeper_internal_secure/__init__.py new file mode 100644 index 00000000000..e5a0d9b4834 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/__init__.py @@ -0,0 +1 @@ +#!/usr/bin/env python3 diff --git a/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper1.xml b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper1.xml new file mode 100644 index 00000000000..ecbd50c72a6 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper1.xml @@ -0,0 +1,42 @@ + + + 9181 + 1 + /var/lib/clickhouse/coordination/log + /var/lib/clickhouse/coordination/snapshots + + + 5000 + 10000 + 75 + trace + + + + true + + 1 + node1 + 44444 + true + 3 + + + 2 + node2 + 44444 + true + true + 2 + + + 3 + node3 + 44444 + true + true + 1 + + + + diff --git a/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper2.xml b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper2.xml new file mode 100644 index 00000000000..53129ae0a75 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper2.xml @@ -0,0 +1,42 @@ + + + 9181 + 2 + /var/lib/clickhouse/coordination/log + /var/lib/clickhouse/coordination/snapshots + + + 5000 + 10000 + 75 + trace + + + + true + + 1 + node1 + 44444 + true + 3 + + + 2 + node2 + 44444 + true + true + 2 + + + 3 + node3 + 44444 + true + true + 1 + + + + diff --git a/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper3.xml b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper3.xml new file mode 100644 index 00000000000..4c685764ec0 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/enable_secure_keeper3.xml @@ -0,0 +1,42 @@ + + + 9181 + 3 + /var/lib/clickhouse/coordination/log + /var/lib/clickhouse/coordination/snapshots + + + 5000 + 10000 + 75 + trace + + + + true + + 1 + node1 + 44444 + true + 3 + + + 2 + node2 + 44444 + true + true + 2 + + + 3 + node3 + 44444 + true + true + 1 + + + + diff --git a/tests/integration/test_keeper_internal_secure/configs/rootCA.pem b/tests/integration/test_keeper_internal_secure/configs/rootCA.pem new file mode 100644 index 00000000000..ec16533d98a --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/rootCA.pem @@ -0,0 +1,21 @@ +-----BEGIN CERTIFICATE----- 
+MIIDazCCAlOgAwIBAgIUUiyhAav08YhTLfUIXLN/0Ln09n4wDQYJKoZIhvcNAQEL +BQAwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM +GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMTA0MTIxMTQ1MjBaFw0yMTA1 +MTIxMTQ1MjBaMEUxCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw +HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEB +AQUAA4IBDwAwggEKAoIBAQDK0Ww4voPlkePBPS2MsEi7e1ePS+CDxTdDuOwWWEA7 +JiOyqIGqdyL6AE2EqjL3sSdVFVxytpGQWDuM6JHXdb01AnMngBuql9Jkiln7i267 +v54HtMWdm8o3rik/b/mB+kkn/sP715tI49Ybh/RobtvtK16ZgHr1ombkq6rXiom2 +8GmSmpYFwZtZsXtm2JwbZVayupQpWwdu3KrTXKBtVyKVvvWdgkf47DWYtWDS3vqE +cShM1H97G4DvI+4RX1WtQevQ0yCx1aFTg4xMHFkpUxlP8iW6mQaQPqy9rnI57e3L +RHc2I/B56xa43R3GmQ2S7bE4hvm1SrZDtVgrZLf4nvwNAgMBAAGjUzBRMB0GA1Ud +DgQWBBQ4+o0x1FzK7nRbcnm2pNLwaywCdzAfBgNVHSMEGDAWgBQ4+o0x1FzK7nRb +cnm2pNLwaywCdzAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQDE +YmM8MH6RKcaqMqCBefWLj0LTcZ/Wm4G/eCFC51PkAIsf7thnzViemBHRXUSF8wzc +1MBPD6II6OB1F0i7ntGjtlhnL2WcPYbo2Np59p7fo9SMbYwF49OZ40twsuKeeoAp +pfow+y/EBZqa99MY2q6FU6FDA3Rpv0Sdk+/5PHdsSP6cgeMszFBUS0tCQEvEl83n +FJUb0vjEX4x3J64XO/0DKXyCxFyF77OwHG2ZV5BeCpIhGXu+d/e221LJkGI2orKR +kgsaUwrkS8HQt3Hd0gYpLI1Opx/JlRpB0VLYLzRGj7kDpbAcTj3SMEUp/FAZmlXR +Iiebt73eE3rOWVFgyY9f +-----END CERTIFICATE----- diff --git a/tests/integration/test_keeper_internal_secure/configs/server.crt b/tests/integration/test_keeper_internal_secure/configs/server.crt new file mode 100644 index 00000000000..dfa32da5444 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/server.crt @@ -0,0 +1,19 @@ +-----BEGIN CERTIFICATE----- +MIIDETCCAfkCFHL+gKBQnU0P73/nrFrGaVPauTPmMA0GCSqGSIb3DQEBCwUAMEUx +CzAJBgNVBAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRl +cm5ldCBXaWRnaXRzIFB0eSBMdGQwHhcNMjEwNDEyMTE0NzI5WhcNMjEwNTEyMTE0 +NzI5WjBFMQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UE +CgwYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOC +AQ8AMIIBCgKCAQEA1iPeYn1Vy4QnQi6uNVqQnFLr0u3qdrMjGEBNAOuGmtIdhIn8 +rMCzaehNr3y2YTMRbZAqmv28P/wOXpzR1uQaFlQzTOjmsn/HOZ9JX2hv5sBUv7SU +UiPJS7UtptKDPbLv3N/v1dOXbY+vVyzo8U1Q9OS1J5yhYW6KtxP++hfSrOsFu669 +d1pqWFWaNBsmf0zF+ETvi6lywhyTFA1/PazcStP5GntcDL7eDvGq+DDsRC40oRpy +S4xRQRSteCTtGGmWpx+Jmt+90wFnLgruUbWT0veCoLxLvz0tJUk3ueUVnMkrxBQG +Fz+IWm+SQppNU5LlAcBcu9wJfo3h34BXp0NFNQIDAQABMA0GCSqGSIb3DQEBCwUA +A4IBAQCUnvQsv+GsPwGnIWqH9iiFVhgDx5QbSTW94Fyqk8dcIJBzWAiCshmLBWPJ +pfy4y2nxJbzovFsd9DA49pxqqILeLjue99yma2DVKeo+XDLDN3OX5faIMTBd7AnL +0MKqW7gUSLRUZrNOvFciAY8xRezgBQQBo4mcmmMbAbk5wKndGY6ZZOcY+JwXlqGB +5hyi6ishO8ciiZi3GMFNWWk9ViSfo27IqjKdSkQq1pr3FULvepd6SkdX+NvfZTAH +rG+CSoFGiJcOBbhDkvpY32cAJEnJOA1vHpFxfnGP8/1haeVZHqSwH1cySD78HVtF +fBs000wGHzBYWNI2KkwjNtYf06P4 +-----END CERTIFICATE----- diff --git a/tests/integration/test_keeper_internal_secure/configs/server.key b/tests/integration/test_keeper_internal_secure/configs/server.key new file mode 100644 index 00000000000..7e57c8b6b34 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/server.key @@ -0,0 +1,27 @@ +-----BEGIN RSA PRIVATE KEY----- +MIIEowIBAAKCAQEA1iPeYn1Vy4QnQi6uNVqQnFLr0u3qdrMjGEBNAOuGmtIdhIn8 +rMCzaehNr3y2YTMRbZAqmv28P/wOXpzR1uQaFlQzTOjmsn/HOZ9JX2hv5sBUv7SU +UiPJS7UtptKDPbLv3N/v1dOXbY+vVyzo8U1Q9OS1J5yhYW6KtxP++hfSrOsFu669 +d1pqWFWaNBsmf0zF+ETvi6lywhyTFA1/PazcStP5GntcDL7eDvGq+DDsRC40oRpy +S4xRQRSteCTtGGmWpx+Jmt+90wFnLgruUbWT0veCoLxLvz0tJUk3ueUVnMkrxBQG +Fz+IWm+SQppNU5LlAcBcu9wJfo3h34BXp0NFNQIDAQABAoIBAHYDso2o8V2F6XTp +8QxqawQcFudaQztDonW9CjMVmks8vRPMUDqMwNP/OMEcBA8xa8tsBm8Ao3zH1suB +tYuujkn8AYHDYVDCZvN0u6UfE3yiRpKYXJ2gJ1HX+d7UaYvZT6P0rmKzh+LTqxhq 
+Ib7Kk3FDkirQgYgGueAH3x/JfUvaAGvFrq+HvvlhHOs7M7iFU4nJA8jNfBolpTnG +v5MMI+f8/GHGreVICJUoclE+4V/4LDHUlrc3l1kQk0keeD6ECw/pl48TNL6ncXKu +baez1rfKbMPjhLUy2q5UZa93oW+olchEOXs1nUNKUhIOOr0f0YweYhUHNTineVM9 +yTecMIkCgYEA7CFQMyeLVeBA6C9AHBe8Zf/k64cMPyr0lUz6548ulil580PNPbvW +kd2vIKfUMgCO5lMA47ArL4bXZ7cjTvJmPYE1Yv8z+F0Tk03fnTrudHOSBEiGXAu3 +MPTxCDU7Se5Dwj0Fq81aFRtCHl8Rrss+WiBD8eRoxb/vwXKFc6VUAWMCgYEA6CjZ +XrZz11lySBhjkyVXcdLj89hDZ+bPxA7b3VB7TfCxsn5xVck7U3TFkg5Z9XwEQ7Ob +XFAPuwT9GKm7QPp6L8T2RltoJ3ys40UH1RtcNLz2aIo/xSP7lopPdAfWHef5r4y9 +kRw+Gh4NP/l5wefXsRz/D0jY3+t+QnwnhuCKbocCgYEAiR6bPOlkvzyXVH1DxEyA +Sdb8b00f7nqaRyzJsrfxvJ9fQsWHpKa0ZkYOUW9ECLlMQjHHHXEK0vGBmqe9qDWY +63RhtRgvbLVYDb018k7rc9I846Hd7AudmJ9UbIjE4hyrWlsnNOntur32ej6IvTEn +Bx0fd5NEyDi6GGLRXiOOkbMCgYAressLE/yqDlR68CZl/o5cAPU0TAKDyRSMUYQX +9OTC+hstpMSxHlkADlSaQBnVAf8CdvbX2R65FfwYzGEHkGGl5KuDDcd57b2rathG +rzMbpXA4r/u1fkG2Nf0fbABL5ZA7so4mSTXQSmSM4LpO+I7K2vVh9XC4rzAcX4g/ +mHoUrQKBgBf3rxp5h9P3HWoZYjzBDo2FqXUjKLLjE9ed5e/VqecqfHIkmueuNHlN +xifHr7lpsYu6IXkTnlK14pvLoPuwP59dCIOUYwAFz8RlH4MSUGNhYeGA8cqRrhmJ +tYk3OKExuN/+O12kUPVTy6BMH1hBdRJP+7y7lapWsRhZt18y+8Za +-----END RSA PRIVATE KEY----- diff --git a/tests/integration/test_keeper_internal_secure/configs/ssl_conf.xml b/tests/integration/test_keeper_internal_secure/configs/ssl_conf.xml new file mode 100644 index 00000000000..babc7cf0f18 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/configs/ssl_conf.xml @@ -0,0 +1,15 @@ + + + + + /etc/clickhouse-server/config.d/server.crt + /etc/clickhouse-server/config.d/server.key + /etc/clickhouse-server/config.d/rootCA.pem + true + none + true + sslv2,sslv3 + true + + + diff --git a/tests/integration/test_keeper_internal_secure/test.py b/tests/integration/test_keeper_internal_secure/test.py new file mode 100644 index 00000000000..d9fbca624e1 --- /dev/null +++ b/tests/integration/test_keeper_internal_secure/test.py @@ -0,0 +1,58 @@ +#!/usr/bin/env python3 + +import pytest +from helpers.cluster import ClickHouseCluster +import random +import string +import os +import time + +cluster = ClickHouseCluster(__file__) +node1 = cluster.add_instance('node1', main_configs=['configs/enable_secure_keeper1.xml', 'configs/ssl_conf.xml', 'configs/server.crt', 'configs/server.key', 'configs/rootCA.pem']) +node2 = cluster.add_instance('node2', main_configs=['configs/enable_secure_keeper2.xml', 'configs/ssl_conf.xml', 'configs/server.crt', 'configs/server.key', 'configs/rootCA.pem']) +node3 = cluster.add_instance('node3', main_configs=['configs/enable_secure_keeper3.xml', 'configs/ssl_conf.xml', 'configs/server.crt', 'configs/server.key', 'configs/rootCA.pem']) + +from kazoo.client import KazooClient, KazooState + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.start() + + yield cluster + + finally: + cluster.shutdown() + +def get_fake_zk(nodename, timeout=30.0): + _fake_zk_instance = KazooClient(hosts=cluster.get_instance_ip(nodename) + ":9181", timeout=timeout) + def reset_listener(state): + nonlocal _fake_zk_instance + print("Fake zk callback called for state", state) + if state != KazooState.CONNECTED: + _fake_zk_instance._reset() + + _fake_zk_instance.add_listener(reset_listener) + _fake_zk_instance.start() + return _fake_zk_instance + +def test_secure_raft_works(started_cluster): + try: + node1_zk = get_fake_zk("node1") + node2_zk = get_fake_zk("node2") + node3_zk = get_fake_zk("node3") + + node1_zk.create("/test_node", b"somedata1") + node2_zk.sync("/test_node") + node3_zk.sync("/test_node") + + assert node1_zk.exists("/test_node") is 
not None + assert node2_zk.exists("/test_node") is not None + assert node3_zk.exists("/test_node") is not None + finally: + try: + for zk_conn in [node1_zk, node2_zk, node3_zk]: + zk_conn.stop() + zk_conn.close() + except: + pass diff --git a/tests/integration/test_keeper_secure_client/__init__.py b/tests/integration/test_keeper_secure_client/__init__.py new file mode 100644 index 00000000000..e5a0d9b4834 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/__init__.py @@ -0,0 +1 @@ +#!/usr/bin/env python3 diff --git a/tests/integration/test_keeper_secure_client/configs/dhparam.pem b/tests/integration/test_keeper_secure_client/configs/dhparam.pem new file mode 100644 index 00000000000..2e6cee0798d --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/dhparam.pem @@ -0,0 +1,8 @@ +-----BEGIN DH PARAMETERS----- +MIIBCAKCAQEAua92DDli13gJ+//ZXyGaggjIuidqB0crXfhUlsrBk9BV1hH3i7fR +XGP9rUdk2ubnB3k2ejBStL5oBrkHm9SzUFSQHqfDjLZjKoUpOEmuDc4cHvX1XTR5 +Pr1vf5cd0yEncJWG5W4zyUB8k++SUdL2qaeslSs+f491HBLDYn/h8zCgRbBvxhxb +9qeho1xcbnWeqkN6Kc9bgGozA16P9NLuuLttNnOblkH+lMBf42BSne/TWt3AlGZf +slKmmZcySUhF8aKfJnLKbkBCFqOtFRh8zBA9a7g+BT/lSANATCDPaAk1YVih2EKb +dpc3briTDbRsiqg2JKMI7+VdULY9bh3EawIBAg== +-----END DH PARAMETERS----- diff --git a/tests/integration/test_keeper_secure_client/configs/enable_secure_keeper.xml b/tests/integration/test_keeper_secure_client/configs/enable_secure_keeper.xml new file mode 100644 index 00000000000..af815f4a3bc --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/enable_secure_keeper.xml @@ -0,0 +1,24 @@ + + + + 10181 + 1 + /var/lib/clickhouse/coordination/log + /var/lib/clickhouse/coordination/snapshots + + + 10000 + 30000 + trace + false + + + + + 1 + localhost + 44444 + + + + diff --git a/tests/integration/test_keeper_secure_client/configs/server.crt b/tests/integration/test_keeper_secure_client/configs/server.crt new file mode 100644 index 00000000000..7ade2d96273 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/server.crt @@ -0,0 +1,19 @@ +-----BEGIN CERTIFICATE----- +MIIC/TCCAeWgAwIBAgIJANjx1QSR77HBMA0GCSqGSIb3DQEBCwUAMBQxEjAQBgNV +BAMMCWxvY2FsaG9zdDAgFw0xODA3MzAxODE2MDhaGA8yMjkyMDUxNDE4MTYwOFow +FDESMBAGA1UEAwwJbG9jYWxob3N0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB +CgKCAQEAs9uSo6lJG8o8pw0fbVGVu0tPOljSWcVSXH9uiJBwlZLQnhN4SFSFohfI +4K8U1tBDTnxPLUo/V1K9yzoLiRDGMkwVj6+4+hE2udS2ePTQv5oaMeJ9wrs+5c9T +4pOtlq3pLAdm04ZMB1nbrEysceVudHRkQbGHzHp6VG29Fw7Ga6YpqyHQihRmEkTU +7UCYNA+Vk7aDPdMS/khweyTpXYZimaK9f0ECU3/VOeG3fH6Sp2X6FN4tUj/aFXEj +sRmU5G2TlYiSIUMF2JPdhSihfk1hJVALrHPTU38SOL+GyyBRWdNcrIwVwbpvsvPg +pryMSNxnpr0AK0dFhjwnupIv5hJIOQIDAQABo1AwTjAdBgNVHQ4EFgQUjPLb3uYC +kcamyZHK4/EV8jAP0wQwHwYDVR0jBBgwFoAUjPLb3uYCkcamyZHK4/EV8jAP0wQw +DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAM/ocuDvfPus/KpMVD51j +4IdlU8R0vmnYLQ+ygzOAo7+hUWP5j0yvq4ILWNmQX6HNvUggCgFv9bjwDFhb/5Vr +85ieWfTd9+LTjrOzTw4avdGwpX9G+6jJJSSq15tw5ElOIFb/qNA9O4dBiu8vn03C +L/zRSXrARhSqTW5w/tZkUcSTT+M5h28+Lgn9ysx4Ff5vi44LJ1NnrbJbEAIYsAAD ++UA+4MBFKx1r6hHINULev8+lCfkpwIaeS8RL+op4fr6kQPxnULw8wT8gkuc8I4+L +P9gg/xDHB44T3ADGZ5Ib6O0DJaNiToO6rnoaaxs0KkotbvDWvRoxEytSbXKoYjYp +0g== +-----END CERTIFICATE----- diff --git a/tests/integration/test_keeper_secure_client/configs/server.key b/tests/integration/test_keeper_secure_client/configs/server.key new file mode 100644 index 00000000000..f0fb61ac443 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/server.key @@ -0,0 +1,28 @@ +-----BEGIN PRIVATE KEY----- 
+MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCz25KjqUkbyjyn +DR9tUZW7S086WNJZxVJcf26IkHCVktCeE3hIVIWiF8jgrxTW0ENOfE8tSj9XUr3L +OguJEMYyTBWPr7j6ETa51LZ49NC/mhox4n3Cuz7lz1Pik62WreksB2bThkwHWdus +TKxx5W50dGRBsYfMenpUbb0XDsZrpimrIdCKFGYSRNTtQJg0D5WTtoM90xL+SHB7 +JOldhmKZor1/QQJTf9U54bd8fpKnZfoU3i1SP9oVcSOxGZTkbZOViJIhQwXYk92F +KKF+TWElUAusc9NTfxI4v4bLIFFZ01ysjBXBum+y8+CmvIxI3GemvQArR0WGPCe6 +ki/mEkg5AgMBAAECggEATrbIBIxwDJOD2/BoUqWkDCY3dGevF8697vFuZKIiQ7PP +TX9j4vPq0DfsmDjHvAPFkTHiTQXzlroFik3LAp+uvhCCVzImmHq0IrwvZ9xtB43f +7Pkc5P6h1l3Ybo8HJ6zRIY3TuLtLxuPSuiOMTQSGRL0zq3SQ5DKuGwkz+kVjHXUN +MR2TECFwMHKQ5VLrC+7PMpsJYyOMlDAWhRfUalxC55xOXTpaN8TxNnwQ8K2ISVY5 +212Jz/a4hn4LdwxSz3Tiu95PN072K87HLWx3EdT6vW4Ge5P/A3y+smIuNAlanMnu +plHBRtpATLiTxZt/n6npyrfQVbYjSH7KWhB8hBHtaQKBgQDh9Cq1c/KtqDtE0Ccr +/r9tZNTUwBE6VP+3OJeKdEdtsfuxjOCkS1oAjgBJiSDOiWPh1DdoDeVZjPKq6pIu +Mq12OE3Doa8znfCXGbkSzEKOb2unKZMJxzrz99kXt40W5DtrqKPNb24CNqTiY8Aa +CjtcX+3weat82VRXvph6U8ltMwKBgQDLxjiQQzNoY7qvg7CwJCjf9qq8jmLK766g +1FHXopqS+dTxDLM8eJSRrpmxGWJvNeNc1uPhsKsKgotqAMdBUQTf7rSTbt4MyoH5 +bUcRLtr+0QTK9hDWMOOvleqNXha68vATkohWYfCueNsC60qD44o8RZAS6UNy3ENq +cM1cxqe84wKBgQDKkHutWnooJtajlTxY27O/nZKT/HA1bDgniMuKaz4R4Gr1PIez +on3YW3V0d0P7BP6PWRIm7bY79vkiMtLEKdiKUGWeyZdo3eHvhDb/3DCawtau8L2K +GZsHVp2//mS1Lfz7Qh8/L/NedqCQ+L4iWiPnZ3THjjwn3CoZ05ucpvrAMwKBgB54 +nay039MUVq44Owub3KDg+dcIU62U+cAC/9oG7qZbxYPmKkc4oL7IJSNecGHA5SbU +2268RFdl/gLz6tfRjbEOuOHzCjFPdvAdbysanpTMHLNc6FefJ+zxtgk9sJh0C4Jh +vxFrw9nTKKzfEl12gQ1SOaEaUIO0fEBGbe8ZpauRAoGAMAlGV+2/K4ebvAJKOVTa +dKAzQ+TD2SJmeR1HZmKDYddNqwtZlzg3v4ZhCk4eaUmGeC1Bdh8MDuB3QQvXz4Dr +vOIP4UVaOr+uM+7TgAgVnP4/K6IeJGzUDhX93pmpWhODfdu/oojEKVcpCojmEmS1 +KCBtmIrQLqzMpnBpLNuSY+Q= +-----END PRIVATE KEY----- diff --git a/tests/integration/test_keeper_secure_client/configs/ssl_conf.xml b/tests/integration/test_keeper_secure_client/configs/ssl_conf.xml new file mode 100644 index 00000000000..7ca51acde22 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/ssl_conf.xml @@ -0,0 +1,26 @@ + + + + /etc/clickhouse-server/config.d/server.crt + /etc/clickhouse-server/config.d/server.key + /etc/clickhouse-server/config.d/dhparam.pem + none + true + true + sslv2,sslv3 + true + + + /etc/clickhouse-server/config.d/server.crt + /etc/clickhouse-server/config.d/server.key + true + true + sslv2,sslv3 + true + none + + RejectCertificateHandler + + + + diff --git a/tests/integration/test_keeper_secure_client/configs/use_secure_keeper.xml b/tests/integration/test_keeper_secure_client/configs/use_secure_keeper.xml new file mode 100644 index 00000000000..a0d19300022 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/configs/use_secure_keeper.xml @@ -0,0 +1,9 @@ + + + + node1 + 10181 + 1 + + + diff --git a/tests/integration/test_keeper_secure_client/test.py b/tests/integration/test_keeper_secure_client/test.py new file mode 100644 index 00000000000..fe03ed8dcf8 --- /dev/null +++ b/tests/integration/test_keeper_secure_client/test.py @@ -0,0 +1,26 @@ +#!/usr/bin/env python3 +import pytest +from helpers.cluster import ClickHouseCluster +import string +import os +import time + +cluster = ClickHouseCluster(__file__) +node1 = cluster.add_instance('node1', main_configs=['configs/enable_secure_keeper.xml', 'configs/ssl_conf.xml', "configs/dhparam.pem", "configs/server.crt", "configs/server.key"]) +node2 = cluster.add_instance('node2', main_configs=['configs/use_secure_keeper.xml', 'configs/ssl_conf.xml', "configs/server.crt", "configs/server.key"]) + + +@pytest.fixture(scope="module") +def started_cluster(): + try: + 
cluster.start() + + yield cluster + + finally: + cluster.shutdown() + + +def test_connection(started_cluster): + # just nothrow + node2.query("SELECT * FROM system.zookeeper WHERE path = '/'") diff --git a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.reference b/tests/integration/test_library_bridge/__init__.py similarity index 100% rename from tests/queries/0_stateless/01676_clickhouse_client_autocomplete.reference rename to tests/integration/test_library_bridge/__init__.py diff --git a/tests/integration/test_library_bridge/configs/config.d/config.xml b/tests/integration/test_library_bridge/configs/config.d/config.xml new file mode 100644 index 00000000000..9bea75fbb6f --- /dev/null +++ b/tests/integration/test_library_bridge/configs/config.d/config.xml @@ -0,0 +1,12 @@ + + /etc/clickhouse-server/config.d/dictionaries_lib + + trace + /var/log/clickhouse-server/log.log + /var/log/clickhouse-server/log.err.log + 1000M + 10 + /var/log/clickhouse-server/stderr.log + /var/log/clickhouse-server/stdout.log + + diff --git a/tests/integration/test_library_bridge/configs/dict_lib.cpp b/tests/integration/test_library_bridge/configs/dict_lib.cpp new file mode 100644 index 00000000000..be25804ed64 --- /dev/null +++ b/tests/integration/test_library_bridge/configs/dict_lib.cpp @@ -0,0 +1,298 @@ +/// c++ sample dictionary library + +#include +#include +#include +#include +#include +#include + +namespace ClickHouseLibrary +{ +using CString = const char *; +using ColumnName = CString; +using ColumnNames = ColumnName[]; + +struct CStrings +{ + CString * data = nullptr; + uint64_t size = 0; +}; + +struct VectorUInt64 +{ + const uint64_t * data = nullptr; + uint64_t size = 0; +}; + +struct ColumnsUInt64 +{ + VectorUInt64 * data = nullptr; + uint64_t size = 0; +}; + +struct Field +{ + const void * data = nullptr; + uint64_t size = 0; +}; + +struct Row +{ + const Field * data = nullptr; + uint64_t size = 0; +}; + +struct Table +{ + const Row * data = nullptr; + uint64_t size = 0; + uint64_t error_code = 0; // 0 = ok; !0 = error, with message in error_string + const char * error_string = nullptr; +}; + +enum LogLevel +{ + FATAL = 1, + CRITICAL, + ERROR, + WARNING, + NOTICE, + INFORMATION, + DEBUG, + TRACE, +}; + +void log(LogLevel level, CString msg); +} + + +#define LOG(logger, message) \ + do \ + { \ + std::stringstream builder; \ + builder << message; \ + (logger)(ClickHouseLibrary::INFORMATION, builder.str().c_str()); \ + } while (false) + + +struct LibHolder +{ + std::function log; +}; + + +struct DataHolder +{ + std::vector> dataHolder; // Actual data storage + std::vector> fieldHolder; // Pointers and sizes of data + std::unique_ptr rowHolder; + ClickHouseLibrary::Table ctable; // Result data prepared for transfer via c-style interface + LibHolder * lib = nullptr; + + size_t num_rows; + size_t num_cols; +}; + + +template +void MakeColumnsFromVector(T * ptr) +{ + if (ptr->dataHolder.empty()) + { + LOG(ptr->lib->log, "generating null values, cols: " << ptr->num_cols); + std::vector fields; + for (size_t i = 0; i < ptr->num_cols; ++i) + fields.push_back({nullptr, 0}); + ptr->fieldHolder.push_back(fields); + } + else + { + for (const auto & row : ptr->dataHolder) + { + std::vector fields; + for (const auto & field : row) + fields.push_back({&field, sizeof(field)}); + ptr->fieldHolder.push_back(fields); + } + } + + const auto rows_num = ptr->fieldHolder.size(); + ptr->rowHolder = std::make_unique(rows_num); + size_t i = 0; + for (auto & row : ptr->fieldHolder) + { + ptr->rowHolder[i].size = 
row.size(); + ptr->rowHolder[i].data = row.data(); + ++i; + } + ptr->ctable.size = rows_num; + ptr->ctable.data = ptr->rowHolder.get(); +} + + +extern "C" +{ + +void * ClickHouseDictionary_v3_loadIds(void * data_ptr, + ClickHouseLibrary::CStrings * settings, + ClickHouseLibrary::CStrings * columns, + const struct ClickHouseLibrary::VectorUInt64 * ids) +{ + auto ptr = static_cast(data_ptr); + + if (ids) + LOG(ptr->lib->log, "loadIds lib call ptr=" << data_ptr << " => " << ptr << " size=" << ids->size); + + if (!ptr) + return nullptr; + + if (settings) + { + LOG(ptr->lib->log, "settings passed: " << settings->size); + for (size_t i = 0; i < settings->size; ++i) + { + LOG(ptr->lib->log, "setting " << i << " :" << settings->data[i]); + } + } + + if (columns) + { + LOG(ptr->lib->log, "columns passed:" << columns->size); + for (size_t i = 0; i < columns->size; ++i) + { + LOG(ptr->lib->log, "column " << i << " :" << columns->data[i]); + } + } + + if (ids) + { + LOG(ptr->lib->log, "ids passed: " << ids->size); + for (size_t i = 0; i < ids->size; ++i) + { + LOG(ptr->lib->log, "id " << i << " :" << ids->data[i] << " generating."); + ptr->dataHolder.emplace_back(std::vector{ids->data[i], ids->data[i] + 100, ids->data[i] + 200, ids->data[i] + 300}); + } + } + + MakeColumnsFromVector(ptr); + return static_cast(&ptr->ctable); +} + + +void * ClickHouseDictionary_v3_loadAll(void * data_ptr, ClickHouseLibrary::CStrings * settings, ClickHouseLibrary::CStrings * /*columns*/) +{ + auto ptr = static_cast(data_ptr); + + LOG(ptr->lib->log, "loadAll lib call ptr=" << data_ptr << " => " << ptr); + + if (!ptr) + return nullptr; + + size_t num_rows = 0, num_cols = 4; + std::string test_type; + std::vector settings_values; + if (settings) + { + LOG(ptr->lib->log, "settings size: " << settings->size); + + for (size_t i = 0; i < settings->size; ++i) + { + std::string setting_name = settings->data[i]; + std::string setting_value = settings->data[++i]; + LOG(ptr->lib->log, "setting " + std::to_string(i) + " name " + setting_name + " value " + setting_value); + + if (setting_name == "num_rows") + num_rows = std::atoi(setting_value.data()); + else if (setting_name == "num_cols") + num_cols = std::atoi(setting_value.data()); + else if (setting_name == "test_type") + test_type = setting_value; + else + { + LOG(ptr->lib->log, "Adding setting " + setting_name); + settings_values.push_back(setting_value); + } + } + } + + if (test_type == "test_simple") + { + for (size_t i = 0; i < 10; ++i) + { + LOG(ptr->lib->log, "id " << i << " :" << " generating."); + ptr->dataHolder.emplace_back(std::vector{i, i + 10, i + 20, i + 30}); + } + } + else if (test_type == "test_many_rows" && num_rows) + { + for (size_t i = 0; i < num_rows; ++i) + { + ptr->dataHolder.emplace_back(std::vector{i, i, i, i}); + } + } + + ptr->num_cols = num_cols; + ptr->num_rows = num_rows; + + MakeColumnsFromVector(ptr); + return static_cast(&ptr->ctable); +} + + +void * ClickHouseDictionary_v3_loadKeys(void * data_ptr, ClickHouseLibrary::CStrings * settings, ClickHouseLibrary::Table * requested_keys) +{ + auto ptr = static_cast(data_ptr); + LOG(ptr->lib->log, "loadKeys lib call ptr=" << data_ptr << " => " << ptr); + if (settings) + { + LOG(ptr->lib->log, "settings passed: " << settings->size); + for (size_t i = 0; i < settings->size; ++i) + { + LOG(ptr->lib->log, "setting " << i << " :" << settings->data[i]); + } + } + if (requested_keys) + { + LOG(ptr->lib->log, "requested_keys columns passed: " << requested_keys->size); + for (size_t i = 0; i < 
requested_keys->size; ++i) + { + LOG(ptr->lib->log, "requested_keys at column " << i << " passed: " << requested_keys->data[i].size); + ptr->dataHolder.emplace_back(std::vector{i, i + 100, i + 200, i + 300}); + } + } + + MakeColumnsFromVector(ptr); + return static_cast(&ptr->ctable); +} + +void * ClickHouseDictionary_v3_libNew( + ClickHouseLibrary::CStrings * /*settings*/, void (*logFunc)(ClickHouseLibrary::LogLevel, ClickHouseLibrary::CString)) +{ + auto lib_ptr = new LibHolder; + lib_ptr->log = logFunc; + return lib_ptr; +} + +void ClickHouseDictionary_v3_libDelete(void * lib_ptr) +{ + auto ptr = static_cast(lib_ptr); + delete ptr; + return; +} + +void * ClickHouseDictionary_v3_dataNew(void * lib_ptr) +{ + auto data_ptr = new DataHolder; + data_ptr->lib = static_castlib)>(lib_ptr); + return data_ptr; +} + +void ClickHouseDictionary_v3_dataDelete(void * /*lib_ptr*/, void * data_ptr) +{ + auto ptr = static_cast(data_ptr); + delete ptr; + return; +} + +} diff --git a/tests/integration/test_library_bridge/configs/dictionaries/dict1.xml b/tests/integration/test_library_bridge/configs/dictionaries/dict1.xml new file mode 100644 index 00000000000..9be21aea1e3 --- /dev/null +++ b/tests/integration/test_library_bridge/configs/dictionaries/dict1.xml @@ -0,0 +1,86 @@ + + + + dict1 + + + /etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so + + test_simple + nice key + interesting, nice value + //home/interesting-path/to-/interesting_data
+ 11 + user-u -user +
+
+ + + + + + 1 + 1 + + + + key + UInt64 + + + value1 + + UInt64 + + + value2 + + UInt64 + + + value3 + + UInt64 + + +
+ + dict2 + + + /etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so + + test_nulls + + + + + + + + 1 + 1 + + + + key + UInt64 + + + value1 + 12 + UInt64 + + + value2 + 12 + UInt64 + + + value3 + 12 + UInt64 + + + +
diff --git a/tests/integration/test_library_bridge/configs/enable_dict.xml b/tests/integration/test_library_bridge/configs/enable_dict.xml new file mode 100644 index 00000000000..264f1f667b1 --- /dev/null +++ b/tests/integration/test_library_bridge/configs/enable_dict.xml @@ -0,0 +1,4 @@ + + + /etc/clickhouse-server/config.d/dict*.xml + diff --git a/tests/integration/test_library_bridge/configs/log_conf.xml b/tests/integration/test_library_bridge/configs/log_conf.xml new file mode 100644 index 00000000000..eed7a435b68 --- /dev/null +++ b/tests/integration/test_library_bridge/configs/log_conf.xml @@ -0,0 +1,17 @@ + + + + trace + /var/log/clickhouse-server/log.log + /var/log/clickhouse-server/log.err.log + 1000M + 10 + /var/log/clickhouse-server/stderr.log + /var/log/clickhouse-server/stdout.log + /var/log/clickhouse-server/clickhouse-library-bridge.log + /var/log/clickhouse-server/clickhouse-library-bridge.err.log + /var/log/clickhouse-server/clickhouse-library-bridge.stdout + /var/log/clickhouse-server/clickhouse-library-bridge.stderr + trace + + diff --git a/tests/integration/test_library_bridge/test.py b/tests/integration/test_library_bridge/test.py new file mode 100644 index 00000000000..f0aeb85a52b --- /dev/null +++ b/tests/integration/test_library_bridge/test.py @@ -0,0 +1,154 @@ +import os +import os.path as p +import pytest +import time + +from helpers.cluster import ClickHouseCluster, run_and_check + +cluster = ClickHouseCluster(__file__) + +instance = cluster.add_instance('instance', + main_configs=[ + 'configs/enable_dict.xml', + 'configs/config.d/config.xml', + 'configs/dictionaries/dict1.xml', + 'configs/log_conf.xml']) + +@pytest.fixture(scope="module") +def ch_cluster(): + try: + cluster.start() + instance.query('CREATE DATABASE test') + container_lib_path = '/etc/clickhouse-server/config.d/dictionarites_lib/dict_lib.cpp' + + instance.copy_file_to_container(os.path.join(os.path.dirname(os.path.realpath(__file__)), "configs/dict_lib.cpp"), + "/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.cpp") + + instance.query("SYSTEM RELOAD CONFIG") + + instance.exec_in_container( + ['bash', '-c', + '/usr/bin/g++ -shared -o /etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so -fPIC /etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.cpp'], + user='root') + + yield cluster + + finally: + cluster.shutdown() + + +@pytest.fixture(autouse=True) +def setup_teardown(): + yield # run test + + +def test_load_all(ch_cluster): + instance.query(''' + CREATE DICTIONARY lib_dict (key UInt64, value1 UInt64, value2 UInt64, value3 UInt64) + PRIMARY KEY key + SOURCE(library( + PATH '/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so' + SETTINGS (test_type test_simple))) + LAYOUT(HASHED()) + LIFETIME (MIN 0 MAX 10) + ''') + + result = instance.query('SELECT * FROM lib_dict ORDER BY key') + expected = ( +"0\t10\t20\t30\n" + +"1\t11\t21\t31\n" + +"2\t12\t22\t32\n" + +"3\t13\t23\t33\n" + +"4\t14\t24\t34\n" + +"5\t15\t25\t35\n" + +"6\t16\t26\t36\n" + +"7\t17\t27\t37\n" + +"8\t18\t28\t38\n" + +"9\t19\t29\t39\n" +) + instance.query('SYSTEM RELOAD DICTIONARY dict1') + instance.query('DROP DICTIONARY lib_dict') + assert(result == expected) + + instance.query(""" + CREATE TABLE IF NOT EXISTS `dict1_table` ( + key UInt64, value1 UInt64, value2 UInt64, value3 UInt64 + ) ENGINE = Dictionary(dict1) + """) + + result = instance.query('SELECT * FROM dict1_table ORDER BY key') + assert(result == expected) + + +def test_load_ids(ch_cluster): + instance.query(''' + CREATE DICTIONARY 
lib_dict_c (key UInt64, value1 UInt64, value2 UInt64, value3 UInt64) + PRIMARY KEY key SOURCE(library(PATH '/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so')) + LAYOUT(CACHE( + SIZE_IN_CELLS 10000000 + BLOCK_SIZE 4096 + FILE_SIZE 16777216 + READ_BUFFER_SIZE 1048576 + MAX_STORED_KEYS 1048576)) + LIFETIME(2) ; + ''') + + result = instance.query('''select dictGet(lib_dict_c, 'value1', toUInt64(0));''') + assert(result.strip() == '100') + result = instance.query('''select dictGet(lib_dict_c, 'value1', toUInt64(1));''') + assert(result.strip() == '101') + instance.query('DROP DICTIONARY lib_dict_c') + + +def test_load_keys(ch_cluster): + instance.query(''' + CREATE DICTIONARY lib_dict_ckc (key UInt64, value1 UInt64, value2 UInt64, value3 UInt64) + PRIMARY KEY key + SOURCE(library(PATH '/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so')) + LAYOUT(COMPLEX_KEY_CACHE( SIZE_IN_CELLS 10000000)) + LIFETIME(2); + ''') + + result = instance.query('''select dictGet(lib_dict_ckc, 'value1', tuple(toUInt64(0)));''') + assert(result.strip() == '100') + result = instance.query('''select dictGet(lib_dict_ckc, 'value2', tuple(toUInt64(0)));''') + assert(result.strip() == '200') + instance.query('DROP DICTIONARY lib_dict_ckc') + + +def test_load_all_many_rows(ch_cluster): + num_rows = [1000, 10000, 100000, 1000000] + for num in num_rows: + instance.query(''' + CREATE DICTIONARY lib_dict (key UInt64, value1 UInt64, value2 UInt64, value3 UInt64) + PRIMARY KEY key + SOURCE(library( + PATH '/etc/clickhouse-server/config.d/dictionaries_lib/dict_lib.so' + SETTINGS (num_rows {} test_type test_many_rows))) + LAYOUT(HASHED()) + LIFETIME (MIN 0 MAX 10) + '''.format(num)) + + result = instance.query('SELECT * FROM lib_dict ORDER BY key') + expected = instance.query('SELECT number, number, number, number FROM numbers({})'.format(num)) + instance.query('DROP DICTIONARY lib_dict') + assert(result == expected) + + +def test_null_values(ch_cluster): + instance.query('SYSTEM RELOAD DICTIONARY dict2') + instance.query(""" + CREATE TABLE IF NOT EXISTS `dict2_table` ( + key UInt64, value1 UInt64, value2 UInt64, value3 UInt64 + ) ENGINE = Dictionary(dict2) + """) + + result = instance.query('SELECT * FROM dict2_table ORDER BY key') + expected = "0\t12\t12\t12\n" + assert(result == expected) + + +if __name__ == '__main__': + cluster.start() + input("Cluster created, press any key to destroy...") + cluster.shutdown() diff --git a/tests/integration/test_limited_replicated_fetches/configs/custom_settings.xml b/tests/integration/test_limited_replicated_fetches/configs/custom_settings.xml new file mode 100644 index 00000000000..b5e6bb80891 --- /dev/null +++ b/tests/integration/test_limited_replicated_fetches/configs/custom_settings.xml @@ -0,0 +1,7 @@ + + + + 3 + + + diff --git a/tests/integration/test_limited_replicated_fetches/test.py b/tests/integration/test_limited_replicated_fetches/test.py index 9b9b8befd67..7b0c7aed15d 100644 --- a/tests/integration/test_limited_replicated_fetches/test.py +++ b/tests/integration/test_limited_replicated_fetches/test.py @@ -6,12 +6,14 @@ from helpers.cluster import ClickHouseCluster from helpers.network import PartitionManager import random import string +import os cluster = ClickHouseCluster(__file__) -node1 = cluster.add_instance('node1', with_zookeeper=True) -node2 = cluster.add_instance('node2', with_zookeeper=True) +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) +node1 = cluster.add_instance('node1', user_configs=['configs/custom_settings.xml'], 
with_zookeeper=True) +node2 = cluster.add_instance('node2', user_configs=['configs/custom_settings.xml'], with_zookeeper=True) -DEFAULT_MAX_THREADS_FOR_FETCH = 3 +MAX_THREADS_FOR_FETCH = 3 @pytest.fixture(scope="module") def started_cluster(): @@ -64,11 +66,11 @@ def test_limited_fetches(started_cluster): time.sleep(0.1) for concurrently_fetching_parts in fetches_result: - if len(concurrently_fetching_parts) > DEFAULT_MAX_THREADS_FOR_FETCH: - assert False, "Found more than {} concurrently fetching parts: {}".format(DEFAULT_MAX_THREADS_FOR_FETCH, ', '.join(concurrently_fetching_parts)) + if len(concurrently_fetching_parts) > MAX_THREADS_FOR_FETCH: + assert False, "Found more than {} concurrently fetching parts: {}".format(MAX_THREADS_FOR_FETCH, ', '.join(concurrently_fetching_parts)) assert max([len(parts) for parts in fetches_result]) == 3, "Strange, but we don't utilize max concurrent threads for fetches" assert(max(background_fetches_metric)) == 3, "Just checking metric consistent with table" node1.query("DROP TABLE IF EXISTS t SYNC") - node2.query("DROP TABLE IF EXISTS t SYNC") \ No newline at end of file + node2.query("DROP TABLE IF EXISTS t SYNC") diff --git a/tests/integration/test_materialize_mysql_database/configs/users_disable_bytes_settings.xml b/tests/integration/test_materialize_mysql_database/configs/users_disable_bytes_settings.xml new file mode 100644 index 00000000000..4516cb80c17 --- /dev/null +++ b/tests/integration/test_materialize_mysql_database/configs/users_disable_bytes_settings.xml @@ -0,0 +1,21 @@ + + + + + 1 + Atomic + 1 + 0 + + + + + + + + ::/0 + + default + + + diff --git a/tests/integration/test_materialize_mysql_database/configs/users_disable_rows_settings.xml b/tests/integration/test_materialize_mysql_database/configs/users_disable_rows_settings.xml new file mode 100644 index 00000000000..dea20eb9e12 --- /dev/null +++ b/tests/integration/test_materialize_mysql_database/configs/users_disable_rows_settings.xml @@ -0,0 +1,21 @@ + + + + + 1 + Atomic + 0 + 1 + + + + + + + + ::/0 + + default + + + diff --git a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py index 1675b72e0c4..813a654add3 100644 --- a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py +++ b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py @@ -117,6 +117,45 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_nam mysql_node.query("DROP DATABASE test_database") +def materialize_mysql_database_with_views(clickhouse_node, mysql_node, service_name): + mysql_node.query("DROP DATABASE IF EXISTS test_database") + clickhouse_node.query("DROP DATABASE IF EXISTS test_database") + mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'") + # existed before the mapping was created + + mysql_node.query("CREATE TABLE test_database.test_table_1 (" + "`key` INT NOT NULL PRIMARY KEY, " + "unsigned_tiny_int TINYINT UNSIGNED, tiny_int TINYINT, " + "unsigned_small_int SMALLINT UNSIGNED, small_int SMALLINT, " + "unsigned_medium_int MEDIUMINT UNSIGNED, medium_int MEDIUMINT, " + "unsigned_int INT UNSIGNED, _int INT, " + "unsigned_integer INTEGER UNSIGNED, _integer INTEGER, " + "unsigned_bigint BIGINT UNSIGNED, _bigint BIGINT, " + "/* Need ClickHouse support read mysql decimal unsigned_decimal DECIMAL(19, 10) UNSIGNED, _decimal DECIMAL(19, 10), */" + "unsigned_float FLOAT UNSIGNED, _float FLOAT, " + "unsigned_double DOUBLE 
UNSIGNED, _double DOUBLE, " + "_varchar VARCHAR(10), _char CHAR(10), binary_col BINARY(8), " + "/* Need ClickHouse support Enum('a', 'b', 'v') _enum ENUM('a', 'b', 'c'), */" + "_date Date, _datetime DateTime, _timestamp TIMESTAMP, _bool BOOLEAN) ENGINE = InnoDB;") + + mysql_node.query("CREATE VIEW test_database.test_table_1_view AS SELECT SUM(tiny_int) FROM test_database.test_table_1 GROUP BY _date;") + + # it already has some data + mysql_node.query(""" + INSERT INTO test_database.test_table_1 VALUES(1, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char', 'binary', + '2020-01-01', '2020-01-01 00:00:00', '2020-01-01 00:00:00', true); + """) + clickhouse_node.query( + "CREATE DATABASE test_database ENGINE = MaterializeMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format( + service_name)) + + assert "test_database" in clickhouse_node.query("SHOW DATABASES") + check_query(clickhouse_node, "SHOW TABLES FROM test_database FORMAT TSV", "test_table_1\n") + + clickhouse_node.query("DROP DATABASE test_database") + mysql_node.query("DROP DATABASE test_database") + + def materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, mysql_node, service_name): mysql_node.query("DROP DATABASE IF EXISTS test_database") clickhouse_node.query("DROP DATABASE IF EXISTS test_database") @@ -803,3 +842,31 @@ def system_tables_test(clickhouse_node, mysql_node, service_name): mysql_node.query("CREATE TABLE system_tables_test.test (id int NOT NULL PRIMARY KEY) ENGINE=InnoDB") clickhouse_node.query("CREATE DATABASE system_tables_test ENGINE = MaterializeMySQL('{}:3306', 'system_tables_test', 'root', 'clickhouse')".format(service_name)) check_query(clickhouse_node, "SELECT partition_key, sorting_key, primary_key FROM system.tables WHERE database = 'system_tables_test' AND name = 'test'", "intDiv(id, 4294967)\tid\tid\n") + +def move_to_prewhere_and_column_filtering(clickhouse_node, mysql_node, service_name): + clickhouse_node.query("DROP DATABASE IF EXISTS cond_on_key_col") + mysql_node.query("DROP DATABASE IF EXISTS cond_on_key_col") + mysql_node.query("CREATE DATABASE cond_on_key_col") + clickhouse_node.query("CREATE DATABASE cond_on_key_col ENGINE = MaterializeMySQL('{}:3306', 'cond_on_key_col', 'root', 'clickhouse')".format(service_name)) + mysql_node.query("create table cond_on_key_col.products (id int primary key, product_id int not null, catalog_id int not null, brand_id int not null, name text)") + mysql_node.query("insert into cond_on_key_col.products (id, name, catalog_id, brand_id, product_id) values (915, 'ertyui', 5287, 15837, 0), (990, 'wer', 1053, 24390, 1), (781, 'qwerty', 1041, 1176, 2);") + check_query(clickhouse_node, "SELECT DISTINCT P.id, P.name, P.catalog_id FROM cond_on_key_col.products P WHERE P.name ILIKE '%e%' and P.catalog_id=5287", '915\tertyui\t5287\n') + clickhouse_node.query("DROP DATABASE cond_on_key_col") + mysql_node.query("DROP DATABASE cond_on_key_col") + +def mysql_settings_test(clickhouse_node, mysql_node, service_name): + mysql_node.query("DROP DATABASE IF EXISTS test_database") + clickhouse_node.query("DROP DATABASE IF EXISTS test_database") + mysql_node.query("CREATE DATABASE test_database") + mysql_node.query("CREATE TABLE test_database.a (id INT(11) NOT NULL PRIMARY KEY, value VARCHAR(255))") + mysql_node.query("INSERT INTO test_database.a VALUES(1, 'foo')") + mysql_node.query("INSERT INTO test_database.a VALUES(2, 'bar')") + + clickhouse_node.query("CREATE DATABASE test_database ENGINE = MaterializeMySQL('{}:3306', 
'test_database', 'root', 'clickhouse')".format(service_name)) + check_query(clickhouse_node, "SELECT COUNT() FROM test_database.a FORMAT TSV", "2\n") + + assert clickhouse_node.query("SELECT COUNT(DISTINCT blockNumber()) FROM test_database.a FORMAT TSV") == "2\n" + + clickhouse_node.query("DROP DATABASE test_database") + mysql_node.query("DROP DATABASE test_database") + diff --git a/tests/integration/test_materialize_mysql_database/test.py b/tests/integration/test_materialize_mysql_database/test.py index 730305a6f16..6c777c7e6f8 100644 --- a/tests/integration/test_materialize_mysql_database/test.py +++ b/tests/integration/test_materialize_mysql_database/test.py @@ -16,7 +16,8 @@ cluster = ClickHouseCluster(__file__) node_db_ordinary = cluster.add_instance('node1', user_configs=["configs/users.xml"], with_mysql=False, stay_alive=True) node_db_atomic = cluster.add_instance('node2', user_configs=["configs/users_db_atomic.xml"], with_mysql=False, stay_alive=True) - +node_disable_bytes_settings = cluster.add_instance('node3', user_configs=["configs/users_disable_bytes_settings.xml"], with_mysql=False, stay_alive=True) +node_disable_rows_settings = cluster.add_instance('node4', user_configs=["configs/users_disable_rows_settings.xml"], with_mysql=False, stay_alive=True) @pytest.fixture(scope="module") def started_cluster(): @@ -150,13 +151,17 @@ def started_mysql_8_0(): @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_materialize_database_dml_with_mysql_5_7(started_cluster, started_mysql_5_7, clickhouse_node): materialize_with_ddl.dml_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.materialize_mysql_database_with_views(clickhouse_node, started_mysql_5_7, "mysql1") materialize_with_ddl.materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.move_to_prewhere_and_column_filtering(clickhouse_node, started_mysql_5_7, "mysql1") @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) def test_materialize_database_dml_with_mysql_8_0(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.dml_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0") + materialize_with_ddl.materialize_mysql_database_with_views(clickhouse_node, started_mysql_8_0, "mysql8_0") materialize_with_ddl.materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, started_mysql_8_0, "mysql8_0") + materialize_with_ddl.move_to_prewhere_and_column_filtering(clickhouse_node, started_mysql_8_0, "mysql8_0") @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_atomic]) @@ -287,5 +292,12 @@ def test_multi_table_update(started_cluster, started_mysql_8_0, started_mysql_5_ @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_ordinary]) -def test_system_tables_table(started_cluster, started_mysql_8_0, clickhouse_node): +def test_system_tables_table(started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node): + materialize_with_ddl.system_tables_test(clickhouse_node, started_mysql_5_7, "mysql1") materialize_with_ddl.system_tables_test(clickhouse_node, started_mysql_8_0, "mysql8_0") + + +@pytest.mark.parametrize(('clickhouse_node'), [node_disable_bytes_settings, node_disable_rows_settings]) +def test_mysql_settings(started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node): + materialize_with_ddl.mysql_settings_test(clickhouse_node, 
started_mysql_5_7, "mysql1") + materialize_with_ddl.mysql_settings_test(clickhouse_node, started_mysql_8_0, "mysql8_0") diff --git a/tests/integration/test_merge_tree_s3/test.py b/tests/integration/test_merge_tree_s3/test.py index 3ab8d5d006b..4b685542170 100644 --- a/tests/integration/test_merge_tree_s3/test.py +++ b/tests/integration/test_merge_tree_s3/test.py @@ -68,6 +68,16 @@ def create_table(cluster, table_name, additional_settings=None): node.query(create_table_statement) +def wait_for_delete_s3_objects(cluster, expected, timeout=30): + minio = cluster.minio_client + while timeout > 0: + if len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == expected: + return + timeout -= 1 + time.sleep(1) + assert(len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == expected) + + @pytest.fixture(autouse=True) def drop_table(cluster): yield @@ -75,8 +85,9 @@ def drop_table(cluster): minio = cluster.minio_client node.query("DROP TABLE IF EXISTS s3_test NO DELAY") + try: - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == 0 + wait_for_delete_s3_objects(cluster, 0) finally: # Remove extra objects to prevent tests cascade failing for obj in list(minio.list_objects(cluster.minio_bucket, 'data/')): @@ -151,7 +162,7 @@ def test_insert_same_partition_and_merge(cluster, merge_vertical): assert node.query("SELECT sum(id) FROM s3_test FORMAT Values") == "(0)" assert node.query("SELECT count(distinct(id)) FROM s3_test FORMAT Values") == "(8192)" - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD) def test_alter_table_columns(cluster): @@ -167,32 +178,20 @@ def test_alter_table_columns(cluster): # To ensure parts have merged node.query("OPTIMIZE TABLE s3_test") - # Wait for merges, mutations and old parts deletion - time.sleep(3) - assert node.query("SELECT sum(col1) FROM s3_test FORMAT Values") == "(8192)" assert node.query("SELECT sum(col1) FROM s3_test WHERE id > 0 FORMAT Values") == "(4096)" - assert len(list(minio.list_objects(cluster.minio_bucket, - 'data/'))) == FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD_PER_COLUMN + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD_PER_COLUMN) node.query("ALTER TABLE s3_test MODIFY COLUMN col1 String", settings={"mutations_sync": 2}) - # Wait for old parts deletion - time.sleep(3) - assert node.query("SELECT distinct(col1) FROM s3_test FORMAT Values") == "('1')" # and file with mutation - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == ( - FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD_PER_COLUMN + 1) + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + FILES_OVERHEAD_PER_COLUMN + 1) node.query("ALTER TABLE s3_test DROP COLUMN col1", settings={"mutations_sync": 2}) - # Wait for old parts deletion - time.sleep(3) - # and 2 files with mutations - assert len( - list(minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + 2 + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE + 2) def test_attach_detach_partition(cluster): @@ -320,9 +319,7 @@ def test_move_replace_partition_to_another_table(cluster): assert node.query("SELECT count(*) FROM s3_clone FORMAT Values") == "(8192)" # Wait for outdated partitions deletion. 
- time.sleep(3) - assert len(list( - minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD * 2 + FILES_OVERHEAD_PER_PART_WIDE * 4 + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD * 2 + FILES_OVERHEAD_PER_PART_WIDE * 4) node.query("DROP TABLE s3_clone NO DELAY") assert node.query("SELECT sum(id) FROM s3_test FORMAT Values") == "(0)" @@ -338,7 +335,8 @@ def test_move_replace_partition_to_another_table(cluster): node.query("DROP TABLE s3_test NO DELAY") # Backup data should remain in S3. - assert len(list(minio.list_objects(cluster.minio_bucket, 'data/'))) == FILES_OVERHEAD_PER_PART_WIDE * 4 + + wait_for_delete_s3_objects(cluster, FILES_OVERHEAD_PER_PART_WIDE * 4) for obj in list(minio.list_objects(cluster.minio_bucket, 'data/')): minio.remove_object(cluster.minio_bucket, obj.object_name) diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.d/clusters.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.d/clusters.xml new file mode 100644 index 00000000000..4808ae4bc4a --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.d/clusters.xml @@ -0,0 +1,23 @@ + + + + + + true + + node + 9000 + + + + + + true + + node_another_bucket + 9000 + + + + + diff --git a/tests/integration/test_merge_tree_s3_restore/test.py b/tests/integration/test_merge_tree_s3_restore/test.py index c0ebce68480..0781f0b9ce9 100644 --- a/tests/integration/test_merge_tree_s3_restore/test.py +++ b/tests/integration/test_merge_tree_s3_restore/test.py @@ -7,20 +7,21 @@ import time import pytest from helpers.cluster import ClickHouseCluster + logging.getLogger().setLevel(logging.INFO) logging.getLogger().addHandler(logging.StreamHandler()) - SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) -CONFIG_PATH = os.path.join(SCRIPT_DIR, './_instances/node_not_restorable/configs/config.d/storage_conf_not_restorable.xml') +NOT_RESTORABLE_CONFIG_PATH = os.path.join(SCRIPT_DIR, './_instances/node_not_restorable/configs/config.d/storage_conf_not_restorable.xml') +COMMON_CONFIGS = ["configs/config.d/bg_processing_pool_conf.xml", "configs/config.d/log_conf.xml", "configs/config.d/clusters.xml"] def replace_config(old, new): - config = open(CONFIG_PATH, 'r') + config = open(NOT_RESTORABLE_CONFIG_PATH, 'r') config_lines = config.readlines() config.close() config_lines = [line.replace(old, new) for line in config_lines] - config = open(CONFIG_PATH, 'w') + config = open(NOT_RESTORABLE_CONFIG_PATH, 'w') config.writelines(config_lines) config.close() @@ -29,22 +30,22 @@ def replace_config(old, new): def cluster(): try: cluster = ClickHouseCluster(__file__) - cluster.add_instance("node", main_configs=[ - "configs/config.d/storage_conf.xml", - "configs/config.d/bg_processing_pool_conf.xml", - "configs/config.d/log_conf.xml"], user_configs=[], with_minio=True, stay_alive=True) - cluster.add_instance("node_another_bucket", main_configs=[ - "configs/config.d/storage_conf_another_bucket.xml", - "configs/config.d/bg_processing_pool_conf.xml", - "configs/config.d/log_conf.xml"], user_configs=[], stay_alive=True) - cluster.add_instance("node_another_bucket_path", main_configs=[ - "configs/config.d/storage_conf_another_bucket_path.xml", - "configs/config.d/bg_processing_pool_conf.xml", - "configs/config.d/log_conf.xml"], user_configs=[], stay_alive=True) - cluster.add_instance("node_not_restorable", main_configs=[ - "configs/config.d/storage_conf_not_restorable.xml", - "configs/config.d/bg_processing_pool_conf.xml", - "configs/config.d/log_conf.xml"], user_configs=[], 
stay_alive=True) + + cluster.add_instance("node", + main_configs=COMMON_CONFIGS + ["configs/config.d/storage_conf.xml"], + macros={"cluster": "node", "replica": "0"}, + with_minio=True, with_zookeeper=True, stay_alive=True) + cluster.add_instance("node_another_bucket", + main_configs=COMMON_CONFIGS + ["configs/config.d/storage_conf_another_bucket.xml"], + macros={"cluster": "node_another_bucket", "replica": "0"}, + with_zookeeper=True, stay_alive=True) + cluster.add_instance("node_another_bucket_path", + main_configs=COMMON_CONFIGS + ["configs/config.d/storage_conf_another_bucket_path.xml"], + stay_alive=True) + cluster.add_instance("node_not_restorable", + main_configs=COMMON_CONFIGS + ["configs/config.d/storage_conf_not_restorable.xml"], + stay_alive=True) + logging.info("Starting cluster...") cluster.start() logging.info("Cluster started") @@ -65,28 +66,26 @@ def generate_values(date_str, count, sign=1): return ",".join(["('{}',{},'{}',{})".format(x, y, z, 0) for x, y, z in data]) -def create_table(node, table_name, additional_settings=None): +def create_table(node, table_name, replicated=False): node.query("CREATE DATABASE IF NOT EXISTS s3 ENGINE = Ordinary") create_table_statement = """ - CREATE TABLE s3.{} ( + CREATE TABLE s3.{table_name} {on_cluster} ( dt Date, id Int64, data String, counter Int64, INDEX min_max (id) TYPE minmax GRANULARITY 3 - ) ENGINE=MergeTree() + ) ENGINE={engine} PARTITION BY dt ORDER BY (dt, id) SETTINGS storage_policy='s3', old_parts_lifetime=600, index_granularity=512 - """.format(table_name) - - if additional_settings: - create_table_statement += "," - create_table_statement += additional_settings + """.format(table_name=table_name, + on_cluster="ON CLUSTER '{}'".format(node.name) if replicated else "", + engine="ReplicatedMergeTree('/clickhouse/tables/{cluster}/test', '{replica}')" if replicated else "MergeTree()") node.query(create_table_statement) @@ -107,17 +106,23 @@ def drop_shadow_information(node): node.exec_in_container(['bash', '-c', 'rm -rf /var/lib/clickhouse/shadow/*'], user='root') -def create_restore_file(node, revision=0, bucket=None, path=None): - add_restore_option = 'echo -en "{}\n" >> /var/lib/clickhouse/disks/s3/restore' - node.exec_in_container(['bash', '-c', add_restore_option.format(revision)], user='root') +def create_restore_file(node, revision=None, bucket=None, path=None, detached=None): + node.exec_in_container(['bash', '-c', 'touch /var/lib/clickhouse/disks/s3/restore'], user='root') + + add_restore_option = 'echo -en "{}={}\n" >> /var/lib/clickhouse/disks/s3/restore' + if revision: + node.exec_in_container(['bash', '-c', add_restore_option.format('revision', revision)], user='root') if bucket: - node.exec_in_container(['bash', '-c', add_restore_option.format(bucket)], user='root') + node.exec_in_container(['bash', '-c', add_restore_option.format('source_bucket', bucket)], user='root') if path: - node.exec_in_container(['bash', '-c', add_restore_option.format(path)], user='root') + node.exec_in_container(['bash', '-c', add_restore_option.format('source_path', path)], user='root') + if detached: + node.exec_in_container(['bash', '-c', add_restore_option.format('detached', 'true')], user='root') def get_revision_counter(node, backup_number): - return int(node.exec_in_container(['bash', '-c', 'cat /var/lib/clickhouse/disks/s3/shadow/{}/revision.txt'.format(backup_number)], user='root')) + return int(node.exec_in_container( + ['bash', '-c', 'cat /var/lib/clickhouse/disks/s3/shadow/{}/revision.txt'.format(backup_number)], 
user='root')) @pytest.fixture(autouse=True) @@ -128,7 +133,8 @@ def drop_table(cluster): for node_name in node_names: node = cluster.instances[node_name] - node.query("DROP TABLE IF EXISTS s3.test NO DELAY") + node.query("DROP TABLE IF EXISTS s3.test SYNC") + node.query("DROP DATABASE IF EXISTS s3 SYNC") drop_s3_metadata(node) drop_shadow_information(node) @@ -138,32 +144,23 @@ def drop_table(cluster): purge_s3(cluster, bucket) -def test_full_restore(cluster): +@pytest.mark.parametrize( + "replicated", [False, True] +) +def test_full_restore(cluster, replicated): node = cluster.instances["node"] - create_table(node, "test") + create_table(node, "test", replicated) node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096))) node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-04', 4096, -1))) node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096))) node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096, -1))) - # To ensure parts have merged - node.query("OPTIMIZE TABLE s3.test") - - assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) - assert node.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) - node.stop_clickhouse() drop_s3_metadata(node) - node.start_clickhouse() - - # All data is removed. - assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(0) - - node.stop_clickhouse() create_restore_file(node) - node.start_clickhouse(10) + node.start_clickhouse() assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -191,7 +188,7 @@ def test_restore_another_bucket_path(cluster): node_another_bucket.stop_clickhouse() create_restore_file(node_another_bucket, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.start_clickhouse() assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -202,7 +199,7 @@ def test_restore_another_bucket_path(cluster): node_another_bucket_path.stop_clickhouse() create_restore_file(node_another_bucket_path, bucket="root2", path="data") - node_another_bucket_path.start_clickhouse(10) + node_another_bucket_path.start_clickhouse() assert node_another_bucket_path.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node_another_bucket_path.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -244,7 +241,7 @@ def test_restore_different_revisions(cluster): drop_s3_metadata(node_another_bucket) purge_s3(cluster, cluster.minio_bucket_2) create_restore_file(node_another_bucket, revision=revision1, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.start_clickhouse() assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -255,7 +252,7 @@ def test_restore_different_revisions(cluster): drop_s3_metadata(node_another_bucket) purge_s3(cluster, cluster.minio_bucket_2) create_restore_file(node_another_bucket, revision=revision2, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.start_clickhouse() assert node_another_bucket.query("SELECT 
count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -266,7 +263,7 @@ def test_restore_different_revisions(cluster): drop_s3_metadata(node_another_bucket) purge_s3(cluster, cluster.minio_bucket_2) create_restore_file(node_another_bucket, revision=revision3, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.start_clickhouse() assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -298,7 +295,7 @@ def test_restore_mutations(cluster): drop_s3_metadata(node_another_bucket) purge_s3(cluster, cluster.minio_bucket_2) create_restore_file(node_another_bucket, revision=revision_before_mutation, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.start_clickhouse() assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -309,7 +306,7 @@ def test_restore_mutations(cluster): drop_s3_metadata(node_another_bucket) purge_s3(cluster, cluster.minio_bucket_2) create_restore_file(node_another_bucket, revision=revision_after_mutation, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.start_clickhouse() assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) @@ -323,7 +320,7 @@ def test_restore_mutations(cluster): purge_s3(cluster, cluster.minio_bucket_2) revision = (revision_before_mutation + revision_after_mutation) // 2 create_restore_file(node_another_bucket, revision=revision, bucket="root") - node_another_bucket.start_clickhouse(10) + node_another_bucket.start_clickhouse() # Wait for unfinished mutation completion. time.sleep(3) @@ -365,7 +362,57 @@ def test_migrate_to_restorable_schema(cluster): drop_s3_metadata(node_another_bucket) purge_s3(cluster, cluster.minio_bucket_2) create_restore_file(node_another_bucket, revision=revision, bucket="root", path="another_data") - node_another_bucket.start_clickhouse(10) + node_another_bucket.start_clickhouse() assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 6) assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + + +@pytest.mark.parametrize( + "replicated", [False, True] +) +def test_restore_to_detached(cluster, replicated): + node = cluster.instances["node"] + + create_table(node, "test", replicated) + + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-04', 4096, -1))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-06', 4096, -1))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-07', 4096, 0))) + + # Add some mutation. + node.query("ALTER TABLE s3.test UPDATE counter = 1 WHERE 1", settings={"mutations_sync": 2}) + + # Detach some partition. 
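# (Note on the detached-restore path exercised below: a sketch, inferred from
# the updated create_restore_file helper above, of the key=value lines it
# appends to /var/lib/clickhouse/disks/s3/restore for this scenario; the
# concrete revision number is illustrative only:
#
#     revision=3
#     source_bucket=root
#     source_path=data
#     detached=true
#
# With detached=true the restored parts are expected to land as detached
# parts, which is why the test re-attaches each partition with
# ALTER TABLE ... ATTACH PARTITION further down.)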
+ node.query("ALTER TABLE s3.test DETACH PARTITION '2020-01-07'") + + node.query("ALTER TABLE s3.test FREEZE") + revision = get_revision_counter(node, 1) + + node_another_bucket = cluster.instances["node_another_bucket"] + + create_table(node_another_bucket, "test", replicated) + + node_another_bucket.stop_clickhouse() + create_restore_file(node_another_bucket, revision=revision, bucket="root", path="data", detached=True) + node_another_bucket.start_clickhouse() + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(0) + + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-03'") + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-04'") + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-05'") + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-06'") + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT sum(counter) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + + # Attach partition that was already detached before backup-restore. + node_another_bucket.query("ALTER TABLE s3.test ATTACH PARTITION '2020-01-07'") + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 5) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT sum(counter) FROM s3.test FORMAT Values") == "({})".format(4096 * 5) diff --git a/tests/integration/test_mysql_protocol/test.py b/tests/integration/test_mysql_protocol/test.py index 7f7d59674bc..43daeebeaf5 100644 --- a/tests/integration/test_mysql_protocol/test.py +++ b/tests/integration/test_mysql_protocol/test.py @@ -149,8 +149,8 @@ def test_mysql_client_exception(mysql_client, server_address): -e "CREATE TABLE default.t1_remote_mysql AS mysql('127.0.0.1:10086','default','t1_local','default','');" '''.format(host=server_address, port=server_port), demux=True) - assert stderr[0:266].decode() == "mysql: [Warning] Using a password on the command line interface can be insecure.\n" \ - "ERROR 1000 (00000) at line 1: Poco::Exception. Code: 1000, e.code() = 2002, e.displayText() = mysqlxx::ConnectionFailed: Can't connect to MySQL server on '127.0.0.1' (115) ((nullptr):0)" + assert stderr[0:258].decode() == "mysql: [Warning] Using a password on the command line interface can be insecure.\n" \ + "ERROR 1000 (00000) at line 1: Poco::Exception. 
Code: 1000, e.code() = 0, e.displayText() = Exception: Connections to all replicas failed: default@127.0.0.1:10086 as user default" def test_mysql_affected_rows(mysql_client, server_address): diff --git a/tests/integration/test_odbc_interaction/test.py b/tests/integration/test_odbc_interaction/test.py index 2ef71927bdf..47d01389530 100644 --- a/tests/integration/test_odbc_interaction/test.py +++ b/tests/integration/test_odbc_interaction/test.py @@ -6,6 +6,7 @@ import pytest from helpers.cluster import ClickHouseCluster from helpers.test_tools import assert_eq_with_retry from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT +from multiprocessing.dummy import Pool cluster = ClickHouseCluster(__file__) node1 = cluster.add_instance('node1', with_odbc_drivers=True, with_mysql=True, @@ -269,7 +270,7 @@ def test_sqlite_odbc_cached_dictionary(started_cluster): node1.exec_in_container(["bash", "-c", "chmod a+rw /tmp"], privileged=True, user='root') node1.exec_in_container(["bash", "-c", "chmod a+rw {}".format(sqlite_db)], privileged=True, user='root') - node1.query("insert into table function odbc('DSN={};', '', 't3') values (200, 2, 7)".format( + node1.query("insert into table function odbc('DSN={};ReadOnly=0', '', 't3') values (200, 2, 7)".format( node1.odbc_drivers["SQLite3"]["DSN"])) assert node1.query("select dictGetUInt8('sqlite3_odbc_cached', 'Z', toUInt64(200))") == "7\n" # new value @@ -381,5 +382,182 @@ def test_odbc_postgres_date_data_type(started_cluster): expected = '1\t2020-12-01\n2\t2020-12-02\n3\t2020-12-03\n' result = node1.query('SELECT * FROM test_date'); assert(result == expected) + cursor.execute("DROP TABLE IF EXISTS clickhouse.test_date") + node1.query("DROP TABLE IF EXISTS test_date") +def test_odbc_postgres_conversions(started_cluster): + conn = get_postgres_conn() + cursor = conn.cursor() + + cursor.execute( + '''CREATE TABLE IF NOT EXISTS clickhouse.test_types ( + a smallint, b integer, c bigint, d real, e double precision, f serial, g bigserial, + h timestamp)''') + + node1.query(''' + INSERT INTO TABLE FUNCTION + odbc('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_types') + VALUES (-32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12')''') + + result = node1.query(''' + SELECT a, b, c, d, e, f, g, h + FROM odbc('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_types') + ''') + + assert(result == '-32768\t-2147483648\t-9223372036854775808\t1.12345\t1.123456789\t2147483647\t9223372036854775807\t2000-05-12 12:12:12\n') + cursor.execute("DROP TABLE IF EXISTS clickhouse.test_types") + + cursor.execute("""CREATE TABLE IF NOT EXISTS clickhouse.test_types (column1 Timestamp, column2 Numeric)""") + + node1.query( + ''' + CREATE TABLE test_types (column1 DateTime64, column2 Decimal(5, 1)) + ENGINE=ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_types')''') + + node1.query( + """INSERT INTO test_types + SELECT toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow'), toDecimal32(1.1, 1)""") + + expected = node1.query("SELECT toDateTime64('2019-01-01 00:00:00', 3, 'Europe/Moscow'), toDecimal32(1.1, 1)") + result = node1.query("SELECT * FROM test_types") + print(result) + cursor.execute("DROP TABLE IF EXISTS clickhouse.test_types") + assert(result == expected) + + +def test_odbc_cyrillic_with_varchar(started_cluster): + conn = get_postgres_conn() + cursor = conn.cursor() + + cursor.execute("DROP TABLE IF EXISTS 
clickhouse.test_cyrillic") + cursor.execute("CREATE TABLE clickhouse.test_cyrillic (name varchar(11))") + + node1.query(''' + CREATE TABLE test_cyrillic (name String) + ENGINE = ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_cyrillic')''') + + cursor.execute("INSERT INTO clickhouse.test_cyrillic VALUES ('A-nice-word')") + cursor.execute("INSERT INTO clickhouse.test_cyrillic VALUES ('Красивенько')") + + result = node1.query(''' SELECT * FROM test_cyrillic ORDER BY name''') + assert(result == 'A-nice-word\nКрасивенько\n') + result = node1.query(''' SELECT name FROM odbc('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_cyrillic') ''') + assert(result == 'A-nice-word\nКрасивенько\n') + + +def test_many_connections(started_cluster): + conn = get_postgres_conn() + cursor = conn.cursor() + + cursor.execute('DROP TABLE IF EXISTS clickhouse.test_pg_table') + cursor.execute('CREATE TABLE clickhouse.test_pg_table (key integer, value integer)') + + node1.query(''' + DROP TABLE IF EXISTS test_pg_table; + CREATE TABLE test_pg_table (key UInt32, value UInt32) + ENGINE = ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_pg_table')''') + + node1.query("INSERT INTO test_pg_table SELECT number, number FROM numbers(10)") + + query = "SELECT count() FROM (" + for i in range (24): + query += "SELECT key FROM {t} UNION ALL " + query += "SELECT key FROM {t})" + + assert node1.query(query.format(t='test_pg_table')) == '250\n' + + +def test_concurrent_queries(started_cluster): + conn = get_postgres_conn() + cursor = conn.cursor() + + node1.query(''' + DROP TABLE IF EXISTS test_pg_table; + CREATE TABLE test_pg_table (key UInt32, value UInt32) + ENGINE = ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_pg_table')''') + + cursor.execute('DROP TABLE IF EXISTS clickhouse.test_pg_table') + cursor.execute('CREATE TABLE clickhouse.test_pg_table (key integer, value integer)') + + def node_insert(_): + for i in range(5): + node1.query("INSERT INTO test_pg_table SELECT number, number FROM numbers(1000)", user='default') + + busy_pool = Pool(5) + p = busy_pool.map_async(node_insert, range(5)) + p.wait() + result = node1.query("SELECT count() FROM test_pg_table", user='default') + print(result) + assert(int(result) == 5 * 5 * 1000) + + def node_insert_select(_): + for i in range(5): + result = node1.query("INSERT INTO test_pg_table SELECT number, number FROM numbers(1000)", user='default') + result = node1.query("SELECT * FROM test_pg_table LIMIT 100", user='default') + + busy_pool = Pool(5) + p = busy_pool.map_async(node_insert_select, range(5)) + p.wait() + result = node1.query("SELECT count() FROM test_pg_table", user='default') + print(result) + assert(int(result) == 5 * 5 * 1000 * 2) + + node1.query('DROP TABLE test_pg_table;') + cursor.execute('DROP TABLE clickhouse.test_pg_table;') + + +def test_odbc_long_column_names(started_cluster): + conn = get_postgres_conn(); + cursor = conn.cursor() + + column_name = "column" * 8 + create_table = "CREATE TABLE clickhouse.test_long_column_names (" + for i in range(1000): + if i != 0: + create_table += ", " + create_table += "{} integer".format(column_name + str(i)) + create_table += ")" + cursor.execute(create_table) + insert = "INSERT INTO clickhouse.test_long_column_names SELECT i" + ", i" * 999 + " FROM generate_series(0, 99) as t(i)" + cursor.execute(insert) + conn.commit() + + create_table = "CREATE TABLE test_long_column_names (" + for i in range(1000): + if i != 0: 
+ create_table += ", " + create_table += "{} UInt32".format(column_name + str(i)) + create_table += ") ENGINE=ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_long_column_names')" + result = node1.query(create_table); + + result = node1.query('SELECT * FROM test_long_column_names'); + expected = node1.query("SELECT number" + ", number" * 999 + " FROM numbers(100)") + assert(result == expected) + + cursor.execute("DROP TABLE IF EXISTS clickhouse.test_long_column_names") + node1.query("DROP TABLE IF EXISTS test_long_column_names") + + +def test_odbc_long_text(started_cluster): + conn = get_postgres_conn() + cursor = conn.cursor() + cursor.execute("drop table if exists clickhouse.test_long_text") + cursor.execute("create table clickhouse.test_long_text(flen int, field1 text)"); + + # sample test from issue 9363 + text_from_issue = """BEGIN These examples only show the order that data is arranged in. The values from different columns are stored separately, and data from the same column is stored together. Examples of a column-oriented DBMS: Vertica, Paraccel (Actian Matrix and Amazon Redshift), Sybase IQ, Exasol, Infobright, InfiniDB, MonetDB (VectorWise and Actian Vector), LucidDB, SAP HANA, Google Dremel, Google PowerDrill, Druid, and kdb+. Different orders for storing data are better suited to different scenarios. The data access scenario refers to what queries are made, how often, and in what proportion; how much data is read for each type of query – rows, columns, and bytes; the relationship between reading and updating data; the working size of the data and how locally it is used; whether transactions are used, and how isolated they are; requirements for data replication and logical integrity; requirements for latency and throughput for each type of query, and so on. The higher the load on the system, the more important it is to customize the system set up to match the requirements of the usage scenario, and the more fine grained this customization becomes. There is no system that is equally well-suited to significantly different scenarios. If a system is adaptable to a wide set of scenarios, under a high load, the system will handle all the scenarios equally poorly, or will work well for just one or few of possible scenarios. Key Properties of OLAP Scenario¶ The vast majority of requests are for read access. Data is updated in fairly large batches (> 1000 rows), not by single rows; or it is not updated at all. Data is added to the DB but is not modified. For reads, quite a large number of rows are extracted from the DB, but only a small subset of columns. Tables are "wide," meaning they contain a large number of columns. Queries are relatively rare (usually hundreds of queries per server or less per second). For simple queries, latencies around 50 ms are allowed. Column values are fairly small: numbers and short strings (for example, 60 bytes per URL). Requires high throughput when processing a single query (up to billions of rows per second per server). Transactions are not necessary. Low requirements for data consistency. There is one large table per query. All tables are small, except for one. A query result is significantly smaller than the source data. In other words, data is filtered or aggregated, so the result fits in a single server"s RAM. It is easy to see that the OLAP scenario is very different from other popular scenarios (such as OLTP or Key-Value access). 
So it doesn"t make sense to try to use OLTP or a Key-Value DB for processing analytical queries if you want to get decent performance. For example, if you try to use MongoDB or Redis for analytics, you will get very poor performance compared to OLAP databases. Why Column-Oriented Databases Work Better in the OLAP Scenario¶ Column-oriented databases are better suited to OLAP scenarios: they are at least 100 times faster in processing most queries. The reasons are explained in detail below, but the fact is easier to demonstrate visually. END""" + cursor.execute("""insert into clickhouse.test_long_text (flen, field1) values (3248, '{}')""".format(text_from_issue)); + + node1.query(''' + DROP TABLE IF EXISTS test_long_text; + CREATE TABLE test_long_text (flen UInt32, field1 String) + ENGINE = ODBC('DSN=postgresql_odbc; Servername=postgre-sql.local', 'clickhouse', 'test_long_text')''') + result = node1.query("select field1 from test_long_text;") + assert(result.strip() == text_from_issue) + + long_text = "text" * 1000000 + cursor.execute("""insert into clickhouse.test_long_text (flen, field1) values (400000, '{}')""".format(long_text)); + result = node1.query("select field1 from test_long_text where flen=400000;") + assert(result.strip() == long_text) + diff --git a/tests/integration/test_replication_credentials/test.py b/tests/integration/test_replication_credentials/test.py index 4f07d6966a6..9181c515adf 100644 --- a/tests/integration/test_replication_credentials/test.py +++ b/tests/integration/test_replication_credentials/test.py @@ -9,7 +9,6 @@ def _fill_nodes(nodes, shard): node.query( ''' CREATE DATABASE test; - CREATE TABLE test_table(date Date, id UInt32, dummy UInt32) ENGINE = ReplicatedMergeTree('/clickhouse/tables/test{shard}/replicated', '{replica}', date, id, 8192); '''.format(shard=shard, replica=node.name)) @@ -114,6 +113,32 @@ def test_different_credentials(different_credentials_cluster): assert node5.query("SELECT id FROM test_table order by id") == '111\n' assert node6.query("SELECT id FROM test_table order by id") == '222\n' + add_old = """ + + 9009 + + admin + 222 + + root + 111 + + + aaa + 333 + + + + """ + + node5.replace_config("/etc/clickhouse-server/config.d/credentials1.xml", add_old) + + node5.query("SYSTEM RELOAD CONFIG") + node5.query("INSERT INTO test_table values('2017-06-21', 333, 1)") + node6.query("SYSTEM SYNC REPLICA test_table", timeout=10) + + assert node6.query("SELECT id FROM test_table order by id") == '111\n222\n333\n' + node7 = cluster.add_instance('node7', main_configs=['configs/remote_servers.xml', 'configs/credentials1.xml'], with_zookeeper=True) @@ -146,3 +171,23 @@ def test_credentials_and_no_credentials(credentials_and_no_credentials_cluster): assert node7.query("SELECT id FROM test_table order by id") == '111\n' assert node8.query("SELECT id FROM test_table order by id") == '222\n' + + allow_empty = """ + + 9009 + + admin + 222 + true + + + """ + + # change state: Flip node7 to mixed auth/non-auth (allow node8) + node7.replace_config("/etc/clickhouse-server/config.d/credentials1.xml", + allow_empty) + + node7.query("SYSTEM RELOAD CONFIG") + node7.query("insert into test_table values ('2017-06-22', 333, 1)") + node8.query("SYSTEM SYNC REPLICA test_table", timeout=10) + assert node8.query("SELECT id FROM test_table order by id") == '111\n222\n333\n' diff --git a/tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.reference b/tests/integration/test_s3_cluster/__init__.py similarity index 100% rename from
tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.reference rename to tests/integration/test_s3_cluster/__init__.py diff --git a/tests/integration/test_s3_cluster/configs/cluster.xml b/tests/integration/test_s3_cluster/configs/cluster.xml new file mode 100644 index 00000000000..8334ace15eb --- /dev/null +++ b/tests/integration/test_s3_cluster/configs/cluster.xml @@ -0,0 +1,24 @@ + + + + + + + s0_0_0 + 9000 + + + s0_0_1 + 9000 + + + + + s0_1_0 + 9000 + + + + + + \ No newline at end of file diff --git a/tests/integration/test_s3_cluster/data/clickhouse/part1.csv b/tests/integration/test_s3_cluster/data/clickhouse/part1.csv new file mode 100644 index 00000000000..a44d3ca1ffb --- /dev/null +++ b/tests/integration/test_s3_cluster/data/clickhouse/part1.csv @@ -0,0 +1,10 @@ +"fSRH",976027584,"[[(-1.5346513608456012e-204,-2.867937504545497e266),(3.1627675144114637e-231,-2.20343471241604e-54),(-1.866886218651809e-89,-7.695893036366416e100),(8.196307577166986e-169,-8.203793887684096e-263),(-1.6150328830402252e-215,8.531116551449711e-296),(4.3378407855931477e92,1.1313645428723989e117),(-4.238081208165573e137,-8.969951719788361e67)],[(-3.409639554701108e169,-7.277093176871153e-254),(1.1466207153308928e-226,3.429893348531029e96),(6.451302850199177e-189,-7.52379443153242e125),(-1.7132078539493614e-127,-2.3177814806867505e241),(1.4996520594989919e-257,4.271017883966942e128)],[(65460976657479156000,1.7055814145588595e253),(-1.921491101580189e154,3.2912740465446566e-286),(0.0008437955075350972,-5.143493717005472e-107),(8.637208599142187e-150,7.825076274945548e136),(1.8077733932468565e-159,5.51061479974026e-77),(1.300406236793709e-260,10669142.497111017),(-1.731981751951159e91,-1.270795062098902e102)],[(3.336706342781395e-7,-1.1919528866481513e266)]]" +"sX6>",733011552,"[[(-3.737863336077909e-44,3.066510481088993e-161),(-1.0047259170558555e-31,8.066145272086467e-274)],[(1.2261835328136691e-58,-6.154561379350395e258),(8.26019994651558e35,-6.736984599062694e-19),(-1.4143671344485664e-238,-1.220003479858045e203),(2.466089772925698e-207,1.0025476904532926e-242),(-6.3786667153054354e240,-7.010834902137467e-103),(-6.766918514324285e-263,7.404639608483947e188),(2.753493977757937e126,-4.089565842001999e-152)],[(4.339873790493155e239,-5.022554811588342e24),(-1.7712390083519473e-66,1.3290563068463308e112),(3.3648764781548893e233,1.1123394188044336e112),(-5.415278137684864e195,5.590597851016202e-270),(-2.1032310903543943e99,-2.2335799924679948e-184)]]" +"",2396526460,"[[(1.9925796792641788e-261,1.647618305107044e158),(3.014593666207223e-222,-9.016473078578002e-20),(-1.5307802021477097e-230,-7.867078587209265e-243),(-7.330317098800564e295,1.7496539408601967e-281)],[(2.2816938730052074e98,-3.3089122320442997e-136),(-4.930983789361344e-263,-6.526758521792829e59),(-2.6482873886835413e34,-4.1985691142515947e83),(1.5496810029349365e238,-4.790553105593492e71),(-7.597436233325566e83,-1.3791763752378415e137),(-1.917321980700588e-307,-1.5913257477581824e62)]]" 
+"=@ep",3618088392,"[[(-2.2303235811290024e-306,8.64070367587338e-13),(-7.403012423264767e-129,-1.0825508572345856e-147),(-3.6080301450167e286,1.7302718548299961e285),(-1.3839239794870825e-156,4.255424291564323e107),(2.3191305762555e-33,-2.873899421579949e-145),(7.237414513124649e-159,-4.926574547865783e178),(4.251831312243431e-199,1.2164714479391436e201)],[(-5.114074387943793e242,2.0119340496886292e295),(-3.3663670765548e-262,-6.1992631068472835e221),(1.1539386993255106e-261,1.582903697171063e-33),(-6.1914577817088e118,-1.0401495621681123e145)],[],[(-5.9815907467493136e82,4.369047439375412e219),(-4.485368440431237e89,-3.633023372434946e-59),(-2.087497331251707e-180,1.0524018118646965e257)],[(-1.2636503461000215e-228,-4.8426877075223456e204),(2.74943107551342e281,-7.453097760262003e-14)]]" +"",3467776823,"[]" +"b'zQ",484159052,"[[(3.041838095219909e276,-6.956822159518612e-87)],[(6.636906358770296e-97,1.0531865724169307e-214)],[(-8.429249069245283e-243,-2.134779842898037e243)],[(-0.4657586598569572,2.799768548127799e187),(-5.961335445789657e-129,2.560331789344886e293),(-3.139409694983184e45,2.8011384557268085e-47)]]" +"6xGw",4126129912,"[]" +"Q",3109335413,"[[(-2.8435266267772945e39,9.548278488724291e26),(-1.1682790407223344e46,-3.925561182768867e-266),(2.8381633655721614e-202,-3.472921303086527e40),(3.3968328275944204e-150,-2.2188876184777275e-69),(-1.2612795000783405e-88,-1.2942793285205966e-49),(1.3678466236967012e179,1.721664680964459e97),(-1.1020844667744628e198,-3.403142062758506e-47)],[],[(1.343149099058239e-279,9.397894929770352e-132),(-5.280854317597215e250,9.862550191577643e-292),(-7.11468799151533e-58,7.510011657942604e96),(1.183774454157175e-288,-1.5697197095936546e272),(-3.727289017361602e120,2.422831380775067e-107),(1.4345094301262986e-177,2.4990983297605437e-91)],[(9.195226893854516e169,6.546374357272709e-236),(2.320311199531441e-126,2.2257031285964243e-185),(3.351868475505779e-184,1.84394695526876e88)],[(1.6290814396647987e-112,-3.589542711073253e38),(4.0060174859833907e-261,-1.9900431208726192e-296),(2.047468933030435e56,8.483912759156179e-57),(3.1165727272872075e191,-1.5487136748040008e-156),(0.43564020198461034,4.618165048931035e-244),(-7.674951896752824e-214,1.1652522629091777e-105),(4.838653901829244e-89,5.3085904574780206e169)],[(1.8286703553352283e-246,2.0403170465657044e255),(2.040810692623279e267,4.3956975402250484e-8),(2.4101343663018673e131,-8.672394158504762e167),(3.092080945239809e-219,-3.775474693770226e293),(-1.527991241079512e-15,-1.2603969180963007e226),(9.17470637459212e-56,1.6021090930395906e-133),(7.877647227721046e58,3.2592118033868903e-108)],[(1.4334765313272463e170,2.6971234798957105e-50)]]" +"^ip",1015254922,"[[(-2.227414144223298e-63,1.2391785738638914e276),(1.2668491759136862e207,2.5656762953078853e-67),(2.385410876813441e-268,1.451107969531624e25),(-5.475956161647574e131,2239495689376746),(1.5591286361054593e180,3.672868971445151e117)]]" +"5N]",1720727300,"[[(-2.0670321228319122e-258,-2.6893477429616666e-32),(-2.2424105705209414e225,3.547832127050775e25),(4.452916756606404e-121,-3.71114618421911e156),(-1.966961937965055e-110,3.1217044497868816e227),(20636923519704216,1.3500210618276638e30),(3.3195926701816527e-276,1.5557140338374535e234)],[]]" diff --git a/tests/integration/test_s3_cluster/data/clickhouse/part123.csv b/tests/integration/test_s3_cluster/data/clickhouse/part123.csv new file mode 100644 index 00000000000..1ca3353b741 --- /dev/null +++ b/tests/integration/test_s3_cluster/data/clickhouse/part123.csv @@ -0,0 +1,3 @@ 
+"b'zQ",2960084897,"[[(3.014593666207223e-222,-7.277093176871153e-254),(-1.5307802021477097e-230,3.429893348531029e96),(-7.330317098800564e295,-7.52379443153242e125),(2.2816938730052074e98,-2.3177814806867505e241),(-4.930983789361344e-263,4.271017883966942e128)],[(-2.6482873886835413e34,1.7055814145588595e253),(1.5496810029349365e238,3.2912740465446566e-286),(-7.597436233325566e83,-5.143493717005472e-107),(-1.917321980700588e-307,7.825076274945548e136)],[(-2.2303235811290024e-306,5.51061479974026e-77),(-7.403012423264767e-129,10669142.497111017),(-3.6080301450167e286,-1.270795062098902e102),(-1.3839239794870825e-156,-1.1919528866481513e266),(2.3191305762555e-33,3.066510481088993e-161),(7.237414513124649e-159,8.066145272086467e-274)],[(4.251831312243431e-199,-6.154561379350395e258),(-5.114074387943793e242,-6.736984599062694e-19),(-3.3663670765548e-262,-1.220003479858045e203),(1.1539386993255106e-261,1.0025476904532926e-242),(-6.1914577817088e118,-7.010834902137467e-103),(-5.9815907467493136e82,7.404639608483947e188),(-4.485368440431237e89,-4.089565842001999e-152)]]" +"6xGw",2107128550,"[[(-2.087497331251707e-180,-5.022554811588342e24),(-1.2636503461000215e-228,1.3290563068463308e112),(2.74943107551342e281,1.1123394188044336e112),(3.041838095219909e276,5.590597851016202e-270)],[],[(6.636906358770296e-97,-2.2335799924679948e-184),(-8.429249069245283e-243,1.647618305107044e158),(-0.4657586598569572,-9.016473078578002e-20)]]" +"Q",2713167232,"[[(-5.961335445789657e-129,-7.867078587209265e-243),(-3.139409694983184e45,1.7496539408601967e-281)],[(-2.8435266267772945e39,-3.3089122320442997e-136)]]" diff --git a/tests/integration/test_s3_cluster/data/database/part2.csv b/tests/integration/test_s3_cluster/data/database/part2.csv new file mode 100644 index 00000000000..572676e47c6 --- /dev/null +++ b/tests/integration/test_s3_cluster/data/database/part2.csv @@ -0,0 +1,5 @@ +"~m`",820408404,"[]" +"~E",3621610983,"[[(1.183772215004139e-238,-1.282774073199881e211),(1.6787305112393978e-46,7.500499989257719e25),(-2.458759475104641e-260,3.1724599388651864e-171),(-2.0163203163062471e118,-4.677226438945462e-162),(-5.52491070012707e-135,7.051780441780731e-236)]]" +"~1",1715555780,"[[(-6.847404226505131e-267,5.939552045362479e-272),(8.02275075985457e-160,8.369250185716419e-104),(-1.193940928527857e-258,-1.132580458849774e39)],[(1.1866087552639048e253,3.104988412734545e57),(-3.37278669639914e84,-2.387628643569968e287),(-2.452136349495753e73,3.194309776006896e-204),(-1001997440265471100,3.482122851077378e-182)],[],[(-5.754682082202988e-20,6.598766936241908e156)],[(8.386764833095757e300,1.2049637765877942e229),(3.136243074210055e53,5.764669663844127e-100),(-4.190632347661851e195,-5.053553379163823e302),(2.0805194731736336e-19,-1.0849036699112485e-271),(1.1292361211411365e227,-8.767824448179629e229),(-3.6938137156625264e-19,-5.387931698392423e109),(-1.2240482125885677e189,-1.5631467861525635e-103)],[(-2.3917431782202442e138,7.817228281030191e-242),(-1.1462343232899826e279,-1.971215065504208e-225),(5.4316119855340265e-62,3.761081156597423e-60),(8.111852137718836e306,8.115485489580134e-208)],[]]" +"~%]",1606443384,"[[]]" +"}or",726681547,"[]" \ No newline at end of file diff --git a/tests/integration/test_s3_cluster/data/database/partition675.csv b/tests/integration/test_s3_cluster/data/database/partition675.csv new file mode 100644 index 00000000000..e8496680368 --- /dev/null +++ b/tests/integration/test_s3_cluster/data/database/partition675.csv @@ -0,0 +1,7 @@ 
+"kvUES",4281162618,"[[(2.4538308454074088e303,1.2209370543175666e178),(1.4564007891121754e-186,2.340773478952682e-273),(-1.01791181533976e165,-3.9617466227377253e248)]]" +"Gu",4280623186,"[[(-1.623487579335014e38,-1.0633405021023563e225),(-4.373688812751571e180,2.5511550357717127e138)]]" +"J_u1",4277430503,"[[(2.981826196369429e-294,-6.059236590410922e236),(8.502045137575854e-296,3.0210403188125657e-91),(-9.370591842861745e175,4.150870185764185e129),(1.011801592194125e275,-9.236010982686472e266),(-3.1830638196303316e277,2.417706446545472e-105),(-1.4369143023804266e-201,4.7529126795899655e238)],[(-2.118789593804697e186,-1.8760231612433755e-280),(2.5982563179976053e200,-1.4683025762313524e-40)],[(-1.873397623255704e-240,1.4363190147949886e-283),(-1.5760337746177136e153,1.5272278536086246e-34),(-8.117473317695919e155,2.4375370926733504e150),(-1.179230972881795e99,1.7693459774706515e-259),(2.2102106250558424e-40,4.734162675762768e-56),(6.058833110550111e-8,8.892471775821198e164),(-1.8208740799996599e59,6.446958261080721e178)]]" +"s:\",4265055390,"[[(-3.291651377214531e-167,3.9198636942402856e185),(2.4897781692770126e176,2.579309759138358e188),(4.653945381397663e205,3.216314556208208e158),(-5.3373279440714224e-39,2.404386813826413e212),(-1.4217294382527138e307,8.874978978402512e-173)],[(8.527603121149904e-58,-5.0520795335878225e88),(-0.00022870878520550814,-3.2334214176860943e-68),(-6.97683613433404e304,-2.1573757788072144e-82),(-1.1394163455875937e36,-3.817990182461824e271),(2.4099027412881423e-209,8.542179392011098e-156),(3.2610511540394803e174,1.1692631657517616e-20)],[(3.625474290538107e261,-5.359205062039837e-193),(-3.574126569378072e-112,-5.421804160994412e265),(-4.873653931207849e-76,3219678918284.317),(-7.030770825898911e-57,1.4647389742249787e-274),(-4.4882439220492357e-203,6.569338333730439e-38)],[(-2.2418056002374865e-136,5.113251922954469e-16),(2.5156744571032497e297,-3.0536957683846124e-192)],[(1.861112291954516e306,-1.8160882143331256e129),(1.982573454900027e290,-2.451412311394593e170)],[(-2.8292230178712157e-18,1.2570198161962067e216),(6.24832495972797e-164,-2.0770908334330718e-273)],[(980143647.1858811,1.2738714961511727e106),(6.516450532397311e-184,4.088688742052062e31),(-2.246311532913914e269,-7.418103885850518e-179),(1.2222973942835046e-289,2.750544834553288e-46),(9.503169349701076e159,-1.355457053256579e215)]]" +":hzO",4263726959,"[[(-2.553206398375626e-90,1.6536977728640226e199),(1.5630078027143848e-36,2.805242683101373e-211),(2.2573933085983554e-92,3.450501333524858e292),(-1.215900901292646e-275,-3.860558658606121e272),(6.65716072773856e-145,2.5359010031217893e217)],[(-1.3308039625779135e308,1.7464622720773261e258),(-3.2986890093446374e179,3.9038871583175653e-69),(-4.3594764087383885e-95,4.229921973278908e-123),(-5.455694205415656e137,3.597894902167716e108),(1.2480860990110662e-29,-1.4873488392480292e-185),(7.563210285835444e55,-5624068447.488605)],[(3.9517937289943195e181,-3.2799189227094424e-68),(8.906762198487649e-167,3.952452177941537e-159)]]" 
+"a",4258301804,"[[(5.827965576703262e-281,2.2523852665173977e90)],[(-6.837604072282348e-97,8.125864241406046e-61)],[(-2.3047912084435663e53,-8.814499720685194e36),(1.2072558137199047e-79,1.2096862541827071e142),(2.2000026293774143e275,-3.2571689055108606e-199),(1.1822278574921316e134,2.9571188365006754e-86),(1.0448954272555034e-169,1.2182183489600953e-60)],[(-3.1366540817730525e89,9.327128058982966e-306),(6.588968210928936e73,-11533531378.938957),(-2.6715943555840563e44,-4.557428011172859e224),(-3.8334913754415923e285,-4.748721454106074e-173),(-1.6912052107425128e275,-4.789382438422238e-219),(1.8538365229016863e151,-3.5698172075468775e-37)],[(-2.1963131282037294e49,-5.53604352524995e-296)],[(-8.834414834987965e167,1.3186354307320576e247),(2.109209547987338e298,1.2191009105107557e-32),(-3.896880410603213e-92,-3.4589588698231044e-121),(-3.252529090888335e138,-7.862741341454407e204)],[(-9.673078095447289e-207,8.839303128607278e123),(2.6043620378793597e-244,-6.898328199987363e-308),(-2.5921142292355475e-54,1.0352159149517285e-143)]]" +"S+",4257734123,"[[(1.5714269203495863e245,-15651321.549208183),(-3.7292056272445236e-254,-4.556927533596056e-234),(-3.0309414401442555e-203,-3.84393827531526e-12)],[(1.7718777510571518e219,3.972086323144777e139),(1.5723805735454373e-67,-3.805243648123396e226),(154531069271292800000,1.1384408025183933e-285),(-2.009892367470994e-247,2.0325742976832167e81)],[(1.2145787097670788e55,-5.0579298233321666e-30),(5.05577441452021e-182,-2.968914705509665e-175),(-1.702335524921919e67,-2.852552828587631e-226),(-2.7664498327826963e-99,-1.2967072085088717e-305),(7.68881162387673e-68,-1.2506915095983359e-142),(-7.60308693295946e-40,5.414853590549086e218)],[(8.595602987813848e226,-3.9708286611967497e-206),(-5.80352787694746e-52,5.610493934761672e236),(2.1336999375861025e217,-5.431988994371099e-154),(-6.2758614367782974e29,-8.359901046980544e-55)],[(1.6910790690897504e54,9.798739710823911e197),(-6.530270107036228e-284,8.758552462406328e-302),(2.931625032390877e-118,2.8793800873550273e83),(-3.293986884112906e-88,11877326093331202),(0.0008071321465157103,1.0720860516457485e-298)]]" diff --git a/tests/integration/test_s3_cluster/test.py b/tests/integration/test_s3_cluster/test.py new file mode 100644 index 00000000000..f60e6e6862f --- /dev/null +++ b/tests/integration/test_s3_cluster/test.py @@ -0,0 +1,129 @@ +import logging +import os + +import pytest +from helpers.cluster import ClickHouseCluster +from helpers.test_tools import TSV + +logging.getLogger().setLevel(logging.INFO) +logging.getLogger().addHandler(logging.StreamHandler()) + +SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__)) +S3_DATA = ['data/clickhouse/part1.csv', 'data/clickhouse/part123.csv', 'data/database/part2.csv', 'data/database/partition675.csv'] + +def create_buckets_s3(cluster): + minio = cluster.minio_client + for file in S3_DATA: + minio.fput_object(bucket_name=cluster.minio_bucket, object_name=file, file_path=os.path.join(SCRIPT_DIR, file)) + for obj in minio.list_objects(cluster.minio_bucket, recursive=True): + print(obj.object_name) + + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster = ClickHouseCluster(__file__) + cluster.add_instance('s0_0_0', main_configs=["configs/cluster.xml"], with_minio=True) + cluster.add_instance('s0_0_1', main_configs=["configs/cluster.xml"]) + cluster.add_instance('s0_1_0', main_configs=["configs/cluster.xml"]) + + logging.info("Starting cluster...") + cluster.start() + logging.info("Cluster started") + + create_buckets_s3(cluster) + + yield 
cluster + finally: + cluster.shutdown() + + +def test_select_all(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query(""" + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ORDER BY (name, value, polygon)""") + # print(pure_s3) + s3_distibuted = node.query(""" + SELECT * from s3Cluster( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') ORDER BY (name, value, polygon)""") + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) + + +def test_count(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query(""" + SELECT count(*) from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") + # print(pure_s3) + s3_distibuted = node.query(""" + SELECT count(*) from s3Cluster( + 'cluster_simple', 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) + + +def test_union_all(started_cluster): + node = started_cluster.instances['s0_0_0'] + pure_s3 = node.query(""" + SELECT * FROM + ( + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + UNION ALL + SELECT * from s3( + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ) + ORDER BY (name, value, polygon) + """) + # print(pure_s3) + s3_distibuted = node.query(""" + SELECT * FROM + ( + SELECT * from s3Cluster( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + UNION ALL + SELECT * from s3Cluster( + 'cluster_simple', + 'http://minio1:9001/root/data/{clickhouse,database}/*', 'minio', 'minio123', 'CSV', + 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + ) + ORDER BY (name, value, polygon) + """) + # print(s3_distibuted) + + assert TSV(pure_s3) == TSV(s3_distibuted) + + +def test_wrong_cluster(started_cluster): + node = started_cluster.instances['s0_0_0'] + error = node.query_and_get_error(""" + SELECT count(*) from s3Cluster( + 'non_existent_cluster', + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))') + UNION ALL + SELECT count(*) from s3Cluster( + 'non_existent_cluster', + 'http://minio1:9001/root/data/{clickhouse,database}/*', + 'minio', 'minio123', 'CSV', 'name String, value UInt32, polygon Array(Array(Tuple(Float64, Float64)))')""") + + assert "not found" in error \ No newline at end of file diff --git a/tests/integration/test_s3_zero_copy_replication/configs/config.d/s3.xml b/tests/integration/test_s3_zero_copy_replication/configs/config.d/s3.xml index 88eb49d9f17..ec28840054a 100644 --- a/tests/integration/test_s3_zero_copy_replication/configs/config.d/s3.xml +++ 
b/tests/integration/test_s3_zero_copy_replication/configs/config.d/s3.xml @@ -26,6 +26,7 @@ s31 + 0.0 diff --git a/tests/integration/test_s3_zero_copy_replication/test.py b/tests/integration/test_s3_zero_copy_replication/test.py index 5bc30ab1d6b..f7078d55c33 100644 --- a/tests/integration/test_s3_zero_copy_replication/test.py +++ b/tests/integration/test_s3_zero_copy_replication/test.py @@ -36,6 +36,15 @@ def get_large_objects_count(cluster, size=100): return counter +def wait_for_large_objects_count(cluster, expected, size=100, timeout=30): + while timeout > 0: + if get_large_objects_count(cluster, size) == expected: + return + timeout -= 1 + time.sleep(1) + assert get_large_objects_count(cluster, size) == expected + + @pytest.mark.parametrize( "policy", ["s3"] ) @@ -67,23 +76,15 @@ def test_s3_zero_copy_replication(cluster, policy): assert node1.query("SELECT * FROM s3_test order by id FORMAT Values") == "(0,'data'),(1,'data'),(2,'data'),(3,'data')" # Based on version 20.x - two parts - assert get_large_objects_count(cluster) == 2 + wait_for_large_objects_count(cluster, 2) node1.query("OPTIMIZE TABLE s3_test") - time.sleep(1) - # Based on version 20.x - after merge, two old parts and one merged - assert get_large_objects_count(cluster) == 3 + wait_for_large_objects_count(cluster, 3) # Based on version 20.x - after cleanup - only one merged part - countdown = 60 - while countdown > 0: - if get_large_objects_count(cluster) == 1: - break - time.sleep(1) - countdown -= 1 - assert get_large_objects_count(cluster) == 1 + wait_for_large_objects_count(cluster, 1, timeout=60) node1.query("DROP TABLE IF EXISTS s3_test NO DELAY") node2.query("DROP TABLE IF EXISTS s3_test NO DELAY") @@ -127,7 +128,7 @@ def test_s3_zero_copy_on_hybrid_storage(cluster): assert node2.query("SELECT partition_id,disk_name FROM system.parts WHERE table='hybrid_test' FORMAT Values") == "('all','s31')" # Check that after moving partition on node2 no new obects on s3 - assert get_large_objects_count(cluster, 0) == s3_objects + wait_for_large_objects_count(cluster, s3_objects, size=0) assert node1.query("SELECT * FROM hybrid_test ORDER BY id FORMAT Values") == "(0,'data'),(1,'data')" assert node2.query("SELECT * FROM hybrid_test ORDER BY id FORMAT Values") == "(0,'data'),(1,'data')" diff --git a/tests/integration/test_secure_socket/test.py b/tests/integration/test_secure_socket/test.py index 0ca6e6a6e6b..65c789f9d02 100644 --- a/tests/integration/test_secure_socket/test.py +++ b/tests/integration/test_secure_socket/test.py @@ -64,7 +64,7 @@ def test(started_cluster): assert end - start < 10 start = time.time() - error = NODES['node1'].query_and_get_error('SELECT * FROM distributed_table settings receive_timeout=5, send_timeout=5, use_hedged_requests=0;') + error = NODES['node1'].query_and_get_error('SELECT * FROM distributed_table settings receive_timeout=5, send_timeout=5, use_hedged_requests=0, async_socket_for_remote=1;') end = time.time() assert end - start < 10 @@ -73,7 +73,7 @@ def test(started_cluster): assert error.find('DB::ReadBufferFromPocoSocket::nextImpl()') == -1 start = time.time() - error = NODES['node1'].query_and_get_error('SELECT * FROM distributed_table settings receive_timeout=5, send_timeout=5;') + error = NODES['node1'].query_and_get_error('SELECT * FROM distributed_table settings receive_timeout=5, send_timeout=5, use_hedged_requests=1, async_socket_for_remote=1;') end = time.time() assert end - start < 10 diff --git a/tests/integration/test_storage_hdfs/test.py 
b/tests/integration/test_storage_hdfs/test.py index a6c8b7e1ee9..a0dc342e910 100644 --- a/tests/integration/test_storage_hdfs/test.py +++ b/tests/integration/test_storage_hdfs/test.py @@ -201,6 +201,24 @@ def test_write_gzip_storage(started_cluster): assert started_cluster.hdfs_api.read_gzip_data("/gzip_storage") == "1\tMark\t72.53\n" assert node1.query("select * from GZIPHDFSStorage") == "1\tMark\t72.53\n" + +def test_virtual_columns(started_cluster): + node1.query("create table virtual_cols (id UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/file*', 'TSV')") + started_cluster.hdfs_api.write_data("/file1", "1\n") + started_cluster.hdfs_api.write_data("/file2", "2\n") + started_cluster.hdfs_api.write_data("/file3", "3\n") + expected = "1\tfile1\thdfs://hdfs1:9000//file1\n2\tfile2\thdfs://hdfs1:9000//file2\n3\tfile3\thdfs://hdfs1:9000//file3\n" + assert node1.query("select id, _file as file_name, _path as file_path from virtual_cols order by id") == expected + + +def test_read_files_with_spaces(started_cluster): + started_cluster.hdfs_api.write_data("/test test test 1.txt", "1\n") + started_cluster.hdfs_api.write_data("/test test test 2.txt", "2\n") + started_cluster.hdfs_api.write_data("/test test test 3.txt", "3\n") + node1.query("create table test (id UInt32) ENGINE = HDFS('hdfs://hdfs1:9000/test*', 'TSV')") + assert node1.query("select * from test order by id") == "1\n2\n3\n" + + if __name__ == '__main__': cluster.start() input("Cluster created, press any key to destroy...") diff --git a/tests/integration/test_storage_kafka/test.py b/tests/integration/test_storage_kafka/test.py index 9b2f54a49a0..04a78c5f2c4 100644 --- a/tests/integration/test_storage_kafka/test.py +++ b/tests/integration/test_storage_kafka/test.py @@ -6,6 +6,7 @@ import subprocess import threading import time import io +import string import avro.schema import avro.io @@ -636,49 +637,21 @@ def test_kafka_formats(kafka_cluster): avro_message({'id': 0, 'blockNo': 0, 'val1': str('AM'), 'val2': 0.5, "val3": 1}), ], 'supports_empty_value': False, - } - # 'Arrow' : { - # # Not working at all: DB::Exception: Error while opening a table: Invalid: File is too small: 0, Stack trace (when copying this message, always include the lines below): - # # /src/Common/Exception.cpp:37: DB::Exception::Exception(std::__1::basic_string, std::__1::allocator > const&, int) @ 0x15c2d2a3 in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:88: DB::ArrowBlockInputFormat::prepareReader() @ 0x1ddff1c3 in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:26: DB::ArrowBlockInputFormat::ArrowBlockInputFormat(DB::ReadBuffer&, DB::Block const&, bool) @ 0x1ddfef63 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:2214: std::__1::__compressed_pair_elem::__compressed_pair_elem(std::__1::piecewise_construct_t, std::__1::tuple, std::__1::__tuple_indices<0ul, 1ul, 2ul>) @ 0x1de0470f in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:2299: std::__1::__compressed_pair, DB::ArrowBlockInputFormat>::__compressed_pair&, DB::ReadBuffer&, DB::Block const&, bool&&>(std::__1::piecewise_construct_t, std::__1::tuple&>, std::__1::tuple) @ 0x1de04375 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:3569: std::__1::__shared_ptr_emplace >::__shared_ptr_emplace(std::__1::allocator, DB::ReadBuffer&, DB::Block const&, bool&&) @ 0x1de03f97 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:4400: std::__1::enable_if::value), std::__1::shared_ptr >::type 
std::__1::make_shared(DB::ReadBuffer&, DB::Block const&, bool&&) @ 0x1de03d4c in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:107: DB::registerInputFormatProcessorArrow(DB::FormatFactory&)::$_0::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) const @ 0x1de010df in /usr/bin/clickhouse - # 'data_sample' : [ - # '\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\
x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', - # '\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\xd8\x00\x00\x00\x00\x00\x00\x00
\x20\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x06\x00\x00\x00\x08\x00\x00\x00\x0a\x00\x00\x00\x0c\x00\x00\x00\x0e\x00\x00\x00\x10\x00\x00\x00\x12\x00\x00\x00\x14\x00\x00\x00\x16\x00\x00\x00\x18\x00\x00\x00\x1a\x00\x00\x00\x1c\x00\x00\x00\x1e\x00\x00\x00\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', - # 
'\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x
00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', - # ], - # }, - # 'ArrowStream' : { - # # Not working at all: - # # Error while opening a table: Invalid: Tried reading schema message, was null or length 0, Stack trace (when copying this message, always include the lines below): - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:88: DB::ArrowBlockInputFormat::prepareReader() @ 0x1ddff1c3 in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:26: DB::ArrowBlockInputFormat::ArrowBlockInputFormat(DB::ReadBuffer&, DB::Block const&, bool) @ 0x1ddfef63 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:2214: std::__1::__compressed_pair_elem::__compressed_pair_elem(std::__1::piecewise_construct_t, std::__1::tuple, std::__1::__tuple_indices<0ul, 1ul, 2ul>) @ 0x1de0470f in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:2299: std::__1::__compressed_pair, DB::ArrowBlockInputFormat>::__compressed_pair&, DB::ReadBuffer&, DB::Block const&, bool&&>(std::__1::piecewise_construct_t, std::__1::tuple&>, std::__1::tuple) @ 0x1de04375 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:3569: std::__1::__shared_ptr_emplace >::__shared_ptr_emplace(std::__1::allocator, DB::ReadBuffer&, DB::Block const&, bool&&) @ 0x1de03f97 in /usr/bin/clickhouse - # # /contrib/libcxx/include/memory:4400: std::__1::enable_if::value), std::__1::shared_ptr >::type std::__1::make_shared(DB::ReadBuffer&, DB::Block const&, bool&&) @ 0x1de03d4c in /usr/bin/clickhouse - # # /src/Processors/Formats/Impl/ArrowBlockInputFormat.cpp:117: DB::registerInputFormatProcessorArrow(DB::FormatFactory&)::$_1::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) const @ 0x1de0273f in /usr/bin/clickhouse - # # /contrib/libcxx/include/type_traits:3519: decltype(std::__1::forward(fp)(std::__1::forward(fp0), std::__1::forward(fp0), std::__1::forward(fp0), std::__1::forward(fp0))) std::__1::__invoke(DB::registerInputFormatProcessorArrow(DB::FormatFactory&)::$_1&, DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) @ 0x1de026da in /usr/bin/clickhouse - # # /contrib/libcxx/include/__functional_base:317: std::__1::shared_ptr std::__1::__invoke_void_return_wrapper >::__call(DB::registerInputFormatProcessorArrow(DB::FormatFactory&)::$_1&, DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) @ 0x1de025ed in /usr/bin/clickhouse - # # /contrib/libcxx/include/functional:1540: std::__1::__function::__alloc_func, std::__1::shared_ptr 
(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&)>::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) @ 0x1de0254a in /usr/bin/clickhouse - # # /contrib/libcxx/include/functional:1714: std::__1::__function::__func, std::__1::shared_ptr (DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&)>::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) @ 0x1de0165c in /usr/bin/clickhouse - # # /contrib/libcxx/include/functional:1867: std::__1::__function::__value_func (DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&)>::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) const @ 0x1dd14dbd in /usr/bin/clickhouse - # # /contrib/libcxx/include/functional:2473: std::__1::function (DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&)>::operator()(DB::ReadBuffer&, DB::Block const&, DB::RowInputFormatParams const&, DB::FormatSettings const&) const @ 0x1dd07035 in /usr/bin/clickhouse - # # /src/Formats/FormatFactory.cpp:258: DB::FormatFactory::getInputFormat(std::__1::basic_string, std::__1::allocator > const&, DB::ReadBuffer&, DB::Block const&, DB::Context const&, unsigned long, std::__1::function) const @ 0x1dd04007 in /usr/bin/clickhouse - # # /src/Storages/Kafka/KafkaBlockInputStream.cpp:76: DB::KafkaBlockInputStream::readImpl() @ 0x1d8f6559 in /usr/bin/clickhouse - # # /src/DataStreams/IBlockInputStream.cpp:60: DB::IBlockInputStream::read() @ 0x1c9c92fd in /usr/bin/clickhouse - # # /src/DataStreams/copyData.cpp:26: void DB::copyDataImpl*)::$_0&, void (&)(DB::Block const&)>(DB::IBlockInputStream&, DB::IBlockOutputStream&, DB::copyData(DB::IBlockInputStream&, DB::IBlockOutputStream&, std::__1::atomic*)::$_0&, void (&)(DB::Block const&)) @ 0x1c9ea01c in /usr/bin/clickhouse - # 'data_sample' : [ - # 
'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00', - # 
'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\xd8\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x06\x00\x00\x00\x08\x00\x00\x00\x0a\x00\x00\x00\x0c\x00\x00\x00\x0e\x00\x00\x00\x10\x00\x00\x00\x12\x00\x00\x00\x14\x00\x00\x00\x16\x00\x00\x00\x
18\x00\x00\x00\x1a\x00\x00\x00\x1c\x00\x00\x00\x1e\x00\x00\x00\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\xff\xff\xff\xff\x00\x00\x00\x00', - # '\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00', - # ], - # }, + }, + 
'Arrow' : { + 'data_sample' : [ + b'\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x0
0\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', + b'\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\xd8\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0
0\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x06\x00\x00\x00\x08\x00\x00\x00\x0a\x00\x00\x00\x0c\x00\x00\x00\x0e\x00\x00\x00\x10\x00\x00\x00\x12\x00\x00\x00\x14\x00\x00\x00\x16\x00\x00\x00\x18\x00\x00\x00\x1a\x00\x00\x00\x1c\x00\x00\x00\x1e\x00\x00\x00\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', + 
b'\x41\x52\x52\x4f\x57\x31\x00\x00\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x10\x00\x00\x00\x0c\x00\x14\x00\x06\x00\x08\x00\x0c\x00\x10\x00\x0c\x00\x00\x00\x00\x00\x03\x00\x3c\x00\x00\x00\x28\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x58\x01\x00\x00\x00\x00\x00\x00\x60\x01\x00\x00\x00\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\
x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\x78\x01\x00\x00\x41\x52\x52\x4f\x57\x31', + ], + }, + 'ArrowStream' : { + 'data_sample' : [ + b'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0
1\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00', + b'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x48\x01\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x78\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x98\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\xd8\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00\x00\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x38\x01\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x0
0\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x06\x00\x00\x00\x08\x00\x00\x00\x0a\x00\x00\x00\x0c\x00\x00\x00\x0e\x00\x00\x00\x10\x00\x00\x00\x12\x00\x00\x00\x14\x00\x00\x00\x16\x00\x00\x00\x18\x00\x00\x00\x1a\x00\x00\x00\x1c\x00\x00\x00\x1e\x00\x00\x00\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x41\x4d\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x00\xff\xff\xff\xff\x00\x00\x00\x00', + b'\xff\xff\xff\xff\x48\x01\x00\x00\x10\x00\x00\x00\x00\x00\x0a\x00\x0c\x00\x06\x00\x05\x00\x08\x00\x0a\x00\x00\x00\x00\x01\x03\x00\x0c\x00\x00\x00\x08\x00\x08\x00\x00\x00\x04\x00\x08\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\xe4\x00\x00\x00\x9c\x00\x00\x00\x6c\x00\x00\x00\x34\x00\x00\x00\x04\x00\x00\x00\x40\xff\xff\xff\x00\x00\x00\x02\x18\x00\x00\x00\x0c\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x72\xff\xff\xff\x08\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x33\x00\x00\x00\x00\x6c\xff\xff\xff\x00\x00\x00\x03\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x06\x00\x06\x00\x00\x00\x00\x00\x01\x00\x04\x00\x00\x00\x76\x61\x6c\x32\x00\x00\x00\x00\xa0\xff\xff\xff\x00\x00\x00\x05\x18\x00\x00\x00\x10\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x04\x00\x04\x00\x00\x00\x04\x00\x00\x00\x76\x61\x6c\x31\x00\x00\x00\x00\xcc\xff\xff\xff\x00\x00\x00\x02\x20\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x06\x00\x08\x00\x04\x00\x06\x00\x00\x00\x10\x00\x00\x00\x07\x00\x00\x00\x62\x6c\x6f\x63\x6b\x4e\x6f\x00\x10\x00\x14\x00\x08\x00\x00\x00\x07\x00\x0c\x00\x00\x00\x10\x00\x10\x00\x00\x00\x00\x00\x00\x02\x24\x00\x00\x00\x14\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x08\x00\x0c\x00\x08\x00\x07\x00\x08\x00\x00\x00\x00\x00\x00\x01\x40\x00\x00\x00\x02\x00\x00\x00\x69\x64\x00\x00\xff\xff\xff\xff\x58\x01\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x16\x00\x06\x00\x05\x00\x08\x00\x0c\x00\x0c\x00\x00\x00\x00\x03\x03\x00\x18\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x18\x00\x0c\x00\x04\x00\x08\x00\x0a\x00\x00\x00\xcc\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x18\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x20\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x28\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x0
0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x41\x4d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00', + ], + }, } for format_name, format_opts in list(all_formats.items()): @@ -2533,6 +2506,382 @@ def test_kafka_csv_with_thread_per_consumer(kafka_cluster): kafka_check_result(result, True) +def random_string(size=8): + return ''.join(random.choices(string.ascii_uppercase + string.digits, k=size)) + +@pytest.mark.timeout(180) +def test_kafka_engine_put_errors_to_stream(kafka_cluster): + instance.query(''' + DROP TABLE IF EXISTS test.kafka; + DROP TABLE IF EXISTS test.kafka_data; + DROP TABLE IF EXISTS test.kafka_errors; + CREATE TABLE test.kafka (i Int64, s String) + ENGINE = Kafka + SETTINGS kafka_broker_list = 'kafka1:19092', + kafka_topic_list = 'json', + kafka_group_name = 'json', + kafka_format = 'JSONEachRow', + kafka_max_block_size = 128, + kafka_handle_error_mode = 'stream'; + CREATE MATERIALIZED VIEW test.kafka_data (i Int64, s String) + ENGINE = MergeTree + ORDER BY i + AS SELECT i, s FROM test.kafka WHERE length(_error) == 0; + CREATE MATERIALIZED VIEW test.kafka_errors (topic String, partition Int64, offset Int64, raw String, error String) + ENGINE = MergeTree + ORDER BY (topic, offset) + AS SELECT + _topic AS topic, + _partition AS partition, + _offset AS offset, + _raw_message AS raw, + _error AS error + FROM test.kafka WHERE length(_error) > 0; + ''') + + messages = [] + for i in range(128): + if i % 2 == 0: + messages.append(json.dumps({'i': i, 's': random_string(8)})) + else: + # Unexpected json content for table test.kafka. 
+ messages.append(json.dumps({'i': 'n_' + random_string(4), 's': random_string(8)})) + + kafka_produce('json', messages) + + while True: + total_rows = instance.query('SELECT count() FROM test.kafka_data', ignore_error=True) + if total_rows == '64\n': + break + + while True: + total_error_rows = instance.query('SELECT count() FROM test.kafka_errors', ignore_error=True) + if total_error_rows == '64\n': + break + + instance.query(''' + DROP TABLE test.kafka; + DROP TABLE test.kafka_data; + DROP TABLE test.kafka_errors; + ''') + +def gen_normal_json(): + return '{"i":1000, "s":"ABC123abc"}' + +def gen_malformed_json(): + return '{"i":"n1000", "s":"1000"}' + +def gen_message_with_jsons(jsons = 10, malformed = 0): + s = io.StringIO() + for i in range (jsons): + if malformed and random.randint(0,1) == 1: + s.write(gen_malformed_json()) + else: + s.write(gen_normal_json()) + s.write(' ') + return s.getvalue() + + +def test_kafka_engine_put_errors_to_stream_with_random_malformed_json(kafka_cluster): + instance.query(''' + DROP TABLE IF EXISTS test.kafka; + DROP TABLE IF EXISTS test.kafka_data; + DROP TABLE IF EXISTS test.kafka_errors; + CREATE TABLE test.kafka (i Int64, s String) + ENGINE = Kafka + SETTINGS kafka_broker_list = 'kafka1:19092', + kafka_topic_list = 'json', + kafka_group_name = 'json', + kafka_format = 'JSONEachRow', + kafka_max_block_size = 100, + kafka_poll_max_batch_size = 1, + kafka_handle_error_mode = 'stream'; + CREATE MATERIALIZED VIEW test.kafka_data (i Int64, s String) + ENGINE = MergeTree + ORDER BY i + AS SELECT i, s FROM test.kafka WHERE length(_error) == 0; + CREATE MATERIALIZED VIEW test.kafka_errors (topic String, partition Int64, offset Int64, raw String, error String) + ENGINE = MergeTree + ORDER BY (topic, offset) + AS SELECT + _topic AS topic, + _partition AS partition, + _offset AS offset, + _raw_message AS raw, + _error AS error + FROM test.kafka WHERE length(_error) > 0; + ''') + + messages = [] + for i in range(128): + if i % 2 == 0: + messages.append(gen_message_with_jsons(10, 1)) + else: + messages.append(gen_message_with_jsons(10, 0)) + + kafka_produce('json', messages) + + while True: + total_rows = instance.query('SELECT count() FROM test.kafka_data', ignore_error=True) + if total_rows == '640\n': + break + + while True: + total_error_rows = instance.query('SELECT count() FROM test.kafka_errors', ignore_error=True) + if total_error_rows == '64\n': + break + + instance.query(''' + DROP TABLE test.kafka; + DROP TABLE test.kafka_data; + DROP TABLE test.kafka_errors; + ''') + +@pytest.mark.timeout(120) +def test_kafka_formats_with_broken_message(kafka_cluster): + # data was dumped from clickhouse itself in a following manner + # clickhouse-client --format=Native --query='SELECT toInt64(number) as id, toUInt16( intDiv( id, 65536 ) ) as blockNo, reinterpretAsString(19777) as val1, toFloat32(0.5) as val2, toUInt8(1) as val3 from numbers(100) ORDER BY id' | xxd -ps | tr -d '\n' | sed 's/\(..\)/\\x\1/g' + + all_formats = { + ## Text formats ## + # dumped with clickhouse-client ... 
| perl -pe 's/\n/\\n/; s/\t/\\t/g;' + 'JSONEachRow': { + 'data_sample': [ + '{"id":"0","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n', + '{"id":"1","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"2","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"3","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"4","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"5","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"6","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"7","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"8","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"9","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"10","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"11","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"12","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"13","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"14","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n{"id":"15","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n', + '{"id":"0","blockNo":0,"val1":"AM","val2":0.5,"val3":1}\n', + # broken message + '{"id":"0","blockNo":"BAD","val1":"AM","val2":0.5,"val3":1}', + ], + 'expected':'''{"raw_message":"{\\"id\\":\\"0\\",\\"blockNo\\":\\"BAD\\",\\"val1\\":\\"AM\\",\\"val2\\":0.5,\\"val3\\":1}","error":"Cannot parse input: expected '\\"' before: 'BAD\\",\\"val1\\":\\"AM\\",\\"val2\\":0.5,\\"val3\\":1}': (while reading the value of key blockNo)"}''', + 'supports_empty_value': True, + 'printable': True, + }, + # JSONAsString doesn't fit to that test, and tested separately + 'JSONCompactEachRow': { + 'data_sample': [ + '["0", 0, "AM", 0.5, 1]\n', + '["1", 0, "AM", 0.5, 1]\n["2", 0, "AM", 0.5, 1]\n["3", 0, "AM", 0.5, 1]\n["4", 0, "AM", 0.5, 1]\n["5", 0, "AM", 0.5, 1]\n["6", 0, "AM", 0.5, 1]\n["7", 0, "AM", 0.5, 1]\n["8", 0, "AM", 0.5, 1]\n["9", 0, "AM", 0.5, 1]\n["10", 0, "AM", 0.5, 1]\n["11", 0, "AM", 0.5, 1]\n["12", 0, "AM", 0.5, 1]\n["13", 0, "AM", 0.5, 1]\n["14", 0, "AM", 0.5, 1]\n["15", 0, "AM", 0.5, 1]\n', + '["0", 0, "AM", 0.5, 1]\n', + # broken message + '["0", "BAD", "AM", 0.5, 1]', + ], + 'expected':'''{"raw_message":"[\\"0\\", \\"BAD\\", \\"AM\\", 0.5, 1]","error":"Cannot parse input: expected '\\"' before: 'BAD\\", \\"AM\\", 0.5, 1]': (while reading the value of key blockNo)"}''', + 'supports_empty_value': True, + 'printable':True, + }, + 'JSONCompactEachRowWithNamesAndTypes': { + 'data_sample': [ + '["id", "blockNo", "val1", "val2", "val3"]\n["Int64", "UInt16", "String", "Float32", "UInt8"]\n["0", 0, "AM", 0.5, 1]\n', + '["id", "blockNo", "val1", "val2", "val3"]\n["Int64", "UInt16", "String", "Float32", "UInt8"]\n["1", 0, "AM", 0.5, 1]\n["2", 0, "AM", 0.5, 1]\n["3", 0, "AM", 0.5, 1]\n["4", 0, "AM", 0.5, 1]\n["5", 0, "AM", 0.5, 1]\n["6", 0, "AM", 0.5, 1]\n["7", 0, "AM", 0.5, 1]\n["8", 0, "AM", 0.5, 1]\n["9", 0, "AM", 0.5, 1]\n["10", 0, "AM", 0.5, 1]\n["11", 0, "AM", 0.5, 1]\n["12", 0, "AM", 0.5, 1]\n["13", 0, "AM", 0.5, 1]\n["14", 0, "AM", 0.5, 1]\n["15", 0, "AM", 0.5, 1]\n', + '["id", "blockNo", "val1", "val2", "val3"]\n["Int64", "UInt16", "String", "Float32", "UInt8"]\n["0", 0, "AM", 0.5, 1]\n', + # broken message + '["0", "BAD", "AM", 0.5, 1]', + ], + 'expected':'''{"raw_message":"[\\"0\\", \\"BAD\\", \\"AM\\", 0.5, 1]","error":"Cannot parse JSON string: expected opening quote"}''', + 'printable':True, + }, + 'TSKV': { + 'data_sample': [ + 'id=0\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\n', + 
'id=1\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=2\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=3\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=4\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=5\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=6\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=7\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=8\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=9\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=10\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=11\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=12\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=13\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=14\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\nid=15\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\n', + 'id=0\tblockNo=0\tval1=AM\tval2=0.5\tval3=1\n', + # broken message + 'id=0\tblockNo=BAD\tval1=AM\tval2=0.5\tval3=1\n', + ], + 'expected':'{"raw_message":"id=0\\tblockNo=BAD\\tval1=AM\\tval2=0.5\\tval3=1\\n","error":"Found garbage after field in TSKV format: blockNo: (at row 1)\\n"}', + 'printable':True, + }, + 'CSV': { + 'data_sample': [ + '0,0,"AM",0.5,1\n', + '1,0,"AM",0.5,1\n2,0,"AM",0.5,1\n3,0,"AM",0.5,1\n4,0,"AM",0.5,1\n5,0,"AM",0.5,1\n6,0,"AM",0.5,1\n7,0,"AM",0.5,1\n8,0,"AM",0.5,1\n9,0,"AM",0.5,1\n10,0,"AM",0.5,1\n11,0,"AM",0.5,1\n12,0,"AM",0.5,1\n13,0,"AM",0.5,1\n14,0,"AM",0.5,1\n15,0,"AM",0.5,1\n', + '0,0,"AM",0.5,1\n', + # broken message + '0,"BAD","AM",0.5,1\n', + ], + 'expected':'''{"raw_message":"0,\\"BAD\\",\\"AM\\",0.5,1\\n","error":"Cannot parse input: expected '\\"' before: 'BAD\\",\\"AM\\",0.5,1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'printable':True, + 'supports_empty_value': True, + }, + 'TSV': { + 'data_sample': [ + '0\t0\tAM\t0.5\t1\n', + '1\t0\tAM\t0.5\t1\n2\t0\tAM\t0.5\t1\n3\t0\tAM\t0.5\t1\n4\t0\tAM\t0.5\t1\n5\t0\tAM\t0.5\t1\n6\t0\tAM\t0.5\t1\n7\t0\tAM\t0.5\t1\n8\t0\tAM\t0.5\t1\n9\t0\tAM\t0.5\t1\n10\t0\tAM\t0.5\t1\n11\t0\tAM\t0.5\t1\n12\t0\tAM\t0.5\t1\n13\t0\tAM\t0.5\t1\n14\t0\tAM\t0.5\t1\n15\t0\tAM\t0.5\t1\n', + '0\t0\tAM\t0.5\t1\n', + # broken message + '0\tBAD\tAM\t0.5\t1\n', + ], + 'expected':'''{"raw_message":"0\\tBAD\\tAM\\t0.5\\t1\\n","error":"Cannot parse input: expected '\\\\t' before: 'BAD\\\\tAM\\\\t0.5\\\\t1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'supports_empty_value': True, + 'printable':True, + }, + 'CSVWithNames': { + 'data_sample': [ + '"id","blockNo","val1","val2","val3"\n0,0,"AM",0.5,1\n', + '"id","blockNo","val1","val2","val3"\n1,0,"AM",0.5,1\n2,0,"AM",0.5,1\n3,0,"AM",0.5,1\n4,0,"AM",0.5,1\n5,0,"AM",0.5,1\n6,0,"AM",0.5,1\n7,0,"AM",0.5,1\n8,0,"AM",0.5,1\n9,0,"AM",0.5,1\n10,0,"AM",0.5,1\n11,0,"AM",0.5,1\n12,0,"AM",0.5,1\n13,0,"AM",0.5,1\n14,0,"AM",0.5,1\n15,0,"AM",0.5,1\n', + '"id","blockNo","val1","val2","val3"\n0,0,"AM",0.5,1\n', + # broken message + '"id","blockNo","val1","val2","val3"\n0,"BAD","AM",0.5,1\n', + ], + 'expected':'''{"raw_message":"\\"id\\",\\"blockNo\\",\\"val1\\",\\"val2\\",\\"val3\\"\\n0,\\"BAD\\",\\"AM\\",0.5,1\\n","error":"Cannot parse input: expected '\\"' before: 'BAD\\",\\"AM\\",0.5,1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'printable':True, + }, + 'Values': { + 'data_sample': [ + "(0,0,'AM',0.5,1)", + "(1,0,'AM',0.5,1),(2,0,'AM',0.5,1),(3,0,'AM',0.5,1),(4,0,'AM',0.5,1),(5,0,'AM',0.5,1),(6,0,'AM',0.5,1),(7,0,'AM',0.5,1),(8,0,'AM',0.5,1),(9,0,'AM',0.5,1),(10,0,'AM',0.5,1),(11,0,'AM',0.5,1),(12,0,'AM',0.5,1),(13,0,'AM',0.5,1),(14,0,'AM',0.5,1),(15,0,'AM',0.5,1)", + "(0,0,'AM',0.5,1)", + 
# broken message + "(0,'BAD','AM',0.5,1)", + ], + 'expected':r'''{"raw_message":"(0,'BAD','AM',0.5,1)","error":"Cannot parse string 'BAD' as UInt16: syntax error at begin of string. Note: there are toUInt16OrZero and toUInt16OrNull functions, which returns zero\/NULL instead of throwing exception.: while executing 'FUNCTION CAST(assumeNotNull(_dummy_0) :: 2, 'UInt16' :: 1) -> CAST(assumeNotNull(_dummy_0), 'UInt16') UInt16 : 4'"}''', + 'supports_empty_value': True, + 'printable':True, + }, + 'TSVWithNames': { + 'data_sample': [ + 'id\tblockNo\tval1\tval2\tval3\n0\t0\tAM\t0.5\t1\n', + 'id\tblockNo\tval1\tval2\tval3\n1\t0\tAM\t0.5\t1\n2\t0\tAM\t0.5\t1\n3\t0\tAM\t0.5\t1\n4\t0\tAM\t0.5\t1\n5\t0\tAM\t0.5\t1\n6\t0\tAM\t0.5\t1\n7\t0\tAM\t0.5\t1\n8\t0\tAM\t0.5\t1\n9\t0\tAM\t0.5\t1\n10\t0\tAM\t0.5\t1\n11\t0\tAM\t0.5\t1\n12\t0\tAM\t0.5\t1\n13\t0\tAM\t0.5\t1\n14\t0\tAM\t0.5\t1\n15\t0\tAM\t0.5\t1\n', + 'id\tblockNo\tval1\tval2\tval3\n0\t0\tAM\t0.5\t1\n', + # broken message + 'id\tblockNo\tval1\tval2\tval3\n0\tBAD\tAM\t0.5\t1\n', + ], + 'expected':'''{"raw_message":"id\\tblockNo\\tval1\\tval2\\tval3\\n0\\tBAD\\tAM\\t0.5\\t1\\n","error":"Cannot parse input: expected '\\\\t' before: 'BAD\\\\tAM\\\\t0.5\\\\t1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'supports_empty_value': True, + 'printable':True, + }, + 'TSVWithNamesAndTypes': { + 'data_sample': [ + 'id\tblockNo\tval1\tval2\tval3\nInt64\tUInt16\tString\tFloat32\tUInt8\n0\t0\tAM\t0.5\t1\n', + 'id\tblockNo\tval1\tval2\tval3\nInt64\tUInt16\tString\tFloat32\tUInt8\n1\t0\tAM\t0.5\t1\n2\t0\tAM\t0.5\t1\n3\t0\tAM\t0.5\t1\n4\t0\tAM\t0.5\t1\n5\t0\tAM\t0.5\t1\n6\t0\tAM\t0.5\t1\n7\t0\tAM\t0.5\t1\n8\t0\tAM\t0.5\t1\n9\t0\tAM\t0.5\t1\n10\t0\tAM\t0.5\t1\n11\t0\tAM\t0.5\t1\n12\t0\tAM\t0.5\t1\n13\t0\tAM\t0.5\t1\n14\t0\tAM\t0.5\t1\n15\t0\tAM\t0.5\t1\n', + 'id\tblockNo\tval1\tval2\tval3\nInt64\tUInt16\tString\tFloat32\tUInt8\n0\t0\tAM\t0.5\t1\n', + # broken message + 'id\tblockNo\tval1\tval2\tval3\nInt64\tUInt16\tString\tFloat32\tUInt8\n0\tBAD\tAM\t0.5\t1\n', + ], + 'expected':'''{"raw_message":"id\\tblockNo\\tval1\\tval2\\tval3\\nInt64\\tUInt16\\tString\\tFloat32\\tUInt8\\n0\\tBAD\\tAM\\t0.5\\t1\\n","error":"Cannot parse input: expected '\\\\t' before: 'BAD\\\\tAM\\\\t0.5\\\\t1\\\\n': Could not print diagnostic info because two last rows aren't in buffer (rare case)\\n"}''', + 'printable':True, + }, + 'Native': { + 'data_sample': [ + b'\x05\x01\x02\x69\x64\x05\x49\x6e\x74\x36\x34\x00\x00\x00\x00\x00\x00\x00\x00\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x06\x55\x49\x6e\x74\x31\x36\x00\x00\x04\x76\x61\x6c\x31\x06\x53\x74\x72\x69\x6e\x67\x02\x41\x4d\x04\x76\x61\x6c\x32\x07\x46\x6c\x6f\x61\x74\x33\x32\x00\x00\x00\x3f\x04\x76\x61\x6c\x33\x05\x55\x49\x6e\x74\x38\x01', + 
b'\x05\x0f\x02\x69\x64\x05\x49\x6e\x74\x36\x34\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x00\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x0e\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x06\x55\x49\x6e\x74\x31\x36\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x76\x61\x6c\x31\x06\x53\x74\x72\x69\x6e\x67\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x02\x41\x4d\x04\x76\x61\x6c\x32\x07\x46\x6c\x6f\x61\x74\x33\x32\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x00\x00\x00\x3f\x04\x76\x61\x6c\x33\x05\x55\x49\x6e\x74\x38\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01\x01', + b'\x05\x01\x02\x69\x64\x05\x49\x6e\x74\x36\x34\x00\x00\x00\x00\x00\x00\x00\x00\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x06\x55\x49\x6e\x74\x31\x36\x00\x00\x04\x76\x61\x6c\x31\x06\x53\x74\x72\x69\x6e\x67\x02\x41\x4d\x04\x76\x61\x6c\x32\x07\x46\x6c\x6f\x61\x74\x33\x32\x00\x00\x00\x3f\x04\x76\x61\x6c\x33\x05\x55\x49\x6e\x74\x38\x01', + # broken message + b'\x05\x01\x02\x69\x64\x05\x49\x6e\x74\x36\x34\x00\x00\x00\x00\x00\x00\x00\x00\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x06\x53\x74\x72\x69\x6e\x67\x03\x42\x41\x44\x04\x76\x61\x6c\x31\x06\x53\x74\x72\x69\x6e\x67\x02\x41\x4d\x04\x76\x61\x6c\x32\x07\x46\x6c\x6f\x61\x74\x33\x32\x00\x00\x00\x3f\x04\x76\x61\x6c\x33\x05\x55\x49\x6e\x74\x38\x01', + ], + 'expected':'''{"raw_message":"050102696405496E743634000000000000000007626C6F636B4E6F06537472696E67034241440476616C3106537472696E6702414D0476616C3207466C6F617433320000003F0476616C330555496E743801","error":"Cannot convert: String to UInt16"}''', + 'printable':False, + }, + 'RowBinary': { + 'data_sample': [ + b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + b'\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x05\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + 
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + # broken message + b'\x00\x00\x00\x00\x00\x00\x00\x00\x03\x42\x41\x44\x02\x41\x4d\x00\x00\x00\x3f\x01', + ], + 'expected':'{"raw_message":"00000000000000000342414402414D0000003F01","error":"Cannot read all data. Bytes read: 9. Bytes expected: 65.: (at row 1)\\n"}', + 'printable':False, + }, + 'RowBinaryWithNamesAndTypes': { + 'data_sample': [ + b'\x05\x02\x69\x64\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x04\x76\x61\x6c\x31\x04\x76\x61\x6c\x32\x04\x76\x61\x6c\x33\x05\x49\x6e\x74\x36\x34\x06\x55\x49\x6e\x74\x31\x36\x06\x53\x74\x72\x69\x6e\x67\x07\x46\x6c\x6f\x61\x74\x33\x32\x05\x55\x49\x6e\x74\x38\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + b'\x05\x02\x69\x64\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x04\x76\x61\x6c\x31\x04\x76\x61\x6c\x32\x04\x76\x61\x6c\x33\x05\x49\x6e\x74\x36\x34\x06\x55\x49\x6e\x74\x31\x36\x06\x53\x74\x72\x69\x6e\x67\x07\x46\x6c\x6f\x61\x74\x33\x32\x05\x55\x49\x6e\x74\x38\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x05\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0b\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + b'\x05\x02\x69\x64\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x04\x76\x61\x6c\x31\x04\x76\x61\x6c\x32\x04\x76\x61\x6c\x33\x05\x49\x6e\x74\x36\x34\x06\x55\x49\x6e\x74\x31\x36\x06\x53\x74\x72\x69\x6e\x67\x07\x46\x6c\x6f\x61\x74\x33\x32\x05\x55\x49\x6e\x74\x38\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x41\x4d\x00\x00\x00\x3f\x01', + # broken message + b'\x05\x02\x69\x64\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x04\x76\x61\x6c\x31\x04\x76\x61\x6c\x32\x04\x76\x61\x6c\x33\x05\x49\x6e\x74\x36\x34\x06\x53\x74\x72\x69\x6e\x67\x06\x53\x74\x72\x69\x6e\x67\x07\x46\x6c\x6f\x61\x74\x33\x32\x05\x55\x49\x6e\x74\x38\x00\x00\x00\x00\x00\x00\x00\x00\x03\x42\x41\x44\x02\x41\x4d\x00\x00\x00\x3f\x01', + ], + 'expected':'{"raw_message":"0502696407626C6F636B4E6F0476616C310476616C320476616C3305496E74363406537472696E6706537472696E6707466C6F617433320555496E743800000000000000000342414402414D0000003F01","error":"Cannot read all data. Bytes read: 9. 
Bytes expected: 65.: (at row 1)\\n"}', + 'printable':False, + }, + 'ORC': { + 'data_sample': [ + b'\x4f\x52\x43\x11\x00\x00\x0a\x06\x12\x04\x08\x01\x50\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x30\x00\x00\xe3\x12\xe7\x62\x65\x00\x01\x21\x3e\x0e\x46\x25\x0e\x2e\x46\x03\x21\x46\x03\x09\xa6\x00\x06\x00\x32\x00\x00\xe3\x92\xe4\x62\x65\x00\x01\x21\x01\x0e\x46\x25\x2e\x2e\x26\x47\x5f\x21\x20\x96\x60\x09\x60\x00\x00\x36\x00\x00\xe3\x92\xe1\x62\x65\x00\x01\x21\x61\x0e\x46\x23\x5e\x2e\x46\x03\x21\x66\x03\x3d\x53\x29\x10\x11\xc0\x00\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\x05\x00\x00\xff\x00\x03\x00\x00\x30\x07\x00\x00\x40\x00\x80\x05\x00\x00\x41\x4d\x07\x00\x00\x42\x00\x80\x03\x00\x00\x0a\x07\x00\x00\x42\x00\x80\x05\x00\x00\xff\x01\x88\x00\x00\x4d\xca\xc1\x0a\x80\x30\x0c\x03\xd0\x2e\x6b\xcb\x98\x17\xf1\x14\x50\xfc\xff\xcf\xb4\x66\x1e\x3c\x84\x47\x9a\xce\x1c\xb9\x1b\xb7\xf9\xda\x48\x09\x9e\xb2\xf3\x92\xce\x5b\x86\xf6\x56\x7f\x21\x41\x2f\x51\xa6\x7a\xd7\x1d\xe5\xea\xae\x3d\xca\xd5\x83\x71\x60\xd8\x17\xfc\x62\x0f\xa8\x00\x00\xe3\x4a\xe6\x62\xe1\x60\x0c\x60\xe0\xe2\xe3\x60\x14\x62\xe3\x60\x10\x60\x90\x60\x08\x60\x88\x60\xe5\x12\xe0\x60\x54\xe2\xe0\x62\x34\x10\x62\x34\x90\x60\x02\x8a\x70\x71\x09\x01\x45\xb8\xb8\x98\x1c\x7d\x85\x80\x58\x82\x05\x28\xc6\xcd\x25\xca\xc1\x68\xc4\x0b\x52\xc5\x6c\xa0\x67\x2a\x05\x22\xc0\x4a\x21\x86\x31\x09\x30\x81\xb5\xb2\x02\x00\x36\x01\x00\x25\x8c\xbd\x0a\xc2\x30\x14\x85\x73\x6f\x92\xf6\x92\x6a\x09\x01\x21\x64\x92\x4e\x75\x91\x58\x71\xc9\x64\x27\x5d\x2c\x1d\x5d\xfd\x59\xc4\x42\x37\x5f\xc0\x17\xe8\x23\x9b\xc6\xe1\x3b\x70\x0f\xdf\xb9\xc4\xf5\x17\x5d\x41\x5c\x4f\x60\x37\xeb\x53\x0d\x55\x4d\x0b\x23\x01\xb9\x90\x2e\xbf\x0f\xe3\xe3\xdd\x8d\x0e\x5f\x4f\x27\x3e\xb7\x61\x97\xb2\x49\xb9\xaf\x90\x20\x92\x27\x32\x2a\x6b\xf4\xf3\x0d\x1e\x82\x20\xe8\x59\x28\x09\x4c\x46\x4c\x33\xcb\x7a\x76\x95\x41\x47\x9f\x14\x78\x03\xde\x62\x6c\x54\x30\xb1\x51\x0a\xdb\x8b\x89\x58\x11\xbb\x22\xac\x08\x9a\xe5\x6c\x71\xbf\x3d\xb8\x39\x92\xfa\x7f\x86\x1a\xd3\x54\x1e\xa7\xee\xcc\x7e\x08\x9e\x01\x10\x01\x18\x80\x80\x10\x22\x02\x00\x0c\x28\x57\x30\x06\x82\xf4\x03\x03\x4f\x52\x43\x18', + 
b'\x4f\x52\x43\x11\x00\x00\x0a\x06\x12\x04\x08\x0f\x50\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x0f\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x30\x00\x00\xe3\x12\xe7\x62\x65\x00\x01\x21\x3e\x0e\x7e\x25\x0e\x2e\x46\x43\x21\x46\x4b\x09\xad\x00\x06\x00\x33\x00\x00\x0a\x17\x0a\x03\x00\x00\x00\x12\x10\x08\x0f\x22\x0a\x0a\x02\x41\x4d\x12\x02\x41\x4d\x18\x3c\x50\x00\x3a\x00\x00\xe3\x92\xe1\x62\x65\x00\x01\x21\x61\x0e\x7e\x23\x5e\x2e\x46\x03\x21\x66\x03\x3d\x53\x29\x66\x73\x3d\xd3\x00\x06\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x0f\x12\x06\x08\x02\x10\x02\x18\x1e\x50\x00\x05\x00\x00\x0c\x00\x2b\x00\x00\x31\x32\x33\x34\x35\x36\x37\x38\x39\x31\x30\x31\x31\x31\x32\x31\x33\x31\x34\x31\x35\x09\x00\x00\x06\x01\x03\x02\x09\x00\x00\xc0\x0e\x00\x00\x07\x00\x00\x42\x00\x80\x05\x00\x00\x41\x4d\x0a\x00\x00\xe3\xe2\x42\x01\x00\x09\x00\x00\xc0\x0e\x02\x00\x05\x00\x00\x0c\x01\x94\x00\x00\x2d\xca\xc1\x0e\x80\x30\x08\x03\xd0\xc1\x60\x2e\xf3\x62\x76\x6a\xe2\x0e\xfe\xff\x57\x5a\x3b\x0f\xe4\x51\xe8\x68\xbd\x5d\x05\xe7\xf8\x34\x40\x3a\x6e\x59\xb1\x64\xe0\x91\xa9\xbf\xb1\x97\xd2\x95\x9d\x1e\xca\x55\x3a\x6d\xb4\xd2\xdd\x0b\x74\x9a\x74\xf7\x12\x39\xbd\x97\x7f\x7c\x06\xbb\xa6\x8d\x97\x17\xb4\x00\x00\xe3\x4a\xe6\x62\xe1\xe0\x0f\x60\xe0\xe2\xe3\xe0\x17\x62\xe3\x60\x10\x60\x90\x60\x08\x60\x88\x60\xe5\x12\xe0\xe0\x57\xe2\xe0\x62\x34\x14\x62\xb4\x94\xd0\x02\x8a\xc8\x73\x09\x01\x45\xb8\xb8\x98\x1c\x7d\x85\x80\x58\xc2\x06\x28\x26\xc4\x25\xca\xc1\x6f\xc4\xcb\xc5\x68\x20\xc4\x6c\xa0\x67\x2a\xc5\x6c\xae\x67\x0a\x14\xe6\x87\x1a\xc6\x24\xc0\x24\x21\x07\x32\x0c\x00\x4a\x01\x00\xe3\x60\x16\x58\xc3\x24\xc5\xcd\xc1\x2c\x30\x89\x51\xc2\x4b\xc1\x57\x83\x5f\x49\x83\x83\x47\x88\x95\x91\x89\x99\x85\x55\x8a\x3d\x29\x27\x3f\x39\xdb\x2f\x5f\x8a\x29\x33\x45\x8a\xa5\x2c\x31\xc7\x10\x4c\x1a\x81\x49\x63\x25\x26\x0e\x46\x20\x66\x07\x63\x36\x0e\x3e\x0d\x26\x03\x10\x9f\xd1\x80\xdf\x8a\x85\x83\x3f\x80\xc1\x8a\x8f\x83\x5f\x88\x8d\x83\x41\x80\x41\x82\x21\x80\x21\x82\xd5\x4a\x80\x83\x5f\x89\x83\x8b\xd1\x50\x88\xd1\x52\x42\x0b\x28\x22\x6f\x25\x04\x14\xe1\xe2\x62\x72\xf4\x15\x02\x62\x09\x1b\xa0\x98\x90\x95\x28\x07\xbf\x11\x2f\x17\xa3\x81\x10\xb3\x81\x9e\xa9\x14\xb3\xb9\x9e\x29\x50\x98\x1f\x6a\x18\x93\x00\x93\x84\x1c\xc8\x30\x87\x09\x7e\x1e\x0c\x00\x08\xa8\x01\x10\x01\x18\x80\x80\x10\x22\x02\x00\x0c\x28\x5d\x30\x06\x82\xf4\x03\x03\x4f\x52\x43\x18', + 
b'\x4f\x52\x43\x11\x00\x00\x0a\x06\x12\x04\x08\x01\x50\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x30\x00\x00\xe3\x12\xe7\x62\x65\x00\x01\x21\x3e\x0e\x46\x25\x0e\x2e\x46\x03\x21\x46\x03\x09\xa6\x00\x06\x00\x32\x00\x00\xe3\x92\xe4\x62\x65\x00\x01\x21\x01\x0e\x46\x25\x2e\x2e\x26\x47\x5f\x21\x20\x96\x60\x09\x60\x00\x00\x36\x00\x00\xe3\x92\xe1\x62\x65\x00\x01\x21\x61\x0e\x46\x23\x5e\x2e\x46\x03\x21\x66\x03\x3d\x53\x29\x10\x11\xc0\x00\x00\x2b\x00\x00\x0a\x13\x0a\x03\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\x05\x00\x00\xff\x00\x03\x00\x00\x30\x07\x00\x00\x40\x00\x80\x05\x00\x00\x41\x4d\x07\x00\x00\x42\x00\x80\x03\x00\x00\x0a\x07\x00\x00\x42\x00\x80\x05\x00\x00\xff\x01\x88\x00\x00\x4d\xca\xc1\x0a\x80\x30\x0c\x03\xd0\x2e\x6b\xcb\x98\x17\xf1\x14\x50\xfc\xff\xcf\xb4\x66\x1e\x3c\x84\x47\x9a\xce\x1c\xb9\x1b\xb7\xf9\xda\x48\x09\x9e\xb2\xf3\x92\xce\x5b\x86\xf6\x56\x7f\x21\x41\x2f\x51\xa6\x7a\xd7\x1d\xe5\xea\xae\x3d\xca\xd5\x83\x71\x60\xd8\x17\xfc\x62\x0f\xa8\x00\x00\xe3\x4a\xe6\x62\xe1\x60\x0c\x60\xe0\xe2\xe3\x60\x14\x62\xe3\x60\x10\x60\x90\x60\x08\x60\x88\x60\xe5\x12\xe0\x60\x54\xe2\xe0\x62\x34\x10\x62\x34\x90\x60\x02\x8a\x70\x71\x09\x01\x45\xb8\xb8\x98\x1c\x7d\x85\x80\x58\x82\x05\x28\xc6\xcd\x25\xca\xc1\x68\xc4\x0b\x52\xc5\x6c\xa0\x67\x2a\x05\x22\xc0\x4a\x21\x86\x31\x09\x30\x81\xb5\xb2\x02\x00\x36\x01\x00\x25\x8c\xbd\x0a\xc2\x30\x14\x85\x73\x6f\x92\xf6\x92\x6a\x09\x01\x21\x64\x92\x4e\x75\x91\x58\x71\xc9\x64\x27\x5d\x2c\x1d\x5d\xfd\x59\xc4\x42\x37\x5f\xc0\x17\xe8\x23\x9b\xc6\xe1\x3b\x70\x0f\xdf\xb9\xc4\xf5\x17\x5d\x41\x5c\x4f\x60\x37\xeb\x53\x0d\x55\x4d\x0b\x23\x01\xb9\x90\x2e\xbf\x0f\xe3\xe3\xdd\x8d\x0e\x5f\x4f\x27\x3e\xb7\x61\x97\xb2\x49\xb9\xaf\x90\x20\x92\x27\x32\x2a\x6b\xf4\xf3\x0d\x1e\x82\x20\xe8\x59\x28\x09\x4c\x46\x4c\x33\xcb\x7a\x76\x95\x41\x47\x9f\x14\x78\x03\xde\x62\x6c\x54\x30\xb1\x51\x0a\xdb\x8b\x89\x58\x11\xbb\x22\xac\x08\x9a\xe5\x6c\x71\xbf\x3d\xb8\x39\x92\xfa\x7f\x86\x1a\xd3\x54\x1e\xa7\xee\xcc\x7e\x08\x9e\x01\x10\x01\x18\x80\x80\x10\x22\x02\x00\x0c\x28\x57\x30\x06\x82\xf4\x03\x03\x4f\x52\x43\x18', + # broken message + 
b'\x4f\x52\x43\x0a\x0b\x0a\x03\x00\x00\x00\x12\x04\x08\x01\x50\x00\x0a\x15\x0a\x05\x00\x00\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x0a\x12\x0a\x06\x00\x00\x00\x00\x00\x00\x12\x08\x08\x01\x42\x02\x08\x06\x50\x00\x0a\x12\x0a\x06\x00\x00\x00\x00\x00\x00\x12\x08\x08\x01\x42\x02\x08\x04\x50\x00\x0a\x29\x0a\x04\x00\x00\x00\x00\x12\x21\x08\x01\x1a\x1b\x09\x00\x00\x00\x00\x00\x00\xe0\x3f\x11\x00\x00\x00\x00\x00\x00\xe0\x3f\x19\x00\x00\x00\x00\x00\x00\xe0\x3f\x50\x00\x0a\x15\x0a\x05\x00\x00\x00\x00\x00\x12\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\xff\x80\xff\x80\xff\x00\xff\x80\xff\x03\x42\x41\x44\xff\x80\xff\x02\x41\x4d\xff\x80\x00\x00\x00\x3f\xff\x80\xff\x01\x0a\x06\x08\x06\x10\x00\x18\x0d\x0a\x06\x08\x06\x10\x01\x18\x17\x0a\x06\x08\x06\x10\x02\x18\x14\x0a\x06\x08\x06\x10\x03\x18\x14\x0a\x06\x08\x06\x10\x04\x18\x2b\x0a\x06\x08\x06\x10\x05\x18\x17\x0a\x06\x08\x00\x10\x00\x18\x02\x0a\x06\x08\x00\x10\x01\x18\x02\x0a\x06\x08\x01\x10\x01\x18\x02\x0a\x06\x08\x00\x10\x02\x18\x02\x0a\x06\x08\x02\x10\x02\x18\x02\x0a\x06\x08\x01\x10\x02\x18\x03\x0a\x06\x08\x00\x10\x03\x18\x02\x0a\x06\x08\x02\x10\x03\x18\x02\x0a\x06\x08\x01\x10\x03\x18\x02\x0a\x06\x08\x00\x10\x04\x18\x02\x0a\x06\x08\x01\x10\x04\x18\x04\x0a\x06\x08\x00\x10\x05\x18\x02\x0a\x06\x08\x01\x10\x05\x18\x02\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x12\x04\x08\x00\x10\x00\x1a\x03\x47\x4d\x54\x0a\x59\x0a\x04\x08\x01\x50\x00\x0a\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x0a\x08\x08\x01\x42\x02\x08\x06\x50\x00\x0a\x08\x08\x01\x42\x02\x08\x04\x50\x00\x0a\x21\x08\x01\x1a\x1b\x09\x00\x00\x00\x00\x00\x00\xe0\x3f\x11\x00\x00\x00\x00\x00\x00\xe0\x3f\x19\x00\x00\x00\x00\x00\x00\xe0\x3f\x50\x00\x0a\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\x08\x03\x10\xec\x02\x1a\x0c\x08\x03\x10\x8e\x01\x18\x1d\x20\xc1\x01\x28\x01\x22\x2e\x08\x0c\x12\x05\x01\x02\x03\x04\x05\x1a\x02\x69\x64\x1a\x07\x62\x6c\x6f\x63\x6b\x4e\x6f\x1a\x04\x76\x61\x6c\x31\x1a\x04\x76\x61\x6c\x32\x1a\x04\x76\x61\x6c\x33\x20\x00\x28\x00\x30\x00\x22\x08\x08\x04\x20\x00\x28\x00\x30\x00\x22\x08\x08\x08\x20\x00\x28\x00\x30\x00\x22\x08\x08\x08\x20\x00\x28\x00\x30\x00\x22\x08\x08\x05\x20\x00\x28\x00\x30\x00\x22\x08\x08\x01\x20\x00\x28\x00\x30\x00\x30\x01\x3a\x04\x08\x01\x50\x00\x3a\x0c\x08\x01\x12\x06\x08\x00\x10\x00\x18\x00\x50\x00\x3a\x08\x08\x01\x42\x02\x08\x06\x50\x00\x3a\x08\x08\x01\x42\x02\x08\x04\x50\x00\x3a\x21\x08\x01\x1a\x1b\x09\x00\x00\x00\x00\x00\x00\xe0\x3f\x11\x00\x00\x00\x00\x00\x00\xe0\x3f\x19\x00\x00\x00\x00\x00\x00\xe0\x3f\x50\x00\x3a\x0c\x08\x01\x12\x06\x08\x02\x10\x02\x18\x02\x50\x00\x40\x90\x4e\x48\x01\x08\xd5\x01\x10\x00\x18\x80\x80\x04\x22\x02\x00\x0b\x28\x5b\x30\x06\x82\xf4\x03\x03\x4f\x52\x43\x18', + ], + 
'expected':r'''{"raw_message":"4F52430A0B0A030000001204080150000A150A050000000000120C0801120608001000180050000A120A06000000000000120808014202080650000A120A06000000000000120808014202080450000A290A0400000000122108011A1B09000000000000E03F11000000000000E03F19000000000000E03F50000A150A050000000000120C080112060802100218025000FF80FF80FF00FF80FF03424144FF80FF02414DFF800000003FFF80FF010A0608061000180D0A060806100118170A060806100218140A060806100318140A0608061004182B0A060806100518170A060800100018020A060800100118020A060801100118020A060800100218020A060802100218020A060801100218030A060800100318020A060802100318020A060801100318020A060800100418020A060801100418040A060800100518020A060801100518021204080010001204080010001204080010001204080010001204080010001204080010001A03474D540A590A04080150000A0C0801120608001000180050000A0808014202080650000A0808014202080450000A2108011A1B09000000000000E03F11000000000000E03F19000000000000E03F50000A0C080112060802100218025000080310EC021A0C0803108E01181D20C1012801222E080C120501020304051A0269641A07626C6F636B4E6F1A0476616C311A0476616C321A0476616C33200028003000220808042000280030002208080820002800300022080808200028003000220808052000280030002208080120002800300030013A04080150003A0C0801120608001000180050003A0808014202080650003A0808014202080450003A2108011A1B09000000000000E03F11000000000000E03F19000000000000E03F50003A0C08011206080210021802500040904E480108D5011000188080042202000B285B300682F403034F524318","error":"Cannot parse string 'BAD' as UInt16: syntax error at begin of string. Note: there are toUInt16OrZero and toUInt16OrNull functions, which returns zero\/NULL instead of throwing exception."}''', + 'printable':False, + } + } + + topic_name_prefix = 'format_tests_4_stream_' + for format_name, format_opts in list(all_formats.items()): + print(('Set up {}'.format(format_name))) + topic_name = topic_name_prefix + '{}'.format(format_name) + data_sample = format_opts['data_sample'] + data_prefix = [] + raw_message = '_raw_message' + # prepend empty value when supported + if format_opts.get('supports_empty_value', False): + data_prefix = data_prefix + [''] + if format_opts.get('printable', False) == False: + raw_message = 'hex(_raw_message)' + kafka_produce(topic_name, data_prefix + data_sample) + instance.query(''' + DROP TABLE IF EXISTS test.kafka_{format_name}; + + CREATE TABLE test.kafka_{format_name} ( + id Int64, + blockNo UInt16, + val1 String, + val2 Float32, + val3 UInt8 + ) ENGINE = Kafka() + SETTINGS kafka_broker_list = 'kafka1:19092', + kafka_topic_list = '{topic_name}', + kafka_group_name = '{topic_name}', + kafka_format = '{format_name}', + kafka_handle_error_mode = 'stream', + kafka_flush_interval_ms = 1000 {extra_settings}; + + DROP TABLE IF EXISTS test.kafka_data_{format_name}_mv; + CREATE MATERIALIZED VIEW test.kafka_data_{format_name}_mv Engine=Log AS + SELECT *, _topic, _partition, _offset FROM test.kafka_{format_name} + WHERE length(_error) = 0; + + DROP TABLE IF EXISTS test.kafka_errors_{format_name}_mv; + CREATE MATERIALIZED VIEW test.kafka_errors_{format_name}_mv Engine=Log AS + SELECT {raw_message} as raw_message, _error as error, _topic as topic, _partition as partition, _offset as offset FROM test.kafka_{format_name} + WHERE length(_error) > 0; + '''.format(topic_name=topic_name, format_name=format_name, raw_message=raw_message, + extra_settings=format_opts.get('extra_settings') or '')) + + for format_name, format_opts in list(all_formats.items()): + print(('Checking {}'.format(format_name))) + topic_name = topic_name_prefix + '{}'.format(format_name) + # shift 
offsets by 1 if format supports empty value + offsets = [1, 2, 3] if format_opts.get('supports_empty_value', False) else [0, 1, 2] + result = instance.query('SELECT * FROM test.kafka_data_{format_name}_mv;'.format(format_name=format_name)) + expected = '''\ +0 0 AM 0.5 1 {topic_name} 0 {offset_0} +1 0 AM 0.5 1 {topic_name} 0 {offset_1} +2 0 AM 0.5 1 {topic_name} 0 {offset_1} +3 0 AM 0.5 1 {topic_name} 0 {offset_1} +4 0 AM 0.5 1 {topic_name} 0 {offset_1} +5 0 AM 0.5 1 {topic_name} 0 {offset_1} +6 0 AM 0.5 1 {topic_name} 0 {offset_1} +7 0 AM 0.5 1 {topic_name} 0 {offset_1} +8 0 AM 0.5 1 {topic_name} 0 {offset_1} +9 0 AM 0.5 1 {topic_name} 0 {offset_1} +10 0 AM 0.5 1 {topic_name} 0 {offset_1} +11 0 AM 0.5 1 {topic_name} 0 {offset_1} +12 0 AM 0.5 1 {topic_name} 0 {offset_1} +13 0 AM 0.5 1 {topic_name} 0 {offset_1} +14 0 AM 0.5 1 {topic_name} 0 {offset_1} +15 0 AM 0.5 1 {topic_name} 0 {offset_1} +0 0 AM 0.5 1 {topic_name} 0 {offset_2} +'''.format(topic_name=topic_name, offset_0=offsets[0], offset_1=offsets[1], offset_2=offsets[2]) + print(('Checking result\n {result} \n expected \n {expected}\n'.format(result=str(result), expected=str(expected)))) + assert TSV(result) == TSV(expected), 'Proper result for format: {}'.format(format_name) + errors_result = instance.query('SELECT raw_message, error FROM test.kafka_errors_{format_name}_mv format JSONEachRow'.format(format_name=format_name)) + errors_expected = format_opts['expected'] + print(errors_result.strip()) + print(errors_expected.strip()) + assert errors_result.strip() == errors_expected.strip(), 'Proper errors for format: {}'.format(format_name) if __name__ == '__main__': cluster.start() diff --git a/tests/integration/test_storage_mysql/configs/users.xml b/tests/integration/test_storage_mysql/configs/users.xml new file mode 100644 index 00000000000..27c4d46984e --- /dev/null +++ b/tests/integration/test_storage_mysql/configs/users.xml @@ -0,0 +1,18 @@ + + + + + 2 + + + + + + + + ::/0 + + default + + + diff --git a/tests/integration/test_storage_mysql/test.py b/tests/integration/test_storage_mysql/test.py index 7b23e20e200..9c3abd799af 100644 --- a/tests/integration/test_storage_mysql/test.py +++ b/tests/integration/test_storage_mysql/test.py @@ -8,6 +8,9 @@ from helpers.cluster import ClickHouseCluster cluster = ClickHouseCluster(__file__) node1 = cluster.add_instance('node1', main_configs=['configs/remote_servers.xml'], with_mysql=True) +node2 = cluster.add_instance('node2', main_configs=['configs/remote_servers.xml'], with_mysql_cluster=True) +node3 = cluster.add_instance('node3', main_configs=['configs/remote_servers.xml'], user_configs=['configs/users.xml'], with_mysql=True) + create_table_sql_template = """ CREATE TABLE `clickhouse`.`{}` ( `id` int(11) NOT NULL, @@ -18,15 +21,30 @@ create_table_sql_template = """ PRIMARY KEY (`id`)) ENGINE=InnoDB; """ +def get_mysql_conn(port=3308): + conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=port) + return conn + + +def create_mysql_db(conn, name): + with conn.cursor() as cursor: + cursor.execute( + "CREATE DATABASE {} DEFAULT CHARACTER SET 'utf8'".format(name)) + + +def create_mysql_table(conn, tableName): + with conn.cursor() as cursor: + cursor.execute(create_table_sql_template.format(tableName)) + @pytest.fixture(scope="module") def started_cluster(): try: cluster.start() - conn = get_mysql_conn() ## create mysql db and table - create_mysql_db(conn, 'clickhouse') + conn1 = get_mysql_conn(port=3308) + create_mysql_db(conn1, 'clickhouse') yield cluster 
finally: @@ -52,6 +70,7 @@ CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32) ENGINE = MySQL assert node1.query(query.format(t=table_name)) == '250\n' conn.close() + def test_insert_select(started_cluster): table_name = 'test_insert_select' conn = get_mysql_conn() @@ -148,6 +167,7 @@ def test_table_function(started_cluster): assert node1.query("SELECT sum(`money`) FROM {}".format(table_function)).rstrip() == '60000' conn.close() + def test_binary_type(started_cluster): conn = get_mysql_conn() with conn.cursor() as cursor: @@ -156,6 +176,7 @@ def test_binary_type(started_cluster): node1.query("INSERT INTO {} VALUES (42, 'clickhouse')".format('TABLE FUNCTION ' + table_function)) assert node1.query("SELECT * FROM {}".format(table_function)) == '42\tclickhouse\\0\\0\\0\\0\\0\\0\n' + def test_enum_type(started_cluster): table_name = 'test_enum_type' conn = get_mysql_conn() @@ -168,20 +189,95 @@ CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32, source Enum8(' conn.close() -def get_mysql_conn(): - conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=3308) - return conn +def test_mysql_distributed(started_cluster): + table_name = 'test_replicas' + + conn1 = get_mysql_conn(port=3348) + conn2 = get_mysql_conn(port=3388) + conn3 = get_mysql_conn(port=3368) + conn4 = get_mysql_conn(port=3308) + + create_mysql_db(conn1, 'clickhouse') + create_mysql_db(conn2, 'clickhouse') + create_mysql_db(conn3, 'clickhouse') + + create_mysql_table(conn1, table_name) + create_mysql_table(conn2, table_name) + create_mysql_table(conn3, table_name) + create_mysql_table(conn4, table_name) + + # Storage with 3 replicas + node2.query(''' + CREATE TABLE test_replicas + (id UInt32, name String, age UInt32, money UInt32) + ENGINE = MySQL(`mysql{2|3|4}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse'); ''') + + # Fill remote tables with different data to be able to check + nodes = [node1, node2, node2, node2] + for i in range(1, 5): + nodes[i-1].query(''' + CREATE TABLE test_replica{} + (id UInt32, name String, age UInt32, money UInt32) + ENGINE = MySQL(`mysql{}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse');'''.format(i, i)) + nodes[i-1].query("INSERT INTO test_replica{} (id, name) SELECT number, 'host{}' from numbers(10) ".format(i, i)) + + # test multiple ports parsing + result = node2.query('''SELECT DISTINCT(name) FROM mysql(`mysql{1|2|3}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse'); ''') + assert(result == 'host1\n' or result == 'host2\n' or result == 'host3\n') + result = node2.query('''SELECT DISTINCT(name) FROM mysql(`mysql1:3306|mysql2:3306|mysql3:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse'); ''') + assert(result == 'host1\n' or result == 'host2\n' or result == 'host3\n') + + # check all replicas are traversed + query = "SELECT * FROM (" + for i in range (3): + query += "SELECT name FROM test_replicas UNION DISTINCT " + query += "SELECT name FROM test_replicas)" + + result = node2.query(query) + assert(result == 'host2\nhost3\nhost4\n') + + # Storage with two shards, each has 2 replicas + node2.query(''' + CREATE TABLE test_shards + (id UInt32, name String, age UInt32, money UInt32) + ENGINE = ExternalDistributed('MySQL', `mysql{1|2}:3306,mysql{3|4}:3306`, 'clickhouse', 'test_replicas', 'root', 'clickhouse'); ''') + + # Check only one replica in each shard is used + result = node2.query("SELECT DISTINCT(name) FROM test_shards ORDER BY name") + assert(result == 'host1\nhost3\n') + + # check all
replicas are traversed + query = "SELECT name FROM (" + for i in range (3): + query += "SELECT name FROM test_shards UNION DISTINCT " + query += "SELECT name FROM test_shards) ORDER BY name" + result = node2.query(query) + assert(result == 'host1\nhost2\nhost3\nhost4\n') + + # disconnect mysql1 + started_cluster.pause_container('mysql1') + result = node2.query("SELECT DISTINCT(name) FROM test_shards ORDER BY name") + started_cluster.unpause_container('mysql1') + assert(result == 'host2\nhost4\n' or result == 'host3\nhost4\n') -def get_mysql_conn(): - conn = pymysql.connect(user='root', password='clickhouse', host='127.0.0.1', port=3308) - return conn +def test_external_settings(started_cluster): + table_name = 'test_external_settings' + conn = get_mysql_conn() + create_mysql_table(conn, table_name) - -def create_mysql_db(conn, name): - with conn.cursor() as cursor: - cursor.execute( - "CREATE DATABASE {} DEFAULT CHARACTER SET 'utf8'".format(name)) + node3.query(''' +CREATE TABLE {}(id UInt32, name String, age UInt32, money UInt32) ENGINE = MySQL('mysql1:3306', 'clickhouse', '{}', 'root', 'clickhouse'); +'''.format(table_name, table_name)) + node3.query( + "INSERT INTO {}(id, name, money) select number, concat('name_', toString(number)), 3 from numbers(100) ".format( + table_name)) + assert node3.query("SELECT count() FROM {}".format(table_name)).rstrip() == '100' + assert node3.query("SELECT sum(money) FROM {}".format(table_name)).rstrip() == '300' + assert node3.query("select value from system.settings where name = 'max_block_size' FORMAT TSV") == "2\n" + assert node3.query("select value from system.settings where name = 'external_storage_max_read_rows' FORMAT TSV") == "0\n" + assert node3.query("SELECT COUNT(DISTINCT blockNumber()) FROM {} FORMAT TSV".format(table_name)) == '50\n' + conn.close() if __name__ == '__main__': diff --git a/tests/integration/test_storage_postgresql/test.py b/tests/integration/test_storage_postgresql/test.py index 86a1d3b4547..b1ef58866bc 100644 --- a/tests/integration/test_storage_postgresql/test.py +++ b/tests/integration/test_storage_postgresql/test.py @@ -10,12 +10,14 @@ from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT cluster = ClickHouseCluster(__file__) node1 = cluster.add_instance('node1', main_configs=["configs/log_conf.xml"], with_postgres=True) +node2 = cluster.add_instance('node2', main_configs=['configs/log_conf.xml'], with_postgres_cluster=True) -def get_postgres_conn(database=False): +def get_postgres_conn(database=False, port=5432): if database == True: - conn_string = "host='localhost' dbname='clickhouse' user='postgres' password='mysecretpassword'" + conn_string = "host='localhost' port={} dbname='clickhouse' user='postgres' password='mysecretpassword'".format(port) else: - conn_string = "host='localhost' user='postgres' password='mysecretpassword'" + conn_string = "host='localhost' port={} user='postgres' password='mysecretpassword'".format(port) + conn = psycopg2.connect(conn_string) conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT) conn.autocommit = True @@ -30,9 +32,20 @@ def create_postgres_db(conn, name): def started_cluster(): try: cluster.start() - postgres_conn = get_postgres_conn() - print("postgres connected") + + postgres_conn = get_postgres_conn(port=5432) + create_postgres_db(postgres_conn, 'clickhouse') + + postgres_conn = get_postgres_conn(port=5421) + create_postgres_db(postgres_conn, 'clickhouse') + + postgres_conn = get_postgres_conn(port=5441) + create_postgres_db(postgres_conn, 'clickhouse') + + postgres_conn = 
get_postgres_conn(port=5461) + create_postgres_db(postgres_conn, 'clickhouse') + + print("postgres connected") yield cluster finally: @@ -65,13 +78,19 @@ def test_postgres_conversions(started_cluster): cursor.execute( '''CREATE TABLE IF NOT EXISTS test_types ( a smallint, b integer, c bigint, d real, e double precision, f serial, g bigserial, - h timestamp, i date, j decimal(5, 3), k numeric)''') + h timestamp, i date, j decimal(5, 3), k numeric, l boolean)''') node1.query(''' INSERT INTO TABLE FUNCTION postgresql('postgres1:5432', 'clickhouse', 'test_types', 'postgres', 'mysecretpassword') VALUES - (-32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12', '2000-05-12', 22.222, 22.222)''') + (-32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12', '2000-05-12', 22.222, 22.222, 1)''') result = node1.query(''' - SELECT a, b, c, d, e, f, g, h, i, j, toDecimal128(k, 3) FROM postgresql('postgres1:5432', 'clickhouse', 'test_types', 'postgres', 'mysecretpassword')''') - assert(result == '-32768\t-2147483648\t-9223372036854775808\t1.12345\t1.123456789\t2147483647\t9223372036854775807\t2000-05-12 12:12:12\t2000-05-12\t22.222\t22.222\n') + SELECT a, b, c, d, e, f, g, h, i, j, toDecimal128(k, 3), l FROM postgresql('postgres1:5432', 'clickhouse', 'test_types', 'postgres', 'mysecretpassword')''') + assert(result == '-32768\t-2147483648\t-9223372036854775808\t1.12345\t1.123456789\t2147483647\t9223372036854775807\t2000-05-12 12:12:12\t2000-05-12\t22.222\t22.222\t1\n') + + cursor.execute("INSERT INTO test_types (l) VALUES (TRUE), (true), ('yes'), ('y'), ('1');") + cursor.execute("INSERT INTO test_types (l) VALUES (FALSE), (false), ('no'), ('off'), ('0');") + expected = "1\n1\n1\n1\n1\n1\n0\n0\n0\n0\n0\n" + result = node1.query('''SELECT l FROM postgresql('postgres1:5432', 'clickhouse', 'test_types', 'postgres', 'mysecretpassword')''') + assert(result == expected) cursor.execute( '''CREATE TABLE IF NOT EXISTS test_array_dimensions @@ -219,6 +238,67 @@ def test_concurrent_queries(started_cluster): assert(int(count) == int(prev_count) + 16) +def test_postgres_distributed(started_cluster): + conn0 = get_postgres_conn(port=5432, database=True) + conn1 = get_postgres_conn(port=5421, database=True) + conn2 = get_postgres_conn(port=5441, database=True) + conn3 = get_postgres_conn(port=5461, database=True) + + cursor0 = conn0.cursor() + cursor1 = conn1.cursor() + cursor2 = conn2.cursor() + cursor3 = conn3.cursor() + cursors = [cursor0, cursor1, cursor2, cursor3] + + for i in range(4): + cursors[i].execute('CREATE TABLE test_replicas (id Integer, name Text)') + cursors[i].execute("""INSERT INTO test_replicas select i, 'host{}' from generate_series(0, 99) as t(i);""".format(i + 1)); + + # test multiple ports parsing + result = node2.query('''SELECT DISTINCT(name) FROM postgresql(`postgres{1|2|3}:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword'); ''') + assert(result == 'host1\n' or result == 'host2\n' or result == 'host3\n') + result = node2.query('''SELECT DISTINCT(name) FROM postgresql(`postgres2:5431|postgres3:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword'); ''') + assert(result == 'host3\n' or result == 'host2\n') + + # Create storage with with 3 replicas + node2.query(''' + CREATE TABLE test_replicas + (id UInt32, name String) + ENGINE = PostgreSQL(`postgres{2|3|4}:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword'); ''') + 
+ # Check all replicas are traversed + query = "SELECT name FROM (" + for i in range (3): + query += "SELECT name FROM test_replicas UNION DISTINCT " + query += "SELECT name FROM test_replicas) ORDER BY name" + result = node2.query(query) + assert(result == 'host2\nhost3\nhost4\n') + + # Create storage with with two two shards, each has 2 replicas + node2.query(''' + CREATE TABLE test_shards + (id UInt32, name String, age UInt32, money UInt32) + ENGINE = ExternalDistributed('PostgreSQL', `postgres{1|2}:5432,postgres{3|4}:5432`, 'clickhouse', 'test_replicas', 'postgres', 'mysecretpassword'); ''') + + # Check only one replica in each shard is used + result = node2.query("SELECT DISTINCT(name) FROM test_shards ORDER BY name") + assert(result == 'host1\nhost3\n') + + # Check all replicas are traversed + query = "SELECT name FROM (" + for i in range (3): + query += "SELECT name FROM test_shards UNION DISTINCT " + query += "SELECT name FROM test_shards) ORDER BY name" + result = node2.query(query) + assert(result == 'host1\nhost2\nhost3\nhost4\n') + + # Disconnect postgres1 + started_cluster.pause_container('postgres1') + result = node2.query("SELECT DISTINCT(name) FROM test_shards ORDER BY name") + started_cluster.unpause_container('postgres1') + assert(result == 'host2\nhost4\n' or result == 'host3\nhost4\n') + + if __name__ == '__main__': cluster.start() input("Cluster created, press any key to destroy...") diff --git a/tests/integration/test_storage_s3/s3_mock/mock_s3.py b/tests/integration/test_storage_s3/s3_mocks/mock_s3.py similarity index 89% rename from tests/integration/test_storage_s3/s3_mock/mock_s3.py rename to tests/integration/test_storage_s3/s3_mocks/mock_s3.py index 088cc883e57..3e876689175 100644 --- a/tests/integration/test_storage_s3/s3_mock/mock_s3.py +++ b/tests/integration/test_storage_s3/s3_mocks/mock_s3.py @@ -1,3 +1,5 @@ +import sys + from bottle import abort, route, run, request, response @@ -21,4 +23,4 @@ def ping(): return 'OK' -run(host='0.0.0.0', port=8080) +run(host='0.0.0.0', port=int(sys.argv[1])) diff --git a/tests/integration/test_storage_s3/s3_mocks/unstable_server.py b/tests/integration/test_storage_s3/s3_mocks/unstable_server.py new file mode 100644 index 00000000000..4a27845ff9f --- /dev/null +++ b/tests/integration/test_storage_s3/s3_mocks/unstable_server.py @@ -0,0 +1,90 @@ +import http.server +import random +import re +import socket +import struct +import sys + + +def gen_n_digit_number(n): + assert 0 < n < 19 + return random.randint(10**(n-1), 10**n-1) + + +def gen_line(): + columns = 4 + + row = [] + def add_number(): + digits = random.randint(1, 18) + row.append(gen_n_digit_number(digits)) + + for i in range(columns // 2): + add_number() + row.append(1) + for i in range(columns - 1 - columns // 2): + add_number() + + line = ",".join(map(str, row)) + "\n" + return line.encode() + + +random.seed("Unstable server/1.0") +lines = b"".join((gen_line() for _ in range(500000))) + + +class RequestHandler(http.server.BaseHTTPRequestHandler): + def do_HEAD(self): + if self.path == "/root/test.csv": + self.from_bytes = 0 + self.end_bytes = len(lines) + self.size = self.end_bytes + self.send_block_size = 256 + self.stop_at = random.randint(900000, 1200000) // self.send_block_size # Block size is 1024**2. 
+ + if "Range" in self.headers: + cr = self.headers["Range"] + parts = re.split("[ -/=]+", cr) + assert parts[0] == "bytes" + self.from_bytes = int(parts[1]) + if parts[2]: + self.end_bytes = int(parts[2])+1 + self.send_response(206) + self.send_header("Content-Range", f"bytes {self.from_bytes}-{self.end_bytes-1}/{self.size}") + else: + self.send_response(200) + + self.send_header("Accept-Ranges", "bytes") + self.send_header("Content-Type", "text/plain") + self.send_header("Content-Length", f"{self.end_bytes-self.from_bytes}") + self.end_headers() + + elif self.path == "/": + self.send_response(200) + self.send_header("Content-Type", "text/plain") + self.end_headers() + + else: + self.send_response(404) + self.send_header("Content-Type", "text/plain") + self.end_headers() + + + def do_GET(self): + self.do_HEAD() + if self.path == "/root/test.csv": + for c, i in enumerate(range(self.from_bytes, self.end_bytes, self.send_block_size)): + self.wfile.write(lines[i:min(i+self.send_block_size, self.end_bytes)]) + if (c + 1) % self.stop_at == 0: + #self.wfile._sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 0, 0)) + #self.wfile._sock.shutdown(socket.SHUT_RDWR) + #self.wfile._sock.close() + print('Dropping connection') + break + + elif self.path == "/": + self.wfile.write(b"OK") + + +httpd = http.server.HTTPServer(("0.0.0.0", int(sys.argv[1])), RequestHandler) +httpd.serve_forever() diff --git a/tests/integration/test_storage_s3/test.py b/tests/integration/test_storage_s3/test.py index 8baa1cd64b0..c239dc68810 100644 --- a/tests/integration/test_storage_s3/test.py +++ b/tests/integration/test_storage_s3/test.py @@ -96,7 +96,7 @@ def cluster(): prepare_s3_bucket(cluster) logging.info("S3 bucket created") - run_s3_mock(cluster) + run_s3_mocks(cluster) yield cluster finally: @@ -113,13 +113,18 @@ def run_query(instance, query, stdin=None, settings=None): return result -# Test simple put. -@pytest.mark.parametrize("maybe_auth,positive", [ - ("", True), - ("'minio','minio123',", True), - ("'wrongid','wrongkey',", False) +# Test simple put. Also checks that wrong credentials produce an error with every compression method. 
+@pytest.mark.parametrize("maybe_auth,positive,compression", [ + ("", True, 'auto'), + ("'minio','minio123',", True, 'auto'), + ("'wrongid','wrongkey',", False, 'auto'), + ("'wrongid','wrongkey',", False, 'gzip'), + ("'wrongid','wrongkey',", False, 'deflate'), + ("'wrongid','wrongkey',", False, 'brotli'), + ("'wrongid','wrongkey',", False, 'xz'), + ("'wrongid','wrongkey',", False, 'zstd') ]) -def test_put(cluster, maybe_auth, positive): +def test_put(cluster, maybe_auth, positive, compression): # type: (ClickHouseCluster) -> None bucket = cluster.minio_bucket if not maybe_auth else cluster.minio_restricted_bucket @@ -128,8 +133,8 @@ def test_put(cluster, maybe_auth, positive): values = "(1, 2, 3), (3, 2, 1), (78, 43, 45)" values_csv = "1,2,3\n3,2,1\n78,43,45\n" filename = "test.csv" - put_query = "insert into table function s3('http://{}:{}/{}/{}', {}'CSV', '{}') values {}".format( - cluster.minio_host, cluster.minio_port, bucket, filename, maybe_auth, table_format, values) + put_query = f"""insert into table function s3('http://{cluster.minio_host}:{cluster.minio_port}/{bucket}/{filename}', + {maybe_auth}'CSV', '{table_format}', {compression}) values {values}""" try: run_query(instance, put_query) @@ -379,26 +384,32 @@ def test_s3_glob_scheherazade(cluster): assert run_query(instance, query).splitlines() == ["1001\t1001\t1001\t1001"] -def run_s3_mock(cluster): - logging.info("Starting s3 mock") - container_id = cluster.get_container_id('resolver') - current_dir = os.path.dirname(__file__) - cluster.copy_file_to_container(container_id, os.path.join(current_dir, "s3_mock", "mock_s3.py"), "mock_s3.py") - cluster.exec_in_container(container_id, ["python", "mock_s3.py"], detach=True) +def run_s3_mocks(cluster): + logging.info("Starting s3 mocks") + mocks = ( + ("mock_s3.py", "resolver", "8080"), + ("unstable_server.py", "resolver", "8081"), + ) + for mock_filename, container, port in mocks: + container_id = cluster.get_container_id(container) + current_dir = os.path.dirname(__file__) + cluster.copy_file_to_container(container_id, os.path.join(current_dir, "s3_mocks", mock_filename), mock_filename) + cluster.exec_in_container(container_id, ["python", mock_filename, port], detach=True) - # Wait for S3 mock start - for attempt in range(10): - ping_response = cluster.exec_in_container(cluster.get_container_id('resolver'), - ["curl", "-s", "http://resolver:8080/"], nothrow=True) - if ping_response != 'OK': - if attempt == 9: - assert ping_response == 'OK', 'Expected "OK", but got "{}"'.format(ping_response) + # Wait for S3 mocks to start + for mock_filename, container, port in mocks: + for attempt in range(10): + ping_response = cluster.exec_in_container(cluster.get_container_id(container), + ["curl", "-s", f"http://{container}:{port}/"], nothrow=True) + if ping_response != 'OK': + if attempt == 9: + assert ping_response == 'OK', 'Expected "OK", but got "{}"'.format(ping_response) + else: + time.sleep(1) else: - time.sleep(1) - else: - break + break - logging.info("S3 mock started") + logging.info("S3 mocks started") def replace_config(old, new): @@ -518,6 +529,15 @@ def test_storage_s3_get_gzip(cluster, extension, method): run_query(instance, f"DROP TABLE {name}") +def test_storage_s3_get_unstable(cluster): + bucket = cluster.minio_bucket + instance = cluster.instances["dummy"] + table_format = "column1 Int64, column2 Int64, column3 Int64, column4 Int64" + get_query = f"SELECT count(), sum(column3) FROM s3('http://resolver:8081/{cluster.minio_bucket}/test.csv', 'CSV', '{table_format}') FORMAT CSV" 
+ result = run_query(instance, get_query) + assert result.splitlines() == ["500000,500000"] + + def test_storage_s3_put_uncompressed(cluster): bucket = cluster.minio_bucket instance = cluster.instances["dummy"] diff --git a/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml b/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml index 223481bdaea..528ea5d77be 100644 --- a/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml +++ b/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml @@ -9,6 +9,9 @@ false 120000 trace + 1000 + 2000 + 4000 {quorum_reads} {snapshot_distance} {stale_log_gap} diff --git a/tests/jepsen.clickhouse-keeper/resources/zoo.cfg b/tests/jepsen.clickhouse-keeper/resources/zoo.cfg new file mode 100644 index 00000000000..fd49be16d0f --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/resources/zoo.cfg @@ -0,0 +1,23 @@ +# http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html + +# The number of milliseconds of each tick +tickTime=2000 +# The number of ticks that the initial +# synchronization phase can take +initLimit=10 +# The number of ticks that can pass between +# sending a request and getting an acknowledgement +syncLimit=5 +# the directory where the snapshot is stored. +dataDir=/var/lib/zookeeper +# Place the dataLogDir to a separate physical disc for better performance +# dataLogDir=/disk2/zookeeper + +# the port at which the clients will connect +clientPort=2181 + +# Leader accepts client connections. Default value is "yes". The leader machine +# coordinates updates. For higher update throughput at thes slight expense of +# read throughput the leader can be configured to not accept clients and focus +# on coordination. +leaderServes=yes diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/bench.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/bench.clj new file mode 100644 index 00000000000..040d2eaa77b --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/bench.clj @@ -0,0 +1,39 @@ +(ns jepsen.clickhouse-keeper.bench + (:require [clojure.tools.logging :refer :all] + [jepsen + [client :as client]]) + (:import (java.lang ProcessBuilder) + (java.lang ProcessBuilder$Redirect))) + +(defn exec-process-builder + [command & args] + (let [pbuilder (ProcessBuilder. (into-array (cons command args)))] + (.redirectOutput pbuilder ProcessBuilder$Redirect/INHERIT) + (.redirectError pbuilder ProcessBuilder$Redirect/INHERIT) + (let [p (.start pbuilder)] + (.waitFor p)))) + +(defrecord BenchClient [port] + client/Client + (open! [this test node] + this) + + (setup! [this test] + this) + + (invoke! [this test op] + (let [bench-opts (into [] (clojure.string/split (:bench-opts op) #" ")) + bench-path (:bench-path op) + nodes (into [] (flatten (map (fn [x] (identity ["-h" (str x ":" port)])) (:nodes test)))) + all-args (concat [bench-path] bench-opts nodes)] + (info "Running cmd" all-args) + (apply exec-process-builder all-args) + (assoc op :type :ok :value "ok"))) + + (teardown! [_ test]) + + (close! [_ test])) + +(defn bench-client + [port] + (BenchClient. 
port)) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/constants.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/constants.clj index 15dafa1a514..cd62d66e652 100644 --- a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/constants.clj +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/constants.clj @@ -16,3 +16,5 @@ (def coordination-logs-dir (str coordination-data-dir "/logs")) (def stderr-file (str logs-dir "/stderr.log")) + +(def binaries-cache-dir (str common-prefix "/binaries")) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj index 0f86347d1f8..fdb6b233fec 100644 --- a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj @@ -17,9 +17,11 @@ (defn get-clickhouse-url [url] - (let [download-result (cu/wget! url)] - (do (c/exec :mv download-result common-prefix) - (str common-prefix "/" download-result)))) + (non-precise-cached-wget! url)) + +(defn get-clickhouse-scp + [path] + (c/upload path (str common-prefix "/clickhouse"))) (defn download-clickhouse [source] @@ -27,6 +29,7 @@ (cond (clojure.string/starts-with? source "rbtorrent:") (get-clickhouse-sky source) (clojure.string/starts-with? source "http") (get-clickhouse-url source) + (.exists (io/file source)) (get-clickhouse-scp source) :else (throw (Exception. (str "Don't know how to download clickhouse from" source))))) (defn unpack-deb @@ -49,6 +52,7 @@ (defn chmod-binary [path] + (info "Binary path chmod" path) (c/exec :chmod :+x path)) (defn install-downloaded-clickhouse @@ -90,6 +94,13 @@ (c/exec :echo (slurp (io/resource "listen.xml")) :> (str sub-configs-dir "/listen.xml")) (c/exec :echo (cluster-config test node (slurp (io/resource "keeper_config.xml"))) :> (str sub-configs-dir "/keeper_config.xml"))) +(defn collect-traces + [test node] + (let [pid (c/exec :pidof "clickhouse")] + (c/exec :timeout :-s "KILL" "60" :gdb :-ex "set pagination off" :-ex (str "set logging file " logs-dir "/gdb.log") :-ex + "set logging on" :-ex "backtrace" :-ex "thread apply all backtrace" + :-ex "backtrace" :-ex "detach" :-ex "quit" :--pid pid :|| :true))) + (defn db [version reuse-binary] (reify db/DB @@ -110,19 +121,31 @@ (teardown! [_ test node] (info node "Tearing down clickhouse") - (kill-clickhouse! node test) (c/su + (kill-clickhouse! node test) (if (not reuse-binary) (c/exec :rm :-rf binary-path)) (c/exec :rm :-rf pid-file-path) (c/exec :rm :-rf data-dir) - ;(c/exec :rm :-rf logs-dir) + (c/exec :rm :-rf logs-dir) (c/exec :rm :-rf configs-dir))) db/LogFiles (log-files [_ test node] (c/su + ;(if (cu/exists? pid-file-path) + ;(do + ; (info node "Collecting traces") + ; (collect-traces test node)) + ;(info node "Pid files doesn't exists")) (kill-clickhouse! node test) - (c/cd data-dir - (c/exec :tar :czf "coordination.tar.gz" "coordination"))) - [stderr-file (str logs-dir "/clickhouse-server.log") (str data-dir "/coordination.tar.gz")]))) + (if (cu/exists? coordination-data-dir) + (do + (info node "Coordination files exists, going to compress") + (c/cd data-dir + (c/exec :tar :czf "coordination.tar.gz" "coordination"))))) + (let [common-logs [stderr-file (str logs-dir "/clickhouse-server.log") (str data-dir "/coordination.tar.gz")] + gdb-log (str logs-dir "/gdb.log")] + (if (cu/exists? 
(str logs-dir "/gdb.log")) + (conj common-logs gdb-log) + common-logs))))) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/main.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/main.clj index f88026500e6..0384d4d583a 100644 --- a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/main.clj +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/main.clj @@ -4,11 +4,13 @@ [clojure.pprint :refer [pprint]] [jepsen.clickhouse-keeper.set :as set] [jepsen.clickhouse-keeper.db :refer :all] + [jepsen.clickhouse-keeper.zookeeperdb :refer :all] [jepsen.clickhouse-keeper.nemesis :as custom-nemesis] [jepsen.clickhouse-keeper.register :as register] [jepsen.clickhouse-keeper.unique :as unique] [jepsen.clickhouse-keeper.queue :as queue] [jepsen.clickhouse-keeper.counter :as counter] + [jepsen.clickhouse-keeper.bench :as bench] [jepsen.clickhouse-keeper.constants :refer :all] [clojure.string :as str] [jepsen @@ -72,12 +74,29 @@ :validate [pos? "Must be a positive integer."]] [nil, "--lightweight-run" "Subset of workloads/nemesises which is simple to validate"] [nil, "--reuse-binary" "Use already downloaded binary if it exists, don't remove it on shutdown"] + [nil, "--bench" "Run perf-test mode"] + [nil, "--zookeeper-version VERSION" "Run zookeeper with version" + :default ""] + [nil, "--bench-opts STR" "Run perf-test mode" + :default "--generator list_medium_nodes -c 30 -i 1000"] ["-c" "--clickhouse-source URL" "URL for clickhouse deb or tgz package" - :default "https://clickhouse-builds.s3.yandex.net/21677/ef82333089156907a0979669d9374c2e18daabe5/clickhouse_build_check/clang-11_relwithdebuginfo_none_bundled_unsplitted_disable_False_deb/clickhouse-common-static_21.4.1.6313_amd64.deb"]]) + :default "https://clickhouse-builds.s3.yandex.net/21677/ef82333089156907a0979669d9374c2e18daabe5/clickhouse_build_check/clang-11_relwithdebuginfo_none_bundled_unsplitted_disable_False_deb/clickhouse-common-static_21.4.1.6313_amd64.deb"] + [nil "--bench-path path" "Path to keeper-bench util" + :default "/home/alesap/code/cpp/BuildCH/utils/keeper-bench/keeper-bench"]]) -(defn clickhouse-keeper-test - "Given an options map from the command line runner (e.g. :nodes, :ssh, - :concurrency, ...), constructs a test map." +(defn get-db + [opts] + (if (empty? (:zookeeper-version opts)) + (db (:clickhouse-source opts) (boolean (:reuse-binary opts))) + (zookeeper-db (:zookeeper-version opts)))) + +(defn get-port + [opts] + (if (empty? 
(:zookeeper-version opts)) + 9181 + 2181)) + +(defn clickhouse-func-tests [opts] (info "Test opts\n" (with-out-str (pprint opts))) (let [quorum (boolean (:quorum opts)) @@ -87,7 +106,7 @@ opts {:name (str "clickhouse-keeper-quorum=" quorum "-" (name (:workload opts)) "-" (name (:nemesis opts))) :os ubuntu/os - :db (db (:clickhouse-source opts) (boolean (:reuse-binary opts))) + :db (get-db opts) :pure-generators true :client (:client workload) :nemesis (:nemesis current-nemesis) @@ -105,6 +124,30 @@ (gen/sleep 10) (gen/clients (:final-generator workload)))}))) +(defn clickhouse-perf-test + [opts] + (info "Starting performance test") + (let [dct {:type :invoke :bench-opts (:bench-opts opts) :bench-path (:bench-path opts)}] + (merge tests/noop-test + opts + {:name (str "clickhouse-keeper-perf") + :os ubuntu/os + :db (get-db opts) + :pure-generators true + :client (bench/bench-client (get-port opts)) + :nemesis nemesis/noop + :generator (->> dct + (gen/stagger 1) + (gen/nemesis nil))}))) + +(defn clickhouse-keeper-test + "Given an options map from the command line runner (e.g. :nodes, :ssh, + :concurrency, ...), constructs a test map." + [opts] + (if (boolean (:bench opts)) + (clickhouse-perf-test opts) + (clickhouse-func-tests opts))) + (def all-nemesises (keys custom-nemesis/custom-nemesises)) (def all-workloads (keys workloads)) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/set.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/set.clj index a05338a7bc4..79ec4f824bb 100644 --- a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/set.clj +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/set.clj @@ -18,7 +18,8 @@ :nodename node)) (setup! [this test] - (zk-create-if-not-exists conn k "#{}")) + (exec-with-retries 30 (fn [] + (zk-create-if-not-exists conn k "#{}")))) (invoke! [this test op] (case (:f op) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj index ffb948041d1..70813457251 100644 --- a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj @@ -6,11 +6,24 @@ [jepsen.control.util :as cu] [jepsen.clickhouse-keeper.constants :refer :all] [jepsen.control :as c] - [clojure.tools.logging :refer :all]) + [clojure.tools.logging :refer :all] + [clojure.java.io :as io]) (:import (org.apache.zookeeper.data Stat) (org.apache.zookeeper CreateMode ZooKeeper) - (org.apache.zookeeper ZooKeeper KeeperException KeeperException$BadVersionException))) + (org.apache.zookeeper ZooKeeper KeeperException KeeperException$BadVersionException) + (java.security MessageDigest))) + +(defn exec-with-retries + [retries f & args] + (let [res (try {:value (apply f args)} + (catch Exception e + (if (zero? retries) + (throw e) + {:exception e})))] + (if (:exception res) + (do (Thread/sleep 1000) (recur (dec retries) f args)) + (:value res)))) (defn parse-long "Parses a string to a Long. Passes through `nil` and empty strings." @@ -32,7 +45,7 @@ (defn zk-connect [host port timeout] - (zk/connect (str host ":" port) :timeout-msec timeout)) + (exec-with-retries 30 (fn [] (zk/connect (str host ":" port) :timeout-msec timeout)))) (defn zk-create-range [conn n] @@ -168,13 +181,23 @@ :--keeper_server.logs_storage_path coordination-logs-dir) (wait-clickhouse-alive! 
node test))) -(defn exec-with-retries - [retries f & args] - (let [res (try {:value (apply f args)} - (catch Exception e - (if (zero? retries) - (throw e) - {:exception e})))] - (if (:exception res) - (do (Thread/sleep 1000) (recur (dec retries) f args)) - (:value res)))) +(defn md5 [^String s] + (let [algorithm (MessageDigest/getInstance "MD5") + raw (.digest algorithm (.getBytes s))] + (format "%032x" (BigInteger. 1 raw)))) + +(defn non-precise-cached-wget! + [url] + (let [encoded-url (md5 url) + expected-file-name (.getName (io/file url)) + dest-file (str binaries-cache-dir "/" encoded-url) + dest-symlink (str common-prefix "/" expected-file-name) + wget-opts (concat cu/std-wget-opts [:-O dest-file])] + (when-not (cu/exists? dest-file) + (info "Downloading" url) + (do (c/exec :mkdir :-p binaries-cache-dir) + (c/cd binaries-cache-dir + (cu/wget-helper! wget-opts url)))) + (c/exec :rm :-rf dest-symlink) + (c/exec :ln :-s dest-file dest-symlink) + dest-symlink)) diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/zookeeperdb.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/zookeeperdb.clj new file mode 100644 index 00000000000..7cb88cd1fd9 --- /dev/null +++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/zookeeperdb.clj @@ -0,0 +1,64 @@ +(ns jepsen.clickhouse-keeper.zookeeperdb + (:require [clojure.tools.logging :refer :all] + [jepsen.clickhouse-keeper.utils :refer :all] + [clojure.java.io :as io] + [jepsen + [control :as c] + [db :as db]] + [jepsen.os.ubuntu :as ubuntu])) + +(defn zk-node-ids + "Returns a map of node names to node ids." + [test] + (->> test + :nodes + (map-indexed (fn [i node] [node (inc i)])) + (into {}))) + +(defn zk-node-id + "Given a test and a node name from that test, returns the ID for that node." + [test node] + ((zk-node-ids test) node)) + +(defn zoo-cfg-servers + "Constructs a zoo.cfg fragment for servers." + [test mynode] + (->> (zk-node-ids test) + (map (fn [[node id]] + (str "server." id "=" (if (= (name node) mynode) "0.0.0.0" (name node)) ":2888:3888"))) + (clojure.string/join "\n"))) + +(defn zookeeper-db + "Zookeeper DB for a particular version." + [version] + (reify db/DB + (setup! [_ test node] + (c/su + (info node "Installing ZK" version) + (c/exec :apt-get :update) + (c/exec :apt-get :install (str "zookeeper=" version)) + (c/exec :apt-get :install (str "zookeeperd=" version)) + (c/exec :echo (zk-node-id test node) :> "/etc/zookeeper/conf/myid") + + (c/exec :echo (str (slurp (io/resource "zoo.cfg")) + "\n" + (zoo-cfg-servers test node)) + :> "/etc/zookeeper/conf/zoo.cfg") + + (info node "ZK restarting") + (c/exec :service :zookeeper :restart) + (info "Connecting to zk" (name node)) + (zk-connect (name node) 2181 1000) + (info node "ZK ready"))) + + (teardown! [_ test node] + (info node "tearing down ZK") + (c/su + (c/exec :service :zookeeper :stop :|| true) + (c/exec :rm :-rf + (c/lit "/var/lib/zookeeper/version-*") + (c/lit "/var/log/zookeeper/*")))) + + db/LogFiles + (log-files [_ test node] + ["/var/log/zookeeper/zookeeper.log"]))) diff --git a/tests/msan_suppressions.txt b/tests/msan_suppressions.txt index 4c7aeaf4a4c..cf468b0be96 100644 --- a/tests/msan_suppressions.txt +++ b/tests/msan_suppressions.txt @@ -7,7 +7,6 @@ fun:tolower # Suppress some failures in contrib so that we can enable MSan in CI. # Ideally, we should report these upstream. 
-src:*/contrib/zlib-ng/* # Hyperscan fun:roseRunProgram diff --git a/tests/performance/ColumnMap.xml b/tests/performance/ColumnMap.xml index f6393985377..874ed638224 100644 --- a/tests/performance/ColumnMap.xml +++ b/tests/performance/ColumnMap.xml @@ -26,10 +26,13 @@ FROM arrayMap(x -> toString(x), range(100)) AS keys, arrayMap(x -> toString(x * x), range(100)) AS values, cast((keys, values), 'Map(String, String)') AS map - FROM numbers(10000) + FROM numbers_mt(10000) ) +SETTINGS max_insert_threads = 8 + optimize table column_map_test final + SELECT count() FROM column_map_test WHERE NOT ignore(arrayMap(x -> map[CONCAT(toString(x), {key_suffix})], range(0, 100, 10))) DROP TABLE IF EXISTS column_map_test diff --git a/tests/performance/agg_functions_min_max_any.xml b/tests/performance/agg_functions_min_max_any.xml index 79c9e2c6976..6ca9e3eb65a 100644 --- a/tests/performance/agg_functions_min_max_any.xml +++ b/tests/performance/agg_functions_min_max_any.xml @@ -6,7 +6,9 @@ group_scale - 1000000 + + 1000000 + diff --git a/tests/performance/async_remote_read.xml b/tests/performance/async_remote_read.xml index 7f0ee6473ab..4ea159f9a97 100644 --- a/tests/performance/async_remote_read.xml +++ b/tests/performance/async_remote_read.xml @@ -1,4 +1,7 @@ + + 1 + SELECT sum(x) FROM diff --git a/tests/performance/avg_weighted.xml b/tests/performance/avg_weighted.xml index df9e7c21068..2476011e6a9 100644 --- a/tests/performance/avg_weighted.xml +++ b/tests/performance/avg_weighted.xml @@ -11,8 +11,8 @@ CREATE TABLE perf_avg( num UInt64, - num_u Decimal256(75) DEFAULT toDecimal256(num / 400000, 75), - num_f Float64 DEFAULT num / 100 + num_u Decimal256(75) MATERIALIZED toDecimal256(num / 400000, 75), + num_f Float64 MATERIALIZED num / 100 ) ENGINE = MergeTree() ORDER BY num @@ -23,6 +23,8 @@ LIMIT 50000000 + optimize table perf_avg final + SELECT avg(num) FROM perf_avg FORMAT Null SELECT avg(2 * num) FROM perf_avg FORMAT Null SELECT avg(num_u) FROM perf_avg FORMAT Null diff --git a/tests/performance/decimal_aggregates.xml b/tests/performance/decimal_aggregates.xml index f7bc2ac1868..3fc1408d7e4 100644 --- a/tests/performance/decimal_aggregates.xml +++ b/tests/performance/decimal_aggregates.xml @@ -18,7 +18,7 @@ SELECT uniq(d32), uniqCombined(d32), uniqExact(d32), uniqHLL12(d32) FROM (SELECT * FROM t LIMIT 10000000) SELECT uniq(d64), uniqCombined(d64), uniqExact(d64), uniqHLL12(d64) FROM (SELECT * FROM t LIMIT 10000000) - SELECT uniq(d128), uniqCombined(d128), uniqExact(d128), uniqHLL12(d128) FROM (SELECT * FROM t LIMIT 1000000) + SELECT uniq(d128), uniqCombined(d128), uniqExact(d128), uniqHLL12(d128) FROM (SELECT * FROM t LIMIT 10000000) SELECT median(d32), medianExact(d32), medianExactWeighted(d32, 2) FROM (SELECT * FROM t LIMIT 10000000) SELECT median(d64), medianExact(d64), medianExactWeighted(d64, 2) FROM (SELECT * FROM t LIMIT 1000000) diff --git a/tests/performance/direct_dictionary.xml b/tests/performance/direct_dictionary.xml index 97ecdfe3e95..3f01449ed99 100644 --- a/tests/performance/direct_dictionary.xml +++ b/tests/performance/direct_dictionary.xml @@ -55,14 +55,14 @@ INSERT INTO simple_key_direct_dictionary_source_table SELECT number, number, toString(number), toDecimal64(number, 8), toString(number) FROM system.numbers - LIMIT 100000; + LIMIT 50000; INSERT INTO complex_key_direct_dictionary_source_table SELECT number, toString(number), number, toString(number), toDecimal64(number, 8), toString(number) FROM system.numbers - LIMIT 100000; + LIMIT 50000; @@ -79,35 +79,51 @@ elements_count - 
25000 50000 75000 - 100000 - SELECT dictGet('default.simple_key_direct_dictionary', {column_name}, number) + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictGet('default.simple_key_direct_dictionary', {column_name}, key) FROM system.numbers LIMIT {elements_count} FORMAT Null; - SELECT dictHas('default.simple_key_direct_dictionary', number) + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictGet('default.simple_key_direct_dictionary', ('value_int', 'value_string', 'value_decimal', 'value_string_nullable'), key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictHas('default.simple_key_direct_dictionary', key) FROM system.numbers LIMIT {elements_count} FORMAT Null; - SELECT dictGet('default.complex_key_direct_dictionary', {column_name}, (number, toString(number))) + WITH (number, toString(number)) as key + SELECT dictGet('default.complex_key_direct_dictionary', {column_name}, key) FROM system.numbers LIMIT {elements_count} FORMAT Null; - SELECT dictHas('default.complex_key_direct_dictionary', (number, toString(number))) + WITH (number, toString(number)) as key + SELECT dictGet('default.complex_key_direct_dictionary', ('value_int', 'value_string', 'value_decimal', 'value_string_nullable'), key) + FROM system.numbers + LIMIT {elements_count} + FORMAT Null; + + + WITH (number, toString(number)) as key + SELECT dictHas('default.complex_key_direct_dictionary', key) FROM system.numbers LIMIT {elements_count} FORMAT Null; diff --git a/tests/performance/flat_dictionary.xml b/tests/performance/flat_dictionary.xml index 426aa929bbc..a80631db541 100644 --- a/tests/performance/flat_dictionary.xml +++ b/tests/performance/flat_dictionary.xml @@ -21,7 +21,7 @@ ) PRIMARY KEY id SOURCE(CLICKHOUSE(DB 'default' TABLE 'simple_key_flat_dictionary_source_table')) - LAYOUT(FLAT()) + LAYOUT(FLAT(INITIAL_ARRAY_SIZE 50000 MAX_ARRAY_SIZE 5000000)) LIFETIME(MIN 0 MAX 1000) @@ -29,7 +29,7 @@ INSERT INTO simple_key_flat_dictionary_source_table SELECT number, number, toString(number), toDecimal64(number, 8), toString(number) FROM system.numbers - LIMIT 500000; + LIMIT 5000000; @@ -46,25 +46,30 @@ elements_count - 250000 - 500000 - 750000 - 1000000 + 5000000 + 7500000 - SELECT dictGet('default.simple_key_flat_dictionary', {column_name}, number) + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictGet('default.simple_key_flat_dictionary', {column_name}, key) FROM system.numbers LIMIT {elements_count} - FORMAR Null; + FORMAT Null; - SELECT dictHas('default.simple_key_flat_dictionary', number) + SELECT * FROM simple_key_flat_dictionary + FORMAT Null; + + + + WITH rand64() % toUInt64(75000000) as key + SELECT dictHas('default.simple_key_flat_dictionary', key) FROM system.numbers - LIMIT {elements_count} + LIMIT 75000000 FORMAT Null; diff --git a/tests/performance/fuse_sumcount.xml b/tests/performance/fuse_sumcount.xml new file mode 100644 index 00000000000..b2eb0e678e2 --- /dev/null +++ b/tests/performance/fuse_sumcount.xml @@ -0,0 +1,33 @@ + + + + 1 + + + + + key + + 1 + intHash32(number) % 1000 + + + + + SELECT sum(number) FROM numbers(1000000000) FORMAT Null + SELECT sum(number), count(number) FROM numbers(1000000000) FORMAT Null + SELECT sum(number), count(number) FROM numbers(1000000000) SETTINGS optimize_fuse_sum_count_avg = 0 FORMAT Null + SELECT sum(number), avg(number), count(number) FROM numbers(1000000000) FORMAT Null + SELECT sum(number), avg(number), count(number) FROM numbers(1000000000) 
SETTINGS optimize_fuse_sum_count_avg = 0 FORMAT Null + + SELECT sum(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 FORMAT Null + SELECT sum(number), count(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 FORMAT Null + SELECT sum(number), count(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 SETTINGS optimize_fuse_sum_count_avg = 0 FORMAT Null + SELECT sum(number), avg(number), count(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 FORMAT Null + SELECT sum(number), avg(number), count(number) FROM numbers(100000000) GROUP BY intHash32(number) % 1000 SETTINGS optimize_fuse_sum_count_avg = 0 FORMAT Null + diff --git a/tests/performance/great_circle_dist.xml b/tests/performance/great_circle_dist.xml index b5e271ddfa8..ad445f34417 100644 --- a/tests/performance/great_circle_dist.xml +++ b/tests/performance/great_circle_dist.xml @@ -2,6 +2,6 @@ SELECT count() FROM numbers(1000000) WHERE NOT ignore(greatCircleDistance((rand(1) % 360) * 1. - 180, (number % 150) * 1.2 - 90, (number % 360) + toFloat64(rand(2)) / 4294967296 - 180, (rand(3) % 180) * 1. - 90)) - SELECT count() FROM zeros(1000000) WHERE NOT ignore(greatCircleDistance(55. + toFloat64(rand(1)) / 4294967296, 37. + toFloat64(rand(2)) / 4294967296, 55. + toFloat64(rand(3)) / 4294967296, 37. + toFloat64(rand(4)) / 4294967296)) + SELECT count() FROM zeros(10000000) WHERE NOT ignore(greatCircleDistance(55. + toFloat64(rand(1)) / 4294967296, 37. + toFloat64(rand(2)) / 4294967296, 55. + toFloat64(rand(3)) / 4294967296, 37. + toFloat64(rand(4)) / 4294967296)) diff --git a/tests/performance/hashed_dictionary.xml b/tests/performance/hashed_dictionary.xml index a38d2f30c23..26164b4f888 100644 --- a/tests/performance/hashed_dictionary.xml +++ b/tests/performance/hashed_dictionary.xml @@ -81,35 +81,37 @@ elements_count - 2500000 5000000 7500000 - 10000000 - SELECT dictGet('default.simple_key_hashed_dictionary', {column_name}, number) + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictGet('default.simple_key_hashed_dictionary', {column_name}, key) FROM system.numbers LIMIT {elements_count} FORMAT Null; - SELECT dictHas('default.simple_key_hashed_dictionary', number) + WITH rand64() % toUInt64({elements_count}) as key + SELECT dictHas('default.simple_key_hashed_dictionary', key) FROM system.numbers LIMIT {elements_count} FORMAT Null; - SELECT dictGet('default.complex_key_hashed_dictionary', {column_name}, (number, toString(number))) + WITH (rand64() % toUInt64({elements_count}), toString(rand64() % toUInt64({elements_count}))) as key + SELECT dictGet('default.complex_key_hashed_dictionary', {column_name}, key) FROM system.numbers LIMIT {elements_count} FORMAT Null; - SELECT dictHas('default.complex_key_hashed_dictionary', (number, toString(number))) + WITH (rand64() % toUInt64({elements_count}), toString(rand64() % toUInt64({elements_count}))) as key + SELECT dictHas('default.complex_key_hashed_dictionary', key) FROM system.numbers LIMIT {elements_count} FORMAT Null; diff --git a/tests/performance/if_array_string.xml b/tests/performance/if_array_string.xml index 773509e1c4b..f1752767e76 100644 --- a/tests/performance/if_array_string.xml +++ b/tests/performance/if_array_string.xml @@ -4,5 +4,5 @@ SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? ['Hello', 'World'] : materialize(['a', 'b', 'c'])) SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? 
materialize(['Hello', 'World']) : materialize(['a', 'b', 'c'])) SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? materialize(['', '']) : emptyArrayString()) - SELECT count() FROM zeros(1000000) WHERE NOT ignore(rand() % 2 ? materialize(['https://github.com/ClickHouse/ClickHouse/pull/1070', 'https://www.google.ru/search?newwindow=1&site=&source=hp&q=zookeeper+wire+protocol+exists&oq=zookeeper+wire+protocol+exists&gs_l=psy-ab.3...330.6300.0.6687.33.28.0.0.0.0.386.4838.0j5j9j5.19.0....0...1.1.64.psy-ab..14.17.4448.0..0j35i39k1j0i131k1j0i22i30k1j0i19k1j33i21k1.r_3uFoNOrSU']) : emptyArrayString()) + SELECT count() FROM zeros(10000000) WHERE NOT ignore(rand() % 2 ? materialize(['https://github.com/ClickHouse/ClickHouse/pull/1070', 'https://www.google.ru/search?newwindow=1&site=&source=hp&q=zookeeper+wire+protocol+exists&oq=zookeeper+wire+protocol+exists&gs_l=psy-ab.3...330.6300.0.6687.33.28.0.0.0.0.386.4838.0j5j9j5.19.0....0...1.1.64.psy-ab..14.17.4448.0..0j35i39k1j0i131k1j0i22i30k1j0i19k1j33i21k1.r_3uFoNOrSU']) : emptyArrayString()) diff --git a/tests/performance/intDiv.xml b/tests/performance/intDiv.xml new file mode 100644 index 00000000000..c6fa0238986 --- /dev/null +++ b/tests/performance/intDiv.xml @@ -0,0 +1,5 @@ + + SELECT count() FROM numbers(200000000) WHERE NOT ignore(intDiv(number, 1000000000)) + SELECT count() FROM numbers(200000000) WHERE NOT ignore(divide(number, 1000000000)) + SELECT count() FROM numbers(200000000) WHERE NOT ignore(toUInt32(divide(number, 1000000000))) + diff --git a/tests/performance/joins_in_memory.xml b/tests/performance/joins_in_memory.xml index fac6f2659c6..158602e28ab 100644 --- a/tests/performance/joins_in_memory.xml +++ b/tests/performance/joins_in_memory.xml @@ -13,12 +13,12 @@ SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) diff --git a/tests/performance/joins_in_memory_pmj.xml b/tests/performance/joins_in_memory_pmj.xml index 87d1c0df14c..d122dba72c3 100644 --- a/tests/performance/joins_in_memory_pmj.xml +++ b/tests/performance/joins_in_memory_pmj.xml @@ -3,53 +3,54 @@ partial_merge + 0 - INSERT INTO ints SELECT number AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) - INSERT INTO ints SELECT 10000 + number % 1000 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) - INSERT INTO ints SELECT 20000 + number % 100 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) - INSERT INTO ints SELECT 30000 + number % 10 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) - INSERT INTO ints SELECT 40000 + number % 1 AS i64, i64 AS i32, i64 AS 
i16, i64 AS i8 FROM numbers(10000) + INSERT INTO ints SELECT number AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 + INSERT INTO ints SELECT 10000 + number % 1000 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 + INSERT INTO ints SELECT 20000 + number % 100 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 + INSERT INTO ints SELECT 30000 + number % 10 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 + INSERT INTO ints SELECT 40000 + number % 1 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(10000) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 
20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0 - SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) SETTINGS partial_merge_join_optimizations = 0, query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT 
COUNT() FROM ints l RIGHT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l RIGHT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l RIGHT JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 - SELECT COUNT() FROM ints l FULL JOIN ints r USING i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l FULL JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 - SELECT COUNT() FROM ints l FULL JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 - SELECT COUNT() FROM ints l FULL JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) + SELECT COUNT() FROM ints l FULL JOIN ints r USING i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l FULL JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l FULL JOIN ints r ON l.i64 = r.i64 WHERE i32 = 20042 settings query_plan_filter_push_down = 0 + SELECT COUNT() FROM ints l FULL JOIN ints r USING i64 WHERE i32 IN(42, 10042, 20042, 30042, 40042) settings query_plan_filter_push_down = 0 DROP TABLE IF EXISTS ints diff --git a/tests/performance/json_extract_simdjson.xml b/tests/performance/json_extract_simdjson.xml index f9f6df5140e..9ec3613d5e8 100644 --- a/tests/performance/json_extract_simdjson.xml +++ b/tests/performance/json_extract_simdjson.xml @@ -1,7 +1,4 @@ - - - json @@ -21,19 +18,19 @@ 1 - SELECT 'simdjson-1', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractString(materialize({json}), 'sparam')) - SELECT 'simdjson-2', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractString(materialize({json}), 'sparam', 'nested_1')) - SELECT 'simdjson-3', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractInt(materialize({json}), 'nparam')) - SELECT 'simdjson-4', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractUInt(materialize({json}), 'nparam')) - SELECT 'simdjson-5', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractFloat(materialize({json}), 'fparam')) + SELECT 'simdjson-1', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractString(materialize({json}), 'sparam')) + SELECT 'simdjson-2', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractString(materialize({json}), 'sparam', 'nested_1')) + SELECT 'simdjson-3', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractInt(materialize({json}), 'nparam')) + SELECT 'simdjson-4', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractUInt(materialize({json}), 'nparam')) + SELECT 'simdjson-5', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractFloat(materialize({json}), 'fparam')) - SELECT 'simdjson-6', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractString(materialize({long_json}), 'sparam')) - SELECT 'simdjson-7', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractString(materialize({long_json}), 'sparam', 'nested_1')) - SELECT 'simdjson-8', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractInt(materialize({long_json}), 'nparam')) - SELECT 'simdjson-9', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractUInt(materialize({long_json}), 'nparam')) - SELECT 
'simdjson-10', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractRaw(materialize({long_json}), 'fparam')) - SELECT 'simdjson-11', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractFloat(materialize({long_json}), 'fparam')) - SELECT 'simdjson-12', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractFloat(materialize({long_json}), 'fparam', 'nested_2', -2)) - SELECT 'simdjson-13', count() FROM zeros(1000000) WHERE NOT ignore(JSONExtractBool(materialize({long_json}), 'bparam')) + SELECT 'simdjson-6', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractString(materialize({long_json}), 'sparam')) + SELECT 'simdjson-7', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractString(materialize({long_json}), 'sparam', 'nested_1')) + SELECT 'simdjson-8', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractInt(materialize({long_json}), 'nparam')) + SELECT 'simdjson-9', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractUInt(materialize({long_json}), 'nparam')) + SELECT 'simdjson-10', count() FROM zeros(3000000) WHERE NOT ignore(JSONExtractRaw(materialize({long_json}), 'fparam')) + SELECT 'simdjson-11', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractFloat(materialize({long_json}), 'fparam')) + SELECT 'simdjson-12', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractFloat(materialize({long_json}), 'fparam', 'nested_2', -2)) + SELECT 'simdjson-13', count() FROM zeros(5000000) WHERE NOT ignore(JSONExtractBool(materialize({long_json}), 'bparam')) diff --git a/tests/performance/order_by_decimals.xml b/tests/performance/order_by_decimals.xml index 4889137865d..20b860f0a2d 100644 --- a/tests/performance/order_by_decimals.xml +++ b/tests/performance/order_by_decimals.xml @@ -4,13 +4,10 @@ comparison + SELECT toInt32(number) AS n FROM numbers(10000000) ORDER BY n DESC FORMAT Null + SELECT toDecimal32(number, 0) AS n FROM numbers(10000000) ORDER BY n FORMAT Null - - SELECT toInt32(number) AS n FROM numbers(1000000) ORDER BY n DESC FORMAT Null - SELECT toDecimal32(number, 0) AS n FROM numbers(1000000) ORDER BY n FORMAT Null - - SELECT toDecimal32(number, 0) AS n FROM numbers(1000000) ORDER BY n DESC FORMAT Null - SELECT toDecimal64(number, 8) AS n FROM numbers(1000000) ORDER BY n DESC FORMAT Null - SELECT toDecimal128(number, 10) AS n FROM numbers(1000000) ORDER BY n DESC FORMAT Null - + SELECT toDecimal32(number, 0) AS n FROM numbers(10000000) ORDER BY n DESC FORMAT Null + SELECT toDecimal64(number, 8) AS n FROM numbers(10000000) ORDER BY n DESC FORMAT Null + SELECT toDecimal128(number, 10) AS n FROM numbers(10000000) ORDER BY n DESC FORMAT Null diff --git a/tests/performance/order_by_read_in_order.xml b/tests/performance/order_by_read_in_order.xml index b91cd14baf4..cdbf477c335 100644 --- a/tests/performance/order_by_read_in_order.xml +++ b/tests/performance/order_by_read_in_order.xml @@ -3,10 +3,11 @@ hits_100m_single -SELECT * FROM hits_100m_single ORDER BY CounterID, EventDate LIMIT 1000 -SELECT * FROM hits_100m_single ORDER BY CounterID DESC, toStartOfWeek(EventDate) DESC LIMIT 100 + +SELECT * FROM hits_100m_single ORDER BY CounterID, EventDate LIMIT 100 +SELECT * FROM hits_100m_single ORDER BY CounterID DESC, toStartOfWeek(EventDate) DESC LIMIT 100 SELECT * FROM hits_100m_single ORDER BY CounterID, EventDate, URL LIMIT 100 -SELECT * FROM hits_100m_single WHERE CounterID IN (152220, 168777, 149234, 149234) ORDER BY CounterID DESC, EventDate DESC LIMIT 100 +SELECT * FROM hits_100m_single WHERE CounterID IN (152220, 168777, 149234, 149234) ORDER BY CounterID DESC, 
EventDate DESC LIMIT 100 SELECT * FROM hits_100m_single WHERE UserID=1988954671305023629 ORDER BY CounterID, EventDate LIMIT 100 diff --git a/tests/performance/parse_engine_file.xml b/tests/performance/parse_engine_file.xml index d49670b36b5..d0226c3bb68 100644 --- a/tests/performance/parse_engine_file.xml +++ b/tests/performance/parse_engine_file.xml @@ -30,7 +30,7 @@ INSERT INTO table_{format} SELECT * FROM test.hits LIMIT 100000 -SELECT * FROM table_{format} FORMAT Null +SELECT * FROM table_{format} FORMAT Null DROP TABLE IF EXISTS table_{format} diff --git a/tests/performance/point_in_polygon.xml b/tests/performance/point_in_polygon.xml index 403c2d62cba..31c24eb006f 100644 --- a/tests/performance/point_in_polygon.xml +++ b/tests/performance/point_in_polygon.xml @@ -1,5 +1,9 @@ + 0 @@ -8,7 +12,8 @@ INSERT INTO polygons WITH number + 1 AS radius SELECT [arrayMap(x -> (cos(x / 90. * pi()) * radius, sin(x / 90. * pi()) * radius), range(180))] - FROM numbers(1000000) + FROM numbers_mt(5000000) + SETTINGS max_insert_threads = 2, max_memory_usage = 30000000000 SELECT pointInPolygon((100, 100), polygon) FROM polygons FORMAT Null diff --git a/tests/performance/questdb_sum_int32.xml b/tests/performance/questdb_sum_int32.xml index ae13210107e..613ef3dc058 100644 --- a/tests/performance/questdb_sum_int32.xml +++ b/tests/performance/questdb_sum_int32.xml @@ -25,7 +25,8 @@ CREATE TABLE `zz_{type}_{engine}` (x {type}) ENGINE {engine} - INSERT INTO `zz_{type}_{engine}` SELECT rand() FROM numbers(1000000000) + INSERT INTO `zz_{type}_{engine}` SELECT rand() FROM numbers_mt(1000000000) SETTINGS max_insert_threads = 8 + OPTIMIZE TABLE `zz_{type}_MergeTree ORDER BY tuple()` FINAL SELECT sum(x) FROM `zz_{type}_{engine}` diff --git a/tests/queries/0_stateless/00027_argMinMax.reference b/tests/queries/0_stateless/00027_argMinMax.reference index 5ba447dd04b..101e8c16044 100644 --- a/tests/queries/0_stateless/00027_argMinMax.reference +++ b/tests/queries/0_stateless/00027_argMinMax.reference @@ -1,5 +1,5 @@ -0 (0,1) 9 (9,10) -0 ('0',1) 9 ('9',10) -1970-01-01 ('1970-01-01','1970-01-01 00:00:01') 1970-01-10 ('1970-01-10','1970-01-01 00:00:10') -0.00 (0.00,1.00) 9.00 (9.00,10.00) +0 9 +0 9 +1970-01-01 1970-01-10 +0.00 9.00 4 1 diff --git a/tests/queries/0_stateless/00027_argMinMax.sql b/tests/queries/0_stateless/00027_argMinMax.sql index 2bb3b507df5..2b67b99ec77 100644 --- a/tests/queries/0_stateless/00027_argMinMax.sql +++ b/tests/queries/0_stateless/00027_argMinMax.sql @@ -1,8 +1,8 @@ -- types -select argMin(x.1, x.2), argMin(x), argMax(x.1, x.2), argMax(x) from (select (number, number + 1) as x from numbers(10)); -select argMin(x.1, x.2), argMin(x), argMax(x.1, x.2), argMax(x) from (select (toString(number), toInt32(number) + 1) as x from numbers(10)); -select argMin(x.1, x.2), argMin(x), argMax(x.1, x.2), argMax(x) from (select (toDate(number, 'UTC'), toDateTime(number, 'UTC') + 1) as x from numbers(10)); -select argMin(x.1, x.2), argMin(x), argMax(x.1, x.2), argMax(x) from (select (toDecimal32(number, 2), toDecimal64(number, 2) + 1) as x from numbers(10)); +select argMin(x.1, x.2), argMax(x.1, x.2) from (select (number, number + 1) as x from numbers(10)); +select argMin(x.1, x.2), argMax(x.1, x.2) from (select (toString(number), toInt32(number) + 1) as x from numbers(10)); +select argMin(x.1, x.2), argMax(x.1, x.2) from (select (toDate(number, 'UTC'), toDateTime(number, 'UTC') + 1) as x from numbers(10)); +select argMin(x.1, x.2), argMax(x.1, x.2) from (select (toDecimal32(number, 2), toDecimal64(number, 2) 
+ 1) as x from numbers(10)); -- array SELECT argMinArray(id, num), argMaxArray(id, num) FROM (SELECT arrayJoin([[10, 4, 3], [7, 5, 6], [8, 8, 2]]) AS num, arrayJoin([[1, 2, 4], [2, 3, 3]]) AS id); diff --git a/tests/queries/0_stateless/00027_simple_argMinArray.reference b/tests/queries/0_stateless/00027_simple_argMinArray.reference new file mode 100644 index 00000000000..4482956b706 --- /dev/null +++ b/tests/queries/0_stateless/00027_simple_argMinArray.reference @@ -0,0 +1 @@ +4 1 diff --git a/tests/queries/0_stateless/00027_simple_argMinArray.sql b/tests/queries/0_stateless/00027_simple_argMinArray.sql new file mode 100644 index 00000000000..b681a2c53cf --- /dev/null +++ b/tests/queries/0_stateless/00027_simple_argMinArray.sql @@ -0,0 +1 @@ +SELECT argMinArray(id, num), argMaxArray(id, num) FROM (SELECT arrayJoin([[10, 4, 3], [7, 5, 6], [8, 8, 2]]) AS num, arrayJoin([[1, 2, 4], [2, 3, 3]]) AS id) diff --git a/tests/queries/0_stateless/00212_shard_aggregate_function_uniq.reference b/tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.reference similarity index 100% rename from tests/queries/0_stateless/00212_shard_aggregate_function_uniq.reference rename to tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.reference index 63686e2e352..33a2bb5437f 100644 --- a/tests/queries/0_stateless/00212_shard_aggregate_function_uniq.reference +++ b/tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.reference @@ -52,110 +52,110 @@ uniqHLL12 35 54328 36 52997 uniqHLL12 round(float) -0.125 1 -0.5 1 -0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 161 -0.125 160 -0.5 164 -0.05 164 -0.143 162 -0.091 81 -0.056 163 -0.048 159 -0.083 158 -0.25 165 -1 159 -0.1 164 -0.028 160 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 161 +0.028 160 0.031 164 -0.067 160 -0.043 159 0.037 160 +0.043 159 +0.045 161 +0.048 159 +0.05 164 +0.056 163 +0.067 160 0.071 161 +0.083 158 +0.091 81 +0.1 164 +0.125 160 +0.143 162 +0.25 165 +0.5 164 +1 159 +0.027 52997 +0.028 54328 +0.031 54151 +0.037 53394 +0.043 54620 0.045 54268 -0.125 54011 -0.5 55013 -0.05 55115 -0.143 52353 -0.091 26870 -0.056 55227 0.048 54370 +0.05 55115 +0.056 55227 +0.067 53396 +0.071 53951 0.083 54554 +0.091 26870 +0.1 54138 +0.125 54011 +0.143 52353 0.25 52912 +0.5 55013 1 54571 -0.1 54138 -0.028 54328 -0.027 52997 -0.031 54151 -0.067 53396 -0.043 54620 -0.037 53394 -0.071 53951 uniqHLL12 round(toFloat32()) -0.5 1 -0.05 1 -0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 164 -0.05 164 -0.25 165 -0.048 159 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 161 +0.028 160 +0.031 164 +0.037 160 0.043 159 +0.045 161 +0.048 159 +0.05 164 +0.056 163 +0.067 160 0.071 161 0.083 158 -0.125 160 -0.031 164 -0.143 162 -0.028 160 -0.067 160 -0.045 161 -0.027 161 -0.056 163 -0.037 160 +0.091 81 0.1 164 +0.125 160 +0.143 162 +0.25 165 +0.5 164 1 159 -0.5 55013 -0.05 55115 -0.25 52912 -0.048 54370 -0.091 26870 +0.027 52997 +0.028 54328 +0.031 54151 +0.037 53394 0.043 54620 +0.045 54268 +0.048 54370 +0.05 55115 +0.056 55227 +0.067 53396 0.071 53951 0.083 54554 -0.125 54011 -0.031 54151 -0.143 52353 -0.028 54328 -0.067 53396 -0.045 54268 -0.027 52997 -0.056 55227 -0.037 53394 +0.091 26870 0.1 54138 +0.125 54011 +0.143 52353 +0.25 52912 +0.5 55013 1 54571 uniqHLL12 IPv4NumToString 1 1 @@ -425,428 +425,428 @@ 
uniqCombined(20) 35 54054 36 54054 uniqCombined(round(float)) -0.125 1 -0.5 1 -0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 162 -0.125 163 -0.5 162 -0.05 162 -0.143 162 -0.091 81 -0.056 162 -0.048 162 -0.083 163 -0.25 162 -1 162 -0.1 163 -0.028 162 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 162 +0.028 162 0.031 162 -0.067 162 -0.043 162 0.037 162 +0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 -0.045 54117 -0.125 54213 -0.5 54056 -0.05 53923 -0.143 54129 -0.091 26975 -0.056 54129 -0.048 53958 -0.083 54064 -0.25 53999 -1 53901 -0.1 53853 -0.028 53931 +0.083 163 +0.091 81 +0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 +1 162 0.027 53982 +0.028 53931 0.031 53948 -0.067 53997 -0.043 54150 0.037 54047 +0.043 54150 +0.045 54117 +0.048 53958 +0.05 53923 +0.056 54129 +0.067 53997 0.071 53963 +0.083 54064 +0.091 26975 +0.1 53853 +0.125 54213 +0.143 54129 +0.25 53999 +0.5 54056 +1 53901 uniqCombined(12)(round(float)) -0.125 1 -0.5 1 -0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 162 -0.125 163 -0.5 162 -0.05 162 -0.143 162 -0.091 81 -0.056 162 -0.048 162 -0.083 163 -0.25 162 -1 162 -0.1 163 -0.028 162 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 162 +0.028 162 0.031 162 -0.067 162 -0.043 162 0.037 162 +0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 -0.045 53928 -0.125 52275 -0.5 53721 -0.05 54123 -0.143 54532 -0.091 26931 -0.056 55120 -0.048 53293 -0.083 54428 -0.25 53226 -1 54708 -0.1 53417 -0.028 54635 +0.083 163 +0.091 81 +0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 +1 162 0.027 53155 +0.028 54635 0.031 53763 -0.067 53188 -0.043 53827 0.037 53920 +0.043 53827 +0.045 53928 +0.048 53293 +0.05 54123 +0.056 55120 +0.067 53188 0.071 53409 +0.083 54428 +0.091 26931 +0.1 53417 +0.125 52275 +0.143 54532 +0.25 53226 +0.5 53721 +1 54708 uniqCombined(17)(round(float)) -0.125 1 -0.5 1 -0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 162 -0.125 163 -0.5 162 -0.05 162 -0.143 162 -0.091 81 -0.056 162 -0.048 162 -0.083 163 -0.25 162 -1 162 -0.1 163 -0.028 162 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 162 +0.028 162 0.031 162 -0.067 162 -0.043 162 0.037 162 +0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 +0.083 163 +0.091 81 +0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 +1 162 +0.027 53982 +0.028 53931 +0.031 53948 +0.037 54047 +0.043 54150 0.045 54117 -0.125 54213 -0.5 54056 -0.05 53923 -0.143 54129 -0.091 26975 -0.056 54129 0.048 53958 +0.05 53923 +0.056 54129 +0.067 53997 +0.071 53963 0.083 54064 +0.091 26975 +0.1 53853 +0.125 54213 +0.143 54129 0.25 53999 +0.5 54056 1 53901 -0.1 53853 -0.028 53931 -0.027 53982 -0.031 53948 -0.067 53997 -0.043 54150 -0.037 54047 -0.071 53963 uniqCombined(20)(round(float)) -0.125 1 -0.5 1 -0.05 1 -0.143 1 -0.056 1 -0.048 2 -0.083 1 -0.25 1 -0.1 1 -0.028 1 0.027 1 +0.028 1 0.031 1 -0.067 1 0.037 1 -0.045 162 -0.125 163 -0.5 162 -0.05 162 -0.143 162 -0.091 81 -0.056 162 -0.048 162 -0.083 163 -0.25 162 -1 162 -0.1 163 -0.028 162 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 +0.1 1 +0.125 1 +0.143 1 +0.25 1 +0.5 1 0.027 162 +0.028 162 0.031 162 -0.067 162 -0.043 162 0.037 162 +0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 
-0.045 54054 -0.125 54053 -0.5 54054 -0.05 54053 -0.143 54054 -0.091 27027 -0.056 54054 -0.048 54053 -0.083 54055 -0.25 54054 -1 54054 -0.1 54053 -0.028 54054 +0.083 163 +0.091 81 +0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 +1 162 0.027 54054 +0.028 54054 0.031 54054 -0.067 54054 -0.043 54053 0.037 54053 +0.043 54053 +0.045 54054 +0.048 54053 +0.05 54053 +0.056 54054 +0.067 54054 0.071 54054 +0.083 54055 +0.091 27027 +0.1 54053 +0.125 54053 +0.143 54054 +0.25 54054 +0.5 54054 +1 54054 uniqCombined(X)(round(toFloat32())) -0.5 1 -0.05 1 -0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 162 -0.05 162 -0.25 162 -0.048 162 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 162 +0.028 162 +0.031 162 +0.037 162 0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 0.083 163 -0.125 163 -0.031 162 -0.143 162 -0.028 162 -0.067 162 -0.045 162 -0.027 162 -0.056 162 -0.037 162 +0.091 81 0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 1 162 -0.5 54056 -0.05 53923 -0.25 53999 -0.048 53958 -0.091 26975 +0.027 53982 +0.028 53931 +0.031 53948 +0.037 54047 0.043 54150 +0.045 54117 +0.048 53958 +0.05 53923 +0.056 54129 +0.067 53997 0.071 53963 0.083 54064 -0.125 54213 -0.031 53948 -0.143 54129 -0.028 53931 -0.067 53997 -0.045 54117 -0.027 53982 -0.056 54129 -0.037 54047 +0.091 26975 0.1 53853 +0.125 54213 +0.143 54129 +0.25 53999 +0.5 54056 1 53901 uniqCombined(12)(round(toFloat32())) -0.5 1 -0.05 1 -0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 162 -0.05 162 -0.25 162 -0.048 162 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 162 +0.028 162 +0.031 162 +0.037 162 0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 0.083 163 -0.125 163 -0.031 162 -0.143 162 -0.028 162 -0.067 162 -0.045 162 -0.027 162 -0.056 162 -0.037 162 +0.091 81 0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 1 162 -0.5 53721 -0.05 54123 -0.25 53226 -0.048 53293 -0.091 26931 +0.027 53155 +0.028 54635 +0.031 53763 +0.037 53920 0.043 53827 +0.045 53928 +0.048 53293 +0.05 54123 +0.056 55120 +0.067 53188 0.071 53409 0.083 54428 -0.125 52275 -0.031 53763 -0.143 54532 -0.028 54635 -0.067 53188 -0.045 53928 -0.027 53155 -0.056 55120 -0.037 53920 +0.091 26931 0.1 53417 +0.125 52275 +0.143 54532 +0.25 53226 +0.5 53721 1 54708 uniqCombined(17)(round(toFloat32())) -0.5 1 -0.05 1 -0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 162 -0.05 162 -0.25 162 -0.048 162 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 162 +0.028 162 +0.031 162 +0.037 162 0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 0.083 163 -0.125 163 -0.031 162 -0.143 162 -0.028 162 -0.067 162 -0.045 162 -0.027 162 -0.056 162 -0.037 162 +0.091 81 0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 1 162 -0.5 54056 -0.05 53923 -0.25 53999 -0.048 53958 -0.091 26975 +0.027 53982 +0.028 53931 +0.031 53948 +0.037 54047 0.043 54150 +0.045 54117 +0.048 53958 +0.05 53923 +0.056 54129 +0.067 53997 0.071 53963 0.083 54064 -0.125 54213 -0.031 53948 -0.143 54129 -0.028 53931 -0.067 53997 -0.045 54117 -0.027 53982 -0.056 54129 -0.037 54047 +0.091 26975 0.1 53853 +0.125 54213 +0.143 54129 +0.25 53999 +0.5 54056 1 53901 uniqCombined(20)(round(toFloat32())) -0.5 1 -0.05 1 
-0.25 1 -0.048 2 -0.083 1 -0.125 1 -0.031 1 -0.143 1 -0.028 1 -0.067 1 0.027 1 -0.056 1 +0.028 1 +0.031 1 0.037 1 +0.048 2 +0.05 1 +0.056 1 +0.067 1 +0.083 1 0.1 1 -0.5 162 -0.05 162 -0.25 162 -0.048 162 -0.091 81 +0.125 1 +0.143 1 +0.25 1 +0.5 1 +0.027 162 +0.028 162 +0.031 162 +0.037 162 0.043 162 +0.045 162 +0.048 162 +0.05 162 +0.056 162 +0.067 162 0.071 162 0.083 163 -0.125 163 -0.031 162 -0.143 162 -0.028 162 -0.067 162 -0.045 162 -0.027 162 -0.056 162 -0.037 162 +0.091 81 0.1 163 +0.125 163 +0.143 162 +0.25 162 +0.5 162 1 162 -0.5 54054 -0.05 54053 -0.25 54054 -0.048 54053 -0.091 27027 +0.027 54054 +0.028 54054 +0.031 54054 +0.037 54053 0.043 54053 +0.045 54054 +0.048 54053 +0.05 54053 +0.056 54054 +0.067 54054 0.071 54054 0.083 54055 -0.125 54053 -0.031 54054 -0.143 54054 -0.028 54054 -0.067 54054 -0.045 54054 -0.027 54054 -0.056 54054 -0.037 54053 +0.091 27027 0.1 54053 +0.125 54053 +0.143 54054 +0.25 54054 +0.5 54054 1 54054 uniqCombined(Z)(IPv4NumToString) 1 1 diff --git a/tests/queries/0_stateless/00212_shard_aggregate_function_uniq.sql b/tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.sql similarity index 75% rename from tests/queries/0_stateless/00212_shard_aggregate_function_uniq.sql rename to tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.sql index afef71ae06d..f1b3c82fec3 100644 --- a/tests/queries/0_stateless/00212_shard_aggregate_function_uniq.sql +++ b/tests/queries/0_stateless/00212_long_shard_aggregate_function_uniq.sql @@ -2,27 +2,27 @@ SELECT 'uniqHLL12'; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqHLL12 round(float)'; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqHLL12 round(toFloat32())'; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 
3000) GROUP BY Y; -SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqHLL12 IPv4NumToString'; -SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqHLL12(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqHLL12 remote()'; @@ -32,99 +32,99 @@ SELECT uniqHLL12(dummy) FROM remote('127.0.0.{2,3}', system.one); SELECT 'uniqCombined'; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(12)'; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(17)'; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 
11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(20)'; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(round(float))'; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(12)(round(float))'; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(17)(round(float))'; -SELECT Y, 
uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(20)(round(float))'; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(1/(1 + (3*X*X - 7*X + 11) % 37), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(X)(round(toFloat32()))'; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(12)(round(toFloat32()))'; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 
15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(17)(round(toFloat32()))'; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(20)(round(toFloat32()))'; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(X) FROM (SELECT number AS X, round(toFloat32(1/(1 + (3*X*X - 7*X + 11) % 37)), 3) AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(Z)(IPv4NumToString)'; -SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(12)(IPv4NumToString)'; -SELECT Y, 
uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(12)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(17)(IPv4NumToString)'; -SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(17)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined(20)(IPv4NumToString)'; -SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y; -SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y; -SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y; +SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 15) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 3000) GROUP BY Y ORDER BY Y; +SELECT Y, uniqCombined(20)(Z) FROM (SELECT number AS X, IPv4NumToString(toUInt32(X)) AS Z, (3*X*X - 7*X + 11) % 37 AS Y FROM system.numbers LIMIT 1000000) GROUP BY Y ORDER BY Y; SELECT 'uniqCombined remote()'; diff --git a/tests/queries/0_stateless/00232_format_readable_size.reference b/tests/queries/0_stateless/00232_format_readable_size.reference index 0f723e968d9..b9b9b467b50 100644 --- a/tests/queries/0_stateless/00232_format_readable_size.reference +++ b/tests/queries/0_stateless/00232_format_readable_size.reference @@ -20,51 +20,51 @@ 170.21 MiB 170.21 MiB 170.21 MiB 
462.69 MiB 462.69 MiB 462.69 MiB 1.23 GiB 1.23 GiB 1.23 GiB -3.34 GiB 3.34 GiB -2.00 GiB -9.08 GiB 9.08 GiB -2.00 GiB -24.67 GiB 24.67 GiB -2.00 GiB -67.06 GiB 67.06 GiB -2.00 GiB -182.29 GiB 182.29 GiB -2.00 GiB -495.51 GiB 495.51 GiB -2.00 GiB -1.32 TiB 1.32 TiB -2.00 GiB -3.58 TiB 3.58 TiB -2.00 GiB -9.72 TiB 9.72 TiB -2.00 GiB -26.42 TiB 26.42 TiB -2.00 GiB -71.82 TiB 71.82 TiB -2.00 GiB -195.22 TiB 195.22 TiB -2.00 GiB -530.66 TiB 530.66 TiB -2.00 GiB -1.41 PiB 1.41 PiB -2.00 GiB -3.83 PiB 3.83 PiB -2.00 GiB -10.41 PiB 10.41 PiB -2.00 GiB -28.29 PiB 28.29 PiB -2.00 GiB -76.91 PiB 76.91 PiB -2.00 GiB -209.06 PiB 209.06 PiB -2.00 GiB -568.30 PiB 568.30 PiB -2.00 GiB -1.51 EiB 1.51 EiB -2.00 GiB -4.10 EiB 4.10 EiB -2.00 GiB -11.15 EiB 11.15 EiB -2.00 GiB -30.30 EiB 0.00 B -2.00 GiB -82.37 EiB 0.00 B -2.00 GiB -223.89 EiB 0.00 B -2.00 GiB -608.60 EiB 0.00 B -2.00 GiB -1.62 ZiB 0.00 B -2.00 GiB -4.39 ZiB 0.00 B -2.00 GiB -11.94 ZiB 0.00 B -2.00 GiB -32.45 ZiB 0.00 B -2.00 GiB -88.21 ZiB 0.00 B -2.00 GiB -239.77 ZiB 0.00 B -2.00 GiB -651.77 ZiB 0.00 B -2.00 GiB -1.73 YiB 0.00 B -2.00 GiB -4.70 YiB 0.00 B -2.00 GiB -12.78 YiB 0.00 B -2.00 GiB -34.75 YiB 0.00 B -2.00 GiB -94.46 YiB 0.00 B -2.00 GiB -256.78 YiB 0.00 B -2.00 GiB -698.00 YiB 0.00 B -2.00 GiB -1897.37 YiB 0.00 B -2.00 GiB -5157.59 YiB 0.00 B -2.00 GiB -14019.80 YiB 0.00 B -2.00 GiB -38109.75 YiB 0.00 B -2.00 GiB -103593.05 YiB 0.00 B -2.00 GiB -281595.11 YiB 0.00 B -2.00 GiB -765454.88 YiB 0.00 B -2.00 GiB +3.34 GiB 3.34 GiB 2.00 GiB +9.08 GiB 9.08 GiB 2.00 GiB +24.67 GiB 24.67 GiB 2.00 GiB +67.06 GiB 67.06 GiB 2.00 GiB +182.29 GiB 182.29 GiB 2.00 GiB +495.51 GiB 495.51 GiB 2.00 GiB +1.32 TiB 1.32 TiB 2.00 GiB +3.58 TiB 3.58 TiB 2.00 GiB +9.72 TiB 9.72 TiB 2.00 GiB +26.42 TiB 26.42 TiB 2.00 GiB +71.82 TiB 71.82 TiB 2.00 GiB +195.22 TiB 195.22 TiB 2.00 GiB +530.66 TiB 530.66 TiB 2.00 GiB +1.41 PiB 1.41 PiB 2.00 GiB +3.83 PiB 3.83 PiB 2.00 GiB +10.41 PiB 10.41 PiB 2.00 GiB +28.29 PiB 28.29 PiB 2.00 GiB +76.91 PiB 76.91 PiB 2.00 GiB +209.06 PiB 209.06 PiB 2.00 GiB +568.30 PiB 568.30 PiB 2.00 GiB +1.51 EiB 1.51 EiB 2.00 GiB +4.10 EiB 4.10 EiB 2.00 GiB +11.15 EiB 11.15 EiB 2.00 GiB +30.30 EiB 16.00 EiB 2.00 GiB +82.37 EiB 16.00 EiB 2.00 GiB +223.89 EiB 16.00 EiB 2.00 GiB +608.60 EiB 16.00 EiB 2.00 GiB +1.62 ZiB 16.00 EiB 2.00 GiB +4.39 ZiB 16.00 EiB 2.00 GiB +11.94 ZiB 16.00 EiB 2.00 GiB +32.45 ZiB 16.00 EiB 2.00 GiB +88.21 ZiB 16.00 EiB 2.00 GiB +239.77 ZiB 16.00 EiB 2.00 GiB +651.77 ZiB 16.00 EiB 2.00 GiB +1.73 YiB 16.00 EiB 2.00 GiB +4.70 YiB 16.00 EiB 2.00 GiB +12.78 YiB 16.00 EiB 2.00 GiB +34.75 YiB 16.00 EiB 2.00 GiB +94.46 YiB 16.00 EiB 2.00 GiB +256.78 YiB 16.00 EiB 2.00 GiB +698.00 YiB 16.00 EiB 2.00 GiB +1897.37 YiB 16.00 EiB 2.00 GiB +5157.59 YiB 16.00 EiB 2.00 GiB +14019.80 YiB 16.00 EiB 2.00 GiB +38109.75 YiB 16.00 EiB 2.00 GiB +103593.05 YiB 16.00 EiB 2.00 GiB +281595.11 YiB 16.00 EiB 2.00 GiB +765454.88 YiB 16.00 EiB 2.00 GiB diff --git a/tests/queries/0_stateless/00232_format_readable_size.sql b/tests/queries/0_stateless/00232_format_readable_size.sql index 952ee82b81a..e96f7ebeb20 100644 --- a/tests/queries/0_stateless/00232_format_readable_size.sql +++ b/tests/queries/0_stateless/00232_format_readable_size.sql @@ -1,4 +1,4 @@ -WITH round(exp(number), 6) AS x, toUInt64(x) AS y, toInt32(x) AS z +WITH round(exp(number), 6) AS x, x > 0xFFFFFFFFFFFFFFFF ? 0xFFFFFFFFFFFFFFFF : toUInt64(x) AS y, x > 0x7FFFFFFF ? 
0x7FFFFFFF : toInt32(x) AS z SELECT formatReadableSize(x), formatReadableSize(y), formatReadableSize(z) FROM system.numbers LIMIT 70; diff --git a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.reference b/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.reference index 53cdf1e9393..bc8e5e14552 100644 --- a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.reference +++ b/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.reference @@ -1 +1,124 @@ -PASSED +0 0.000000000 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 +0 -1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 1.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 -9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 2251799813685248.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 4503599627370496.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740991.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740994.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 
0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 -9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 104.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 -4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 0.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 -1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 1.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 2251799813685247.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 2251799813685248.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 1152921504606846976.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +0 
-1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 -9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +0 9223372036854786048.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 0.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -1.000000000 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 +-1 1.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 2251799813685248.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 4503599627370496.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740991.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740994.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 -9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 104.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 
1 0 0 1 1 0 1 1 1 0 0 +-1 -0.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 0.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 1.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 2251799813685247.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 2251799813685248.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 1152921504606846976.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +-1 -1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 -9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +-1 9223372036854786048.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 0.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 1.000000000 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 +1 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 -9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 9223372036854775808.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 2251799813685248.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 
1 1 1 0 0 0 1 0 0 1 1 +1 4503599627370496.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740991.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740994.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 -9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 104.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 -4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 1.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 
1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 9007199254740992.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 2251799813685247.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 2251799813685248.500000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 1152921504606846976.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +1 -1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 -9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +1 9223372036854786048.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +18446744073709551615 0.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 +18446744073709551615 9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 9223372036854775808.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 2251799813685248.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -9007199254740991.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -9007199254740994.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 104.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 
+18446744073709551615 -4503599627370496.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 0.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 1.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 9007199254740992.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 2251799813685247.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 2251799813685248.500000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -1152921504606846976.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 -9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +18446744073709551615 9223372036854786048.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +9223372036854775808 0.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +9223372036854775808 -1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +9223372036854775808 1.000000000 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 +9223372036854775808 18446744073709551616.000000000 0 1 1 1 0 0 0 1 0 0 1 1 0 1 1 1 0 0 0 1 0 0 1 1 diff --git a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sql b/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sql new file mode 100644 index 00000000000..16245c42a7a --- /dev/null +++ b/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sql @@ -0,0 +1,127 @@ +-- The results are different than in Python. That's why this file is generated and the reference is edited instead of using the Python script. +-- Example: in ClickHouse, 9223372036854775808.0 != 9223372036854775808.
+ +SELECT '0', '0.000000000', 0 = 0.000000000, 0 != 0.000000000, 0 < 0.000000000, 0 <= 0.000000000, 0 > 0.000000000, 0 >= 0.000000000, 0.000000000 = 0, 0.000000000 != 0, 0.000000000 < 0, 0.000000000 <= 0, 0.000000000 > 0, 0.000000000 >= 0 , toUInt8(0) = 0.000000000, toUInt8(0) != 0.000000000, toUInt8(0) < 0.000000000, toUInt8(0) <= 0.000000000, toUInt8(0) > 0.000000000, toUInt8(0) >= 0.000000000, 0.000000000 = toUInt8(0), 0.000000000 != toUInt8(0), 0.000000000 < toUInt8(0), 0.000000000 <= toUInt8(0), 0.000000000 > toUInt8(0), 0.000000000 >= toUInt8(0) , toInt8(0) = 0.000000000, toInt8(0) != 0.000000000, toInt8(0) < 0.000000000, toInt8(0) <= 0.000000000, toInt8(0) > 0.000000000, toInt8(0) >= 0.000000000, 0.000000000 = toInt8(0), 0.000000000 != toInt8(0), 0.000000000 < toInt8(0), 0.000000000 <= toInt8(0), 0.000000000 > toInt8(0), 0.000000000 >= toInt8(0) , toUInt16(0) = 0.000000000, toUInt16(0) != 0.000000000, toUInt16(0) < 0.000000000, toUInt16(0) <= 0.000000000, toUInt16(0) > 0.000000000, toUInt16(0) >= 0.000000000, 0.000000000 = toUInt16(0), 0.000000000 != toUInt16(0), 0.000000000 < toUInt16(0), 0.000000000 <= toUInt16(0), 0.000000000 > toUInt16(0), 0.000000000 >= toUInt16(0) , toInt16(0) = 0.000000000, toInt16(0) != 0.000000000, toInt16(0) < 0.000000000, toInt16(0) <= 0.000000000, toInt16(0) > 0.000000000, toInt16(0) >= 0.000000000, 0.000000000 = toInt16(0), 0.000000000 != toInt16(0), 0.000000000 < toInt16(0), 0.000000000 <= toInt16(0), 0.000000000 > toInt16(0), 0.000000000 >= toInt16(0) , toUInt32(0) = 0.000000000, toUInt32(0) != 0.000000000, toUInt32(0) < 0.000000000, toUInt32(0) <= 0.000000000, toUInt32(0) > 0.000000000, toUInt32(0) >= 0.000000000, 0.000000000 = toUInt32(0), 0.000000000 != toUInt32(0), 0.000000000 < toUInt32(0), 0.000000000 <= toUInt32(0), 0.000000000 > toUInt32(0), 0.000000000 >= toUInt32(0) , toInt32(0) = 0.000000000, toInt32(0) != 0.000000000, toInt32(0) < 0.000000000, toInt32(0) <= 0.000000000, toInt32(0) > 0.000000000, toInt32(0) >= 0.000000000, 0.000000000 = toInt32(0), 0.000000000 != toInt32(0), 0.000000000 < toInt32(0), 0.000000000 <= toInt32(0), 0.000000000 > toInt32(0), 0.000000000 >= toInt32(0) , toUInt64(0) = 0.000000000, toUInt64(0) != 0.000000000, toUInt64(0) < 0.000000000, toUInt64(0) <= 0.000000000, toUInt64(0) > 0.000000000, toUInt64(0) >= 0.000000000, 0.000000000 = toUInt64(0), 0.000000000 != toUInt64(0), 0.000000000 < toUInt64(0), 0.000000000 <= toUInt64(0), 0.000000000 > toUInt64(0), 0.000000000 >= toUInt64(0) , toInt64(0) = 0.000000000, toInt64(0) != 0.000000000, toInt64(0) < 0.000000000, toInt64(0) <= 0.000000000, toInt64(0) > 0.000000000, toInt64(0) >= 0.000000000, 0.000000000 = toInt64(0), 0.000000000 != toInt64(0), 0.000000000 < toInt64(0), 0.000000000 <= toInt64(0), 0.000000000 > toInt64(0), 0.000000000 >= toInt64(0) ; +SELECT '0', '-1.000000000', 0 = -1.000000000, 0 != -1.000000000, 0 < -1.000000000, 0 <= -1.000000000, 0 > -1.000000000, 0 >= -1.000000000, -1.000000000 = 0, -1.000000000 != 0, -1.000000000 < 0, -1.000000000 <= 0, -1.000000000 > 0, -1.000000000 >= 0 , toUInt8(0) = -1.000000000, toUInt8(0) != -1.000000000, toUInt8(0) < -1.000000000, toUInt8(0) <= -1.000000000, toUInt8(0) > -1.000000000, toUInt8(0) >= -1.000000000, -1.000000000 = toUInt8(0), -1.000000000 != toUInt8(0), -1.000000000 < toUInt8(0), -1.000000000 <= toUInt8(0), -1.000000000 > toUInt8(0), -1.000000000 >= toUInt8(0) , toInt8(0) = -1.000000000, toInt8(0) != -1.000000000, toInt8(0) < -1.000000000, toInt8(0) <= -1.000000000, toInt8(0) > -1.000000000, toInt8(0) >= 
-1.000000000, -1.000000000 = toInt8(0), -1.000000000 != toInt8(0), -1.000000000 < toInt8(0), -1.000000000 <= toInt8(0), -1.000000000 > toInt8(0), -1.000000000 >= toInt8(0) , toUInt16(0) = -1.000000000, toUInt16(0) != -1.000000000, toUInt16(0) < -1.000000000, toUInt16(0) <= -1.000000000, toUInt16(0) > -1.000000000, toUInt16(0) >= -1.000000000, -1.000000000 = toUInt16(0), -1.000000000 != toUInt16(0), -1.000000000 < toUInt16(0), -1.000000000 <= toUInt16(0), -1.000000000 > toUInt16(0), -1.000000000 >= toUInt16(0) , toInt16(0) = -1.000000000, toInt16(0) != -1.000000000, toInt16(0) < -1.000000000, toInt16(0) <= -1.000000000, toInt16(0) > -1.000000000, toInt16(0) >= -1.000000000, -1.000000000 = toInt16(0), -1.000000000 != toInt16(0), -1.000000000 < toInt16(0), -1.000000000 <= toInt16(0), -1.000000000 > toInt16(0), -1.000000000 >= toInt16(0) , toUInt32(0) = -1.000000000, toUInt32(0) != -1.000000000, toUInt32(0) < -1.000000000, toUInt32(0) <= -1.000000000, toUInt32(0) > -1.000000000, toUInt32(0) >= -1.000000000, -1.000000000 = toUInt32(0), -1.000000000 != toUInt32(0), -1.000000000 < toUInt32(0), -1.000000000 <= toUInt32(0), -1.000000000 > toUInt32(0), -1.000000000 >= toUInt32(0) , toInt32(0) = -1.000000000, toInt32(0) != -1.000000000, toInt32(0) < -1.000000000, toInt32(0) <= -1.000000000, toInt32(0) > -1.000000000, toInt32(0) >= -1.000000000, -1.000000000 = toInt32(0), -1.000000000 != toInt32(0), -1.000000000 < toInt32(0), -1.000000000 <= toInt32(0), -1.000000000 > toInt32(0), -1.000000000 >= toInt32(0) , toUInt64(0) = -1.000000000, toUInt64(0) != -1.000000000, toUInt64(0) < -1.000000000, toUInt64(0) <= -1.000000000, toUInt64(0) > -1.000000000, toUInt64(0) >= -1.000000000, -1.000000000 = toUInt64(0), -1.000000000 != toUInt64(0), -1.000000000 < toUInt64(0), -1.000000000 <= toUInt64(0), -1.000000000 > toUInt64(0), -1.000000000 >= toUInt64(0) , toInt64(0) = -1.000000000, toInt64(0) != -1.000000000, toInt64(0) < -1.000000000, toInt64(0) <= -1.000000000, toInt64(0) > -1.000000000, toInt64(0) >= -1.000000000, -1.000000000 = toInt64(0), -1.000000000 != toInt64(0), -1.000000000 < toInt64(0), -1.000000000 <= toInt64(0), -1.000000000 > toInt64(0), -1.000000000 >= toInt64(0) ; +SELECT '0', '1.000000000', 0 = 1.000000000, 0 != 1.000000000, 0 < 1.000000000, 0 <= 1.000000000, 0 > 1.000000000, 0 >= 1.000000000, 1.000000000 = 0, 1.000000000 != 0, 1.000000000 < 0, 1.000000000 <= 0, 1.000000000 > 0, 1.000000000 >= 0 , toUInt8(0) = 1.000000000, toUInt8(0) != 1.000000000, toUInt8(0) < 1.000000000, toUInt8(0) <= 1.000000000, toUInt8(0) > 1.000000000, toUInt8(0) >= 1.000000000, 1.000000000 = toUInt8(0), 1.000000000 != toUInt8(0), 1.000000000 < toUInt8(0), 1.000000000 <= toUInt8(0), 1.000000000 > toUInt8(0), 1.000000000 >= toUInt8(0) , toInt8(0) = 1.000000000, toInt8(0) != 1.000000000, toInt8(0) < 1.000000000, toInt8(0) <= 1.000000000, toInt8(0) > 1.000000000, toInt8(0) >= 1.000000000, 1.000000000 = toInt8(0), 1.000000000 != toInt8(0), 1.000000000 < toInt8(0), 1.000000000 <= toInt8(0), 1.000000000 > toInt8(0), 1.000000000 >= toInt8(0) , toUInt16(0) = 1.000000000, toUInt16(0) != 1.000000000, toUInt16(0) < 1.000000000, toUInt16(0) <= 1.000000000, toUInt16(0) > 1.000000000, toUInt16(0) >= 1.000000000, 1.000000000 = toUInt16(0), 1.000000000 != toUInt16(0), 1.000000000 < toUInt16(0), 1.000000000 <= toUInt16(0), 1.000000000 > toUInt16(0), 1.000000000 >= toUInt16(0) , toInt16(0) = 1.000000000, toInt16(0) != 1.000000000, toInt16(0) < 1.000000000, toInt16(0) <= 1.000000000, toInt16(0) > 1.000000000, toInt16(0) >= 1.000000000, 
1.000000000 = toInt16(0), 1.000000000 != toInt16(0), 1.000000000 < toInt16(0), 1.000000000 <= toInt16(0), 1.000000000 > toInt16(0), 1.000000000 >= toInt16(0) , toUInt32(0) = 1.000000000, toUInt32(0) != 1.000000000, toUInt32(0) < 1.000000000, toUInt32(0) <= 1.000000000, toUInt32(0) > 1.000000000, toUInt32(0) >= 1.000000000, 1.000000000 = toUInt32(0), 1.000000000 != toUInt32(0), 1.000000000 < toUInt32(0), 1.000000000 <= toUInt32(0), 1.000000000 > toUInt32(0), 1.000000000 >= toUInt32(0) , toInt32(0) = 1.000000000, toInt32(0) != 1.000000000, toInt32(0) < 1.000000000, toInt32(0) <= 1.000000000, toInt32(0) > 1.000000000, toInt32(0) >= 1.000000000, 1.000000000 = toInt32(0), 1.000000000 != toInt32(0), 1.000000000 < toInt32(0), 1.000000000 <= toInt32(0), 1.000000000 > toInt32(0), 1.000000000 >= toInt32(0) , toUInt64(0) = 1.000000000, toUInt64(0) != 1.000000000, toUInt64(0) < 1.000000000, toUInt64(0) <= 1.000000000, toUInt64(0) > 1.000000000, toUInt64(0) >= 1.000000000, 1.000000000 = toUInt64(0), 1.000000000 != toUInt64(0), 1.000000000 < toUInt64(0), 1.000000000 <= toUInt64(0), 1.000000000 > toUInt64(0), 1.000000000 >= toUInt64(0) , toInt64(0) = 1.000000000, toInt64(0) != 1.000000000, toInt64(0) < 1.000000000, toInt64(0) <= 1.000000000, toInt64(0) > 1.000000000, toInt64(0) >= 1.000000000, 1.000000000 = toInt64(0), 1.000000000 != toInt64(0), 1.000000000 < toInt64(0), 1.000000000 <= toInt64(0), 1.000000000 > toInt64(0), 1.000000000 >= toInt64(0) ; +SELECT '0', '18446744073709551616.000000000', 0 = 18446744073709551616.000000000, 0 != 18446744073709551616.000000000, 0 < 18446744073709551616.000000000, 0 <= 18446744073709551616.000000000, 0 > 18446744073709551616.000000000, 0 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = 0, 18446744073709551616.000000000 != 0, 18446744073709551616.000000000 < 0, 18446744073709551616.000000000 <= 0, 18446744073709551616.000000000 > 0, 18446744073709551616.000000000 >= 0 , toUInt8(0) = 18446744073709551616.000000000, toUInt8(0) != 18446744073709551616.000000000, toUInt8(0) < 18446744073709551616.000000000, toUInt8(0) <= 18446744073709551616.000000000, toUInt8(0) > 18446744073709551616.000000000, toUInt8(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt8(0), 18446744073709551616.000000000 != toUInt8(0), 18446744073709551616.000000000 < toUInt8(0), 18446744073709551616.000000000 <= toUInt8(0), 18446744073709551616.000000000 > toUInt8(0), 18446744073709551616.000000000 >= toUInt8(0) , toInt8(0) = 18446744073709551616.000000000, toInt8(0) != 18446744073709551616.000000000, toInt8(0) < 18446744073709551616.000000000, toInt8(0) <= 18446744073709551616.000000000, toInt8(0) > 18446744073709551616.000000000, toInt8(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt8(0), 18446744073709551616.000000000 != toInt8(0), 18446744073709551616.000000000 < toInt8(0), 18446744073709551616.000000000 <= toInt8(0), 18446744073709551616.000000000 > toInt8(0), 18446744073709551616.000000000 >= toInt8(0) , toUInt16(0) = 18446744073709551616.000000000, toUInt16(0) != 18446744073709551616.000000000, toUInt16(0) < 18446744073709551616.000000000, toUInt16(0) <= 18446744073709551616.000000000, toUInt16(0) > 18446744073709551616.000000000, toUInt16(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt16(0), 18446744073709551616.000000000 != toUInt16(0), 18446744073709551616.000000000 < toUInt16(0), 18446744073709551616.000000000 <= toUInt16(0), 18446744073709551616.000000000 > toUInt16(0), 
18446744073709551616.000000000 >= toUInt16(0) , toInt16(0) = 18446744073709551616.000000000, toInt16(0) != 18446744073709551616.000000000, toInt16(0) < 18446744073709551616.000000000, toInt16(0) <= 18446744073709551616.000000000, toInt16(0) > 18446744073709551616.000000000, toInt16(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt16(0), 18446744073709551616.000000000 != toInt16(0), 18446744073709551616.000000000 < toInt16(0), 18446744073709551616.000000000 <= toInt16(0), 18446744073709551616.000000000 > toInt16(0), 18446744073709551616.000000000 >= toInt16(0) , toUInt32(0) = 18446744073709551616.000000000, toUInt32(0) != 18446744073709551616.000000000, toUInt32(0) < 18446744073709551616.000000000, toUInt32(0) <= 18446744073709551616.000000000, toUInt32(0) > 18446744073709551616.000000000, toUInt32(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt32(0), 18446744073709551616.000000000 != toUInt32(0), 18446744073709551616.000000000 < toUInt32(0), 18446744073709551616.000000000 <= toUInt32(0), 18446744073709551616.000000000 > toUInt32(0), 18446744073709551616.000000000 >= toUInt32(0) , toInt32(0) = 18446744073709551616.000000000, toInt32(0) != 18446744073709551616.000000000, toInt32(0) < 18446744073709551616.000000000, toInt32(0) <= 18446744073709551616.000000000, toInt32(0) > 18446744073709551616.000000000, toInt32(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt32(0), 18446744073709551616.000000000 != toInt32(0), 18446744073709551616.000000000 < toInt32(0), 18446744073709551616.000000000 <= toInt32(0), 18446744073709551616.000000000 > toInt32(0), 18446744073709551616.000000000 >= toInt32(0) , toUInt64(0) = 18446744073709551616.000000000, toUInt64(0) != 18446744073709551616.000000000, toUInt64(0) < 18446744073709551616.000000000, toUInt64(0) <= 18446744073709551616.000000000, toUInt64(0) > 18446744073709551616.000000000, toUInt64(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt64(0), 18446744073709551616.000000000 != toUInt64(0), 18446744073709551616.000000000 < toUInt64(0), 18446744073709551616.000000000 <= toUInt64(0), 18446744073709551616.000000000 > toUInt64(0), 18446744073709551616.000000000 >= toUInt64(0) , toInt64(0) = 18446744073709551616.000000000, toInt64(0) != 18446744073709551616.000000000, toInt64(0) < 18446744073709551616.000000000, toInt64(0) <= 18446744073709551616.000000000, toInt64(0) > 18446744073709551616.000000000, toInt64(0) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt64(0), 18446744073709551616.000000000 != toInt64(0), 18446744073709551616.000000000 < toInt64(0), 18446744073709551616.000000000 <= toInt64(0), 18446744073709551616.000000000 > toInt64(0), 18446744073709551616.000000000 >= toInt64(0) ; +SELECT '0', '9223372036854775808.000000000', 0 = 9223372036854775808.000000000, 0 != 9223372036854775808.000000000, 0 < 9223372036854775808.000000000, 0 <= 9223372036854775808.000000000, 0 > 9223372036854775808.000000000, 0 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 0, 9223372036854775808.000000000 != 0, 9223372036854775808.000000000 < 0, 9223372036854775808.000000000 <= 0, 9223372036854775808.000000000 > 0, 9223372036854775808.000000000 >= 0 , toUInt8(0) = 9223372036854775808.000000000, toUInt8(0) != 9223372036854775808.000000000, toUInt8(0) < 9223372036854775808.000000000, toUInt8(0) <= 9223372036854775808.000000000, toUInt8(0) > 9223372036854775808.000000000, toUInt8(0) >= 9223372036854775808.000000000, 
9223372036854775808.000000000 = toUInt8(0), 9223372036854775808.000000000 != toUInt8(0), 9223372036854775808.000000000 < toUInt8(0), 9223372036854775808.000000000 <= toUInt8(0), 9223372036854775808.000000000 > toUInt8(0), 9223372036854775808.000000000 >= toUInt8(0) , toInt8(0) = 9223372036854775808.000000000, toInt8(0) != 9223372036854775808.000000000, toInt8(0) < 9223372036854775808.000000000, toInt8(0) <= 9223372036854775808.000000000, toInt8(0) > 9223372036854775808.000000000, toInt8(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(0), 9223372036854775808.000000000 != toInt8(0), 9223372036854775808.000000000 < toInt8(0), 9223372036854775808.000000000 <= toInt8(0), 9223372036854775808.000000000 > toInt8(0), 9223372036854775808.000000000 >= toInt8(0) , toUInt16(0) = 9223372036854775808.000000000, toUInt16(0) != 9223372036854775808.000000000, toUInt16(0) < 9223372036854775808.000000000, toUInt16(0) <= 9223372036854775808.000000000, toUInt16(0) > 9223372036854775808.000000000, toUInt16(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt16(0), 9223372036854775808.000000000 != toUInt16(0), 9223372036854775808.000000000 < toUInt16(0), 9223372036854775808.000000000 <= toUInt16(0), 9223372036854775808.000000000 > toUInt16(0), 9223372036854775808.000000000 >= toUInt16(0) , toInt16(0) = 9223372036854775808.000000000, toInt16(0) != 9223372036854775808.000000000, toInt16(0) < 9223372036854775808.000000000, toInt16(0) <= 9223372036854775808.000000000, toInt16(0) > 9223372036854775808.000000000, toInt16(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(0), 9223372036854775808.000000000 != toInt16(0), 9223372036854775808.000000000 < toInt16(0), 9223372036854775808.000000000 <= toInt16(0), 9223372036854775808.000000000 > toInt16(0), 9223372036854775808.000000000 >= toInt16(0) , toUInt32(0) = 9223372036854775808.000000000, toUInt32(0) != 9223372036854775808.000000000, toUInt32(0) < 9223372036854775808.000000000, toUInt32(0) <= 9223372036854775808.000000000, toUInt32(0) > 9223372036854775808.000000000, toUInt32(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt32(0), 9223372036854775808.000000000 != toUInt32(0), 9223372036854775808.000000000 < toUInt32(0), 9223372036854775808.000000000 <= toUInt32(0), 9223372036854775808.000000000 > toUInt32(0), 9223372036854775808.000000000 >= toUInt32(0) , toInt32(0) = 9223372036854775808.000000000, toInt32(0) != 9223372036854775808.000000000, toInt32(0) < 9223372036854775808.000000000, toInt32(0) <= 9223372036854775808.000000000, toInt32(0) > 9223372036854775808.000000000, toInt32(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(0), 9223372036854775808.000000000 != toInt32(0), 9223372036854775808.000000000 < toInt32(0), 9223372036854775808.000000000 <= toInt32(0), 9223372036854775808.000000000 > toInt32(0), 9223372036854775808.000000000 >= toInt32(0) , toUInt64(0) = 9223372036854775808.000000000, toUInt64(0) != 9223372036854775808.000000000, toUInt64(0) < 9223372036854775808.000000000, toUInt64(0) <= 9223372036854775808.000000000, toUInt64(0) > 9223372036854775808.000000000, toUInt64(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(0), 9223372036854775808.000000000 != toUInt64(0), 9223372036854775808.000000000 < toUInt64(0), 9223372036854775808.000000000 <= toUInt64(0), 9223372036854775808.000000000 > toUInt64(0), 9223372036854775808.000000000 >= toUInt64(0) , toInt64(0) = 9223372036854775808.000000000, 
toInt64(0) != 9223372036854775808.000000000, toInt64(0) < 9223372036854775808.000000000, toInt64(0) <= 9223372036854775808.000000000, toInt64(0) > 9223372036854775808.000000000, toInt64(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(0), 9223372036854775808.000000000 != toInt64(0), 9223372036854775808.000000000 < toInt64(0), 9223372036854775808.000000000 <= toInt64(0), 9223372036854775808.000000000 > toInt64(0), 9223372036854775808.000000000 >= toInt64(0) ; +SELECT '0', '-9223372036854775808.000000000', 0 = -9223372036854775808.000000000, 0 != -9223372036854775808.000000000, 0 < -9223372036854775808.000000000, 0 <= -9223372036854775808.000000000, 0 > -9223372036854775808.000000000, 0 >= -9223372036854775808.000000000, -9223372036854775808.000000000 = 0, -9223372036854775808.000000000 != 0, -9223372036854775808.000000000 < 0, -9223372036854775808.000000000 <= 0, -9223372036854775808.000000000 > 0, -9223372036854775808.000000000 >= 0 , toUInt8(0) = -9223372036854775808.000000000, toUInt8(0) != -9223372036854775808.000000000, toUInt8(0) < -9223372036854775808.000000000, toUInt8(0) <= -9223372036854775808.000000000, toUInt8(0) > -9223372036854775808.000000000, toUInt8(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt8(0), -9223372036854775808.000000000 != toUInt8(0), -9223372036854775808.000000000 < toUInt8(0), -9223372036854775808.000000000 <= toUInt8(0), -9223372036854775808.000000000 > toUInt8(0), -9223372036854775808.000000000 >= toUInt8(0) , toInt8(0) = -9223372036854775808.000000000, toInt8(0) != -9223372036854775808.000000000, toInt8(0) < -9223372036854775808.000000000, toInt8(0) <= -9223372036854775808.000000000, toInt8(0) > -9223372036854775808.000000000, toInt8(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt8(0), -9223372036854775808.000000000 != toInt8(0), -9223372036854775808.000000000 < toInt8(0), -9223372036854775808.000000000 <= toInt8(0), -9223372036854775808.000000000 > toInt8(0), -9223372036854775808.000000000 >= toInt8(0) , toUInt16(0) = -9223372036854775808.000000000, toUInt16(0) != -9223372036854775808.000000000, toUInt16(0) < -9223372036854775808.000000000, toUInt16(0) <= -9223372036854775808.000000000, toUInt16(0) > -9223372036854775808.000000000, toUInt16(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt16(0), -9223372036854775808.000000000 != toUInt16(0), -9223372036854775808.000000000 < toUInt16(0), -9223372036854775808.000000000 <= toUInt16(0), -9223372036854775808.000000000 > toUInt16(0), -9223372036854775808.000000000 >= toUInt16(0) , toInt16(0) = -9223372036854775808.000000000, toInt16(0) != -9223372036854775808.000000000, toInt16(0) < -9223372036854775808.000000000, toInt16(0) <= -9223372036854775808.000000000, toInt16(0) > -9223372036854775808.000000000, toInt16(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt16(0), -9223372036854775808.000000000 != toInt16(0), -9223372036854775808.000000000 < toInt16(0), -9223372036854775808.000000000 <= toInt16(0), -9223372036854775808.000000000 > toInt16(0), -9223372036854775808.000000000 >= toInt16(0) , toUInt32(0) = -9223372036854775808.000000000, toUInt32(0) != -9223372036854775808.000000000, toUInt32(0) < -9223372036854775808.000000000, toUInt32(0) <= -9223372036854775808.000000000, toUInt32(0) > -9223372036854775808.000000000, toUInt32(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt32(0), -9223372036854775808.000000000 != toUInt32(0), 
-9223372036854775808.000000000 < toUInt32(0), -9223372036854775808.000000000 <= toUInt32(0), -9223372036854775808.000000000 > toUInt32(0), -9223372036854775808.000000000 >= toUInt32(0) , toInt32(0) = -9223372036854775808.000000000, toInt32(0) != -9223372036854775808.000000000, toInt32(0) < -9223372036854775808.000000000, toInt32(0) <= -9223372036854775808.000000000, toInt32(0) > -9223372036854775808.000000000, toInt32(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt32(0), -9223372036854775808.000000000 != toInt32(0), -9223372036854775808.000000000 < toInt32(0), -9223372036854775808.000000000 <= toInt32(0), -9223372036854775808.000000000 > toInt32(0), -9223372036854775808.000000000 >= toInt32(0) , toUInt64(0) = -9223372036854775808.000000000, toUInt64(0) != -9223372036854775808.000000000, toUInt64(0) < -9223372036854775808.000000000, toUInt64(0) <= -9223372036854775808.000000000, toUInt64(0) > -9223372036854775808.000000000, toUInt64(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt64(0), -9223372036854775808.000000000 != toUInt64(0), -9223372036854775808.000000000 < toUInt64(0), -9223372036854775808.000000000 <= toUInt64(0), -9223372036854775808.000000000 > toUInt64(0), -9223372036854775808.000000000 >= toUInt64(0) , toInt64(0) = -9223372036854775808.000000000, toInt64(0) != -9223372036854775808.000000000, toInt64(0) < -9223372036854775808.000000000, toInt64(0) <= -9223372036854775808.000000000, toInt64(0) > -9223372036854775808.000000000, toInt64(0) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt64(0), -9223372036854775808.000000000 != toInt64(0), -9223372036854775808.000000000 < toInt64(0), -9223372036854775808.000000000 <= toInt64(0), -9223372036854775808.000000000 > toInt64(0), -9223372036854775808.000000000 >= toInt64(0) ; +SELECT '0', '9223372036854775808.000000000', 0 = 9223372036854775808.000000000, 0 != 9223372036854775808.000000000, 0 < 9223372036854775808.000000000, 0 <= 9223372036854775808.000000000, 0 > 9223372036854775808.000000000, 0 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 0, 9223372036854775808.000000000 != 0, 9223372036854775808.000000000 < 0, 9223372036854775808.000000000 <= 0, 9223372036854775808.000000000 > 0, 9223372036854775808.000000000 >= 0 , toUInt8(0) = 9223372036854775808.000000000, toUInt8(0) != 9223372036854775808.000000000, toUInt8(0) < 9223372036854775808.000000000, toUInt8(0) <= 9223372036854775808.000000000, toUInt8(0) > 9223372036854775808.000000000, toUInt8(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt8(0), 9223372036854775808.000000000 != toUInt8(0), 9223372036854775808.000000000 < toUInt8(0), 9223372036854775808.000000000 <= toUInt8(0), 9223372036854775808.000000000 > toUInt8(0), 9223372036854775808.000000000 >= toUInt8(0) , toInt8(0) = 9223372036854775808.000000000, toInt8(0) != 9223372036854775808.000000000, toInt8(0) < 9223372036854775808.000000000, toInt8(0) <= 9223372036854775808.000000000, toInt8(0) > 9223372036854775808.000000000, toInt8(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(0), 9223372036854775808.000000000 != toInt8(0), 9223372036854775808.000000000 < toInt8(0), 9223372036854775808.000000000 <= toInt8(0), 9223372036854775808.000000000 > toInt8(0), 9223372036854775808.000000000 >= toInt8(0) , toUInt16(0) = 9223372036854775808.000000000, toUInt16(0) != 9223372036854775808.000000000, toUInt16(0) < 9223372036854775808.000000000, toUInt16(0) <= 9223372036854775808.000000000, 
toUInt16(0) > 9223372036854775808.000000000, toUInt16(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt16(0), 9223372036854775808.000000000 != toUInt16(0), 9223372036854775808.000000000 < toUInt16(0), 9223372036854775808.000000000 <= toUInt16(0), 9223372036854775808.000000000 > toUInt16(0), 9223372036854775808.000000000 >= toUInt16(0) , toInt16(0) = 9223372036854775808.000000000, toInt16(0) != 9223372036854775808.000000000, toInt16(0) < 9223372036854775808.000000000, toInt16(0) <= 9223372036854775808.000000000, toInt16(0) > 9223372036854775808.000000000, toInt16(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(0), 9223372036854775808.000000000 != toInt16(0), 9223372036854775808.000000000 < toInt16(0), 9223372036854775808.000000000 <= toInt16(0), 9223372036854775808.000000000 > toInt16(0), 9223372036854775808.000000000 >= toInt16(0) , toUInt32(0) = 9223372036854775808.000000000, toUInt32(0) != 9223372036854775808.000000000, toUInt32(0) < 9223372036854775808.000000000, toUInt32(0) <= 9223372036854775808.000000000, toUInt32(0) > 9223372036854775808.000000000, toUInt32(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt32(0), 9223372036854775808.000000000 != toUInt32(0), 9223372036854775808.000000000 < toUInt32(0), 9223372036854775808.000000000 <= toUInt32(0), 9223372036854775808.000000000 > toUInt32(0), 9223372036854775808.000000000 >= toUInt32(0) , toInt32(0) = 9223372036854775808.000000000, toInt32(0) != 9223372036854775808.000000000, toInt32(0) < 9223372036854775808.000000000, toInt32(0) <= 9223372036854775808.000000000, toInt32(0) > 9223372036854775808.000000000, toInt32(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(0), 9223372036854775808.000000000 != toInt32(0), 9223372036854775808.000000000 < toInt32(0), 9223372036854775808.000000000 <= toInt32(0), 9223372036854775808.000000000 > toInt32(0), 9223372036854775808.000000000 >= toInt32(0) , toUInt64(0) = 9223372036854775808.000000000, toUInt64(0) != 9223372036854775808.000000000, toUInt64(0) < 9223372036854775808.000000000, toUInt64(0) <= 9223372036854775808.000000000, toUInt64(0) > 9223372036854775808.000000000, toUInt64(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(0), 9223372036854775808.000000000 != toUInt64(0), 9223372036854775808.000000000 < toUInt64(0), 9223372036854775808.000000000 <= toUInt64(0), 9223372036854775808.000000000 > toUInt64(0), 9223372036854775808.000000000 >= toUInt64(0) , toInt64(0) = 9223372036854775808.000000000, toInt64(0) != 9223372036854775808.000000000, toInt64(0) < 9223372036854775808.000000000, toInt64(0) <= 9223372036854775808.000000000, toInt64(0) > 9223372036854775808.000000000, toInt64(0) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(0), 9223372036854775808.000000000 != toInt64(0), 9223372036854775808.000000000 < toInt64(0), 9223372036854775808.000000000 <= toInt64(0), 9223372036854775808.000000000 > toInt64(0), 9223372036854775808.000000000 >= toInt64(0) ; +SELECT '0', '2251799813685248.000000000', 0 = 2251799813685248.000000000, 0 != 2251799813685248.000000000, 0 < 2251799813685248.000000000, 0 <= 2251799813685248.000000000, 0 > 2251799813685248.000000000, 0 >= 2251799813685248.000000000, 2251799813685248.000000000 = 0, 2251799813685248.000000000 != 0, 2251799813685248.000000000 < 0, 2251799813685248.000000000 <= 0, 2251799813685248.000000000 > 0, 2251799813685248.000000000 >= 0 , toUInt8(0) = 2251799813685248.000000000, toUInt8(0) != 
2251799813685248.000000000, toUInt8(0) < 2251799813685248.000000000, toUInt8(0) <= 2251799813685248.000000000, toUInt8(0) > 2251799813685248.000000000, toUInt8(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt8(0), 2251799813685248.000000000 != toUInt8(0), 2251799813685248.000000000 < toUInt8(0), 2251799813685248.000000000 <= toUInt8(0), 2251799813685248.000000000 > toUInt8(0), 2251799813685248.000000000 >= toUInt8(0) , toInt8(0) = 2251799813685248.000000000, toInt8(0) != 2251799813685248.000000000, toInt8(0) < 2251799813685248.000000000, toInt8(0) <= 2251799813685248.000000000, toInt8(0) > 2251799813685248.000000000, toInt8(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt8(0), 2251799813685248.000000000 != toInt8(0), 2251799813685248.000000000 < toInt8(0), 2251799813685248.000000000 <= toInt8(0), 2251799813685248.000000000 > toInt8(0), 2251799813685248.000000000 >= toInt8(0) , toUInt16(0) = 2251799813685248.000000000, toUInt16(0) != 2251799813685248.000000000, toUInt16(0) < 2251799813685248.000000000, toUInt16(0) <= 2251799813685248.000000000, toUInt16(0) > 2251799813685248.000000000, toUInt16(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt16(0), 2251799813685248.000000000 != toUInt16(0), 2251799813685248.000000000 < toUInt16(0), 2251799813685248.000000000 <= toUInt16(0), 2251799813685248.000000000 > toUInt16(0), 2251799813685248.000000000 >= toUInt16(0) , toInt16(0) = 2251799813685248.000000000, toInt16(0) != 2251799813685248.000000000, toInt16(0) < 2251799813685248.000000000, toInt16(0) <= 2251799813685248.000000000, toInt16(0) > 2251799813685248.000000000, toInt16(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt16(0), 2251799813685248.000000000 != toInt16(0), 2251799813685248.000000000 < toInt16(0), 2251799813685248.000000000 <= toInt16(0), 2251799813685248.000000000 > toInt16(0), 2251799813685248.000000000 >= toInt16(0) , toUInt32(0) = 2251799813685248.000000000, toUInt32(0) != 2251799813685248.000000000, toUInt32(0) < 2251799813685248.000000000, toUInt32(0) <= 2251799813685248.000000000, toUInt32(0) > 2251799813685248.000000000, toUInt32(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt32(0), 2251799813685248.000000000 != toUInt32(0), 2251799813685248.000000000 < toUInt32(0), 2251799813685248.000000000 <= toUInt32(0), 2251799813685248.000000000 > toUInt32(0), 2251799813685248.000000000 >= toUInt32(0) , toInt32(0) = 2251799813685248.000000000, toInt32(0) != 2251799813685248.000000000, toInt32(0) < 2251799813685248.000000000, toInt32(0) <= 2251799813685248.000000000, toInt32(0) > 2251799813685248.000000000, toInt32(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt32(0), 2251799813685248.000000000 != toInt32(0), 2251799813685248.000000000 < toInt32(0), 2251799813685248.000000000 <= toInt32(0), 2251799813685248.000000000 > toInt32(0), 2251799813685248.000000000 >= toInt32(0) , toUInt64(0) = 2251799813685248.000000000, toUInt64(0) != 2251799813685248.000000000, toUInt64(0) < 2251799813685248.000000000, toUInt64(0) <= 2251799813685248.000000000, toUInt64(0) > 2251799813685248.000000000, toUInt64(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt64(0), 2251799813685248.000000000 != toUInt64(0), 2251799813685248.000000000 < toUInt64(0), 2251799813685248.000000000 <= toUInt64(0), 2251799813685248.000000000 > toUInt64(0), 2251799813685248.000000000 >= toUInt64(0) , toInt64(0) = 2251799813685248.000000000, toInt64(0) != 2251799813685248.000000000, 
toInt64(0) < 2251799813685248.000000000, toInt64(0) <= 2251799813685248.000000000, toInt64(0) > 2251799813685248.000000000, toInt64(0) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt64(0), 2251799813685248.000000000 != toInt64(0), 2251799813685248.000000000 < toInt64(0), 2251799813685248.000000000 <= toInt64(0), 2251799813685248.000000000 > toInt64(0), 2251799813685248.000000000 >= toInt64(0) ; +SELECT '0', '4503599627370496.000000000', 0 = 4503599627370496.000000000, 0 != 4503599627370496.000000000, 0 < 4503599627370496.000000000, 0 <= 4503599627370496.000000000, 0 > 4503599627370496.000000000, 0 >= 4503599627370496.000000000, 4503599627370496.000000000 = 0, 4503599627370496.000000000 != 0, 4503599627370496.000000000 < 0, 4503599627370496.000000000 <= 0, 4503599627370496.000000000 > 0, 4503599627370496.000000000 >= 0 , toUInt8(0) = 4503599627370496.000000000, toUInt8(0) != 4503599627370496.000000000, toUInt8(0) < 4503599627370496.000000000, toUInt8(0) <= 4503599627370496.000000000, toUInt8(0) > 4503599627370496.000000000, toUInt8(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt8(0), 4503599627370496.000000000 != toUInt8(0), 4503599627370496.000000000 < toUInt8(0), 4503599627370496.000000000 <= toUInt8(0), 4503599627370496.000000000 > toUInt8(0), 4503599627370496.000000000 >= toUInt8(0) , toInt8(0) = 4503599627370496.000000000, toInt8(0) != 4503599627370496.000000000, toInt8(0) < 4503599627370496.000000000, toInt8(0) <= 4503599627370496.000000000, toInt8(0) > 4503599627370496.000000000, toInt8(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt8(0), 4503599627370496.000000000 != toInt8(0), 4503599627370496.000000000 < toInt8(0), 4503599627370496.000000000 <= toInt8(0), 4503599627370496.000000000 > toInt8(0), 4503599627370496.000000000 >= toInt8(0) , toUInt16(0) = 4503599627370496.000000000, toUInt16(0) != 4503599627370496.000000000, toUInt16(0) < 4503599627370496.000000000, toUInt16(0) <= 4503599627370496.000000000, toUInt16(0) > 4503599627370496.000000000, toUInt16(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt16(0), 4503599627370496.000000000 != toUInt16(0), 4503599627370496.000000000 < toUInt16(0), 4503599627370496.000000000 <= toUInt16(0), 4503599627370496.000000000 > toUInt16(0), 4503599627370496.000000000 >= toUInt16(0) , toInt16(0) = 4503599627370496.000000000, toInt16(0) != 4503599627370496.000000000, toInt16(0) < 4503599627370496.000000000, toInt16(0) <= 4503599627370496.000000000, toInt16(0) > 4503599627370496.000000000, toInt16(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt16(0), 4503599627370496.000000000 != toInt16(0), 4503599627370496.000000000 < toInt16(0), 4503599627370496.000000000 <= toInt16(0), 4503599627370496.000000000 > toInt16(0), 4503599627370496.000000000 >= toInt16(0) , toUInt32(0) = 4503599627370496.000000000, toUInt32(0) != 4503599627370496.000000000, toUInt32(0) < 4503599627370496.000000000, toUInt32(0) <= 4503599627370496.000000000, toUInt32(0) > 4503599627370496.000000000, toUInt32(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt32(0), 4503599627370496.000000000 != toUInt32(0), 4503599627370496.000000000 < toUInt32(0), 4503599627370496.000000000 <= toUInt32(0), 4503599627370496.000000000 > toUInt32(0), 4503599627370496.000000000 >= toUInt32(0) , toInt32(0) = 4503599627370496.000000000, toInt32(0) != 4503599627370496.000000000, toInt32(0) < 4503599627370496.000000000, toInt32(0) <= 4503599627370496.000000000, toInt32(0) > 
4503599627370496.000000000, toInt32(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt32(0), 4503599627370496.000000000 != toInt32(0), 4503599627370496.000000000 < toInt32(0), 4503599627370496.000000000 <= toInt32(0), 4503599627370496.000000000 > toInt32(0), 4503599627370496.000000000 >= toInt32(0) , toUInt64(0) = 4503599627370496.000000000, toUInt64(0) != 4503599627370496.000000000, toUInt64(0) < 4503599627370496.000000000, toUInt64(0) <= 4503599627370496.000000000, toUInt64(0) > 4503599627370496.000000000, toUInt64(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt64(0), 4503599627370496.000000000 != toUInt64(0), 4503599627370496.000000000 < toUInt64(0), 4503599627370496.000000000 <= toUInt64(0), 4503599627370496.000000000 > toUInt64(0), 4503599627370496.000000000 >= toUInt64(0) , toInt64(0) = 4503599627370496.000000000, toInt64(0) != 4503599627370496.000000000, toInt64(0) < 4503599627370496.000000000, toInt64(0) <= 4503599627370496.000000000, toInt64(0) > 4503599627370496.000000000, toInt64(0) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt64(0), 4503599627370496.000000000 != toInt64(0), 4503599627370496.000000000 < toInt64(0), 4503599627370496.000000000 <= toInt64(0), 4503599627370496.000000000 > toInt64(0), 4503599627370496.000000000 >= toInt64(0) ; +SELECT '0', '9007199254740991.000000000', 0 = 9007199254740991.000000000, 0 != 9007199254740991.000000000, 0 < 9007199254740991.000000000, 0 <= 9007199254740991.000000000, 0 > 9007199254740991.000000000, 0 >= 9007199254740991.000000000, 9007199254740991.000000000 = 0, 9007199254740991.000000000 != 0, 9007199254740991.000000000 < 0, 9007199254740991.000000000 <= 0, 9007199254740991.000000000 > 0, 9007199254740991.000000000 >= 0 , toUInt8(0) = 9007199254740991.000000000, toUInt8(0) != 9007199254740991.000000000, toUInt8(0) < 9007199254740991.000000000, toUInt8(0) <= 9007199254740991.000000000, toUInt8(0) > 9007199254740991.000000000, toUInt8(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt8(0), 9007199254740991.000000000 != toUInt8(0), 9007199254740991.000000000 < toUInt8(0), 9007199254740991.000000000 <= toUInt8(0), 9007199254740991.000000000 > toUInt8(0), 9007199254740991.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740991.000000000, toInt8(0) != 9007199254740991.000000000, toInt8(0) < 9007199254740991.000000000, toInt8(0) <= 9007199254740991.000000000, toInt8(0) > 9007199254740991.000000000, toInt8(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt8(0), 9007199254740991.000000000 != toInt8(0), 9007199254740991.000000000 < toInt8(0), 9007199254740991.000000000 <= toInt8(0), 9007199254740991.000000000 > toInt8(0), 9007199254740991.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740991.000000000, toUInt16(0) != 9007199254740991.000000000, toUInt16(0) < 9007199254740991.000000000, toUInt16(0) <= 9007199254740991.000000000, toUInt16(0) > 9007199254740991.000000000, toUInt16(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt16(0), 9007199254740991.000000000 != toUInt16(0), 9007199254740991.000000000 < toUInt16(0), 9007199254740991.000000000 <= toUInt16(0), 9007199254740991.000000000 > toUInt16(0), 9007199254740991.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740991.000000000, toInt16(0) != 9007199254740991.000000000, toInt16(0) < 9007199254740991.000000000, toInt16(0) <= 9007199254740991.000000000, toInt16(0) > 9007199254740991.000000000, toInt16(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt16(0), 
9007199254740991.000000000 != toInt16(0), 9007199254740991.000000000 < toInt16(0), 9007199254740991.000000000 <= toInt16(0), 9007199254740991.000000000 > toInt16(0), 9007199254740991.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740991.000000000, toUInt32(0) != 9007199254740991.000000000, toUInt32(0) < 9007199254740991.000000000, toUInt32(0) <= 9007199254740991.000000000, toUInt32(0) > 9007199254740991.000000000, toUInt32(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt32(0), 9007199254740991.000000000 != toUInt32(0), 9007199254740991.000000000 < toUInt32(0), 9007199254740991.000000000 <= toUInt32(0), 9007199254740991.000000000 > toUInt32(0), 9007199254740991.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740991.000000000, toInt32(0) != 9007199254740991.000000000, toInt32(0) < 9007199254740991.000000000, toInt32(0) <= 9007199254740991.000000000, toInt32(0) > 9007199254740991.000000000, toInt32(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt32(0), 9007199254740991.000000000 != toInt32(0), 9007199254740991.000000000 < toInt32(0), 9007199254740991.000000000 <= toInt32(0), 9007199254740991.000000000 > toInt32(0), 9007199254740991.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740991.000000000, toUInt64(0) != 9007199254740991.000000000, toUInt64(0) < 9007199254740991.000000000, toUInt64(0) <= 9007199254740991.000000000, toUInt64(0) > 9007199254740991.000000000, toUInt64(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt64(0), 9007199254740991.000000000 != toUInt64(0), 9007199254740991.000000000 < toUInt64(0), 9007199254740991.000000000 <= toUInt64(0), 9007199254740991.000000000 > toUInt64(0), 9007199254740991.000000000 >= toUInt64(0) , toInt64(0) = 9007199254740991.000000000, toInt64(0) != 9007199254740991.000000000, toInt64(0) < 9007199254740991.000000000, toInt64(0) <= 9007199254740991.000000000, toInt64(0) > 9007199254740991.000000000, toInt64(0) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt64(0), 9007199254740991.000000000 != toInt64(0), 9007199254740991.000000000 < toInt64(0), 9007199254740991.000000000 <= toInt64(0), 9007199254740991.000000000 > toInt64(0), 9007199254740991.000000000 >= toInt64(0) ; +SELECT '0', '9007199254740992.000000000', 0 = 9007199254740992.000000000, 0 != 9007199254740992.000000000, 0 < 9007199254740992.000000000, 0 <= 9007199254740992.000000000, 0 > 9007199254740992.000000000, 0 >= 9007199254740992.000000000, 9007199254740992.000000000 = 0, 9007199254740992.000000000 != 0, 9007199254740992.000000000 < 0, 9007199254740992.000000000 <= 0, 9007199254740992.000000000 > 0, 9007199254740992.000000000 >= 0 , toUInt8(0) = 9007199254740992.000000000, toUInt8(0) != 9007199254740992.000000000, toUInt8(0) < 9007199254740992.000000000, toUInt8(0) <= 9007199254740992.000000000, toUInt8(0) > 9007199254740992.000000000, toUInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(0), 9007199254740992.000000000 != toUInt8(0), 9007199254740992.000000000 < toUInt8(0), 9007199254740992.000000000 <= toUInt8(0), 9007199254740992.000000000 > toUInt8(0), 9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740992.000000000, toInt8(0) != 9007199254740992.000000000, toInt8(0) < 9007199254740992.000000000, toInt8(0) <= 9007199254740992.000000000, toInt8(0) > 9007199254740992.000000000, toInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(0), 9007199254740992.000000000 != toInt8(0), 9007199254740992.000000000 < toInt8(0), 9007199254740992.000000000 
<= toInt8(0), 9007199254740992.000000000 > toInt8(0), 9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740992.000000000, toUInt16(0) != 9007199254740992.000000000, toUInt16(0) < 9007199254740992.000000000, toUInt16(0) <= 9007199254740992.000000000, toUInt16(0) > 9007199254740992.000000000, toUInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(0), 9007199254740992.000000000 != toUInt16(0), 9007199254740992.000000000 < toUInt16(0), 9007199254740992.000000000 <= toUInt16(0), 9007199254740992.000000000 > toUInt16(0), 9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740992.000000000, toInt16(0) != 9007199254740992.000000000, toInt16(0) < 9007199254740992.000000000, toInt16(0) <= 9007199254740992.000000000, toInt16(0) > 9007199254740992.000000000, toInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(0), 9007199254740992.000000000 != toInt16(0), 9007199254740992.000000000 < toInt16(0), 9007199254740992.000000000 <= toInt16(0), 9007199254740992.000000000 > toInt16(0), 9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740992.000000000, toUInt32(0) != 9007199254740992.000000000, toUInt32(0) < 9007199254740992.000000000, toUInt32(0) <= 9007199254740992.000000000, toUInt32(0) > 9007199254740992.000000000, toUInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(0), 9007199254740992.000000000 != toUInt32(0), 9007199254740992.000000000 < toUInt32(0), 9007199254740992.000000000 <= toUInt32(0), 9007199254740992.000000000 > toUInt32(0), 9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740992.000000000, toInt32(0) != 9007199254740992.000000000, toInt32(0) < 9007199254740992.000000000, toInt32(0) <= 9007199254740992.000000000, toInt32(0) > 9007199254740992.000000000, toInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(0), 9007199254740992.000000000 != toInt32(0), 9007199254740992.000000000 < toInt32(0), 9007199254740992.000000000 <= toInt32(0), 9007199254740992.000000000 > toInt32(0), 9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740992.000000000, toUInt64(0) != 9007199254740992.000000000, toUInt64(0) < 9007199254740992.000000000, toUInt64(0) <= 9007199254740992.000000000, toUInt64(0) > 9007199254740992.000000000, toUInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(0), 9007199254740992.000000000 != toUInt64(0), 9007199254740992.000000000 < toUInt64(0), 9007199254740992.000000000 <= toUInt64(0), 9007199254740992.000000000 > toUInt64(0), 9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = 9007199254740992.000000000, toInt64(0) != 9007199254740992.000000000, toInt64(0) < 9007199254740992.000000000, toInt64(0) <= 9007199254740992.000000000, toInt64(0) > 9007199254740992.000000000, toInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(0), 9007199254740992.000000000 != toInt64(0), 9007199254740992.000000000 < toInt64(0), 9007199254740992.000000000 <= toInt64(0), 9007199254740992.000000000 > toInt64(0), 9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '9007199254740992.000000000', 0 = 9007199254740992.000000000, 0 != 9007199254740992.000000000, 0 < 9007199254740992.000000000, 0 <= 9007199254740992.000000000, 0 > 9007199254740992.000000000, 0 >= 9007199254740992.000000000, 9007199254740992.000000000 = 0, 9007199254740992.000000000 != 0, 9007199254740992.000000000 < 0, 9007199254740992.000000000 <= 0, 9007199254740992.000000000 > 0, 9007199254740992.000000000 
>= 0 , toUInt8(0) = 9007199254740992.000000000, toUInt8(0) != 9007199254740992.000000000, toUInt8(0) < 9007199254740992.000000000, toUInt8(0) <= 9007199254740992.000000000, toUInt8(0) > 9007199254740992.000000000, toUInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(0), 9007199254740992.000000000 != toUInt8(0), 9007199254740992.000000000 < toUInt8(0), 9007199254740992.000000000 <= toUInt8(0), 9007199254740992.000000000 > toUInt8(0), 9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740992.000000000, toInt8(0) != 9007199254740992.000000000, toInt8(0) < 9007199254740992.000000000, toInt8(0) <= 9007199254740992.000000000, toInt8(0) > 9007199254740992.000000000, toInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(0), 9007199254740992.000000000 != toInt8(0), 9007199254740992.000000000 < toInt8(0), 9007199254740992.000000000 <= toInt8(0), 9007199254740992.000000000 > toInt8(0), 9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740992.000000000, toUInt16(0) != 9007199254740992.000000000, toUInt16(0) < 9007199254740992.000000000, toUInt16(0) <= 9007199254740992.000000000, toUInt16(0) > 9007199254740992.000000000, toUInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(0), 9007199254740992.000000000 != toUInt16(0), 9007199254740992.000000000 < toUInt16(0), 9007199254740992.000000000 <= toUInt16(0), 9007199254740992.000000000 > toUInt16(0), 9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740992.000000000, toInt16(0) != 9007199254740992.000000000, toInt16(0) < 9007199254740992.000000000, toInt16(0) <= 9007199254740992.000000000, toInt16(0) > 9007199254740992.000000000, toInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(0), 9007199254740992.000000000 != toInt16(0), 9007199254740992.000000000 < toInt16(0), 9007199254740992.000000000 <= toInt16(0), 9007199254740992.000000000 > toInt16(0), 9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740992.000000000, toUInt32(0) != 9007199254740992.000000000, toUInt32(0) < 9007199254740992.000000000, toUInt32(0) <= 9007199254740992.000000000, toUInt32(0) > 9007199254740992.000000000, toUInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(0), 9007199254740992.000000000 != toUInt32(0), 9007199254740992.000000000 < toUInt32(0), 9007199254740992.000000000 <= toUInt32(0), 9007199254740992.000000000 > toUInt32(0), 9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740992.000000000, toInt32(0) != 9007199254740992.000000000, toInt32(0) < 9007199254740992.000000000, toInt32(0) <= 9007199254740992.000000000, toInt32(0) > 9007199254740992.000000000, toInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(0), 9007199254740992.000000000 != toInt32(0), 9007199254740992.000000000 < toInt32(0), 9007199254740992.000000000 <= toInt32(0), 9007199254740992.000000000 > toInt32(0), 9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740992.000000000, toUInt64(0) != 9007199254740992.000000000, toUInt64(0) < 9007199254740992.000000000, toUInt64(0) <= 9007199254740992.000000000, toUInt64(0) > 9007199254740992.000000000, toUInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(0), 9007199254740992.000000000 != toUInt64(0), 9007199254740992.000000000 < toUInt64(0), 9007199254740992.000000000 <= toUInt64(0), 9007199254740992.000000000 > toUInt64(0), 9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = 
9007199254740992.000000000, toInt64(0) != 9007199254740992.000000000, toInt64(0) < 9007199254740992.000000000, toInt64(0) <= 9007199254740992.000000000, toInt64(0) > 9007199254740992.000000000, toInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(0), 9007199254740992.000000000 != toInt64(0), 9007199254740992.000000000 < toInt64(0), 9007199254740992.000000000 <= toInt64(0), 9007199254740992.000000000 > toInt64(0), 9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '9007199254740994.000000000', 0 = 9007199254740994.000000000, 0 != 9007199254740994.000000000, 0 < 9007199254740994.000000000, 0 <= 9007199254740994.000000000, 0 > 9007199254740994.000000000, 0 >= 9007199254740994.000000000, 9007199254740994.000000000 = 0, 9007199254740994.000000000 != 0, 9007199254740994.000000000 < 0, 9007199254740994.000000000 <= 0, 9007199254740994.000000000 > 0, 9007199254740994.000000000 >= 0 , toUInt8(0) = 9007199254740994.000000000, toUInt8(0) != 9007199254740994.000000000, toUInt8(0) < 9007199254740994.000000000, toUInt8(0) <= 9007199254740994.000000000, toUInt8(0) > 9007199254740994.000000000, toUInt8(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt8(0), 9007199254740994.000000000 != toUInt8(0), 9007199254740994.000000000 < toUInt8(0), 9007199254740994.000000000 <= toUInt8(0), 9007199254740994.000000000 > toUInt8(0), 9007199254740994.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740994.000000000, toInt8(0) != 9007199254740994.000000000, toInt8(0) < 9007199254740994.000000000, toInt8(0) <= 9007199254740994.000000000, toInt8(0) > 9007199254740994.000000000, toInt8(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt8(0), 9007199254740994.000000000 != toInt8(0), 9007199254740994.000000000 < toInt8(0), 9007199254740994.000000000 <= toInt8(0), 9007199254740994.000000000 > toInt8(0), 9007199254740994.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740994.000000000, toUInt16(0) != 9007199254740994.000000000, toUInt16(0) < 9007199254740994.000000000, toUInt16(0) <= 9007199254740994.000000000, toUInt16(0) > 9007199254740994.000000000, toUInt16(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt16(0), 9007199254740994.000000000 != toUInt16(0), 9007199254740994.000000000 < toUInt16(0), 9007199254740994.000000000 <= toUInt16(0), 9007199254740994.000000000 > toUInt16(0), 9007199254740994.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740994.000000000, toInt16(0) != 9007199254740994.000000000, toInt16(0) < 9007199254740994.000000000, toInt16(0) <= 9007199254740994.000000000, toInt16(0) > 9007199254740994.000000000, toInt16(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt16(0), 9007199254740994.000000000 != toInt16(0), 9007199254740994.000000000 < toInt16(0), 9007199254740994.000000000 <= toInt16(0), 9007199254740994.000000000 > toInt16(0), 9007199254740994.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740994.000000000, toUInt32(0) != 9007199254740994.000000000, toUInt32(0) < 9007199254740994.000000000, toUInt32(0) <= 9007199254740994.000000000, toUInt32(0) > 9007199254740994.000000000, toUInt32(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt32(0), 9007199254740994.000000000 != toUInt32(0), 9007199254740994.000000000 < toUInt32(0), 9007199254740994.000000000 <= toUInt32(0), 9007199254740994.000000000 > toUInt32(0), 9007199254740994.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740994.000000000, toInt32(0) != 9007199254740994.000000000, toInt32(0) < 9007199254740994.000000000, 
toInt32(0) <= 9007199254740994.000000000, toInt32(0) > 9007199254740994.000000000, toInt32(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt32(0), 9007199254740994.000000000 != toInt32(0), 9007199254740994.000000000 < toInt32(0), 9007199254740994.000000000 <= toInt32(0), 9007199254740994.000000000 > toInt32(0), 9007199254740994.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740994.000000000, toUInt64(0) != 9007199254740994.000000000, toUInt64(0) < 9007199254740994.000000000, toUInt64(0) <= 9007199254740994.000000000, toUInt64(0) > 9007199254740994.000000000, toUInt64(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt64(0), 9007199254740994.000000000 != toUInt64(0), 9007199254740994.000000000 < toUInt64(0), 9007199254740994.000000000 <= toUInt64(0), 9007199254740994.000000000 > toUInt64(0), 9007199254740994.000000000 >= toUInt64(0) , toInt64(0) = 9007199254740994.000000000, toInt64(0) != 9007199254740994.000000000, toInt64(0) < 9007199254740994.000000000, toInt64(0) <= 9007199254740994.000000000, toInt64(0) > 9007199254740994.000000000, toInt64(0) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt64(0), 9007199254740994.000000000 != toInt64(0), 9007199254740994.000000000 < toInt64(0), 9007199254740994.000000000 <= toInt64(0), 9007199254740994.000000000 > toInt64(0), 9007199254740994.000000000 >= toInt64(0) ; +SELECT '0', '-9007199254740991.000000000', 0 = -9007199254740991.000000000, 0 != -9007199254740991.000000000, 0 < -9007199254740991.000000000, 0 <= -9007199254740991.000000000, 0 > -9007199254740991.000000000, 0 >= -9007199254740991.000000000, -9007199254740991.000000000 = 0, -9007199254740991.000000000 != 0, -9007199254740991.000000000 < 0, -9007199254740991.000000000 <= 0, -9007199254740991.000000000 > 0, -9007199254740991.000000000 >= 0 , toUInt8(0) = -9007199254740991.000000000, toUInt8(0) != -9007199254740991.000000000, toUInt8(0) < -9007199254740991.000000000, toUInt8(0) <= -9007199254740991.000000000, toUInt8(0) > -9007199254740991.000000000, toUInt8(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt8(0), -9007199254740991.000000000 != toUInt8(0), -9007199254740991.000000000 < toUInt8(0), -9007199254740991.000000000 <= toUInt8(0), -9007199254740991.000000000 > toUInt8(0), -9007199254740991.000000000 >= toUInt8(0) , toInt8(0) = -9007199254740991.000000000, toInt8(0) != -9007199254740991.000000000, toInt8(0) < -9007199254740991.000000000, toInt8(0) <= -9007199254740991.000000000, toInt8(0) > -9007199254740991.000000000, toInt8(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt8(0), -9007199254740991.000000000 != toInt8(0), -9007199254740991.000000000 < toInt8(0), -9007199254740991.000000000 <= toInt8(0), -9007199254740991.000000000 > toInt8(0), -9007199254740991.000000000 >= toInt8(0) , toUInt16(0) = -9007199254740991.000000000, toUInt16(0) != -9007199254740991.000000000, toUInt16(0) < -9007199254740991.000000000, toUInt16(0) <= -9007199254740991.000000000, toUInt16(0) > -9007199254740991.000000000, toUInt16(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt16(0), -9007199254740991.000000000 != toUInt16(0), -9007199254740991.000000000 < toUInt16(0), -9007199254740991.000000000 <= toUInt16(0), -9007199254740991.000000000 > toUInt16(0), -9007199254740991.000000000 >= toUInt16(0) , toInt16(0) = -9007199254740991.000000000, toInt16(0) != -9007199254740991.000000000, toInt16(0) < -9007199254740991.000000000, toInt16(0) <= -9007199254740991.000000000, toInt16(0) > 
-9007199254740991.000000000, toInt16(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt16(0), -9007199254740991.000000000 != toInt16(0), -9007199254740991.000000000 < toInt16(0), -9007199254740991.000000000 <= toInt16(0), -9007199254740991.000000000 > toInt16(0), -9007199254740991.000000000 >= toInt16(0) , toUInt32(0) = -9007199254740991.000000000, toUInt32(0) != -9007199254740991.000000000, toUInt32(0) < -9007199254740991.000000000, toUInt32(0) <= -9007199254740991.000000000, toUInt32(0) > -9007199254740991.000000000, toUInt32(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt32(0), -9007199254740991.000000000 != toUInt32(0), -9007199254740991.000000000 < toUInt32(0), -9007199254740991.000000000 <= toUInt32(0), -9007199254740991.000000000 > toUInt32(0), -9007199254740991.000000000 >= toUInt32(0) , toInt32(0) = -9007199254740991.000000000, toInt32(0) != -9007199254740991.000000000, toInt32(0) < -9007199254740991.000000000, toInt32(0) <= -9007199254740991.000000000, toInt32(0) > -9007199254740991.000000000, toInt32(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt32(0), -9007199254740991.000000000 != toInt32(0), -9007199254740991.000000000 < toInt32(0), -9007199254740991.000000000 <= toInt32(0), -9007199254740991.000000000 > toInt32(0), -9007199254740991.000000000 >= toInt32(0) , toUInt64(0) = -9007199254740991.000000000, toUInt64(0) != -9007199254740991.000000000, toUInt64(0) < -9007199254740991.000000000, toUInt64(0) <= -9007199254740991.000000000, toUInt64(0) > -9007199254740991.000000000, toUInt64(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt64(0), -9007199254740991.000000000 != toUInt64(0), -9007199254740991.000000000 < toUInt64(0), -9007199254740991.000000000 <= toUInt64(0), -9007199254740991.000000000 > toUInt64(0), -9007199254740991.000000000 >= toUInt64(0) , toInt64(0) = -9007199254740991.000000000, toInt64(0) != -9007199254740991.000000000, toInt64(0) < -9007199254740991.000000000, toInt64(0) <= -9007199254740991.000000000, toInt64(0) > -9007199254740991.000000000, toInt64(0) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt64(0), -9007199254740991.000000000 != toInt64(0), -9007199254740991.000000000 < toInt64(0), -9007199254740991.000000000 <= toInt64(0), -9007199254740991.000000000 > toInt64(0), -9007199254740991.000000000 >= toInt64(0) ; +SELECT '0', '-9007199254740992.000000000', 0 = -9007199254740992.000000000, 0 != -9007199254740992.000000000, 0 < -9007199254740992.000000000, 0 <= -9007199254740992.000000000, 0 > -9007199254740992.000000000, 0 >= -9007199254740992.000000000, -9007199254740992.000000000 = 0, -9007199254740992.000000000 != 0, -9007199254740992.000000000 < 0, -9007199254740992.000000000 <= 0, -9007199254740992.000000000 > 0, -9007199254740992.000000000 >= 0 , toUInt8(0) = -9007199254740992.000000000, toUInt8(0) != -9007199254740992.000000000, toUInt8(0) < -9007199254740992.000000000, toUInt8(0) <= -9007199254740992.000000000, toUInt8(0) > -9007199254740992.000000000, toUInt8(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt8(0), -9007199254740992.000000000 != toUInt8(0), -9007199254740992.000000000 < toUInt8(0), -9007199254740992.000000000 <= toUInt8(0), -9007199254740992.000000000 > toUInt8(0), -9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = -9007199254740992.000000000, toInt8(0) != -9007199254740992.000000000, toInt8(0) < -9007199254740992.000000000, toInt8(0) <= -9007199254740992.000000000, toInt8(0) > 
-9007199254740992.000000000, toInt8(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(0), -9007199254740992.000000000 != toInt8(0), -9007199254740992.000000000 < toInt8(0), -9007199254740992.000000000 <= toInt8(0), -9007199254740992.000000000 > toInt8(0), -9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = -9007199254740992.000000000, toUInt16(0) != -9007199254740992.000000000, toUInt16(0) < -9007199254740992.000000000, toUInt16(0) <= -9007199254740992.000000000, toUInt16(0) > -9007199254740992.000000000, toUInt16(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt16(0), -9007199254740992.000000000 != toUInt16(0), -9007199254740992.000000000 < toUInt16(0), -9007199254740992.000000000 <= toUInt16(0), -9007199254740992.000000000 > toUInt16(0), -9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = -9007199254740992.000000000, toInt16(0) != -9007199254740992.000000000, toInt16(0) < -9007199254740992.000000000, toInt16(0) <= -9007199254740992.000000000, toInt16(0) > -9007199254740992.000000000, toInt16(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(0), -9007199254740992.000000000 != toInt16(0), -9007199254740992.000000000 < toInt16(0), -9007199254740992.000000000 <= toInt16(0), -9007199254740992.000000000 > toInt16(0), -9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = -9007199254740992.000000000, toUInt32(0) != -9007199254740992.000000000, toUInt32(0) < -9007199254740992.000000000, toUInt32(0) <= -9007199254740992.000000000, toUInt32(0) > -9007199254740992.000000000, toUInt32(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt32(0), -9007199254740992.000000000 != toUInt32(0), -9007199254740992.000000000 < toUInt32(0), -9007199254740992.000000000 <= toUInt32(0), -9007199254740992.000000000 > toUInt32(0), -9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = -9007199254740992.000000000, toInt32(0) != -9007199254740992.000000000, toInt32(0) < -9007199254740992.000000000, toInt32(0) <= -9007199254740992.000000000, toInt32(0) > -9007199254740992.000000000, toInt32(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(0), -9007199254740992.000000000 != toInt32(0), -9007199254740992.000000000 < toInt32(0), -9007199254740992.000000000 <= toInt32(0), -9007199254740992.000000000 > toInt32(0), -9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = -9007199254740992.000000000, toUInt64(0) != -9007199254740992.000000000, toUInt64(0) < -9007199254740992.000000000, toUInt64(0) <= -9007199254740992.000000000, toUInt64(0) > -9007199254740992.000000000, toUInt64(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(0), -9007199254740992.000000000 != toUInt64(0), -9007199254740992.000000000 < toUInt64(0), -9007199254740992.000000000 <= toUInt64(0), -9007199254740992.000000000 > toUInt64(0), -9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = -9007199254740992.000000000, toInt64(0) != -9007199254740992.000000000, toInt64(0) < -9007199254740992.000000000, toInt64(0) <= -9007199254740992.000000000, toInt64(0) > -9007199254740992.000000000, toInt64(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(0), -9007199254740992.000000000 != toInt64(0), -9007199254740992.000000000 < toInt64(0), -9007199254740992.000000000 <= toInt64(0), -9007199254740992.000000000 > toInt64(0), -9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '-9007199254740992.000000000', 0 = -9007199254740992.000000000, 0 != -9007199254740992.000000000, 0 < 
-9007199254740992.000000000, 0 <= -9007199254740992.000000000, 0 > -9007199254740992.000000000, 0 >= -9007199254740992.000000000, -9007199254740992.000000000 = 0, -9007199254740992.000000000 != 0, -9007199254740992.000000000 < 0, -9007199254740992.000000000 <= 0, -9007199254740992.000000000 > 0, -9007199254740992.000000000 >= 0 , toUInt8(0) = -9007199254740992.000000000, toUInt8(0) != -9007199254740992.000000000, toUInt8(0) < -9007199254740992.000000000, toUInt8(0) <= -9007199254740992.000000000, toUInt8(0) > -9007199254740992.000000000, toUInt8(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt8(0), -9007199254740992.000000000 != toUInt8(0), -9007199254740992.000000000 < toUInt8(0), -9007199254740992.000000000 <= toUInt8(0), -9007199254740992.000000000 > toUInt8(0), -9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = -9007199254740992.000000000, toInt8(0) != -9007199254740992.000000000, toInt8(0) < -9007199254740992.000000000, toInt8(0) <= -9007199254740992.000000000, toInt8(0) > -9007199254740992.000000000, toInt8(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(0), -9007199254740992.000000000 != toInt8(0), -9007199254740992.000000000 < toInt8(0), -9007199254740992.000000000 <= toInt8(0), -9007199254740992.000000000 > toInt8(0), -9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = -9007199254740992.000000000, toUInt16(0) != -9007199254740992.000000000, toUInt16(0) < -9007199254740992.000000000, toUInt16(0) <= -9007199254740992.000000000, toUInt16(0) > -9007199254740992.000000000, toUInt16(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt16(0), -9007199254740992.000000000 != toUInt16(0), -9007199254740992.000000000 < toUInt16(0), -9007199254740992.000000000 <= toUInt16(0), -9007199254740992.000000000 > toUInt16(0), -9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = -9007199254740992.000000000, toInt16(0) != -9007199254740992.000000000, toInt16(0) < -9007199254740992.000000000, toInt16(0) <= -9007199254740992.000000000, toInt16(0) > -9007199254740992.000000000, toInt16(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(0), -9007199254740992.000000000 != toInt16(0), -9007199254740992.000000000 < toInt16(0), -9007199254740992.000000000 <= toInt16(0), -9007199254740992.000000000 > toInt16(0), -9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = -9007199254740992.000000000, toUInt32(0) != -9007199254740992.000000000, toUInt32(0) < -9007199254740992.000000000, toUInt32(0) <= -9007199254740992.000000000, toUInt32(0) > -9007199254740992.000000000, toUInt32(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt32(0), -9007199254740992.000000000 != toUInt32(0), -9007199254740992.000000000 < toUInt32(0), -9007199254740992.000000000 <= toUInt32(0), -9007199254740992.000000000 > toUInt32(0), -9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = -9007199254740992.000000000, toInt32(0) != -9007199254740992.000000000, toInt32(0) < -9007199254740992.000000000, toInt32(0) <= -9007199254740992.000000000, toInt32(0) > -9007199254740992.000000000, toInt32(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(0), -9007199254740992.000000000 != toInt32(0), -9007199254740992.000000000 < toInt32(0), -9007199254740992.000000000 <= toInt32(0), -9007199254740992.000000000 > toInt32(0), -9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = -9007199254740992.000000000, toUInt64(0) != -9007199254740992.000000000, toUInt64(0) < -9007199254740992.000000000, toUInt64(0) <= 
-9007199254740992.000000000, toUInt64(0) > -9007199254740992.000000000, toUInt64(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(0), -9007199254740992.000000000 != toUInt64(0), -9007199254740992.000000000 < toUInt64(0), -9007199254740992.000000000 <= toUInt64(0), -9007199254740992.000000000 > toUInt64(0), -9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = -9007199254740992.000000000, toInt64(0) != -9007199254740992.000000000, toInt64(0) < -9007199254740992.000000000, toInt64(0) <= -9007199254740992.000000000, toInt64(0) > -9007199254740992.000000000, toInt64(0) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(0), -9007199254740992.000000000 != toInt64(0), -9007199254740992.000000000 < toInt64(0), -9007199254740992.000000000 <= toInt64(0), -9007199254740992.000000000 > toInt64(0), -9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '-9007199254740994.000000000', 0 = -9007199254740994.000000000, 0 != -9007199254740994.000000000, 0 < -9007199254740994.000000000, 0 <= -9007199254740994.000000000, 0 > -9007199254740994.000000000, 0 >= -9007199254740994.000000000, -9007199254740994.000000000 = 0, -9007199254740994.000000000 != 0, -9007199254740994.000000000 < 0, -9007199254740994.000000000 <= 0, -9007199254740994.000000000 > 0, -9007199254740994.000000000 >= 0 , toUInt8(0) = -9007199254740994.000000000, toUInt8(0) != -9007199254740994.000000000, toUInt8(0) < -9007199254740994.000000000, toUInt8(0) <= -9007199254740994.000000000, toUInt8(0) > -9007199254740994.000000000, toUInt8(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt8(0), -9007199254740994.000000000 != toUInt8(0), -9007199254740994.000000000 < toUInt8(0), -9007199254740994.000000000 <= toUInt8(0), -9007199254740994.000000000 > toUInt8(0), -9007199254740994.000000000 >= toUInt8(0) , toInt8(0) = -9007199254740994.000000000, toInt8(0) != -9007199254740994.000000000, toInt8(0) < -9007199254740994.000000000, toInt8(0) <= -9007199254740994.000000000, toInt8(0) > -9007199254740994.000000000, toInt8(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt8(0), -9007199254740994.000000000 != toInt8(0), -9007199254740994.000000000 < toInt8(0), -9007199254740994.000000000 <= toInt8(0), -9007199254740994.000000000 > toInt8(0), -9007199254740994.000000000 >= toInt8(0) , toUInt16(0) = -9007199254740994.000000000, toUInt16(0) != -9007199254740994.000000000, toUInt16(0) < -9007199254740994.000000000, toUInt16(0) <= -9007199254740994.000000000, toUInt16(0) > -9007199254740994.000000000, toUInt16(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt16(0), -9007199254740994.000000000 != toUInt16(0), -9007199254740994.000000000 < toUInt16(0), -9007199254740994.000000000 <= toUInt16(0), -9007199254740994.000000000 > toUInt16(0), -9007199254740994.000000000 >= toUInt16(0) , toInt16(0) = -9007199254740994.000000000, toInt16(0) != -9007199254740994.000000000, toInt16(0) < -9007199254740994.000000000, toInt16(0) <= -9007199254740994.000000000, toInt16(0) > -9007199254740994.000000000, toInt16(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt16(0), -9007199254740994.000000000 != toInt16(0), -9007199254740994.000000000 < toInt16(0), -9007199254740994.000000000 <= toInt16(0), -9007199254740994.000000000 > toInt16(0), -9007199254740994.000000000 >= toInt16(0) , toUInt32(0) = -9007199254740994.000000000, toUInt32(0) != -9007199254740994.000000000, toUInt32(0) < -9007199254740994.000000000, toUInt32(0) <= -9007199254740994.000000000, 
toUInt32(0) > -9007199254740994.000000000, toUInt32(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt32(0), -9007199254740994.000000000 != toUInt32(0), -9007199254740994.000000000 < toUInt32(0), -9007199254740994.000000000 <= toUInt32(0), -9007199254740994.000000000 > toUInt32(0), -9007199254740994.000000000 >= toUInt32(0) , toInt32(0) = -9007199254740994.000000000, toInt32(0) != -9007199254740994.000000000, toInt32(0) < -9007199254740994.000000000, toInt32(0) <= -9007199254740994.000000000, toInt32(0) > -9007199254740994.000000000, toInt32(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt32(0), -9007199254740994.000000000 != toInt32(0), -9007199254740994.000000000 < toInt32(0), -9007199254740994.000000000 <= toInt32(0), -9007199254740994.000000000 > toInt32(0), -9007199254740994.000000000 >= toInt32(0) , toUInt64(0) = -9007199254740994.000000000, toUInt64(0) != -9007199254740994.000000000, toUInt64(0) < -9007199254740994.000000000, toUInt64(0) <= -9007199254740994.000000000, toUInt64(0) > -9007199254740994.000000000, toUInt64(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt64(0), -9007199254740994.000000000 != toUInt64(0), -9007199254740994.000000000 < toUInt64(0), -9007199254740994.000000000 <= toUInt64(0), -9007199254740994.000000000 > toUInt64(0), -9007199254740994.000000000 >= toUInt64(0) , toInt64(0) = -9007199254740994.000000000, toInt64(0) != -9007199254740994.000000000, toInt64(0) < -9007199254740994.000000000, toInt64(0) <= -9007199254740994.000000000, toInt64(0) > -9007199254740994.000000000, toInt64(0) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt64(0), -9007199254740994.000000000 != toInt64(0), -9007199254740994.000000000 < toInt64(0), -9007199254740994.000000000 <= toInt64(0), -9007199254740994.000000000 > toInt64(0), -9007199254740994.000000000 >= toInt64(0) ; +SELECT '0', '104.000000000', 0 = 104.000000000, 0 != 104.000000000, 0 < 104.000000000, 0 <= 104.000000000, 0 > 104.000000000, 0 >= 104.000000000, 104.000000000 = 0, 104.000000000 != 0, 104.000000000 < 0, 104.000000000 <= 0, 104.000000000 > 0, 104.000000000 >= 0 , toUInt8(0) = 104.000000000, toUInt8(0) != 104.000000000, toUInt8(0) < 104.000000000, toUInt8(0) <= 104.000000000, toUInt8(0) > 104.000000000, toUInt8(0) >= 104.000000000, 104.000000000 = toUInt8(0), 104.000000000 != toUInt8(0), 104.000000000 < toUInt8(0), 104.000000000 <= toUInt8(0), 104.000000000 > toUInt8(0), 104.000000000 >= toUInt8(0) , toInt8(0) = 104.000000000, toInt8(0) != 104.000000000, toInt8(0) < 104.000000000, toInt8(0) <= 104.000000000, toInt8(0) > 104.000000000, toInt8(0) >= 104.000000000, 104.000000000 = toInt8(0), 104.000000000 != toInt8(0), 104.000000000 < toInt8(0), 104.000000000 <= toInt8(0), 104.000000000 > toInt8(0), 104.000000000 >= toInt8(0) , toUInt16(0) = 104.000000000, toUInt16(0) != 104.000000000, toUInt16(0) < 104.000000000, toUInt16(0) <= 104.000000000, toUInt16(0) > 104.000000000, toUInt16(0) >= 104.000000000, 104.000000000 = toUInt16(0), 104.000000000 != toUInt16(0), 104.000000000 < toUInt16(0), 104.000000000 <= toUInt16(0), 104.000000000 > toUInt16(0), 104.000000000 >= toUInt16(0) , toInt16(0) = 104.000000000, toInt16(0) != 104.000000000, toInt16(0) < 104.000000000, toInt16(0) <= 104.000000000, toInt16(0) > 104.000000000, toInt16(0) >= 104.000000000, 104.000000000 = toInt16(0), 104.000000000 != toInt16(0), 104.000000000 < toInt16(0), 104.000000000 <= toInt16(0), 104.000000000 > toInt16(0), 104.000000000 >= toInt16(0) , toUInt32(0) = 
104.000000000, toUInt32(0) != 104.000000000, toUInt32(0) < 104.000000000, toUInt32(0) <= 104.000000000, toUInt32(0) > 104.000000000, toUInt32(0) >= 104.000000000, 104.000000000 = toUInt32(0), 104.000000000 != toUInt32(0), 104.000000000 < toUInt32(0), 104.000000000 <= toUInt32(0), 104.000000000 > toUInt32(0), 104.000000000 >= toUInt32(0) , toInt32(0) = 104.000000000, toInt32(0) != 104.000000000, toInt32(0) < 104.000000000, toInt32(0) <= 104.000000000, toInt32(0) > 104.000000000, toInt32(0) >= 104.000000000, 104.000000000 = toInt32(0), 104.000000000 != toInt32(0), 104.000000000 < toInt32(0), 104.000000000 <= toInt32(0), 104.000000000 > toInt32(0), 104.000000000 >= toInt32(0) , toUInt64(0) = 104.000000000, toUInt64(0) != 104.000000000, toUInt64(0) < 104.000000000, toUInt64(0) <= 104.000000000, toUInt64(0) > 104.000000000, toUInt64(0) >= 104.000000000, 104.000000000 = toUInt64(0), 104.000000000 != toUInt64(0), 104.000000000 < toUInt64(0), 104.000000000 <= toUInt64(0), 104.000000000 > toUInt64(0), 104.000000000 >= toUInt64(0) , toInt64(0) = 104.000000000, toInt64(0) != 104.000000000, toInt64(0) < 104.000000000, toInt64(0) <= 104.000000000, toInt64(0) > 104.000000000, toInt64(0) >= 104.000000000, 104.000000000 = toInt64(0), 104.000000000 != toInt64(0), 104.000000000 < toInt64(0), 104.000000000 <= toInt64(0), 104.000000000 > toInt64(0), 104.000000000 >= toInt64(0) ; +SELECT '0', '-4503599627370496.000000000', 0 = -4503599627370496.000000000, 0 != -4503599627370496.000000000, 0 < -4503599627370496.000000000, 0 <= -4503599627370496.000000000, 0 > -4503599627370496.000000000, 0 >= -4503599627370496.000000000, -4503599627370496.000000000 = 0, -4503599627370496.000000000 != 0, -4503599627370496.000000000 < 0, -4503599627370496.000000000 <= 0, -4503599627370496.000000000 > 0, -4503599627370496.000000000 >= 0 , toUInt8(0) = -4503599627370496.000000000, toUInt8(0) != -4503599627370496.000000000, toUInt8(0) < -4503599627370496.000000000, toUInt8(0) <= -4503599627370496.000000000, toUInt8(0) > -4503599627370496.000000000, toUInt8(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt8(0), -4503599627370496.000000000 != toUInt8(0), -4503599627370496.000000000 < toUInt8(0), -4503599627370496.000000000 <= toUInt8(0), -4503599627370496.000000000 > toUInt8(0), -4503599627370496.000000000 >= toUInt8(0) , toInt8(0) = -4503599627370496.000000000, toInt8(0) != -4503599627370496.000000000, toInt8(0) < -4503599627370496.000000000, toInt8(0) <= -4503599627370496.000000000, toInt8(0) > -4503599627370496.000000000, toInt8(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt8(0), -4503599627370496.000000000 != toInt8(0), -4503599627370496.000000000 < toInt8(0), -4503599627370496.000000000 <= toInt8(0), -4503599627370496.000000000 > toInt8(0), -4503599627370496.000000000 >= toInt8(0) , toUInt16(0) = -4503599627370496.000000000, toUInt16(0) != -4503599627370496.000000000, toUInt16(0) < -4503599627370496.000000000, toUInt16(0) <= -4503599627370496.000000000, toUInt16(0) > -4503599627370496.000000000, toUInt16(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt16(0), -4503599627370496.000000000 != toUInt16(0), -4503599627370496.000000000 < toUInt16(0), -4503599627370496.000000000 <= toUInt16(0), -4503599627370496.000000000 > toUInt16(0), -4503599627370496.000000000 >= toUInt16(0) , toInt16(0) = -4503599627370496.000000000, toInt16(0) != -4503599627370496.000000000, toInt16(0) < -4503599627370496.000000000, toInt16(0) <= -4503599627370496.000000000, toInt16(0) > 
-4503599627370496.000000000, toInt16(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt16(0), -4503599627370496.000000000 != toInt16(0), -4503599627370496.000000000 < toInt16(0), -4503599627370496.000000000 <= toInt16(0), -4503599627370496.000000000 > toInt16(0), -4503599627370496.000000000 >= toInt16(0) , toUInt32(0) = -4503599627370496.000000000, toUInt32(0) != -4503599627370496.000000000, toUInt32(0) < -4503599627370496.000000000, toUInt32(0) <= -4503599627370496.000000000, toUInt32(0) > -4503599627370496.000000000, toUInt32(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt32(0), -4503599627370496.000000000 != toUInt32(0), -4503599627370496.000000000 < toUInt32(0), -4503599627370496.000000000 <= toUInt32(0), -4503599627370496.000000000 > toUInt32(0), -4503599627370496.000000000 >= toUInt32(0) , toInt32(0) = -4503599627370496.000000000, toInt32(0) != -4503599627370496.000000000, toInt32(0) < -4503599627370496.000000000, toInt32(0) <= -4503599627370496.000000000, toInt32(0) > -4503599627370496.000000000, toInt32(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt32(0), -4503599627370496.000000000 != toInt32(0), -4503599627370496.000000000 < toInt32(0), -4503599627370496.000000000 <= toInt32(0), -4503599627370496.000000000 > toInt32(0), -4503599627370496.000000000 >= toInt32(0) , toUInt64(0) = -4503599627370496.000000000, toUInt64(0) != -4503599627370496.000000000, toUInt64(0) < -4503599627370496.000000000, toUInt64(0) <= -4503599627370496.000000000, toUInt64(0) > -4503599627370496.000000000, toUInt64(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt64(0), -4503599627370496.000000000 != toUInt64(0), -4503599627370496.000000000 < toUInt64(0), -4503599627370496.000000000 <= toUInt64(0), -4503599627370496.000000000 > toUInt64(0), -4503599627370496.000000000 >= toUInt64(0) , toInt64(0) = -4503599627370496.000000000, toInt64(0) != -4503599627370496.000000000, toInt64(0) < -4503599627370496.000000000, toInt64(0) <= -4503599627370496.000000000, toInt64(0) > -4503599627370496.000000000, toInt64(0) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt64(0), -4503599627370496.000000000 != toInt64(0), -4503599627370496.000000000 < toInt64(0), -4503599627370496.000000000 <= toInt64(0), -4503599627370496.000000000 > toInt64(0), -4503599627370496.000000000 >= toInt64(0) ; +SELECT '0', '-0.500000000', 0 = -0.500000000, 0 != -0.500000000, 0 < -0.500000000, 0 <= -0.500000000, 0 > -0.500000000, 0 >= -0.500000000, -0.500000000 = 0, -0.500000000 != 0, -0.500000000 < 0, -0.500000000 <= 0, -0.500000000 > 0, -0.500000000 >= 0 , toUInt8(0) = -0.500000000, toUInt8(0) != -0.500000000, toUInt8(0) < -0.500000000, toUInt8(0) <= -0.500000000, toUInt8(0) > -0.500000000, toUInt8(0) >= -0.500000000, -0.500000000 = toUInt8(0), -0.500000000 != toUInt8(0), -0.500000000 < toUInt8(0), -0.500000000 <= toUInt8(0), -0.500000000 > toUInt8(0), -0.500000000 >= toUInt8(0) , toInt8(0) = -0.500000000, toInt8(0) != -0.500000000, toInt8(0) < -0.500000000, toInt8(0) <= -0.500000000, toInt8(0) > -0.500000000, toInt8(0) >= -0.500000000, -0.500000000 = toInt8(0), -0.500000000 != toInt8(0), -0.500000000 < toInt8(0), -0.500000000 <= toInt8(0), -0.500000000 > toInt8(0), -0.500000000 >= toInt8(0) , toUInt16(0) = -0.500000000, toUInt16(0) != -0.500000000, toUInt16(0) < -0.500000000, toUInt16(0) <= -0.500000000, toUInt16(0) > -0.500000000, toUInt16(0) >= -0.500000000, -0.500000000 = toUInt16(0), -0.500000000 != toUInt16(0), -0.500000000 < toUInt16(0), 
-0.500000000 <= toUInt16(0), -0.500000000 > toUInt16(0), -0.500000000 >= toUInt16(0) , toInt16(0) = -0.500000000, toInt16(0) != -0.500000000, toInt16(0) < -0.500000000, toInt16(0) <= -0.500000000, toInt16(0) > -0.500000000, toInt16(0) >= -0.500000000, -0.500000000 = toInt16(0), -0.500000000 != toInt16(0), -0.500000000 < toInt16(0), -0.500000000 <= toInt16(0), -0.500000000 > toInt16(0), -0.500000000 >= toInt16(0) , toUInt32(0) = -0.500000000, toUInt32(0) != -0.500000000, toUInt32(0) < -0.500000000, toUInt32(0) <= -0.500000000, toUInt32(0) > -0.500000000, toUInt32(0) >= -0.500000000, -0.500000000 = toUInt32(0), -0.500000000 != toUInt32(0), -0.500000000 < toUInt32(0), -0.500000000 <= toUInt32(0), -0.500000000 > toUInt32(0), -0.500000000 >= toUInt32(0) , toInt32(0) = -0.500000000, toInt32(0) != -0.500000000, toInt32(0) < -0.500000000, toInt32(0) <= -0.500000000, toInt32(0) > -0.500000000, toInt32(0) >= -0.500000000, -0.500000000 = toInt32(0), -0.500000000 != toInt32(0), -0.500000000 < toInt32(0), -0.500000000 <= toInt32(0), -0.500000000 > toInt32(0), -0.500000000 >= toInt32(0) , toUInt64(0) = -0.500000000, toUInt64(0) != -0.500000000, toUInt64(0) < -0.500000000, toUInt64(0) <= -0.500000000, toUInt64(0) > -0.500000000, toUInt64(0) >= -0.500000000, -0.500000000 = toUInt64(0), -0.500000000 != toUInt64(0), -0.500000000 < toUInt64(0), -0.500000000 <= toUInt64(0), -0.500000000 > toUInt64(0), -0.500000000 >= toUInt64(0) , toInt64(0) = -0.500000000, toInt64(0) != -0.500000000, toInt64(0) < -0.500000000, toInt64(0) <= -0.500000000, toInt64(0) > -0.500000000, toInt64(0) >= -0.500000000, -0.500000000 = toInt64(0), -0.500000000 != toInt64(0), -0.500000000 < toInt64(0), -0.500000000 <= toInt64(0), -0.500000000 > toInt64(0), -0.500000000 >= toInt64(0) ; +SELECT '0', '0.500000000', 0 = 0.500000000, 0 != 0.500000000, 0 < 0.500000000, 0 <= 0.500000000, 0 > 0.500000000, 0 >= 0.500000000, 0.500000000 = 0, 0.500000000 != 0, 0.500000000 < 0, 0.500000000 <= 0, 0.500000000 > 0, 0.500000000 >= 0 , toUInt8(0) = 0.500000000, toUInt8(0) != 0.500000000, toUInt8(0) < 0.500000000, toUInt8(0) <= 0.500000000, toUInt8(0) > 0.500000000, toUInt8(0) >= 0.500000000, 0.500000000 = toUInt8(0), 0.500000000 != toUInt8(0), 0.500000000 < toUInt8(0), 0.500000000 <= toUInt8(0), 0.500000000 > toUInt8(0), 0.500000000 >= toUInt8(0) , toInt8(0) = 0.500000000, toInt8(0) != 0.500000000, toInt8(0) < 0.500000000, toInt8(0) <= 0.500000000, toInt8(0) > 0.500000000, toInt8(0) >= 0.500000000, 0.500000000 = toInt8(0), 0.500000000 != toInt8(0), 0.500000000 < toInt8(0), 0.500000000 <= toInt8(0), 0.500000000 > toInt8(0), 0.500000000 >= toInt8(0) , toUInt16(0) = 0.500000000, toUInt16(0) != 0.500000000, toUInt16(0) < 0.500000000, toUInt16(0) <= 0.500000000, toUInt16(0) > 0.500000000, toUInt16(0) >= 0.500000000, 0.500000000 = toUInt16(0), 0.500000000 != toUInt16(0), 0.500000000 < toUInt16(0), 0.500000000 <= toUInt16(0), 0.500000000 > toUInt16(0), 0.500000000 >= toUInt16(0) , toInt16(0) = 0.500000000, toInt16(0) != 0.500000000, toInt16(0) < 0.500000000, toInt16(0) <= 0.500000000, toInt16(0) > 0.500000000, toInt16(0) >= 0.500000000, 0.500000000 = toInt16(0), 0.500000000 != toInt16(0), 0.500000000 < toInt16(0), 0.500000000 <= toInt16(0), 0.500000000 > toInt16(0), 0.500000000 >= toInt16(0) , toUInt32(0) = 0.500000000, toUInt32(0) != 0.500000000, toUInt32(0) < 0.500000000, toUInt32(0) <= 0.500000000, toUInt32(0) > 0.500000000, toUInt32(0) >= 0.500000000, 0.500000000 = toUInt32(0), 0.500000000 != toUInt32(0), 0.500000000 < toUInt32(0), 0.500000000 <= 
toUInt32(0), 0.500000000 > toUInt32(0), 0.500000000 >= toUInt32(0) , toInt32(0) = 0.500000000, toInt32(0) != 0.500000000, toInt32(0) < 0.500000000, toInt32(0) <= 0.500000000, toInt32(0) > 0.500000000, toInt32(0) >= 0.500000000, 0.500000000 = toInt32(0), 0.500000000 != toInt32(0), 0.500000000 < toInt32(0), 0.500000000 <= toInt32(0), 0.500000000 > toInt32(0), 0.500000000 >= toInt32(0) , toUInt64(0) = 0.500000000, toUInt64(0) != 0.500000000, toUInt64(0) < 0.500000000, toUInt64(0) <= 0.500000000, toUInt64(0) > 0.500000000, toUInt64(0) >= 0.500000000, 0.500000000 = toUInt64(0), 0.500000000 != toUInt64(0), 0.500000000 < toUInt64(0), 0.500000000 <= toUInt64(0), 0.500000000 > toUInt64(0), 0.500000000 >= toUInt64(0) , toInt64(0) = 0.500000000, toInt64(0) != 0.500000000, toInt64(0) < 0.500000000, toInt64(0) <= 0.500000000, toInt64(0) > 0.500000000, toInt64(0) >= 0.500000000, 0.500000000 = toInt64(0), 0.500000000 != toInt64(0), 0.500000000 < toInt64(0), 0.500000000 <= toInt64(0), 0.500000000 > toInt64(0), 0.500000000 >= toInt64(0) ; +SELECT '0', '-1.500000000', 0 = -1.500000000, 0 != -1.500000000, 0 < -1.500000000, 0 <= -1.500000000, 0 > -1.500000000, 0 >= -1.500000000, -1.500000000 = 0, -1.500000000 != 0, -1.500000000 < 0, -1.500000000 <= 0, -1.500000000 > 0, -1.500000000 >= 0 , toUInt8(0) = -1.500000000, toUInt8(0) != -1.500000000, toUInt8(0) < -1.500000000, toUInt8(0) <= -1.500000000, toUInt8(0) > -1.500000000, toUInt8(0) >= -1.500000000, -1.500000000 = toUInt8(0), -1.500000000 != toUInt8(0), -1.500000000 < toUInt8(0), -1.500000000 <= toUInt8(0), -1.500000000 > toUInt8(0), -1.500000000 >= toUInt8(0) , toInt8(0) = -1.500000000, toInt8(0) != -1.500000000, toInt8(0) < -1.500000000, toInt8(0) <= -1.500000000, toInt8(0) > -1.500000000, toInt8(0) >= -1.500000000, -1.500000000 = toInt8(0), -1.500000000 != toInt8(0), -1.500000000 < toInt8(0), -1.500000000 <= toInt8(0), -1.500000000 > toInt8(0), -1.500000000 >= toInt8(0) , toUInt16(0) = -1.500000000, toUInt16(0) != -1.500000000, toUInt16(0) < -1.500000000, toUInt16(0) <= -1.500000000, toUInt16(0) > -1.500000000, toUInt16(0) >= -1.500000000, -1.500000000 = toUInt16(0), -1.500000000 != toUInt16(0), -1.500000000 < toUInt16(0), -1.500000000 <= toUInt16(0), -1.500000000 > toUInt16(0), -1.500000000 >= toUInt16(0) , toInt16(0) = -1.500000000, toInt16(0) != -1.500000000, toInt16(0) < -1.500000000, toInt16(0) <= -1.500000000, toInt16(0) > -1.500000000, toInt16(0) >= -1.500000000, -1.500000000 = toInt16(0), -1.500000000 != toInt16(0), -1.500000000 < toInt16(0), -1.500000000 <= toInt16(0), -1.500000000 > toInt16(0), -1.500000000 >= toInt16(0) , toUInt32(0) = -1.500000000, toUInt32(0) != -1.500000000, toUInt32(0) < -1.500000000, toUInt32(0) <= -1.500000000, toUInt32(0) > -1.500000000, toUInt32(0) >= -1.500000000, -1.500000000 = toUInt32(0), -1.500000000 != toUInt32(0), -1.500000000 < toUInt32(0), -1.500000000 <= toUInt32(0), -1.500000000 > toUInt32(0), -1.500000000 >= toUInt32(0) , toInt32(0) = -1.500000000, toInt32(0) != -1.500000000, toInt32(0) < -1.500000000, toInt32(0) <= -1.500000000, toInt32(0) > -1.500000000, toInt32(0) >= -1.500000000, -1.500000000 = toInt32(0), -1.500000000 != toInt32(0), -1.500000000 < toInt32(0), -1.500000000 <= toInt32(0), -1.500000000 > toInt32(0), -1.500000000 >= toInt32(0) , toUInt64(0) = -1.500000000, toUInt64(0) != -1.500000000, toUInt64(0) < -1.500000000, toUInt64(0) <= -1.500000000, toUInt64(0) > -1.500000000, toUInt64(0) >= -1.500000000, -1.500000000 = toUInt64(0), -1.500000000 != toUInt64(0), -1.500000000 < toUInt64(0), 
-1.500000000 <= toUInt64(0), -1.500000000 > toUInt64(0), -1.500000000 >= toUInt64(0) , toInt64(0) = -1.500000000, toInt64(0) != -1.500000000, toInt64(0) < -1.500000000, toInt64(0) <= -1.500000000, toInt64(0) > -1.500000000, toInt64(0) >= -1.500000000, -1.500000000 = toInt64(0), -1.500000000 != toInt64(0), -1.500000000 < toInt64(0), -1.500000000 <= toInt64(0), -1.500000000 > toInt64(0), -1.500000000 >= toInt64(0) ; +SELECT '0', '1.500000000', 0 = 1.500000000, 0 != 1.500000000, 0 < 1.500000000, 0 <= 1.500000000, 0 > 1.500000000, 0 >= 1.500000000, 1.500000000 = 0, 1.500000000 != 0, 1.500000000 < 0, 1.500000000 <= 0, 1.500000000 > 0, 1.500000000 >= 0 , toUInt8(0) = 1.500000000, toUInt8(0) != 1.500000000, toUInt8(0) < 1.500000000, toUInt8(0) <= 1.500000000, toUInt8(0) > 1.500000000, toUInt8(0) >= 1.500000000, 1.500000000 = toUInt8(0), 1.500000000 != toUInt8(0), 1.500000000 < toUInt8(0), 1.500000000 <= toUInt8(0), 1.500000000 > toUInt8(0), 1.500000000 >= toUInt8(0) , toInt8(0) = 1.500000000, toInt8(0) != 1.500000000, toInt8(0) < 1.500000000, toInt8(0) <= 1.500000000, toInt8(0) > 1.500000000, toInt8(0) >= 1.500000000, 1.500000000 = toInt8(0), 1.500000000 != toInt8(0), 1.500000000 < toInt8(0), 1.500000000 <= toInt8(0), 1.500000000 > toInt8(0), 1.500000000 >= toInt8(0) , toUInt16(0) = 1.500000000, toUInt16(0) != 1.500000000, toUInt16(0) < 1.500000000, toUInt16(0) <= 1.500000000, toUInt16(0) > 1.500000000, toUInt16(0) >= 1.500000000, 1.500000000 = toUInt16(0), 1.500000000 != toUInt16(0), 1.500000000 < toUInt16(0), 1.500000000 <= toUInt16(0), 1.500000000 > toUInt16(0), 1.500000000 >= toUInt16(0) , toInt16(0) = 1.500000000, toInt16(0) != 1.500000000, toInt16(0) < 1.500000000, toInt16(0) <= 1.500000000, toInt16(0) > 1.500000000, toInt16(0) >= 1.500000000, 1.500000000 = toInt16(0), 1.500000000 != toInt16(0), 1.500000000 < toInt16(0), 1.500000000 <= toInt16(0), 1.500000000 > toInt16(0), 1.500000000 >= toInt16(0) , toUInt32(0) = 1.500000000, toUInt32(0) != 1.500000000, toUInt32(0) < 1.500000000, toUInt32(0) <= 1.500000000, toUInt32(0) > 1.500000000, toUInt32(0) >= 1.500000000, 1.500000000 = toUInt32(0), 1.500000000 != toUInt32(0), 1.500000000 < toUInt32(0), 1.500000000 <= toUInt32(0), 1.500000000 > toUInt32(0), 1.500000000 >= toUInt32(0) , toInt32(0) = 1.500000000, toInt32(0) != 1.500000000, toInt32(0) < 1.500000000, toInt32(0) <= 1.500000000, toInt32(0) > 1.500000000, toInt32(0) >= 1.500000000, 1.500000000 = toInt32(0), 1.500000000 != toInt32(0), 1.500000000 < toInt32(0), 1.500000000 <= toInt32(0), 1.500000000 > toInt32(0), 1.500000000 >= toInt32(0) , toUInt64(0) = 1.500000000, toUInt64(0) != 1.500000000, toUInt64(0) < 1.500000000, toUInt64(0) <= 1.500000000, toUInt64(0) > 1.500000000, toUInt64(0) >= 1.500000000, 1.500000000 = toUInt64(0), 1.500000000 != toUInt64(0), 1.500000000 < toUInt64(0), 1.500000000 <= toUInt64(0), 1.500000000 > toUInt64(0), 1.500000000 >= toUInt64(0) , toInt64(0) = 1.500000000, toInt64(0) != 1.500000000, toInt64(0) < 1.500000000, toInt64(0) <= 1.500000000, toInt64(0) > 1.500000000, toInt64(0) >= 1.500000000, 1.500000000 = toInt64(0), 1.500000000 != toInt64(0), 1.500000000 < toInt64(0), 1.500000000 <= toInt64(0), 1.500000000 > toInt64(0), 1.500000000 >= toInt64(0) ; +SELECT '0', '9007199254740992.000000000', 0 = 9007199254740992.000000000, 0 != 9007199254740992.000000000, 0 < 9007199254740992.000000000, 0 <= 9007199254740992.000000000, 0 > 9007199254740992.000000000, 0 >= 9007199254740992.000000000, 9007199254740992.000000000 = 0, 9007199254740992.000000000 != 0, 
9007199254740992.000000000 < 0, 9007199254740992.000000000 <= 0, 9007199254740992.000000000 > 0, 9007199254740992.000000000 >= 0 , toUInt8(0) = 9007199254740992.000000000, toUInt8(0) != 9007199254740992.000000000, toUInt8(0) < 9007199254740992.000000000, toUInt8(0) <= 9007199254740992.000000000, toUInt8(0) > 9007199254740992.000000000, toUInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(0), 9007199254740992.000000000 != toUInt8(0), 9007199254740992.000000000 < toUInt8(0), 9007199254740992.000000000 <= toUInt8(0), 9007199254740992.000000000 > toUInt8(0), 9007199254740992.000000000 >= toUInt8(0) , toInt8(0) = 9007199254740992.000000000, toInt8(0) != 9007199254740992.000000000, toInt8(0) < 9007199254740992.000000000, toInt8(0) <= 9007199254740992.000000000, toInt8(0) > 9007199254740992.000000000, toInt8(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(0), 9007199254740992.000000000 != toInt8(0), 9007199254740992.000000000 < toInt8(0), 9007199254740992.000000000 <= toInt8(0), 9007199254740992.000000000 > toInt8(0), 9007199254740992.000000000 >= toInt8(0) , toUInt16(0) = 9007199254740992.000000000, toUInt16(0) != 9007199254740992.000000000, toUInt16(0) < 9007199254740992.000000000, toUInt16(0) <= 9007199254740992.000000000, toUInt16(0) > 9007199254740992.000000000, toUInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(0), 9007199254740992.000000000 != toUInt16(0), 9007199254740992.000000000 < toUInt16(0), 9007199254740992.000000000 <= toUInt16(0), 9007199254740992.000000000 > toUInt16(0), 9007199254740992.000000000 >= toUInt16(0) , toInt16(0) = 9007199254740992.000000000, toInt16(0) != 9007199254740992.000000000, toInt16(0) < 9007199254740992.000000000, toInt16(0) <= 9007199254740992.000000000, toInt16(0) > 9007199254740992.000000000, toInt16(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(0), 9007199254740992.000000000 != toInt16(0), 9007199254740992.000000000 < toInt16(0), 9007199254740992.000000000 <= toInt16(0), 9007199254740992.000000000 > toInt16(0), 9007199254740992.000000000 >= toInt16(0) , toUInt32(0) = 9007199254740992.000000000, toUInt32(0) != 9007199254740992.000000000, toUInt32(0) < 9007199254740992.000000000, toUInt32(0) <= 9007199254740992.000000000, toUInt32(0) > 9007199254740992.000000000, toUInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(0), 9007199254740992.000000000 != toUInt32(0), 9007199254740992.000000000 < toUInt32(0), 9007199254740992.000000000 <= toUInt32(0), 9007199254740992.000000000 > toUInt32(0), 9007199254740992.000000000 >= toUInt32(0) , toInt32(0) = 9007199254740992.000000000, toInt32(0) != 9007199254740992.000000000, toInt32(0) < 9007199254740992.000000000, toInt32(0) <= 9007199254740992.000000000, toInt32(0) > 9007199254740992.000000000, toInt32(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(0), 9007199254740992.000000000 != toInt32(0), 9007199254740992.000000000 < toInt32(0), 9007199254740992.000000000 <= toInt32(0), 9007199254740992.000000000 > toInt32(0), 9007199254740992.000000000 >= toInt32(0) , toUInt64(0) = 9007199254740992.000000000, toUInt64(0) != 9007199254740992.000000000, toUInt64(0) < 9007199254740992.000000000, toUInt64(0) <= 9007199254740992.000000000, toUInt64(0) > 9007199254740992.000000000, toUInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(0), 9007199254740992.000000000 != toUInt64(0), 9007199254740992.000000000 < toUInt64(0), 9007199254740992.000000000 <= 
toUInt64(0), 9007199254740992.000000000 > toUInt64(0), 9007199254740992.000000000 >= toUInt64(0) , toInt64(0) = 9007199254740992.000000000, toInt64(0) != 9007199254740992.000000000, toInt64(0) < 9007199254740992.000000000, toInt64(0) <= 9007199254740992.000000000, toInt64(0) > 9007199254740992.000000000, toInt64(0) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(0), 9007199254740992.000000000 != toInt64(0), 9007199254740992.000000000 < toInt64(0), 9007199254740992.000000000 <= toInt64(0), 9007199254740992.000000000 > toInt64(0), 9007199254740992.000000000 >= toInt64(0) ; +SELECT '0', '2251799813685247.500000000', 0 = 2251799813685247.500000000, 0 != 2251799813685247.500000000, 0 < 2251799813685247.500000000, 0 <= 2251799813685247.500000000, 0 > 2251799813685247.500000000, 0 >= 2251799813685247.500000000, 2251799813685247.500000000 = 0, 2251799813685247.500000000 != 0, 2251799813685247.500000000 < 0, 2251799813685247.500000000 <= 0, 2251799813685247.500000000 > 0, 2251799813685247.500000000 >= 0 , toUInt8(0) = 2251799813685247.500000000, toUInt8(0) != 2251799813685247.500000000, toUInt8(0) < 2251799813685247.500000000, toUInt8(0) <= 2251799813685247.500000000, toUInt8(0) > 2251799813685247.500000000, toUInt8(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt8(0), 2251799813685247.500000000 != toUInt8(0), 2251799813685247.500000000 < toUInt8(0), 2251799813685247.500000000 <= toUInt8(0), 2251799813685247.500000000 > toUInt8(0), 2251799813685247.500000000 >= toUInt8(0) , toInt8(0) = 2251799813685247.500000000, toInt8(0) != 2251799813685247.500000000, toInt8(0) < 2251799813685247.500000000, toInt8(0) <= 2251799813685247.500000000, toInt8(0) > 2251799813685247.500000000, toInt8(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt8(0), 2251799813685247.500000000 != toInt8(0), 2251799813685247.500000000 < toInt8(0), 2251799813685247.500000000 <= toInt8(0), 2251799813685247.500000000 > toInt8(0), 2251799813685247.500000000 >= toInt8(0) , toUInt16(0) = 2251799813685247.500000000, toUInt16(0) != 2251799813685247.500000000, toUInt16(0) < 2251799813685247.500000000, toUInt16(0) <= 2251799813685247.500000000, toUInt16(0) > 2251799813685247.500000000, toUInt16(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt16(0), 2251799813685247.500000000 != toUInt16(0), 2251799813685247.500000000 < toUInt16(0), 2251799813685247.500000000 <= toUInt16(0), 2251799813685247.500000000 > toUInt16(0), 2251799813685247.500000000 >= toUInt16(0) , toInt16(0) = 2251799813685247.500000000, toInt16(0) != 2251799813685247.500000000, toInt16(0) < 2251799813685247.500000000, toInt16(0) <= 2251799813685247.500000000, toInt16(0) > 2251799813685247.500000000, toInt16(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt16(0), 2251799813685247.500000000 != toInt16(0), 2251799813685247.500000000 < toInt16(0), 2251799813685247.500000000 <= toInt16(0), 2251799813685247.500000000 > toInt16(0), 2251799813685247.500000000 >= toInt16(0) , toUInt32(0) = 2251799813685247.500000000, toUInt32(0) != 2251799813685247.500000000, toUInt32(0) < 2251799813685247.500000000, toUInt32(0) <= 2251799813685247.500000000, toUInt32(0) > 2251799813685247.500000000, toUInt32(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt32(0), 2251799813685247.500000000 != toUInt32(0), 2251799813685247.500000000 < toUInt32(0), 2251799813685247.500000000 <= toUInt32(0), 2251799813685247.500000000 > toUInt32(0), 2251799813685247.500000000 >= toUInt32(0) , toInt32(0) = 
2251799813685247.500000000, toInt32(0) != 2251799813685247.500000000, toInt32(0) < 2251799813685247.500000000, toInt32(0) <= 2251799813685247.500000000, toInt32(0) > 2251799813685247.500000000, toInt32(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt32(0), 2251799813685247.500000000 != toInt32(0), 2251799813685247.500000000 < toInt32(0), 2251799813685247.500000000 <= toInt32(0), 2251799813685247.500000000 > toInt32(0), 2251799813685247.500000000 >= toInt32(0) , toUInt64(0) = 2251799813685247.500000000, toUInt64(0) != 2251799813685247.500000000, toUInt64(0) < 2251799813685247.500000000, toUInt64(0) <= 2251799813685247.500000000, toUInt64(0) > 2251799813685247.500000000, toUInt64(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt64(0), 2251799813685247.500000000 != toUInt64(0), 2251799813685247.500000000 < toUInt64(0), 2251799813685247.500000000 <= toUInt64(0), 2251799813685247.500000000 > toUInt64(0), 2251799813685247.500000000 >= toUInt64(0) , toInt64(0) = 2251799813685247.500000000, toInt64(0) != 2251799813685247.500000000, toInt64(0) < 2251799813685247.500000000, toInt64(0) <= 2251799813685247.500000000, toInt64(0) > 2251799813685247.500000000, toInt64(0) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt64(0), 2251799813685247.500000000 != toInt64(0), 2251799813685247.500000000 < toInt64(0), 2251799813685247.500000000 <= toInt64(0), 2251799813685247.500000000 > toInt64(0), 2251799813685247.500000000 >= toInt64(0) ; +SELECT '0', '2251799813685248.500000000', 0 = 2251799813685248.500000000, 0 != 2251799813685248.500000000, 0 < 2251799813685248.500000000, 0 <= 2251799813685248.500000000, 0 > 2251799813685248.500000000, 0 >= 2251799813685248.500000000, 2251799813685248.500000000 = 0, 2251799813685248.500000000 != 0, 2251799813685248.500000000 < 0, 2251799813685248.500000000 <= 0, 2251799813685248.500000000 > 0, 2251799813685248.500000000 >= 0 , toUInt8(0) = 2251799813685248.500000000, toUInt8(0) != 2251799813685248.500000000, toUInt8(0) < 2251799813685248.500000000, toUInt8(0) <= 2251799813685248.500000000, toUInt8(0) > 2251799813685248.500000000, toUInt8(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt8(0), 2251799813685248.500000000 != toUInt8(0), 2251799813685248.500000000 < toUInt8(0), 2251799813685248.500000000 <= toUInt8(0), 2251799813685248.500000000 > toUInt8(0), 2251799813685248.500000000 >= toUInt8(0) , toInt8(0) = 2251799813685248.500000000, toInt8(0) != 2251799813685248.500000000, toInt8(0) < 2251799813685248.500000000, toInt8(0) <= 2251799813685248.500000000, toInt8(0) > 2251799813685248.500000000, toInt8(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt8(0), 2251799813685248.500000000 != toInt8(0), 2251799813685248.500000000 < toInt8(0), 2251799813685248.500000000 <= toInt8(0), 2251799813685248.500000000 > toInt8(0), 2251799813685248.500000000 >= toInt8(0) , toUInt16(0) = 2251799813685248.500000000, toUInt16(0) != 2251799813685248.500000000, toUInt16(0) < 2251799813685248.500000000, toUInt16(0) <= 2251799813685248.500000000, toUInt16(0) > 2251799813685248.500000000, toUInt16(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt16(0), 2251799813685248.500000000 != toUInt16(0), 2251799813685248.500000000 < toUInt16(0), 2251799813685248.500000000 <= toUInt16(0), 2251799813685248.500000000 > toUInt16(0), 2251799813685248.500000000 >= toUInt16(0) , toInt16(0) = 2251799813685248.500000000, toInt16(0) != 2251799813685248.500000000, toInt16(0) < 2251799813685248.500000000, 
toInt16(0) <= 2251799813685248.500000000, toInt16(0) > 2251799813685248.500000000, toInt16(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt16(0), 2251799813685248.500000000 != toInt16(0), 2251799813685248.500000000 < toInt16(0), 2251799813685248.500000000 <= toInt16(0), 2251799813685248.500000000 > toInt16(0), 2251799813685248.500000000 >= toInt16(0) , toUInt32(0) = 2251799813685248.500000000, toUInt32(0) != 2251799813685248.500000000, toUInt32(0) < 2251799813685248.500000000, toUInt32(0) <= 2251799813685248.500000000, toUInt32(0) > 2251799813685248.500000000, toUInt32(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt32(0), 2251799813685248.500000000 != toUInt32(0), 2251799813685248.500000000 < toUInt32(0), 2251799813685248.500000000 <= toUInt32(0), 2251799813685248.500000000 > toUInt32(0), 2251799813685248.500000000 >= toUInt32(0) , toInt32(0) = 2251799813685248.500000000, toInt32(0) != 2251799813685248.500000000, toInt32(0) < 2251799813685248.500000000, toInt32(0) <= 2251799813685248.500000000, toInt32(0) > 2251799813685248.500000000, toInt32(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt32(0), 2251799813685248.500000000 != toInt32(0), 2251799813685248.500000000 < toInt32(0), 2251799813685248.500000000 <= toInt32(0), 2251799813685248.500000000 > toInt32(0), 2251799813685248.500000000 >= toInt32(0) , toUInt64(0) = 2251799813685248.500000000, toUInt64(0) != 2251799813685248.500000000, toUInt64(0) < 2251799813685248.500000000, toUInt64(0) <= 2251799813685248.500000000, toUInt64(0) > 2251799813685248.500000000, toUInt64(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt64(0), 2251799813685248.500000000 != toUInt64(0), 2251799813685248.500000000 < toUInt64(0), 2251799813685248.500000000 <= toUInt64(0), 2251799813685248.500000000 > toUInt64(0), 2251799813685248.500000000 >= toUInt64(0) , toInt64(0) = 2251799813685248.500000000, toInt64(0) != 2251799813685248.500000000, toInt64(0) < 2251799813685248.500000000, toInt64(0) <= 2251799813685248.500000000, toInt64(0) > 2251799813685248.500000000, toInt64(0) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt64(0), 2251799813685248.500000000 != toInt64(0), 2251799813685248.500000000 < toInt64(0), 2251799813685248.500000000 <= toInt64(0), 2251799813685248.500000000 > toInt64(0), 2251799813685248.500000000 >= toInt64(0) ; +SELECT '0', '1152921504606846976.000000000', 0 = 1152921504606846976.000000000, 0 != 1152921504606846976.000000000, 0 < 1152921504606846976.000000000, 0 <= 1152921504606846976.000000000, 0 > 1152921504606846976.000000000, 0 >= 1152921504606846976.000000000, 1152921504606846976.000000000 = 0, 1152921504606846976.000000000 != 0, 1152921504606846976.000000000 < 0, 1152921504606846976.000000000 <= 0, 1152921504606846976.000000000 > 0, 1152921504606846976.000000000 >= 0 , toUInt8(0) = 1152921504606846976.000000000, toUInt8(0) != 1152921504606846976.000000000, toUInt8(0) < 1152921504606846976.000000000, toUInt8(0) <= 1152921504606846976.000000000, toUInt8(0) > 1152921504606846976.000000000, toUInt8(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt8(0), 1152921504606846976.000000000 != toUInt8(0), 1152921504606846976.000000000 < toUInt8(0), 1152921504606846976.000000000 <= toUInt8(0), 1152921504606846976.000000000 > toUInt8(0), 1152921504606846976.000000000 >= toUInt8(0) , toInt8(0) = 1152921504606846976.000000000, toInt8(0) != 1152921504606846976.000000000, toInt8(0) < 1152921504606846976.000000000, toInt8(0) <= 
1152921504606846976.000000000, toInt8(0) > 1152921504606846976.000000000, toInt8(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt8(0), 1152921504606846976.000000000 != toInt8(0), 1152921504606846976.000000000 < toInt8(0), 1152921504606846976.000000000 <= toInt8(0), 1152921504606846976.000000000 > toInt8(0), 1152921504606846976.000000000 >= toInt8(0) , toUInt16(0) = 1152921504606846976.000000000, toUInt16(0) != 1152921504606846976.000000000, toUInt16(0) < 1152921504606846976.000000000, toUInt16(0) <= 1152921504606846976.000000000, toUInt16(0) > 1152921504606846976.000000000, toUInt16(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt16(0), 1152921504606846976.000000000 != toUInt16(0), 1152921504606846976.000000000 < toUInt16(0), 1152921504606846976.000000000 <= toUInt16(0), 1152921504606846976.000000000 > toUInt16(0), 1152921504606846976.000000000 >= toUInt16(0) , toInt16(0) = 1152921504606846976.000000000, toInt16(0) != 1152921504606846976.000000000, toInt16(0) < 1152921504606846976.000000000, toInt16(0) <= 1152921504606846976.000000000, toInt16(0) > 1152921504606846976.000000000, toInt16(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt16(0), 1152921504606846976.000000000 != toInt16(0), 1152921504606846976.000000000 < toInt16(0), 1152921504606846976.000000000 <= toInt16(0), 1152921504606846976.000000000 > toInt16(0), 1152921504606846976.000000000 >= toInt16(0) , toUInt32(0) = 1152921504606846976.000000000, toUInt32(0) != 1152921504606846976.000000000, toUInt32(0) < 1152921504606846976.000000000, toUInt32(0) <= 1152921504606846976.000000000, toUInt32(0) > 1152921504606846976.000000000, toUInt32(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt32(0), 1152921504606846976.000000000 != toUInt32(0), 1152921504606846976.000000000 < toUInt32(0), 1152921504606846976.000000000 <= toUInt32(0), 1152921504606846976.000000000 > toUInt32(0), 1152921504606846976.000000000 >= toUInt32(0) , toInt32(0) = 1152921504606846976.000000000, toInt32(0) != 1152921504606846976.000000000, toInt32(0) < 1152921504606846976.000000000, toInt32(0) <= 1152921504606846976.000000000, toInt32(0) > 1152921504606846976.000000000, toInt32(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt32(0), 1152921504606846976.000000000 != toInt32(0), 1152921504606846976.000000000 < toInt32(0), 1152921504606846976.000000000 <= toInt32(0), 1152921504606846976.000000000 > toInt32(0), 1152921504606846976.000000000 >= toInt32(0) , toUInt64(0) = 1152921504606846976.000000000, toUInt64(0) != 1152921504606846976.000000000, toUInt64(0) < 1152921504606846976.000000000, toUInt64(0) <= 1152921504606846976.000000000, toUInt64(0) > 1152921504606846976.000000000, toUInt64(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt64(0), 1152921504606846976.000000000 != toUInt64(0), 1152921504606846976.000000000 < toUInt64(0), 1152921504606846976.000000000 <= toUInt64(0), 1152921504606846976.000000000 > toUInt64(0), 1152921504606846976.000000000 >= toUInt64(0) , toInt64(0) = 1152921504606846976.000000000, toInt64(0) != 1152921504606846976.000000000, toInt64(0) < 1152921504606846976.000000000, toInt64(0) <= 1152921504606846976.000000000, toInt64(0) > 1152921504606846976.000000000, toInt64(0) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt64(0), 1152921504606846976.000000000 != toInt64(0), 1152921504606846976.000000000 < toInt64(0), 1152921504606846976.000000000 <= toInt64(0), 
1152921504606846976.000000000 > toInt64(0), 1152921504606846976.000000000 >= toInt64(0) ; +SELECT '0', '-1152921504606846976.000000000', 0 = -1152921504606846976.000000000, 0 != -1152921504606846976.000000000, 0 < -1152921504606846976.000000000, 0 <= -1152921504606846976.000000000, 0 > -1152921504606846976.000000000, 0 >= -1152921504606846976.000000000, -1152921504606846976.000000000 = 0, -1152921504606846976.000000000 != 0, -1152921504606846976.000000000 < 0, -1152921504606846976.000000000 <= 0, -1152921504606846976.000000000 > 0, -1152921504606846976.000000000 >= 0 , toUInt8(0) = -1152921504606846976.000000000, toUInt8(0) != -1152921504606846976.000000000, toUInt8(0) < -1152921504606846976.000000000, toUInt8(0) <= -1152921504606846976.000000000, toUInt8(0) > -1152921504606846976.000000000, toUInt8(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt8(0), -1152921504606846976.000000000 != toUInt8(0), -1152921504606846976.000000000 < toUInt8(0), -1152921504606846976.000000000 <= toUInt8(0), -1152921504606846976.000000000 > toUInt8(0), -1152921504606846976.000000000 >= toUInt8(0) , toInt8(0) = -1152921504606846976.000000000, toInt8(0) != -1152921504606846976.000000000, toInt8(0) < -1152921504606846976.000000000, toInt8(0) <= -1152921504606846976.000000000, toInt8(0) > -1152921504606846976.000000000, toInt8(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt8(0), -1152921504606846976.000000000 != toInt8(0), -1152921504606846976.000000000 < toInt8(0), -1152921504606846976.000000000 <= toInt8(0), -1152921504606846976.000000000 > toInt8(0), -1152921504606846976.000000000 >= toInt8(0) , toUInt16(0) = -1152921504606846976.000000000, toUInt16(0) != -1152921504606846976.000000000, toUInt16(0) < -1152921504606846976.000000000, toUInt16(0) <= -1152921504606846976.000000000, toUInt16(0) > -1152921504606846976.000000000, toUInt16(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt16(0), -1152921504606846976.000000000 != toUInt16(0), -1152921504606846976.000000000 < toUInt16(0), -1152921504606846976.000000000 <= toUInt16(0), -1152921504606846976.000000000 > toUInt16(0), -1152921504606846976.000000000 >= toUInt16(0) , toInt16(0) = -1152921504606846976.000000000, toInt16(0) != -1152921504606846976.000000000, toInt16(0) < -1152921504606846976.000000000, toInt16(0) <= -1152921504606846976.000000000, toInt16(0) > -1152921504606846976.000000000, toInt16(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt16(0), -1152921504606846976.000000000 != toInt16(0), -1152921504606846976.000000000 < toInt16(0), -1152921504606846976.000000000 <= toInt16(0), -1152921504606846976.000000000 > toInt16(0), -1152921504606846976.000000000 >= toInt16(0) , toUInt32(0) = -1152921504606846976.000000000, toUInt32(0) != -1152921504606846976.000000000, toUInt32(0) < -1152921504606846976.000000000, toUInt32(0) <= -1152921504606846976.000000000, toUInt32(0) > -1152921504606846976.000000000, toUInt32(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt32(0), -1152921504606846976.000000000 != toUInt32(0), -1152921504606846976.000000000 < toUInt32(0), -1152921504606846976.000000000 <= toUInt32(0), -1152921504606846976.000000000 > toUInt32(0), -1152921504606846976.000000000 >= toUInt32(0) , toInt32(0) = -1152921504606846976.000000000, toInt32(0) != -1152921504606846976.000000000, toInt32(0) < -1152921504606846976.000000000, toInt32(0) <= -1152921504606846976.000000000, toInt32(0) > -1152921504606846976.000000000, 
toInt32(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt32(0), -1152921504606846976.000000000 != toInt32(0), -1152921504606846976.000000000 < toInt32(0), -1152921504606846976.000000000 <= toInt32(0), -1152921504606846976.000000000 > toInt32(0), -1152921504606846976.000000000 >= toInt32(0) , toUInt64(0) = -1152921504606846976.000000000, toUInt64(0) != -1152921504606846976.000000000, toUInt64(0) < -1152921504606846976.000000000, toUInt64(0) <= -1152921504606846976.000000000, toUInt64(0) > -1152921504606846976.000000000, toUInt64(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt64(0), -1152921504606846976.000000000 != toUInt64(0), -1152921504606846976.000000000 < toUInt64(0), -1152921504606846976.000000000 <= toUInt64(0), -1152921504606846976.000000000 > toUInt64(0), -1152921504606846976.000000000 >= toUInt64(0) , toInt64(0) = -1152921504606846976.000000000, toInt64(0) != -1152921504606846976.000000000, toInt64(0) < -1152921504606846976.000000000, toInt64(0) <= -1152921504606846976.000000000, toInt64(0) > -1152921504606846976.000000000, toInt64(0) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt64(0), -1152921504606846976.000000000 != toInt64(0), -1152921504606846976.000000000 < toInt64(0), -1152921504606846976.000000000 <= toInt64(0), -1152921504606846976.000000000 > toInt64(0), -1152921504606846976.000000000 >= toInt64(0) ; +SELECT '0', '-9223372036854786048.000000000', 0 = -9223372036854786048.000000000, 0 != -9223372036854786048.000000000, 0 < -9223372036854786048.000000000, 0 <= -9223372036854786048.000000000, 0 > -9223372036854786048.000000000, 0 >= -9223372036854786048.000000000, -9223372036854786048.000000000 = 0, -9223372036854786048.000000000 != 0, -9223372036854786048.000000000 < 0, -9223372036854786048.000000000 <= 0, -9223372036854786048.000000000 > 0, -9223372036854786048.000000000 >= 0 , toUInt8(0) = -9223372036854786048.000000000, toUInt8(0) != -9223372036854786048.000000000, toUInt8(0) < -9223372036854786048.000000000, toUInt8(0) <= -9223372036854786048.000000000, toUInt8(0) > -9223372036854786048.000000000, toUInt8(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt8(0), -9223372036854786048.000000000 != toUInt8(0), -9223372036854786048.000000000 < toUInt8(0), -9223372036854786048.000000000 <= toUInt8(0), -9223372036854786048.000000000 > toUInt8(0), -9223372036854786048.000000000 >= toUInt8(0) , toInt8(0) = -9223372036854786048.000000000, toInt8(0) != -9223372036854786048.000000000, toInt8(0) < -9223372036854786048.000000000, toInt8(0) <= -9223372036854786048.000000000, toInt8(0) > -9223372036854786048.000000000, toInt8(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt8(0), -9223372036854786048.000000000 != toInt8(0), -9223372036854786048.000000000 < toInt8(0), -9223372036854786048.000000000 <= toInt8(0), -9223372036854786048.000000000 > toInt8(0), -9223372036854786048.000000000 >= toInt8(0) , toUInt16(0) = -9223372036854786048.000000000, toUInt16(0) != -9223372036854786048.000000000, toUInt16(0) < -9223372036854786048.000000000, toUInt16(0) <= -9223372036854786048.000000000, toUInt16(0) > -9223372036854786048.000000000, toUInt16(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt16(0), -9223372036854786048.000000000 != toUInt16(0), -9223372036854786048.000000000 < toUInt16(0), -9223372036854786048.000000000 <= toUInt16(0), -9223372036854786048.000000000 > toUInt16(0), -9223372036854786048.000000000 >= toUInt16(0) , 
toInt16(0) = -9223372036854786048.000000000, toInt16(0) != -9223372036854786048.000000000, toInt16(0) < -9223372036854786048.000000000, toInt16(0) <= -9223372036854786048.000000000, toInt16(0) > -9223372036854786048.000000000, toInt16(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt16(0), -9223372036854786048.000000000 != toInt16(0), -9223372036854786048.000000000 < toInt16(0), -9223372036854786048.000000000 <= toInt16(0), -9223372036854786048.000000000 > toInt16(0), -9223372036854786048.000000000 >= toInt16(0) , toUInt32(0) = -9223372036854786048.000000000, toUInt32(0) != -9223372036854786048.000000000, toUInt32(0) < -9223372036854786048.000000000, toUInt32(0) <= -9223372036854786048.000000000, toUInt32(0) > -9223372036854786048.000000000, toUInt32(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt32(0), -9223372036854786048.000000000 != toUInt32(0), -9223372036854786048.000000000 < toUInt32(0), -9223372036854786048.000000000 <= toUInt32(0), -9223372036854786048.000000000 > toUInt32(0), -9223372036854786048.000000000 >= toUInt32(0) , toInt32(0) = -9223372036854786048.000000000, toInt32(0) != -9223372036854786048.000000000, toInt32(0) < -9223372036854786048.000000000, toInt32(0) <= -9223372036854786048.000000000, toInt32(0) > -9223372036854786048.000000000, toInt32(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt32(0), -9223372036854786048.000000000 != toInt32(0), -9223372036854786048.000000000 < toInt32(0), -9223372036854786048.000000000 <= toInt32(0), -9223372036854786048.000000000 > toInt32(0), -9223372036854786048.000000000 >= toInt32(0) , toUInt64(0) = -9223372036854786048.000000000, toUInt64(0) != -9223372036854786048.000000000, toUInt64(0) < -9223372036854786048.000000000, toUInt64(0) <= -9223372036854786048.000000000, toUInt64(0) > -9223372036854786048.000000000, toUInt64(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt64(0), -9223372036854786048.000000000 != toUInt64(0), -9223372036854786048.000000000 < toUInt64(0), -9223372036854786048.000000000 <= toUInt64(0), -9223372036854786048.000000000 > toUInt64(0), -9223372036854786048.000000000 >= toUInt64(0) , toInt64(0) = -9223372036854786048.000000000, toInt64(0) != -9223372036854786048.000000000, toInt64(0) < -9223372036854786048.000000000, toInt64(0) <= -9223372036854786048.000000000, toInt64(0) > -9223372036854786048.000000000, toInt64(0) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt64(0), -9223372036854786048.000000000 != toInt64(0), -9223372036854786048.000000000 < toInt64(0), -9223372036854786048.000000000 <= toInt64(0), -9223372036854786048.000000000 > toInt64(0), -9223372036854786048.000000000 >= toInt64(0) ; +SELECT '0', '9223372036854786048.000000000', 0 = 9223372036854786048.000000000, 0 != 9223372036854786048.000000000, 0 < 9223372036854786048.000000000, 0 <= 9223372036854786048.000000000, 0 > 9223372036854786048.000000000, 0 >= 9223372036854786048.000000000, 9223372036854786048.000000000 = 0, 9223372036854786048.000000000 != 0, 9223372036854786048.000000000 < 0, 9223372036854786048.000000000 <= 0, 9223372036854786048.000000000 > 0, 9223372036854786048.000000000 >= 0 , toUInt8(0) = 9223372036854786048.000000000, toUInt8(0) != 9223372036854786048.000000000, toUInt8(0) < 9223372036854786048.000000000, toUInt8(0) <= 9223372036854786048.000000000, toUInt8(0) > 9223372036854786048.000000000, toUInt8(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt8(0), 
9223372036854786048.000000000 != toUInt8(0), 9223372036854786048.000000000 < toUInt8(0), 9223372036854786048.000000000 <= toUInt8(0), 9223372036854786048.000000000 > toUInt8(0), 9223372036854786048.000000000 >= toUInt8(0) , toInt8(0) = 9223372036854786048.000000000, toInt8(0) != 9223372036854786048.000000000, toInt8(0) < 9223372036854786048.000000000, toInt8(0) <= 9223372036854786048.000000000, toInt8(0) > 9223372036854786048.000000000, toInt8(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt8(0), 9223372036854786048.000000000 != toInt8(0), 9223372036854786048.000000000 < toInt8(0), 9223372036854786048.000000000 <= toInt8(0), 9223372036854786048.000000000 > toInt8(0), 9223372036854786048.000000000 >= toInt8(0) , toUInt16(0) = 9223372036854786048.000000000, toUInt16(0) != 9223372036854786048.000000000, toUInt16(0) < 9223372036854786048.000000000, toUInt16(0) <= 9223372036854786048.000000000, toUInt16(0) > 9223372036854786048.000000000, toUInt16(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt16(0), 9223372036854786048.000000000 != toUInt16(0), 9223372036854786048.000000000 < toUInt16(0), 9223372036854786048.000000000 <= toUInt16(0), 9223372036854786048.000000000 > toUInt16(0), 9223372036854786048.000000000 >= toUInt16(0) , toInt16(0) = 9223372036854786048.000000000, toInt16(0) != 9223372036854786048.000000000, toInt16(0) < 9223372036854786048.000000000, toInt16(0) <= 9223372036854786048.000000000, toInt16(0) > 9223372036854786048.000000000, toInt16(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt16(0), 9223372036854786048.000000000 != toInt16(0), 9223372036854786048.000000000 < toInt16(0), 9223372036854786048.000000000 <= toInt16(0), 9223372036854786048.000000000 > toInt16(0), 9223372036854786048.000000000 >= toInt16(0) , toUInt32(0) = 9223372036854786048.000000000, toUInt32(0) != 9223372036854786048.000000000, toUInt32(0) < 9223372036854786048.000000000, toUInt32(0) <= 9223372036854786048.000000000, toUInt32(0) > 9223372036854786048.000000000, toUInt32(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt32(0), 9223372036854786048.000000000 != toUInt32(0), 9223372036854786048.000000000 < toUInt32(0), 9223372036854786048.000000000 <= toUInt32(0), 9223372036854786048.000000000 > toUInt32(0), 9223372036854786048.000000000 >= toUInt32(0) , toInt32(0) = 9223372036854786048.000000000, toInt32(0) != 9223372036854786048.000000000, toInt32(0) < 9223372036854786048.000000000, toInt32(0) <= 9223372036854786048.000000000, toInt32(0) > 9223372036854786048.000000000, toInt32(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt32(0), 9223372036854786048.000000000 != toInt32(0), 9223372036854786048.000000000 < toInt32(0), 9223372036854786048.000000000 <= toInt32(0), 9223372036854786048.000000000 > toInt32(0), 9223372036854786048.000000000 >= toInt32(0) , toUInt64(0) = 9223372036854786048.000000000, toUInt64(0) != 9223372036854786048.000000000, toUInt64(0) < 9223372036854786048.000000000, toUInt64(0) <= 9223372036854786048.000000000, toUInt64(0) > 9223372036854786048.000000000, toUInt64(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt64(0), 9223372036854786048.000000000 != toUInt64(0), 9223372036854786048.000000000 < toUInt64(0), 9223372036854786048.000000000 <= toUInt64(0), 9223372036854786048.000000000 > toUInt64(0), 9223372036854786048.000000000 >= toUInt64(0) , toInt64(0) = 9223372036854786048.000000000, toInt64(0) != 9223372036854786048.000000000, 
toInt64(0) < 9223372036854786048.000000000, toInt64(0) <= 9223372036854786048.000000000, toInt64(0) > 9223372036854786048.000000000, toInt64(0) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt64(0), 9223372036854786048.000000000 != toInt64(0), 9223372036854786048.000000000 < toInt64(0), 9223372036854786048.000000000 <= toInt64(0), 9223372036854786048.000000000 > toInt64(0), 9223372036854786048.000000000 >= toInt64(0) ; +SELECT '-1', '0.000000000', -1 = 0.000000000, -1 != 0.000000000, -1 < 0.000000000, -1 <= 0.000000000, -1 > 0.000000000, -1 >= 0.000000000, 0.000000000 = -1, 0.000000000 != -1, 0.000000000 < -1, 0.000000000 <= -1, 0.000000000 > -1, 0.000000000 >= -1 , toInt8(-1) = 0.000000000, toInt8(-1) != 0.000000000, toInt8(-1) < 0.000000000, toInt8(-1) <= 0.000000000, toInt8(-1) > 0.000000000, toInt8(-1) >= 0.000000000, 0.000000000 = toInt8(-1), 0.000000000 != toInt8(-1), 0.000000000 < toInt8(-1), 0.000000000 <= toInt8(-1), 0.000000000 > toInt8(-1), 0.000000000 >= toInt8(-1) , toInt16(-1) = 0.000000000, toInt16(-1) != 0.000000000, toInt16(-1) < 0.000000000, toInt16(-1) <= 0.000000000, toInt16(-1) > 0.000000000, toInt16(-1) >= 0.000000000, 0.000000000 = toInt16(-1), 0.000000000 != toInt16(-1), 0.000000000 < toInt16(-1), 0.000000000 <= toInt16(-1), 0.000000000 > toInt16(-1), 0.000000000 >= toInt16(-1) , toInt32(-1) = 0.000000000, toInt32(-1) != 0.000000000, toInt32(-1) < 0.000000000, toInt32(-1) <= 0.000000000, toInt32(-1) > 0.000000000, toInt32(-1) >= 0.000000000, 0.000000000 = toInt32(-1), 0.000000000 != toInt32(-1), 0.000000000 < toInt32(-1), 0.000000000 <= toInt32(-1), 0.000000000 > toInt32(-1), 0.000000000 >= toInt32(-1) , toInt64(-1) = 0.000000000, toInt64(-1) != 0.000000000, toInt64(-1) < 0.000000000, toInt64(-1) <= 0.000000000, toInt64(-1) > 0.000000000, toInt64(-1) >= 0.000000000, 0.000000000 = toInt64(-1), 0.000000000 != toInt64(-1), 0.000000000 < toInt64(-1), 0.000000000 <= toInt64(-1), 0.000000000 > toInt64(-1), 0.000000000 >= toInt64(-1) ; +SELECT '-1', '-1.000000000', -1 = -1.000000000, -1 != -1.000000000, -1 < -1.000000000, -1 <= -1.000000000, -1 > -1.000000000, -1 >= -1.000000000, -1.000000000 = -1, -1.000000000 != -1, -1.000000000 < -1, -1.000000000 <= -1, -1.000000000 > -1, -1.000000000 >= -1 , toInt8(-1) = -1.000000000, toInt8(-1) != -1.000000000, toInt8(-1) < -1.000000000, toInt8(-1) <= -1.000000000, toInt8(-1) > -1.000000000, toInt8(-1) >= -1.000000000, -1.000000000 = toInt8(-1), -1.000000000 != toInt8(-1), -1.000000000 < toInt8(-1), -1.000000000 <= toInt8(-1), -1.000000000 > toInt8(-1), -1.000000000 >= toInt8(-1) , toInt16(-1) = -1.000000000, toInt16(-1) != -1.000000000, toInt16(-1) < -1.000000000, toInt16(-1) <= -1.000000000, toInt16(-1) > -1.000000000, toInt16(-1) >= -1.000000000, -1.000000000 = toInt16(-1), -1.000000000 != toInt16(-1), -1.000000000 < toInt16(-1), -1.000000000 <= toInt16(-1), -1.000000000 > toInt16(-1), -1.000000000 >= toInt16(-1) , toInt32(-1) = -1.000000000, toInt32(-1) != -1.000000000, toInt32(-1) < -1.000000000, toInt32(-1) <= -1.000000000, toInt32(-1) > -1.000000000, toInt32(-1) >= -1.000000000, -1.000000000 = toInt32(-1), -1.000000000 != toInt32(-1), -1.000000000 < toInt32(-1), -1.000000000 <= toInt32(-1), -1.000000000 > toInt32(-1), -1.000000000 >= toInt32(-1) , toInt64(-1) = -1.000000000, toInt64(-1) != -1.000000000, toInt64(-1) < -1.000000000, toInt64(-1) <= -1.000000000, toInt64(-1) > -1.000000000, toInt64(-1) >= -1.000000000, -1.000000000 = toInt64(-1), -1.000000000 != toInt64(-1), -1.000000000 < toInt64(-1), 
-1.000000000 <= toInt64(-1), -1.000000000 > toInt64(-1), -1.000000000 >= toInt64(-1) ; +SELECT '-1', '1.000000000', -1 = 1.000000000, -1 != 1.000000000, -1 < 1.000000000, -1 <= 1.000000000, -1 > 1.000000000, -1 >= 1.000000000, 1.000000000 = -1, 1.000000000 != -1, 1.000000000 < -1, 1.000000000 <= -1, 1.000000000 > -1, 1.000000000 >= -1 , toInt8(-1) = 1.000000000, toInt8(-1) != 1.000000000, toInt8(-1) < 1.000000000, toInt8(-1) <= 1.000000000, toInt8(-1) > 1.000000000, toInt8(-1) >= 1.000000000, 1.000000000 = toInt8(-1), 1.000000000 != toInt8(-1), 1.000000000 < toInt8(-1), 1.000000000 <= toInt8(-1), 1.000000000 > toInt8(-1), 1.000000000 >= toInt8(-1) , toInt16(-1) = 1.000000000, toInt16(-1) != 1.000000000, toInt16(-1) < 1.000000000, toInt16(-1) <= 1.000000000, toInt16(-1) > 1.000000000, toInt16(-1) >= 1.000000000, 1.000000000 = toInt16(-1), 1.000000000 != toInt16(-1), 1.000000000 < toInt16(-1), 1.000000000 <= toInt16(-1), 1.000000000 > toInt16(-1), 1.000000000 >= toInt16(-1) , toInt32(-1) = 1.000000000, toInt32(-1) != 1.000000000, toInt32(-1) < 1.000000000, toInt32(-1) <= 1.000000000, toInt32(-1) > 1.000000000, toInt32(-1) >= 1.000000000, 1.000000000 = toInt32(-1), 1.000000000 != toInt32(-1), 1.000000000 < toInt32(-1), 1.000000000 <= toInt32(-1), 1.000000000 > toInt32(-1), 1.000000000 >= toInt32(-1) , toInt64(-1) = 1.000000000, toInt64(-1) != 1.000000000, toInt64(-1) < 1.000000000, toInt64(-1) <= 1.000000000, toInt64(-1) > 1.000000000, toInt64(-1) >= 1.000000000, 1.000000000 = toInt64(-1), 1.000000000 != toInt64(-1), 1.000000000 < toInt64(-1), 1.000000000 <= toInt64(-1), 1.000000000 > toInt64(-1), 1.000000000 >= toInt64(-1) ; +SELECT '-1', '18446744073709551616.000000000', -1 = 18446744073709551616.000000000, -1 != 18446744073709551616.000000000, -1 < 18446744073709551616.000000000, -1 <= 18446744073709551616.000000000, -1 > 18446744073709551616.000000000, -1 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = -1, 18446744073709551616.000000000 != -1, 18446744073709551616.000000000 < -1, 18446744073709551616.000000000 <= -1, 18446744073709551616.000000000 > -1, 18446744073709551616.000000000 >= -1 , toInt8(-1) = 18446744073709551616.000000000, toInt8(-1) != 18446744073709551616.000000000, toInt8(-1) < 18446744073709551616.000000000, toInt8(-1) <= 18446744073709551616.000000000, toInt8(-1) > 18446744073709551616.000000000, toInt8(-1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt8(-1), 18446744073709551616.000000000 != toInt8(-1), 18446744073709551616.000000000 < toInt8(-1), 18446744073709551616.000000000 <= toInt8(-1), 18446744073709551616.000000000 > toInt8(-1), 18446744073709551616.000000000 >= toInt8(-1) , toInt16(-1) = 18446744073709551616.000000000, toInt16(-1) != 18446744073709551616.000000000, toInt16(-1) < 18446744073709551616.000000000, toInt16(-1) <= 18446744073709551616.000000000, toInt16(-1) > 18446744073709551616.000000000, toInt16(-1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt16(-1), 18446744073709551616.000000000 != toInt16(-1), 18446744073709551616.000000000 < toInt16(-1), 18446744073709551616.000000000 <= toInt16(-1), 18446744073709551616.000000000 > toInt16(-1), 18446744073709551616.000000000 >= toInt16(-1) , toInt32(-1) = 18446744073709551616.000000000, toInt32(-1) != 18446744073709551616.000000000, toInt32(-1) < 18446744073709551616.000000000, toInt32(-1) <= 18446744073709551616.000000000, toInt32(-1) > 18446744073709551616.000000000, toInt32(-1) >= 18446744073709551616.000000000, 
18446744073709551616.000000000 = toInt32(-1), 18446744073709551616.000000000 != toInt32(-1), 18446744073709551616.000000000 < toInt32(-1), 18446744073709551616.000000000 <= toInt32(-1), 18446744073709551616.000000000 > toInt32(-1), 18446744073709551616.000000000 >= toInt32(-1) , toInt64(-1) = 18446744073709551616.000000000, toInt64(-1) != 18446744073709551616.000000000, toInt64(-1) < 18446744073709551616.000000000, toInt64(-1) <= 18446744073709551616.000000000, toInt64(-1) > 18446744073709551616.000000000, toInt64(-1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt64(-1), 18446744073709551616.000000000 != toInt64(-1), 18446744073709551616.000000000 < toInt64(-1), 18446744073709551616.000000000 <= toInt64(-1), 18446744073709551616.000000000 > toInt64(-1), 18446744073709551616.000000000 >= toInt64(-1) ; +SELECT '-1', '9223372036854775808.000000000', -1 = 9223372036854775808.000000000, -1 != 9223372036854775808.000000000, -1 < 9223372036854775808.000000000, -1 <= 9223372036854775808.000000000, -1 > 9223372036854775808.000000000, -1 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = -1, 9223372036854775808.000000000 != -1, 9223372036854775808.000000000 < -1, 9223372036854775808.000000000 <= -1, 9223372036854775808.000000000 > -1, 9223372036854775808.000000000 >= -1 , toInt8(-1) = 9223372036854775808.000000000, toInt8(-1) != 9223372036854775808.000000000, toInt8(-1) < 9223372036854775808.000000000, toInt8(-1) <= 9223372036854775808.000000000, toInt8(-1) > 9223372036854775808.000000000, toInt8(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(-1), 9223372036854775808.000000000 != toInt8(-1), 9223372036854775808.000000000 < toInt8(-1), 9223372036854775808.000000000 <= toInt8(-1), 9223372036854775808.000000000 > toInt8(-1), 9223372036854775808.000000000 >= toInt8(-1) , toInt16(-1) = 9223372036854775808.000000000, toInt16(-1) != 9223372036854775808.000000000, toInt16(-1) < 9223372036854775808.000000000, toInt16(-1) <= 9223372036854775808.000000000, toInt16(-1) > 9223372036854775808.000000000, toInt16(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(-1), 9223372036854775808.000000000 != toInt16(-1), 9223372036854775808.000000000 < toInt16(-1), 9223372036854775808.000000000 <= toInt16(-1), 9223372036854775808.000000000 > toInt16(-1), 9223372036854775808.000000000 >= toInt16(-1) , toInt32(-1) = 9223372036854775808.000000000, toInt32(-1) != 9223372036854775808.000000000, toInt32(-1) < 9223372036854775808.000000000, toInt32(-1) <= 9223372036854775808.000000000, toInt32(-1) > 9223372036854775808.000000000, toInt32(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(-1), 9223372036854775808.000000000 != toInt32(-1), 9223372036854775808.000000000 < toInt32(-1), 9223372036854775808.000000000 <= toInt32(-1), 9223372036854775808.000000000 > toInt32(-1), 9223372036854775808.000000000 >= toInt32(-1) , toInt64(-1) = 9223372036854775808.000000000, toInt64(-1) != 9223372036854775808.000000000, toInt64(-1) < 9223372036854775808.000000000, toInt64(-1) <= 9223372036854775808.000000000, toInt64(-1) > 9223372036854775808.000000000, toInt64(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(-1), 9223372036854775808.000000000 != toInt64(-1), 9223372036854775808.000000000 < toInt64(-1), 9223372036854775808.000000000 <= toInt64(-1), 9223372036854775808.000000000 > toInt64(-1), 9223372036854775808.000000000 >= toInt64(-1) ; +SELECT '-1', '-9223372036854775808.000000000', -1 = 
-9223372036854775808.000000000, -1 != -9223372036854775808.000000000, -1 < -9223372036854775808.000000000, -1 <= -9223372036854775808.000000000, -1 > -9223372036854775808.000000000, -1 >= -9223372036854775808.000000000, -9223372036854775808.000000000 = -1, -9223372036854775808.000000000 != -1, -9223372036854775808.000000000 < -1, -9223372036854775808.000000000 <= -1, -9223372036854775808.000000000 > -1, -9223372036854775808.000000000 >= -1 , toInt8(-1) = -9223372036854775808.000000000, toInt8(-1) != -9223372036854775808.000000000, toInt8(-1) < -9223372036854775808.000000000, toInt8(-1) <= -9223372036854775808.000000000, toInt8(-1) > -9223372036854775808.000000000, toInt8(-1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt8(-1), -9223372036854775808.000000000 != toInt8(-1), -9223372036854775808.000000000 < toInt8(-1), -9223372036854775808.000000000 <= toInt8(-1), -9223372036854775808.000000000 > toInt8(-1), -9223372036854775808.000000000 >= toInt8(-1) , toInt16(-1) = -9223372036854775808.000000000, toInt16(-1) != -9223372036854775808.000000000, toInt16(-1) < -9223372036854775808.000000000, toInt16(-1) <= -9223372036854775808.000000000, toInt16(-1) > -9223372036854775808.000000000, toInt16(-1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt16(-1), -9223372036854775808.000000000 != toInt16(-1), -9223372036854775808.000000000 < toInt16(-1), -9223372036854775808.000000000 <= toInt16(-1), -9223372036854775808.000000000 > toInt16(-1), -9223372036854775808.000000000 >= toInt16(-1) , toInt32(-1) = -9223372036854775808.000000000, toInt32(-1) != -9223372036854775808.000000000, toInt32(-1) < -9223372036854775808.000000000, toInt32(-1) <= -9223372036854775808.000000000, toInt32(-1) > -9223372036854775808.000000000, toInt32(-1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt32(-1), -9223372036854775808.000000000 != toInt32(-1), -9223372036854775808.000000000 < toInt32(-1), -9223372036854775808.000000000 <= toInt32(-1), -9223372036854775808.000000000 > toInt32(-1), -9223372036854775808.000000000 >= toInt32(-1) , toInt64(-1) = -9223372036854775808.000000000, toInt64(-1) != -9223372036854775808.000000000, toInt64(-1) < -9223372036854775808.000000000, toInt64(-1) <= -9223372036854775808.000000000, toInt64(-1) > -9223372036854775808.000000000, toInt64(-1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt64(-1), -9223372036854775808.000000000 != toInt64(-1), -9223372036854775808.000000000 < toInt64(-1), -9223372036854775808.000000000 <= toInt64(-1), -9223372036854775808.000000000 > toInt64(-1), -9223372036854775808.000000000 >= toInt64(-1) ; +SELECT '-1', '9223372036854775808.000000000', -1 = 9223372036854775808.000000000, -1 != 9223372036854775808.000000000, -1 < 9223372036854775808.000000000, -1 <= 9223372036854775808.000000000, -1 > 9223372036854775808.000000000, -1 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = -1, 9223372036854775808.000000000 != -1, 9223372036854775808.000000000 < -1, 9223372036854775808.000000000 <= -1, 9223372036854775808.000000000 > -1, 9223372036854775808.000000000 >= -1 , toInt8(-1) = 9223372036854775808.000000000, toInt8(-1) != 9223372036854775808.000000000, toInt8(-1) < 9223372036854775808.000000000, toInt8(-1) <= 9223372036854775808.000000000, toInt8(-1) > 9223372036854775808.000000000, toInt8(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(-1), 9223372036854775808.000000000 != toInt8(-1), 9223372036854775808.000000000 < 
toInt8(-1), 9223372036854775808.000000000 <= toInt8(-1), 9223372036854775808.000000000 > toInt8(-1), 9223372036854775808.000000000 >= toInt8(-1) , toInt16(-1) = 9223372036854775808.000000000, toInt16(-1) != 9223372036854775808.000000000, toInt16(-1) < 9223372036854775808.000000000, toInt16(-1) <= 9223372036854775808.000000000, toInt16(-1) > 9223372036854775808.000000000, toInt16(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(-1), 9223372036854775808.000000000 != toInt16(-1), 9223372036854775808.000000000 < toInt16(-1), 9223372036854775808.000000000 <= toInt16(-1), 9223372036854775808.000000000 > toInt16(-1), 9223372036854775808.000000000 >= toInt16(-1) , toInt32(-1) = 9223372036854775808.000000000, toInt32(-1) != 9223372036854775808.000000000, toInt32(-1) < 9223372036854775808.000000000, toInt32(-1) <= 9223372036854775808.000000000, toInt32(-1) > 9223372036854775808.000000000, toInt32(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(-1), 9223372036854775808.000000000 != toInt32(-1), 9223372036854775808.000000000 < toInt32(-1), 9223372036854775808.000000000 <= toInt32(-1), 9223372036854775808.000000000 > toInt32(-1), 9223372036854775808.000000000 >= toInt32(-1) , toInt64(-1) = 9223372036854775808.000000000, toInt64(-1) != 9223372036854775808.000000000, toInt64(-1) < 9223372036854775808.000000000, toInt64(-1) <= 9223372036854775808.000000000, toInt64(-1) > 9223372036854775808.000000000, toInt64(-1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(-1), 9223372036854775808.000000000 != toInt64(-1), 9223372036854775808.000000000 < toInt64(-1), 9223372036854775808.000000000 <= toInt64(-1), 9223372036854775808.000000000 > toInt64(-1), 9223372036854775808.000000000 >= toInt64(-1) ; +SELECT '-1', '2251799813685248.000000000', -1 = 2251799813685248.000000000, -1 != 2251799813685248.000000000, -1 < 2251799813685248.000000000, -1 <= 2251799813685248.000000000, -1 > 2251799813685248.000000000, -1 >= 2251799813685248.000000000, 2251799813685248.000000000 = -1, 2251799813685248.000000000 != -1, 2251799813685248.000000000 < -1, 2251799813685248.000000000 <= -1, 2251799813685248.000000000 > -1, 2251799813685248.000000000 >= -1 , toInt8(-1) = 2251799813685248.000000000, toInt8(-1) != 2251799813685248.000000000, toInt8(-1) < 2251799813685248.000000000, toInt8(-1) <= 2251799813685248.000000000, toInt8(-1) > 2251799813685248.000000000, toInt8(-1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt8(-1), 2251799813685248.000000000 != toInt8(-1), 2251799813685248.000000000 < toInt8(-1), 2251799813685248.000000000 <= toInt8(-1), 2251799813685248.000000000 > toInt8(-1), 2251799813685248.000000000 >= toInt8(-1) , toInt16(-1) = 2251799813685248.000000000, toInt16(-1) != 2251799813685248.000000000, toInt16(-1) < 2251799813685248.000000000, toInt16(-1) <= 2251799813685248.000000000, toInt16(-1) > 2251799813685248.000000000, toInt16(-1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt16(-1), 2251799813685248.000000000 != toInt16(-1), 2251799813685248.000000000 < toInt16(-1), 2251799813685248.000000000 <= toInt16(-1), 2251799813685248.000000000 > toInt16(-1), 2251799813685248.000000000 >= toInt16(-1) , toInt32(-1) = 2251799813685248.000000000, toInt32(-1) != 2251799813685248.000000000, toInt32(-1) < 2251799813685248.000000000, toInt32(-1) <= 2251799813685248.000000000, toInt32(-1) > 2251799813685248.000000000, toInt32(-1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt32(-1), 
2251799813685248.000000000 != toInt32(-1), 2251799813685248.000000000 < toInt32(-1), 2251799813685248.000000000 <= toInt32(-1), 2251799813685248.000000000 > toInt32(-1), 2251799813685248.000000000 >= toInt32(-1) , toInt64(-1) = 2251799813685248.000000000, toInt64(-1) != 2251799813685248.000000000, toInt64(-1) < 2251799813685248.000000000, toInt64(-1) <= 2251799813685248.000000000, toInt64(-1) > 2251799813685248.000000000, toInt64(-1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt64(-1), 2251799813685248.000000000 != toInt64(-1), 2251799813685248.000000000 < toInt64(-1), 2251799813685248.000000000 <= toInt64(-1), 2251799813685248.000000000 > toInt64(-1), 2251799813685248.000000000 >= toInt64(-1) ; +SELECT '-1', '4503599627370496.000000000', -1 = 4503599627370496.000000000, -1 != 4503599627370496.000000000, -1 < 4503599627370496.000000000, -1 <= 4503599627370496.000000000, -1 > 4503599627370496.000000000, -1 >= 4503599627370496.000000000, 4503599627370496.000000000 = -1, 4503599627370496.000000000 != -1, 4503599627370496.000000000 < -1, 4503599627370496.000000000 <= -1, 4503599627370496.000000000 > -1, 4503599627370496.000000000 >= -1 , toInt8(-1) = 4503599627370496.000000000, toInt8(-1) != 4503599627370496.000000000, toInt8(-1) < 4503599627370496.000000000, toInt8(-1) <= 4503599627370496.000000000, toInt8(-1) > 4503599627370496.000000000, toInt8(-1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt8(-1), 4503599627370496.000000000 != toInt8(-1), 4503599627370496.000000000 < toInt8(-1), 4503599627370496.000000000 <= toInt8(-1), 4503599627370496.000000000 > toInt8(-1), 4503599627370496.000000000 >= toInt8(-1) , toInt16(-1) = 4503599627370496.000000000, toInt16(-1) != 4503599627370496.000000000, toInt16(-1) < 4503599627370496.000000000, toInt16(-1) <= 4503599627370496.000000000, toInt16(-1) > 4503599627370496.000000000, toInt16(-1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt16(-1), 4503599627370496.000000000 != toInt16(-1), 4503599627370496.000000000 < toInt16(-1), 4503599627370496.000000000 <= toInt16(-1), 4503599627370496.000000000 > toInt16(-1), 4503599627370496.000000000 >= toInt16(-1) , toInt32(-1) = 4503599627370496.000000000, toInt32(-1) != 4503599627370496.000000000, toInt32(-1) < 4503599627370496.000000000, toInt32(-1) <= 4503599627370496.000000000, toInt32(-1) > 4503599627370496.000000000, toInt32(-1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt32(-1), 4503599627370496.000000000 != toInt32(-1), 4503599627370496.000000000 < toInt32(-1), 4503599627370496.000000000 <= toInt32(-1), 4503599627370496.000000000 > toInt32(-1), 4503599627370496.000000000 >= toInt32(-1) , toInt64(-1) = 4503599627370496.000000000, toInt64(-1) != 4503599627370496.000000000, toInt64(-1) < 4503599627370496.000000000, toInt64(-1) <= 4503599627370496.000000000, toInt64(-1) > 4503599627370496.000000000, toInt64(-1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt64(-1), 4503599627370496.000000000 != toInt64(-1), 4503599627370496.000000000 < toInt64(-1), 4503599627370496.000000000 <= toInt64(-1), 4503599627370496.000000000 > toInt64(-1), 4503599627370496.000000000 >= toInt64(-1) ; +SELECT '-1', '9007199254740991.000000000', -1 = 9007199254740991.000000000, -1 != 9007199254740991.000000000, -1 < 9007199254740991.000000000, -1 <= 9007199254740991.000000000, -1 > 9007199254740991.000000000, -1 >= 9007199254740991.000000000, 9007199254740991.000000000 = -1, 9007199254740991.000000000 != -1, 9007199254740991.000000000 < -1, 
9007199254740991.000000000 <= -1, 9007199254740991.000000000 > -1, 9007199254740991.000000000 >= -1 , toInt8(-1) = 9007199254740991.000000000, toInt8(-1) != 9007199254740991.000000000, toInt8(-1) < 9007199254740991.000000000, toInt8(-1) <= 9007199254740991.000000000, toInt8(-1) > 9007199254740991.000000000, toInt8(-1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt8(-1), 9007199254740991.000000000 != toInt8(-1), 9007199254740991.000000000 < toInt8(-1), 9007199254740991.000000000 <= toInt8(-1), 9007199254740991.000000000 > toInt8(-1), 9007199254740991.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740991.000000000, toInt16(-1) != 9007199254740991.000000000, toInt16(-1) < 9007199254740991.000000000, toInt16(-1) <= 9007199254740991.000000000, toInt16(-1) > 9007199254740991.000000000, toInt16(-1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt16(-1), 9007199254740991.000000000 != toInt16(-1), 9007199254740991.000000000 < toInt16(-1), 9007199254740991.000000000 <= toInt16(-1), 9007199254740991.000000000 > toInt16(-1), 9007199254740991.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740991.000000000, toInt32(-1) != 9007199254740991.000000000, toInt32(-1) < 9007199254740991.000000000, toInt32(-1) <= 9007199254740991.000000000, toInt32(-1) > 9007199254740991.000000000, toInt32(-1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt32(-1), 9007199254740991.000000000 != toInt32(-1), 9007199254740991.000000000 < toInt32(-1), 9007199254740991.000000000 <= toInt32(-1), 9007199254740991.000000000 > toInt32(-1), 9007199254740991.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740991.000000000, toInt64(-1) != 9007199254740991.000000000, toInt64(-1) < 9007199254740991.000000000, toInt64(-1) <= 9007199254740991.000000000, toInt64(-1) > 9007199254740991.000000000, toInt64(-1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt64(-1), 9007199254740991.000000000 != toInt64(-1), 9007199254740991.000000000 < toInt64(-1), 9007199254740991.000000000 <= toInt64(-1), 9007199254740991.000000000 > toInt64(-1), 9007199254740991.000000000 >= toInt64(-1) ; +SELECT '-1', '9007199254740992.000000000', -1 = 9007199254740992.000000000, -1 != 9007199254740992.000000000, -1 < 9007199254740992.000000000, -1 <= 9007199254740992.000000000, -1 > 9007199254740992.000000000, -1 >= 9007199254740992.000000000, 9007199254740992.000000000 = -1, 9007199254740992.000000000 != -1, 9007199254740992.000000000 < -1, 9007199254740992.000000000 <= -1, 9007199254740992.000000000 > -1, 9007199254740992.000000000 >= -1 , toInt8(-1) = 9007199254740992.000000000, toInt8(-1) != 9007199254740992.000000000, toInt8(-1) < 9007199254740992.000000000, toInt8(-1) <= 9007199254740992.000000000, toInt8(-1) > 9007199254740992.000000000, toInt8(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(-1), 9007199254740992.000000000 != toInt8(-1), 9007199254740992.000000000 < toInt8(-1), 9007199254740992.000000000 <= toInt8(-1), 9007199254740992.000000000 > toInt8(-1), 9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740992.000000000, toInt16(-1) != 9007199254740992.000000000, toInt16(-1) < 9007199254740992.000000000, toInt16(-1) <= 9007199254740992.000000000, toInt16(-1) > 9007199254740992.000000000, toInt16(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(-1), 9007199254740992.000000000 != toInt16(-1), 9007199254740992.000000000 < toInt16(-1), 9007199254740992.000000000 <= toInt16(-1), 9007199254740992.000000000 > toInt16(-1), 
9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740992.000000000, toInt32(-1) != 9007199254740992.000000000, toInt32(-1) < 9007199254740992.000000000, toInt32(-1) <= 9007199254740992.000000000, toInt32(-1) > 9007199254740992.000000000, toInt32(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(-1), 9007199254740992.000000000 != toInt32(-1), 9007199254740992.000000000 < toInt32(-1), 9007199254740992.000000000 <= toInt32(-1), 9007199254740992.000000000 > toInt32(-1), 9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740992.000000000, toInt64(-1) != 9007199254740992.000000000, toInt64(-1) < 9007199254740992.000000000, toInt64(-1) <= 9007199254740992.000000000, toInt64(-1) > 9007199254740992.000000000, toInt64(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(-1), 9007199254740992.000000000 != toInt64(-1), 9007199254740992.000000000 < toInt64(-1), 9007199254740992.000000000 <= toInt64(-1), 9007199254740992.000000000 > toInt64(-1), 9007199254740992.000000000 >= toInt64(-1) ; +SELECT '-1', '9007199254740992.000000000', -1 = 9007199254740992.000000000, -1 != 9007199254740992.000000000, -1 < 9007199254740992.000000000, -1 <= 9007199254740992.000000000, -1 > 9007199254740992.000000000, -1 >= 9007199254740992.000000000, 9007199254740992.000000000 = -1, 9007199254740992.000000000 != -1, 9007199254740992.000000000 < -1, 9007199254740992.000000000 <= -1, 9007199254740992.000000000 > -1, 9007199254740992.000000000 >= -1 , toInt8(-1) = 9007199254740992.000000000, toInt8(-1) != 9007199254740992.000000000, toInt8(-1) < 9007199254740992.000000000, toInt8(-1) <= 9007199254740992.000000000, toInt8(-1) > 9007199254740992.000000000, toInt8(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(-1), 9007199254740992.000000000 != toInt8(-1), 9007199254740992.000000000 < toInt8(-1), 9007199254740992.000000000 <= toInt8(-1), 9007199254740992.000000000 > toInt8(-1), 9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740992.000000000, toInt16(-1) != 9007199254740992.000000000, toInt16(-1) < 9007199254740992.000000000, toInt16(-1) <= 9007199254740992.000000000, toInt16(-1) > 9007199254740992.000000000, toInt16(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(-1), 9007199254740992.000000000 != toInt16(-1), 9007199254740992.000000000 < toInt16(-1), 9007199254740992.000000000 <= toInt16(-1), 9007199254740992.000000000 > toInt16(-1), 9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740992.000000000, toInt32(-1) != 9007199254740992.000000000, toInt32(-1) < 9007199254740992.000000000, toInt32(-1) <= 9007199254740992.000000000, toInt32(-1) > 9007199254740992.000000000, toInt32(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(-1), 9007199254740992.000000000 != toInt32(-1), 9007199254740992.000000000 < toInt32(-1), 9007199254740992.000000000 <= toInt32(-1), 9007199254740992.000000000 > toInt32(-1), 9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740992.000000000, toInt64(-1) != 9007199254740992.000000000, toInt64(-1) < 9007199254740992.000000000, toInt64(-1) <= 9007199254740992.000000000, toInt64(-1) > 9007199254740992.000000000, toInt64(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(-1), 9007199254740992.000000000 != toInt64(-1), 9007199254740992.000000000 < toInt64(-1), 9007199254740992.000000000 <= toInt64(-1), 9007199254740992.000000000 > toInt64(-1), 9007199254740992.000000000 >= toInt64(-1) ; +SELECT 
'-1', '9007199254740994.000000000', -1 = 9007199254740994.000000000, -1 != 9007199254740994.000000000, -1 < 9007199254740994.000000000, -1 <= 9007199254740994.000000000, -1 > 9007199254740994.000000000, -1 >= 9007199254740994.000000000, 9007199254740994.000000000 = -1, 9007199254740994.000000000 != -1, 9007199254740994.000000000 < -1, 9007199254740994.000000000 <= -1, 9007199254740994.000000000 > -1, 9007199254740994.000000000 >= -1 , toInt8(-1) = 9007199254740994.000000000, toInt8(-1) != 9007199254740994.000000000, toInt8(-1) < 9007199254740994.000000000, toInt8(-1) <= 9007199254740994.000000000, toInt8(-1) > 9007199254740994.000000000, toInt8(-1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt8(-1), 9007199254740994.000000000 != toInt8(-1), 9007199254740994.000000000 < toInt8(-1), 9007199254740994.000000000 <= toInt8(-1), 9007199254740994.000000000 > toInt8(-1), 9007199254740994.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740994.000000000, toInt16(-1) != 9007199254740994.000000000, toInt16(-1) < 9007199254740994.000000000, toInt16(-1) <= 9007199254740994.000000000, toInt16(-1) > 9007199254740994.000000000, toInt16(-1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt16(-1), 9007199254740994.000000000 != toInt16(-1), 9007199254740994.000000000 < toInt16(-1), 9007199254740994.000000000 <= toInt16(-1), 9007199254740994.000000000 > toInt16(-1), 9007199254740994.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740994.000000000, toInt32(-1) != 9007199254740994.000000000, toInt32(-1) < 9007199254740994.000000000, toInt32(-1) <= 9007199254740994.000000000, toInt32(-1) > 9007199254740994.000000000, toInt32(-1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt32(-1), 9007199254740994.000000000 != toInt32(-1), 9007199254740994.000000000 < toInt32(-1), 9007199254740994.000000000 <= toInt32(-1), 9007199254740994.000000000 > toInt32(-1), 9007199254740994.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740994.000000000, toInt64(-1) != 9007199254740994.000000000, toInt64(-1) < 9007199254740994.000000000, toInt64(-1) <= 9007199254740994.000000000, toInt64(-1) > 9007199254740994.000000000, toInt64(-1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt64(-1), 9007199254740994.000000000 != toInt64(-1), 9007199254740994.000000000 < toInt64(-1), 9007199254740994.000000000 <= toInt64(-1), 9007199254740994.000000000 > toInt64(-1), 9007199254740994.000000000 >= toInt64(-1) ; +SELECT '-1', '-9007199254740991.000000000', -1 = -9007199254740991.000000000, -1 != -9007199254740991.000000000, -1 < -9007199254740991.000000000, -1 <= -9007199254740991.000000000, -1 > -9007199254740991.000000000, -1 >= -9007199254740991.000000000, -9007199254740991.000000000 = -1, -9007199254740991.000000000 != -1, -9007199254740991.000000000 < -1, -9007199254740991.000000000 <= -1, -9007199254740991.000000000 > -1, -9007199254740991.000000000 >= -1 , toInt8(-1) = -9007199254740991.000000000, toInt8(-1) != -9007199254740991.000000000, toInt8(-1) < -9007199254740991.000000000, toInt8(-1) <= -9007199254740991.000000000, toInt8(-1) > -9007199254740991.000000000, toInt8(-1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt8(-1), -9007199254740991.000000000 != toInt8(-1), -9007199254740991.000000000 < toInt8(-1), -9007199254740991.000000000 <= toInt8(-1), -9007199254740991.000000000 > toInt8(-1), -9007199254740991.000000000 >= toInt8(-1) , toInt16(-1) = -9007199254740991.000000000, toInt16(-1) != -9007199254740991.000000000, toInt16(-1) < 
-9007199254740991.000000000, toInt16(-1) <= -9007199254740991.000000000, toInt16(-1) > -9007199254740991.000000000, toInt16(-1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt16(-1), -9007199254740991.000000000 != toInt16(-1), -9007199254740991.000000000 < toInt16(-1), -9007199254740991.000000000 <= toInt16(-1), -9007199254740991.000000000 > toInt16(-1), -9007199254740991.000000000 >= toInt16(-1) , toInt32(-1) = -9007199254740991.000000000, toInt32(-1) != -9007199254740991.000000000, toInt32(-1) < -9007199254740991.000000000, toInt32(-1) <= -9007199254740991.000000000, toInt32(-1) > -9007199254740991.000000000, toInt32(-1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt32(-1), -9007199254740991.000000000 != toInt32(-1), -9007199254740991.000000000 < toInt32(-1), -9007199254740991.000000000 <= toInt32(-1), -9007199254740991.000000000 > toInt32(-1), -9007199254740991.000000000 >= toInt32(-1) , toInt64(-1) = -9007199254740991.000000000, toInt64(-1) != -9007199254740991.000000000, toInt64(-1) < -9007199254740991.000000000, toInt64(-1) <= -9007199254740991.000000000, toInt64(-1) > -9007199254740991.000000000, toInt64(-1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt64(-1), -9007199254740991.000000000 != toInt64(-1), -9007199254740991.000000000 < toInt64(-1), -9007199254740991.000000000 <= toInt64(-1), -9007199254740991.000000000 > toInt64(-1), -9007199254740991.000000000 >= toInt64(-1) ; +SELECT '-1', '-9007199254740992.000000000', -1 = -9007199254740992.000000000, -1 != -9007199254740992.000000000, -1 < -9007199254740992.000000000, -1 <= -9007199254740992.000000000, -1 > -9007199254740992.000000000, -1 >= -9007199254740992.000000000, -9007199254740992.000000000 = -1, -9007199254740992.000000000 != -1, -9007199254740992.000000000 < -1, -9007199254740992.000000000 <= -1, -9007199254740992.000000000 > -1, -9007199254740992.000000000 >= -1 , toInt8(-1) = -9007199254740992.000000000, toInt8(-1) != -9007199254740992.000000000, toInt8(-1) < -9007199254740992.000000000, toInt8(-1) <= -9007199254740992.000000000, toInt8(-1) > -9007199254740992.000000000, toInt8(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(-1), -9007199254740992.000000000 != toInt8(-1), -9007199254740992.000000000 < toInt8(-1), -9007199254740992.000000000 <= toInt8(-1), -9007199254740992.000000000 > toInt8(-1), -9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = -9007199254740992.000000000, toInt16(-1) != -9007199254740992.000000000, toInt16(-1) < -9007199254740992.000000000, toInt16(-1) <= -9007199254740992.000000000, toInt16(-1) > -9007199254740992.000000000, toInt16(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(-1), -9007199254740992.000000000 != toInt16(-1), -9007199254740992.000000000 < toInt16(-1), -9007199254740992.000000000 <= toInt16(-1), -9007199254740992.000000000 > toInt16(-1), -9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = -9007199254740992.000000000, toInt32(-1) != -9007199254740992.000000000, toInt32(-1) < -9007199254740992.000000000, toInt32(-1) <= -9007199254740992.000000000, toInt32(-1) > -9007199254740992.000000000, toInt32(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(-1), -9007199254740992.000000000 != toInt32(-1), -9007199254740992.000000000 < toInt32(-1), -9007199254740992.000000000 <= toInt32(-1), -9007199254740992.000000000 > toInt32(-1), -9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = -9007199254740992.000000000, toInt64(-1) != 
-9007199254740992.000000000, toInt64(-1) < -9007199254740992.000000000, toInt64(-1) <= -9007199254740992.000000000, toInt64(-1) > -9007199254740992.000000000, toInt64(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(-1), -9007199254740992.000000000 != toInt64(-1), -9007199254740992.000000000 < toInt64(-1), -9007199254740992.000000000 <= toInt64(-1), -9007199254740992.000000000 > toInt64(-1), -9007199254740992.000000000 >= toInt64(-1) ; +SELECT '-1', '-9007199254740992.000000000', -1 = -9007199254740992.000000000, -1 != -9007199254740992.000000000, -1 < -9007199254740992.000000000, -1 <= -9007199254740992.000000000, -1 > -9007199254740992.000000000, -1 >= -9007199254740992.000000000, -9007199254740992.000000000 = -1, -9007199254740992.000000000 != -1, -9007199254740992.000000000 < -1, -9007199254740992.000000000 <= -1, -9007199254740992.000000000 > -1, -9007199254740992.000000000 >= -1 , toInt8(-1) = -9007199254740992.000000000, toInt8(-1) != -9007199254740992.000000000, toInt8(-1) < -9007199254740992.000000000, toInt8(-1) <= -9007199254740992.000000000, toInt8(-1) > -9007199254740992.000000000, toInt8(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(-1), -9007199254740992.000000000 != toInt8(-1), -9007199254740992.000000000 < toInt8(-1), -9007199254740992.000000000 <= toInt8(-1), -9007199254740992.000000000 > toInt8(-1), -9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = -9007199254740992.000000000, toInt16(-1) != -9007199254740992.000000000, toInt16(-1) < -9007199254740992.000000000, toInt16(-1) <= -9007199254740992.000000000, toInt16(-1) > -9007199254740992.000000000, toInt16(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(-1), -9007199254740992.000000000 != toInt16(-1), -9007199254740992.000000000 < toInt16(-1), -9007199254740992.000000000 <= toInt16(-1), -9007199254740992.000000000 > toInt16(-1), -9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = -9007199254740992.000000000, toInt32(-1) != -9007199254740992.000000000, toInt32(-1) < -9007199254740992.000000000, toInt32(-1) <= -9007199254740992.000000000, toInt32(-1) > -9007199254740992.000000000, toInt32(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(-1), -9007199254740992.000000000 != toInt32(-1), -9007199254740992.000000000 < toInt32(-1), -9007199254740992.000000000 <= toInt32(-1), -9007199254740992.000000000 > toInt32(-1), -9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = -9007199254740992.000000000, toInt64(-1) != -9007199254740992.000000000, toInt64(-1) < -9007199254740992.000000000, toInt64(-1) <= -9007199254740992.000000000, toInt64(-1) > -9007199254740992.000000000, toInt64(-1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(-1), -9007199254740992.000000000 != toInt64(-1), -9007199254740992.000000000 < toInt64(-1), -9007199254740992.000000000 <= toInt64(-1), -9007199254740992.000000000 > toInt64(-1), -9007199254740992.000000000 >= toInt64(-1) ; +SELECT '-1', '-9007199254740994.000000000', -1 = -9007199254740994.000000000, -1 != -9007199254740994.000000000, -1 < -9007199254740994.000000000, -1 <= -9007199254740994.000000000, -1 > -9007199254740994.000000000, -1 >= -9007199254740994.000000000, -9007199254740994.000000000 = -1, -9007199254740994.000000000 != -1, -9007199254740994.000000000 < -1, -9007199254740994.000000000 <= -1, -9007199254740994.000000000 > -1, -9007199254740994.000000000 >= -1 , toInt8(-1) = -9007199254740994.000000000, toInt8(-1) != -9007199254740994.000000000, 
toInt8(-1) < -9007199254740994.000000000, toInt8(-1) <= -9007199254740994.000000000, toInt8(-1) > -9007199254740994.000000000, toInt8(-1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt8(-1), -9007199254740994.000000000 != toInt8(-1), -9007199254740994.000000000 < toInt8(-1), -9007199254740994.000000000 <= toInt8(-1), -9007199254740994.000000000 > toInt8(-1), -9007199254740994.000000000 >= toInt8(-1) , toInt16(-1) = -9007199254740994.000000000, toInt16(-1) != -9007199254740994.000000000, toInt16(-1) < -9007199254740994.000000000, toInt16(-1) <= -9007199254740994.000000000, toInt16(-1) > -9007199254740994.000000000, toInt16(-1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt16(-1), -9007199254740994.000000000 != toInt16(-1), -9007199254740994.000000000 < toInt16(-1), -9007199254740994.000000000 <= toInt16(-1), -9007199254740994.000000000 > toInt16(-1), -9007199254740994.000000000 >= toInt16(-1) , toInt32(-1) = -9007199254740994.000000000, toInt32(-1) != -9007199254740994.000000000, toInt32(-1) < -9007199254740994.000000000, toInt32(-1) <= -9007199254740994.000000000, toInt32(-1) > -9007199254740994.000000000, toInt32(-1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt32(-1), -9007199254740994.000000000 != toInt32(-1), -9007199254740994.000000000 < toInt32(-1), -9007199254740994.000000000 <= toInt32(-1), -9007199254740994.000000000 > toInt32(-1), -9007199254740994.000000000 >= toInt32(-1) , toInt64(-1) = -9007199254740994.000000000, toInt64(-1) != -9007199254740994.000000000, toInt64(-1) < -9007199254740994.000000000, toInt64(-1) <= -9007199254740994.000000000, toInt64(-1) > -9007199254740994.000000000, toInt64(-1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt64(-1), -9007199254740994.000000000 != toInt64(-1), -9007199254740994.000000000 < toInt64(-1), -9007199254740994.000000000 <= toInt64(-1), -9007199254740994.000000000 > toInt64(-1), -9007199254740994.000000000 >= toInt64(-1) ; +SELECT '-1', '104.000000000', -1 = 104.000000000, -1 != 104.000000000, -1 < 104.000000000, -1 <= 104.000000000, -1 > 104.000000000, -1 >= 104.000000000, 104.000000000 = -1, 104.000000000 != -1, 104.000000000 < -1, 104.000000000 <= -1, 104.000000000 > -1, 104.000000000 >= -1 , toInt8(-1) = 104.000000000, toInt8(-1) != 104.000000000, toInt8(-1) < 104.000000000, toInt8(-1) <= 104.000000000, toInt8(-1) > 104.000000000, toInt8(-1) >= 104.000000000, 104.000000000 = toInt8(-1), 104.000000000 != toInt8(-1), 104.000000000 < toInt8(-1), 104.000000000 <= toInt8(-1), 104.000000000 > toInt8(-1), 104.000000000 >= toInt8(-1) , toInt16(-1) = 104.000000000, toInt16(-1) != 104.000000000, toInt16(-1) < 104.000000000, toInt16(-1) <= 104.000000000, toInt16(-1) > 104.000000000, toInt16(-1) >= 104.000000000, 104.000000000 = toInt16(-1), 104.000000000 != toInt16(-1), 104.000000000 < toInt16(-1), 104.000000000 <= toInt16(-1), 104.000000000 > toInt16(-1), 104.000000000 >= toInt16(-1) , toInt32(-1) = 104.000000000, toInt32(-1) != 104.000000000, toInt32(-1) < 104.000000000, toInt32(-1) <= 104.000000000, toInt32(-1) > 104.000000000, toInt32(-1) >= 104.000000000, 104.000000000 = toInt32(-1), 104.000000000 != toInt32(-1), 104.000000000 < toInt32(-1), 104.000000000 <= toInt32(-1), 104.000000000 > toInt32(-1), 104.000000000 >= toInt32(-1) , toInt64(-1) = 104.000000000, toInt64(-1) != 104.000000000, toInt64(-1) < 104.000000000, toInt64(-1) <= 104.000000000, toInt64(-1) > 104.000000000, toInt64(-1) >= 104.000000000, 104.000000000 = toInt64(-1), 104.000000000 != 
toInt64(-1), 104.000000000 < toInt64(-1), 104.000000000 <= toInt64(-1), 104.000000000 > toInt64(-1), 104.000000000 >= toInt64(-1) ; +SELECT '-1', '-4503599627370496.000000000', -1 = -4503599627370496.000000000, -1 != -4503599627370496.000000000, -1 < -4503599627370496.000000000, -1 <= -4503599627370496.000000000, -1 > -4503599627370496.000000000, -1 >= -4503599627370496.000000000, -4503599627370496.000000000 = -1, -4503599627370496.000000000 != -1, -4503599627370496.000000000 < -1, -4503599627370496.000000000 <= -1, -4503599627370496.000000000 > -1, -4503599627370496.000000000 >= -1 , toInt8(-1) = -4503599627370496.000000000, toInt8(-1) != -4503599627370496.000000000, toInt8(-1) < -4503599627370496.000000000, toInt8(-1) <= -4503599627370496.000000000, toInt8(-1) > -4503599627370496.000000000, toInt8(-1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt8(-1), -4503599627370496.000000000 != toInt8(-1), -4503599627370496.000000000 < toInt8(-1), -4503599627370496.000000000 <= toInt8(-1), -4503599627370496.000000000 > toInt8(-1), -4503599627370496.000000000 >= toInt8(-1) , toInt16(-1) = -4503599627370496.000000000, toInt16(-1) != -4503599627370496.000000000, toInt16(-1) < -4503599627370496.000000000, toInt16(-1) <= -4503599627370496.000000000, toInt16(-1) > -4503599627370496.000000000, toInt16(-1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt16(-1), -4503599627370496.000000000 != toInt16(-1), -4503599627370496.000000000 < toInt16(-1), -4503599627370496.000000000 <= toInt16(-1), -4503599627370496.000000000 > toInt16(-1), -4503599627370496.000000000 >= toInt16(-1) , toInt32(-1) = -4503599627370496.000000000, toInt32(-1) != -4503599627370496.000000000, toInt32(-1) < -4503599627370496.000000000, toInt32(-1) <= -4503599627370496.000000000, toInt32(-1) > -4503599627370496.000000000, toInt32(-1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt32(-1), -4503599627370496.000000000 != toInt32(-1), -4503599627370496.000000000 < toInt32(-1), -4503599627370496.000000000 <= toInt32(-1), -4503599627370496.000000000 > toInt32(-1), -4503599627370496.000000000 >= toInt32(-1) , toInt64(-1) = -4503599627370496.000000000, toInt64(-1) != -4503599627370496.000000000, toInt64(-1) < -4503599627370496.000000000, toInt64(-1) <= -4503599627370496.000000000, toInt64(-1) > -4503599627370496.000000000, toInt64(-1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt64(-1), -4503599627370496.000000000 != toInt64(-1), -4503599627370496.000000000 < toInt64(-1), -4503599627370496.000000000 <= toInt64(-1), -4503599627370496.000000000 > toInt64(-1), -4503599627370496.000000000 >= toInt64(-1) ; +SELECT '-1', '-0.500000000', -1 = -0.500000000, -1 != -0.500000000, -1 < -0.500000000, -1 <= -0.500000000, -1 > -0.500000000, -1 >= -0.500000000, -0.500000000 = -1, -0.500000000 != -1, -0.500000000 < -1, -0.500000000 <= -1, -0.500000000 > -1, -0.500000000 >= -1 , toInt8(-1) = -0.500000000, toInt8(-1) != -0.500000000, toInt8(-1) < -0.500000000, toInt8(-1) <= -0.500000000, toInt8(-1) > -0.500000000, toInt8(-1) >= -0.500000000, -0.500000000 = toInt8(-1), -0.500000000 != toInt8(-1), -0.500000000 < toInt8(-1), -0.500000000 <= toInt8(-1), -0.500000000 > toInt8(-1), -0.500000000 >= toInt8(-1) , toInt16(-1) = -0.500000000, toInt16(-1) != -0.500000000, toInt16(-1) < -0.500000000, toInt16(-1) <= -0.500000000, toInt16(-1) > -0.500000000, toInt16(-1) >= -0.500000000, -0.500000000 = toInt16(-1), -0.500000000 != toInt16(-1), -0.500000000 < toInt16(-1), -0.500000000 <= toInt16(-1), 
-0.500000000 > toInt16(-1), -0.500000000 >= toInt16(-1) , toInt32(-1) = -0.500000000, toInt32(-1) != -0.500000000, toInt32(-1) < -0.500000000, toInt32(-1) <= -0.500000000, toInt32(-1) > -0.500000000, toInt32(-1) >= -0.500000000, -0.500000000 = toInt32(-1), -0.500000000 != toInt32(-1), -0.500000000 < toInt32(-1), -0.500000000 <= toInt32(-1), -0.500000000 > toInt32(-1), -0.500000000 >= toInt32(-1) , toInt64(-1) = -0.500000000, toInt64(-1) != -0.500000000, toInt64(-1) < -0.500000000, toInt64(-1) <= -0.500000000, toInt64(-1) > -0.500000000, toInt64(-1) >= -0.500000000, -0.500000000 = toInt64(-1), -0.500000000 != toInt64(-1), -0.500000000 < toInt64(-1), -0.500000000 <= toInt64(-1), -0.500000000 > toInt64(-1), -0.500000000 >= toInt64(-1) ; +SELECT '-1', '0.500000000', -1 = 0.500000000, -1 != 0.500000000, -1 < 0.500000000, -1 <= 0.500000000, -1 > 0.500000000, -1 >= 0.500000000, 0.500000000 = -1, 0.500000000 != -1, 0.500000000 < -1, 0.500000000 <= -1, 0.500000000 > -1, 0.500000000 >= -1 , toInt8(-1) = 0.500000000, toInt8(-1) != 0.500000000, toInt8(-1) < 0.500000000, toInt8(-1) <= 0.500000000, toInt8(-1) > 0.500000000, toInt8(-1) >= 0.500000000, 0.500000000 = toInt8(-1), 0.500000000 != toInt8(-1), 0.500000000 < toInt8(-1), 0.500000000 <= toInt8(-1), 0.500000000 > toInt8(-1), 0.500000000 >= toInt8(-1) , toInt16(-1) = 0.500000000, toInt16(-1) != 0.500000000, toInt16(-1) < 0.500000000, toInt16(-1) <= 0.500000000, toInt16(-1) > 0.500000000, toInt16(-1) >= 0.500000000, 0.500000000 = toInt16(-1), 0.500000000 != toInt16(-1), 0.500000000 < toInt16(-1), 0.500000000 <= toInt16(-1), 0.500000000 > toInt16(-1), 0.500000000 >= toInt16(-1) , toInt32(-1) = 0.500000000, toInt32(-1) != 0.500000000, toInt32(-1) < 0.500000000, toInt32(-1) <= 0.500000000, toInt32(-1) > 0.500000000, toInt32(-1) >= 0.500000000, 0.500000000 = toInt32(-1), 0.500000000 != toInt32(-1), 0.500000000 < toInt32(-1), 0.500000000 <= toInt32(-1), 0.500000000 > toInt32(-1), 0.500000000 >= toInt32(-1) , toInt64(-1) = 0.500000000, toInt64(-1) != 0.500000000, toInt64(-1) < 0.500000000, toInt64(-1) <= 0.500000000, toInt64(-1) > 0.500000000, toInt64(-1) >= 0.500000000, 0.500000000 = toInt64(-1), 0.500000000 != toInt64(-1), 0.500000000 < toInt64(-1), 0.500000000 <= toInt64(-1), 0.500000000 > toInt64(-1), 0.500000000 >= toInt64(-1) ; +SELECT '-1', '-1.500000000', -1 = -1.500000000, -1 != -1.500000000, -1 < -1.500000000, -1 <= -1.500000000, -1 > -1.500000000, -1 >= -1.500000000, -1.500000000 = -1, -1.500000000 != -1, -1.500000000 < -1, -1.500000000 <= -1, -1.500000000 > -1, -1.500000000 >= -1 , toInt8(-1) = -1.500000000, toInt8(-1) != -1.500000000, toInt8(-1) < -1.500000000, toInt8(-1) <= -1.500000000, toInt8(-1) > -1.500000000, toInt8(-1) >= -1.500000000, -1.500000000 = toInt8(-1), -1.500000000 != toInt8(-1), -1.500000000 < toInt8(-1), -1.500000000 <= toInt8(-1), -1.500000000 > toInt8(-1), -1.500000000 >= toInt8(-1) , toInt16(-1) = -1.500000000, toInt16(-1) != -1.500000000, toInt16(-1) < -1.500000000, toInt16(-1) <= -1.500000000, toInt16(-1) > -1.500000000, toInt16(-1) >= -1.500000000, -1.500000000 = toInt16(-1), -1.500000000 != toInt16(-1), -1.500000000 < toInt16(-1), -1.500000000 <= toInt16(-1), -1.500000000 > toInt16(-1), -1.500000000 >= toInt16(-1) , toInt32(-1) = -1.500000000, toInt32(-1) != -1.500000000, toInt32(-1) < -1.500000000, toInt32(-1) <= -1.500000000, toInt32(-1) > -1.500000000, toInt32(-1) >= -1.500000000, -1.500000000 = toInt32(-1), -1.500000000 != toInt32(-1), -1.500000000 < toInt32(-1), -1.500000000 <= toInt32(-1), -1.500000000 > 
toInt32(-1), -1.500000000 >= toInt32(-1) , toInt64(-1) = -1.500000000, toInt64(-1) != -1.500000000, toInt64(-1) < -1.500000000, toInt64(-1) <= -1.500000000, toInt64(-1) > -1.500000000, toInt64(-1) >= -1.500000000, -1.500000000 = toInt64(-1), -1.500000000 != toInt64(-1), -1.500000000 < toInt64(-1), -1.500000000 <= toInt64(-1), -1.500000000 > toInt64(-1), -1.500000000 >= toInt64(-1) ; +SELECT '-1', '1.500000000', -1 = 1.500000000, -1 != 1.500000000, -1 < 1.500000000, -1 <= 1.500000000, -1 > 1.500000000, -1 >= 1.500000000, 1.500000000 = -1, 1.500000000 != -1, 1.500000000 < -1, 1.500000000 <= -1, 1.500000000 > -1, 1.500000000 >= -1 , toInt8(-1) = 1.500000000, toInt8(-1) != 1.500000000, toInt8(-1) < 1.500000000, toInt8(-1) <= 1.500000000, toInt8(-1) > 1.500000000, toInt8(-1) >= 1.500000000, 1.500000000 = toInt8(-1), 1.500000000 != toInt8(-1), 1.500000000 < toInt8(-1), 1.500000000 <= toInt8(-1), 1.500000000 > toInt8(-1), 1.500000000 >= toInt8(-1) , toInt16(-1) = 1.500000000, toInt16(-1) != 1.500000000, toInt16(-1) < 1.500000000, toInt16(-1) <= 1.500000000, toInt16(-1) > 1.500000000, toInt16(-1) >= 1.500000000, 1.500000000 = toInt16(-1), 1.500000000 != toInt16(-1), 1.500000000 < toInt16(-1), 1.500000000 <= toInt16(-1), 1.500000000 > toInt16(-1), 1.500000000 >= toInt16(-1) , toInt32(-1) = 1.500000000, toInt32(-1) != 1.500000000, toInt32(-1) < 1.500000000, toInt32(-1) <= 1.500000000, toInt32(-1) > 1.500000000, toInt32(-1) >= 1.500000000, 1.500000000 = toInt32(-1), 1.500000000 != toInt32(-1), 1.500000000 < toInt32(-1), 1.500000000 <= toInt32(-1), 1.500000000 > toInt32(-1), 1.500000000 >= toInt32(-1) , toInt64(-1) = 1.500000000, toInt64(-1) != 1.500000000, toInt64(-1) < 1.500000000, toInt64(-1) <= 1.500000000, toInt64(-1) > 1.500000000, toInt64(-1) >= 1.500000000, 1.500000000 = toInt64(-1), 1.500000000 != toInt64(-1), 1.500000000 < toInt64(-1), 1.500000000 <= toInt64(-1), 1.500000000 > toInt64(-1), 1.500000000 >= toInt64(-1) ; +SELECT '-1', '9007199254740992.000000000', -1 = 9007199254740992.000000000, -1 != 9007199254740992.000000000, -1 < 9007199254740992.000000000, -1 <= 9007199254740992.000000000, -1 > 9007199254740992.000000000, -1 >= 9007199254740992.000000000, 9007199254740992.000000000 = -1, 9007199254740992.000000000 != -1, 9007199254740992.000000000 < -1, 9007199254740992.000000000 <= -1, 9007199254740992.000000000 > -1, 9007199254740992.000000000 >= -1 , toInt8(-1) = 9007199254740992.000000000, toInt8(-1) != 9007199254740992.000000000, toInt8(-1) < 9007199254740992.000000000, toInt8(-1) <= 9007199254740992.000000000, toInt8(-1) > 9007199254740992.000000000, toInt8(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(-1), 9007199254740992.000000000 != toInt8(-1), 9007199254740992.000000000 < toInt8(-1), 9007199254740992.000000000 <= toInt8(-1), 9007199254740992.000000000 > toInt8(-1), 9007199254740992.000000000 >= toInt8(-1) , toInt16(-1) = 9007199254740992.000000000, toInt16(-1) != 9007199254740992.000000000, toInt16(-1) < 9007199254740992.000000000, toInt16(-1) <= 9007199254740992.000000000, toInt16(-1) > 9007199254740992.000000000, toInt16(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(-1), 9007199254740992.000000000 != toInt16(-1), 9007199254740992.000000000 < toInt16(-1), 9007199254740992.000000000 <= toInt16(-1), 9007199254740992.000000000 > toInt16(-1), 9007199254740992.000000000 >= toInt16(-1) , toInt32(-1) = 9007199254740992.000000000, toInt32(-1) != 9007199254740992.000000000, toInt32(-1) < 9007199254740992.000000000, toInt32(-1) <= 
9007199254740992.000000000, toInt32(-1) > 9007199254740992.000000000, toInt32(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(-1), 9007199254740992.000000000 != toInt32(-1), 9007199254740992.000000000 < toInt32(-1), 9007199254740992.000000000 <= toInt32(-1), 9007199254740992.000000000 > toInt32(-1), 9007199254740992.000000000 >= toInt32(-1) , toInt64(-1) = 9007199254740992.000000000, toInt64(-1) != 9007199254740992.000000000, toInt64(-1) < 9007199254740992.000000000, toInt64(-1) <= 9007199254740992.000000000, toInt64(-1) > 9007199254740992.000000000, toInt64(-1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(-1), 9007199254740992.000000000 != toInt64(-1), 9007199254740992.000000000 < toInt64(-1), 9007199254740992.000000000 <= toInt64(-1), 9007199254740992.000000000 > toInt64(-1), 9007199254740992.000000000 >= toInt64(-1) ; +SELECT '-1', '2251799813685247.500000000', -1 = 2251799813685247.500000000, -1 != 2251799813685247.500000000, -1 < 2251799813685247.500000000, -1 <= 2251799813685247.500000000, -1 > 2251799813685247.500000000, -1 >= 2251799813685247.500000000, 2251799813685247.500000000 = -1, 2251799813685247.500000000 != -1, 2251799813685247.500000000 < -1, 2251799813685247.500000000 <= -1, 2251799813685247.500000000 > -1, 2251799813685247.500000000 >= -1 , toInt8(-1) = 2251799813685247.500000000, toInt8(-1) != 2251799813685247.500000000, toInt8(-1) < 2251799813685247.500000000, toInt8(-1) <= 2251799813685247.500000000, toInt8(-1) > 2251799813685247.500000000, toInt8(-1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt8(-1), 2251799813685247.500000000 != toInt8(-1), 2251799813685247.500000000 < toInt8(-1), 2251799813685247.500000000 <= toInt8(-1), 2251799813685247.500000000 > toInt8(-1), 2251799813685247.500000000 >= toInt8(-1) , toInt16(-1) = 2251799813685247.500000000, toInt16(-1) != 2251799813685247.500000000, toInt16(-1) < 2251799813685247.500000000, toInt16(-1) <= 2251799813685247.500000000, toInt16(-1) > 2251799813685247.500000000, toInt16(-1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt16(-1), 2251799813685247.500000000 != toInt16(-1), 2251799813685247.500000000 < toInt16(-1), 2251799813685247.500000000 <= toInt16(-1), 2251799813685247.500000000 > toInt16(-1), 2251799813685247.500000000 >= toInt16(-1) , toInt32(-1) = 2251799813685247.500000000, toInt32(-1) != 2251799813685247.500000000, toInt32(-1) < 2251799813685247.500000000, toInt32(-1) <= 2251799813685247.500000000, toInt32(-1) > 2251799813685247.500000000, toInt32(-1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt32(-1), 2251799813685247.500000000 != toInt32(-1), 2251799813685247.500000000 < toInt32(-1), 2251799813685247.500000000 <= toInt32(-1), 2251799813685247.500000000 > toInt32(-1), 2251799813685247.500000000 >= toInt32(-1) , toInt64(-1) = 2251799813685247.500000000, toInt64(-1) != 2251799813685247.500000000, toInt64(-1) < 2251799813685247.500000000, toInt64(-1) <= 2251799813685247.500000000, toInt64(-1) > 2251799813685247.500000000, toInt64(-1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt64(-1), 2251799813685247.500000000 != toInt64(-1), 2251799813685247.500000000 < toInt64(-1), 2251799813685247.500000000 <= toInt64(-1), 2251799813685247.500000000 > toInt64(-1), 2251799813685247.500000000 >= toInt64(-1) ; +SELECT '-1', '2251799813685248.500000000', -1 = 2251799813685248.500000000, -1 != 2251799813685248.500000000, -1 < 2251799813685248.500000000, -1 <= 2251799813685248.500000000, -1 > 
2251799813685248.500000000, -1 >= 2251799813685248.500000000, 2251799813685248.500000000 = -1, 2251799813685248.500000000 != -1, 2251799813685248.500000000 < -1, 2251799813685248.500000000 <= -1, 2251799813685248.500000000 > -1, 2251799813685248.500000000 >= -1 , toInt8(-1) = 2251799813685248.500000000, toInt8(-1) != 2251799813685248.500000000, toInt8(-1) < 2251799813685248.500000000, toInt8(-1) <= 2251799813685248.500000000, toInt8(-1) > 2251799813685248.500000000, toInt8(-1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt8(-1), 2251799813685248.500000000 != toInt8(-1), 2251799813685248.500000000 < toInt8(-1), 2251799813685248.500000000 <= toInt8(-1), 2251799813685248.500000000 > toInt8(-1), 2251799813685248.500000000 >= toInt8(-1) , toInt16(-1) = 2251799813685248.500000000, toInt16(-1) != 2251799813685248.500000000, toInt16(-1) < 2251799813685248.500000000, toInt16(-1) <= 2251799813685248.500000000, toInt16(-1) > 2251799813685248.500000000, toInt16(-1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt16(-1), 2251799813685248.500000000 != toInt16(-1), 2251799813685248.500000000 < toInt16(-1), 2251799813685248.500000000 <= toInt16(-1), 2251799813685248.500000000 > toInt16(-1), 2251799813685248.500000000 >= toInt16(-1) , toInt32(-1) = 2251799813685248.500000000, toInt32(-1) != 2251799813685248.500000000, toInt32(-1) < 2251799813685248.500000000, toInt32(-1) <= 2251799813685248.500000000, toInt32(-1) > 2251799813685248.500000000, toInt32(-1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt32(-1), 2251799813685248.500000000 != toInt32(-1), 2251799813685248.500000000 < toInt32(-1), 2251799813685248.500000000 <= toInt32(-1), 2251799813685248.500000000 > toInt32(-1), 2251799813685248.500000000 >= toInt32(-1) , toInt64(-1) = 2251799813685248.500000000, toInt64(-1) != 2251799813685248.500000000, toInt64(-1) < 2251799813685248.500000000, toInt64(-1) <= 2251799813685248.500000000, toInt64(-1) > 2251799813685248.500000000, toInt64(-1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt64(-1), 2251799813685248.500000000 != toInt64(-1), 2251799813685248.500000000 < toInt64(-1), 2251799813685248.500000000 <= toInt64(-1), 2251799813685248.500000000 > toInt64(-1), 2251799813685248.500000000 >= toInt64(-1) ; +SELECT '-1', '1152921504606846976.000000000', -1 = 1152921504606846976.000000000, -1 != 1152921504606846976.000000000, -1 < 1152921504606846976.000000000, -1 <= 1152921504606846976.000000000, -1 > 1152921504606846976.000000000, -1 >= 1152921504606846976.000000000, 1152921504606846976.000000000 = -1, 1152921504606846976.000000000 != -1, 1152921504606846976.000000000 < -1, 1152921504606846976.000000000 <= -1, 1152921504606846976.000000000 > -1, 1152921504606846976.000000000 >= -1 , toInt8(-1) = 1152921504606846976.000000000, toInt8(-1) != 1152921504606846976.000000000, toInt8(-1) < 1152921504606846976.000000000, toInt8(-1) <= 1152921504606846976.000000000, toInt8(-1) > 1152921504606846976.000000000, toInt8(-1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt8(-1), 1152921504606846976.000000000 != toInt8(-1), 1152921504606846976.000000000 < toInt8(-1), 1152921504606846976.000000000 <= toInt8(-1), 1152921504606846976.000000000 > toInt8(-1), 1152921504606846976.000000000 >= toInt8(-1) , toInt16(-1) = 1152921504606846976.000000000, toInt16(-1) != 1152921504606846976.000000000, toInt16(-1) < 1152921504606846976.000000000, toInt16(-1) <= 1152921504606846976.000000000, toInt16(-1) > 1152921504606846976.000000000, 
toInt16(-1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt16(-1), 1152921504606846976.000000000 != toInt16(-1), 1152921504606846976.000000000 < toInt16(-1), 1152921504606846976.000000000 <= toInt16(-1), 1152921504606846976.000000000 > toInt16(-1), 1152921504606846976.000000000 >= toInt16(-1) , toInt32(-1) = 1152921504606846976.000000000, toInt32(-1) != 1152921504606846976.000000000, toInt32(-1) < 1152921504606846976.000000000, toInt32(-1) <= 1152921504606846976.000000000, toInt32(-1) > 1152921504606846976.000000000, toInt32(-1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt32(-1), 1152921504606846976.000000000 != toInt32(-1), 1152921504606846976.000000000 < toInt32(-1), 1152921504606846976.000000000 <= toInt32(-1), 1152921504606846976.000000000 > toInt32(-1), 1152921504606846976.000000000 >= toInt32(-1) , toInt64(-1) = 1152921504606846976.000000000, toInt64(-1) != 1152921504606846976.000000000, toInt64(-1) < 1152921504606846976.000000000, toInt64(-1) <= 1152921504606846976.000000000, toInt64(-1) > 1152921504606846976.000000000, toInt64(-1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt64(-1), 1152921504606846976.000000000 != toInt64(-1), 1152921504606846976.000000000 < toInt64(-1), 1152921504606846976.000000000 <= toInt64(-1), 1152921504606846976.000000000 > toInt64(-1), 1152921504606846976.000000000 >= toInt64(-1) ; +SELECT '-1', '-1152921504606846976.000000000', -1 = -1152921504606846976.000000000, -1 != -1152921504606846976.000000000, -1 < -1152921504606846976.000000000, -1 <= -1152921504606846976.000000000, -1 > -1152921504606846976.000000000, -1 >= -1152921504606846976.000000000, -1152921504606846976.000000000 = -1, -1152921504606846976.000000000 != -1, -1152921504606846976.000000000 < -1, -1152921504606846976.000000000 <= -1, -1152921504606846976.000000000 > -1, -1152921504606846976.000000000 >= -1 , toInt8(-1) = -1152921504606846976.000000000, toInt8(-1) != -1152921504606846976.000000000, toInt8(-1) < -1152921504606846976.000000000, toInt8(-1) <= -1152921504606846976.000000000, toInt8(-1) > -1152921504606846976.000000000, toInt8(-1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt8(-1), -1152921504606846976.000000000 != toInt8(-1), -1152921504606846976.000000000 < toInt8(-1), -1152921504606846976.000000000 <= toInt8(-1), -1152921504606846976.000000000 > toInt8(-1), -1152921504606846976.000000000 >= toInt8(-1) , toInt16(-1) = -1152921504606846976.000000000, toInt16(-1) != -1152921504606846976.000000000, toInt16(-1) < -1152921504606846976.000000000, toInt16(-1) <= -1152921504606846976.000000000, toInt16(-1) > -1152921504606846976.000000000, toInt16(-1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt16(-1), -1152921504606846976.000000000 != toInt16(-1), -1152921504606846976.000000000 < toInt16(-1), -1152921504606846976.000000000 <= toInt16(-1), -1152921504606846976.000000000 > toInt16(-1), -1152921504606846976.000000000 >= toInt16(-1) , toInt32(-1) = -1152921504606846976.000000000, toInt32(-1) != -1152921504606846976.000000000, toInt32(-1) < -1152921504606846976.000000000, toInt32(-1) <= -1152921504606846976.000000000, toInt32(-1) > -1152921504606846976.000000000, toInt32(-1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt32(-1), -1152921504606846976.000000000 != toInt32(-1), -1152921504606846976.000000000 < toInt32(-1), -1152921504606846976.000000000 <= toInt32(-1), -1152921504606846976.000000000 > toInt32(-1), 
-1152921504606846976.000000000 >= toInt32(-1) , toInt64(-1) = -1152921504606846976.000000000, toInt64(-1) != -1152921504606846976.000000000, toInt64(-1) < -1152921504606846976.000000000, toInt64(-1) <= -1152921504606846976.000000000, toInt64(-1) > -1152921504606846976.000000000, toInt64(-1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt64(-1), -1152921504606846976.000000000 != toInt64(-1), -1152921504606846976.000000000 < toInt64(-1), -1152921504606846976.000000000 <= toInt64(-1), -1152921504606846976.000000000 > toInt64(-1), -1152921504606846976.000000000 >= toInt64(-1) ; +SELECT '-1', '-9223372036854786048.000000000', -1 = -9223372036854786048.000000000, -1 != -9223372036854786048.000000000, -1 < -9223372036854786048.000000000, -1 <= -9223372036854786048.000000000, -1 > -9223372036854786048.000000000, -1 >= -9223372036854786048.000000000, -9223372036854786048.000000000 = -1, -9223372036854786048.000000000 != -1, -9223372036854786048.000000000 < -1, -9223372036854786048.000000000 <= -1, -9223372036854786048.000000000 > -1, -9223372036854786048.000000000 >= -1 , toInt8(-1) = -9223372036854786048.000000000, toInt8(-1) != -9223372036854786048.000000000, toInt8(-1) < -9223372036854786048.000000000, toInt8(-1) <= -9223372036854786048.000000000, toInt8(-1) > -9223372036854786048.000000000, toInt8(-1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt8(-1), -9223372036854786048.000000000 != toInt8(-1), -9223372036854786048.000000000 < toInt8(-1), -9223372036854786048.000000000 <= toInt8(-1), -9223372036854786048.000000000 > toInt8(-1), -9223372036854786048.000000000 >= toInt8(-1) , toInt16(-1) = -9223372036854786048.000000000, toInt16(-1) != -9223372036854786048.000000000, toInt16(-1) < -9223372036854786048.000000000, toInt16(-1) <= -9223372036854786048.000000000, toInt16(-1) > -9223372036854786048.000000000, toInt16(-1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt16(-1), -9223372036854786048.000000000 != toInt16(-1), -9223372036854786048.000000000 < toInt16(-1), -9223372036854786048.000000000 <= toInt16(-1), -9223372036854786048.000000000 > toInt16(-1), -9223372036854786048.000000000 >= toInt16(-1) , toInt32(-1) = -9223372036854786048.000000000, toInt32(-1) != -9223372036854786048.000000000, toInt32(-1) < -9223372036854786048.000000000, toInt32(-1) <= -9223372036854786048.000000000, toInt32(-1) > -9223372036854786048.000000000, toInt32(-1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt32(-1), -9223372036854786048.000000000 != toInt32(-1), -9223372036854786048.000000000 < toInt32(-1), -9223372036854786048.000000000 <= toInt32(-1), -9223372036854786048.000000000 > toInt32(-1), -9223372036854786048.000000000 >= toInt32(-1) , toInt64(-1) = -9223372036854786048.000000000, toInt64(-1) != -9223372036854786048.000000000, toInt64(-1) < -9223372036854786048.000000000, toInt64(-1) <= -9223372036854786048.000000000, toInt64(-1) > -9223372036854786048.000000000, toInt64(-1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt64(-1), -9223372036854786048.000000000 != toInt64(-1), -9223372036854786048.000000000 < toInt64(-1), -9223372036854786048.000000000 <= toInt64(-1), -9223372036854786048.000000000 > toInt64(-1), -9223372036854786048.000000000 >= toInt64(-1) ; +SELECT '-1', '9223372036854786048.000000000', -1 = 9223372036854786048.000000000, -1 != 9223372036854786048.000000000, -1 < 9223372036854786048.000000000, -1 <= 9223372036854786048.000000000, -1 > 
9223372036854786048.000000000, -1 >= 9223372036854786048.000000000, 9223372036854786048.000000000 = -1, 9223372036854786048.000000000 != -1, 9223372036854786048.000000000 < -1, 9223372036854786048.000000000 <= -1, 9223372036854786048.000000000 > -1, 9223372036854786048.000000000 >= -1 , toInt8(-1) = 9223372036854786048.000000000, toInt8(-1) != 9223372036854786048.000000000, toInt8(-1) < 9223372036854786048.000000000, toInt8(-1) <= 9223372036854786048.000000000, toInt8(-1) > 9223372036854786048.000000000, toInt8(-1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt8(-1), 9223372036854786048.000000000 != toInt8(-1), 9223372036854786048.000000000 < toInt8(-1), 9223372036854786048.000000000 <= toInt8(-1), 9223372036854786048.000000000 > toInt8(-1), 9223372036854786048.000000000 >= toInt8(-1) , toInt16(-1) = 9223372036854786048.000000000, toInt16(-1) != 9223372036854786048.000000000, toInt16(-1) < 9223372036854786048.000000000, toInt16(-1) <= 9223372036854786048.000000000, toInt16(-1) > 9223372036854786048.000000000, toInt16(-1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt16(-1), 9223372036854786048.000000000 != toInt16(-1), 9223372036854786048.000000000 < toInt16(-1), 9223372036854786048.000000000 <= toInt16(-1), 9223372036854786048.000000000 > toInt16(-1), 9223372036854786048.000000000 >= toInt16(-1) , toInt32(-1) = 9223372036854786048.000000000, toInt32(-1) != 9223372036854786048.000000000, toInt32(-1) < 9223372036854786048.000000000, toInt32(-1) <= 9223372036854786048.000000000, toInt32(-1) > 9223372036854786048.000000000, toInt32(-1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt32(-1), 9223372036854786048.000000000 != toInt32(-1), 9223372036854786048.000000000 < toInt32(-1), 9223372036854786048.000000000 <= toInt32(-1), 9223372036854786048.000000000 > toInt32(-1), 9223372036854786048.000000000 >= toInt32(-1) , toInt64(-1) = 9223372036854786048.000000000, toInt64(-1) != 9223372036854786048.000000000, toInt64(-1) < 9223372036854786048.000000000, toInt64(-1) <= 9223372036854786048.000000000, toInt64(-1) > 9223372036854786048.000000000, toInt64(-1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt64(-1), 9223372036854786048.000000000 != toInt64(-1), 9223372036854786048.000000000 < toInt64(-1), 9223372036854786048.000000000 <= toInt64(-1), 9223372036854786048.000000000 > toInt64(-1), 9223372036854786048.000000000 >= toInt64(-1) ; +SELECT '1', '0.000000000', 1 = 0.000000000, 1 != 0.000000000, 1 < 0.000000000, 1 <= 0.000000000, 1 > 0.000000000, 1 >= 0.000000000, 0.000000000 = 1, 0.000000000 != 1, 0.000000000 < 1, 0.000000000 <= 1, 0.000000000 > 1, 0.000000000 >= 1 , toUInt8(1) = 0.000000000, toUInt8(1) != 0.000000000, toUInt8(1) < 0.000000000, toUInt8(1) <= 0.000000000, toUInt8(1) > 0.000000000, toUInt8(1) >= 0.000000000, 0.000000000 = toUInt8(1), 0.000000000 != toUInt8(1), 0.000000000 < toUInt8(1), 0.000000000 <= toUInt8(1), 0.000000000 > toUInt8(1), 0.000000000 >= toUInt8(1) , toInt8(1) = 0.000000000, toInt8(1) != 0.000000000, toInt8(1) < 0.000000000, toInt8(1) <= 0.000000000, toInt8(1) > 0.000000000, toInt8(1) >= 0.000000000, 0.000000000 = toInt8(1), 0.000000000 != toInt8(1), 0.000000000 < toInt8(1), 0.000000000 <= toInt8(1), 0.000000000 > toInt8(1), 0.000000000 >= toInt8(1) , toUInt16(1) = 0.000000000, toUInt16(1) != 0.000000000, toUInt16(1) < 0.000000000, toUInt16(1) <= 0.000000000, toUInt16(1) > 0.000000000, toUInt16(1) >= 0.000000000, 0.000000000 = toUInt16(1), 0.000000000 != toUInt16(1), 
0.000000000 < toUInt16(1), 0.000000000 <= toUInt16(1), 0.000000000 > toUInt16(1), 0.000000000 >= toUInt16(1) , toInt16(1) = 0.000000000, toInt16(1) != 0.000000000, toInt16(1) < 0.000000000, toInt16(1) <= 0.000000000, toInt16(1) > 0.000000000, toInt16(1) >= 0.000000000, 0.000000000 = toInt16(1), 0.000000000 != toInt16(1), 0.000000000 < toInt16(1), 0.000000000 <= toInt16(1), 0.000000000 > toInt16(1), 0.000000000 >= toInt16(1) , toUInt32(1) = 0.000000000, toUInt32(1) != 0.000000000, toUInt32(1) < 0.000000000, toUInt32(1) <= 0.000000000, toUInt32(1) > 0.000000000, toUInt32(1) >= 0.000000000, 0.000000000 = toUInt32(1), 0.000000000 != toUInt32(1), 0.000000000 < toUInt32(1), 0.000000000 <= toUInt32(1), 0.000000000 > toUInt32(1), 0.000000000 >= toUInt32(1) , toInt32(1) = 0.000000000, toInt32(1) != 0.000000000, toInt32(1) < 0.000000000, toInt32(1) <= 0.000000000, toInt32(1) > 0.000000000, toInt32(1) >= 0.000000000, 0.000000000 = toInt32(1), 0.000000000 != toInt32(1), 0.000000000 < toInt32(1), 0.000000000 <= toInt32(1), 0.000000000 > toInt32(1), 0.000000000 >= toInt32(1) , toUInt64(1) = 0.000000000, toUInt64(1) != 0.000000000, toUInt64(1) < 0.000000000, toUInt64(1) <= 0.000000000, toUInt64(1) > 0.000000000, toUInt64(1) >= 0.000000000, 0.000000000 = toUInt64(1), 0.000000000 != toUInt64(1), 0.000000000 < toUInt64(1), 0.000000000 <= toUInt64(1), 0.000000000 > toUInt64(1), 0.000000000 >= toUInt64(1) , toInt64(1) = 0.000000000, toInt64(1) != 0.000000000, toInt64(1) < 0.000000000, toInt64(1) <= 0.000000000, toInt64(1) > 0.000000000, toInt64(1) >= 0.000000000, 0.000000000 = toInt64(1), 0.000000000 != toInt64(1), 0.000000000 < toInt64(1), 0.000000000 <= toInt64(1), 0.000000000 > toInt64(1), 0.000000000 >= toInt64(1) ; +SELECT '1', '-1.000000000', 1 = -1.000000000, 1 != -1.000000000, 1 < -1.000000000, 1 <= -1.000000000, 1 > -1.000000000, 1 >= -1.000000000, -1.000000000 = 1, -1.000000000 != 1, -1.000000000 < 1, -1.000000000 <= 1, -1.000000000 > 1, -1.000000000 >= 1 , toUInt8(1) = -1.000000000, toUInt8(1) != -1.000000000, toUInt8(1) < -1.000000000, toUInt8(1) <= -1.000000000, toUInt8(1) > -1.000000000, toUInt8(1) >= -1.000000000, -1.000000000 = toUInt8(1), -1.000000000 != toUInt8(1), -1.000000000 < toUInt8(1), -1.000000000 <= toUInt8(1), -1.000000000 > toUInt8(1), -1.000000000 >= toUInt8(1) , toInt8(1) = -1.000000000, toInt8(1) != -1.000000000, toInt8(1) < -1.000000000, toInt8(1) <= -1.000000000, toInt8(1) > -1.000000000, toInt8(1) >= -1.000000000, -1.000000000 = toInt8(1), -1.000000000 != toInt8(1), -1.000000000 < toInt8(1), -1.000000000 <= toInt8(1), -1.000000000 > toInt8(1), -1.000000000 >= toInt8(1) , toUInt16(1) = -1.000000000, toUInt16(1) != -1.000000000, toUInt16(1) < -1.000000000, toUInt16(1) <= -1.000000000, toUInt16(1) > -1.000000000, toUInt16(1) >= -1.000000000, -1.000000000 = toUInt16(1), -1.000000000 != toUInt16(1), -1.000000000 < toUInt16(1), -1.000000000 <= toUInt16(1), -1.000000000 > toUInt16(1), -1.000000000 >= toUInt16(1) , toInt16(1) = -1.000000000, toInt16(1) != -1.000000000, toInt16(1) < -1.000000000, toInt16(1) <= -1.000000000, toInt16(1) > -1.000000000, toInt16(1) >= -1.000000000, -1.000000000 = toInt16(1), -1.000000000 != toInt16(1), -1.000000000 < toInt16(1), -1.000000000 <= toInt16(1), -1.000000000 > toInt16(1), -1.000000000 >= toInt16(1) , toUInt32(1) = -1.000000000, toUInt32(1) != -1.000000000, toUInt32(1) < -1.000000000, toUInt32(1) <= -1.000000000, toUInt32(1) > -1.000000000, toUInt32(1) >= -1.000000000, -1.000000000 = toUInt32(1), -1.000000000 != toUInt32(1), -1.000000000 < 
toUInt32(1), -1.000000000 <= toUInt32(1), -1.000000000 > toUInt32(1), -1.000000000 >= toUInt32(1) , toInt32(1) = -1.000000000, toInt32(1) != -1.000000000, toInt32(1) < -1.000000000, toInt32(1) <= -1.000000000, toInt32(1) > -1.000000000, toInt32(1) >= -1.000000000, -1.000000000 = toInt32(1), -1.000000000 != toInt32(1), -1.000000000 < toInt32(1), -1.000000000 <= toInt32(1), -1.000000000 > toInt32(1), -1.000000000 >= toInt32(1) , toUInt64(1) = -1.000000000, toUInt64(1) != -1.000000000, toUInt64(1) < -1.000000000, toUInt64(1) <= -1.000000000, toUInt64(1) > -1.000000000, toUInt64(1) >= -1.000000000, -1.000000000 = toUInt64(1), -1.000000000 != toUInt64(1), -1.000000000 < toUInt64(1), -1.000000000 <= toUInt64(1), -1.000000000 > toUInt64(1), -1.000000000 >= toUInt64(1) , toInt64(1) = -1.000000000, toInt64(1) != -1.000000000, toInt64(1) < -1.000000000, toInt64(1) <= -1.000000000, toInt64(1) > -1.000000000, toInt64(1) >= -1.000000000, -1.000000000 = toInt64(1), -1.000000000 != toInt64(1), -1.000000000 < toInt64(1), -1.000000000 <= toInt64(1), -1.000000000 > toInt64(1), -1.000000000 >= toInt64(1) ; +SELECT '1', '1.000000000', 1 = 1.000000000, 1 != 1.000000000, 1 < 1.000000000, 1 <= 1.000000000, 1 > 1.000000000, 1 >= 1.000000000, 1.000000000 = 1, 1.000000000 != 1, 1.000000000 < 1, 1.000000000 <= 1, 1.000000000 > 1, 1.000000000 >= 1 , toUInt8(1) = 1.000000000, toUInt8(1) != 1.000000000, toUInt8(1) < 1.000000000, toUInt8(1) <= 1.000000000, toUInt8(1) > 1.000000000, toUInt8(1) >= 1.000000000, 1.000000000 = toUInt8(1), 1.000000000 != toUInt8(1), 1.000000000 < toUInt8(1), 1.000000000 <= toUInt8(1), 1.000000000 > toUInt8(1), 1.000000000 >= toUInt8(1) , toInt8(1) = 1.000000000, toInt8(1) != 1.000000000, toInt8(1) < 1.000000000, toInt8(1) <= 1.000000000, toInt8(1) > 1.000000000, toInt8(1) >= 1.000000000, 1.000000000 = toInt8(1), 1.000000000 != toInt8(1), 1.000000000 < toInt8(1), 1.000000000 <= toInt8(1), 1.000000000 > toInt8(1), 1.000000000 >= toInt8(1) , toUInt16(1) = 1.000000000, toUInt16(1) != 1.000000000, toUInt16(1) < 1.000000000, toUInt16(1) <= 1.000000000, toUInt16(1) > 1.000000000, toUInt16(1) >= 1.000000000, 1.000000000 = toUInt16(1), 1.000000000 != toUInt16(1), 1.000000000 < toUInt16(1), 1.000000000 <= toUInt16(1), 1.000000000 > toUInt16(1), 1.000000000 >= toUInt16(1) , toInt16(1) = 1.000000000, toInt16(1) != 1.000000000, toInt16(1) < 1.000000000, toInt16(1) <= 1.000000000, toInt16(1) > 1.000000000, toInt16(1) >= 1.000000000, 1.000000000 = toInt16(1), 1.000000000 != toInt16(1), 1.000000000 < toInt16(1), 1.000000000 <= toInt16(1), 1.000000000 > toInt16(1), 1.000000000 >= toInt16(1) , toUInt32(1) = 1.000000000, toUInt32(1) != 1.000000000, toUInt32(1) < 1.000000000, toUInt32(1) <= 1.000000000, toUInt32(1) > 1.000000000, toUInt32(1) >= 1.000000000, 1.000000000 = toUInt32(1), 1.000000000 != toUInt32(1), 1.000000000 < toUInt32(1), 1.000000000 <= toUInt32(1), 1.000000000 > toUInt32(1), 1.000000000 >= toUInt32(1) , toInt32(1) = 1.000000000, toInt32(1) != 1.000000000, toInt32(1) < 1.000000000, toInt32(1) <= 1.000000000, toInt32(1) > 1.000000000, toInt32(1) >= 1.000000000, 1.000000000 = toInt32(1), 1.000000000 != toInt32(1), 1.000000000 < toInt32(1), 1.000000000 <= toInt32(1), 1.000000000 > toInt32(1), 1.000000000 >= toInt32(1) , toUInt64(1) = 1.000000000, toUInt64(1) != 1.000000000, toUInt64(1) < 1.000000000, toUInt64(1) <= 1.000000000, toUInt64(1) > 1.000000000, toUInt64(1) >= 1.000000000, 1.000000000 = toUInt64(1), 1.000000000 != toUInt64(1), 1.000000000 < toUInt64(1), 1.000000000 <= toUInt64(1), 
1.000000000 > toUInt64(1), 1.000000000 >= toUInt64(1) , toInt64(1) = 1.000000000, toInt64(1) != 1.000000000, toInt64(1) < 1.000000000, toInt64(1) <= 1.000000000, toInt64(1) > 1.000000000, toInt64(1) >= 1.000000000, 1.000000000 = toInt64(1), 1.000000000 != toInt64(1), 1.000000000 < toInt64(1), 1.000000000 <= toInt64(1), 1.000000000 > toInt64(1), 1.000000000 >= toInt64(1) ; +SELECT '1', '18446744073709551616.000000000', 1 = 18446744073709551616.000000000, 1 != 18446744073709551616.000000000, 1 < 18446744073709551616.000000000, 1 <= 18446744073709551616.000000000, 1 > 18446744073709551616.000000000, 1 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = 1, 18446744073709551616.000000000 != 1, 18446744073709551616.000000000 < 1, 18446744073709551616.000000000 <= 1, 18446744073709551616.000000000 > 1, 18446744073709551616.000000000 >= 1 , toUInt8(1) = 18446744073709551616.000000000, toUInt8(1) != 18446744073709551616.000000000, toUInt8(1) < 18446744073709551616.000000000, toUInt8(1) <= 18446744073709551616.000000000, toUInt8(1) > 18446744073709551616.000000000, toUInt8(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt8(1), 18446744073709551616.000000000 != toUInt8(1), 18446744073709551616.000000000 < toUInt8(1), 18446744073709551616.000000000 <= toUInt8(1), 18446744073709551616.000000000 > toUInt8(1), 18446744073709551616.000000000 >= toUInt8(1) , toInt8(1) = 18446744073709551616.000000000, toInt8(1) != 18446744073709551616.000000000, toInt8(1) < 18446744073709551616.000000000, toInt8(1) <= 18446744073709551616.000000000, toInt8(1) > 18446744073709551616.000000000, toInt8(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt8(1), 18446744073709551616.000000000 != toInt8(1), 18446744073709551616.000000000 < toInt8(1), 18446744073709551616.000000000 <= toInt8(1), 18446744073709551616.000000000 > toInt8(1), 18446744073709551616.000000000 >= toInt8(1) , toUInt16(1) = 18446744073709551616.000000000, toUInt16(1) != 18446744073709551616.000000000, toUInt16(1) < 18446744073709551616.000000000, toUInt16(1) <= 18446744073709551616.000000000, toUInt16(1) > 18446744073709551616.000000000, toUInt16(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt16(1), 18446744073709551616.000000000 != toUInt16(1), 18446744073709551616.000000000 < toUInt16(1), 18446744073709551616.000000000 <= toUInt16(1), 18446744073709551616.000000000 > toUInt16(1), 18446744073709551616.000000000 >= toUInt16(1) , toInt16(1) = 18446744073709551616.000000000, toInt16(1) != 18446744073709551616.000000000, toInt16(1) < 18446744073709551616.000000000, toInt16(1) <= 18446744073709551616.000000000, toInt16(1) > 18446744073709551616.000000000, toInt16(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt16(1), 18446744073709551616.000000000 != toInt16(1), 18446744073709551616.000000000 < toInt16(1), 18446744073709551616.000000000 <= toInt16(1), 18446744073709551616.000000000 > toInt16(1), 18446744073709551616.000000000 >= toInt16(1) , toUInt32(1) = 18446744073709551616.000000000, toUInt32(1) != 18446744073709551616.000000000, toUInt32(1) < 18446744073709551616.000000000, toUInt32(1) <= 18446744073709551616.000000000, toUInt32(1) > 18446744073709551616.000000000, toUInt32(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt32(1), 18446744073709551616.000000000 != toUInt32(1), 18446744073709551616.000000000 < toUInt32(1), 18446744073709551616.000000000 <= toUInt32(1), 18446744073709551616.000000000 > 
toUInt32(1), 18446744073709551616.000000000 >= toUInt32(1) , toInt32(1) = 18446744073709551616.000000000, toInt32(1) != 18446744073709551616.000000000, toInt32(1) < 18446744073709551616.000000000, toInt32(1) <= 18446744073709551616.000000000, toInt32(1) > 18446744073709551616.000000000, toInt32(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt32(1), 18446744073709551616.000000000 != toInt32(1), 18446744073709551616.000000000 < toInt32(1), 18446744073709551616.000000000 <= toInt32(1), 18446744073709551616.000000000 > toInt32(1), 18446744073709551616.000000000 >= toInt32(1) , toUInt64(1) = 18446744073709551616.000000000, toUInt64(1) != 18446744073709551616.000000000, toUInt64(1) < 18446744073709551616.000000000, toUInt64(1) <= 18446744073709551616.000000000, toUInt64(1) > 18446744073709551616.000000000, toUInt64(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt64(1), 18446744073709551616.000000000 != toUInt64(1), 18446744073709551616.000000000 < toUInt64(1), 18446744073709551616.000000000 <= toUInt64(1), 18446744073709551616.000000000 > toUInt64(1), 18446744073709551616.000000000 >= toUInt64(1) , toInt64(1) = 18446744073709551616.000000000, toInt64(1) != 18446744073709551616.000000000, toInt64(1) < 18446744073709551616.000000000, toInt64(1) <= 18446744073709551616.000000000, toInt64(1) > 18446744073709551616.000000000, toInt64(1) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toInt64(1), 18446744073709551616.000000000 != toInt64(1), 18446744073709551616.000000000 < toInt64(1), 18446744073709551616.000000000 <= toInt64(1), 18446744073709551616.000000000 > toInt64(1), 18446744073709551616.000000000 >= toInt64(1) ; +SELECT '1', '9223372036854775808.000000000', 1 = 9223372036854775808.000000000, 1 != 9223372036854775808.000000000, 1 < 9223372036854775808.000000000, 1 <= 9223372036854775808.000000000, 1 > 9223372036854775808.000000000, 1 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 1, 9223372036854775808.000000000 != 1, 9223372036854775808.000000000 < 1, 9223372036854775808.000000000 <= 1, 9223372036854775808.000000000 > 1, 9223372036854775808.000000000 >= 1 , toUInt8(1) = 9223372036854775808.000000000, toUInt8(1) != 9223372036854775808.000000000, toUInt8(1) < 9223372036854775808.000000000, toUInt8(1) <= 9223372036854775808.000000000, toUInt8(1) > 9223372036854775808.000000000, toUInt8(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt8(1), 9223372036854775808.000000000 != toUInt8(1), 9223372036854775808.000000000 < toUInt8(1), 9223372036854775808.000000000 <= toUInt8(1), 9223372036854775808.000000000 > toUInt8(1), 9223372036854775808.000000000 >= toUInt8(1) , toInt8(1) = 9223372036854775808.000000000, toInt8(1) != 9223372036854775808.000000000, toInt8(1) < 9223372036854775808.000000000, toInt8(1) <= 9223372036854775808.000000000, toInt8(1) > 9223372036854775808.000000000, toInt8(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(1), 9223372036854775808.000000000 != toInt8(1), 9223372036854775808.000000000 < toInt8(1), 9223372036854775808.000000000 <= toInt8(1), 9223372036854775808.000000000 > toInt8(1), 9223372036854775808.000000000 >= toInt8(1) , toUInt16(1) = 9223372036854775808.000000000, toUInt16(1) != 9223372036854775808.000000000, toUInt16(1) < 9223372036854775808.000000000, toUInt16(1) <= 9223372036854775808.000000000, toUInt16(1) > 9223372036854775808.000000000, toUInt16(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 
toUInt16(1), 9223372036854775808.000000000 != toUInt16(1), 9223372036854775808.000000000 < toUInt16(1), 9223372036854775808.000000000 <= toUInt16(1), 9223372036854775808.000000000 > toUInt16(1), 9223372036854775808.000000000 >= toUInt16(1) , toInt16(1) = 9223372036854775808.000000000, toInt16(1) != 9223372036854775808.000000000, toInt16(1) < 9223372036854775808.000000000, toInt16(1) <= 9223372036854775808.000000000, toInt16(1) > 9223372036854775808.000000000, toInt16(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(1), 9223372036854775808.000000000 != toInt16(1), 9223372036854775808.000000000 < toInt16(1), 9223372036854775808.000000000 <= toInt16(1), 9223372036854775808.000000000 > toInt16(1), 9223372036854775808.000000000 >= toInt16(1) , toUInt32(1) = 9223372036854775808.000000000, toUInt32(1) != 9223372036854775808.000000000, toUInt32(1) < 9223372036854775808.000000000, toUInt32(1) <= 9223372036854775808.000000000, toUInt32(1) > 9223372036854775808.000000000, toUInt32(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt32(1), 9223372036854775808.000000000 != toUInt32(1), 9223372036854775808.000000000 < toUInt32(1), 9223372036854775808.000000000 <= toUInt32(1), 9223372036854775808.000000000 > toUInt32(1), 9223372036854775808.000000000 >= toUInt32(1) , toInt32(1) = 9223372036854775808.000000000, toInt32(1) != 9223372036854775808.000000000, toInt32(1) < 9223372036854775808.000000000, toInt32(1) <= 9223372036854775808.000000000, toInt32(1) > 9223372036854775808.000000000, toInt32(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(1), 9223372036854775808.000000000 != toInt32(1), 9223372036854775808.000000000 < toInt32(1), 9223372036854775808.000000000 <= toInt32(1), 9223372036854775808.000000000 > toInt32(1), 9223372036854775808.000000000 >= toInt32(1) , toUInt64(1) = 9223372036854775808.000000000, toUInt64(1) != 9223372036854775808.000000000, toUInt64(1) < 9223372036854775808.000000000, toUInt64(1) <= 9223372036854775808.000000000, toUInt64(1) > 9223372036854775808.000000000, toUInt64(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(1), 9223372036854775808.000000000 != toUInt64(1), 9223372036854775808.000000000 < toUInt64(1), 9223372036854775808.000000000 <= toUInt64(1), 9223372036854775808.000000000 > toUInt64(1), 9223372036854775808.000000000 >= toUInt64(1) , toInt64(1) = 9223372036854775808.000000000, toInt64(1) != 9223372036854775808.000000000, toInt64(1) < 9223372036854775808.000000000, toInt64(1) <= 9223372036854775808.000000000, toInt64(1) > 9223372036854775808.000000000, toInt64(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(1), 9223372036854775808.000000000 != toInt64(1), 9223372036854775808.000000000 < toInt64(1), 9223372036854775808.000000000 <= toInt64(1), 9223372036854775808.000000000 > toInt64(1), 9223372036854775808.000000000 >= toInt64(1) ; +SELECT '1', '-9223372036854775808.000000000', 1 = -9223372036854775808.000000000, 1 != -9223372036854775808.000000000, 1 < -9223372036854775808.000000000, 1 <= -9223372036854775808.000000000, 1 > -9223372036854775808.000000000, 1 >= -9223372036854775808.000000000, -9223372036854775808.000000000 = 1, -9223372036854775808.000000000 != 1, -9223372036854775808.000000000 < 1, -9223372036854775808.000000000 <= 1, -9223372036854775808.000000000 > 1, -9223372036854775808.000000000 >= 1 , toUInt8(1) = -9223372036854775808.000000000, toUInt8(1) != -9223372036854775808.000000000, toUInt8(1) < 
-9223372036854775808.000000000, toUInt8(1) <= -9223372036854775808.000000000, toUInt8(1) > -9223372036854775808.000000000, toUInt8(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt8(1), -9223372036854775808.000000000 != toUInt8(1), -9223372036854775808.000000000 < toUInt8(1), -9223372036854775808.000000000 <= toUInt8(1), -9223372036854775808.000000000 > toUInt8(1), -9223372036854775808.000000000 >= toUInt8(1) , toInt8(1) = -9223372036854775808.000000000, toInt8(1) != -9223372036854775808.000000000, toInt8(1) < -9223372036854775808.000000000, toInt8(1) <= -9223372036854775808.000000000, toInt8(1) > -9223372036854775808.000000000, toInt8(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt8(1), -9223372036854775808.000000000 != toInt8(1), -9223372036854775808.000000000 < toInt8(1), -9223372036854775808.000000000 <= toInt8(1), -9223372036854775808.000000000 > toInt8(1), -9223372036854775808.000000000 >= toInt8(1) , toUInt16(1) = -9223372036854775808.000000000, toUInt16(1) != -9223372036854775808.000000000, toUInt16(1) < -9223372036854775808.000000000, toUInt16(1) <= -9223372036854775808.000000000, toUInt16(1) > -9223372036854775808.000000000, toUInt16(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt16(1), -9223372036854775808.000000000 != toUInt16(1), -9223372036854775808.000000000 < toUInt16(1), -9223372036854775808.000000000 <= toUInt16(1), -9223372036854775808.000000000 > toUInt16(1), -9223372036854775808.000000000 >= toUInt16(1) , toInt16(1) = -9223372036854775808.000000000, toInt16(1) != -9223372036854775808.000000000, toInt16(1) < -9223372036854775808.000000000, toInt16(1) <= -9223372036854775808.000000000, toInt16(1) > -9223372036854775808.000000000, toInt16(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt16(1), -9223372036854775808.000000000 != toInt16(1), -9223372036854775808.000000000 < toInt16(1), -9223372036854775808.000000000 <= toInt16(1), -9223372036854775808.000000000 > toInt16(1), -9223372036854775808.000000000 >= toInt16(1) , toUInt32(1) = -9223372036854775808.000000000, toUInt32(1) != -9223372036854775808.000000000, toUInt32(1) < -9223372036854775808.000000000, toUInt32(1) <= -9223372036854775808.000000000, toUInt32(1) > -9223372036854775808.000000000, toUInt32(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt32(1), -9223372036854775808.000000000 != toUInt32(1), -9223372036854775808.000000000 < toUInt32(1), -9223372036854775808.000000000 <= toUInt32(1), -9223372036854775808.000000000 > toUInt32(1), -9223372036854775808.000000000 >= toUInt32(1) , toInt32(1) = -9223372036854775808.000000000, toInt32(1) != -9223372036854775808.000000000, toInt32(1) < -9223372036854775808.000000000, toInt32(1) <= -9223372036854775808.000000000, toInt32(1) > -9223372036854775808.000000000, toInt32(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt32(1), -9223372036854775808.000000000 != toInt32(1), -9223372036854775808.000000000 < toInt32(1), -9223372036854775808.000000000 <= toInt32(1), -9223372036854775808.000000000 > toInt32(1), -9223372036854775808.000000000 >= toInt32(1) , toUInt64(1) = -9223372036854775808.000000000, toUInt64(1) != -9223372036854775808.000000000, toUInt64(1) < -9223372036854775808.000000000, toUInt64(1) <= -9223372036854775808.000000000, toUInt64(1) > -9223372036854775808.000000000, toUInt64(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt64(1), -9223372036854775808.000000000 != 
toUInt64(1), -9223372036854775808.000000000 < toUInt64(1), -9223372036854775808.000000000 <= toUInt64(1), -9223372036854775808.000000000 > toUInt64(1), -9223372036854775808.000000000 >= toUInt64(1) , toInt64(1) = -9223372036854775808.000000000, toInt64(1) != -9223372036854775808.000000000, toInt64(1) < -9223372036854775808.000000000, toInt64(1) <= -9223372036854775808.000000000, toInt64(1) > -9223372036854775808.000000000, toInt64(1) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toInt64(1), -9223372036854775808.000000000 != toInt64(1), -9223372036854775808.000000000 < toInt64(1), -9223372036854775808.000000000 <= toInt64(1), -9223372036854775808.000000000 > toInt64(1), -9223372036854775808.000000000 >= toInt64(1) ; +SELECT '1', '9223372036854775808.000000000', 1 = 9223372036854775808.000000000, 1 != 9223372036854775808.000000000, 1 < 9223372036854775808.000000000, 1 <= 9223372036854775808.000000000, 1 > 9223372036854775808.000000000, 1 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 1, 9223372036854775808.000000000 != 1, 9223372036854775808.000000000 < 1, 9223372036854775808.000000000 <= 1, 9223372036854775808.000000000 > 1, 9223372036854775808.000000000 >= 1 , toUInt8(1) = 9223372036854775808.000000000, toUInt8(1) != 9223372036854775808.000000000, toUInt8(1) < 9223372036854775808.000000000, toUInt8(1) <= 9223372036854775808.000000000, toUInt8(1) > 9223372036854775808.000000000, toUInt8(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt8(1), 9223372036854775808.000000000 != toUInt8(1), 9223372036854775808.000000000 < toUInt8(1), 9223372036854775808.000000000 <= toUInt8(1), 9223372036854775808.000000000 > toUInt8(1), 9223372036854775808.000000000 >= toUInt8(1) , toInt8(1) = 9223372036854775808.000000000, toInt8(1) != 9223372036854775808.000000000, toInt8(1) < 9223372036854775808.000000000, toInt8(1) <= 9223372036854775808.000000000, toInt8(1) > 9223372036854775808.000000000, toInt8(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt8(1), 9223372036854775808.000000000 != toInt8(1), 9223372036854775808.000000000 < toInt8(1), 9223372036854775808.000000000 <= toInt8(1), 9223372036854775808.000000000 > toInt8(1), 9223372036854775808.000000000 >= toInt8(1) , toUInt16(1) = 9223372036854775808.000000000, toUInt16(1) != 9223372036854775808.000000000, toUInt16(1) < 9223372036854775808.000000000, toUInt16(1) <= 9223372036854775808.000000000, toUInt16(1) > 9223372036854775808.000000000, toUInt16(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt16(1), 9223372036854775808.000000000 != toUInt16(1), 9223372036854775808.000000000 < toUInt16(1), 9223372036854775808.000000000 <= toUInt16(1), 9223372036854775808.000000000 > toUInt16(1), 9223372036854775808.000000000 >= toUInt16(1) , toInt16(1) = 9223372036854775808.000000000, toInt16(1) != 9223372036854775808.000000000, toInt16(1) < 9223372036854775808.000000000, toInt16(1) <= 9223372036854775808.000000000, toInt16(1) > 9223372036854775808.000000000, toInt16(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt16(1), 9223372036854775808.000000000 != toInt16(1), 9223372036854775808.000000000 < toInt16(1), 9223372036854775808.000000000 <= toInt16(1), 9223372036854775808.000000000 > toInt16(1), 9223372036854775808.000000000 >= toInt16(1) , toUInt32(1) = 9223372036854775808.000000000, toUInt32(1) != 9223372036854775808.000000000, toUInt32(1) < 9223372036854775808.000000000, toUInt32(1) <= 9223372036854775808.000000000, toUInt32(1) 
> 9223372036854775808.000000000, toUInt32(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt32(1), 9223372036854775808.000000000 != toUInt32(1), 9223372036854775808.000000000 < toUInt32(1), 9223372036854775808.000000000 <= toUInt32(1), 9223372036854775808.000000000 > toUInt32(1), 9223372036854775808.000000000 >= toUInt32(1) , toInt32(1) = 9223372036854775808.000000000, toInt32(1) != 9223372036854775808.000000000, toInt32(1) < 9223372036854775808.000000000, toInt32(1) <= 9223372036854775808.000000000, toInt32(1) > 9223372036854775808.000000000, toInt32(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt32(1), 9223372036854775808.000000000 != toInt32(1), 9223372036854775808.000000000 < toInt32(1), 9223372036854775808.000000000 <= toInt32(1), 9223372036854775808.000000000 > toInt32(1), 9223372036854775808.000000000 >= toInt32(1) , toUInt64(1) = 9223372036854775808.000000000, toUInt64(1) != 9223372036854775808.000000000, toUInt64(1) < 9223372036854775808.000000000, toUInt64(1) <= 9223372036854775808.000000000, toUInt64(1) > 9223372036854775808.000000000, toUInt64(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(1), 9223372036854775808.000000000 != toUInt64(1), 9223372036854775808.000000000 < toUInt64(1), 9223372036854775808.000000000 <= toUInt64(1), 9223372036854775808.000000000 > toUInt64(1), 9223372036854775808.000000000 >= toUInt64(1) , toInt64(1) = 9223372036854775808.000000000, toInt64(1) != 9223372036854775808.000000000, toInt64(1) < 9223372036854775808.000000000, toInt64(1) <= 9223372036854775808.000000000, toInt64(1) > 9223372036854775808.000000000, toInt64(1) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toInt64(1), 9223372036854775808.000000000 != toInt64(1), 9223372036854775808.000000000 < toInt64(1), 9223372036854775808.000000000 <= toInt64(1), 9223372036854775808.000000000 > toInt64(1), 9223372036854775808.000000000 >= toInt64(1) ; +SELECT '1', '2251799813685248.000000000', 1 = 2251799813685248.000000000, 1 != 2251799813685248.000000000, 1 < 2251799813685248.000000000, 1 <= 2251799813685248.000000000, 1 > 2251799813685248.000000000, 1 >= 2251799813685248.000000000, 2251799813685248.000000000 = 1, 2251799813685248.000000000 != 1, 2251799813685248.000000000 < 1, 2251799813685248.000000000 <= 1, 2251799813685248.000000000 > 1, 2251799813685248.000000000 >= 1 , toUInt8(1) = 2251799813685248.000000000, toUInt8(1) != 2251799813685248.000000000, toUInt8(1) < 2251799813685248.000000000, toUInt8(1) <= 2251799813685248.000000000, toUInt8(1) > 2251799813685248.000000000, toUInt8(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt8(1), 2251799813685248.000000000 != toUInt8(1), 2251799813685248.000000000 < toUInt8(1), 2251799813685248.000000000 <= toUInt8(1), 2251799813685248.000000000 > toUInt8(1), 2251799813685248.000000000 >= toUInt8(1) , toInt8(1) = 2251799813685248.000000000, toInt8(1) != 2251799813685248.000000000, toInt8(1) < 2251799813685248.000000000, toInt8(1) <= 2251799813685248.000000000, toInt8(1) > 2251799813685248.000000000, toInt8(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt8(1), 2251799813685248.000000000 != toInt8(1), 2251799813685248.000000000 < toInt8(1), 2251799813685248.000000000 <= toInt8(1), 2251799813685248.000000000 > toInt8(1), 2251799813685248.000000000 >= toInt8(1) , toUInt16(1) = 2251799813685248.000000000, toUInt16(1) != 2251799813685248.000000000, toUInt16(1) < 2251799813685248.000000000, toUInt16(1) <= 
2251799813685248.000000000, toUInt16(1) > 2251799813685248.000000000, toUInt16(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt16(1), 2251799813685248.000000000 != toUInt16(1), 2251799813685248.000000000 < toUInt16(1), 2251799813685248.000000000 <= toUInt16(1), 2251799813685248.000000000 > toUInt16(1), 2251799813685248.000000000 >= toUInt16(1) , toInt16(1) = 2251799813685248.000000000, toInt16(1) != 2251799813685248.000000000, toInt16(1) < 2251799813685248.000000000, toInt16(1) <= 2251799813685248.000000000, toInt16(1) > 2251799813685248.000000000, toInt16(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt16(1), 2251799813685248.000000000 != toInt16(1), 2251799813685248.000000000 < toInt16(1), 2251799813685248.000000000 <= toInt16(1), 2251799813685248.000000000 > toInt16(1), 2251799813685248.000000000 >= toInt16(1) , toUInt32(1) = 2251799813685248.000000000, toUInt32(1) != 2251799813685248.000000000, toUInt32(1) < 2251799813685248.000000000, toUInt32(1) <= 2251799813685248.000000000, toUInt32(1) > 2251799813685248.000000000, toUInt32(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt32(1), 2251799813685248.000000000 != toUInt32(1), 2251799813685248.000000000 < toUInt32(1), 2251799813685248.000000000 <= toUInt32(1), 2251799813685248.000000000 > toUInt32(1), 2251799813685248.000000000 >= toUInt32(1) , toInt32(1) = 2251799813685248.000000000, toInt32(1) != 2251799813685248.000000000, toInt32(1) < 2251799813685248.000000000, toInt32(1) <= 2251799813685248.000000000, toInt32(1) > 2251799813685248.000000000, toInt32(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt32(1), 2251799813685248.000000000 != toInt32(1), 2251799813685248.000000000 < toInt32(1), 2251799813685248.000000000 <= toInt32(1), 2251799813685248.000000000 > toInt32(1), 2251799813685248.000000000 >= toInt32(1) , toUInt64(1) = 2251799813685248.000000000, toUInt64(1) != 2251799813685248.000000000, toUInt64(1) < 2251799813685248.000000000, toUInt64(1) <= 2251799813685248.000000000, toUInt64(1) > 2251799813685248.000000000, toUInt64(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt64(1), 2251799813685248.000000000 != toUInt64(1), 2251799813685248.000000000 < toUInt64(1), 2251799813685248.000000000 <= toUInt64(1), 2251799813685248.000000000 > toUInt64(1), 2251799813685248.000000000 >= toUInt64(1) , toInt64(1) = 2251799813685248.000000000, toInt64(1) != 2251799813685248.000000000, toInt64(1) < 2251799813685248.000000000, toInt64(1) <= 2251799813685248.000000000, toInt64(1) > 2251799813685248.000000000, toInt64(1) >= 2251799813685248.000000000, 2251799813685248.000000000 = toInt64(1), 2251799813685248.000000000 != toInt64(1), 2251799813685248.000000000 < toInt64(1), 2251799813685248.000000000 <= toInt64(1), 2251799813685248.000000000 > toInt64(1), 2251799813685248.000000000 >= toInt64(1) ; +SELECT '1', '4503599627370496.000000000', 1 = 4503599627370496.000000000, 1 != 4503599627370496.000000000, 1 < 4503599627370496.000000000, 1 <= 4503599627370496.000000000, 1 > 4503599627370496.000000000, 1 >= 4503599627370496.000000000, 4503599627370496.000000000 = 1, 4503599627370496.000000000 != 1, 4503599627370496.000000000 < 1, 4503599627370496.000000000 <= 1, 4503599627370496.000000000 > 1, 4503599627370496.000000000 >= 1 , toUInt8(1) = 4503599627370496.000000000, toUInt8(1) != 4503599627370496.000000000, toUInt8(1) < 4503599627370496.000000000, toUInt8(1) <= 4503599627370496.000000000, toUInt8(1) > 4503599627370496.000000000, toUInt8(1) >= 
4503599627370496.000000000, 4503599627370496.000000000 = toUInt8(1), 4503599627370496.000000000 != toUInt8(1), 4503599627370496.000000000 < toUInt8(1), 4503599627370496.000000000 <= toUInt8(1), 4503599627370496.000000000 > toUInt8(1), 4503599627370496.000000000 >= toUInt8(1) , toInt8(1) = 4503599627370496.000000000, toInt8(1) != 4503599627370496.000000000, toInt8(1) < 4503599627370496.000000000, toInt8(1) <= 4503599627370496.000000000, toInt8(1) > 4503599627370496.000000000, toInt8(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt8(1), 4503599627370496.000000000 != toInt8(1), 4503599627370496.000000000 < toInt8(1), 4503599627370496.000000000 <= toInt8(1), 4503599627370496.000000000 > toInt8(1), 4503599627370496.000000000 >= toInt8(1) , toUInt16(1) = 4503599627370496.000000000, toUInt16(1) != 4503599627370496.000000000, toUInt16(1) < 4503599627370496.000000000, toUInt16(1) <= 4503599627370496.000000000, toUInt16(1) > 4503599627370496.000000000, toUInt16(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt16(1), 4503599627370496.000000000 != toUInt16(1), 4503599627370496.000000000 < toUInt16(1), 4503599627370496.000000000 <= toUInt16(1), 4503599627370496.000000000 > toUInt16(1), 4503599627370496.000000000 >= toUInt16(1) , toInt16(1) = 4503599627370496.000000000, toInt16(1) != 4503599627370496.000000000, toInt16(1) < 4503599627370496.000000000, toInt16(1) <= 4503599627370496.000000000, toInt16(1) > 4503599627370496.000000000, toInt16(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt16(1), 4503599627370496.000000000 != toInt16(1), 4503599627370496.000000000 < toInt16(1), 4503599627370496.000000000 <= toInt16(1), 4503599627370496.000000000 > toInt16(1), 4503599627370496.000000000 >= toInt16(1) , toUInt32(1) = 4503599627370496.000000000, toUInt32(1) != 4503599627370496.000000000, toUInt32(1) < 4503599627370496.000000000, toUInt32(1) <= 4503599627370496.000000000, toUInt32(1) > 4503599627370496.000000000, toUInt32(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt32(1), 4503599627370496.000000000 != toUInt32(1), 4503599627370496.000000000 < toUInt32(1), 4503599627370496.000000000 <= toUInt32(1), 4503599627370496.000000000 > toUInt32(1), 4503599627370496.000000000 >= toUInt32(1) , toInt32(1) = 4503599627370496.000000000, toInt32(1) != 4503599627370496.000000000, toInt32(1) < 4503599627370496.000000000, toInt32(1) <= 4503599627370496.000000000, toInt32(1) > 4503599627370496.000000000, toInt32(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toInt32(1), 4503599627370496.000000000 != toInt32(1), 4503599627370496.000000000 < toInt32(1), 4503599627370496.000000000 <= toInt32(1), 4503599627370496.000000000 > toInt32(1), 4503599627370496.000000000 >= toInt32(1) , toUInt64(1) = 4503599627370496.000000000, toUInt64(1) != 4503599627370496.000000000, toUInt64(1) < 4503599627370496.000000000, toUInt64(1) <= 4503599627370496.000000000, toUInt64(1) > 4503599627370496.000000000, toUInt64(1) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt64(1), 4503599627370496.000000000 != toUInt64(1), 4503599627370496.000000000 < toUInt64(1), 4503599627370496.000000000 <= toUInt64(1), 4503599627370496.000000000 > toUInt64(1), 4503599627370496.000000000 >= toUInt64(1) , toInt64(1) = 4503599627370496.000000000, toInt64(1) != 4503599627370496.000000000, toInt64(1) < 4503599627370496.000000000, toInt64(1) <= 4503599627370496.000000000, toInt64(1) > 4503599627370496.000000000, toInt64(1) >= 4503599627370496.000000000, 
4503599627370496.000000000 = toInt64(1), 4503599627370496.000000000 != toInt64(1), 4503599627370496.000000000 < toInt64(1), 4503599627370496.000000000 <= toInt64(1), 4503599627370496.000000000 > toInt64(1), 4503599627370496.000000000 >= toInt64(1) ; +SELECT '1', '9007199254740991.000000000', 1 = 9007199254740991.000000000, 1 != 9007199254740991.000000000, 1 < 9007199254740991.000000000, 1 <= 9007199254740991.000000000, 1 > 9007199254740991.000000000, 1 >= 9007199254740991.000000000, 9007199254740991.000000000 = 1, 9007199254740991.000000000 != 1, 9007199254740991.000000000 < 1, 9007199254740991.000000000 <= 1, 9007199254740991.000000000 > 1, 9007199254740991.000000000 >= 1 , toUInt8(1) = 9007199254740991.000000000, toUInt8(1) != 9007199254740991.000000000, toUInt8(1) < 9007199254740991.000000000, toUInt8(1) <= 9007199254740991.000000000, toUInt8(1) > 9007199254740991.000000000, toUInt8(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt8(1), 9007199254740991.000000000 != toUInt8(1), 9007199254740991.000000000 < toUInt8(1), 9007199254740991.000000000 <= toUInt8(1), 9007199254740991.000000000 > toUInt8(1), 9007199254740991.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740991.000000000, toInt8(1) != 9007199254740991.000000000, toInt8(1) < 9007199254740991.000000000, toInt8(1) <= 9007199254740991.000000000, toInt8(1) > 9007199254740991.000000000, toInt8(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt8(1), 9007199254740991.000000000 != toInt8(1), 9007199254740991.000000000 < toInt8(1), 9007199254740991.000000000 <= toInt8(1), 9007199254740991.000000000 > toInt8(1), 9007199254740991.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740991.000000000, toUInt16(1) != 9007199254740991.000000000, toUInt16(1) < 9007199254740991.000000000, toUInt16(1) <= 9007199254740991.000000000, toUInt16(1) > 9007199254740991.000000000, toUInt16(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt16(1), 9007199254740991.000000000 != toUInt16(1), 9007199254740991.000000000 < toUInt16(1), 9007199254740991.000000000 <= toUInt16(1), 9007199254740991.000000000 > toUInt16(1), 9007199254740991.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740991.000000000, toInt16(1) != 9007199254740991.000000000, toInt16(1) < 9007199254740991.000000000, toInt16(1) <= 9007199254740991.000000000, toInt16(1) > 9007199254740991.000000000, toInt16(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt16(1), 9007199254740991.000000000 != toInt16(1), 9007199254740991.000000000 < toInt16(1), 9007199254740991.000000000 <= toInt16(1), 9007199254740991.000000000 > toInt16(1), 9007199254740991.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740991.000000000, toUInt32(1) != 9007199254740991.000000000, toUInt32(1) < 9007199254740991.000000000, toUInt32(1) <= 9007199254740991.000000000, toUInt32(1) > 9007199254740991.000000000, toUInt32(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt32(1), 9007199254740991.000000000 != toUInt32(1), 9007199254740991.000000000 < toUInt32(1), 9007199254740991.000000000 <= toUInt32(1), 9007199254740991.000000000 > toUInt32(1), 9007199254740991.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740991.000000000, toInt32(1) != 9007199254740991.000000000, toInt32(1) < 9007199254740991.000000000, toInt32(1) <= 9007199254740991.000000000, toInt32(1) > 9007199254740991.000000000, toInt32(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt32(1), 9007199254740991.000000000 != toInt32(1), 9007199254740991.000000000 < 
toInt32(1), 9007199254740991.000000000 <= toInt32(1), 9007199254740991.000000000 > toInt32(1), 9007199254740991.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740991.000000000, toUInt64(1) != 9007199254740991.000000000, toUInt64(1) < 9007199254740991.000000000, toUInt64(1) <= 9007199254740991.000000000, toUInt64(1) > 9007199254740991.000000000, toUInt64(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt64(1), 9007199254740991.000000000 != toUInt64(1), 9007199254740991.000000000 < toUInt64(1), 9007199254740991.000000000 <= toUInt64(1), 9007199254740991.000000000 > toUInt64(1), 9007199254740991.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740991.000000000, toInt64(1) != 9007199254740991.000000000, toInt64(1) < 9007199254740991.000000000, toInt64(1) <= 9007199254740991.000000000, toInt64(1) > 9007199254740991.000000000, toInt64(1) >= 9007199254740991.000000000, 9007199254740991.000000000 = toInt64(1), 9007199254740991.000000000 != toInt64(1), 9007199254740991.000000000 < toInt64(1), 9007199254740991.000000000 <= toInt64(1), 9007199254740991.000000000 > toInt64(1), 9007199254740991.000000000 >= toInt64(1) ; +SELECT '1', '9007199254740992.000000000', 1 = 9007199254740992.000000000, 1 != 9007199254740992.000000000, 1 < 9007199254740992.000000000, 1 <= 9007199254740992.000000000, 1 > 9007199254740992.000000000, 1 >= 9007199254740992.000000000, 9007199254740992.000000000 = 1, 9007199254740992.000000000 != 1, 9007199254740992.000000000 < 1, 9007199254740992.000000000 <= 1, 9007199254740992.000000000 > 1, 9007199254740992.000000000 >= 1 , toUInt8(1) = 9007199254740992.000000000, toUInt8(1) != 9007199254740992.000000000, toUInt8(1) < 9007199254740992.000000000, toUInt8(1) <= 9007199254740992.000000000, toUInt8(1) > 9007199254740992.000000000, toUInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(1), 9007199254740992.000000000 != toUInt8(1), 9007199254740992.000000000 < toUInt8(1), 9007199254740992.000000000 <= toUInt8(1), 9007199254740992.000000000 > toUInt8(1), 9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740992.000000000, toInt8(1) != 9007199254740992.000000000, toInt8(1) < 9007199254740992.000000000, toInt8(1) <= 9007199254740992.000000000, toInt8(1) > 9007199254740992.000000000, toInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(1), 9007199254740992.000000000 != toInt8(1), 9007199254740992.000000000 < toInt8(1), 9007199254740992.000000000 <= toInt8(1), 9007199254740992.000000000 > toInt8(1), 9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740992.000000000, toUInt16(1) != 9007199254740992.000000000, toUInt16(1) < 9007199254740992.000000000, toUInt16(1) <= 9007199254740992.000000000, toUInt16(1) > 9007199254740992.000000000, toUInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(1), 9007199254740992.000000000 != toUInt16(1), 9007199254740992.000000000 < toUInt16(1), 9007199254740992.000000000 <= toUInt16(1), 9007199254740992.000000000 > toUInt16(1), 9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740992.000000000, toInt16(1) != 9007199254740992.000000000, toInt16(1) < 9007199254740992.000000000, toInt16(1) <= 9007199254740992.000000000, toInt16(1) > 9007199254740992.000000000, toInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(1), 9007199254740992.000000000 != toInt16(1), 9007199254740992.000000000 < toInt16(1), 9007199254740992.000000000 <= toInt16(1), 9007199254740992.000000000 > toInt16(1), 
9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740992.000000000, toUInt32(1) != 9007199254740992.000000000, toUInt32(1) < 9007199254740992.000000000, toUInt32(1) <= 9007199254740992.000000000, toUInt32(1) > 9007199254740992.000000000, toUInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(1), 9007199254740992.000000000 != toUInt32(1), 9007199254740992.000000000 < toUInt32(1), 9007199254740992.000000000 <= toUInt32(1), 9007199254740992.000000000 > toUInt32(1), 9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740992.000000000, toInt32(1) != 9007199254740992.000000000, toInt32(1) < 9007199254740992.000000000, toInt32(1) <= 9007199254740992.000000000, toInt32(1) > 9007199254740992.000000000, toInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(1), 9007199254740992.000000000 != toInt32(1), 9007199254740992.000000000 < toInt32(1), 9007199254740992.000000000 <= toInt32(1), 9007199254740992.000000000 > toInt32(1), 9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740992.000000000, toUInt64(1) != 9007199254740992.000000000, toUInt64(1) < 9007199254740992.000000000, toUInt64(1) <= 9007199254740992.000000000, toUInt64(1) > 9007199254740992.000000000, toUInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(1), 9007199254740992.000000000 != toUInt64(1), 9007199254740992.000000000 < toUInt64(1), 9007199254740992.000000000 <= toUInt64(1), 9007199254740992.000000000 > toUInt64(1), 9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740992.000000000, toInt64(1) != 9007199254740992.000000000, toInt64(1) < 9007199254740992.000000000, toInt64(1) <= 9007199254740992.000000000, toInt64(1) > 9007199254740992.000000000, toInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(1), 9007199254740992.000000000 != toInt64(1), 9007199254740992.000000000 < toInt64(1), 9007199254740992.000000000 <= toInt64(1), 9007199254740992.000000000 > toInt64(1), 9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '9007199254740992.000000000', 1 = 9007199254740992.000000000, 1 != 9007199254740992.000000000, 1 < 9007199254740992.000000000, 1 <= 9007199254740992.000000000, 1 > 9007199254740992.000000000, 1 >= 9007199254740992.000000000, 9007199254740992.000000000 = 1, 9007199254740992.000000000 != 1, 9007199254740992.000000000 < 1, 9007199254740992.000000000 <= 1, 9007199254740992.000000000 > 1, 9007199254740992.000000000 >= 1 , toUInt8(1) = 9007199254740992.000000000, toUInt8(1) != 9007199254740992.000000000, toUInt8(1) < 9007199254740992.000000000, toUInt8(1) <= 9007199254740992.000000000, toUInt8(1) > 9007199254740992.000000000, toUInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(1), 9007199254740992.000000000 != toUInt8(1), 9007199254740992.000000000 < toUInt8(1), 9007199254740992.000000000 <= toUInt8(1), 9007199254740992.000000000 > toUInt8(1), 9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740992.000000000, toInt8(1) != 9007199254740992.000000000, toInt8(1) < 9007199254740992.000000000, toInt8(1) <= 9007199254740992.000000000, toInt8(1) > 9007199254740992.000000000, toInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(1), 9007199254740992.000000000 != toInt8(1), 9007199254740992.000000000 < toInt8(1), 9007199254740992.000000000 <= toInt8(1), 9007199254740992.000000000 > toInt8(1), 9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740992.000000000, toUInt16(1) != 
9007199254740992.000000000, toUInt16(1) < 9007199254740992.000000000, toUInt16(1) <= 9007199254740992.000000000, toUInt16(1) > 9007199254740992.000000000, toUInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(1), 9007199254740992.000000000 != toUInt16(1), 9007199254740992.000000000 < toUInt16(1), 9007199254740992.000000000 <= toUInt16(1), 9007199254740992.000000000 > toUInt16(1), 9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740992.000000000, toInt16(1) != 9007199254740992.000000000, toInt16(1) < 9007199254740992.000000000, toInt16(1) <= 9007199254740992.000000000, toInt16(1) > 9007199254740992.000000000, toInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(1), 9007199254740992.000000000 != toInt16(1), 9007199254740992.000000000 < toInt16(1), 9007199254740992.000000000 <= toInt16(1), 9007199254740992.000000000 > toInt16(1), 9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740992.000000000, toUInt32(1) != 9007199254740992.000000000, toUInt32(1) < 9007199254740992.000000000, toUInt32(1) <= 9007199254740992.000000000, toUInt32(1) > 9007199254740992.000000000, toUInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(1), 9007199254740992.000000000 != toUInt32(1), 9007199254740992.000000000 < toUInt32(1), 9007199254740992.000000000 <= toUInt32(1), 9007199254740992.000000000 > toUInt32(1), 9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740992.000000000, toInt32(1) != 9007199254740992.000000000, toInt32(1) < 9007199254740992.000000000, toInt32(1) <= 9007199254740992.000000000, toInt32(1) > 9007199254740992.000000000, toInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(1), 9007199254740992.000000000 != toInt32(1), 9007199254740992.000000000 < toInt32(1), 9007199254740992.000000000 <= toInt32(1), 9007199254740992.000000000 > toInt32(1), 9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740992.000000000, toUInt64(1) != 9007199254740992.000000000, toUInt64(1) < 9007199254740992.000000000, toUInt64(1) <= 9007199254740992.000000000, toUInt64(1) > 9007199254740992.000000000, toUInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(1), 9007199254740992.000000000 != toUInt64(1), 9007199254740992.000000000 < toUInt64(1), 9007199254740992.000000000 <= toUInt64(1), 9007199254740992.000000000 > toUInt64(1), 9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740992.000000000, toInt64(1) != 9007199254740992.000000000, toInt64(1) < 9007199254740992.000000000, toInt64(1) <= 9007199254740992.000000000, toInt64(1) > 9007199254740992.000000000, toInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(1), 9007199254740992.000000000 != toInt64(1), 9007199254740992.000000000 < toInt64(1), 9007199254740992.000000000 <= toInt64(1), 9007199254740992.000000000 > toInt64(1), 9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '9007199254740994.000000000', 1 = 9007199254740994.000000000, 1 != 9007199254740994.000000000, 1 < 9007199254740994.000000000, 1 <= 9007199254740994.000000000, 1 > 9007199254740994.000000000, 1 >= 9007199254740994.000000000, 9007199254740994.000000000 = 1, 9007199254740994.000000000 != 1, 9007199254740994.000000000 < 1, 9007199254740994.000000000 <= 1, 9007199254740994.000000000 > 1, 9007199254740994.000000000 >= 1 , toUInt8(1) = 9007199254740994.000000000, toUInt8(1) != 9007199254740994.000000000, toUInt8(1) < 9007199254740994.000000000, toUInt8(1) <= 
9007199254740994.000000000, toUInt8(1) > 9007199254740994.000000000, toUInt8(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt8(1), 9007199254740994.000000000 != toUInt8(1), 9007199254740994.000000000 < toUInt8(1), 9007199254740994.000000000 <= toUInt8(1), 9007199254740994.000000000 > toUInt8(1), 9007199254740994.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740994.000000000, toInt8(1) != 9007199254740994.000000000, toInt8(1) < 9007199254740994.000000000, toInt8(1) <= 9007199254740994.000000000, toInt8(1) > 9007199254740994.000000000, toInt8(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt8(1), 9007199254740994.000000000 != toInt8(1), 9007199254740994.000000000 < toInt8(1), 9007199254740994.000000000 <= toInt8(1), 9007199254740994.000000000 > toInt8(1), 9007199254740994.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740994.000000000, toUInt16(1) != 9007199254740994.000000000, toUInt16(1) < 9007199254740994.000000000, toUInt16(1) <= 9007199254740994.000000000, toUInt16(1) > 9007199254740994.000000000, toUInt16(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt16(1), 9007199254740994.000000000 != toUInt16(1), 9007199254740994.000000000 < toUInt16(1), 9007199254740994.000000000 <= toUInt16(1), 9007199254740994.000000000 > toUInt16(1), 9007199254740994.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740994.000000000, toInt16(1) != 9007199254740994.000000000, toInt16(1) < 9007199254740994.000000000, toInt16(1) <= 9007199254740994.000000000, toInt16(1) > 9007199254740994.000000000, toInt16(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt16(1), 9007199254740994.000000000 != toInt16(1), 9007199254740994.000000000 < toInt16(1), 9007199254740994.000000000 <= toInt16(1), 9007199254740994.000000000 > toInt16(1), 9007199254740994.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740994.000000000, toUInt32(1) != 9007199254740994.000000000, toUInt32(1) < 9007199254740994.000000000, toUInt32(1) <= 9007199254740994.000000000, toUInt32(1) > 9007199254740994.000000000, toUInt32(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt32(1), 9007199254740994.000000000 != toUInt32(1), 9007199254740994.000000000 < toUInt32(1), 9007199254740994.000000000 <= toUInt32(1), 9007199254740994.000000000 > toUInt32(1), 9007199254740994.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740994.000000000, toInt32(1) != 9007199254740994.000000000, toInt32(1) < 9007199254740994.000000000, toInt32(1) <= 9007199254740994.000000000, toInt32(1) > 9007199254740994.000000000, toInt32(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt32(1), 9007199254740994.000000000 != toInt32(1), 9007199254740994.000000000 < toInt32(1), 9007199254740994.000000000 <= toInt32(1), 9007199254740994.000000000 > toInt32(1), 9007199254740994.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740994.000000000, toUInt64(1) != 9007199254740994.000000000, toUInt64(1) < 9007199254740994.000000000, toUInt64(1) <= 9007199254740994.000000000, toUInt64(1) > 9007199254740994.000000000, toUInt64(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt64(1), 9007199254740994.000000000 != toUInt64(1), 9007199254740994.000000000 < toUInt64(1), 9007199254740994.000000000 <= toUInt64(1), 9007199254740994.000000000 > toUInt64(1), 9007199254740994.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740994.000000000, toInt64(1) != 9007199254740994.000000000, toInt64(1) < 9007199254740994.000000000, toInt64(1) <= 9007199254740994.000000000, 
toInt64(1) > 9007199254740994.000000000, toInt64(1) >= 9007199254740994.000000000, 9007199254740994.000000000 = toInt64(1), 9007199254740994.000000000 != toInt64(1), 9007199254740994.000000000 < toInt64(1), 9007199254740994.000000000 <= toInt64(1), 9007199254740994.000000000 > toInt64(1), 9007199254740994.000000000 >= toInt64(1) ; +SELECT '1', '-9007199254740991.000000000', 1 = -9007199254740991.000000000, 1 != -9007199254740991.000000000, 1 < -9007199254740991.000000000, 1 <= -9007199254740991.000000000, 1 > -9007199254740991.000000000, 1 >= -9007199254740991.000000000, -9007199254740991.000000000 = 1, -9007199254740991.000000000 != 1, -9007199254740991.000000000 < 1, -9007199254740991.000000000 <= 1, -9007199254740991.000000000 > 1, -9007199254740991.000000000 >= 1 , toUInt8(1) = -9007199254740991.000000000, toUInt8(1) != -9007199254740991.000000000, toUInt8(1) < -9007199254740991.000000000, toUInt8(1) <= -9007199254740991.000000000, toUInt8(1) > -9007199254740991.000000000, toUInt8(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt8(1), -9007199254740991.000000000 != toUInt8(1), -9007199254740991.000000000 < toUInt8(1), -9007199254740991.000000000 <= toUInt8(1), -9007199254740991.000000000 > toUInt8(1), -9007199254740991.000000000 >= toUInt8(1) , toInt8(1) = -9007199254740991.000000000, toInt8(1) != -9007199254740991.000000000, toInt8(1) < -9007199254740991.000000000, toInt8(1) <= -9007199254740991.000000000, toInt8(1) > -9007199254740991.000000000, toInt8(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt8(1), -9007199254740991.000000000 != toInt8(1), -9007199254740991.000000000 < toInt8(1), -9007199254740991.000000000 <= toInt8(1), -9007199254740991.000000000 > toInt8(1), -9007199254740991.000000000 >= toInt8(1) , toUInt16(1) = -9007199254740991.000000000, toUInt16(1) != -9007199254740991.000000000, toUInt16(1) < -9007199254740991.000000000, toUInt16(1) <= -9007199254740991.000000000, toUInt16(1) > -9007199254740991.000000000, toUInt16(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt16(1), -9007199254740991.000000000 != toUInt16(1), -9007199254740991.000000000 < toUInt16(1), -9007199254740991.000000000 <= toUInt16(1), -9007199254740991.000000000 > toUInt16(1), -9007199254740991.000000000 >= toUInt16(1) , toInt16(1) = -9007199254740991.000000000, toInt16(1) != -9007199254740991.000000000, toInt16(1) < -9007199254740991.000000000, toInt16(1) <= -9007199254740991.000000000, toInt16(1) > -9007199254740991.000000000, toInt16(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt16(1), -9007199254740991.000000000 != toInt16(1), -9007199254740991.000000000 < toInt16(1), -9007199254740991.000000000 <= toInt16(1), -9007199254740991.000000000 > toInt16(1), -9007199254740991.000000000 >= toInt16(1) , toUInt32(1) = -9007199254740991.000000000, toUInt32(1) != -9007199254740991.000000000, toUInt32(1) < -9007199254740991.000000000, toUInt32(1) <= -9007199254740991.000000000, toUInt32(1) > -9007199254740991.000000000, toUInt32(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt32(1), -9007199254740991.000000000 != toUInt32(1), -9007199254740991.000000000 < toUInt32(1), -9007199254740991.000000000 <= toUInt32(1), -9007199254740991.000000000 > toUInt32(1), -9007199254740991.000000000 >= toUInt32(1) , toInt32(1) = -9007199254740991.000000000, toInt32(1) != -9007199254740991.000000000, toInt32(1) < -9007199254740991.000000000, toInt32(1) <= -9007199254740991.000000000, toInt32(1) > 
-9007199254740991.000000000, toInt32(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt32(1), -9007199254740991.000000000 != toInt32(1), -9007199254740991.000000000 < toInt32(1), -9007199254740991.000000000 <= toInt32(1), -9007199254740991.000000000 > toInt32(1), -9007199254740991.000000000 >= toInt32(1) , toUInt64(1) = -9007199254740991.000000000, toUInt64(1) != -9007199254740991.000000000, toUInt64(1) < -9007199254740991.000000000, toUInt64(1) <= -9007199254740991.000000000, toUInt64(1) > -9007199254740991.000000000, toUInt64(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt64(1), -9007199254740991.000000000 != toUInt64(1), -9007199254740991.000000000 < toUInt64(1), -9007199254740991.000000000 <= toUInt64(1), -9007199254740991.000000000 > toUInt64(1), -9007199254740991.000000000 >= toUInt64(1) , toInt64(1) = -9007199254740991.000000000, toInt64(1) != -9007199254740991.000000000, toInt64(1) < -9007199254740991.000000000, toInt64(1) <= -9007199254740991.000000000, toInt64(1) > -9007199254740991.000000000, toInt64(1) >= -9007199254740991.000000000, -9007199254740991.000000000 = toInt64(1), -9007199254740991.000000000 != toInt64(1), -9007199254740991.000000000 < toInt64(1), -9007199254740991.000000000 <= toInt64(1), -9007199254740991.000000000 > toInt64(1), -9007199254740991.000000000 >= toInt64(1) ; +SELECT '1', '-9007199254740992.000000000', 1 = -9007199254740992.000000000, 1 != -9007199254740992.000000000, 1 < -9007199254740992.000000000, 1 <= -9007199254740992.000000000, 1 > -9007199254740992.000000000, 1 >= -9007199254740992.000000000, -9007199254740992.000000000 = 1, -9007199254740992.000000000 != 1, -9007199254740992.000000000 < 1, -9007199254740992.000000000 <= 1, -9007199254740992.000000000 > 1, -9007199254740992.000000000 >= 1 , toUInt8(1) = -9007199254740992.000000000, toUInt8(1) != -9007199254740992.000000000, toUInt8(1) < -9007199254740992.000000000, toUInt8(1) <= -9007199254740992.000000000, toUInt8(1) > -9007199254740992.000000000, toUInt8(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt8(1), -9007199254740992.000000000 != toUInt8(1), -9007199254740992.000000000 < toUInt8(1), -9007199254740992.000000000 <= toUInt8(1), -9007199254740992.000000000 > toUInt8(1), -9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = -9007199254740992.000000000, toInt8(1) != -9007199254740992.000000000, toInt8(1) < -9007199254740992.000000000, toInt8(1) <= -9007199254740992.000000000, toInt8(1) > -9007199254740992.000000000, toInt8(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt8(1), -9007199254740992.000000000 != toInt8(1), -9007199254740992.000000000 < toInt8(1), -9007199254740992.000000000 <= toInt8(1), -9007199254740992.000000000 > toInt8(1), -9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = -9007199254740992.000000000, toUInt16(1) != -9007199254740992.000000000, toUInt16(1) < -9007199254740992.000000000, toUInt16(1) <= -9007199254740992.000000000, toUInt16(1) > -9007199254740992.000000000, toUInt16(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt16(1), -9007199254740992.000000000 != toUInt16(1), -9007199254740992.000000000 < toUInt16(1), -9007199254740992.000000000 <= toUInt16(1), -9007199254740992.000000000 > toUInt16(1), -9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = -9007199254740992.000000000, toInt16(1) != -9007199254740992.000000000, toInt16(1) < -9007199254740992.000000000, toInt16(1) <= -9007199254740992.000000000, toInt16(1) > -9007199254740992.000000000, 
toInt16(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(1), -9007199254740992.000000000 != toInt16(1), -9007199254740992.000000000 < toInt16(1), -9007199254740992.000000000 <= toInt16(1), -9007199254740992.000000000 > toInt16(1), -9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = -9007199254740992.000000000, toUInt32(1) != -9007199254740992.000000000, toUInt32(1) < -9007199254740992.000000000, toUInt32(1) <= -9007199254740992.000000000, toUInt32(1) > -9007199254740992.000000000, toUInt32(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt32(1), -9007199254740992.000000000 != toUInt32(1), -9007199254740992.000000000 < toUInt32(1), -9007199254740992.000000000 <= toUInt32(1), -9007199254740992.000000000 > toUInt32(1), -9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = -9007199254740992.000000000, toInt32(1) != -9007199254740992.000000000, toInt32(1) < -9007199254740992.000000000, toInt32(1) <= -9007199254740992.000000000, toInt32(1) > -9007199254740992.000000000, toInt32(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(1), -9007199254740992.000000000 != toInt32(1), -9007199254740992.000000000 < toInt32(1), -9007199254740992.000000000 <= toInt32(1), -9007199254740992.000000000 > toInt32(1), -9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = -9007199254740992.000000000, toUInt64(1) != -9007199254740992.000000000, toUInt64(1) < -9007199254740992.000000000, toUInt64(1) <= -9007199254740992.000000000, toUInt64(1) > -9007199254740992.000000000, toUInt64(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(1), -9007199254740992.000000000 != toUInt64(1), -9007199254740992.000000000 < toUInt64(1), -9007199254740992.000000000 <= toUInt64(1), -9007199254740992.000000000 > toUInt64(1), -9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = -9007199254740992.000000000, toInt64(1) != -9007199254740992.000000000, toInt64(1) < -9007199254740992.000000000, toInt64(1) <= -9007199254740992.000000000, toInt64(1) > -9007199254740992.000000000, toInt64(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(1), -9007199254740992.000000000 != toInt64(1), -9007199254740992.000000000 < toInt64(1), -9007199254740992.000000000 <= toInt64(1), -9007199254740992.000000000 > toInt64(1), -9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '-9007199254740992.000000000', 1 = -9007199254740992.000000000, 1 != -9007199254740992.000000000, 1 < -9007199254740992.000000000, 1 <= -9007199254740992.000000000, 1 > -9007199254740992.000000000, 1 >= -9007199254740992.000000000, -9007199254740992.000000000 = 1, -9007199254740992.000000000 != 1, -9007199254740992.000000000 < 1, -9007199254740992.000000000 <= 1, -9007199254740992.000000000 > 1, -9007199254740992.000000000 >= 1 , toUInt8(1) = -9007199254740992.000000000, toUInt8(1) != -9007199254740992.000000000, toUInt8(1) < -9007199254740992.000000000, toUInt8(1) <= -9007199254740992.000000000, toUInt8(1) > -9007199254740992.000000000, toUInt8(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt8(1), -9007199254740992.000000000 != toUInt8(1), -9007199254740992.000000000 < toUInt8(1), -9007199254740992.000000000 <= toUInt8(1), -9007199254740992.000000000 > toUInt8(1), -9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = -9007199254740992.000000000, toInt8(1) != -9007199254740992.000000000, toInt8(1) < -9007199254740992.000000000, toInt8(1) <= -9007199254740992.000000000, toInt8(1) > -9007199254740992.000000000, toInt8(1) >= 
-9007199254740992.000000000, -9007199254740992.000000000 = toInt8(1), -9007199254740992.000000000 != toInt8(1), -9007199254740992.000000000 < toInt8(1), -9007199254740992.000000000 <= toInt8(1), -9007199254740992.000000000 > toInt8(1), -9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = -9007199254740992.000000000, toUInt16(1) != -9007199254740992.000000000, toUInt16(1) < -9007199254740992.000000000, toUInt16(1) <= -9007199254740992.000000000, toUInt16(1) > -9007199254740992.000000000, toUInt16(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt16(1), -9007199254740992.000000000 != toUInt16(1), -9007199254740992.000000000 < toUInt16(1), -9007199254740992.000000000 <= toUInt16(1), -9007199254740992.000000000 > toUInt16(1), -9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = -9007199254740992.000000000, toInt16(1) != -9007199254740992.000000000, toInt16(1) < -9007199254740992.000000000, toInt16(1) <= -9007199254740992.000000000, toInt16(1) > -9007199254740992.000000000, toInt16(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt16(1), -9007199254740992.000000000 != toInt16(1), -9007199254740992.000000000 < toInt16(1), -9007199254740992.000000000 <= toInt16(1), -9007199254740992.000000000 > toInt16(1), -9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = -9007199254740992.000000000, toUInt32(1) != -9007199254740992.000000000, toUInt32(1) < -9007199254740992.000000000, toUInt32(1) <= -9007199254740992.000000000, toUInt32(1) > -9007199254740992.000000000, toUInt32(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt32(1), -9007199254740992.000000000 != toUInt32(1), -9007199254740992.000000000 < toUInt32(1), -9007199254740992.000000000 <= toUInt32(1), -9007199254740992.000000000 > toUInt32(1), -9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = -9007199254740992.000000000, toInt32(1) != -9007199254740992.000000000, toInt32(1) < -9007199254740992.000000000, toInt32(1) <= -9007199254740992.000000000, toInt32(1) > -9007199254740992.000000000, toInt32(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt32(1), -9007199254740992.000000000 != toInt32(1), -9007199254740992.000000000 < toInt32(1), -9007199254740992.000000000 <= toInt32(1), -9007199254740992.000000000 > toInt32(1), -9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = -9007199254740992.000000000, toUInt64(1) != -9007199254740992.000000000, toUInt64(1) < -9007199254740992.000000000, toUInt64(1) <= -9007199254740992.000000000, toUInt64(1) > -9007199254740992.000000000, toUInt64(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(1), -9007199254740992.000000000 != toUInt64(1), -9007199254740992.000000000 < toUInt64(1), -9007199254740992.000000000 <= toUInt64(1), -9007199254740992.000000000 > toUInt64(1), -9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = -9007199254740992.000000000, toInt64(1) != -9007199254740992.000000000, toInt64(1) < -9007199254740992.000000000, toInt64(1) <= -9007199254740992.000000000, toInt64(1) > -9007199254740992.000000000, toInt64(1) >= -9007199254740992.000000000, -9007199254740992.000000000 = toInt64(1), -9007199254740992.000000000 != toInt64(1), -9007199254740992.000000000 < toInt64(1), -9007199254740992.000000000 <= toInt64(1), -9007199254740992.000000000 > toInt64(1), -9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '-9007199254740994.000000000', 1 = -9007199254740994.000000000, 1 != -9007199254740994.000000000, 1 < -9007199254740994.000000000, 1 <= 
-9007199254740994.000000000, 1 > -9007199254740994.000000000, 1 >= -9007199254740994.000000000, -9007199254740994.000000000 = 1, -9007199254740994.000000000 != 1, -9007199254740994.000000000 < 1, -9007199254740994.000000000 <= 1, -9007199254740994.000000000 > 1, -9007199254740994.000000000 >= 1 , toUInt8(1) = -9007199254740994.000000000, toUInt8(1) != -9007199254740994.000000000, toUInt8(1) < -9007199254740994.000000000, toUInt8(1) <= -9007199254740994.000000000, toUInt8(1) > -9007199254740994.000000000, toUInt8(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt8(1), -9007199254740994.000000000 != toUInt8(1), -9007199254740994.000000000 < toUInt8(1), -9007199254740994.000000000 <= toUInt8(1), -9007199254740994.000000000 > toUInt8(1), -9007199254740994.000000000 >= toUInt8(1) , toInt8(1) = -9007199254740994.000000000, toInt8(1) != -9007199254740994.000000000, toInt8(1) < -9007199254740994.000000000, toInt8(1) <= -9007199254740994.000000000, toInt8(1) > -9007199254740994.000000000, toInt8(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt8(1), -9007199254740994.000000000 != toInt8(1), -9007199254740994.000000000 < toInt8(1), -9007199254740994.000000000 <= toInt8(1), -9007199254740994.000000000 > toInt8(1), -9007199254740994.000000000 >= toInt8(1) , toUInt16(1) = -9007199254740994.000000000, toUInt16(1) != -9007199254740994.000000000, toUInt16(1) < -9007199254740994.000000000, toUInt16(1) <= -9007199254740994.000000000, toUInt16(1) > -9007199254740994.000000000, toUInt16(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt16(1), -9007199254740994.000000000 != toUInt16(1), -9007199254740994.000000000 < toUInt16(1), -9007199254740994.000000000 <= toUInt16(1), -9007199254740994.000000000 > toUInt16(1), -9007199254740994.000000000 >= toUInt16(1) , toInt16(1) = -9007199254740994.000000000, toInt16(1) != -9007199254740994.000000000, toInt16(1) < -9007199254740994.000000000, toInt16(1) <= -9007199254740994.000000000, toInt16(1) > -9007199254740994.000000000, toInt16(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt16(1), -9007199254740994.000000000 != toInt16(1), -9007199254740994.000000000 < toInt16(1), -9007199254740994.000000000 <= toInt16(1), -9007199254740994.000000000 > toInt16(1), -9007199254740994.000000000 >= toInt16(1) , toUInt32(1) = -9007199254740994.000000000, toUInt32(1) != -9007199254740994.000000000, toUInt32(1) < -9007199254740994.000000000, toUInt32(1) <= -9007199254740994.000000000, toUInt32(1) > -9007199254740994.000000000, toUInt32(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt32(1), -9007199254740994.000000000 != toUInt32(1), -9007199254740994.000000000 < toUInt32(1), -9007199254740994.000000000 <= toUInt32(1), -9007199254740994.000000000 > toUInt32(1), -9007199254740994.000000000 >= toUInt32(1) , toInt32(1) = -9007199254740994.000000000, toInt32(1) != -9007199254740994.000000000, toInt32(1) < -9007199254740994.000000000, toInt32(1) <= -9007199254740994.000000000, toInt32(1) > -9007199254740994.000000000, toInt32(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt32(1), -9007199254740994.000000000 != toInt32(1), -9007199254740994.000000000 < toInt32(1), -9007199254740994.000000000 <= toInt32(1), -9007199254740994.000000000 > toInt32(1), -9007199254740994.000000000 >= toInt32(1) , toUInt64(1) = -9007199254740994.000000000, toUInt64(1) != -9007199254740994.000000000, toUInt64(1) < -9007199254740994.000000000, toUInt64(1) <= -9007199254740994.000000000, 
toUInt64(1) > -9007199254740994.000000000, toUInt64(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt64(1), -9007199254740994.000000000 != toUInt64(1), -9007199254740994.000000000 < toUInt64(1), -9007199254740994.000000000 <= toUInt64(1), -9007199254740994.000000000 > toUInt64(1), -9007199254740994.000000000 >= toUInt64(1) , toInt64(1) = -9007199254740994.000000000, toInt64(1) != -9007199254740994.000000000, toInt64(1) < -9007199254740994.000000000, toInt64(1) <= -9007199254740994.000000000, toInt64(1) > -9007199254740994.000000000, toInt64(1) >= -9007199254740994.000000000, -9007199254740994.000000000 = toInt64(1), -9007199254740994.000000000 != toInt64(1), -9007199254740994.000000000 < toInt64(1), -9007199254740994.000000000 <= toInt64(1), -9007199254740994.000000000 > toInt64(1), -9007199254740994.000000000 >= toInt64(1) ; +SELECT '1', '104.000000000', 1 = 104.000000000, 1 != 104.000000000, 1 < 104.000000000, 1 <= 104.000000000, 1 > 104.000000000, 1 >= 104.000000000, 104.000000000 = 1, 104.000000000 != 1, 104.000000000 < 1, 104.000000000 <= 1, 104.000000000 > 1, 104.000000000 >= 1 , toUInt8(1) = 104.000000000, toUInt8(1) != 104.000000000, toUInt8(1) < 104.000000000, toUInt8(1) <= 104.000000000, toUInt8(1) > 104.000000000, toUInt8(1) >= 104.000000000, 104.000000000 = toUInt8(1), 104.000000000 != toUInt8(1), 104.000000000 < toUInt8(1), 104.000000000 <= toUInt8(1), 104.000000000 > toUInt8(1), 104.000000000 >= toUInt8(1) , toInt8(1) = 104.000000000, toInt8(1) != 104.000000000, toInt8(1) < 104.000000000, toInt8(1) <= 104.000000000, toInt8(1) > 104.000000000, toInt8(1) >= 104.000000000, 104.000000000 = toInt8(1), 104.000000000 != toInt8(1), 104.000000000 < toInt8(1), 104.000000000 <= toInt8(1), 104.000000000 > toInt8(1), 104.000000000 >= toInt8(1) , toUInt16(1) = 104.000000000, toUInt16(1) != 104.000000000, toUInt16(1) < 104.000000000, toUInt16(1) <= 104.000000000, toUInt16(1) > 104.000000000, toUInt16(1) >= 104.000000000, 104.000000000 = toUInt16(1), 104.000000000 != toUInt16(1), 104.000000000 < toUInt16(1), 104.000000000 <= toUInt16(1), 104.000000000 > toUInt16(1), 104.000000000 >= toUInt16(1) , toInt16(1) = 104.000000000, toInt16(1) != 104.000000000, toInt16(1) < 104.000000000, toInt16(1) <= 104.000000000, toInt16(1) > 104.000000000, toInt16(1) >= 104.000000000, 104.000000000 = toInt16(1), 104.000000000 != toInt16(1), 104.000000000 < toInt16(1), 104.000000000 <= toInt16(1), 104.000000000 > toInt16(1), 104.000000000 >= toInt16(1) , toUInt32(1) = 104.000000000, toUInt32(1) != 104.000000000, toUInt32(1) < 104.000000000, toUInt32(1) <= 104.000000000, toUInt32(1) > 104.000000000, toUInt32(1) >= 104.000000000, 104.000000000 = toUInt32(1), 104.000000000 != toUInt32(1), 104.000000000 < toUInt32(1), 104.000000000 <= toUInt32(1), 104.000000000 > toUInt32(1), 104.000000000 >= toUInt32(1) , toInt32(1) = 104.000000000, toInt32(1) != 104.000000000, toInt32(1) < 104.000000000, toInt32(1) <= 104.000000000, toInt32(1) > 104.000000000, toInt32(1) >= 104.000000000, 104.000000000 = toInt32(1), 104.000000000 != toInt32(1), 104.000000000 < toInt32(1), 104.000000000 <= toInt32(1), 104.000000000 > toInt32(1), 104.000000000 >= toInt32(1) , toUInt64(1) = 104.000000000, toUInt64(1) != 104.000000000, toUInt64(1) < 104.000000000, toUInt64(1) <= 104.000000000, toUInt64(1) > 104.000000000, toUInt64(1) >= 104.000000000, 104.000000000 = toUInt64(1), 104.000000000 != toUInt64(1), 104.000000000 < toUInt64(1), 104.000000000 <= toUInt64(1), 104.000000000 > toUInt64(1), 104.000000000 >= toUInt64(1) , 
toInt64(1) = 104.000000000, toInt64(1) != 104.000000000, toInt64(1) < 104.000000000, toInt64(1) <= 104.000000000, toInt64(1) > 104.000000000, toInt64(1) >= 104.000000000, 104.000000000 = toInt64(1), 104.000000000 != toInt64(1), 104.000000000 < toInt64(1), 104.000000000 <= toInt64(1), 104.000000000 > toInt64(1), 104.000000000 >= toInt64(1) ; +SELECT '1', '-4503599627370496.000000000', 1 = -4503599627370496.000000000, 1 != -4503599627370496.000000000, 1 < -4503599627370496.000000000, 1 <= -4503599627370496.000000000, 1 > -4503599627370496.000000000, 1 >= -4503599627370496.000000000, -4503599627370496.000000000 = 1, -4503599627370496.000000000 != 1, -4503599627370496.000000000 < 1, -4503599627370496.000000000 <= 1, -4503599627370496.000000000 > 1, -4503599627370496.000000000 >= 1 , toUInt8(1) = -4503599627370496.000000000, toUInt8(1) != -4503599627370496.000000000, toUInt8(1) < -4503599627370496.000000000, toUInt8(1) <= -4503599627370496.000000000, toUInt8(1) > -4503599627370496.000000000, toUInt8(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt8(1), -4503599627370496.000000000 != toUInt8(1), -4503599627370496.000000000 < toUInt8(1), -4503599627370496.000000000 <= toUInt8(1), -4503599627370496.000000000 > toUInt8(1), -4503599627370496.000000000 >= toUInt8(1) , toInt8(1) = -4503599627370496.000000000, toInt8(1) != -4503599627370496.000000000, toInt8(1) < -4503599627370496.000000000, toInt8(1) <= -4503599627370496.000000000, toInt8(1) > -4503599627370496.000000000, toInt8(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt8(1), -4503599627370496.000000000 != toInt8(1), -4503599627370496.000000000 < toInt8(1), -4503599627370496.000000000 <= toInt8(1), -4503599627370496.000000000 > toInt8(1), -4503599627370496.000000000 >= toInt8(1) , toUInt16(1) = -4503599627370496.000000000, toUInt16(1) != -4503599627370496.000000000, toUInt16(1) < -4503599627370496.000000000, toUInt16(1) <= -4503599627370496.000000000, toUInt16(1) > -4503599627370496.000000000, toUInt16(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt16(1), -4503599627370496.000000000 != toUInt16(1), -4503599627370496.000000000 < toUInt16(1), -4503599627370496.000000000 <= toUInt16(1), -4503599627370496.000000000 > toUInt16(1), -4503599627370496.000000000 >= toUInt16(1) , toInt16(1) = -4503599627370496.000000000, toInt16(1) != -4503599627370496.000000000, toInt16(1) < -4503599627370496.000000000, toInt16(1) <= -4503599627370496.000000000, toInt16(1) > -4503599627370496.000000000, toInt16(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt16(1), -4503599627370496.000000000 != toInt16(1), -4503599627370496.000000000 < toInt16(1), -4503599627370496.000000000 <= toInt16(1), -4503599627370496.000000000 > toInt16(1), -4503599627370496.000000000 >= toInt16(1) , toUInt32(1) = -4503599627370496.000000000, toUInt32(1) != -4503599627370496.000000000, toUInt32(1) < -4503599627370496.000000000, toUInt32(1) <= -4503599627370496.000000000, toUInt32(1) > -4503599627370496.000000000, toUInt32(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt32(1), -4503599627370496.000000000 != toUInt32(1), -4503599627370496.000000000 < toUInt32(1), -4503599627370496.000000000 <= toUInt32(1), -4503599627370496.000000000 > toUInt32(1), -4503599627370496.000000000 >= toUInt32(1) , toInt32(1) = -4503599627370496.000000000, toInt32(1) != -4503599627370496.000000000, toInt32(1) < -4503599627370496.000000000, toInt32(1) <= -4503599627370496.000000000, toInt32(1) > 
-4503599627370496.000000000, toInt32(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt32(1), -4503599627370496.000000000 != toInt32(1), -4503599627370496.000000000 < toInt32(1), -4503599627370496.000000000 <= toInt32(1), -4503599627370496.000000000 > toInt32(1), -4503599627370496.000000000 >= toInt32(1) , toUInt64(1) = -4503599627370496.000000000, toUInt64(1) != -4503599627370496.000000000, toUInt64(1) < -4503599627370496.000000000, toUInt64(1) <= -4503599627370496.000000000, toUInt64(1) > -4503599627370496.000000000, toUInt64(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt64(1), -4503599627370496.000000000 != toUInt64(1), -4503599627370496.000000000 < toUInt64(1), -4503599627370496.000000000 <= toUInt64(1), -4503599627370496.000000000 > toUInt64(1), -4503599627370496.000000000 >= toUInt64(1) , toInt64(1) = -4503599627370496.000000000, toInt64(1) != -4503599627370496.000000000, toInt64(1) < -4503599627370496.000000000, toInt64(1) <= -4503599627370496.000000000, toInt64(1) > -4503599627370496.000000000, toInt64(1) >= -4503599627370496.000000000, -4503599627370496.000000000 = toInt64(1), -4503599627370496.000000000 != toInt64(1), -4503599627370496.000000000 < toInt64(1), -4503599627370496.000000000 <= toInt64(1), -4503599627370496.000000000 > toInt64(1), -4503599627370496.000000000 >= toInt64(1) ; +SELECT '1', '-0.500000000', 1 = -0.500000000, 1 != -0.500000000, 1 < -0.500000000, 1 <= -0.500000000, 1 > -0.500000000, 1 >= -0.500000000, -0.500000000 = 1, -0.500000000 != 1, -0.500000000 < 1, -0.500000000 <= 1, -0.500000000 > 1, -0.500000000 >= 1 , toUInt8(1) = -0.500000000, toUInt8(1) != -0.500000000, toUInt8(1) < -0.500000000, toUInt8(1) <= -0.500000000, toUInt8(1) > -0.500000000, toUInt8(1) >= -0.500000000, -0.500000000 = toUInt8(1), -0.500000000 != toUInt8(1), -0.500000000 < toUInt8(1), -0.500000000 <= toUInt8(1), -0.500000000 > toUInt8(1), -0.500000000 >= toUInt8(1) , toInt8(1) = -0.500000000, toInt8(1) != -0.500000000, toInt8(1) < -0.500000000, toInt8(1) <= -0.500000000, toInt8(1) > -0.500000000, toInt8(1) >= -0.500000000, -0.500000000 = toInt8(1), -0.500000000 != toInt8(1), -0.500000000 < toInt8(1), -0.500000000 <= toInt8(1), -0.500000000 > toInt8(1), -0.500000000 >= toInt8(1) , toUInt16(1) = -0.500000000, toUInt16(1) != -0.500000000, toUInt16(1) < -0.500000000, toUInt16(1) <= -0.500000000, toUInt16(1) > -0.500000000, toUInt16(1) >= -0.500000000, -0.500000000 = toUInt16(1), -0.500000000 != toUInt16(1), -0.500000000 < toUInt16(1), -0.500000000 <= toUInt16(1), -0.500000000 > toUInt16(1), -0.500000000 >= toUInt16(1) , toInt16(1) = -0.500000000, toInt16(1) != -0.500000000, toInt16(1) < -0.500000000, toInt16(1) <= -0.500000000, toInt16(1) > -0.500000000, toInt16(1) >= -0.500000000, -0.500000000 = toInt16(1), -0.500000000 != toInt16(1), -0.500000000 < toInt16(1), -0.500000000 <= toInt16(1), -0.500000000 > toInt16(1), -0.500000000 >= toInt16(1) , toUInt32(1) = -0.500000000, toUInt32(1) != -0.500000000, toUInt32(1) < -0.500000000, toUInt32(1) <= -0.500000000, toUInt32(1) > -0.500000000, toUInt32(1) >= -0.500000000, -0.500000000 = toUInt32(1), -0.500000000 != toUInt32(1), -0.500000000 < toUInt32(1), -0.500000000 <= toUInt32(1), -0.500000000 > toUInt32(1), -0.500000000 >= toUInt32(1) , toInt32(1) = -0.500000000, toInt32(1) != -0.500000000, toInt32(1) < -0.500000000, toInt32(1) <= -0.500000000, toInt32(1) > -0.500000000, toInt32(1) >= -0.500000000, -0.500000000 = toInt32(1), -0.500000000 != toInt32(1), -0.500000000 < toInt32(1), -0.500000000 <= 
toInt32(1), -0.500000000 > toInt32(1), -0.500000000 >= toInt32(1) , toUInt64(1) = -0.500000000, toUInt64(1) != -0.500000000, toUInt64(1) < -0.500000000, toUInt64(1) <= -0.500000000, toUInt64(1) > -0.500000000, toUInt64(1) >= -0.500000000, -0.500000000 = toUInt64(1), -0.500000000 != toUInt64(1), -0.500000000 < toUInt64(1), -0.500000000 <= toUInt64(1), -0.500000000 > toUInt64(1), -0.500000000 >= toUInt64(1) , toInt64(1) = -0.500000000, toInt64(1) != -0.500000000, toInt64(1) < -0.500000000, toInt64(1) <= -0.500000000, toInt64(1) > -0.500000000, toInt64(1) >= -0.500000000, -0.500000000 = toInt64(1), -0.500000000 != toInt64(1), -0.500000000 < toInt64(1), -0.500000000 <= toInt64(1), -0.500000000 > toInt64(1), -0.500000000 >= toInt64(1) ; +SELECT '1', '0.500000000', 1 = 0.500000000, 1 != 0.500000000, 1 < 0.500000000, 1 <= 0.500000000, 1 > 0.500000000, 1 >= 0.500000000, 0.500000000 = 1, 0.500000000 != 1, 0.500000000 < 1, 0.500000000 <= 1, 0.500000000 > 1, 0.500000000 >= 1 , toUInt8(1) = 0.500000000, toUInt8(1) != 0.500000000, toUInt8(1) < 0.500000000, toUInt8(1) <= 0.500000000, toUInt8(1) > 0.500000000, toUInt8(1) >= 0.500000000, 0.500000000 = toUInt8(1), 0.500000000 != toUInt8(1), 0.500000000 < toUInt8(1), 0.500000000 <= toUInt8(1), 0.500000000 > toUInt8(1), 0.500000000 >= toUInt8(1) , toInt8(1) = 0.500000000, toInt8(1) != 0.500000000, toInt8(1) < 0.500000000, toInt8(1) <= 0.500000000, toInt8(1) > 0.500000000, toInt8(1) >= 0.500000000, 0.500000000 = toInt8(1), 0.500000000 != toInt8(1), 0.500000000 < toInt8(1), 0.500000000 <= toInt8(1), 0.500000000 > toInt8(1), 0.500000000 >= toInt8(1) , toUInt16(1) = 0.500000000, toUInt16(1) != 0.500000000, toUInt16(1) < 0.500000000, toUInt16(1) <= 0.500000000, toUInt16(1) > 0.500000000, toUInt16(1) >= 0.500000000, 0.500000000 = toUInt16(1), 0.500000000 != toUInt16(1), 0.500000000 < toUInt16(1), 0.500000000 <= toUInt16(1), 0.500000000 > toUInt16(1), 0.500000000 >= toUInt16(1) , toInt16(1) = 0.500000000, toInt16(1) != 0.500000000, toInt16(1) < 0.500000000, toInt16(1) <= 0.500000000, toInt16(1) > 0.500000000, toInt16(1) >= 0.500000000, 0.500000000 = toInt16(1), 0.500000000 != toInt16(1), 0.500000000 < toInt16(1), 0.500000000 <= toInt16(1), 0.500000000 > toInt16(1), 0.500000000 >= toInt16(1) , toUInt32(1) = 0.500000000, toUInt32(1) != 0.500000000, toUInt32(1) < 0.500000000, toUInt32(1) <= 0.500000000, toUInt32(1) > 0.500000000, toUInt32(1) >= 0.500000000, 0.500000000 = toUInt32(1), 0.500000000 != toUInt32(1), 0.500000000 < toUInt32(1), 0.500000000 <= toUInt32(1), 0.500000000 > toUInt32(1), 0.500000000 >= toUInt32(1) , toInt32(1) = 0.500000000, toInt32(1) != 0.500000000, toInt32(1) < 0.500000000, toInt32(1) <= 0.500000000, toInt32(1) > 0.500000000, toInt32(1) >= 0.500000000, 0.500000000 = toInt32(1), 0.500000000 != toInt32(1), 0.500000000 < toInt32(1), 0.500000000 <= toInt32(1), 0.500000000 > toInt32(1), 0.500000000 >= toInt32(1) , toUInt64(1) = 0.500000000, toUInt64(1) != 0.500000000, toUInt64(1) < 0.500000000, toUInt64(1) <= 0.500000000, toUInt64(1) > 0.500000000, toUInt64(1) >= 0.500000000, 0.500000000 = toUInt64(1), 0.500000000 != toUInt64(1), 0.500000000 < toUInt64(1), 0.500000000 <= toUInt64(1), 0.500000000 > toUInt64(1), 0.500000000 >= toUInt64(1) , toInt64(1) = 0.500000000, toInt64(1) != 0.500000000, toInt64(1) < 0.500000000, toInt64(1) <= 0.500000000, toInt64(1) > 0.500000000, toInt64(1) >= 0.500000000, 0.500000000 = toInt64(1), 0.500000000 != toInt64(1), 0.500000000 < toInt64(1), 0.500000000 <= toInt64(1), 0.500000000 > toInt64(1), 0.500000000 >= toInt64(1) 
; +SELECT '1', '-1.500000000', 1 = -1.500000000, 1 != -1.500000000, 1 < -1.500000000, 1 <= -1.500000000, 1 > -1.500000000, 1 >= -1.500000000, -1.500000000 = 1, -1.500000000 != 1, -1.500000000 < 1, -1.500000000 <= 1, -1.500000000 > 1, -1.500000000 >= 1 , toUInt8(1) = -1.500000000, toUInt8(1) != -1.500000000, toUInt8(1) < -1.500000000, toUInt8(1) <= -1.500000000, toUInt8(1) > -1.500000000, toUInt8(1) >= -1.500000000, -1.500000000 = toUInt8(1), -1.500000000 != toUInt8(1), -1.500000000 < toUInt8(1), -1.500000000 <= toUInt8(1), -1.500000000 > toUInt8(1), -1.500000000 >= toUInt8(1) , toInt8(1) = -1.500000000, toInt8(1) != -1.500000000, toInt8(1) < -1.500000000, toInt8(1) <= -1.500000000, toInt8(1) > -1.500000000, toInt8(1) >= -1.500000000, -1.500000000 = toInt8(1), -1.500000000 != toInt8(1), -1.500000000 < toInt8(1), -1.500000000 <= toInt8(1), -1.500000000 > toInt8(1), -1.500000000 >= toInt8(1) , toUInt16(1) = -1.500000000, toUInt16(1) != -1.500000000, toUInt16(1) < -1.500000000, toUInt16(1) <= -1.500000000, toUInt16(1) > -1.500000000, toUInt16(1) >= -1.500000000, -1.500000000 = toUInt16(1), -1.500000000 != toUInt16(1), -1.500000000 < toUInt16(1), -1.500000000 <= toUInt16(1), -1.500000000 > toUInt16(1), -1.500000000 >= toUInt16(1) , toInt16(1) = -1.500000000, toInt16(1) != -1.500000000, toInt16(1) < -1.500000000, toInt16(1) <= -1.500000000, toInt16(1) > -1.500000000, toInt16(1) >= -1.500000000, -1.500000000 = toInt16(1), -1.500000000 != toInt16(1), -1.500000000 < toInt16(1), -1.500000000 <= toInt16(1), -1.500000000 > toInt16(1), -1.500000000 >= toInt16(1) , toUInt32(1) = -1.500000000, toUInt32(1) != -1.500000000, toUInt32(1) < -1.500000000, toUInt32(1) <= -1.500000000, toUInt32(1) > -1.500000000, toUInt32(1) >= -1.500000000, -1.500000000 = toUInt32(1), -1.500000000 != toUInt32(1), -1.500000000 < toUInt32(1), -1.500000000 <= toUInt32(1), -1.500000000 > toUInt32(1), -1.500000000 >= toUInt32(1) , toInt32(1) = -1.500000000, toInt32(1) != -1.500000000, toInt32(1) < -1.500000000, toInt32(1) <= -1.500000000, toInt32(1) > -1.500000000, toInt32(1) >= -1.500000000, -1.500000000 = toInt32(1), -1.500000000 != toInt32(1), -1.500000000 < toInt32(1), -1.500000000 <= toInt32(1), -1.500000000 > toInt32(1), -1.500000000 >= toInt32(1) , toUInt64(1) = -1.500000000, toUInt64(1) != -1.500000000, toUInt64(1) < -1.500000000, toUInt64(1) <= -1.500000000, toUInt64(1) > -1.500000000, toUInt64(1) >= -1.500000000, -1.500000000 = toUInt64(1), -1.500000000 != toUInt64(1), -1.500000000 < toUInt64(1), -1.500000000 <= toUInt64(1), -1.500000000 > toUInt64(1), -1.500000000 >= toUInt64(1) , toInt64(1) = -1.500000000, toInt64(1) != -1.500000000, toInt64(1) < -1.500000000, toInt64(1) <= -1.500000000, toInt64(1) > -1.500000000, toInt64(1) >= -1.500000000, -1.500000000 = toInt64(1), -1.500000000 != toInt64(1), -1.500000000 < toInt64(1), -1.500000000 <= toInt64(1), -1.500000000 > toInt64(1), -1.500000000 >= toInt64(1) ; +SELECT '1', '1.500000000', 1 = 1.500000000, 1 != 1.500000000, 1 < 1.500000000, 1 <= 1.500000000, 1 > 1.500000000, 1 >= 1.500000000, 1.500000000 = 1, 1.500000000 != 1, 1.500000000 < 1, 1.500000000 <= 1, 1.500000000 > 1, 1.500000000 >= 1 , toUInt8(1) = 1.500000000, toUInt8(1) != 1.500000000, toUInt8(1) < 1.500000000, toUInt8(1) <= 1.500000000, toUInt8(1) > 1.500000000, toUInt8(1) >= 1.500000000, 1.500000000 = toUInt8(1), 1.500000000 != toUInt8(1), 1.500000000 < toUInt8(1), 1.500000000 <= toUInt8(1), 1.500000000 > toUInt8(1), 1.500000000 >= toUInt8(1) , toInt8(1) = 1.500000000, toInt8(1) != 1.500000000, toInt8(1) < 
1.500000000, toInt8(1) <= 1.500000000, toInt8(1) > 1.500000000, toInt8(1) >= 1.500000000, 1.500000000 = toInt8(1), 1.500000000 != toInt8(1), 1.500000000 < toInt8(1), 1.500000000 <= toInt8(1), 1.500000000 > toInt8(1), 1.500000000 >= toInt8(1) , toUInt16(1) = 1.500000000, toUInt16(1) != 1.500000000, toUInt16(1) < 1.500000000, toUInt16(1) <= 1.500000000, toUInt16(1) > 1.500000000, toUInt16(1) >= 1.500000000, 1.500000000 = toUInt16(1), 1.500000000 != toUInt16(1), 1.500000000 < toUInt16(1), 1.500000000 <= toUInt16(1), 1.500000000 > toUInt16(1), 1.500000000 >= toUInt16(1) , toInt16(1) = 1.500000000, toInt16(1) != 1.500000000, toInt16(1) < 1.500000000, toInt16(1) <= 1.500000000, toInt16(1) > 1.500000000, toInt16(1) >= 1.500000000, 1.500000000 = toInt16(1), 1.500000000 != toInt16(1), 1.500000000 < toInt16(1), 1.500000000 <= toInt16(1), 1.500000000 > toInt16(1), 1.500000000 >= toInt16(1) , toUInt32(1) = 1.500000000, toUInt32(1) != 1.500000000, toUInt32(1) < 1.500000000, toUInt32(1) <= 1.500000000, toUInt32(1) > 1.500000000, toUInt32(1) >= 1.500000000, 1.500000000 = toUInt32(1), 1.500000000 != toUInt32(1), 1.500000000 < toUInt32(1), 1.500000000 <= toUInt32(1), 1.500000000 > toUInt32(1), 1.500000000 >= toUInt32(1) , toInt32(1) = 1.500000000, toInt32(1) != 1.500000000, toInt32(1) < 1.500000000, toInt32(1) <= 1.500000000, toInt32(1) > 1.500000000, toInt32(1) >= 1.500000000, 1.500000000 = toInt32(1), 1.500000000 != toInt32(1), 1.500000000 < toInt32(1), 1.500000000 <= toInt32(1), 1.500000000 > toInt32(1), 1.500000000 >= toInt32(1) , toUInt64(1) = 1.500000000, toUInt64(1) != 1.500000000, toUInt64(1) < 1.500000000, toUInt64(1) <= 1.500000000, toUInt64(1) > 1.500000000, toUInt64(1) >= 1.500000000, 1.500000000 = toUInt64(1), 1.500000000 != toUInt64(1), 1.500000000 < toUInt64(1), 1.500000000 <= toUInt64(1), 1.500000000 > toUInt64(1), 1.500000000 >= toUInt64(1) , toInt64(1) = 1.500000000, toInt64(1) != 1.500000000, toInt64(1) < 1.500000000, toInt64(1) <= 1.500000000, toInt64(1) > 1.500000000, toInt64(1) >= 1.500000000, 1.500000000 = toInt64(1), 1.500000000 != toInt64(1), 1.500000000 < toInt64(1), 1.500000000 <= toInt64(1), 1.500000000 > toInt64(1), 1.500000000 >= toInt64(1) ; +SELECT '1', '9007199254740992.000000000', 1 = 9007199254740992.000000000, 1 != 9007199254740992.000000000, 1 < 9007199254740992.000000000, 1 <= 9007199254740992.000000000, 1 > 9007199254740992.000000000, 1 >= 9007199254740992.000000000, 9007199254740992.000000000 = 1, 9007199254740992.000000000 != 1, 9007199254740992.000000000 < 1, 9007199254740992.000000000 <= 1, 9007199254740992.000000000 > 1, 9007199254740992.000000000 >= 1 , toUInt8(1) = 9007199254740992.000000000, toUInt8(1) != 9007199254740992.000000000, toUInt8(1) < 9007199254740992.000000000, toUInt8(1) <= 9007199254740992.000000000, toUInt8(1) > 9007199254740992.000000000, toUInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt8(1), 9007199254740992.000000000 != toUInt8(1), 9007199254740992.000000000 < toUInt8(1), 9007199254740992.000000000 <= toUInt8(1), 9007199254740992.000000000 > toUInt8(1), 9007199254740992.000000000 >= toUInt8(1) , toInt8(1) = 9007199254740992.000000000, toInt8(1) != 9007199254740992.000000000, toInt8(1) < 9007199254740992.000000000, toInt8(1) <= 9007199254740992.000000000, toInt8(1) > 9007199254740992.000000000, toInt8(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt8(1), 9007199254740992.000000000 != toInt8(1), 9007199254740992.000000000 < toInt8(1), 9007199254740992.000000000 <= toInt8(1), 
9007199254740992.000000000 > toInt8(1), 9007199254740992.000000000 >= toInt8(1) , toUInt16(1) = 9007199254740992.000000000, toUInt16(1) != 9007199254740992.000000000, toUInt16(1) < 9007199254740992.000000000, toUInt16(1) <= 9007199254740992.000000000, toUInt16(1) > 9007199254740992.000000000, toUInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt16(1), 9007199254740992.000000000 != toUInt16(1), 9007199254740992.000000000 < toUInt16(1), 9007199254740992.000000000 <= toUInt16(1), 9007199254740992.000000000 > toUInt16(1), 9007199254740992.000000000 >= toUInt16(1) , toInt16(1) = 9007199254740992.000000000, toInt16(1) != 9007199254740992.000000000, toInt16(1) < 9007199254740992.000000000, toInt16(1) <= 9007199254740992.000000000, toInt16(1) > 9007199254740992.000000000, toInt16(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt16(1), 9007199254740992.000000000 != toInt16(1), 9007199254740992.000000000 < toInt16(1), 9007199254740992.000000000 <= toInt16(1), 9007199254740992.000000000 > toInt16(1), 9007199254740992.000000000 >= toInt16(1) , toUInt32(1) = 9007199254740992.000000000, toUInt32(1) != 9007199254740992.000000000, toUInt32(1) < 9007199254740992.000000000, toUInt32(1) <= 9007199254740992.000000000, toUInt32(1) > 9007199254740992.000000000, toUInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt32(1), 9007199254740992.000000000 != toUInt32(1), 9007199254740992.000000000 < toUInt32(1), 9007199254740992.000000000 <= toUInt32(1), 9007199254740992.000000000 > toUInt32(1), 9007199254740992.000000000 >= toUInt32(1) , toInt32(1) = 9007199254740992.000000000, toInt32(1) != 9007199254740992.000000000, toInt32(1) < 9007199254740992.000000000, toInt32(1) <= 9007199254740992.000000000, toInt32(1) > 9007199254740992.000000000, toInt32(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt32(1), 9007199254740992.000000000 != toInt32(1), 9007199254740992.000000000 < toInt32(1), 9007199254740992.000000000 <= toInt32(1), 9007199254740992.000000000 > toInt32(1), 9007199254740992.000000000 >= toInt32(1) , toUInt64(1) = 9007199254740992.000000000, toUInt64(1) != 9007199254740992.000000000, toUInt64(1) < 9007199254740992.000000000, toUInt64(1) <= 9007199254740992.000000000, toUInt64(1) > 9007199254740992.000000000, toUInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(1), 9007199254740992.000000000 != toUInt64(1), 9007199254740992.000000000 < toUInt64(1), 9007199254740992.000000000 <= toUInt64(1), 9007199254740992.000000000 > toUInt64(1), 9007199254740992.000000000 >= toUInt64(1) , toInt64(1) = 9007199254740992.000000000, toInt64(1) != 9007199254740992.000000000, toInt64(1) < 9007199254740992.000000000, toInt64(1) <= 9007199254740992.000000000, toInt64(1) > 9007199254740992.000000000, toInt64(1) >= 9007199254740992.000000000, 9007199254740992.000000000 = toInt64(1), 9007199254740992.000000000 != toInt64(1), 9007199254740992.000000000 < toInt64(1), 9007199254740992.000000000 <= toInt64(1), 9007199254740992.000000000 > toInt64(1), 9007199254740992.000000000 >= toInt64(1) ; +SELECT '1', '2251799813685247.500000000', 1 = 2251799813685247.500000000, 1 != 2251799813685247.500000000, 1 < 2251799813685247.500000000, 1 <= 2251799813685247.500000000, 1 > 2251799813685247.500000000, 1 >= 2251799813685247.500000000, 2251799813685247.500000000 = 1, 2251799813685247.500000000 != 1, 2251799813685247.500000000 < 1, 2251799813685247.500000000 <= 1, 2251799813685247.500000000 > 1, 2251799813685247.500000000 >= 1 , 
toUInt8(1) = 2251799813685247.500000000, toUInt8(1) != 2251799813685247.500000000, toUInt8(1) < 2251799813685247.500000000, toUInt8(1) <= 2251799813685247.500000000, toUInt8(1) > 2251799813685247.500000000, toUInt8(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt8(1), 2251799813685247.500000000 != toUInt8(1), 2251799813685247.500000000 < toUInt8(1), 2251799813685247.500000000 <= toUInt8(1), 2251799813685247.500000000 > toUInt8(1), 2251799813685247.500000000 >= toUInt8(1) , toInt8(1) = 2251799813685247.500000000, toInt8(1) != 2251799813685247.500000000, toInt8(1) < 2251799813685247.500000000, toInt8(1) <= 2251799813685247.500000000, toInt8(1) > 2251799813685247.500000000, toInt8(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt8(1), 2251799813685247.500000000 != toInt8(1), 2251799813685247.500000000 < toInt8(1), 2251799813685247.500000000 <= toInt8(1), 2251799813685247.500000000 > toInt8(1), 2251799813685247.500000000 >= toInt8(1) , toUInt16(1) = 2251799813685247.500000000, toUInt16(1) != 2251799813685247.500000000, toUInt16(1) < 2251799813685247.500000000, toUInt16(1) <= 2251799813685247.500000000, toUInt16(1) > 2251799813685247.500000000, toUInt16(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt16(1), 2251799813685247.500000000 != toUInt16(1), 2251799813685247.500000000 < toUInt16(1), 2251799813685247.500000000 <= toUInt16(1), 2251799813685247.500000000 > toUInt16(1), 2251799813685247.500000000 >= toUInt16(1) , toInt16(1) = 2251799813685247.500000000, toInt16(1) != 2251799813685247.500000000, toInt16(1) < 2251799813685247.500000000, toInt16(1) <= 2251799813685247.500000000, toInt16(1) > 2251799813685247.500000000, toInt16(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt16(1), 2251799813685247.500000000 != toInt16(1), 2251799813685247.500000000 < toInt16(1), 2251799813685247.500000000 <= toInt16(1), 2251799813685247.500000000 > toInt16(1), 2251799813685247.500000000 >= toInt16(1) , toUInt32(1) = 2251799813685247.500000000, toUInt32(1) != 2251799813685247.500000000, toUInt32(1) < 2251799813685247.500000000, toUInt32(1) <= 2251799813685247.500000000, toUInt32(1) > 2251799813685247.500000000, toUInt32(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt32(1), 2251799813685247.500000000 != toUInt32(1), 2251799813685247.500000000 < toUInt32(1), 2251799813685247.500000000 <= toUInt32(1), 2251799813685247.500000000 > toUInt32(1), 2251799813685247.500000000 >= toUInt32(1) , toInt32(1) = 2251799813685247.500000000, toInt32(1) != 2251799813685247.500000000, toInt32(1) < 2251799813685247.500000000, toInt32(1) <= 2251799813685247.500000000, toInt32(1) > 2251799813685247.500000000, toInt32(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt32(1), 2251799813685247.500000000 != toInt32(1), 2251799813685247.500000000 < toInt32(1), 2251799813685247.500000000 <= toInt32(1), 2251799813685247.500000000 > toInt32(1), 2251799813685247.500000000 >= toInt32(1) , toUInt64(1) = 2251799813685247.500000000, toUInt64(1) != 2251799813685247.500000000, toUInt64(1) < 2251799813685247.500000000, toUInt64(1) <= 2251799813685247.500000000, toUInt64(1) > 2251799813685247.500000000, toUInt64(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt64(1), 2251799813685247.500000000 != toUInt64(1), 2251799813685247.500000000 < toUInt64(1), 2251799813685247.500000000 <= toUInt64(1), 2251799813685247.500000000 > toUInt64(1), 2251799813685247.500000000 >= toUInt64(1) , toInt64(1) = 
2251799813685247.500000000, toInt64(1) != 2251799813685247.500000000, toInt64(1) < 2251799813685247.500000000, toInt64(1) <= 2251799813685247.500000000, toInt64(1) > 2251799813685247.500000000, toInt64(1) >= 2251799813685247.500000000, 2251799813685247.500000000 = toInt64(1), 2251799813685247.500000000 != toInt64(1), 2251799813685247.500000000 < toInt64(1), 2251799813685247.500000000 <= toInt64(1), 2251799813685247.500000000 > toInt64(1), 2251799813685247.500000000 >= toInt64(1) ; +SELECT '1', '2251799813685248.500000000', 1 = 2251799813685248.500000000, 1 != 2251799813685248.500000000, 1 < 2251799813685248.500000000, 1 <= 2251799813685248.500000000, 1 > 2251799813685248.500000000, 1 >= 2251799813685248.500000000, 2251799813685248.500000000 = 1, 2251799813685248.500000000 != 1, 2251799813685248.500000000 < 1, 2251799813685248.500000000 <= 1, 2251799813685248.500000000 > 1, 2251799813685248.500000000 >= 1 , toUInt8(1) = 2251799813685248.500000000, toUInt8(1) != 2251799813685248.500000000, toUInt8(1) < 2251799813685248.500000000, toUInt8(1) <= 2251799813685248.500000000, toUInt8(1) > 2251799813685248.500000000, toUInt8(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt8(1), 2251799813685248.500000000 != toUInt8(1), 2251799813685248.500000000 < toUInt8(1), 2251799813685248.500000000 <= toUInt8(1), 2251799813685248.500000000 > toUInt8(1), 2251799813685248.500000000 >= toUInt8(1) , toInt8(1) = 2251799813685248.500000000, toInt8(1) != 2251799813685248.500000000, toInt8(1) < 2251799813685248.500000000, toInt8(1) <= 2251799813685248.500000000, toInt8(1) > 2251799813685248.500000000, toInt8(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt8(1), 2251799813685248.500000000 != toInt8(1), 2251799813685248.500000000 < toInt8(1), 2251799813685248.500000000 <= toInt8(1), 2251799813685248.500000000 > toInt8(1), 2251799813685248.500000000 >= toInt8(1) , toUInt16(1) = 2251799813685248.500000000, toUInt16(1) != 2251799813685248.500000000, toUInt16(1) < 2251799813685248.500000000, toUInt16(1) <= 2251799813685248.500000000, toUInt16(1) > 2251799813685248.500000000, toUInt16(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt16(1), 2251799813685248.500000000 != toUInt16(1), 2251799813685248.500000000 < toUInt16(1), 2251799813685248.500000000 <= toUInt16(1), 2251799813685248.500000000 > toUInt16(1), 2251799813685248.500000000 >= toUInt16(1) , toInt16(1) = 2251799813685248.500000000, toInt16(1) != 2251799813685248.500000000, toInt16(1) < 2251799813685248.500000000, toInt16(1) <= 2251799813685248.500000000, toInt16(1) > 2251799813685248.500000000, toInt16(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt16(1), 2251799813685248.500000000 != toInt16(1), 2251799813685248.500000000 < toInt16(1), 2251799813685248.500000000 <= toInt16(1), 2251799813685248.500000000 > toInt16(1), 2251799813685248.500000000 >= toInt16(1) , toUInt32(1) = 2251799813685248.500000000, toUInt32(1) != 2251799813685248.500000000, toUInt32(1) < 2251799813685248.500000000, toUInt32(1) <= 2251799813685248.500000000, toUInt32(1) > 2251799813685248.500000000, toUInt32(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt32(1), 2251799813685248.500000000 != toUInt32(1), 2251799813685248.500000000 < toUInt32(1), 2251799813685248.500000000 <= toUInt32(1), 2251799813685248.500000000 > toUInt32(1), 2251799813685248.500000000 >= toUInt32(1) , toInt32(1) = 2251799813685248.500000000, toInt32(1) != 2251799813685248.500000000, toInt32(1) < 2251799813685248.500000000, 
toInt32(1) <= 2251799813685248.500000000, toInt32(1) > 2251799813685248.500000000, toInt32(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt32(1), 2251799813685248.500000000 != toInt32(1), 2251799813685248.500000000 < toInt32(1), 2251799813685248.500000000 <= toInt32(1), 2251799813685248.500000000 > toInt32(1), 2251799813685248.500000000 >= toInt32(1) , toUInt64(1) = 2251799813685248.500000000, toUInt64(1) != 2251799813685248.500000000, toUInt64(1) < 2251799813685248.500000000, toUInt64(1) <= 2251799813685248.500000000, toUInt64(1) > 2251799813685248.500000000, toUInt64(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt64(1), 2251799813685248.500000000 != toUInt64(1), 2251799813685248.500000000 < toUInt64(1), 2251799813685248.500000000 <= toUInt64(1), 2251799813685248.500000000 > toUInt64(1), 2251799813685248.500000000 >= toUInt64(1) , toInt64(1) = 2251799813685248.500000000, toInt64(1) != 2251799813685248.500000000, toInt64(1) < 2251799813685248.500000000, toInt64(1) <= 2251799813685248.500000000, toInt64(1) > 2251799813685248.500000000, toInt64(1) >= 2251799813685248.500000000, 2251799813685248.500000000 = toInt64(1), 2251799813685248.500000000 != toInt64(1), 2251799813685248.500000000 < toInt64(1), 2251799813685248.500000000 <= toInt64(1), 2251799813685248.500000000 > toInt64(1), 2251799813685248.500000000 >= toInt64(1) ; +SELECT '1', '1152921504606846976.000000000', 1 = 1152921504606846976.000000000, 1 != 1152921504606846976.000000000, 1 < 1152921504606846976.000000000, 1 <= 1152921504606846976.000000000, 1 > 1152921504606846976.000000000, 1 >= 1152921504606846976.000000000, 1152921504606846976.000000000 = 1, 1152921504606846976.000000000 != 1, 1152921504606846976.000000000 < 1, 1152921504606846976.000000000 <= 1, 1152921504606846976.000000000 > 1, 1152921504606846976.000000000 >= 1 , toUInt8(1) = 1152921504606846976.000000000, toUInt8(1) != 1152921504606846976.000000000, toUInt8(1) < 1152921504606846976.000000000, toUInt8(1) <= 1152921504606846976.000000000, toUInt8(1) > 1152921504606846976.000000000, toUInt8(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt8(1), 1152921504606846976.000000000 != toUInt8(1), 1152921504606846976.000000000 < toUInt8(1), 1152921504606846976.000000000 <= toUInt8(1), 1152921504606846976.000000000 > toUInt8(1), 1152921504606846976.000000000 >= toUInt8(1) , toInt8(1) = 1152921504606846976.000000000, toInt8(1) != 1152921504606846976.000000000, toInt8(1) < 1152921504606846976.000000000, toInt8(1) <= 1152921504606846976.000000000, toInt8(1) > 1152921504606846976.000000000, toInt8(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt8(1), 1152921504606846976.000000000 != toInt8(1), 1152921504606846976.000000000 < toInt8(1), 1152921504606846976.000000000 <= toInt8(1), 1152921504606846976.000000000 > toInt8(1), 1152921504606846976.000000000 >= toInt8(1) , toUInt16(1) = 1152921504606846976.000000000, toUInt16(1) != 1152921504606846976.000000000, toUInt16(1) < 1152921504606846976.000000000, toUInt16(1) <= 1152921504606846976.000000000, toUInt16(1) > 1152921504606846976.000000000, toUInt16(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt16(1), 1152921504606846976.000000000 != toUInt16(1), 1152921504606846976.000000000 < toUInt16(1), 1152921504606846976.000000000 <= toUInt16(1), 1152921504606846976.000000000 > toUInt16(1), 1152921504606846976.000000000 >= toUInt16(1) , toInt16(1) = 1152921504606846976.000000000, toInt16(1) != 1152921504606846976.000000000, 
toInt16(1) < 1152921504606846976.000000000, toInt16(1) <= 1152921504606846976.000000000, toInt16(1) > 1152921504606846976.000000000, toInt16(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt16(1), 1152921504606846976.000000000 != toInt16(1), 1152921504606846976.000000000 < toInt16(1), 1152921504606846976.000000000 <= toInt16(1), 1152921504606846976.000000000 > toInt16(1), 1152921504606846976.000000000 >= toInt16(1) , toUInt32(1) = 1152921504606846976.000000000, toUInt32(1) != 1152921504606846976.000000000, toUInt32(1) < 1152921504606846976.000000000, toUInt32(1) <= 1152921504606846976.000000000, toUInt32(1) > 1152921504606846976.000000000, toUInt32(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt32(1), 1152921504606846976.000000000 != toUInt32(1), 1152921504606846976.000000000 < toUInt32(1), 1152921504606846976.000000000 <= toUInt32(1), 1152921504606846976.000000000 > toUInt32(1), 1152921504606846976.000000000 >= toUInt32(1) , toInt32(1) = 1152921504606846976.000000000, toInt32(1) != 1152921504606846976.000000000, toInt32(1) < 1152921504606846976.000000000, toInt32(1) <= 1152921504606846976.000000000, toInt32(1) > 1152921504606846976.000000000, toInt32(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt32(1), 1152921504606846976.000000000 != toInt32(1), 1152921504606846976.000000000 < toInt32(1), 1152921504606846976.000000000 <= toInt32(1), 1152921504606846976.000000000 > toInt32(1), 1152921504606846976.000000000 >= toInt32(1) , toUInt64(1) = 1152921504606846976.000000000, toUInt64(1) != 1152921504606846976.000000000, toUInt64(1) < 1152921504606846976.000000000, toUInt64(1) <= 1152921504606846976.000000000, toUInt64(1) > 1152921504606846976.000000000, toUInt64(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt64(1), 1152921504606846976.000000000 != toUInt64(1), 1152921504606846976.000000000 < toUInt64(1), 1152921504606846976.000000000 <= toUInt64(1), 1152921504606846976.000000000 > toUInt64(1), 1152921504606846976.000000000 >= toUInt64(1) , toInt64(1) = 1152921504606846976.000000000, toInt64(1) != 1152921504606846976.000000000, toInt64(1) < 1152921504606846976.000000000, toInt64(1) <= 1152921504606846976.000000000, toInt64(1) > 1152921504606846976.000000000, toInt64(1) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toInt64(1), 1152921504606846976.000000000 != toInt64(1), 1152921504606846976.000000000 < toInt64(1), 1152921504606846976.000000000 <= toInt64(1), 1152921504606846976.000000000 > toInt64(1), 1152921504606846976.000000000 >= toInt64(1) ; +SELECT '1', '-1152921504606846976.000000000', 1 = -1152921504606846976.000000000, 1 != -1152921504606846976.000000000, 1 < -1152921504606846976.000000000, 1 <= -1152921504606846976.000000000, 1 > -1152921504606846976.000000000, 1 >= -1152921504606846976.000000000, -1152921504606846976.000000000 = 1, -1152921504606846976.000000000 != 1, -1152921504606846976.000000000 < 1, -1152921504606846976.000000000 <= 1, -1152921504606846976.000000000 > 1, -1152921504606846976.000000000 >= 1 , toUInt8(1) = -1152921504606846976.000000000, toUInt8(1) != -1152921504606846976.000000000, toUInt8(1) < -1152921504606846976.000000000, toUInt8(1) <= -1152921504606846976.000000000, toUInt8(1) > -1152921504606846976.000000000, toUInt8(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt8(1), -1152921504606846976.000000000 != toUInt8(1), -1152921504606846976.000000000 < toUInt8(1), -1152921504606846976.000000000 <= toUInt8(1), 
-1152921504606846976.000000000 > toUInt8(1), -1152921504606846976.000000000 >= toUInt8(1) , toInt8(1) = -1152921504606846976.000000000, toInt8(1) != -1152921504606846976.000000000, toInt8(1) < -1152921504606846976.000000000, toInt8(1) <= -1152921504606846976.000000000, toInt8(1) > -1152921504606846976.000000000, toInt8(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt8(1), -1152921504606846976.000000000 != toInt8(1), -1152921504606846976.000000000 < toInt8(1), -1152921504606846976.000000000 <= toInt8(1), -1152921504606846976.000000000 > toInt8(1), -1152921504606846976.000000000 >= toInt8(1) , toUInt16(1) = -1152921504606846976.000000000, toUInt16(1) != -1152921504606846976.000000000, toUInt16(1) < -1152921504606846976.000000000, toUInt16(1) <= -1152921504606846976.000000000, toUInt16(1) > -1152921504606846976.000000000, toUInt16(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt16(1), -1152921504606846976.000000000 != toUInt16(1), -1152921504606846976.000000000 < toUInt16(1), -1152921504606846976.000000000 <= toUInt16(1), -1152921504606846976.000000000 > toUInt16(1), -1152921504606846976.000000000 >= toUInt16(1) , toInt16(1) = -1152921504606846976.000000000, toInt16(1) != -1152921504606846976.000000000, toInt16(1) < -1152921504606846976.000000000, toInt16(1) <= -1152921504606846976.000000000, toInt16(1) > -1152921504606846976.000000000, toInt16(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt16(1), -1152921504606846976.000000000 != toInt16(1), -1152921504606846976.000000000 < toInt16(1), -1152921504606846976.000000000 <= toInt16(1), -1152921504606846976.000000000 > toInt16(1), -1152921504606846976.000000000 >= toInt16(1) , toUInt32(1) = -1152921504606846976.000000000, toUInt32(1) != -1152921504606846976.000000000, toUInt32(1) < -1152921504606846976.000000000, toUInt32(1) <= -1152921504606846976.000000000, toUInt32(1) > -1152921504606846976.000000000, toUInt32(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt32(1), -1152921504606846976.000000000 != toUInt32(1), -1152921504606846976.000000000 < toUInt32(1), -1152921504606846976.000000000 <= toUInt32(1), -1152921504606846976.000000000 > toUInt32(1), -1152921504606846976.000000000 >= toUInt32(1) , toInt32(1) = -1152921504606846976.000000000, toInt32(1) != -1152921504606846976.000000000, toInt32(1) < -1152921504606846976.000000000, toInt32(1) <= -1152921504606846976.000000000, toInt32(1) > -1152921504606846976.000000000, toInt32(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt32(1), -1152921504606846976.000000000 != toInt32(1), -1152921504606846976.000000000 < toInt32(1), -1152921504606846976.000000000 <= toInt32(1), -1152921504606846976.000000000 > toInt32(1), -1152921504606846976.000000000 >= toInt32(1) , toUInt64(1) = -1152921504606846976.000000000, toUInt64(1) != -1152921504606846976.000000000, toUInt64(1) < -1152921504606846976.000000000, toUInt64(1) <= -1152921504606846976.000000000, toUInt64(1) > -1152921504606846976.000000000, toUInt64(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt64(1), -1152921504606846976.000000000 != toUInt64(1), -1152921504606846976.000000000 < toUInt64(1), -1152921504606846976.000000000 <= toUInt64(1), -1152921504606846976.000000000 > toUInt64(1), -1152921504606846976.000000000 >= toUInt64(1) , toInt64(1) = -1152921504606846976.000000000, toInt64(1) != -1152921504606846976.000000000, toInt64(1) < -1152921504606846976.000000000, toInt64(1) <= 
-1152921504606846976.000000000, toInt64(1) > -1152921504606846976.000000000, toInt64(1) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toInt64(1), -1152921504606846976.000000000 != toInt64(1), -1152921504606846976.000000000 < toInt64(1), -1152921504606846976.000000000 <= toInt64(1), -1152921504606846976.000000000 > toInt64(1), -1152921504606846976.000000000 >= toInt64(1) ; +SELECT '1', '-9223372036854786048.000000000', 1 = -9223372036854786048.000000000, 1 != -9223372036854786048.000000000, 1 < -9223372036854786048.000000000, 1 <= -9223372036854786048.000000000, 1 > -9223372036854786048.000000000, 1 >= -9223372036854786048.000000000, -9223372036854786048.000000000 = 1, -9223372036854786048.000000000 != 1, -9223372036854786048.000000000 < 1, -9223372036854786048.000000000 <= 1, -9223372036854786048.000000000 > 1, -9223372036854786048.000000000 >= 1 , toUInt8(1) = -9223372036854786048.000000000, toUInt8(1) != -9223372036854786048.000000000, toUInt8(1) < -9223372036854786048.000000000, toUInt8(1) <= -9223372036854786048.000000000, toUInt8(1) > -9223372036854786048.000000000, toUInt8(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt8(1), -9223372036854786048.000000000 != toUInt8(1), -9223372036854786048.000000000 < toUInt8(1), -9223372036854786048.000000000 <= toUInt8(1), -9223372036854786048.000000000 > toUInt8(1), -9223372036854786048.000000000 >= toUInt8(1) , toInt8(1) = -9223372036854786048.000000000, toInt8(1) != -9223372036854786048.000000000, toInt8(1) < -9223372036854786048.000000000, toInt8(1) <= -9223372036854786048.000000000, toInt8(1) > -9223372036854786048.000000000, toInt8(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt8(1), -9223372036854786048.000000000 != toInt8(1), -9223372036854786048.000000000 < toInt8(1), -9223372036854786048.000000000 <= toInt8(1), -9223372036854786048.000000000 > toInt8(1), -9223372036854786048.000000000 >= toInt8(1) , toUInt16(1) = -9223372036854786048.000000000, toUInt16(1) != -9223372036854786048.000000000, toUInt16(1) < -9223372036854786048.000000000, toUInt16(1) <= -9223372036854786048.000000000, toUInt16(1) > -9223372036854786048.000000000, toUInt16(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt16(1), -9223372036854786048.000000000 != toUInt16(1), -9223372036854786048.000000000 < toUInt16(1), -9223372036854786048.000000000 <= toUInt16(1), -9223372036854786048.000000000 > toUInt16(1), -9223372036854786048.000000000 >= toUInt16(1) , toInt16(1) = -9223372036854786048.000000000, toInt16(1) != -9223372036854786048.000000000, toInt16(1) < -9223372036854786048.000000000, toInt16(1) <= -9223372036854786048.000000000, toInt16(1) > -9223372036854786048.000000000, toInt16(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt16(1), -9223372036854786048.000000000 != toInt16(1), -9223372036854786048.000000000 < toInt16(1), -9223372036854786048.000000000 <= toInt16(1), -9223372036854786048.000000000 > toInt16(1), -9223372036854786048.000000000 >= toInt16(1) , toUInt32(1) = -9223372036854786048.000000000, toUInt32(1) != -9223372036854786048.000000000, toUInt32(1) < -9223372036854786048.000000000, toUInt32(1) <= -9223372036854786048.000000000, toUInt32(1) > -9223372036854786048.000000000, toUInt32(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt32(1), -9223372036854786048.000000000 != toUInt32(1), -9223372036854786048.000000000 < toUInt32(1), -9223372036854786048.000000000 <= toUInt32(1), 
-9223372036854786048.000000000 > toUInt32(1), -9223372036854786048.000000000 >= toUInt32(1) , toInt32(1) = -9223372036854786048.000000000, toInt32(1) != -9223372036854786048.000000000, toInt32(1) < -9223372036854786048.000000000, toInt32(1) <= -9223372036854786048.000000000, toInt32(1) > -9223372036854786048.000000000, toInt32(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt32(1), -9223372036854786048.000000000 != toInt32(1), -9223372036854786048.000000000 < toInt32(1), -9223372036854786048.000000000 <= toInt32(1), -9223372036854786048.000000000 > toInt32(1), -9223372036854786048.000000000 >= toInt32(1) , toUInt64(1) = -9223372036854786048.000000000, toUInt64(1) != -9223372036854786048.000000000, toUInt64(1) < -9223372036854786048.000000000, toUInt64(1) <= -9223372036854786048.000000000, toUInt64(1) > -9223372036854786048.000000000, toUInt64(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt64(1), -9223372036854786048.000000000 != toUInt64(1), -9223372036854786048.000000000 < toUInt64(1), -9223372036854786048.000000000 <= toUInt64(1), -9223372036854786048.000000000 > toUInt64(1), -9223372036854786048.000000000 >= toUInt64(1) , toInt64(1) = -9223372036854786048.000000000, toInt64(1) != -9223372036854786048.000000000, toInt64(1) < -9223372036854786048.000000000, toInt64(1) <= -9223372036854786048.000000000, toInt64(1) > -9223372036854786048.000000000, toInt64(1) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toInt64(1), -9223372036854786048.000000000 != toInt64(1), -9223372036854786048.000000000 < toInt64(1), -9223372036854786048.000000000 <= toInt64(1), -9223372036854786048.000000000 > toInt64(1), -9223372036854786048.000000000 >= toInt64(1) ; +SELECT '1', '9223372036854786048.000000000', 1 = 9223372036854786048.000000000, 1 != 9223372036854786048.000000000, 1 < 9223372036854786048.000000000, 1 <= 9223372036854786048.000000000, 1 > 9223372036854786048.000000000, 1 >= 9223372036854786048.000000000, 9223372036854786048.000000000 = 1, 9223372036854786048.000000000 != 1, 9223372036854786048.000000000 < 1, 9223372036854786048.000000000 <= 1, 9223372036854786048.000000000 > 1, 9223372036854786048.000000000 >= 1 , toUInt8(1) = 9223372036854786048.000000000, toUInt8(1) != 9223372036854786048.000000000, toUInt8(1) < 9223372036854786048.000000000, toUInt8(1) <= 9223372036854786048.000000000, toUInt8(1) > 9223372036854786048.000000000, toUInt8(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt8(1), 9223372036854786048.000000000 != toUInt8(1), 9223372036854786048.000000000 < toUInt8(1), 9223372036854786048.000000000 <= toUInt8(1), 9223372036854786048.000000000 > toUInt8(1), 9223372036854786048.000000000 >= toUInt8(1) , toInt8(1) = 9223372036854786048.000000000, toInt8(1) != 9223372036854786048.000000000, toInt8(1) < 9223372036854786048.000000000, toInt8(1) <= 9223372036854786048.000000000, toInt8(1) > 9223372036854786048.000000000, toInt8(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt8(1), 9223372036854786048.000000000 != toInt8(1), 9223372036854786048.000000000 < toInt8(1), 9223372036854786048.000000000 <= toInt8(1), 9223372036854786048.000000000 > toInt8(1), 9223372036854786048.000000000 >= toInt8(1) , toUInt16(1) = 9223372036854786048.000000000, toUInt16(1) != 9223372036854786048.000000000, toUInt16(1) < 9223372036854786048.000000000, toUInt16(1) <= 9223372036854786048.000000000, toUInt16(1) > 9223372036854786048.000000000, toUInt16(1) >= 9223372036854786048.000000000, 
9223372036854786048.000000000 = toUInt16(1), 9223372036854786048.000000000 != toUInt16(1), 9223372036854786048.000000000 < toUInt16(1), 9223372036854786048.000000000 <= toUInt16(1), 9223372036854786048.000000000 > toUInt16(1), 9223372036854786048.000000000 >= toUInt16(1) , toInt16(1) = 9223372036854786048.000000000, toInt16(1) != 9223372036854786048.000000000, toInt16(1) < 9223372036854786048.000000000, toInt16(1) <= 9223372036854786048.000000000, toInt16(1) > 9223372036854786048.000000000, toInt16(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt16(1), 9223372036854786048.000000000 != toInt16(1), 9223372036854786048.000000000 < toInt16(1), 9223372036854786048.000000000 <= toInt16(1), 9223372036854786048.000000000 > toInt16(1), 9223372036854786048.000000000 >= toInt16(1) , toUInt32(1) = 9223372036854786048.000000000, toUInt32(1) != 9223372036854786048.000000000, toUInt32(1) < 9223372036854786048.000000000, toUInt32(1) <= 9223372036854786048.000000000, toUInt32(1) > 9223372036854786048.000000000, toUInt32(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt32(1), 9223372036854786048.000000000 != toUInt32(1), 9223372036854786048.000000000 < toUInt32(1), 9223372036854786048.000000000 <= toUInt32(1), 9223372036854786048.000000000 > toUInt32(1), 9223372036854786048.000000000 >= toUInt32(1) , toInt32(1) = 9223372036854786048.000000000, toInt32(1) != 9223372036854786048.000000000, toInt32(1) < 9223372036854786048.000000000, toInt32(1) <= 9223372036854786048.000000000, toInt32(1) > 9223372036854786048.000000000, toInt32(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt32(1), 9223372036854786048.000000000 != toInt32(1), 9223372036854786048.000000000 < toInt32(1), 9223372036854786048.000000000 <= toInt32(1), 9223372036854786048.000000000 > toInt32(1), 9223372036854786048.000000000 >= toInt32(1) , toUInt64(1) = 9223372036854786048.000000000, toUInt64(1) != 9223372036854786048.000000000, toUInt64(1) < 9223372036854786048.000000000, toUInt64(1) <= 9223372036854786048.000000000, toUInt64(1) > 9223372036854786048.000000000, toUInt64(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt64(1), 9223372036854786048.000000000 != toUInt64(1), 9223372036854786048.000000000 < toUInt64(1), 9223372036854786048.000000000 <= toUInt64(1), 9223372036854786048.000000000 > toUInt64(1), 9223372036854786048.000000000 >= toUInt64(1) , toInt64(1) = 9223372036854786048.000000000, toInt64(1) != 9223372036854786048.000000000, toInt64(1) < 9223372036854786048.000000000, toInt64(1) <= 9223372036854786048.000000000, toInt64(1) > 9223372036854786048.000000000, toInt64(1) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toInt64(1), 9223372036854786048.000000000 != toInt64(1), 9223372036854786048.000000000 < toInt64(1), 9223372036854786048.000000000 <= toInt64(1), 9223372036854786048.000000000 > toInt64(1), 9223372036854786048.000000000 >= toInt64(1) ; +SELECT '18446744073709551615', '0.000000000', 18446744073709551615 = 0.000000000, 18446744073709551615 != 0.000000000, 18446744073709551615 < 0.000000000, 18446744073709551615 <= 0.000000000, 18446744073709551615 > 0.000000000, 18446744073709551615 >= 0.000000000, 0.000000000 = 18446744073709551615, 0.000000000 != 18446744073709551615, 0.000000000 < 18446744073709551615, 0.000000000 <= 18446744073709551615, 0.000000000 > 18446744073709551615, 0.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 0.000000000, toUInt64(18446744073709551615) != 0.000000000, 
toUInt64(18446744073709551615) < 0.000000000, toUInt64(18446744073709551615) <= 0.000000000, toUInt64(18446744073709551615) > 0.000000000, toUInt64(18446744073709551615) >= 0.000000000, 0.000000000 = toUInt64(18446744073709551615), 0.000000000 != toUInt64(18446744073709551615), 0.000000000 < toUInt64(18446744073709551615), 0.000000000 <= toUInt64(18446744073709551615), 0.000000000 > toUInt64(18446744073709551615), 0.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-1.000000000', 18446744073709551615 = -1.000000000, 18446744073709551615 != -1.000000000, 18446744073709551615 < -1.000000000, 18446744073709551615 <= -1.000000000, 18446744073709551615 > -1.000000000, 18446744073709551615 >= -1.000000000, -1.000000000 = 18446744073709551615, -1.000000000 != 18446744073709551615, -1.000000000 < 18446744073709551615, -1.000000000 <= 18446744073709551615, -1.000000000 > 18446744073709551615, -1.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -1.000000000, toUInt64(18446744073709551615) != -1.000000000, toUInt64(18446744073709551615) < -1.000000000, toUInt64(18446744073709551615) <= -1.000000000, toUInt64(18446744073709551615) > -1.000000000, toUInt64(18446744073709551615) >= -1.000000000, -1.000000000 = toUInt64(18446744073709551615), -1.000000000 != toUInt64(18446744073709551615), -1.000000000 < toUInt64(18446744073709551615), -1.000000000 <= toUInt64(18446744073709551615), -1.000000000 > toUInt64(18446744073709551615), -1.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '1.000000000', 18446744073709551615 = 1.000000000, 18446744073709551615 != 1.000000000, 18446744073709551615 < 1.000000000, 18446744073709551615 <= 1.000000000, 18446744073709551615 > 1.000000000, 18446744073709551615 >= 1.000000000, 1.000000000 = 18446744073709551615, 1.000000000 != 18446744073709551615, 1.000000000 < 18446744073709551615, 1.000000000 <= 18446744073709551615, 1.000000000 > 18446744073709551615, 1.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 1.000000000, toUInt64(18446744073709551615) != 1.000000000, toUInt64(18446744073709551615) < 1.000000000, toUInt64(18446744073709551615) <= 1.000000000, toUInt64(18446744073709551615) > 1.000000000, toUInt64(18446744073709551615) >= 1.000000000, 1.000000000 = toUInt64(18446744073709551615), 1.000000000 != toUInt64(18446744073709551615), 1.000000000 < toUInt64(18446744073709551615), 1.000000000 <= toUInt64(18446744073709551615), 1.000000000 > toUInt64(18446744073709551615), 1.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '18446744073709551616.000000000', 18446744073709551615 = 18446744073709551616.000000000, 18446744073709551615 != 18446744073709551616.000000000, 18446744073709551615 < 18446744073709551616.000000000, 18446744073709551615 <= 18446744073709551616.000000000, 18446744073709551615 > 18446744073709551616.000000000, 18446744073709551615 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = 18446744073709551615, 18446744073709551616.000000000 != 18446744073709551615, 18446744073709551616.000000000 < 18446744073709551615, 18446744073709551616.000000000 <= 18446744073709551615, 18446744073709551616.000000000 > 18446744073709551615, 18446744073709551616.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 18446744073709551616.000000000, toUInt64(18446744073709551615) != 18446744073709551616.000000000, toUInt64(18446744073709551615) < 18446744073709551616.000000000, toUInt64(18446744073709551615) <= 
18446744073709551616.000000000, toUInt64(18446744073709551615) > 18446744073709551616.000000000, toUInt64(18446744073709551615) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt64(18446744073709551615), 18446744073709551616.000000000 != toUInt64(18446744073709551615), 18446744073709551616.000000000 < toUInt64(18446744073709551615), 18446744073709551616.000000000 <= toUInt64(18446744073709551615), 18446744073709551616.000000000 > toUInt64(18446744073709551615), 18446744073709551616.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9223372036854775808.000000000', 18446744073709551615 = 9223372036854775808.000000000, 18446744073709551615 != 9223372036854775808.000000000, 18446744073709551615 < 9223372036854775808.000000000, 18446744073709551615 <= 9223372036854775808.000000000, 18446744073709551615 > 9223372036854775808.000000000, 18446744073709551615 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 18446744073709551615, 9223372036854775808.000000000 != 18446744073709551615, 9223372036854775808.000000000 < 18446744073709551615, 9223372036854775808.000000000 <= 18446744073709551615, 9223372036854775808.000000000 > 18446744073709551615, 9223372036854775808.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9223372036854775808.000000000, toUInt64(18446744073709551615) != 9223372036854775808.000000000, toUInt64(18446744073709551615) < 9223372036854775808.000000000, toUInt64(18446744073709551615) <= 9223372036854775808.000000000, toUInt64(18446744073709551615) > 9223372036854775808.000000000, toUInt64(18446744073709551615) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(18446744073709551615), 9223372036854775808.000000000 != toUInt64(18446744073709551615), 9223372036854775808.000000000 < toUInt64(18446744073709551615), 9223372036854775808.000000000 <= toUInt64(18446744073709551615), 9223372036854775808.000000000 > toUInt64(18446744073709551615), 9223372036854775808.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9223372036854775808.000000000', 18446744073709551615 = -9223372036854775808.000000000, 18446744073709551615 != -9223372036854775808.000000000, 18446744073709551615 < -9223372036854775808.000000000, 18446744073709551615 <= -9223372036854775808.000000000, 18446744073709551615 > -9223372036854775808.000000000, 18446744073709551615 >= -9223372036854775808.000000000, -9223372036854775808.000000000 = 18446744073709551615, -9223372036854775808.000000000 != 18446744073709551615, -9223372036854775808.000000000 < 18446744073709551615, -9223372036854775808.000000000 <= 18446744073709551615, -9223372036854775808.000000000 > 18446744073709551615, -9223372036854775808.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9223372036854775808.000000000, toUInt64(18446744073709551615) != -9223372036854775808.000000000, toUInt64(18446744073709551615) < -9223372036854775808.000000000, toUInt64(18446744073709551615) <= -9223372036854775808.000000000, toUInt64(18446744073709551615) > -9223372036854775808.000000000, toUInt64(18446744073709551615) >= -9223372036854775808.000000000, -9223372036854775808.000000000 = toUInt64(18446744073709551615), -9223372036854775808.000000000 != toUInt64(18446744073709551615), -9223372036854775808.000000000 < toUInt64(18446744073709551615), -9223372036854775808.000000000 <= toUInt64(18446744073709551615), -9223372036854775808.000000000 > toUInt64(18446744073709551615), -9223372036854775808.000000000 >= 
toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9223372036854775808.000000000', 18446744073709551615 = 9223372036854775808.000000000, 18446744073709551615 != 9223372036854775808.000000000, 18446744073709551615 < 9223372036854775808.000000000, 18446744073709551615 <= 9223372036854775808.000000000, 18446744073709551615 > 9223372036854775808.000000000, 18446744073709551615 >= 9223372036854775808.000000000, 9223372036854775808.000000000 = 18446744073709551615, 9223372036854775808.000000000 != 18446744073709551615, 9223372036854775808.000000000 < 18446744073709551615, 9223372036854775808.000000000 <= 18446744073709551615, 9223372036854775808.000000000 > 18446744073709551615, 9223372036854775808.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9223372036854775808.000000000, toUInt64(18446744073709551615) != 9223372036854775808.000000000, toUInt64(18446744073709551615) < 9223372036854775808.000000000, toUInt64(18446744073709551615) <= 9223372036854775808.000000000, toUInt64(18446744073709551615) > 9223372036854775808.000000000, toUInt64(18446744073709551615) >= 9223372036854775808.000000000, 9223372036854775808.000000000 = toUInt64(18446744073709551615), 9223372036854775808.000000000 != toUInt64(18446744073709551615), 9223372036854775808.000000000 < toUInt64(18446744073709551615), 9223372036854775808.000000000 <= toUInt64(18446744073709551615), 9223372036854775808.000000000 > toUInt64(18446744073709551615), 9223372036854775808.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '2251799813685248.000000000', 18446744073709551615 = 2251799813685248.000000000, 18446744073709551615 != 2251799813685248.000000000, 18446744073709551615 < 2251799813685248.000000000, 18446744073709551615 <= 2251799813685248.000000000, 18446744073709551615 > 2251799813685248.000000000, 18446744073709551615 >= 2251799813685248.000000000, 2251799813685248.000000000 = 18446744073709551615, 2251799813685248.000000000 != 18446744073709551615, 2251799813685248.000000000 < 18446744073709551615, 2251799813685248.000000000 <= 18446744073709551615, 2251799813685248.000000000 > 18446744073709551615, 2251799813685248.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 2251799813685248.000000000, toUInt64(18446744073709551615) != 2251799813685248.000000000, toUInt64(18446744073709551615) < 2251799813685248.000000000, toUInt64(18446744073709551615) <= 2251799813685248.000000000, toUInt64(18446744073709551615) > 2251799813685248.000000000, toUInt64(18446744073709551615) >= 2251799813685248.000000000, 2251799813685248.000000000 = toUInt64(18446744073709551615), 2251799813685248.000000000 != toUInt64(18446744073709551615), 2251799813685248.000000000 < toUInt64(18446744073709551615), 2251799813685248.000000000 <= toUInt64(18446744073709551615), 2251799813685248.000000000 > toUInt64(18446744073709551615), 2251799813685248.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '4503599627370496.000000000', 18446744073709551615 = 4503599627370496.000000000, 18446744073709551615 != 4503599627370496.000000000, 18446744073709551615 < 4503599627370496.000000000, 18446744073709551615 <= 4503599627370496.000000000, 18446744073709551615 > 4503599627370496.000000000, 18446744073709551615 >= 4503599627370496.000000000, 4503599627370496.000000000 = 18446744073709551615, 4503599627370496.000000000 != 18446744073709551615, 4503599627370496.000000000 < 18446744073709551615, 4503599627370496.000000000 <= 18446744073709551615, 4503599627370496.000000000 > 
18446744073709551615, 4503599627370496.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 4503599627370496.000000000, toUInt64(18446744073709551615) != 4503599627370496.000000000, toUInt64(18446744073709551615) < 4503599627370496.000000000, toUInt64(18446744073709551615) <= 4503599627370496.000000000, toUInt64(18446744073709551615) > 4503599627370496.000000000, toUInt64(18446744073709551615) >= 4503599627370496.000000000, 4503599627370496.000000000 = toUInt64(18446744073709551615), 4503599627370496.000000000 != toUInt64(18446744073709551615), 4503599627370496.000000000 < toUInt64(18446744073709551615), 4503599627370496.000000000 <= toUInt64(18446744073709551615), 4503599627370496.000000000 > toUInt64(18446744073709551615), 4503599627370496.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740991.000000000', 18446744073709551615 = 9007199254740991.000000000, 18446744073709551615 != 9007199254740991.000000000, 18446744073709551615 < 9007199254740991.000000000, 18446744073709551615 <= 9007199254740991.000000000, 18446744073709551615 > 9007199254740991.000000000, 18446744073709551615 >= 9007199254740991.000000000, 9007199254740991.000000000 = 18446744073709551615, 9007199254740991.000000000 != 18446744073709551615, 9007199254740991.000000000 < 18446744073709551615, 9007199254740991.000000000 <= 18446744073709551615, 9007199254740991.000000000 > 18446744073709551615, 9007199254740991.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740991.000000000, toUInt64(18446744073709551615) != 9007199254740991.000000000, toUInt64(18446744073709551615) < 9007199254740991.000000000, toUInt64(18446744073709551615) <= 9007199254740991.000000000, toUInt64(18446744073709551615) > 9007199254740991.000000000, toUInt64(18446744073709551615) >= 9007199254740991.000000000, 9007199254740991.000000000 = toUInt64(18446744073709551615), 9007199254740991.000000000 != toUInt64(18446744073709551615), 9007199254740991.000000000 < toUInt64(18446744073709551615), 9007199254740991.000000000 <= toUInt64(18446744073709551615), 9007199254740991.000000000 > toUInt64(18446744073709551615), 9007199254740991.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740992.000000000', 18446744073709551615 = 9007199254740992.000000000, 18446744073709551615 != 9007199254740992.000000000, 18446744073709551615 < 9007199254740992.000000000, 18446744073709551615 <= 9007199254740992.000000000, 18446744073709551615 > 9007199254740992.000000000, 18446744073709551615 >= 9007199254740992.000000000, 9007199254740992.000000000 = 18446744073709551615, 9007199254740992.000000000 != 18446744073709551615, 9007199254740992.000000000 < 18446744073709551615, 9007199254740992.000000000 <= 18446744073709551615, 9007199254740992.000000000 > 18446744073709551615, 9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740992.000000000, toUInt64(18446744073709551615) != 9007199254740992.000000000, toUInt64(18446744073709551615) < 9007199254740992.000000000, toUInt64(18446744073709551615) <= 9007199254740992.000000000, toUInt64(18446744073709551615) > 9007199254740992.000000000, toUInt64(18446744073709551615) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(18446744073709551615), 9007199254740992.000000000 != toUInt64(18446744073709551615), 9007199254740992.000000000 < toUInt64(18446744073709551615), 9007199254740992.000000000 <= toUInt64(18446744073709551615), 
9007199254740992.000000000 > toUInt64(18446744073709551615), 9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740992.000000000', 18446744073709551615 = 9007199254740992.000000000, 18446744073709551615 != 9007199254740992.000000000, 18446744073709551615 < 9007199254740992.000000000, 18446744073709551615 <= 9007199254740992.000000000, 18446744073709551615 > 9007199254740992.000000000, 18446744073709551615 >= 9007199254740992.000000000, 9007199254740992.000000000 = 18446744073709551615, 9007199254740992.000000000 != 18446744073709551615, 9007199254740992.000000000 < 18446744073709551615, 9007199254740992.000000000 <= 18446744073709551615, 9007199254740992.000000000 > 18446744073709551615, 9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740992.000000000, toUInt64(18446744073709551615) != 9007199254740992.000000000, toUInt64(18446744073709551615) < 9007199254740992.000000000, toUInt64(18446744073709551615) <= 9007199254740992.000000000, toUInt64(18446744073709551615) > 9007199254740992.000000000, toUInt64(18446744073709551615) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(18446744073709551615), 9007199254740992.000000000 != toUInt64(18446744073709551615), 9007199254740992.000000000 < toUInt64(18446744073709551615), 9007199254740992.000000000 <= toUInt64(18446744073709551615), 9007199254740992.000000000 > toUInt64(18446744073709551615), 9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740994.000000000', 18446744073709551615 = 9007199254740994.000000000, 18446744073709551615 != 9007199254740994.000000000, 18446744073709551615 < 9007199254740994.000000000, 18446744073709551615 <= 9007199254740994.000000000, 18446744073709551615 > 9007199254740994.000000000, 18446744073709551615 >= 9007199254740994.000000000, 9007199254740994.000000000 = 18446744073709551615, 9007199254740994.000000000 != 18446744073709551615, 9007199254740994.000000000 < 18446744073709551615, 9007199254740994.000000000 <= 18446744073709551615, 9007199254740994.000000000 > 18446744073709551615, 9007199254740994.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740994.000000000, toUInt64(18446744073709551615) != 9007199254740994.000000000, toUInt64(18446744073709551615) < 9007199254740994.000000000, toUInt64(18446744073709551615) <= 9007199254740994.000000000, toUInt64(18446744073709551615) > 9007199254740994.000000000, toUInt64(18446744073709551615) >= 9007199254740994.000000000, 9007199254740994.000000000 = toUInt64(18446744073709551615), 9007199254740994.000000000 != toUInt64(18446744073709551615), 9007199254740994.000000000 < toUInt64(18446744073709551615), 9007199254740994.000000000 <= toUInt64(18446744073709551615), 9007199254740994.000000000 > toUInt64(18446744073709551615), 9007199254740994.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9007199254740991.000000000', 18446744073709551615 = -9007199254740991.000000000, 18446744073709551615 != -9007199254740991.000000000, 18446744073709551615 < -9007199254740991.000000000, 18446744073709551615 <= -9007199254740991.000000000, 18446744073709551615 > -9007199254740991.000000000, 18446744073709551615 >= -9007199254740991.000000000, -9007199254740991.000000000 = 18446744073709551615, -9007199254740991.000000000 != 18446744073709551615, -9007199254740991.000000000 < 18446744073709551615, -9007199254740991.000000000 <= 18446744073709551615, 
-9007199254740991.000000000 > 18446744073709551615, -9007199254740991.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9007199254740991.000000000, toUInt64(18446744073709551615) != -9007199254740991.000000000, toUInt64(18446744073709551615) < -9007199254740991.000000000, toUInt64(18446744073709551615) <= -9007199254740991.000000000, toUInt64(18446744073709551615) > -9007199254740991.000000000, toUInt64(18446744073709551615) >= -9007199254740991.000000000, -9007199254740991.000000000 = toUInt64(18446744073709551615), -9007199254740991.000000000 != toUInt64(18446744073709551615), -9007199254740991.000000000 < toUInt64(18446744073709551615), -9007199254740991.000000000 <= toUInt64(18446744073709551615), -9007199254740991.000000000 > toUInt64(18446744073709551615), -9007199254740991.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9007199254740992.000000000', 18446744073709551615 = -9007199254740992.000000000, 18446744073709551615 != -9007199254740992.000000000, 18446744073709551615 < -9007199254740992.000000000, 18446744073709551615 <= -9007199254740992.000000000, 18446744073709551615 > -9007199254740992.000000000, 18446744073709551615 >= -9007199254740992.000000000, -9007199254740992.000000000 = 18446744073709551615, -9007199254740992.000000000 != 18446744073709551615, -9007199254740992.000000000 < 18446744073709551615, -9007199254740992.000000000 <= 18446744073709551615, -9007199254740992.000000000 > 18446744073709551615, -9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9007199254740992.000000000, toUInt64(18446744073709551615) != -9007199254740992.000000000, toUInt64(18446744073709551615) < -9007199254740992.000000000, toUInt64(18446744073709551615) <= -9007199254740992.000000000, toUInt64(18446744073709551615) > -9007199254740992.000000000, toUInt64(18446744073709551615) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(18446744073709551615), -9007199254740992.000000000 != toUInt64(18446744073709551615), -9007199254740992.000000000 < toUInt64(18446744073709551615), -9007199254740992.000000000 <= toUInt64(18446744073709551615), -9007199254740992.000000000 > toUInt64(18446744073709551615), -9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9007199254740992.000000000', 18446744073709551615 = -9007199254740992.000000000, 18446744073709551615 != -9007199254740992.000000000, 18446744073709551615 < -9007199254740992.000000000, 18446744073709551615 <= -9007199254740992.000000000, 18446744073709551615 > -9007199254740992.000000000, 18446744073709551615 >= -9007199254740992.000000000, -9007199254740992.000000000 = 18446744073709551615, -9007199254740992.000000000 != 18446744073709551615, -9007199254740992.000000000 < 18446744073709551615, -9007199254740992.000000000 <= 18446744073709551615, -9007199254740992.000000000 > 18446744073709551615, -9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9007199254740992.000000000, toUInt64(18446744073709551615) != -9007199254740992.000000000, toUInt64(18446744073709551615) < -9007199254740992.000000000, toUInt64(18446744073709551615) <= -9007199254740992.000000000, toUInt64(18446744073709551615) > -9007199254740992.000000000, toUInt64(18446744073709551615) >= -9007199254740992.000000000, -9007199254740992.000000000 = toUInt64(18446744073709551615), -9007199254740992.000000000 != toUInt64(18446744073709551615), -9007199254740992.000000000 < 
toUInt64(18446744073709551615), -9007199254740992.000000000 <= toUInt64(18446744073709551615), -9007199254740992.000000000 > toUInt64(18446744073709551615), -9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9007199254740994.000000000', 18446744073709551615 = -9007199254740994.000000000, 18446744073709551615 != -9007199254740994.000000000, 18446744073709551615 < -9007199254740994.000000000, 18446744073709551615 <= -9007199254740994.000000000, 18446744073709551615 > -9007199254740994.000000000, 18446744073709551615 >= -9007199254740994.000000000, -9007199254740994.000000000 = 18446744073709551615, -9007199254740994.000000000 != 18446744073709551615, -9007199254740994.000000000 < 18446744073709551615, -9007199254740994.000000000 <= 18446744073709551615, -9007199254740994.000000000 > 18446744073709551615, -9007199254740994.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9007199254740994.000000000, toUInt64(18446744073709551615) != -9007199254740994.000000000, toUInt64(18446744073709551615) < -9007199254740994.000000000, toUInt64(18446744073709551615) <= -9007199254740994.000000000, toUInt64(18446744073709551615) > -9007199254740994.000000000, toUInt64(18446744073709551615) >= -9007199254740994.000000000, -9007199254740994.000000000 = toUInt64(18446744073709551615), -9007199254740994.000000000 != toUInt64(18446744073709551615), -9007199254740994.000000000 < toUInt64(18446744073709551615), -9007199254740994.000000000 <= toUInt64(18446744073709551615), -9007199254740994.000000000 > toUInt64(18446744073709551615), -9007199254740994.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '104.000000000', 18446744073709551615 = 104.000000000, 18446744073709551615 != 104.000000000, 18446744073709551615 < 104.000000000, 18446744073709551615 <= 104.000000000, 18446744073709551615 > 104.000000000, 18446744073709551615 >= 104.000000000, 104.000000000 = 18446744073709551615, 104.000000000 != 18446744073709551615, 104.000000000 < 18446744073709551615, 104.000000000 <= 18446744073709551615, 104.000000000 > 18446744073709551615, 104.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 104.000000000, toUInt64(18446744073709551615) != 104.000000000, toUInt64(18446744073709551615) < 104.000000000, toUInt64(18446744073709551615) <= 104.000000000, toUInt64(18446744073709551615) > 104.000000000, toUInt64(18446744073709551615) >= 104.000000000, 104.000000000 = toUInt64(18446744073709551615), 104.000000000 != toUInt64(18446744073709551615), 104.000000000 < toUInt64(18446744073709551615), 104.000000000 <= toUInt64(18446744073709551615), 104.000000000 > toUInt64(18446744073709551615), 104.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-4503599627370496.000000000', 18446744073709551615 = -4503599627370496.000000000, 18446744073709551615 != -4503599627370496.000000000, 18446744073709551615 < -4503599627370496.000000000, 18446744073709551615 <= -4503599627370496.000000000, 18446744073709551615 > -4503599627370496.000000000, 18446744073709551615 >= -4503599627370496.000000000, -4503599627370496.000000000 = 18446744073709551615, -4503599627370496.000000000 != 18446744073709551615, -4503599627370496.000000000 < 18446744073709551615, -4503599627370496.000000000 <= 18446744073709551615, -4503599627370496.000000000 > 18446744073709551615, -4503599627370496.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -4503599627370496.000000000, toUInt64(18446744073709551615) != 
-4503599627370496.000000000, toUInt64(18446744073709551615) < -4503599627370496.000000000, toUInt64(18446744073709551615) <= -4503599627370496.000000000, toUInt64(18446744073709551615) > -4503599627370496.000000000, toUInt64(18446744073709551615) >= -4503599627370496.000000000, -4503599627370496.000000000 = toUInt64(18446744073709551615), -4503599627370496.000000000 != toUInt64(18446744073709551615), -4503599627370496.000000000 < toUInt64(18446744073709551615), -4503599627370496.000000000 <= toUInt64(18446744073709551615), -4503599627370496.000000000 > toUInt64(18446744073709551615), -4503599627370496.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-0.500000000', 18446744073709551615 = -0.500000000, 18446744073709551615 != -0.500000000, 18446744073709551615 < -0.500000000, 18446744073709551615 <= -0.500000000, 18446744073709551615 > -0.500000000, 18446744073709551615 >= -0.500000000, -0.500000000 = 18446744073709551615, -0.500000000 != 18446744073709551615, -0.500000000 < 18446744073709551615, -0.500000000 <= 18446744073709551615, -0.500000000 > 18446744073709551615, -0.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -0.500000000, toUInt64(18446744073709551615) != -0.500000000, toUInt64(18446744073709551615) < -0.500000000, toUInt64(18446744073709551615) <= -0.500000000, toUInt64(18446744073709551615) > -0.500000000, toUInt64(18446744073709551615) >= -0.500000000, -0.500000000 = toUInt64(18446744073709551615), -0.500000000 != toUInt64(18446744073709551615), -0.500000000 < toUInt64(18446744073709551615), -0.500000000 <= toUInt64(18446744073709551615), -0.500000000 > toUInt64(18446744073709551615), -0.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '0.500000000', 18446744073709551615 = 0.500000000, 18446744073709551615 != 0.500000000, 18446744073709551615 < 0.500000000, 18446744073709551615 <= 0.500000000, 18446744073709551615 > 0.500000000, 18446744073709551615 >= 0.500000000, 0.500000000 = 18446744073709551615, 0.500000000 != 18446744073709551615, 0.500000000 < 18446744073709551615, 0.500000000 <= 18446744073709551615, 0.500000000 > 18446744073709551615, 0.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 0.500000000, toUInt64(18446744073709551615) != 0.500000000, toUInt64(18446744073709551615) < 0.500000000, toUInt64(18446744073709551615) <= 0.500000000, toUInt64(18446744073709551615) > 0.500000000, toUInt64(18446744073709551615) >= 0.500000000, 0.500000000 = toUInt64(18446744073709551615), 0.500000000 != toUInt64(18446744073709551615), 0.500000000 < toUInt64(18446744073709551615), 0.500000000 <= toUInt64(18446744073709551615), 0.500000000 > toUInt64(18446744073709551615), 0.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-1.500000000', 18446744073709551615 = -1.500000000, 18446744073709551615 != -1.500000000, 18446744073709551615 < -1.500000000, 18446744073709551615 <= -1.500000000, 18446744073709551615 > -1.500000000, 18446744073709551615 >= -1.500000000, -1.500000000 = 18446744073709551615, -1.500000000 != 18446744073709551615, -1.500000000 < 18446744073709551615, -1.500000000 <= 18446744073709551615, -1.500000000 > 18446744073709551615, -1.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -1.500000000, toUInt64(18446744073709551615) != -1.500000000, toUInt64(18446744073709551615) < -1.500000000, toUInt64(18446744073709551615) <= -1.500000000, toUInt64(18446744073709551615) > -1.500000000, toUInt64(18446744073709551615) >= -1.500000000, 
-1.500000000 = toUInt64(18446744073709551615), -1.500000000 != toUInt64(18446744073709551615), -1.500000000 < toUInt64(18446744073709551615), -1.500000000 <= toUInt64(18446744073709551615), -1.500000000 > toUInt64(18446744073709551615), -1.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '1.500000000', 18446744073709551615 = 1.500000000, 18446744073709551615 != 1.500000000, 18446744073709551615 < 1.500000000, 18446744073709551615 <= 1.500000000, 18446744073709551615 > 1.500000000, 18446744073709551615 >= 1.500000000, 1.500000000 = 18446744073709551615, 1.500000000 != 18446744073709551615, 1.500000000 < 18446744073709551615, 1.500000000 <= 18446744073709551615, 1.500000000 > 18446744073709551615, 1.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 1.500000000, toUInt64(18446744073709551615) != 1.500000000, toUInt64(18446744073709551615) < 1.500000000, toUInt64(18446744073709551615) <= 1.500000000, toUInt64(18446744073709551615) > 1.500000000, toUInt64(18446744073709551615) >= 1.500000000, 1.500000000 = toUInt64(18446744073709551615), 1.500000000 != toUInt64(18446744073709551615), 1.500000000 < toUInt64(18446744073709551615), 1.500000000 <= toUInt64(18446744073709551615), 1.500000000 > toUInt64(18446744073709551615), 1.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9007199254740992.000000000', 18446744073709551615 = 9007199254740992.000000000, 18446744073709551615 != 9007199254740992.000000000, 18446744073709551615 < 9007199254740992.000000000, 18446744073709551615 <= 9007199254740992.000000000, 18446744073709551615 > 9007199254740992.000000000, 18446744073709551615 >= 9007199254740992.000000000, 9007199254740992.000000000 = 18446744073709551615, 9007199254740992.000000000 != 18446744073709551615, 9007199254740992.000000000 < 18446744073709551615, 9007199254740992.000000000 <= 18446744073709551615, 9007199254740992.000000000 > 18446744073709551615, 9007199254740992.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9007199254740992.000000000, toUInt64(18446744073709551615) != 9007199254740992.000000000, toUInt64(18446744073709551615) < 9007199254740992.000000000, toUInt64(18446744073709551615) <= 9007199254740992.000000000, toUInt64(18446744073709551615) > 9007199254740992.000000000, toUInt64(18446744073709551615) >= 9007199254740992.000000000, 9007199254740992.000000000 = toUInt64(18446744073709551615), 9007199254740992.000000000 != toUInt64(18446744073709551615), 9007199254740992.000000000 < toUInt64(18446744073709551615), 9007199254740992.000000000 <= toUInt64(18446744073709551615), 9007199254740992.000000000 > toUInt64(18446744073709551615), 9007199254740992.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '2251799813685247.500000000', 18446744073709551615 = 2251799813685247.500000000, 18446744073709551615 != 2251799813685247.500000000, 18446744073709551615 < 2251799813685247.500000000, 18446744073709551615 <= 2251799813685247.500000000, 18446744073709551615 > 2251799813685247.500000000, 18446744073709551615 >= 2251799813685247.500000000, 2251799813685247.500000000 = 18446744073709551615, 2251799813685247.500000000 != 18446744073709551615, 2251799813685247.500000000 < 18446744073709551615, 2251799813685247.500000000 <= 18446744073709551615, 2251799813685247.500000000 > 18446744073709551615, 2251799813685247.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 2251799813685247.500000000, toUInt64(18446744073709551615) != 2251799813685247.500000000, 
toUInt64(18446744073709551615) < 2251799813685247.500000000, toUInt64(18446744073709551615) <= 2251799813685247.500000000, toUInt64(18446744073709551615) > 2251799813685247.500000000, toUInt64(18446744073709551615) >= 2251799813685247.500000000, 2251799813685247.500000000 = toUInt64(18446744073709551615), 2251799813685247.500000000 != toUInt64(18446744073709551615), 2251799813685247.500000000 < toUInt64(18446744073709551615), 2251799813685247.500000000 <= toUInt64(18446744073709551615), 2251799813685247.500000000 > toUInt64(18446744073709551615), 2251799813685247.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '2251799813685248.500000000', 18446744073709551615 = 2251799813685248.500000000, 18446744073709551615 != 2251799813685248.500000000, 18446744073709551615 < 2251799813685248.500000000, 18446744073709551615 <= 2251799813685248.500000000, 18446744073709551615 > 2251799813685248.500000000, 18446744073709551615 >= 2251799813685248.500000000, 2251799813685248.500000000 = 18446744073709551615, 2251799813685248.500000000 != 18446744073709551615, 2251799813685248.500000000 < 18446744073709551615, 2251799813685248.500000000 <= 18446744073709551615, 2251799813685248.500000000 > 18446744073709551615, 2251799813685248.500000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 2251799813685248.500000000, toUInt64(18446744073709551615) != 2251799813685248.500000000, toUInt64(18446744073709551615) < 2251799813685248.500000000, toUInt64(18446744073709551615) <= 2251799813685248.500000000, toUInt64(18446744073709551615) > 2251799813685248.500000000, toUInt64(18446744073709551615) >= 2251799813685248.500000000, 2251799813685248.500000000 = toUInt64(18446744073709551615), 2251799813685248.500000000 != toUInt64(18446744073709551615), 2251799813685248.500000000 < toUInt64(18446744073709551615), 2251799813685248.500000000 <= toUInt64(18446744073709551615), 2251799813685248.500000000 > toUInt64(18446744073709551615), 2251799813685248.500000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '1152921504606846976.000000000', 18446744073709551615 = 1152921504606846976.000000000, 18446744073709551615 != 1152921504606846976.000000000, 18446744073709551615 < 1152921504606846976.000000000, 18446744073709551615 <= 1152921504606846976.000000000, 18446744073709551615 > 1152921504606846976.000000000, 18446744073709551615 >= 1152921504606846976.000000000, 1152921504606846976.000000000 = 18446744073709551615, 1152921504606846976.000000000 != 18446744073709551615, 1152921504606846976.000000000 < 18446744073709551615, 1152921504606846976.000000000 <= 18446744073709551615, 1152921504606846976.000000000 > 18446744073709551615, 1152921504606846976.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 1152921504606846976.000000000, toUInt64(18446744073709551615) != 1152921504606846976.000000000, toUInt64(18446744073709551615) < 1152921504606846976.000000000, toUInt64(18446744073709551615) <= 1152921504606846976.000000000, toUInt64(18446744073709551615) > 1152921504606846976.000000000, toUInt64(18446744073709551615) >= 1152921504606846976.000000000, 1152921504606846976.000000000 = toUInt64(18446744073709551615), 1152921504606846976.000000000 != toUInt64(18446744073709551615), 1152921504606846976.000000000 < toUInt64(18446744073709551615), 1152921504606846976.000000000 <= toUInt64(18446744073709551615), 1152921504606846976.000000000 > toUInt64(18446744073709551615), 1152921504606846976.000000000 >= toUInt64(18446744073709551615) ; +SELECT 
'18446744073709551615', '-1152921504606846976.000000000', 18446744073709551615 = -1152921504606846976.000000000, 18446744073709551615 != -1152921504606846976.000000000, 18446744073709551615 < -1152921504606846976.000000000, 18446744073709551615 <= -1152921504606846976.000000000, 18446744073709551615 > -1152921504606846976.000000000, 18446744073709551615 >= -1152921504606846976.000000000, -1152921504606846976.000000000 = 18446744073709551615, -1152921504606846976.000000000 != 18446744073709551615, -1152921504606846976.000000000 < 18446744073709551615, -1152921504606846976.000000000 <= 18446744073709551615, -1152921504606846976.000000000 > 18446744073709551615, -1152921504606846976.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -1152921504606846976.000000000, toUInt64(18446744073709551615) != -1152921504606846976.000000000, toUInt64(18446744073709551615) < -1152921504606846976.000000000, toUInt64(18446744073709551615) <= -1152921504606846976.000000000, toUInt64(18446744073709551615) > -1152921504606846976.000000000, toUInt64(18446744073709551615) >= -1152921504606846976.000000000, -1152921504606846976.000000000 = toUInt64(18446744073709551615), -1152921504606846976.000000000 != toUInt64(18446744073709551615), -1152921504606846976.000000000 < toUInt64(18446744073709551615), -1152921504606846976.000000000 <= toUInt64(18446744073709551615), -1152921504606846976.000000000 > toUInt64(18446744073709551615), -1152921504606846976.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '-9223372036854786048.000000000', 18446744073709551615 = -9223372036854786048.000000000, 18446744073709551615 != -9223372036854786048.000000000, 18446744073709551615 < -9223372036854786048.000000000, 18446744073709551615 <= -9223372036854786048.000000000, 18446744073709551615 > -9223372036854786048.000000000, 18446744073709551615 >= -9223372036854786048.000000000, -9223372036854786048.000000000 = 18446744073709551615, -9223372036854786048.000000000 != 18446744073709551615, -9223372036854786048.000000000 < 18446744073709551615, -9223372036854786048.000000000 <= 18446744073709551615, -9223372036854786048.000000000 > 18446744073709551615, -9223372036854786048.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = -9223372036854786048.000000000, toUInt64(18446744073709551615) != -9223372036854786048.000000000, toUInt64(18446744073709551615) < -9223372036854786048.000000000, toUInt64(18446744073709551615) <= -9223372036854786048.000000000, toUInt64(18446744073709551615) > -9223372036854786048.000000000, toUInt64(18446744073709551615) >= -9223372036854786048.000000000, -9223372036854786048.000000000 = toUInt64(18446744073709551615), -9223372036854786048.000000000 != toUInt64(18446744073709551615), -9223372036854786048.000000000 < toUInt64(18446744073709551615), -9223372036854786048.000000000 <= toUInt64(18446744073709551615), -9223372036854786048.000000000 > toUInt64(18446744073709551615), -9223372036854786048.000000000 >= toUInt64(18446744073709551615) ; +SELECT '18446744073709551615', '9223372036854786048.000000000', 18446744073709551615 = 9223372036854786048.000000000, 18446744073709551615 != 9223372036854786048.000000000, 18446744073709551615 < 9223372036854786048.000000000, 18446744073709551615 <= 9223372036854786048.000000000, 18446744073709551615 > 9223372036854786048.000000000, 18446744073709551615 >= 9223372036854786048.000000000, 9223372036854786048.000000000 = 18446744073709551615, 9223372036854786048.000000000 != 18446744073709551615, 
9223372036854786048.000000000 < 18446744073709551615, 9223372036854786048.000000000 <= 18446744073709551615, 9223372036854786048.000000000 > 18446744073709551615, 9223372036854786048.000000000 >= 18446744073709551615 , toUInt64(18446744073709551615) = 9223372036854786048.000000000, toUInt64(18446744073709551615) != 9223372036854786048.000000000, toUInt64(18446744073709551615) < 9223372036854786048.000000000, toUInt64(18446744073709551615) <= 9223372036854786048.000000000, toUInt64(18446744073709551615) > 9223372036854786048.000000000, toUInt64(18446744073709551615) >= 9223372036854786048.000000000, 9223372036854786048.000000000 = toUInt64(18446744073709551615), 9223372036854786048.000000000 != toUInt64(18446744073709551615), 9223372036854786048.000000000 < toUInt64(18446744073709551615), 9223372036854786048.000000000 <= toUInt64(18446744073709551615), 9223372036854786048.000000000 > toUInt64(18446744073709551615), 9223372036854786048.000000000 >= toUInt64(18446744073709551615) ; +SELECT '9223372036854775808', '0.000000000', 9223372036854775808 = 0.000000000, 9223372036854775808 != 0.000000000, 9223372036854775808 < 0.000000000, 9223372036854775808 <= 0.000000000, 9223372036854775808 > 0.000000000, 9223372036854775808 >= 0.000000000, 0.000000000 = 9223372036854775808, 0.000000000 != 9223372036854775808, 0.000000000 < 9223372036854775808, 0.000000000 <= 9223372036854775808, 0.000000000 > 9223372036854775808, 0.000000000 >= 9223372036854775808 , toUInt64(9223372036854775808) = 0.000000000, toUInt64(9223372036854775808) != 0.000000000, toUInt64(9223372036854775808) < 0.000000000, toUInt64(9223372036854775808) <= 0.000000000, toUInt64(9223372036854775808) > 0.000000000, toUInt64(9223372036854775808) >= 0.000000000, 0.000000000 = toUInt64(9223372036854775808), 0.000000000 != toUInt64(9223372036854775808), 0.000000000 < toUInt64(9223372036854775808), 0.000000000 <= toUInt64(9223372036854775808), 0.000000000 > toUInt64(9223372036854775808), 0.000000000 >= toUInt64(9223372036854775808) ; +SELECT '9223372036854775808', '-1.000000000', 9223372036854775808 = -1.000000000, 9223372036854775808 != -1.000000000, 9223372036854775808 < -1.000000000, 9223372036854775808 <= -1.000000000, 9223372036854775808 > -1.000000000, 9223372036854775808 >= -1.000000000, -1.000000000 = 9223372036854775808, -1.000000000 != 9223372036854775808, -1.000000000 < 9223372036854775808, -1.000000000 <= 9223372036854775808, -1.000000000 > 9223372036854775808, -1.000000000 >= 9223372036854775808 , toUInt64(9223372036854775808) = -1.000000000, toUInt64(9223372036854775808) != -1.000000000, toUInt64(9223372036854775808) < -1.000000000, toUInt64(9223372036854775808) <= -1.000000000, toUInt64(9223372036854775808) > -1.000000000, toUInt64(9223372036854775808) >= -1.000000000, -1.000000000 = toUInt64(9223372036854775808), -1.000000000 != toUInt64(9223372036854775808), -1.000000000 < toUInt64(9223372036854775808), -1.000000000 <= toUInt64(9223372036854775808), -1.000000000 > toUInt64(9223372036854775808), -1.000000000 >= toUInt64(9223372036854775808) ; +SELECT '9223372036854775808', '1.000000000', 9223372036854775808 = 1.000000000, 9223372036854775808 != 1.000000000, 9223372036854775808 < 1.000000000, 9223372036854775808 <= 1.000000000, 9223372036854775808 > 1.000000000, 9223372036854775808 >= 1.000000000, 1.000000000 = 9223372036854775808, 1.000000000 != 9223372036854775808, 1.000000000 < 9223372036854775808, 1.000000000 <= 9223372036854775808, 1.000000000 > 9223372036854775808, 1.000000000 >= 9223372036854775808 , 
toUInt64(9223372036854775808) = 1.000000000, toUInt64(9223372036854775808) != 1.000000000, toUInt64(9223372036854775808) < 1.000000000, toUInt64(9223372036854775808) <= 1.000000000, toUInt64(9223372036854775808) > 1.000000000, toUInt64(9223372036854775808) >= 1.000000000, 1.000000000 = toUInt64(9223372036854775808), 1.000000000 != toUInt64(9223372036854775808), 1.000000000 < toUInt64(9223372036854775808), 1.000000000 <= toUInt64(9223372036854775808), 1.000000000 > toUInt64(9223372036854775808), 1.000000000 >= toUInt64(9223372036854775808) ; +SELECT '9223372036854775808', '18446744073709551616.000000000', 9223372036854775808 = 18446744073709551616.000000000, 9223372036854775808 != 18446744073709551616.000000000, 9223372036854775808 < 18446744073709551616.000000000, 9223372036854775808 <= 18446744073709551616.000000000, 9223372036854775808 > 18446744073709551616.000000000, 9223372036854775808 >= 18446744073709551616.000000000, 18446744073709551616.000000000 = 9223372036854775808, 18446744073709551616.000000000 != 9223372036854775808, 18446744073709551616.000000000 < 9223372036854775808, 18446744073709551616.000000000 <= 9223372036854775808, 18446744073709551616.000000000 > 9223372036854775808, 18446744073709551616.000000000 >= 9223372036854775808 , toUInt64(9223372036854775808) = 18446744073709551616.000000000, toUInt64(9223372036854775808) != 18446744073709551616.000000000, toUInt64(9223372036854775808) < 18446744073709551616.000000000, toUInt64(9223372036854775808) <= 18446744073709551616.000000000, toUInt64(9223372036854775808) > 18446744073709551616.000000000, toUInt64(9223372036854775808) >= 18446744073709551616.000000000, 18446744073709551616.000000000 = toUInt64(9223372036854775808), 18446744073709551616.000000000 != toUInt64(9223372036854775808), 18446744073709551616.000000000 < toUInt64(9223372036854775808), 18446744073709551616.000000000 <= toUInt64(9223372036854775808), 18446744073709551616.000000000 > toUInt64(9223372036854775808), 18446744073709551616.000000000 >= toUInt64(9223372036854775808) ; diff --git a/tests/queries/0_stateless/00506_union_distributed.reference b/tests/queries/0_stateless/00506_union_distributed.reference index 4a2dcd69dc2..3324c3d5675 100644 --- a/tests/queries/0_stateless/00506_union_distributed.reference +++ b/tests/queries/0_stateless/00506_union_distributed.reference @@ -1,16 +1,16 @@ 3 8 +13 28 23 48 33 68 -13 28 3 8 +13 28 23 48 33 68 -13 28 3 8 +13 28 23 48 33 68 -13 28 3 8 +13 28 23 48 33 68 -13 28 diff --git a/tests/queries/0_stateless/00506_union_distributed.sql b/tests/queries/0_stateless/00506_union_distributed.sql index 3f631b8da56..4c5fd9a1743 100644 --- a/tests/queries/0_stateless/00506_union_distributed.sql +++ b/tests/queries/0_stateless/00506_union_distributed.sql @@ -15,10 +15,10 @@ INSERT INTO union1 VALUES (11,12,13,14,15); INSERT INTO union2 VALUES (21,22,23,24,25); INSERT INTO union3 VALUES (31,32,33,34,35); -select b, sum(c) from ( select a, b, sum(c) as c from union2 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union2 where b>1 group by a, b ) as a group by b; -select b, sum(c) from ( select a, b, sum(c) as c from union1 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union2 where b>1 group by a, b ) as a group by b; -select b, sum(c) from ( select a, b, sum(c) as c from union1 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union1 where b>1 group by a, b ) as a group by b; -select b, sum(c) from ( select a, b, sum(c) as c from union2 where a>1 group by a,b UNION ALL select a, b, 
sum(c) as c from union3 where b>1 group by a, b ) as a group by b; +select b, sum(c) from ( select a, b, sum(c) as c from union2 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union2 where b>1 group by a, b order by a, b) as a group by b order by b; +select b, sum(c) from ( select a, b, sum(c) as c from union1 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union2 where b>1 group by a, b order by a, b) as a group by b order by b; +select b, sum(c) from ( select a, b, sum(c) as c from union1 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union1 where b>1 group by a, b order by a, b) as a group by b order by b; +select b, sum(c) from ( select a, b, sum(c) as c from union2 where a>1 group by a,b UNION ALL select a, b, sum(c) as c from union3 where b>1 group by a, b order by a, b) as a group by b order by b; DROP TABLE union1; DROP TABLE union2; diff --git a/tests/queries/0_stateless/00536_int_exp.sql b/tests/queries/0_stateless/00536_int_exp.sql index c78a326a3b3..80b88e8f4f8 100644 --- a/tests/queries/0_stateless/00536_int_exp.sql +++ b/tests/queries/0_stateless/00536_int_exp.sql @@ -1 +1 @@ -SELECT exp2(number) AS e2d, intExp2(number) AS e2i, e2d = e2i AS e2eq, exp10(number) AS e10d, intExp10(number) AS e10i, e10d = e10i AS e10eq FROM system.numbers LIMIT 64; +SELECT exp2(number) AS e2d, intExp2(number) AS e2i, toUInt64(e2d) = e2i AS e2eq, exp10(number) AS e10d, intExp10(number) AS e10i, toString(e10d) = toString(e10i) AS e10eq FROM system.numbers LIMIT 64; diff --git a/tests/queries/0_stateless/00555_hasAll_hasAny.reference b/tests/queries/0_stateless/00555_hasAll_hasAny.reference index b33700bfa02..5608f7b970e 100644 --- a/tests/queries/0_stateless/00555_hasAll_hasAny.reference +++ b/tests/queries/0_stateless/00555_hasAll_hasAny.reference @@ -34,10 +34,6 @@ 1 0 - -0 -0 -0 -0 - 0 1 diff --git a/tests/queries/0_stateless/00555_hasAll_hasAny.sql b/tests/queries/0_stateless/00555_hasAll_hasAny.sql index 9df356dce2e..c8a6c3cecbd 100644 --- a/tests/queries/0_stateless/00555_hasAll_hasAny.sql +++ b/tests/queries/0_stateless/00555_hasAll_hasAny.sql @@ -39,10 +39,10 @@ select hasAny(['a', 'b'], ['a', 'c']); select hasAll(['a', 'b'], ['a', 'c']); select '-'; -select hasAny([1], ['a']); -select hasAll([1], ['a']); -select hasAll([[1, 2], [3, 4]], ['a', 'c']); -select hasAny([[1, 2], [3, 4]], ['a', 'c']); +select hasAny([1], ['a']); -- { serverError 386 } +select hasAll([1], ['a']); -- { serverError 386 } +select hasAll([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } +select hasAny([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } select '-'; select hasAll([[1, 2], [3, 4]], [[1, 2], [3, 5]]); diff --git a/tests/queries/0_stateless/00555_hasSubstr.reference b/tests/queries/0_stateless/00555_hasSubstr.reference index 1051fa28d6c..de97d19c932 100644 --- a/tests/queries/0_stateless/00555_hasSubstr.reference +++ b/tests/queries/0_stateless/00555_hasSubstr.reference @@ -20,8 +20,6 @@ 0 1 - -0 -0 1 1 0 diff --git a/tests/queries/0_stateless/00555_hasSubstr.sql b/tests/queries/0_stateless/00555_hasSubstr.sql index 04c70e4a43b..5f90a69c546 100644 --- a/tests/queries/0_stateless/00555_hasSubstr.sql +++ b/tests/queries/0_stateless/00555_hasSubstr.sql @@ -25,8 +25,8 @@ select hasSubstr(['a', 'b'], ['a', 'c']); select hasSubstr(['a', 'c', 'b'], ['a', 'c']); select '-'; -select hasSubstr([1], ['a']); -select hasSubstr([[1, 2], [3, 4]], ['a', 'c']); +select hasSubstr([1], ['a']); -- { serverError 386 } +select hasSubstr([[1, 2], [3, 4]], ['a', 'c']); -- { 
serverError 386 }
 select hasSubstr([[1, 2], [3, 4], [5, 8]], [[3, 4]]);
 select hasSubstr([[1, 2], [3, 4], [5, 8]], [[3, 4], [5, 8]]);
 select hasSubstr([[1, 2], [3, 4], [5, 8]], [[1, 2], [5, 8]]);
diff --git a/tests/queries/0_stateless/00632_aggregation_window_funnel.sql b/tests/queries/0_stateless/00632_aggregation_window_funnel.sql
index d9991be5583..aa0dc804238 100644
--- a/tests/queries/0_stateless/00632_aggregation_window_funnel.sql
+++ b/tests/queries/0_stateless/00632_aggregation_window_funnel.sql
@@ -87,3 +87,5 @@ select 5 = windowFunnel(10000)(timestamp, event = 1000, event = 1001, event = 10
 select 2 = windowFunnel(10000, 'strict_increase')(timestamp, event = 1000, event = 1001, event = 1002, event = 1003, event = 1004) from funnel_test_strict_increase;
 select 3 = windowFunnel(10000)(timestamp, event = 1004, event = 1004, event = 1004) from funnel_test_strict_increase;
 select 1 = windowFunnel(10000, 'strict_increase')(timestamp, event = 1004, event = 1004, event = 1004) from funnel_test_strict_increase;
+
+drop table funnel_test_strict_increase;
diff --git a/tests/queries/0_stateless/00700_decimal_complex_types.reference b/tests/queries/0_stateless/00700_decimal_complex_types.reference
index e81dd94513f..9c7c6fefefd 100644
--- a/tests/queries/0_stateless/00700_decimal_complex_types.reference
+++ b/tests/queries/0_stateless/00700_decimal_complex_types.reference
@@ -39,9 +39,33 @@ Tuple(Decimal(9, 1), Decimal(18, 1), Decimal(38, 1)) Decimal(9, 1) Decimal(18, 1
 1 0
 1 0
 1 0
+1
+1
+1
+1
+1
+1
+1
+1
+1
+1
+1
+1
 1 0
 2 0
 3 0
+1
+1
+1
+1
+1
+1
+1
+1
+1
+1
+1
+1
 [0.100,0.200,0.300,0.400,0.500,0.600] Array(Decimal(18, 3))
 [0.100,0.200,0.300,0.700,0.800,0.900] Array(Decimal(38, 3))
 [0.400,0.500,0.600,0.700,0.800,0.900] Array(Decimal(38, 3))
diff --git a/tests/queries/0_stateless/00700_decimal_complex_types.sql b/tests/queries/0_stateless/00700_decimal_complex_types.sql
index 2d506b124a2..f4b29e77be9 100644
--- a/tests/queries/0_stateless/00700_decimal_complex_types.sql
+++ b/tests/queries/0_stateless/00700_decimal_complex_types.sql
@@ -58,35 +58,35 @@ SELECT has(a, toDecimal32(0.1, 3)), has(a, toDecimal32(1.0, 3)) FROM decimal;
 SELECT has(b, toDecimal64(0.4, 3)), has(b, toDecimal64(1.0, 3)) FROM decimal;
 SELECT has(c, toDecimal128(0.7, 3)), has(c, toDecimal128(1.0, 3)) FROM decimal;
 
-SELECT has(a, toDecimal32(0.1, 2)) FROM decimal; -- { serverError 43 }
-SELECT has(a, toDecimal32(0.1, 4)) FROM decimal; -- { serverError 43 }
-SELECT has(a, toDecimal64(0.1, 3)) FROM decimal; -- { serverError 43 }
-SELECT has(a, toDecimal128(0.1, 3)) FROM decimal; -- { serverError 43 }
-SELECT has(b, toDecimal32(0.4, 3)) FROM decimal; -- { serverError 43 }
-SELECT has(b, toDecimal64(0.4, 2)) FROM decimal; -- { serverError 43 }
-SELECT has(b, toDecimal64(0.4, 4)) FROM decimal; -- { serverError 43 }
-SELECT has(b, toDecimal128(0.4, 3)) FROM decimal; -- { serverError 43 }
-SELECT has(c, toDecimal32(0.7, 3)) FROM decimal; -- { serverError 43 }
-SELECT has(c, toDecimal64(0.7, 3)) FROM decimal; -- { serverError 43 }
-SELECT has(c, toDecimal128(0.7, 2)) FROM decimal; -- { serverError 43 }
-SELECT has(c, toDecimal128(0.7, 4)) FROM decimal; -- { serverError 43 }
+SELECT has(a, toDecimal32(0.1, 2)) FROM decimal;
+SELECT has(a, toDecimal32(0.1, 4)) FROM decimal;
+SELECT has(a, toDecimal64(0.1, 3)) FROM decimal;
+SELECT has(a, toDecimal128(0.1, 3)) FROM decimal;
+SELECT has(b, toDecimal32(0.4, 3)) FROM decimal;
+SELECT has(b, toDecimal64(0.4, 2)) FROM decimal;
+SELECT has(b, toDecimal64(0.4, 4)) FROM decimal;
+SELECT has(b,
toDecimal128(0.4, 3)) FROM decimal; +SELECT has(c, toDecimal32(0.7, 3)) FROM decimal; +SELECT has(c, toDecimal64(0.7, 3)) FROM decimal; +SELECT has(c, toDecimal128(0.7, 2)) FROM decimal; +SELECT has(c, toDecimal128(0.7, 4)) FROM decimal; SELECT indexOf(a, toDecimal32(0.1, 3)), indexOf(a, toDecimal32(1.0, 3)) FROM decimal; SELECT indexOf(b, toDecimal64(0.5, 3)), indexOf(b, toDecimal64(1.0, 3)) FROM decimal; SELECT indexOf(c, toDecimal128(0.9, 3)), indexOf(c, toDecimal128(1.0, 3)) FROM decimal; -SELECT indexOf(a, toDecimal32(0.1, 2)) FROM decimal; -- { serverError 43 } -SELECT indexOf(a, toDecimal32(0.1, 4)) FROM decimal; -- { serverError 43 } -SELECT indexOf(a, toDecimal64(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(a, toDecimal128(0.1, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal32(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal64(0.4, 2)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal64(0.4, 4)) FROM decimal; -- { serverError 43 } -SELECT indexOf(b, toDecimal128(0.4, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal32(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal64(0.7, 3)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal128(0.7, 2)) FROM decimal; -- { serverError 43 } -SELECT indexOf(c, toDecimal128(0.7, 4)) FROM decimal; -- { serverError 43 } +SELECT indexOf(a, toDecimal32(0.1, 2)) FROM decimal; +SELECT indexOf(a, toDecimal32(0.1, 4)) FROM decimal; +SELECT indexOf(a, toDecimal64(0.1, 3)) FROM decimal; +SELECT indexOf(a, toDecimal128(0.1, 3)) FROM decimal; +SELECT indexOf(b, toDecimal32(0.4, 3)) FROM decimal; +SELECT indexOf(b, toDecimal64(0.4, 2)) FROM decimal; +SELECT indexOf(b, toDecimal64(0.4, 4)) FROM decimal; +SELECT indexOf(b, toDecimal128(0.4, 3)) FROM decimal; +SELECT indexOf(c, toDecimal32(0.7, 3)) FROM decimal; +SELECT indexOf(c, toDecimal64(0.7, 3)) FROM decimal; +SELECT indexOf(c, toDecimal128(0.7, 2)) FROM decimal; +SELECT indexOf(c, toDecimal128(0.7, 4)) FROM decimal; SELECT arrayConcat(a, b) AS x, toTypeName(x) FROM decimal; SELECT arrayConcat(a, c) AS x, toTypeName(x) FROM decimal; diff --git a/tests/queries/0_stateless/00717_merge_and_distributed.sql b/tests/queries/0_stateless/00717_merge_and_distributed.sql index f0d34b5165f..35dad18937a 100644 --- a/tests/queries/0_stateless/00717_merge_and_distributed.sql +++ b/tests/queries/0_stateless/00717_merge_and_distributed.sql @@ -18,9 +18,9 @@ SELECT * FROM merge(currentDatabase(), 'test_local_1'); SELECT *, _table FROM merge(currentDatabase(), 'test_local_1') ORDER BY _table; SELECT sum(value), _table FROM merge(currentDatabase(), 'test_local_1') GROUP BY _table ORDER BY _table; SELECT * FROM merge(currentDatabase(), 'test_local_1') WHERE _table = 'test_local_1'; -SELECT * FROM merge(currentDatabase(), 'test_local_1') PREWHERE _table = 'test_local_1'; -- { serverError 16 } +SELECT * FROM merge(currentDatabase(), 'test_local_1') PREWHERE _table = 'test_local_1'; -- { serverError 10 } SELECT * FROM merge(currentDatabase(), 'test_local_1') WHERE _table in ('test_local_1', 'test_local_2'); -SELECT * FROM merge(currentDatabase(), 'test_local_1') PREWHERE _table in ('test_local_1', 'test_local_2'); -- { serverError 16 } +SELECT * FROM merge(currentDatabase(), 'test_local_1') PREWHERE _table in ('test_local_1', 'test_local_2'); -- { serverError 10 } SELECT '--------------Single Distributed------------'; SELECT * FROM merge(currentDatabase(), 'test_distributed_1'); @@ -36,9 +36,9 @@ 
SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') ORDER BY _ta SELECT *, _table FROM merge(currentDatabase(), 'test_local_1|test_local_2') ORDER BY _table; SELECT sum(value), _table FROM merge(currentDatabase(), 'test_local_1|test_local_2') GROUP BY _table ORDER BY _table; SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') WHERE _table = 'test_local_1'; -SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') PREWHERE _table = 'test_local_1'; -- { serverError 16 } +SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') PREWHERE _table = 'test_local_1'; -- { serverError 10 } SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') WHERE _table in ('test_local_1', 'test_local_2') ORDER BY value; -SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') PREWHERE _table in ('test_local_1', 'test_local_2') ORDER BY value; -- { serverError 16 } +SELECT * FROM merge(currentDatabase(), 'test_local_1|test_local_2') PREWHERE _table in ('test_local_1', 'test_local_2') ORDER BY value; -- { serverError 10 } SELECT '--------------Local Merge Distributed------------'; SELECT * FROM merge(currentDatabase(), 'test_local_1|test_distributed_2') ORDER BY _table; diff --git a/tests/queries/0_stateless/00800_versatile_storage_join.reference b/tests/queries/0_stateless/00800_versatile_storage_join.reference index f1d3f98e32a..0a143f8bc12 100644 --- a/tests/queries/0_stateless/00800_versatile_storage_join.reference +++ b/tests/queries/0_stateless/00800_versatile_storage_join.reference @@ -1,5 +1,4 @@ --------read-------- -def [1,2] 2 abc [0] 1 def [1,2] 2 abc [0] 1 @@ -7,6 +6,7 @@ def [1,2] 2 abc [0] 1 def [1,2] 2 abc [0] 1 +def [1,2] 2 --------joinGet-------- abc diff --git a/tests/queries/0_stateless/00800_versatile_storage_join.sql b/tests/queries/0_stateless/00800_versatile_storage_join.sql index c1e325ce9aa..b0ec6f69f93 100644 --- a/tests/queries/0_stateless/00800_versatile_storage_join.sql +++ b/tests/queries/0_stateless/00800_versatile_storage_join.sql @@ -22,10 +22,10 @@ INSERT INTO join_all_left VALUES ('abc', [0], 1), ('def', [1, 2], 2); -- read from StorageJoin SELECT '--------read--------'; -SELECT * from join_any_inner; -SELECT * from join_any_left; -SELECT * from join_all_inner; -SELECT * from join_all_left; +SELECT * from join_any_inner ORDER BY k; +SELECT * from join_any_left ORDER BY k; +SELECT * from join_all_inner ORDER BY k; +SELECT * from join_all_left ORDER BY k; -- create StorageJoin tables with customized settings diff --git a/tests/queries/0_stateless/00804_rollup_with_having.reference b/tests/queries/0_stateless/00804_rollup_with_having.reference index 62de36a36ba..0f708e8d900 100644 --- a/tests/queries/0_stateless/00804_rollup_with_having.reference +++ b/tests/queries/0_stateless/00804_rollup_with_having.reference @@ -1,4 +1,4 @@ -a \N 1 a b 1 a \N 2 +a \N 1 a b 1 diff --git a/tests/queries/0_stateless/00804_rollup_with_having.sql b/tests/queries/0_stateless/00804_rollup_with_having.sql index cddaa8b6451..29b9ae19041 100644 --- a/tests/queries/0_stateless/00804_rollup_with_having.sql +++ b/tests/queries/0_stateless/00804_rollup_with_having.sql @@ -8,7 +8,7 @@ INSERT INTO rollup_having VALUES (NULL, NULL); INSERT INTO rollup_having VALUES ('a', NULL); INSERT INTO rollup_having VALUES ('a', 'b'); -SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP HAVING a IS NOT NULL; -SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP HAVING a IS NOT NULL and b IS NOT NULL; +SELECT a, b, 
count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP HAVING a IS NOT NULL ORDER BY a, b; +SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP HAVING a IS NOT NULL and b IS NOT NULL ORDER BY a, b; DROP TABLE rollup_having; diff --git a/tests/queries/0_stateless/00816_long_concurrent_alter_column.sh b/tests/queries/0_stateless/00816_long_concurrent_alter_column.sh index 63b687d072d..b0244991b3c 100755 --- a/tests/queries/0_stateless/00816_long_concurrent_alter_column.sh +++ b/tests/queries/0_stateless/00816_long_concurrent_alter_column.sh @@ -60,16 +60,10 @@ wait echo "DROP TABLE concurrent_alter_column NO DELAY" | ${CLICKHOUSE_CLIENT} # NO DELAY has effect only for Atomic database -db_engine=`$CLICKHOUSE_CLIENT -q "SELECT engine FROM system.databases WHERE name='$CLICKHOUSE_DATABASE'"` -if [[ $db_engine == "Atomic" ]]; then - # DROP is non-blocking, so wait for alters - while true; do - $CLICKHOUSE_CLIENT -q "SELECT c = 0 FROM (SELECT count() as c FROM system.processes WHERE query_id LIKE 'alter_00816_%')" | grep 1 > /dev/null && break; - sleep 1; - done -fi - -# Check for deadlocks -echo "SELECT * FROM system.processes WHERE query_id LIKE 'alter_00816_%'" | ${CLICKHOUSE_CLIENT} +# Wait for alters and check for deadlocks (in case of deadlock this loop will not finish) +while true; do + echo "SELECT * FROM system.processes WHERE query_id LIKE 'alter\\_00816\\_%'" | ${CLICKHOUSE_CLIENT} | grep -q -F 'alter' || break + sleep 1; +done echo 'did not crash' diff --git a/tests/queries/0_stateless/00826_cross_to_inner_join.reference b/tests/queries/0_stateless/00826_cross_to_inner_join.reference index 9b630d0d391..973c5b078a3 100644 --- a/tests/queries/0_stateless/00826_cross_to_inner_join.reference +++ b/tests/queries/0_stateless/00826_cross_to_inner_join.reference @@ -109,7 +109,7 @@ SELECT t2_00826.a, t2_00826.b FROM t1_00826 -ALL INNER JOIN t2_00826 ON (((a = t2_00826.a) AND (a = t2_00826.a)) AND (a = t2_00826.a)) AND (b = t2_00826.b) +ALL INNER JOIN t2_00826 ON (a = t2_00826.a) AND (a = t2_00826.a) AND (a = t2_00826.a) AND (b = t2_00826.b) WHERE (a = t2_00826.a) AND ((a = t2_00826.a) AND ((a = t2_00826.a) AND (b = t2_00826.b))) --- cross split conjunction --- SELECT diff --git a/tests/queries/0_stateless/00849_multiple_comma_join_2.reference b/tests/queries/0_stateless/00849_multiple_comma_join_2.reference index 4db65b0b795..fc39ef13935 100644 --- a/tests/queries/0_stateless/00849_multiple_comma_join_2.reference +++ b/tests/queries/0_stateless/00849_multiple_comma_join_2.reference @@ -127,7 +127,7 @@ FROM ) AS `--.s` CROSS JOIN t3 ) AS `--.s` -ALL INNER JOIN t4 ON ((a = `--t1.a`) AND (a = `--t2.a`)) AND (a = `--t3.a`) +ALL INNER JOIN t4 ON (a = `--t1.a`) AND (a = `--t2.a`) AND (a = `--t3.a`) WHERE (a = `--t1.a`) AND (a = `--t2.a`) AND (a = `--t3.a`) SELECT `--t1.a` AS `t1.a` FROM diff --git a/tests/queries/0_stateless/00855_join_with_array_join.reference b/tests/queries/0_stateless/00855_join_with_array_join.reference index 386bde518ea..88f9253500c 100644 --- a/tests/queries/0_stateless/00855_join_with_array_join.reference +++ b/tests/queries/0_stateless/00855_join_with_array_join.reference @@ -4,3 +4,8 @@ 4 0 5 0 6 0 +- +1 0 +2 2 a2 +1 0 +2 2 a2 diff --git a/tests/queries/0_stateless/00855_join_with_array_join.sql b/tests/queries/0_stateless/00855_join_with_array_join.sql index 10b03fec062..506d9479110 100644 --- a/tests/queries/0_stateless/00855_join_with_array_join.sql +++ b/tests/queries/0_stateless/00855_join_with_array_join.sql @@ -1,10 +1,35 @@ SET 
joined_subquery_requires_alias = 0; -select ax, c from (select [1,2] ax, 0 c) array join ax join (select 0 c) using(c); -select ax, c from (select [3,4] ax, 0 c) join (select 0 c) using(c) array join ax; -select ax, c from (select [5,6] ax, 0 c) s1 join system.one s2 ON s1.c = s2.dummy array join ax; +SELECT ax, c FROM (SELECT [1,2] ax, 0 c) ARRAY JOIN ax JOIN (SELECT 0 c) USING (c); +SELECT ax, c FROM (SELECT [3,4] ax, 0 c) JOIN (SELECT 0 c) USING (c) ARRAY JOIN ax; +SELECT ax, c FROM (SELECT [5,6] ax, 0 c) s1 JOIN system.one s2 ON s1.c = s2.dummy ARRAY JOIN ax; + + +SELECT ax, c FROM (SELECT [101,102] ax, 0 c) s1 +JOIN system.one s2 ON s1.c = s2.dummy +JOIN system.one s3 ON s1.c = s3.dummy +ARRAY JOIN ax; -- { serverError 48 } + +SELECT '-'; + +SET joined_subquery_requires_alias = 1; + +DROP TABLE IF EXISTS f; +DROP TABLE IF EXISTS d; + +CREATE TABLE f (`d_ids` Array(Int64) ) ENGINE = TinyLog; +INSERT INTO f VALUES ([1, 2]); + +CREATE TABLE d (`id` Int64, `name` String ) ENGINE = TinyLog; + +INSERT INTO d VALUES (2, 'a2'), (3, 'a3'); + +SELECT d_ids, id, name FROM f LEFT ARRAY JOIN d_ids LEFT JOIN d ON d.id = d_ids ORDER BY id; +SELECT did, id, name FROM f LEFT ARRAY JOIN d_ids as did LEFT JOIN d ON d.id = did ORDER BY id; + +-- name clash, doesn't work yet +SELECT id, name FROM f LEFT ARRAY JOIN d_ids as id LEFT JOIN d ON d.id = id ORDER BY id; -- { serverError 403 } + +DROP TABLE IF EXISTS f; +DROP TABLE IF EXISTS d; -select ax, c from (select [7,8] ax, 0 c) s1 -join system.one s2 ON s1.c = s2.dummy -join system.one s3 ON s1.c = s3.dummy -array join ax; -- { serverError 48 } diff --git a/tests/queries/0_stateless/00878_join_unexpected_results.reference b/tests/queries/0_stateless/00878_join_unexpected_results.reference index 65fcbc257ca..a389cb47a96 100644 --- a/tests/queries/0_stateless/00878_join_unexpected_results.reference +++ b/tests/queries/0_stateless/00878_join_unexpected_results.reference @@ -23,8 +23,6 @@ join_use_nulls = 1 - \N \N - -1 1 \N \N -2 2 \N \N - 1 1 1 1 2 2 \N \N @@ -51,8 +49,6 @@ join_use_nulls = 0 - - - -1 1 0 0 -2 2 0 0 - 1 1 1 1 2 2 0 0 diff --git a/tests/queries/0_stateless/00878_join_unexpected_results.sql b/tests/queries/0_stateless/00878_join_unexpected_results.sql index 6f6cd6e6479..0aef5208b26 100644 --- a/tests/queries/0_stateless/00878_join_unexpected_results.sql +++ b/tests/queries/0_stateless/00878_join_unexpected_results.sql @@ -30,11 +30,11 @@ select * from t left outer join s on (t.a=s.a and t.b=s.b) where s.a is null; select '-'; select s.* from t left outer join s on (t.a=s.a and t.b=s.b) where s.a is null; select '-'; -select t.*, s.* from t left join s on (s.a=t.a and t.b=s.b and t.a=toInt64(2)) order by t.a; +select t.*, s.* from t left join s on (s.a=t.a and t.b=s.b and t.a=toInt64(2)) order by t.a; -- {serverError 403 } select '-'; select t.*, s.* from t left join s on (s.a=t.a) order by t.a; select '-'; -select t.*, s.* from t left join s on (t.b=toInt64(2) and s.a=t.a) where s.b=2; +select t.*, s.* from t left join s on (t.b=toInt64(2) and s.a=t.a) where s.b=2; -- {serverError 403 } select 'join_use_nulls = 0'; set join_use_nulls = 0; @@ -58,11 +58,11 @@ select '-'; select '-'; -- select s.* from t left outer join s on (t.a=s.a and t.b=s.b) where s.a is null; -- TODO select '-'; -select t.*, s.* from t left join s on (s.a=t.a and t.b=s.b and t.a=toInt64(2)) order by t.a; +select t.*, s.* from t left join s on (s.a=t.a and t.b=s.b and t.a=toInt64(2)) order by t.a; -- {serverError 403 } select '-'; select t.*, s.* from t left join s on 
(s.a=t.a) order by t.a; select '-'; -select t.*, s.* from t left join s on (t.b=toInt64(2) and s.a=t.a) where s.b=2; +select t.*, s.* from t left join s on (t.b=toInt64(2) and s.a=t.a) where s.b=2; -- {serverError 403 } drop table t; drop table s; diff --git a/tests/queries/0_stateless/00882_multiple_join_no_alias.reference b/tests/queries/0_stateless/00882_multiple_join_no_alias.reference index a3723bc9976..523063f8a3c 100644 --- a/tests/queries/0_stateless/00882_multiple_join_no_alias.reference +++ b/tests/queries/0_stateless/00882_multiple_join_no_alias.reference @@ -1,8 +1,8 @@ 1 1 1 1 0 0 0 0 -0 1 +0 1 1 1 1 1 1 2 2 0 0 0 0 -2 2 0 1 1 1 +2 2 0 diff --git a/tests/queries/0_stateless/00882_multiple_join_no_alias.sql b/tests/queries/0_stateless/00882_multiple_join_no_alias.sql index bd3a2a19913..4a96e73c679 100644 --- a/tests/queries/0_stateless/00882_multiple_join_no_alias.sql +++ b/tests/queries/0_stateless/00882_multiple_join_no_alias.sql @@ -13,22 +13,22 @@ insert into y values (1,1); select s.a, s.a, s.b as s_b, s.b from t left join s on s.a = t.a left join y on s.b = y.b -order by t.a; +order by t.a, s.a, s.b; select max(s.a) from t left join s on s.a = t.a left join y on s.b = y.b -group by t.a; +group by t.a order by t.a; select t.a, t.a as t_a, s.a, s.a as s_a, y.a, y.a as y_a from t left join s on t.a = s.a left join y on y.b = s.b -order by t.a; +order by t.a, s.a, y.a; select t.a, t.a as t_a, max(s.a) from t left join s on t.a = s.a left join y on y.b = s.b -group by t.a; +group by t.a order by t.a; drop table t; drop table s; diff --git a/tests/queries/0_stateless/00906_low_cardinality_rollup.reference b/tests/queries/0_stateless/00906_low_cardinality_rollup.reference index 3e287311126..257605d9006 100644 --- a/tests/queries/0_stateless/00906_low_cardinality_rollup.reference +++ b/tests/queries/0_stateless/00906_low_cardinality_rollup.reference @@ -1,18 +1,18 @@ -c d 1 a b 1 -c \N 1 a \N 1 +c d 1 +c \N 1 \N \N 2 -c 1 a 1 +c 1 \N 2 -c d 1 a b 1 -c \N 1 a \N 1 +c d 1 +c \N 1 \N b 1 \N d 1 \N \N 2 -c 1 a 1 +c 1 \N 2 diff --git a/tests/queries/0_stateless/00906_low_cardinality_rollup.sql b/tests/queries/0_stateless/00906_low_cardinality_rollup.sql index 3b8be7b9ac6..125529ad383 100644 --- a/tests/queries/0_stateless/00906_low_cardinality_rollup.sql +++ b/tests/queries/0_stateless/00906_low_cardinality_rollup.sql @@ -3,10 +3,10 @@ CREATE TABLE lc (a LowCardinality(Nullable(String)), b LowCardinality(Nullable(S INSERT INTO lc VALUES ('a', 'b'); INSERT INTO lc VALUES ('c', 'd'); -SELECT a, b, count(a) FROM lc GROUP BY a, b WITH ROLLUP; -SELECT a, count(a) FROM lc GROUP BY a WITH ROLLUP; +SELECT a, b, count(a) FROM lc GROUP BY a, b WITH ROLLUP ORDER BY a, b; +SELECT a, count(a) FROM lc GROUP BY a WITH ROLLUP ORDER BY a; -SELECT a, b, count(a) FROM lc GROUP BY a, b WITH CUBE; -SELECT a, count(a) FROM lc GROUP BY a WITH CUBE; +SELECT a, b, count(a) FROM lc GROUP BY a, b WITH CUBE ORDER BY a, b; +SELECT a, count(a) FROM lc GROUP BY a WITH CUBE ORDER BY a; DROP TABLE if exists lc; diff --git a/tests/queries/0_stateless/00910_decimal_group_array_crash_3783.sql b/tests/queries/0_stateless/00910_decimal_group_array_crash_3783.sql index 29ec19a6efe..cf0e0bac3dd 100644 --- a/tests/queries/0_stateless/00910_decimal_group_array_crash_3783.sql +++ b/tests/queries/0_stateless/00910_decimal_group_array_crash_3783.sql @@ -28,7 +28,7 @@ SELECT `time`, groupArray((sensor_id, volume)) AS groupArr FROM ( WHERE received_at BETWEEN '2018-12-12 00:00:00' AND '2018-12-30 00:00:00' GROUP BY 
`time`,sensor_id ORDER BY `time` -) GROUP BY `time`; +) GROUP BY `time` ORDER BY `time`; DROP TABLE sensor_value; @@ -59,4 +59,4 @@ select s.a, s.b, max(s.dt1) dt1, s.c, s.d, s.f, s.i, max(s.dt2) dt2 from ( , toDecimal128(268.970000000000, 12) f , toDecimal128(0.000000000000, 12) i , toDateTime('2018-11-02 00:00:00', 'Europe/Moscow') dt2 -) s group by s.a, s.b, s.c, s.d, s.f, s.i; +) s group by s.a, s.b, s.c, s.d, s.f, s.i ORDER BY s.a, s.b, s.c, s.d, s.f, s.i; diff --git a/tests/queries/0_stateless/00915_simple_aggregate_function.reference b/tests/queries/0_stateless/00915_simple_aggregate_function.reference index 8d5d8340f17..6bbe9b1e8b3 100644 --- a/tests/queries/0_stateless/00915_simple_aggregate_function.reference +++ b/tests/queries/0_stateless/00915_simple_aggregate_function.reference @@ -39,7 +39,7 @@ SimpleAggregateFunction(sum, Float64) 7 14 8 16 9 18 -1 1 2 2.2.2.2 3 ([1,2,3],[2,1,1]) ([1,2,3],[1,1,2]) ([1,2,3],[2,1,2]) [1,2,2,3,4] [4,2,1,3] (1,1) (2,2) -10 2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222 20 20.20.20.20 5 ([2,3,4],[2,1,1]) ([2,3,4],[3,3,4]) ([2,3,4],[4,3,4]) [] [] (3,3) (4,4) -SimpleAggregateFunction(anyLast, Nullable(String)) SimpleAggregateFunction(anyLast, LowCardinality(Nullable(String))) SimpleAggregateFunction(anyLast, IPv4) SimpleAggregateFunction(groupBitOr, UInt32) SimpleAggregateFunction(sumMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(minMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(maxMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(groupArrayArray, Array(Int32)) SimpleAggregateFunction(groupUniqArrayArray, Array(Int32)) SimpleAggregateFunction(argMin, Tuple(Int32, Int64)) SimpleAggregateFunction(argMax, Tuple(Int32, Int64)) +1 1 2 2.2.2.2 3 ([1,2,3],[2,1,1]) ([1,2,3],[1,1,2]) ([1,2,3],[2,1,2]) [1,2,2,3,4] [4,2,1,3] +10 2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222 20 20.20.20.20 5 ([2,3,4],[2,1,1]) ([2,3,4],[3,3,4]) ([2,3,4],[4,3,4]) [] [] +SimpleAggregateFunction(anyLast, Nullable(String)) SimpleAggregateFunction(anyLast, LowCardinality(Nullable(String))) SimpleAggregateFunction(anyLast, IPv4) SimpleAggregateFunction(groupBitOr, UInt32) SimpleAggregateFunction(sumMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(minMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(maxMap, Tuple(Array(Int32), Array(Int64))) SimpleAggregateFunction(groupArrayArray, Array(Int32)) SimpleAggregateFunction(groupUniqArrayArray, Array(Int32)) with_overflow 1 0 diff --git a/tests/queries/0_stateless/00915_simple_aggregate_function.sql b/tests/queries/0_stateless/00915_simple_aggregate_function.sql index c669f810312..82a7aa2152f 100644 --- a/tests/queries/0_stateless/00915_simple_aggregate_function.sql +++ b/tests/queries/0_stateless/00915_simple_aggregate_function.sql @@ -31,22 +31,16 @@ create table simple ( tup_min SimpleAggregateFunction(minMap, Tuple(Array(Int32), Array(Int64))), tup_max SimpleAggregateFunction(maxMap, Tuple(Array(Int32), Array(Int64))), arr SimpleAggregateFunction(groupArrayArray, Array(Int32)), - uniq_arr SimpleAggregateFunction(groupUniqArrayArray, Array(Int32)), - arg_min SimpleAggregateFunction(argMin, Tuple(Int32, Int64)), - arg_max SimpleAggregateFunction(argMax, Tuple(Int32, Int64)) + uniq_arr SimpleAggregateFunction(groupUniqArrayArray, Array(Int32)) ) engine=AggregatingMergeTree order by id; - -insert into simple values(1,'1','1','1.1.1.1', 1, ([1,2], [1,1]), 
([1,2], [1,1]), ([1,2], [1,1]), [1,2], [1,2], (1,1), (1,1)); -insert into simple values(1,null,'2','2.2.2.2', 2, ([1,3], [1,1]), ([1,3], [2,2]), ([1,3], [2,2]), [2,3,4], [2,3,4], (2,2), (2,2)); +insert into simple values(1,'1','1','1.1.1.1', 1, ([1,2], [1,1]), ([1,2], [1,1]), ([1,2], [1,1]), [1,2], [1,2]); +insert into simple values(1,null,'2','2.2.2.2', 2, ([1,3], [1,1]), ([1,3], [2,2]), ([1,3], [2,2]), [2,3,4], [2,3,4]); -- String longer then MAX_SMALL_STRING_SIZE (actual string length is 100) -insert into simple values(10,'10','10','10.10.10.10', 4, ([2,3], [1,1]), ([2,3], [3,3]), ([2,3], [3,3]), [], [], (3,3), (3,3)); -insert into simple values(10,'2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222','20','20.20.20.20', 1, ([2, 4], [1,1]), ([2, 4], [4,4]), ([2, 4], [4,4]), [], [], (4,4), (4,4)); +insert into simple values(10,'10','10','10.10.10.10', 4, ([2,3], [1,1]), ([2,3], [3,3]), ([2,3], [3,3]), [], []); +insert into simple values(10,'2222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222222','20','20.20.20.20', 1, ([2, 4], [1,1]), ([2, 4], [4,4]), ([2, 4], [4,4]), [], []); select * from simple final order by id; -select toTypeName(nullable_str), toTypeName(low_str), toTypeName(ip), toTypeName(status), - toTypeName(tup), toTypeName(tup_min), toTypeName(tup_max), toTypeName(arr), - toTypeName(uniq_arr), toTypeName(arg_min), toTypeName(arg_max) -from simple limit 1; +select toTypeName(nullable_str),toTypeName(low_str),toTypeName(ip),toTypeName(status), toTypeName(tup), toTypeName(tup_min), toTypeName(tup_max), toTypeName(arr), toTypeName(uniq_arr) from simple limit 1; optimize table simple final; diff --git a/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference b/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference index 7938dcdde86..b261da18d51 100644 --- a/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference +++ b/tests/queries/0_stateless/00918_has_unsufficient_type_check.reference @@ -1,3 +1,2 @@ -0 1 0 diff --git a/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql b/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql index f76fd446a8e..c40419e4d56 100644 --- a/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql +++ b/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql @@ -1,3 +1,3 @@ -SELECT hasAny([['Hello, world']], [[[]]]); +SELECT hasAny([['Hello, world']], [[[]]]); -- { serverError 386 } SELECT hasAny([['Hello, world']], [['Hello', 'world'], ['Hello, world']]); SELECT hasAll([['Hello, world']], [['Hello', 'world'], ['Hello, world']]); diff --git a/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference b/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference index 7a08495654c..f1839bae259 100644 --- a/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference +++ b/tests/queries/0_stateless/00933_test_fix_extra_seek_on_compressed_cache.reference @@ -1 +1 @@ -0 36 13 +0 0 13 diff --git a/tests/queries/0_stateless/00953_zookeeper_suetin_deduplication_bug.sh b/tests/queries/0_stateless/00953_zookeeper_suetin_deduplication_bug.sh index baa2b0cf53f..71ca29bfd96 100755 --- a/tests/queries/0_stateless/00953_zookeeper_suetin_deduplication_bug.sh +++ b/tests/queries/0_stateless/00953_zookeeper_suetin_deduplication_bug.sh @@ -21,15 +21,12 @@ ORDER BY (engine_id) SETTINGS replicated_deduplication_window = 2, 
cleanup_delay_period=4, cleanup_delay_period_random_add=0;" $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 1, 'hello')" -sleep 1 $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 2, 'hello')" -sleep 1 $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 3, 'hello')" $CLICKHOUSE_CLIENT --query="SELECT count(*) from elog" # 3 rows count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") - while [[ $count != 2 ]] do sleep 1 @@ -39,9 +36,8 @@ done $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 1, 'hello')" $CLICKHOUSE_CLIENT --query="SELECT count(*) from elog" # 4 rows + count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") - - while [[ $count != 2 ]] do sleep 1 @@ -53,12 +49,10 @@ $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 2, 'h $CLICKHOUSE_CLIENT --query="SELECT count(*) from elog" # 5 rows count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") - while [[ $count != 2 ]] do sleep 1 count=$($CLICKHOUSE_CLIENT --query="SELECT COUNT(*) FROM system.zookeeper where path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/elog/s1/blocks'") - done $CLICKHOUSE_CLIENT --query="INSERT INTO elog VALUES (toDate('2018-10-01'), 2, 'hello')" diff --git a/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper_long.sh b/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper_long.sh index a3ac5692caa..22c404c7712 100755 --- a/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper_long.sh +++ b/tests/queries/0_stateless/00975_indices_mutation_replicated_zookeeper_long.sh @@ -21,6 +21,7 @@ CREATE TABLE indices_mutaions1 PARTITION BY i32 ORDER BY u64 SETTINGS index_granularity = 2; + CREATE TABLE indices_mutaions2 ( u64 UInt64, diff --git a/tests/queries/0_stateless/00987_distributed_stack_overflow.sql b/tests/queries/0_stateless/00987_distributed_stack_overflow.sql index 4baa6969b31..d2e2b8f37ef 100644 --- a/tests/queries/0_stateless/00987_distributed_stack_overflow.sql +++ b/tests/queries/0_stateless/00987_distributed_stack_overflow.sql @@ -5,13 +5,13 @@ DROP TABLE IF EXISTS distr2; CREATE TABLE distr (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr); -- { serverError 269 } CREATE TABLE distr0 (x UInt8) ENGINE = Distributed(test_shard_localhost, '', distr0); -SELECT * FROM distr0; -- { serverError 306 } +SELECT * FROM distr0; -- { serverError 581 } CREATE TABLE distr1 (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr2); CREATE TABLE distr2 (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr1); -SELECT * FROM distr1; -- { serverError 306 } -SELECT * FROM distr2; -- { serverError 306 } +SELECT * FROM distr1; -- { serverError 581 } +SELECT * FROM distr2; -- { serverError 581 } DROP TABLE distr0; DROP TABLE distr1; diff --git a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sh b/tests/queries/0_stateless/01039_row_policy_dcl.sh similarity index 50% rename from tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sh rename to tests/queries/0_stateless/01039_row_policy_dcl.sh index d03e02efc55..8c2249f2981 
100755 --- a/tests/queries/0_stateless/00411_long_accurate_number_comparison_float.sh +++ b/tests/queries/0_stateless/01039_row_policy_dcl.sh @@ -4,6 +4,4 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CURDIR"/../shell_config.sh -# We should have correct env vars from shell_config.sh to run this test - -python3 "$CURDIR"/00411_long_accurate_number_comparison.python float +${CLICKHOUSE_CLIENT} -q "SHOW POLICIES ON $CLICKHOUSE_DATABASE.*" diff --git a/tests/queries/0_stateless/01039_row_policy_dcl.sql b/tests/queries/0_stateless/01039_row_policy_dcl.sql deleted file mode 100644 index 14742a72914..00000000000 --- a/tests/queries/0_stateless/01039_row_policy_dcl.sql +++ /dev/null @@ -1 +0,0 @@ -SHOW POLICIES; diff --git a/tests/queries/0_stateless/01079_bad_alters_zookeeper_long.sh b/tests/queries/0_stateless/01079_bad_alters_zookeeper_long.sh index 6452b830f38..6e79be90046 100755 --- a/tests/queries/0_stateless/01079_bad_alters_zookeeper_long.sh +++ b/tests/queries/0_stateless/01079_bad_alters_zookeeper_long.sh @@ -21,7 +21,7 @@ $CLICKHOUSE_CLIENT --query "ALTER TABLE table_for_bad_alters MODIFY COLUMN value sleep 2 -while [[ $($CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='0000000000'" 2>&1) ]]; do +while [[ $($CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='0000000000' and database = '$CLICKHOUSE_DATABASE'" 2>&1) ]]; do sleep 1 done diff --git a/tests/queries/0_stateless/01151_storage_merge_filter_tables_by_virtual_column.reference b/tests/queries/0_stateless/01151_storage_merge_filter_tables_by_virtual_column.reference new file mode 100644 index 00000000000..90755b06aa9 --- /dev/null +++ b/tests/queries/0_stateless/01151_storage_merge_filter_tables_by_virtual_column.reference @@ -0,0 +1,7 @@ +30 4995 +20 4950 +15 4700 +20 495 +20 4545 +15 470 +15 4520 diff --git a/tests/queries/0_stateless/01151_storage_merge_filter_tables_by_virtual_column.sql b/tests/queries/0_stateless/01151_storage_merge_filter_tables_by_virtual_column.sql new file mode 100644 index 00000000000..2a250725654 --- /dev/null +++ b/tests/queries/0_stateless/01151_storage_merge_filter_tables_by_virtual_column.sql @@ -0,0 +1,26 @@ +drop table if exists src_table_1; +drop table if exists src_table_2; +drop table if exists src_table_3; +drop table if exists set; + +create table src_table_1 (n UInt64) engine=Memory as select * from numbers(10); +create table src_table_2 (n UInt64) engine=Log as select number * 10 from numbers(10); +create table src_table_3 (n UInt64) engine=MergeTree order by n as select number * 100 from numbers(10); +create table set (s String) engine=Set as select arrayJoin(['src_table_1', 'src_table_2']); + +create temporary table tmp (s String); +insert into tmp values ('src_table_1'), ('src_table_3'); + +select count(), sum(n) from merge(currentDatabase(), 'src_table'); +-- FIXME #21401 select count(), sum(n) from merge(currentDatabase(), 'src_table') where _table = 'src_table_1' or toInt8(substr(_table, 11, 1)) = 2; +select count(), sum(n) from merge(currentDatabase(), 'src_table') where _table in ('src_table_2', 'src_table_3'); +select count(), sum(n) from merge(currentDatabase(), 'src_table') where _table in ('src_table_2', 'src_table_3') and n % 20 = 0; +select count(), sum(n) from merge(currentDatabase(), 'src_table') where _table in set; +select count(), sum(n) from merge(currentDatabase(), 'src_table') where _table in tmp; +select count(), sum(n) from merge(currentDatabase(), 'src_table') where _table in set and n % 2 = 0; 
+select count(), sum(n) from merge(currentDatabase(), 'src_table') where n % 2 = 0 and _table in tmp; + +drop table src_table_1; +drop table src_table_2; +drop table src_table_3; +drop table set; diff --git a/tests/queries/0_stateless/01152_cross_replication.reference b/tests/queries/0_stateless/01152_cross_replication.reference new file mode 100644 index 00000000000..389d14ff28b --- /dev/null +++ b/tests/queries/0_stateless/01152_cross_replication.reference @@ -0,0 +1,10 @@ +localhost 9000 0 0 0 +localhost 9000 0 0 0 +demo_loan_01568 +demo_loan_01568 +CREATE TABLE shard_0.demo_loan_01568\n(\n `id` Int64 COMMENT \'id\',\n `date_stat` Date COMMENT \'date of stat\',\n `customer_no` String COMMENT \'customer no\',\n `loan_principal` Float64 COMMENT \'loan principal\'\n)\nENGINE = ReplacingMergeTree\nPARTITION BY toYYYYMM(date_stat)\nORDER BY id\nSETTINGS index_granularity = 8192 +CREATE TABLE shard_1.demo_loan_01568\n(\n `id` Int64 COMMENT \'id\',\n `date_stat` Date COMMENT \'date of stat\',\n `customer_no` String COMMENT \'customer no\',\n `loan_principal` Float64 COMMENT \'loan principal\'\n)\nENGINE = ReplacingMergeTree\nPARTITION BY toYYYYMM(date_stat)\nORDER BY id\nSETTINGS index_granularity = 8192 +1 2021-04-13 qwerty 3.14159 +2 2021-04-14 asdfgh 2.71828 +2 2021-04-14 asdfgh 2.71828 +1 2021-04-13 qwerty 3.14159 diff --git a/tests/queries/0_stateless/01152_cross_replication.sql b/tests/queries/0_stateless/01152_cross_replication.sql new file mode 100644 index 00000000000..23507c41fd0 --- /dev/null +++ b/tests/queries/0_stateless/01152_cross_replication.sql @@ -0,0 +1,30 @@ +DROP DATABASE IF EXISTS shard_0; +DROP DATABASE IF EXISTS shard_1; +SET distributed_ddl_output_mode='none'; +DROP TABLE IF EXISTS demo_loan_01568_dist; + +CREATE DATABASE shard_0; +CREATE DATABASE shard_1; + +CREATE TABLE demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); -- { serverError 371 } +SET distributed_ddl_output_mode='throw'; +CREATE TABLE shard_0.demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); +CREATE TABLE shard_1.demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); +SET distributed_ddl_output_mode='none'; + +SHOW TABLES FROM shard_0; +SHOW TABLES FROM shard_1; +SHOW CREATE TABLE shard_0.demo_loan_01568; +SHOW CREATE TABLE shard_1.demo_loan_01568; + +CREATE TABLE demo_loan_01568_dist AS shard_0.demo_loan_01568 ENGINE=Distributed('test_cluster_two_shards_different_databases', '', 'demo_loan_01568', id % 2); +INSERT INTO demo_loan_01568_dist VALUES (1, '2021-04-13', 'qwerty', 3.14159), (2, '2021-04-14', 'asdfgh', 2.71828); +SYSTEM FLUSH DISTRIBUTED demo_loan_01568_dist; +SELECT * FROM demo_loan_01568_dist ORDER BY id; + +SELECT * FROM shard_0.demo_loan_01568; +SELECT * FROM shard_1.demo_loan_01568; + +DROP DATABASE shard_0; +DROP DATABASE shard_1; 
+DROP TABLE demo_loan_01568_dist; diff --git a/tests/queries/0_stateless/01153_attach_mv_uuid.reference b/tests/queries/0_stateless/01153_attach_mv_uuid.reference new file mode 100644 index 00000000000..e37fe28e303 --- /dev/null +++ b/tests/queries/0_stateless/01153_attach_mv_uuid.reference @@ -0,0 +1,22 @@ +1 1 +2 4 +1 1 +2 4 +3 9 +4 16 +CREATE MATERIALIZED VIEW default.mv UUID \'e15f3ab5-6cae-4df3-b879-f40deafd82c2\'\n(\n `n` Int32,\n `n2` Int64\n)\nENGINE = MergeTree\nPARTITION BY n % 10\nORDER BY n AS\nSELECT\n n,\n n * n AS n2\nFROM default.src +1 1 +2 4 +CREATE MATERIALIZED VIEW default.mv UUID \'e15f3ab5-6cae-4df3-b879-f40deafd82c2\'\n(\n `n` Int32,\n `n2` Int64\n)\nENGINE = MergeTree\nPARTITION BY n % 10\nORDER BY n AS\nSELECT\n n,\n n * n AS n2\nFROM default.src +1 1 +2 4 +3 9 +4 16 +CREATE MATERIALIZED VIEW default.mv UUID \'e15f3ab5-6cae-4df3-b879-f40deafd82c2\' TO INNER UUID \'3bd68e3c-2693-4352-ad66-a66eba9e345e\'\n(\n `n` Int32,\n `n2` Int64\n)\nENGINE = MergeTree\nPARTITION BY n % 10\nORDER BY n AS\nSELECT\n n,\n n * n AS n2\nFROM default.src +1 1 +2 4 +CREATE MATERIALIZED VIEW default.mv UUID \'e15f3ab5-6cae-4df3-b879-f40deafd82c2\' TO INNER UUID \'3bd68e3c-2693-4352-ad66-a66eba9e345e\'\n(\n `n` Int32,\n `n2` Int64\n)\nENGINE = MergeTree\nPARTITION BY n % 10\nORDER BY n AS\nSELECT\n n,\n n * n AS n2\nFROM default.src +1 1 +2 4 +3 9 +4 16 diff --git a/tests/queries/0_stateless/01153_attach_mv_uuid.sql b/tests/queries/0_stateless/01153_attach_mv_uuid.sql new file mode 100644 index 00000000000..86d768d94a7 --- /dev/null +++ b/tests/queries/0_stateless/01153_attach_mv_uuid.sql @@ -0,0 +1,42 @@ +DROP TABLE IF EXISTS src; +DROP TABLE IF EXISTS mv; +DROP TABLE IF EXISTS ".inner_id.e15f3ab5-6cae-4df3-b879-f40deafd82c2"; + +CREATE TABLE src (n UInt64) ENGINE=MergeTree ORDER BY n; +CREATE MATERIALIZED VIEW mv (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; +INSERT INTO src VALUES (1), (2); +SELECT * FROM mv ORDER BY n; +DETACH TABLE mv; +ATTACH TABLE mv; +INSERT INTO src VALUES (3), (4); +SELECT * FROM mv ORDER BY n; +DROP TABLE mv SYNC; + +SET show_table_uuid_in_table_create_query_if_not_nil=1; +CREATE TABLE ".inner_id.e15f3ab5-6cae-4df3-b879-f40deafd82c2" (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n; +ATTACH MATERIALIZED VIEW mv UUID 'e15f3ab5-6cae-4df3-b879-f40deafd82c2' (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; +SHOW CREATE TABLE mv; +INSERT INTO src VALUES (1), (2); +SELECT * FROM mv ORDER BY n; +DETACH TABLE mv; +ATTACH TABLE mv; +SHOW CREATE TABLE mv; +INSERT INTO src VALUES (3), (4); +SELECT * FROM mv ORDER BY n; +DROP TABLE mv SYNC; + +CREATE TABLE ".inner_id.e15f3ab5-6cae-4df3-b879-f40deafd82c2" UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n; +ATTACH MATERIALIZED VIEW mv UUID 'e15f3ab5-6cae-4df3-b879-f40deafd82c2' TO INNER UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; +SHOW CREATE TABLE mv; +INSERT INTO src VALUES (1), (2); +SELECT * FROM mv ORDER BY n; +DETACH TABLE mv; +ATTACH TABLE mv; +SHOW CREATE TABLE mv; +INSERT INTO src VALUES (3), (4); +SELECT * FROM mv ORDER BY n; +DROP TABLE mv SYNC; + +ATTACH MATERIALIZED VIEW mv UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' TO INNER UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' (n Int32, n2 Int64) ENGINE = MergeTree 
PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; -- { serverError 36 } + +DROP TABLE src; diff --git a/tests/queries/0_stateless/01181_db_atomic_drop_on_cluster.sql b/tests/queries/0_stateless/01181_db_atomic_drop_on_cluster.sql index ca393c36617..24283a0e8e3 100644 --- a/tests/queries/0_stateless/01181_db_atomic_drop_on_cluster.sql +++ b/tests/queries/0_stateless/01181_db_atomic_drop_on_cluster.sql @@ -1,5 +1,4 @@ DROP TABLE IF EXISTS test_repl ON CLUSTER test_shard_localhost SYNC; - CREATE TABLE test_repl ON CLUSTER test_shard_localhost (n UInt64) ENGINE ReplicatedMergeTree('/clickhouse/test_01181/{database}/test_repl','r1') ORDER BY tuple(); DETACH TABLE test_repl ON CLUSTER test_shard_localhost SYNC; ATTACH TABLE test_repl ON CLUSTER test_shard_localhost; diff --git a/tests/queries/0_stateless/01231_markdown_format.reference b/tests/queries/0_stateless/01231_markdown_format.reference index e2ec03b401a..65838bfede7 100644 --- a/tests/queries/0_stateless/01231_markdown_format.reference +++ b/tests/queries/0_stateless/01231_markdown_format.reference @@ -1,5 +1,5 @@ -| id | name | array | -|-:|:-|:-:| -| 1 | name1 | [1,2,3] | -| 2 | name2 | [4,5,6] | -| 3 | name3 | [7,8,9] | +| id | name | array | nullable | low_cardinality | decimal | +|-:|:-|:-|:-|:-|-:| +| 1 | name1 | [1,2,3] | Some long string | name1 | 1.110000 | +| 2 | name2 | [4,5,60000] | \N | Another long string | 222.222222 | +| 30000 | One more long string | [7,8,9] | name3 | name3 | 3.330000 | diff --git a/tests/queries/0_stateless/01231_markdown_format.sql b/tests/queries/0_stateless/01231_markdown_format.sql index 693664be1ab..65c65389e12 100644 --- a/tests/queries/0_stateless/01231_markdown_format.sql +++ b/tests/queries/0_stateless/01231_markdown_format.sql @@ -1,6 +1,6 @@ DROP TABLE IF EXISTS makrdown; -CREATE TABLE markdown (id UInt32, name String, array Array(Int8)) ENGINE = Memory; -INSERT INTO markdown VALUES (1, 'name1', [1,2,3]), (2, 'name2', [4,5,6]), (3, 'name3', [7,8,9]); +CREATE TABLE markdown (id UInt32, name String, array Array(Int32), nullable Nullable(String), low_cardinality LowCardinality(String), decimal Decimal32(6)) ENGINE = Memory; +INSERT INTO markdown VALUES (1, 'name1', [1,2,3], 'Some long string', 'name1', 1.11), (2, 'name2', [4,5,60000], Null, 'Another long string', 222.222222), (30000, 'One more long string', [7,8,9], 'name3', 'name3', 3.33); SELECT * FROM markdown FORMAT Markdown; DROP TABLE IF EXISTS markdown diff --git a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.reference b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.reference index 09b593dad3d..97c766822ac 100644 --- a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.reference +++ b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.reference @@ -3,5 +3,3 @@ a a --- -a -a diff --git a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql index e3d66e9cdba..0eeb97e2b2d 100644 --- a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql +++ b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql @@ -43,7 +43,7 @@ SELECT * FROM d; SELECT '---'; INSERT INTO m VALUES ('b'); -SELECT v FROM d ORDER BY v; -- { clientError 36 } +SELECT toString(v) FROM (SELECT v FROM d ORDER BY v) FORMAT Null; -- { serverError 36 } DROP TABLE m; diff --git a/tests/queries/0_stateless/01269_create_with_null.reference b/tests/queries/0_stateless/01269_create_with_null.reference index 86be41bc06a..73f834da75a 
100644 --- a/tests/queries/0_stateless/01269_create_with_null.reference +++ b/tests/queries/0_stateless/01269_create_with_null.reference @@ -2,3 +2,6 @@ Nullable(Int32) Int32 Nullable(Int32) Int32 CREATE TABLE default.data_null\n(\n `a` Nullable(Int32),\n `b` Int32,\n `c` Nullable(Int32),\n `d` Int32\n)\nENGINE = Memory Nullable(Int32) Int32 Nullable(Int32) Nullable(Int32) CREATE TABLE default.set_null\n(\n `a` Nullable(Int32),\n `b` Int32,\n `c` Nullable(Int32),\n `d` Nullable(Int32)\n)\nENGINE = Memory +CREATE TABLE default.set_null\n(\n `a` Nullable(Int32),\n `b` Int32,\n `c` Nullable(Int32),\n `d` Nullable(Int32)\n)\nENGINE = Memory +CREATE TABLE default.cannot_be_nullable\n(\n `n` Nullable(Int8),\n `a` Array(UInt8)\n)\nENGINE = Memory +CREATE TABLE default.cannot_be_nullable\n(\n `n` Nullable(Int8),\n `a` Array(UInt8)\n)\nENGINE = Memory diff --git a/tests/queries/0_stateless/01269_create_with_null.sql b/tests/queries/0_stateless/01269_create_with_null.sql index 856b6ea75f4..faa6b84e9e4 100644 --- a/tests/queries/0_stateless/01269_create_with_null.sql +++ b/tests/queries/0_stateless/01269_create_with_null.sql @@ -1,5 +1,6 @@ DROP TABLE IF EXISTS data_null; DROP TABLE IF EXISTS set_null; +DROP TABLE IF EXISTS cannot_be_nullable; SET data_type_default_nullable='false'; @@ -45,6 +46,17 @@ INSERT INTO set_null VALUES (NULL, 2, NULL, NULL); SELECT toTypeName(a), toTypeName(b), toTypeName(c), toTypeName(d) FROM set_null; SHOW CREATE TABLE set_null; +DETACH TABLE set_null; +ATTACH TABLE set_null; +SHOW CREATE TABLE set_null; + +CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8)) ENGINE=Memory; -- { serverError 43 } +CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8) NOT NULL) ENGINE=Memory; +SHOW CREATE TABLE cannot_be_nullable; +DETACH TABLE cannot_be_nullable; +ATTACH TABLE cannot_be_nullable; +SHOW CREATE TABLE cannot_be_nullable; DROP TABLE data_null; DROP TABLE set_null; +DROP TABLE cannot_be_nullable; diff --git a/tests/queries/0_stateless/01271_show_privileges.reference b/tests/queries/0_stateless/01271_show_privileges.reference index 7928f531a7d..892bd95d2d9 100644 --- a/tests/queries/0_stateless/01271_show_privileges.reference +++ b/tests/queries/0_stateless/01271_show_privileges.reference @@ -28,7 +28,7 @@ ALTER TTL ['ALTER MODIFY TTL','MODIFY TTL'] TABLE ALTER TABLE ALTER MATERIALIZE TTL ['MATERIALIZE TTL'] TABLE ALTER TABLE ALTER SETTINGS ['ALTER SETTING','ALTER MODIFY SETTING','MODIFY SETTING'] TABLE ALTER TABLE ALTER MOVE PARTITION ['ALTER MOVE PART','MOVE PARTITION','MOVE PART'] TABLE ALTER TABLE -ALTER FETCH PARTITION ['FETCH PARTITION'] TABLE ALTER TABLE +ALTER FETCH PARTITION ['ALTER FETCH PART','FETCH PARTITION'] TABLE ALTER TABLE ALTER FREEZE PARTITION ['FREEZE PARTITION','UNFREEZE'] TABLE ALTER TABLE ALTER TABLE [] \N ALTER ALTER VIEW REFRESH ['ALTER LIVE VIEW REFRESH','REFRESH VIEW'] VIEW ALTER VIEW @@ -82,6 +82,7 @@ SYSTEM DROP CACHE ['DROP CACHE'] \N SYSTEM SYSTEM RELOAD CONFIG ['RELOAD CONFIG'] GLOBAL SYSTEM RELOAD SYSTEM RELOAD SYMBOLS ['RELOAD SYMBOLS'] GLOBAL SYSTEM RELOAD SYSTEM RELOAD DICTIONARY ['SYSTEM RELOAD DICTIONARIES','RELOAD DICTIONARY','RELOAD DICTIONARIES'] GLOBAL SYSTEM RELOAD +SYSTEM RELOAD MODEL ['SYSTEM RELOAD MODELS','RELOAD MODEL','RELOAD MODELS'] GLOBAL SYSTEM RELOAD SYSTEM RELOAD EMBEDDED DICTIONARIES ['RELOAD EMBEDDED DICTIONARIES'] GLOBAL SYSTEM RELOAD SYSTEM RELOAD [] \N SYSTEM SYSTEM MERGES ['SYSTEM STOP MERGES','SYSTEM START MERGES','STOP_MERGES','START MERGES'] TABLE SYSTEM diff --git 
a/tests/queries/0_stateless/01294_create_settings_profile.reference b/tests/queries/0_stateless/01294_create_settings_profile.reference index ab1b3833419..da47b084070 100644 --- a/tests/queries/0_stateless/01294_create_settings_profile.reference +++ b/tests/queries/0_stateless/01294_create_settings_profile.reference @@ -38,8 +38,12 @@ CREATE SETTINGS PROFILE s2_01294 SETTINGS max_memory_usage = 5000000 CREATE SETTINGS PROFILE s3_01294 TO ALL CREATE SETTINGS PROFILE s4_01294 TO ALL CREATE SETTINGS PROFILE s1_01294 SETTINGS max_memory_usage = 6000000 +CREATE SETTINGS PROFILE s2_01294 SETTINGS max_memory_usage = 6000000 +CREATE SETTINGS PROFILE s3_01294 TO ALL +CREATE SETTINGS PROFILE s4_01294 TO ALL +CREATE SETTINGS PROFILE s1_01294 SETTINGS max_memory_usage = 6000000 CREATE SETTINGS PROFILE s2_01294 SETTINGS max_memory_usage = 6000000 TO r1_01294 -CREATE SETTINGS PROFILE s3_01294 SETTINGS max_memory_usage = 6000000 TO r1_01294 +CREATE SETTINGS PROFILE s3_01294 TO r1_01294 CREATE SETTINGS PROFILE s4_01294 TO r1_01294 -- readonly ambiguity CREATE SETTINGS PROFILE s1_01294 SETTINGS readonly = 1 @@ -53,7 +57,8 @@ s1_01294 local directory 0 0 [] [] s2_01294 local directory 1 0 ['r1_01294'] [] s3_01294 local directory 1 0 ['r1_01294'] [] s4_01294 local directory 1 0 ['r1_01294'] [] -s5_01294 local directory 3 1 [] ['r1_01294'] +s5_01294 local directory 3 0 ['u1_01294'] [] +s6_01294 local directory 0 1 [] ['r1_01294','u1_01294'] -- system.settings_profile_elements s2_01294 \N \N 0 readonly 0 \N \N \N \N s3_01294 \N \N 0 max_memory_usage 5000000 4000000 6000000 1 \N diff --git a/tests/queries/0_stateless/01294_create_settings_profile.sql b/tests/queries/0_stateless/01294_create_settings_profile.sql index 9dbabd3f068..b7dd91ad6ed 100644 --- a/tests/queries/0_stateless/01294_create_settings_profile.sql +++ b/tests/queries/0_stateless/01294_create_settings_profile.sql @@ -82,7 +82,8 @@ SELECT '-- multiple profiles in one command'; CREATE PROFILE s1_01294, s2_01294 SETTINGS max_memory_usage=5000000; CREATE PROFILE s3_01294, s4_01294 TO ALL; SHOW CREATE PROFILE s1_01294, s2_01294, s3_01294, s4_01294; -ALTER PROFILE s1_01294, s2_01294, s3_01294 SETTINGS max_memory_usage=6000000; +ALTER PROFILE s1_01294, s2_01294 SETTINGS max_memory_usage=6000000; +SHOW CREATE PROFILE s1_01294, s2_01294, s3_01294, s4_01294; ALTER PROFILE s2_01294, s3_01294, s4_01294 TO r1_01294; SHOW CREATE PROFILE s1_01294, s2_01294, s3_01294, s4_01294; DROP PROFILE s1_01294, s2_01294, s3_01294, s4_01294; @@ -107,12 +108,13 @@ CREATE PROFILE s1_01294; CREATE PROFILE s2_01294 SETTINGS readonly=0 TO r1_01294;; CREATE PROFILE s3_01294 SETTINGS max_memory_usage=5000000 MIN 4000000 MAX 6000000 READONLY TO r1_01294; CREATE PROFILE s4_01294 SETTINGS max_memory_usage=5000000 TO r1_01294; -CREATE PROFILE s5_01294 SETTINGS INHERIT default, readonly=0, max_memory_usage MAX 6000000 WRITABLE TO ALL EXCEPT r1_01294; +CREATE PROFILE s5_01294 SETTINGS INHERIT default, readonly=0, max_memory_usage MAX 6000000 WRITABLE TO u1_01294; +CREATE PROFILE s6_01294 TO ALL EXCEPT u1_01294, r1_01294; SELECT name, storage, num_elements, apply_to_all, apply_to_list, apply_to_except FROM system.settings_profiles WHERE name LIKE 's%\_01294' ORDER BY name; SELECT '-- system.settings_profile_elements'; SELECT * FROM system.settings_profile_elements WHERE profile_name LIKE 's%\_01294' ORDER BY profile_name, index; -DROP PROFILE s1_01294, s2_01294, s3_01294, s4_01294, s5_01294; +DROP PROFILE s1_01294, s2_01294, s3_01294, s4_01294, s5_01294, s6_01294; DROP ROLE r1_01294; 
DROP USER u1_01294; diff --git a/tests/queries/0_stateless/01300_read_wkt.sql b/tests/queries/0_stateless/01300_read_wkt.sql index 590305fddae..8121bdf6084 100644 --- a/tests/queries/0_stateless/01300_read_wkt.sql +++ b/tests/queries/0_stateless/01300_read_wkt.sql @@ -26,3 +26,5 @@ INSERT INTO geo VALUES ('MULTIPOLYGON(((1 0,10 0,10 10,0 10,1 0),(4 4,5 4,5 5,4 INSERT INTO geo VALUES ('MULTIPOLYGON(((0 0,10 0,10 10,0 10,0 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))', 2); INSERT INTO geo VALUES ('MULTIPOLYGON(((2 0,10 0,10 10,0 10,2 0),(4 4,5 4,5 5,4 5,4 4)),((-10 -10,-10 -9,-9 10,-10 -10)))', 3); SELECT readWktMultiPolygon(s) FROM geo ORDER BY id; + +DROP TABLE geo; diff --git a/tests/queries/0_stateless/01300_svg.sql b/tests/queries/0_stateless/01300_svg.sql index 3e70182023b..a1deb1745c3 100644 --- a/tests/queries/0_stateless/01300_svg.sql +++ b/tests/queries/0_stateless/01300_svg.sql @@ -46,3 +46,5 @@ SELECT svg(p) FROM geo ORDER BY id; SELECT svg(p, 'b') FROM geo ORDER BY id; SELECT svg([[[(0., 0.), (10, 0), (10, 10), (0, 10)], [(4., 4.), (5, 4), (5, 5), (4, 5)]], [[(-10., -10.), (-10, -9), (-9, 10)]]], s) FROM geo ORDER BY id; SELECT svg(p, s) FROM geo ORDER BY id; + +DROP TABLE geo; diff --git a/tests/queries/0_stateless/01300_wkt.sql b/tests/queries/0_stateless/01300_wkt.sql index 7047bb698bb..00063d0a612 100644 --- a/tests/queries/0_stateless/01300_wkt.sql +++ b/tests/queries/0_stateless/01300_wkt.sql @@ -30,3 +30,5 @@ INSERT INTO geo VALUES ([[[(0, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), INSERT INTO geo VALUES ([[[(1, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], [[(-10, -10), (-10, -9), (-9, 10)]]], 2); INSERT INTO geo VALUES ([[[(2, 0), (10, 0), (10, 10), (0, 10)], [(4, 4), (5, 4), (5, 5), (4, 5)]], [[(-10, -10), (-10, -9), (-9, 10)]]], 3); SELECT wkt(p) FROM geo ORDER BY id; + +DROP TABLE geo; diff --git a/tests/queries/0_stateless/01302_polygons_distance.sql b/tests/queries/0_stateless/01302_polygons_distance.sql index fdbd0254983..a69b5017a5f 100644 --- a/tests/queries/0_stateless/01302_polygons_distance.sql +++ b/tests/queries/0_stateless/01302_polygons_distance.sql @@ -6,3 +6,5 @@ drop table if exists polygon_01302; create table polygon_01302 (x Array(Array(Array(Tuple(Float64, Float64)))), y Array(Array(Array(Tuple(Float64, Float64))))) engine=Memory(); insert into polygon_01302 values ([[[(23.725750, 37.971536)]]], [[[(4.3826169, 50.8119483)]]]); select polygonsDistanceSpherical(x, y) from polygon_01302; + +drop table polygon_01302; diff --git a/tests/queries/0_stateless/01305_replica_create_drop_zookeeper.sh b/tests/queries/0_stateless/01305_replica_create_drop_zookeeper.sh index 01bb9af461c..6248813c9ba 100755 --- a/tests/queries/0_stateless/01305_replica_create_drop_zookeeper.sh +++ b/tests/queries/0_stateless/01305_replica_create_drop_zookeeper.sh @@ -8,24 +8,13 @@ set -e function thread() { - db_engine=`$CLICKHOUSE_CLIENT -q "SELECT engine FROM system.databases WHERE name='$CLICKHOUSE_DATABASE'"` - if [[ $db_engine == "Atomic" ]]; then - # Ignore "Replica already exists" exception - while true; do - $CLICKHOUSE_CLIENT -n -q "DROP TABLE IF EXISTS test_table_$1 NO DELAY; - CREATE TABLE test_table_$1 (a UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/alter_table', 'r_$1') ORDER BY tuple();" 2>&1 | - grep -vP '(^$)|(^Received exception from server)|(^\d+\. 
)|because the last replica of the table was dropped right now|is already started to be removing by another replica right now|is already finished removing by another replica right now|Removing leftovers from table|Another replica was suddenly created|was successfully removed from ZooKeeper|was created by another server at the same moment|was suddenly removed|some other replicas were created at the same time|already exists' - done - else - while true; do - $CLICKHOUSE_CLIENT -n -q "DROP TABLE IF EXISTS test_table_$1; - CREATE TABLE test_table_$1 (a UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/alter_table', 'r_$1') ORDER BY tuple();" 2>&1 | - grep -vP '(^$)|(^Received exception from server)|(^\d+\. )|because the last replica of the table was dropped right now|is already started to be removing by another replica right now|is already finished removing by another replica right now|Removing leftovers from table|Another replica was suddenly created|was successfully removed from ZooKeeper|was created by another server at the same moment|was suddenly removed|some other replicas were created at the same time' - done - fi + while true; do + $CLICKHOUSE_CLIENT -n -q "DROP TABLE IF EXISTS test_table_$1 SYNC; + CREATE TABLE test_table_$1 (a UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/alter_table', 'r_$1') ORDER BY tuple();" 2>&1 | + grep -vP '(^$)|(^Received exception from server)|(^\d+\. )|because the last replica of the table was dropped right now|is already started to be removing by another replica right now| were removed by another replica|Removing leftovers from table|Another replica was suddenly created|was created by another server at the same moment|was suddenly removed|some other replicas were created at the same time' + done } - # https://stackoverflow.com/questions/9954794/execute-a-shell-function-with-timeout export -f thread; diff --git a/tests/queries/0_stateless/01318_long_unsuccessful_mutation_zookeeper.sh b/tests/queries/0_stateless/01318_long_unsuccessful_mutation_zookeeper.sh index a05304c670c..13250e82079 100755 --- a/tests/queries/0_stateless/01318_long_unsuccessful_mutation_zookeeper.sh +++ b/tests/queries/0_stateless/01318_long_unsuccessful_mutation_zookeeper.sh @@ -47,7 +47,7 @@ done echo "$query_result" -$CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='$first_mutation_id'" +$CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='$first_mutation_id' and database='$CLICKHOUSE_DATABASE'" check_query="SELECT sum(parts_to_do) FROM system.mutations WHERE table='mutation_table' and database='$CLICKHOUSE_DATABASE'" diff --git a/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect b/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect index 50ef009dee9..a6d52b39918 100755 --- a/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect +++ b/tests/queries/0_stateless/01370_client_autocomplete_word_break_characters.expect @@ -23,7 +23,7 @@ set is_done 0 while {$is_done == 0} { send -- "\t" expect { - "_connections" { + "_" { set is_done 1 } default { diff --git a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.reference b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.reference index 44e0be8e356..bb0b1cf658d 100644 --- a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.reference +++ 
b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.reference @@ -1,4 +1,3 @@ 0 0 0 -0 diff --git a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql index 216f43c4285..c2191d6ab96 100644 --- a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql +++ b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql @@ -12,7 +12,7 @@ SETTINGS index_granularity = 8192; INSERT INTO t0 VALUES (0, 0); SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND -1524532316)); -SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND -1.0)); +SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND -1.0)); -- { serverError 70 } SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND inf)); SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND nan)); diff --git a/tests/queries/0_stateless/01461_query_start_time_microseconds.sql b/tests/queries/0_stateless/01461_query_start_time_microseconds.sql index 678b9b3d85e..be1d9897053 100644 --- a/tests/queries/0_stateless/01461_query_start_time_microseconds.sql +++ b/tests/queries/0_stateless/01461_query_start_time_microseconds.sql @@ -7,6 +7,8 @@ WITH ( SELECT query_start_time_microseconds FROM system.query_log WHERE current_database = currentDatabase() + AND query like 'SELECT \'01461_query%' + AND event_date >= yesterday() ORDER BY query_start_time DESC LIMIT 1 ) AS time_with_microseconds, @@ -14,6 +16,8 @@ WITH ( SELECT query_start_time FROM system.query_log WHERE current_database = currentDatabase() + AND query like 'SELECT \'01461_query%' + AND event_date >= yesterday() ORDER BY query_start_time DESC LIMIT 1 ) AS t) @@ -27,6 +31,8 @@ WITH ( SELECT query_start_time_microseconds FROM system.query_thread_log WHERE current_database = currentDatabase() + AND query like 'SELECT \'01461_query%' + AND event_date >= yesterday() ORDER BY query_start_time DESC LIMIT 1 ) AS time_with_microseconds, @@ -34,6 +40,8 @@ WITH ( SELECT query_start_time FROM system.query_thread_log WHERE current_database = currentDatabase() + AND query like 'SELECT \'01461_query%' + AND event_date >= yesterday() ORDER BY query_start_time DESC LIMIT 1 ) AS t) diff --git a/tests/queries/0_stateless/01483_merge_table_join_and_group_by.reference b/tests/queries/0_stateless/01483_merge_table_join_and_group_by.reference index b2c3ea56b7f..4261ccd8a1f 100644 --- a/tests/queries/0_stateless/01483_merge_table_join_and_group_by.reference +++ b/tests/queries/0_stateless/01483_merge_table_join_and_group_by.reference @@ -5,3 +5,5 @@ 1 0 1 0 1 +1 0 +1 diff --git a/tests/queries/0_stateless/01483_merge_table_join_and_group_by.sql b/tests/queries/0_stateless/01483_merge_table_join_and_group_by.sql index a6678ca9040..68b4e7d4015 100644 --- a/tests/queries/0_stateless/01483_merge_table_join_and_group_by.sql +++ b/tests/queries/0_stateless/01483_merge_table_join_and_group_by.sql @@ -17,6 +17,9 @@ SELECT ID FROM m INNER JOIN b USING(key) GROUP BY ID; SELECT * FROM m INNER JOIN b USING(key) WHERE ID = 1 HAVING ID = 1 ORDER BY ID; SELECT * FROM m INNER JOIN b USING(key) WHERE ID = 1 GROUP BY ID, key HAVING ID = 1 ORDER BY ID; +SELECT sum(b.ID), sum(m.key) FROM m FULL JOIN b ON (m.key == b.key) GROUP BY key; +SELECT sum(b.ID + m.key) FROM m FULL JOIN b ON (m.key == b.key) GROUP BY key; + DROP TABLE IF EXISTS a; DROP TABLE IF EXISTS b; DROP TABLE IF EXISTS m; diff --git a/tests/queries/0_stateless/01508_partition_pruning_long.reference 
b/tests/queries/0_stateless/01508_partition_pruning_long.reference index 70f529c6058..334ecb63164 100644 --- a/tests/queries/0_stateless/01508_partition_pruning_long.reference +++ b/tests/queries/0_stateless/01508_partition_pruning_long.reference @@ -5,11 +5,11 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar select uniqExact(_part), count() from tMM where toDate(d)=toDate('2020-09-01'); 2 2880 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toDate(d)=toDate('2020-10-15'); 1 1440 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where toDate(d)='2020-09-15'; 0 0 @@ -17,27 +17,27 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar select uniqExact(_part), count() from tMM where toYYYYMM(d)=202009; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMMDD(d)=20200816; 2 2880 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMMDD(d)=20201015; 1 1440 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where toDate(d)='2020-10-15'; 1 1440 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where d >= '2020-09-01 00:00:00' and d<'2020-10-15 00:00:00'; 3 15000 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from tMM where d >= '2020-01-16 00:00:00' and d < toDateTime('2021-08-17 00:00:00'); 6 30000 -Selected 6/6 parts by partition key, 6 parts by primary key, 6/12 marks by primary key, 6 marks to read from 6 ranges +Selected 6/6 parts by partition key, 6 parts by primary key, 6/6 marks by primary key, 6 marks to read from 6 ranges select uniqExact(_part), count() from tMM where d >= '2020-09-16 00:00:00' and d < toDateTime('2020-10-01 00:00:00'); 0 0 @@ -45,117 +45,117 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar select uniqExact(_part), count() from tMM where d >= '2020-09-12 00:00:00' and d < '2020-10-16 00:00:00'; 2 6440 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by 
primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toStartOfDay(d) >= '2020-09-12 00:00:00'; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toStartOfDay(d) = '2020-09-01 00:00:00'; 2 2880 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toStartOfDay(d) = '2020-10-01 00:00:00'; 1 1440 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where toStartOfDay(d) >= '2020-09-15 00:00:00' and d < '2020-10-16 00:00:00'; 2 6440 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d) between 202009 and 202010; 4 20000 -Selected 4/6 parts by partition key, 4 parts by primary key, 4/8 marks by primary key, 4 marks to read from 4 ranges +Selected 4/6 parts by partition key, 4 parts by primary key, 4/4 marks by primary key, 4 marks to read from 4 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d) between 202009 and 202009; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d) between 202009 and 202010 and toStartOfDay(d) = '2020-10-01 00:00:00'; 1 1440 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d) >= 202009 and toStartOfDay(d) < '2020-10-02 00:00:00'; 3 11440 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d) > 202009 and toStartOfDay(d) < '2020-10-02 00:00:00'; 1 1440 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d)+1 > 202009 and toStartOfDay(d) < '2020-10-02 00:00:00'; 3 11440 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 
3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d)+1 > 202010 and toStartOfDay(d) < '2020-10-02 00:00:00'; 1 1440 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d)+1 > 202010; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d-1)+1 = 202010; 3 9999 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from tMM where toStartOfMonth(d) >= '2020-09-15'; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toStartOfMonth(d) >= '2020-09-01'; 4 20000 -Selected 4/6 parts by partition key, 4 parts by primary key, 4/8 marks by primary key, 4 marks to read from 4 ranges +Selected 4/6 parts by partition key, 4 parts by primary key, 4/4 marks by primary key, 4 marks to read from 4 ranges select uniqExact(_part), count() from tMM where toStartOfMonth(d) >= '2020-09-01' and toStartOfMonth(d) < '2020-10-01'; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d-1)+1 = 202010; 2 9999 -Selected 2/3 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/3 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d)+1 > 202010; 1 10000 -Selected 1/3 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/3 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from tMM where toYYYYMM(d) between 202009 and 202010; 2 20000 -Selected 2/3 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/3 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges --------- tDD ---------------------------- select uniqExact(_part), count() from tDD where toDate(d)=toDate('2020-09-24'); 1 10000 -Selected 1/4 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/4 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() FROM tDD WHERE toDate(d) = toDate('2020-09-24'); 1 10000 -Selected 1/4 parts by partition key, 1 
parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/4 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() FROM tDD WHERE toDate(d) = '2020-09-24'; 1 10000 -Selected 1/4 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/4 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() FROM tDD WHERE toDate(d) >= '2020-09-23' and toDate(d) <= '2020-09-26'; 3 40000 -Selected 3/4 parts by partition key, 3 parts by primary key, 4/7 marks by primary key, 4 marks to read from 3 ranges +Selected 3/4 parts by partition key, 3 parts by primary key, 4/4 marks by primary key, 4 marks to read from 3 ranges select uniqExact(_part), count() FROM tDD WHERE toYYYYMMDD(d) >= 20200923 and toDate(d) <= '2020-09-26'; 3 40000 -Selected 3/4 parts by partition key, 3 parts by primary key, 4/7 marks by primary key, 4 marks to read from 3 ranges +Selected 3/4 parts by partition key, 3 parts by primary key, 4/4 marks by primary key, 4 marks to read from 3 ranges --------- sDD ---------------------------- select uniqExact(_part), count() from sDD; 6 30000 -Selected 6/6 parts by partition key, 6 parts by primary key, 6/12 marks by primary key, 6 marks to read from 6 ranges +Selected 6/6 parts by partition key, 6 parts by primary key, 6/6 marks by primary key, 6 marks to read from 6 ranges select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC')-1)+1 = 202010; 3 9999 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC')-1) = 202010; 2 9999 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC')-1) = 202110; 0 0 @@ -163,52 +163,52 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC'))+1 > 202009 and toStartOfDay(toDateTime(intDiv(d,1000),'UTC')) < toDateTime('2020-10-02 00:00:00','UTC'); 3 11440 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from sDD where toYYYYMM(toDateTime(intDiv(d,1000),'UTC'))+1 > 202009 and toDateTime(intDiv(d,1000),'UTC') < toDateTime('2020-10-01 00:00:00','UTC'); 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from sDD where d >= 1598918400000; 4 20000 -Selected 4/6 parts by partition key, 4 parts by primary key, 4/8 marks by primary key, 4 marks to read from 4 ranges +Selected 4/6 parts by partition key, 4 parts 
by primary key, 4/4 marks by primary key, 4 marks to read from 4 ranges select uniqExact(_part), count() from sDD where d >= 1598918400000 and toYYYYMM(toDateTime(intDiv(d,1000),'UTC')-1) < 202010; 3 10001 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges --------- xMM ---------------------------- select uniqExact(_part), count() from xMM where toStartOfDay(d) >= '2020-10-01 00:00:00'; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d <= '2020-10-01 00:00:00'; 3 10001 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d < '2020-10-01 00:00:00'; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d <= '2020-10-01 00:00:00' and a=1; 1 1 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d <= '2020-10-01 00:00:00' and a<>3; 2 5001 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d < '2020-10-01 00:00:00' and a<>3; 1 5000 -Selected 1/6 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/6 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d < '2020-11-01 00:00:00' and a = 1; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where a = 1; 3 15000 -Selected 3/6 parts by partition key, 3 parts by primary key, 3/6 marks by primary key, 3 marks to read from 3 ranges +Selected 3/6 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges select uniqExact(_part), count() from xMM where a = 66; 0 0 @@ -216,29 +216,29 @@ Selected 0/6 parts by partition key, 0 parts by primary key, 0/0 marks by primar select uniqExact(_part), count() from xMM where a <> 66; 6 30000 -Selected 6/6 parts by partition key, 6 parts by primary key, 6/12 marks by 
primary key, 6 marks to read from 6 ranges +Selected 6/6 parts by partition key, 6 parts by primary key, 6/6 marks by primary key, 6 marks to read from 6 ranges select uniqExact(_part), count() from xMM where a = 2; 2 10000 -Selected 2/6 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/6 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where a = 1; 2 15000 -Selected 2/5 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/5 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where toStartOfDay(d) >= '2020-10-01 00:00:00'; 1 10000 -Selected 1/5 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/5 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges select uniqExact(_part), count() from xMM where a <> 66; 5 30000 -Selected 5/5 parts by partition key, 5 parts by primary key, 5/10 marks by primary key, 5 marks to read from 5 ranges +Selected 5/5 parts by partition key, 5 parts by primary key, 5/5 marks by primary key, 5 marks to read from 5 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d <= '2020-10-01 00:00:00' and a<>3; 2 5001 -Selected 2/5 parts by partition key, 2 parts by primary key, 2/4 marks by primary key, 2 marks to read from 2 ranges +Selected 2/5 parts by partition key, 2 parts by primary key, 2/2 marks by primary key, 2 marks to read from 2 ranges select uniqExact(_part), count() from xMM where d >= '2020-09-01 00:00:00' and d < '2020-10-01 00:00:00' and a<>3; 1 5000 -Selected 1/5 parts by partition key, 1 parts by primary key, 1/2 marks by primary key, 1 marks to read from 1 ranges +Selected 1/5 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges diff --git a/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas.sql b/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas.sql index 1b680cf26c1..16c4a4df936 100644 --- a/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas.sql +++ b/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas.sql @@ -1,16 +1,16 @@ -DROP TABLE IF EXISTS r1; -DROP TABLE IF EXISTS r2; +DROP TABLE IF EXISTS r1 SYNC; +DROP TABLE IF EXISTS r2 SYNC; CREATE TABLE r1 ( key UInt64, value String ) -ENGINE = ReplicatedMergeTree('/clickhouse/01509_no_repliacs', '1') +ENGINE = ReplicatedMergeTree('/clickhouse/01509_parallel_quorum_insert_no_replicas', '1') ORDER BY tuple(); CREATE TABLE r2 ( key UInt64, value String ) -ENGINE = ReplicatedMergeTree('/clickhouse/01509_no_repliacs', '2') +ENGINE = ReplicatedMergeTree('/clickhouse/01509_parallel_quorum_insert_no_replicas', '2') ORDER BY tuple(); SET insert_quorum_parallel=1; @@ -18,8 +18,13 @@ SET insert_quorum_parallel=1; SET insert_quorum=3; INSERT INTO r1 VALUES(1, '1'); --{serverError 285} +-- retry should still fail despite the insert_deduplicate enabled +INSERT INTO r1 VALUES(1, '1'); --{serverError 285} +INSERT INTO r1 VALUES(1, '1'); --{serverError 285} + SELECT 'insert to two replicas works'; SET insert_quorum=2, insert_quorum_parallel=1; + INSERT INTO r1 VALUES(1, '1'); SELECT COUNT() FROM r1; @@ -29,12 +34,18 @@ DETACH TABLE r2; INSERT INTO 
r1 VALUES(2, '2'); --{serverError 285} +-- retry should fail despite the insert_deduplicate enabled +INSERT INTO r1 VALUES(2, '2'); --{serverError 285} +INSERT INTO r1 VALUES(2, '2'); --{serverError 285} + SET insert_quorum=1, insert_quorum_parallel=1; SELECT 'insert to single replica works'; INSERT INTO r1 VALUES(2, '2'); ATTACH TABLE r2; +INSERT INTO r2 VALUES(2, '2'); + SYSTEM SYNC REPLICA r2; SET insert_quorum=2, insert_quorum_parallel=1; @@ -47,6 +58,17 @@ SELECT COUNT() FROM r2; SELECT 'deduplication works'; INSERT INTO r2 VALUES(3, '3'); +-- still works if we relax quorum +SET insert_quorum=1, insert_quorum_parallel=1; +INSERT INTO r2 VALUES(3, '3'); +INSERT INTO r1 VALUES(3, '3'); +-- will start failing if we increase quorum +SET insert_quorum=3, insert_quorum_parallel=1; +INSERT INTO r1 VALUES(3, '3'); --{serverError 285} +-- work back ok when quorum=2 +SET insert_quorum=2, insert_quorum_parallel=1; +INSERT INTO r2 VALUES(3, '3'); + SELECT COUNT() FROM r1; SELECT COUNT() FROM r2; @@ -56,8 +78,18 @@ SET insert_quorum_timeout=0; INSERT INTO r1 VALUES (4, '4'); -- { serverError 319 } +-- retry should fail despite the insert_deduplicate enabled +INSERT INTO r1 VALUES (4, '4'); -- { serverError 319 } +INSERT INTO r1 VALUES (4, '4'); -- { serverError 319 } +SELECT * FROM r2 WHERE key=4; + SYSTEM START FETCHES r2; +SET insert_quorum_timeout=6000000; + +-- now retry should be successful +INSERT INTO r1 VALUES (4, '4'); + SYSTEM SYNC REPLICA r2; SELECT 'insert happened'; diff --git a/tests/queries/0_stateless/01551_mergetree_read_in_order_spread.reference b/tests/queries/0_stateless/01551_mergetree_read_in_order_spread.reference index becc626c1bb..835e2af269a 100644 --- a/tests/queries/0_stateless/01551_mergetree_read_in_order_spread.reference +++ b/tests/queries/0_stateless/01551_mergetree_read_in_order_spread.reference @@ -13,16 +13,16 @@ ExpressionTransform (MergingSorted) (Expression) ExpressionTransform - (ReadFromStorage) + (ReadFromMergeTree) MergeTree 0 → 1 (MergingSorted) MergingSortedTransform 2 → 1 (Expression) ExpressionTransform × 2 - (ReadFromStorage) + (ReadFromMergeTree) MergeTree × 2 0 → 1 (MergingSorted) (Expression) ExpressionTransform - (ReadFromStorage) + (ReadFromMergeTree) MergeTree 0 → 1 diff --git a/tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference b/tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference index a1a1814a581..0eb7e06f724 100644 --- a/tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference +++ b/tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference @@ -11,7 +11,7 @@ Expression (Projection) PartialSorting (Sort each block for ORDER BY) Expression (Before ORDER BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree) + ReadFromMergeTree SELECT timestamp, key @@ -23,7 +23,7 @@ Expression (Projection) FinishSorting Expression (Before ORDER BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree SELECT timestamp, key @@ -37,7 +37,7 @@ Expression (Projection) FinishSorting Expression (Before ORDER BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree SELECT timestamp, key diff --git a/tests/queries/0_stateless/01566_negate_formatting.reference b/tests/queries/0_stateless/01566_negate_formatting.reference index 
b955d4cbbc5..69d79cf929a 100644 --- a/tests/queries/0_stateless/01566_negate_formatting.reference +++ b/tests/queries/0_stateless/01566_negate_formatting.reference @@ -9,12 +9,26 @@ SELECT explain syntax select negate(1.), negate(-1.), - -1., -(-1.), (-1.) in (-1.); SELECT -1., - 1, - 1, - 1, + 1., + 1., + 1., -1. IN (-1.) explain syntax select negate(-9223372036854775808), -(-9223372036854775808), - -9223372036854775808; SELECT -9223372036854775808, -9223372036854775808, -9223372036854775808 +explain syntax select negate(0), negate(-0), - -0, -(-0), (-0) in (-0); +SELECT + 0, + 0, + 0, + 0, + 0 IN (0) +explain syntax select negate(0.), negate(-0.), - -0., -(-0.), (-0.) in (-0.); +SELECT + -0., + 0., + 0., + 0., + -0. IN (-0.) diff --git a/tests/queries/0_stateless/01566_negate_formatting.sql b/tests/queries/0_stateless/01566_negate_formatting.sql index 035ff80e8d8..65e983fbdd1 100644 --- a/tests/queries/0_stateless/01566_negate_formatting.sql +++ b/tests/queries/0_stateless/01566_negate_formatting.sql @@ -2,3 +2,5 @@ explain syntax select negate(1), negate(-1), - -1, -(-1), (-1) in (-1); explain syntax select negate(1.), negate(-1.), - -1., -(-1.), (-1.) in (-1.); explain syntax select negate(-9223372036854775808), -(-9223372036854775808), - -9223372036854775808; +explain syntax select negate(0), negate(-0), - -0, -(-0), (-0) in (-0); +explain syntax select negate(0.), negate(-0.), - -0., -(-0.), (-0.) in (-0.); diff --git a/tests/queries/0_stateless/01568_window_functions_distributed.reference b/tests/queries/0_stateless/01568_window_functions_distributed.reference new file mode 100644 index 00000000000..b441189303d --- /dev/null +++ b/tests/queries/0_stateless/01568_window_functions_distributed.reference @@ -0,0 +1,5 @@ +-- { echo } +set allow_experimental_window_functions = 1; +select row_number() over (order by dummy) from (select * from remote('127.0.0.{1,2}', system, one)); +1 +2 diff --git a/tests/queries/0_stateless/01568_window_functions_distributed.sql b/tests/queries/0_stateless/01568_window_functions_distributed.sql new file mode 100644 index 00000000000..754b996e00c --- /dev/null +++ b/tests/queries/0_stateless/01568_window_functions_distributed.sql @@ -0,0 +1,4 @@ +-- { echo } +set allow_experimental_window_functions = 1; + +select row_number() over (order by dummy) from (select * from remote('127.0.0.{1,2}', system, one)); diff --git a/tests/queries/0_stateless/01576_alias_column_rewrite.reference b/tests/queries/0_stateless/01576_alias_column_rewrite.reference index 334ebc7eb1f..c5679544e1d 100644 --- a/tests/queries/0_stateless/01576_alias_column_rewrite.reference +++ b/tests/queries/0_stateless/01576_alias_column_rewrite.reference @@ -28,47 +28,47 @@ Expression (Projection) PartialSorting (Sort each block for ORDER BY) Expression ((Before ORDER BY + Add table aliases)) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree) + ReadFromMergeTree Expression (Projection) Limit (preliminary LIMIT) FinishSorting Expression ((Before ORDER BY + Add table aliases)) SettingQuotaAndLimits (Set limits and quota after reading from storage) Union - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree + ReadFromMergeTree + ReadFromMergeTree Expression (Projection) Limit (preliminary LIMIT) FinishSorting Expression (Before ORDER BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) Union - ReadFromStorage (MergeTree with order) - 
ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree + ReadFromMergeTree + ReadFromMergeTree optimize_aggregation_in_order Expression ((Projection + Before ORDER BY)) Aggregating Expression ((Before GROUP BY + Add table aliases)) SettingQuotaAndLimits (Set limits and quota after reading from storage) - ReadFromStorage (MergeTree) + ReadFromMergeTree Expression ((Projection + Before ORDER BY)) Aggregating Expression ((Before GROUP BY + Add table aliases)) SettingQuotaAndLimits (Set limits and quota after reading from storage) Union - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree + ReadFromMergeTree + ReadFromMergeTree Expression ((Projection + Before ORDER BY)) Aggregating Expression (Before GROUP BY) SettingQuotaAndLimits (Set limits and quota after reading from storage) Union - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) - ReadFromStorage (MergeTree with order) + ReadFromMergeTree + ReadFromMergeTree + ReadFromMergeTree second-index 1 1 diff --git a/tests/queries/0_stateless/01591_window_functions.reference b/tests/queries/0_stateless/01591_window_functions.reference index 9067ee8d955..21a2e72fea4 100644 --- a/tests/queries/0_stateless/01591_window_functions.reference +++ b/tests/queries/0_stateless/01591_window_functions.reference @@ -942,8 +942,9 @@ FROM numbers(2) ; 1 0 1 1 --- optimize_read_in_order conflicts with sorting for window functions, must --- be disabled. +-- optimize_read_in_order conflicts with sorting for window functions, check that +-- it is disabled. +drop table if exists window_mt; create table window_mt engine MergeTree order by number as select number, mod(number, 3) p from numbers(100); select number, count(*) over (partition by p) @@ -1096,7 +1097,7 @@ select count() over (order by toInt64(number) range between -1 preceding and unb select count() over (order by toInt64(number) range between -1 following and unbounded following) from numbers(1); -- { serverError 36 } select count() over (order by toInt64(number) range between unbounded preceding and -1 preceding) from numbers(1); -- { serverError 36 } select count() over (order by toInt64(number) range between unbounded preceding and -1 following) from numbers(1); -- { serverError 36 } ----- a test with aggregate function that allocates memory in arena +-- a test with aggregate function that allocates memory in arena select sum(a[length(a)]) from ( select groupArray(number) over (partition by modulo(number, 11) @@ -1104,3 +1105,7 @@ from ( from numbers_mt(10000) ) settings max_block_size = 7; 49995000 +-- -INT_MIN row offset that can lead to problems with negation, found when fuzzing +-- under UBSan. Should be limited to at most INT_MAX. +select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError 36 } +drop table window_mt; diff --git a/tests/queries/0_stateless/01591_window_functions.sql b/tests/queries/0_stateless/01591_window_functions.sql index 85856dd797d..afbf26d0b5c 100644 --- a/tests/queries/0_stateless/01591_window_functions.sql +++ b/tests/queries/0_stateless/01591_window_functions.sql @@ -329,8 +329,9 @@ SELECT FROM numbers(2) ; --- optimize_read_in_order conflicts with sorting for window functions, must --- be disabled. +-- optimize_read_in_order conflicts with sorting for window functions, check that +-- it is disabled. 
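-- Editor's note: the following is an illustrative sketch, not part of the original diff.
-- It spells out the situation the comment above describes: when the window ordering does
-- not match the table's ORDER BY key, reading in primary-key order cannot be used to skip
-- the sort required by the window definition (table and column names are hypothetical).
set allow_experimental_window_functions = 1;
drop table if exists rio_example;
create table rio_example engine = MergeTree order by number
    as select number, number % 3 as p from numbers(100);
-- The window is partitioned by p, so rows must be re-sorted even though the table is
-- already ordered by number; optimize_read_in_order alone cannot provide this ordering.
select number, count(*) over (partition by p order by number) from rio_example
    order by p, number settings optimize_read_in_order = 1;
drop table rio_example;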
+drop table if exists window_mt; create table window_mt engine MergeTree order by number as select number, mod(number, 3) p from numbers(100); @@ -402,10 +403,16 @@ select count() over (order by toInt64(number) range between -1 following and unb select count() over (order by toInt64(number) range between unbounded preceding and -1 preceding) from numbers(1); -- { serverError 36 } select count() over (order by toInt64(number) range between unbounded preceding and -1 following) from numbers(1); -- { serverError 36 } ----- a test with aggregate function that allocates memory in arena +-- a test with aggregate function that allocates memory in arena select sum(a[length(a)]) from ( select groupArray(number) over (partition by modulo(number, 11) order by modulo(number, 1111), number) a from numbers_mt(10000) ) settings max_block_size = 7; + +-- -INT_MIN row offset that can lead to problems with negation, found when fuzzing +-- under UBSan. Should be limited to at most INT_MAX. +select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError 36 } + +drop table window_mt; diff --git a/tests/queries/0_stateless/01598_memory_limit_zeros.sql b/tests/queries/0_stateless/01598_memory_limit_zeros.sql index e90d7bbccb7..a07ce0bcca3 100644 --- a/tests/queries/0_stateless/01598_memory_limit_zeros.sql +++ b/tests/queries/0_stateless/01598_memory_limit_zeros.sql @@ -1,2 +1,2 @@ -SET max_memory_usage = 1; +SET max_memory_usage = 1, max_untracked_memory = 1000000; select 'test', count(*) from zeros_mt(1000000) where not ignore(zero); -- { serverError 241 } diff --git a/tests/queries/0_stateless/01602_max_distributed_connections.reference b/tests/queries/0_stateless/01602_max_distributed_connections.reference index e69de29bb2d..7326d960397 100644 --- a/tests/queries/0_stateless/01602_max_distributed_connections.reference +++ b/tests/queries/0_stateless/01602_max_distributed_connections.reference @@ -0,0 +1 @@ +Ok diff --git a/tests/queries/0_stateless/01602_max_distributed_connections.sh b/tests/queries/0_stateless/01602_max_distributed_connections.sh index 93c6071c091..772acb39344 100755 --- a/tests/queries/0_stateless/01602_max_distributed_connections.sh +++ b/tests/queries/0_stateless/01602_max_distributed_connections.sh @@ -4,13 +4,31 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CURDIR"/../shell_config.sh -common_opts=( - "--format=Null" +# We check that even if max_threads is small, the setting max_distributed_connections +# will allow to process queries on multiple shards concurrently. - "--max_threads=1" - "--max_distributed_connections=3" -) +# We do sleep 1.5 seconds on ten machines. +# If concurrency is one (bad) the query will take at least 15 seconds and the following loops are guaranteed to be infinite. +# If concurrency is 10 (good), the query may take less than 10 second with non-zero probability +# and the following loops will finish with probability 1 assuming independent random variables. -# NOTE: the test use higher timeout to avoid flakiness. 
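# Editor's note: illustrative sketch, not part of the original diff. Outside the test
# harness, the effect described in the comment above can be reproduced directly
# (assumes a local server reachable through clickhouse-client):
clickhouse-client --max_threads 1 --max_distributed_connections 10 --query "
    SELECT sleep(1.5) FROM remote('127.{1..10}', system.one) FORMAT Null"
# With 10 connections the ten shards sleep in parallel and the query finishes in roughly
# 1.5 seconds; with --max_distributed_connections 1 the shards are processed one after
# another and the same query takes about 15 seconds.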
-timeout 9s ${CLICKHOUSE_CLIENT} "$@" "${common_opts[@]}" -q "select sleep(3) from remote('127.{1,2,3,4,5}', system.one)" --prefer_localhost_replica=0 -timeout 9s ${CLICKHOUSE_CLIENT} "$@" "${common_opts[@]}" -q "select sleep(3) from remote('127.{1,2,3,4,5}', system.one)" --prefer_localhost_replica=1 +while true; do + timeout 10 ${CLICKHOUSE_CLIENT} --max_threads 1 --max_distributed_connections 10 --query " + SELECT sleep(1.5) FROM remote('127.{1..10}', system.one) FORMAT Null" --prefer_localhost_replica=0 && break +done + +while true; do + timeout 10 ${CLICKHOUSE_CLIENT} --max_threads 1 --max_distributed_connections 10 --query " + SELECT sleep(1.5) FROM remote('127.{1..10}', system.one) FORMAT Null" --prefer_localhost_replica=1 && break +done + +# If max_distributed_connections is low and async_socket_for_remote is disabled, +# the concurrency of distributed queries will be also low. + +timeout 1 ${CLICKHOUSE_CLIENT} --max_threads 1 --max_distributed_connections 1 --async_socket_for_remote 0 --query " + SELECT sleep(0.15) FROM remote('127.{1..10}', system.one) FORMAT Null" --prefer_localhost_replica=0 && echo 'Fail' + +timeout 1 ${CLICKHOUSE_CLIENT} --max_threads 1 --max_distributed_connections 1 --async_socket_for_remote 0 --query " + SELECT sleep(0.15) FROM remote('127.{1..10}', system.one) FORMAT Null" --prefer_localhost_replica=1 && echo 'Fail' + +echo 'Ok' diff --git a/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.reference b/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.reference deleted file mode 100644 index 19487c9f942..00000000000 --- a/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.reference +++ /dev/null @@ -1,140 +0,0 @@ ----------Q1---------- -2 2 2 20 -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON a = table2.a -WHERE table2.b = toUInt32(20) ----------Q2---------- -2 2 2 20 -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON a = table2.a -WHERE (table2.a < table2.b) AND (table2.b = toUInt32(20)) ----------Q3---------- ----------Q4---------- -6 40 -SELECT - a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON a = toUInt32(10 - table2.a) -WHERE (b = 6) AND (table2.b > 20) ----------Q5---------- -SELECT - a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 - WHERE 0 -) AS table2 ON a = table2.a -WHERE 0 ----------Q6---------- ----------Q7---------- -0 0 0 0 -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL INNER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON a = table2.a -WHERE (table2.b < toUInt32(40)) AND (b < 1) ----------Q8---------- ----------Q9---will not be optimized---------- -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL LEFT JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON (a = table2.a) AND (b = toUInt32(10)) -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL RIGHT JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON (a = table2.a) AND (b = toUInt32(10)) -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL FULL OUTER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON (a = table2.a) AND (b = toUInt32(10)) -SELECT - a, - b, - table2.a, - table2.b -FROM table1 -ALL FULL OUTER JOIN -( - SELECT - a, - b - FROM table2 -) AS table2 ON (a = table2.a) AND (table2.b = toUInt32(10)) -WHERE a < toUInt32(20) -SELECT - a, - b, - table2.a, - 
table2.b -FROM table1 -CROSS JOIN table2 diff --git a/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.sql b/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.sql deleted file mode 100644 index 23871a9c47c..00000000000 --- a/tests/queries/0_stateless/01653_move_conditions_from_join_on_to_where.sql +++ /dev/null @@ -1,48 +0,0 @@ -DROP TABLE IF EXISTS table1; -DROP TABLE IF EXISTS table2; - -CREATE TABLE table1 (a UInt32, b UInt32) ENGINE = Memory; -CREATE TABLE table2 (a UInt32, b UInt32) ENGINE = Memory; - -INSERT INTO table1 SELECT number, number FROM numbers(10); -INSERT INTO table2 SELECT number * 2, number * 20 FROM numbers(6); - -SELECT '---------Q1----------'; -SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b = toUInt32(20)); -EXPLAIN SYNTAX SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b = toUInt32(20)); - -SELECT '---------Q2----------'; -SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.a < table2.b) AND (table2.b = toUInt32(20)); -EXPLAIN SYNTAX SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.a < table2.b) AND (table2.b = toUInt32(20)); - -SELECT '---------Q3----------'; -SELECT * FROM table1 JOIN table2 ON (table1.a = toUInt32(table2.a + 5)) AND (table2.a < table1.b) AND (table2.b > toUInt32(20)); -- { serverError 48 } - -SELECT '---------Q4----------'; -SELECT table1.a, table2.b FROM table1 INNER JOIN table2 ON (table1.a = toUInt32(10 - table2.a)) AND (table1.b = 6) AND (table2.b > 20); -EXPLAIN SYNTAX SELECT table1.a, table2.b FROM table1 INNER JOIN table2 ON (table1.a = toUInt32(10 - table2.a)) AND (table1.b = 6) AND (table2.b > 20); - -SELECT '---------Q5----------'; -SELECT table1.a, table2.b FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table1.b = 6) AND (table2.b > 20) AND (10 < 6); -EXPLAIN SYNTAX SELECT table1.a, table2.b FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table1.b = 6) AND (table2.b > 20) AND (10 < 6); - -SELECT '---------Q6----------'; -SELECT table1.a, table2.b FROM table1 JOIN table2 ON (table1.b = 6) AND (table2.b > 20); -- { serverError 403 } - -SELECT '---------Q7----------'; -SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b < toUInt32(40)) where table1.b < 1; -EXPLAIN SYNTAX SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b < toUInt32(40)) where table1.b < 1; -SELECT * FROM table1 JOIN table2 ON (table1.a = table2.a) AND (table2.b < toUInt32(40)) where table1.b > 10; - -SELECT '---------Q8----------'; -SELECT * FROM table1 INNER JOIN table2 ON (table1.a = table2.a) AND (table2.b < toUInt32(table1, 10)); -- { serverError 47 } - -SELECT '---------Q9---will not be optimized----------'; -EXPLAIN SYNTAX SELECT * FROM table1 LEFT JOIN table2 ON (table1.a = table2.a) AND (table1.b = toUInt32(10)); -EXPLAIN SYNTAX SELECT * FROM table1 RIGHT JOIN table2 ON (table1.a = table2.a) AND (table1.b = toUInt32(10)); -EXPLAIN SYNTAX SELECT * FROM table1 FULL JOIN table2 ON (table1.a = table2.a) AND (table1.b = toUInt32(10)); -EXPLAIN SYNTAX SELECT * FROM table1 FULL JOIN table2 ON (table1.a = table2.a) AND (table2.b = toUInt32(10)) WHERE table1.a < toUInt32(20); -EXPLAIN SYNTAX SELECT * FROM table1 , table2; - -DROP TABLE table1; -DROP TABLE table2; diff --git a/tests/queries/0_stateless/01655_plan_optimizations.reference b/tests/queries/0_stateless/01655_plan_optimizations.reference index 99b32b74ca7..22f5a2e73e3 100644 --- 
a/tests/queries/0_stateless/01655_plan_optimizations.reference +++ b/tests/queries/0_stateless/01655_plan_optimizations.reference @@ -123,3 +123,26 @@ Filter column: notEquals(y, 2) 3 10 0 37 +> filter is pushed down before CreatingSets +CreatingSets +Filter +Filter +1 +3 +> one condition of filter is pushed down before LEFT JOIN +Join +Filter column: notEquals(number, 1) +Join +0 0 +3 3 +> one condition of filter is pushed down before INNER JOIN +Join +Filter column: notEquals(number, 1) +Join +3 3 +> filter is pushed down before UNION +Union +Filter +Filter +2 3 +2 3 diff --git a/tests/queries/0_stateless/01655_plan_optimizations.sh b/tests/queries/0_stateless/01655_plan_optimizations.sh index 3148dc4a597..148e6157773 100755 --- a/tests/queries/0_stateless/01655_plan_optimizations.sh +++ b/tests/queries/0_stateless/01655_plan_optimizations.sh @@ -150,3 +150,49 @@ $CLICKHOUSE_CLIENT -q " select * from ( select y, sum(x) from (select number as x, number % 4 as y from numbers(10)) group by y with totals ) where y != 2" + +echo "> filter is pushed down before CreatingSets" +$CLICKHOUSE_CLIENT -q " + explain select number from ( + select number from numbers(5) where number in (select 1 + number from numbers(3)) + ) where number != 2 settings enable_optimize_predicate_expression=0" | + grep -o "CreatingSets\|Filter" +$CLICKHOUSE_CLIENT -q " + select number from ( + select number from numbers(5) where number in (select 1 + number from numbers(3)) + ) where number != 2 settings enable_optimize_predicate_expression=0" + +echo "> one condition of filter is pushed down before LEFT JOIN" +$CLICKHOUSE_CLIENT -q " + explain actions = 1 + select number as a, r.b from numbers(4) as l any left join ( + select number + 2 as b from numbers(3) + ) as r on a = r.b where a != 1 and b != 2 settings enable_optimize_predicate_expression = 0" | + grep -o "Join\|Filter column: notEquals(number, 1)" +$CLICKHOUSE_CLIENT -q " + select number as a, r.b from numbers(4) as l any left join ( + select number + 2 as b from numbers(3) + ) as r on a = r.b where a != 1 and b != 2 settings enable_optimize_predicate_expression = 0" + +echo "> one condition of filter is pushed down before INNER JOIN" +$CLICKHOUSE_CLIENT -q " + explain actions = 1 + select number as a, r.b from numbers(4) as l any inner join ( + select number + 2 as b from numbers(3) + ) as r on a = r.b where a != 1 and b != 2 settings enable_optimize_predicate_expression = 0" | + grep -o "Join\|Filter column: notEquals(number, 1)" +$CLICKHOUSE_CLIENT -q " + select number as a, r.b from numbers(4) as l any inner join ( + select number + 2 as b from numbers(3) + ) as r on a = r.b where a != 1 and b != 2 settings enable_optimize_predicate_expression = 0" + +echo "> filter is pushed down before UNION" +$CLICKHOUSE_CLIENT -q " + explain select a, b from ( + select number + 1 as a, number + 2 as b from numbers(2) union all select number + 1 as b, number + 2 as a from numbers(2) + ) where a != 1 settings enable_optimize_predicate_expression = 0" | + grep -o "Union\|Filter" +$CLICKHOUSE_CLIENT -q " + select a, b from ( + select number + 1 as a, number + 2 as b from numbers(2) union all select number + 1 as b, number + 2 as a from numbers(2) + ) where a != 1 settings enable_optimize_predicate_expression = 0" diff --git a/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh b/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh index 593f0e59ea7..072e8d75f52 100755 --- a/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh +++ 
b/tests/queries/0_stateless/01658_read_file_to_stringcolumn.sh @@ -8,7 +8,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # Data preparation. # Now we can get the user_files_path by use the table file function for trick. also we can get it by query as: # "insert into function file('exist.txt', 'CSV', 'val1 char') values ('aaaa'); select _path from file('exist.txt', 'CSV', 'val1 char')" -user_files_path=$(clickhouse-client --query "select _path,_file from file('nonexist.txt', 'CSV', 'val1 char')" 2>&1 |grep Exception | awk '{gsub("/nonexist.txt","",$9); print $9}') +user_files_path=$(clickhouse-client --query "select _path,_file from file('nonexist.txt', 'CSV', 'val1 char')" 2>&1 | grep Exception | awk '{gsub("/nonexist.txt","",$9); print $9}') mkdir -p ${user_files_path}/ echo -n aaaaaaaaa > ${user_files_path}/a.txt diff --git a/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql b/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql index 48d3baba0c5..a056d77896c 100644 --- a/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql +++ b/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql @@ -1 +1,2 @@ SELECT repeat('abcdefghijklmnopqrstuvwxyz', number * 100) AS haystack, extractAllGroupsHorizontal(haystack, '(\\w)') AS matches FROM numbers(1023); -- { serverError 128 } +SELECT count(extractAllGroupsHorizontal(materialize('a'), '(a)')) FROM numbers(1000000) FORMAT Null; -- shouldn't fail diff --git a/tests/queries/0_stateless/01666_merge_tree_max_query_limit.sh b/tests/queries/0_stateless/01666_merge_tree_max_query_limit.sh index 5bb93371483..c5fbb35a9cd 100755 --- a/tests/queries/0_stateless/01666_merge_tree_max_query_limit.sh +++ b/tests/queries/0_stateless/01666_merge_tree_max_query_limit.sh @@ -15,13 +15,13 @@ drop table if exists simple; create table simple (i int, j int) engine = MergeTree order by i settings index_granularity = 1, max_concurrent_queries = 1, min_marks_to_honor_max_concurrent_queries = 2; -insert into simple select number, number + 100 from numbers(1000); +insert into simple select number, number + 100 from numbers(5000); " query_id="long_running_query-$CLICKHOUSE_DATABASE" echo "Spin up a long running query" -${CLICKHOUSE_CLIENT} --query "select sleepEachRow(0.01) from simple settings max_block_size = 1 format Null" --query_id "$query_id" > /dev/null 2>&1 & +${CLICKHOUSE_CLIENT} --query "select sleepEachRow(0.1) from simple settings max_block_size = 1 format Null" --query_id "$query_id" > /dev/null 2>&1 & wait_for_query_to_start "$query_id" # query which reads marks >= min_marks_to_honor_max_concurrent_queries is throttled diff --git a/tests/queries/0_stateless/01670_log_comment.sql b/tests/queries/0_stateless/01670_log_comment.sql index c1496273784..2fb61eb5812 100644 --- a/tests/queries/0_stateless/01670_log_comment.sql +++ b/tests/queries/0_stateless/01670_log_comment.sql @@ -1,5 +1,5 @@ SET log_comment = 'log_comment test', log_queries = 1; SELECT 1; SYSTEM FLUSH LOGS; -SELECT type, query FROM system.query_log WHERE current_database = currentDatabase() AND log_comment = 'log_comment test' AND event_date >= yesterday() AND type = 1 ORDER BY event_time DESC LIMIT 1; -SELECT type, query FROM system.query_log WHERE current_database = currentDatabase() AND log_comment = 'log_comment test' AND event_date >= yesterday() AND type = 2 ORDER BY event_time DESC LIMIT 1; +SELECT type, query FROM system.query_log WHERE current_database = currentDatabase() AND log_comment = 'log_comment test' AND query LIKE 'SELECT 
1%' AND event_date >= yesterday() AND type = 1 ORDER BY event_time_microseconds DESC LIMIT 1; +SELECT type, query FROM system.query_log WHERE current_database = currentDatabase() AND log_comment = 'log_comment test' AND query LIKE 'SELECT 1%' AND event_date >= yesterday() AND type = 2 ORDER BY event_time_microseconds DESC LIMIT 1; diff --git a/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.reference b/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh b/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.sh similarity index 80% rename from tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh rename to tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.sh index 08e07044841..1ed5c6be272 100755 --- a/tests/queries/0_stateless/01676_clickhouse_client_autocomplete.sh +++ b/tests/queries/0_stateless/01676_long_clickhouse_client_autocomplete.sh @@ -69,18 +69,6 @@ compwords_positive=( max_concurrent_queries_for_all_users # system.clusters test_shard_localhost - # system.errors, also it is very rare to cover system_events_show_zero_values - CONDITIONAL_TREE_PARENT_NOT_FOUND - # system.events, also it is very rare to cover system_events_show_zero_values - WriteBufferFromFileDescriptorWriteFailed - # system.asynchronous_metrics, also this metric has zero value - # - # NOTE: that there is no ability to complete metrics like - # jemalloc.background_thread.num_runs, due to "." is used as a word breaker - # (and this cannot be changed -- db.table) - ReplicasMaxAbsoluteDelay - # system.metrics - PartsPreCommitted # system.macros default_path_test # system.storage_policies, egh not uniq diff --git a/tests/queries/0_stateless/01700_deltasum.reference b/tests/queries/0_stateless/01700_deltasum.reference index be5b176c627..6be953e2b2d 100644 --- a/tests/queries/0_stateless/01700_deltasum.reference +++ b/tests/queries/0_stateless/01700_deltasum.reference @@ -7,3 +7,4 @@ 2 2.25 6.5 +7 diff --git a/tests/queries/0_stateless/01700_deltasum.sql b/tests/queries/0_stateless/01700_deltasum.sql index 93edb2e477d..83d5e0439d2 100644 --- a/tests/queries/0_stateless/01700_deltasum.sql +++ b/tests/queries/0_stateless/01700_deltasum.sql @@ -7,3 +7,4 @@ select deltaSumMerge(rows) from (select deltaSumState(arrayJoin([0, 1])) as rows select deltaSumMerge(rows) from (select deltaSumState(arrayJoin([4, 5])) as rows union all select deltaSumState(arrayJoin([0, 1])) as rows); select deltaSum(arrayJoin([2.25, 3, 4.5])); select deltaSumMerge(rows) from (select deltaSumState(arrayJoin([0.1, 0.3, 0.5])) as rows union all select deltaSumState(arrayJoin([4.1, 5.1, 6.6])) as rows); +select deltaSumMerge(rows) from (select deltaSumState(arrayJoin([3, 5])) as rows union all select deltaSumState(arrayJoin([1, 2])) as rows union all select deltaSumState(arrayJoin([4, 6])) as rows); diff --git a/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh b/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh index d3e634eb560..edc4f6916ff 100755 --- a/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh +++ b/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh @@ -1,9 +1,11 @@ -#!/usr/bin/env bash - -CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) -# shellcheck source=../shell_config.sh -. 
"$CURDIR"/../shell_config.sh +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh ${CLICKHOUSE_CLIENT} -q "create table insert_big_json(a String, b String) engine=MergeTree() order by tuple()"; -python3 -c "[print('{{\"a\":\"{}\", \"b\":\"{}\"'.format('clickhouse'* 1000000, 'dbms' * 1000000)) for i in range(10)]; [print('{{\"a\":\"{}\", \"b\":\"{}\"}}'.format('clickhouse'* 100000, 'dbms' * 100000)) for i in range(10)]" 2>/dev/null | ${CLICKHOUSE_CLIENT} --input_format_parallel_parsing=1 --max_memory_usage=0 -q "insert into insert_big_json FORMAT JSONEachRow" 2>&1 | grep -q "min_chunk_bytes_for_parallel_parsing" && echo "Ok." || echo "FAIL" ||: \ No newline at end of file +python3 -c "[print('{{\"a\":\"{}\", \"b\":\"{}\"'.format('clickhouse'* 1000000, 'dbms' * 1000000)) for i in range(10)]; [print('{{\"a\":\"{}\", \"b\":\"{}\"}}'.format('clickhouse'* 100000, 'dbms' * 100000)) for i in range(10)]" 2>/dev/null | ${CLICKHOUSE_CLIENT} --input_format_parallel_parsing=1 --max_memory_usage=0 -q "insert into insert_big_json FORMAT JSONEachRow" 2>&1 | grep -q "min_chunk_bytes_for_parallel_parsing" && echo "Ok." || echo "FAIL" ||: + +${CLICKHOUSE_CLIENT} -q "drop table insert_big_json" diff --git a/tests/queries/0_stateless/01702_system_query_log.reference b/tests/queries/0_stateless/01702_system_query_log.reference index 6d8908249bf..1f329feac22 100644 --- a/tests/queries/0_stateless/01702_system_query_log.reference +++ b/tests/queries/0_stateless/01702_system_query_log.reference @@ -42,23 +42,17 @@ Alter ALTER TABLE sqllt.table DROP COLUMN the_new_col; Alter ALTER TABLE sqllt.table UPDATE i = i + 1 WHERE 1; Alter ALTER TABLE sqllt.table DELETE WHERE i > 65535; Select -- not done, seems to hard, so I\'ve skipped queries of ALTER-X, where X is:\n-- PARTITION\n-- ORDER BY\n-- SAMPLE BY\n-- INDEX\n-- CONSTRAINT\n-- TTL\n-- USER\n-- QUOTA\n-- ROLE\n-- ROW POLICY\n-- SETTINGS PROFILE\n\nSELECT \'SYSTEM queries\'; -System SYSTEM RELOAD EMBEDDED DICTIONARIES; -System SYSTEM RELOAD DICTIONARIES; -System SYSTEM DROP DNS CACHE; -System SYSTEM DROP MARK CACHE; -System SYSTEM DROP UNCOMPRESSED CACHE; System SYSTEM FLUSH LOGS; -System SYSTEM RELOAD CONFIG; -System SYSTEM STOP MERGES; -System SYSTEM START MERGES; -System SYSTEM STOP TTL MERGES; -System SYSTEM START TTL MERGES; -System SYSTEM STOP MOVES; -System SYSTEM START MOVES; -System SYSTEM STOP FETCHES; -System SYSTEM START FETCHES; -System SYSTEM STOP REPLICATED SENDS; -System SYSTEM START REPLICATED SENDS; +System SYSTEM STOP MERGES sqllt.table +System SYSTEM START MERGES sqllt.table +System SYSTEM STOP TTL MERGES sqllt.table +System SYSTEM START TTL MERGES sqllt.table +System SYSTEM STOP MOVES sqllt.table +System SYSTEM START MOVES sqllt.table +System SYSTEM STOP FETCHES sqllt.table +System SYSTEM START FETCHES sqllt.table +System SYSTEM STOP REPLICATED SENDS sqllt.table +System SYSTEM START REPLICATED SENDS sqllt.table Select -- SYSTEM RELOAD DICTIONARY sqllt.dictionary; -- temporary out of order: Code: 210, Connection refused (localhost:9001) (version 21.3.1.1)\n-- DROP REPLICA\n-- haha, no\n-- SYSTEM KILL;\n-- SYSTEM SHUTDOWN;\n\n-- Since we don\'t really care about the actual output, suppress it with `FORMAT Null`.\nSELECT \'SHOW queries\'; SHOW CREATE TABLE sqllt.table FORMAT Null; SHOW CREATE DICTIONARY sqllt.dictionary FORMAT Null; diff --git a/tests/queries/0_stateless/01702_system_query_log.sql b/tests/queries/0_stateless/01702_system_query_log.sql 
index 5c3de9cf912..e3ebf97edb7 100644 --- a/tests/queries/0_stateless/01702_system_query_log.sql +++ b/tests/queries/0_stateless/01702_system_query_log.sql @@ -64,23 +64,17 @@ ALTER TABLE sqllt.table DELETE WHERE i > 65535; -- SETTINGS PROFILE SELECT 'SYSTEM queries'; -SYSTEM RELOAD EMBEDDED DICTIONARIES; -SYSTEM RELOAD DICTIONARIES; -SYSTEM DROP DNS CACHE; -SYSTEM DROP MARK CACHE; -SYSTEM DROP UNCOMPRESSED CACHE; SYSTEM FLUSH LOGS; -SYSTEM RELOAD CONFIG; -SYSTEM STOP MERGES; -SYSTEM START MERGES; -SYSTEM STOP TTL MERGES; -SYSTEM START TTL MERGES; -SYSTEM STOP MOVES; -SYSTEM START MOVES; -SYSTEM STOP FETCHES; -SYSTEM START FETCHES; -SYSTEM STOP REPLICATED SENDS; -SYSTEM START REPLICATED SENDS; +SYSTEM STOP MERGES sqllt.table; +SYSTEM START MERGES sqllt.table; +SYSTEM STOP TTL MERGES sqllt.table; +SYSTEM START TTL MERGES sqllt.table; +SYSTEM STOP MOVES sqllt.table; +SYSTEM START MOVES sqllt.table; +SYSTEM STOP FETCHES sqllt.table; +SYSTEM START FETCHES sqllt.table; +SYSTEM STOP REPLICATED SENDS sqllt.table; +SYSTEM START REPLICATED SENDS sqllt.table; -- SYSTEM RELOAD DICTIONARY sqllt.dictionary; -- temporary out of order: Code: 210, Connection refused (localhost:9001) (version 21.3.1.1) -- DROP REPLICA diff --git a/tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.sql b/tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.sql deleted file mode 100644 index fad890c4807..00000000000 --- a/tests/queries/0_stateless/01709_inactive_parts_to_delay_throw.sql +++ /dev/null @@ -1,12 +0,0 @@ -drop table if exists x; - -create table x (i int) engine MergeTree order by i settings old_parts_lifetime = 10000000000, min_bytes_for_wide_part = 0, inactive_parts_to_throw_insert = 1; - -insert into x values (1); -insert into x values (2); - -optimize table x final; - -insert into x values (3); -- { serverError 252; } - -drop table if exists x; diff --git a/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.reference b/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql b/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql new file mode 100644 index 00000000000..6de0d4f4e0c --- /dev/null +++ b/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql @@ -0,0 +1,12 @@ +drop table if exists data_01709; + +create table data_01709 (i int) engine MergeTree order by i settings old_parts_lifetime = 10000000000, min_bytes_for_wide_part = 0, inactive_parts_to_throw_insert = 1; + +insert into data_01709 values (1); +insert into data_01709 values (2); + +optimize table data_01709 final; + +insert into data_01709 values (3); -- { serverError 252; } + +drop table data_01709; diff --git a/tests/queries/0_stateless/01711_cte_subquery_fix.sql b/tests/queries/0_stateless/01711_cte_subquery_fix.sql index ddea548eada..10ad9019209 100644 --- a/tests/queries/0_stateless/01711_cte_subquery_fix.sql +++ b/tests/queries/0_stateless/01711_cte_subquery_fix.sql @@ -1,3 +1,7 @@ drop table if exists t; create table t engine = Memory as with cte as (select * from numbers(10)) select * from cte; drop table t; + +drop table if exists view1; +create view view1 as with t as (select number n from numbers(3)) select n from t; +drop table view1; diff --git a/tests/queries/0_stateless/01715_table_function_view_fix.sql b/tests/queries/0_stateless/01715_table_function_view_fix.sql index de5150b7b70..b96609391b5 100644 --- 
a/tests/queries/0_stateless/01715_table_function_view_fix.sql +++ b/tests/queries/0_stateless/01715_table_function_view_fix.sql @@ -1 +1,3 @@ SELECT view(SELECT 1); -- { clientError 62 } + +SELECT sumIf(dummy, dummy) FROM remote('127.0.0.{1,2}', numbers(2, 100), view(SELECT CAST(NULL, 'Nullable(UInt8)') AS dummy FROM system.one)); -- { serverError 183 } diff --git a/tests/queries/0_stateless/01720_country_perimeter_and_area.sh b/tests/queries/0_stateless/01720_country_perimeter_and_area.sh index 76dc403fb2f..75016ee1d1f 100755 --- a/tests/queries/0_stateless/01720_country_perimeter_and_area.sh +++ b/tests/queries/0_stateless/01720_country_perimeter_and_area.sh @@ -22,4 +22,6 @@ ${CLICKHOUSE_CLIENT} -q "SELECT name, polygonPerimeterSpherical(p) from country_ ${CLICKHOUSE_CLIENT} -q "SELECT '-------------------------------------'" ${CLICKHOUSE_CLIENT} -q "SELECT name, polygonAreaSpherical(p) from country_rings" ${CLICKHOUSE_CLIENT} -q "SELECT '-------------------------------------'" -${CLICKHOUSE_CLIENT} -q "drop table if exists country_rings;" \ No newline at end of file +${CLICKHOUSE_CLIENT} -q "drop table if exists country_rings;" + +${CLICKHOUSE_CLIENT} -q "drop table country_polygons" diff --git a/tests/queries/0_stateless/01736_null_as_default.sql b/tests/queries/0_stateless/01736_null_as_default.sql index f9a4bc69acf..a00011b06d4 100644 --- a/tests/queries/0_stateless/01736_null_as_default.sql +++ b/tests/queries/0_stateless/01736_null_as_default.sql @@ -1,5 +1,5 @@ -drop table if exists test_num; +drop table if exists test_enum; create table test_enum (c Nullable(Enum16('A' = 1, 'B' = 2))) engine Log; insert into test_enum values (1), (NULL); select * from test_enum; -drop table if exists test_num; +drop table test_enum; diff --git a/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml new file mode 100644 index 00000000000..2d0a480a375 --- /dev/null +++ b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.config.xml @@ -0,0 +1,35 @@ + + + + trace + true + + + 9000 + + ./ + + 0 + + + + + + + ::/0 + + + default + default + 1 + + + + + + + + + + + diff --git a/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.reference b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh new file mode 100755 index 00000000000..a4fd7529ab2 --- /dev/null +++ b/tests/queries/0_stateless/01737_clickhouse_server_wait_server_pool_long.sh @@ -0,0 +1,83 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +server_opts=( + "--config-file=$CUR_DIR/$(basename "${BASH_SOURCE[0]}" .sh).config.xml" + "--" + # to avoid multiple listen sockets (complexity for port discovering) + "--listen_host=127.1" + # we will discover the real port later. + "--tcp_port=0" + "--shutdown_wait_unfinished=0" +) +CLICKHOUSE_WATCHDOG_ENABLE=0 $CLICKHOUSE_SERVER_BINARY "${server_opts[@]}" >& clickhouse-server.log & +server_pid=$! + +trap cleanup EXIT +function cleanup() +{ + kill -9 $server_pid + kill -9 $client_pid + + echo "Test failed. 
Server log:" + cat clickhouse-server.log + rm -f clickhouse-server.log + + exit 1 +} + +server_port= +i=0 retries=300 +# wait until server will start to listen (max 30 seconds) +while [[ -z $server_port ]] && [[ $i -lt $retries ]]; do + server_port=$(lsof -n -a -P -i tcp -s tcp:LISTEN -p $server_pid 2>/dev/null | awk -F'[ :]' '/LISTEN/ { print $(NF-1) }') + ((++i)) + sleep 0.1 +done +if [[ -z $server_port ]]; then + echo "Cannot wait for LISTEN socket" >&2 + exit 1 +fi + +# wait for the server to start accepting tcp connections (max 30 seconds) +i=0 retries=300 +while ! $CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" --format Null -q 'select 1' 2>/dev/null && [[ $i -lt $retries ]]; do + sleep 0.1 +done +if ! $CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" --format Null -q 'select 1'; then + echo "Cannot wait until server will start accepting connections on " >&2 + exit 1 +fi + +query_id="$CLICKHOUSE_DATABASE-$SECONDS" +$CLICKHOUSE_CLIENT_BINARY --query_id "$query_id" --host 127.1 --port "$server_port" --format Null -q 'select sleepEachRow(1) from numbers(10)' 2>/dev/null & +client_pid=$! + +# wait until the query will appear in processlist (max 10 second) +# (it is enough to trigger the problem) +i=0 retries=1000 +while [[ $($CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" -q "select count() from system.processes where query_id = '$query_id'") != "1" ]] && [[ $i -lt $retries ]]; do + sleep 0.01 +done +if [[ $($CLICKHOUSE_CLIENT_BINARY --host 127.1 --port "$server_port" -q "select count() from system.processes where query_id = '$query_id'") != "1" ]]; then + echo "Cannot wait until the query will start" >&2 + exit 1 +fi + +# send TERM and save the error code to ensure that it is 0 (EXIT_SUCCESS) +kill $server_pid +wait $server_pid +return_code=$? 
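# Editor's note: illustrative aside, not part of the original diff. The assertion above
# relies on the standard shell behaviour that `wait <pid>` returns the exit status of the
# finished background process, e.g.:
sleep 30 &
bg_pid=$!
kill -TERM "$bg_pid"
wait "$bg_pid"
echo "background process exited with $?"   # 143 (128 + SIGTERM) for the killed sleep;
                                           # the test above expects 0 from a graceful stop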
+ +wait $client_pid + +trap '' EXIT +if [ $return_code != 0 ]; then + cat clickhouse-server.log +fi +rm -f clickhouse-server.log + +exit $return_code diff --git a/tests/queries/0_stateless/01739_index_hint.reference b/tests/queries/0_stateless/01739_index_hint.reference new file mode 100644 index 00000000000..6aa40c5d302 --- /dev/null +++ b/tests/queries/0_stateless/01739_index_hint.reference @@ -0,0 +1,35 @@ +-- { echo } + +drop table if exists tbl; +create table tbl (p Int64, t Int64, f Float64) Engine=MergeTree partition by p order by t settings index_granularity=1; +insert into tbl select number / 4, number, 0 from numbers(16); +select * from tbl WHERE indexHint(t = 1) order by t; +0 0 0 +0 1 0 +select * from tbl WHERE indexHint(t in (select toInt64(number) + 2 from numbers(3))) order by t; +0 1 0 +0 2 0 +0 3 0 +1 4 0 +select * from tbl WHERE indexHint(p = 2) order by t; +2 8 0 +2 9 0 +2 10 0 +2 11 0 +select * from tbl WHERE indexHint(p in (select toInt64(number) - 2 from numbers(3))) order by t; +0 0 0 +0 1 0 +0 2 0 +0 3 0 +drop table tbl; +drop table if exists XXXX; +create table XXXX (t Int64, f Float64) Engine=MergeTree order by t settings index_granularity=128; +insert into XXXX select number*60, 0 from numbers(100000); +SELECT count() FROM XXXX WHERE indexHint(t = 42); +128 +drop table if exists XXXX; +create table XXXX (t Int64, f Float64) Engine=MergeTree order by t settings index_granularity=8192; +insert into XXXX select number*60, 0 from numbers(100000); +SELECT count() FROM XXXX WHERE indexHint(t = toDateTime(0)); +100000 +drop table XXXX; diff --git a/tests/queries/0_stateless/01739_index_hint.sql b/tests/queries/0_stateless/01739_index_hint.sql new file mode 100644 index 00000000000..28395c2dc1d --- /dev/null +++ b/tests/queries/0_stateless/01739_index_hint.sql @@ -0,0 +1,35 @@ +-- { echo } + +drop table if exists tbl; + +create table tbl (p Int64, t Int64, f Float64) Engine=MergeTree partition by p order by t settings index_granularity=1; + +insert into tbl select number / 4, number, 0 from numbers(16); + +select * from tbl WHERE indexHint(t = 1) order by t; + +select * from tbl WHERE indexHint(t in (select toInt64(number) + 2 from numbers(3))) order by t; + +select * from tbl WHERE indexHint(p = 2) order by t; + +select * from tbl WHERE indexHint(p in (select toInt64(number) - 2 from numbers(3))) order by t; + +drop table tbl; + +drop table if exists XXXX; + +create table XXXX (t Int64, f Float64) Engine=MergeTree order by t settings index_granularity=128; + +insert into XXXX select number*60, 0 from numbers(100000); + +SELECT count() FROM XXXX WHERE indexHint(t = 42); + +drop table if exists XXXX; + +create table XXXX (t Int64, f Float64) Engine=MergeTree order by t settings index_granularity=8192; + +insert into XXXX select number*60, 0 from numbers(100000); + +SELECT count() FROM XXXX WHERE indexHint(t = toDateTime(0)); + +drop table XXXX; diff --git a/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.reference b/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.reference new file mode 100644 index 00000000000..70c19fc8ced --- /dev/null +++ b/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.reference @@ -0,0 +1,12 @@ +210 230 20 +SELECT + sum(a), + sumCount(b).1, + sumCount(b).2 +FROM fuse_tbl +---------NOT trigger fuse-------- +210 11.5 +SELECT + sum(a), + avg(b) +FROM fuse_tbl diff --git a/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.sql b/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.sql new file mode 100644 index 
00000000000..cad7b5803d4 --- /dev/null +++ b/tests/queries/0_stateless/01744_fuse_sum_count_aggregate.sql @@ -0,0 +1,11 @@ +DROP TABLE IF EXISTS fuse_tbl; +CREATE TABLE fuse_tbl(a Int8, b Int8) Engine = Log; +INSERT INTO fuse_tbl SELECT number, number + 1 FROM numbers(1, 20); + +SET optimize_fuse_sum_count_avg = 1; +SELECT sum(a), sum(b), count(b) from fuse_tbl; +EXPLAIN SYNTAX SELECT sum(a), sum(b), count(b) from fuse_tbl; +SELECT '---------NOT trigger fuse--------'; +SELECT sum(a), avg(b) from fuse_tbl; +EXPLAIN SYNTAX SELECT sum(a), avg(b) from fuse_tbl; +DROP TABLE fuse_tbl; diff --git a/tests/queries/0_stateless/01753_max_uri_size.sh b/tests/queries/0_stateless/01753_max_uri_size.sh index 5c63d9274fd..62bc4f2c26f 100755 --- a/tests/queries/0_stateless/01753_max_uri_size.sh +++ b/tests/queries/0_stateless/01753_max_uri_size.sh @@ -4,8 +4,14 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CURDIR"/../shell_config.sh -# NOTE: since 'max_uri_size' doesn't affect the request itself, this test hardly depends on the default value of this setting (16Kb). +# NOTE: since 'max_uri_size' doesn't affect the request itself, this test hardly depends on the default value of this setting (1 MiB). -LONG_REQUEST=$(python3 -c "print('&max_uri_size=1'*2000, end='')") # ~30K +python3 -c " +print('${CLICKHOUSE_URL}', end='') +print('&hello=world'*100000, end='') +print('&query=SELECT+1') +" > "${CLICKHOUSE_TMP}/url.txt" -${CLICKHOUSE_CURL} -sSv "${CLICKHOUSE_URL}${LONG_REQUEST}&query=SELECT+1" 2>&1 | grep -Fc "HTTP/1.1 400 Bad Request" +wget --input-file "${CLICKHOUSE_TMP}/url.txt" 2>&1 | grep -Fc "400: Bad Request" + +rm "${CLICKHOUSE_TMP}/url.txt" diff --git a/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference new file mode 100644 index 00000000000..a1bfcf043da --- /dev/null +++ b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference @@ -0,0 +1,25 @@ +(0, 2) +0 0 +0 0 +WITH CAST(\'default\', \'String\') AS id_no SELECT one.dummy, ignore(id_no) FROM system.one WHERE dummy IN (0, 2) +WITH CAST(\'default\', \'String\') AS id_no SELECT one.dummy, ignore(id_no) FROM system.one WHERE dummy IN (0, 2) +optimize_skip_unused_shards_rewrite_in(0, 2) +0 0 +WITH CAST(\'default\', \'String\') AS id_02 SELECT one.dummy, ignore(id_02) FROM system.one WHERE dummy IN tuple(0) +WITH CAST(\'default\', \'String\') AS id_02 SELECT one.dummy, ignore(id_02) FROM system.one WHERE dummy IN tuple(2) +optimize_skip_unused_shards_rewrite_in(2,) +WITH CAST(\'default\', \'String\') AS id_2 SELECT one.dummy, ignore(id_2) FROM system.one WHERE dummy IN tuple(2) +optimize_skip_unused_shards_rewrite_in(0,) +0 0 +WITH CAST(\'default\', \'String\') AS id_0 SELECT one.dummy, ignore(id_0) FROM system.one WHERE dummy IN tuple(0) +errors +others +0 +0 +0 +different types -- prohibited +different types -- conversion +0 +optimize_skip_unused_shards_limit +0 +0 diff --git a/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql new file mode 100644 index 00000000000..dc481ccca72 --- /dev/null +++ b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql @@ -0,0 +1,138 @@ +-- NOTE: this test cannot use 'current_database = currentDatabase()', +-- because it does not propagated via remote queries, +-- hence it uses 'with (select currentDatabase()) as 
X' +-- (with subquery to expand it on the initiator). + +drop table if exists dist_01756; +drop table if exists dist_01756_str; +drop table if exists dist_01756_column; +drop table if exists data_01756_str; + +-- SELECT +-- intHash64(0) % 2, +-- intHash64(2) % 2 +-- ┌─modulo(intHash64(0), 2)─┬─modulo(intHash64(2), 2)─┐ +-- │ 0 │ 1 │ +-- └─────────────────────────┴─────────────────────────┘ +create table dist_01756 as system.one engine=Distributed(test_cluster_two_shards, system, one, intHash64(dummy)); + +-- separate log entry for localhost queries +set prefer_localhost_replica=0; +set force_optimize_skip_unused_shards=2; +set optimize_skip_unused_shards=1; +set optimize_skip_unused_shards_rewrite_in=0; +set log_queries=1; + +-- +-- w/o optimize_skip_unused_shards_rewrite_in=1 +-- +select '(0, 2)'; +with (select currentDatabase()) as id_no select *, ignore(id_no) from dist_01756 where dummy in (0, 2); +system flush logs; +select query from system.query_log where + event_date = today() and + event_time > now() - interval 1 hour and + not is_initial_query and + query not like '%system.query_log%' and + query like concat('WITH%', currentDatabase(), '%AS id_no %') and + type = 'QueryFinish' +order by query; + +-- +-- w/ optimize_skip_unused_shards_rewrite_in=1 +-- + +set optimize_skip_unused_shards_rewrite_in=1; + +-- detailed coverage for realistic examples +select 'optimize_skip_unused_shards_rewrite_in(0, 2)'; +with (select currentDatabase()) as id_02 select *, ignore(id_02) from dist_01756 where dummy in (0, 2); +system flush logs; +select query from system.query_log where + event_date = today() and + event_time > now() - interval 1 hour and + not is_initial_query and + query not like '%system.query_log%' and + query like concat('WITH%', currentDatabase(), '%AS id_02 %') and + type = 'QueryFinish' +order by query; + +select 'optimize_skip_unused_shards_rewrite_in(2,)'; +with (select currentDatabase()) as id_2 select *, ignore(id_2) from dist_01756 where dummy in (2,); +system flush logs; +select query from system.query_log where + event_date = today() and + event_time > now() - interval 1 hour and + not is_initial_query and + query not like '%system.query_log%' and + query like concat('WITH%', currentDatabase(), '%AS id_2 %') and + type = 'QueryFinish' +order by query; + +select 'optimize_skip_unused_shards_rewrite_in(0,)'; +with (select currentDatabase()) as id_0 select *, ignore(id_0) from dist_01756 where dummy in (0,); +system flush logs; +select query from system.query_log where + event_date = today() and + event_time > now() - interval 1 hour and + not is_initial_query and + query not like '%system.query_log%' and + query like concat('WITH%', currentDatabase(), '%AS id_0 %') and + type = 'QueryFinish' +order by query; + +-- +-- errors +-- +select 'errors'; + +-- not tuple +select * from dist_01756 where dummy in (0); -- { serverError 507 } +-- optimize_skip_unused_shards does not support non-constants +select * from dist_01756 where dummy in (select * from system.one); -- { serverError 507 } +select * from dist_01756 where dummy in (toUInt8(0)); -- { serverError 507 } +-- wrong type (tuple) +select * from dist_01756 where dummy in ('0'); -- { serverError 507 } +-- intHash64 does not accept string +select * from dist_01756 where dummy in ('0', '2'); -- { serverError 43 } +-- NOT IN does not supported +select * from dist_01756 where dummy not in (0, 2); -- { serverError 507 } + +-- +-- others +-- +select 'others'; + +select * from dist_01756 where dummy not in (2, 3) and dummy in (0, 
2); +select * from dist_01756 where dummy in tuple(0, 2); +select * from dist_01756 where dummy in tuple(0); +select * from dist_01756 where dummy in tuple(2); +-- Identifier is NULL +select (2 IN (2,)), * from dist_01756 where dummy in (0, 2) format Null; +-- Literal is NULL +select (dummy IN (toUInt8(2),)), * from dist_01756 where dummy in (0, 2) format Null; + +-- different type +select 'different types -- prohibited'; +create table data_01756_str (key String) engine=Memory(); +create table dist_01756_str as data_01756_str engine=Distributed(test_cluster_two_shards, currentDatabase(), data_01756_str, cityHash64(key)); +select * from dist_01756_str where key in ('0', '2'); +select * from dist_01756_str where key in ('0', Null); -- { serverError 507 } +select * from dist_01756_str where key in (0, 2); -- { serverError 53 } +select * from dist_01756_str where key in (0, Null); -- { serverError 53 } + +-- different type #2 +select 'different types -- conversion'; +create table dist_01756_column as system.one engine=Distributed(test_cluster_two_shards, system, one, dummy); +select * from dist_01756_column where dummy in (0, '255'); +select * from dist_01756_column where dummy in (0, '255foo'); -- { serverError 53 } + +-- optimize_skip_unused_shards_limit +select 'optimize_skip_unused_shards_limit'; +select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1; -- { serverError 507 } +select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1, force_optimize_skip_unused_shards=0; + +drop table dist_01756; +drop table dist_01756_str; +drop table dist_01756_column; +drop table data_01756_str; diff --git a/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh b/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh index b26961eda8e..d18ea8694a9 100755 --- a/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh +++ b/tests/queries/0_stateless/01758_optimize_skip_unused_shards_once.sh @@ -10,3 +10,5 @@ $CLICKHOUSE_CLIENT --optimize_skip_unused_shards=1 -nm -q " create table dist_01758 as system.one engine=Distributed(test_cluster_two_shards, system, one, dummy); select * from dist_01758 where dummy = 0 format Null; " |& grep -o "StorageDistributed (dist_01758).*" + +$CLICKHOUSE_CLIENT -q "drop table dist_01758" 2>/dev/null diff --git a/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql b/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql index b95d640ca1a..2ddf318313f 100644 --- a/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql +++ b/tests/queries/0_stateless/01759_optimize_skip_unused_shards_zero_shards.sql @@ -1,2 +1,3 @@ create table dist_01756 (dummy UInt8) ENGINE = Distributed('test_cluster_two_shards', 'system', 'one', dummy); -select ignore(1), * from dist_01756 where 0 settings optimize_skip_unused_shards=1, force_optimize_skip_unused_shards=1 +select ignore(1), * from dist_01756 where 0 settings optimize_skip_unused_shards=1, force_optimize_skip_unused_shards=1; +drop table dist_01756; diff --git a/tests/queries/0_stateless/01760_polygon_dictionaries.sql b/tests/queries/0_stateless/01760_polygon_dictionaries.sql index 5e26d2fc306..406e9af27ea 100644 --- a/tests/queries/0_stateless/01760_polygon_dictionaries.sql +++ b/tests/queries/0_stateless/01760_polygon_dictionaries.sql @@ -65,3 +65,5 @@ SELECT tuple(inf, inf) as key, dictGet('01760_db.dict_array', 'name', key); --{s DROP DICTIONARY 01760_db.dict_array; 
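-- Editor's note: illustrative sketch, not part of the original diff. It restates the idea
-- behind the 01756_optimize_skip_unused_shards_rewrite_in test above: when a Distributed
-- table is sharded by intHash64(dummy), an IN list in the WHERE clause can be trimmed per
-- shard so that each shard only receives the values that can live on it (the table name
-- below is hypothetical; the cluster is the test_cluster_two_shards used by the test).
create table dist_example as system.one
    engine = Distributed(test_cluster_two_shards, system, one, intHash64(dummy));
set optimize_skip_unused_shards = 1, optimize_skip_unused_shards_rewrite_in = 1;
-- With the rewrite enabled, the first shard is queried with `dummy IN tuple(0)` and the
-- second with `dummy IN tuple(2)`, which is what the query_log checks in the test verify.
select * from dist_example where dummy in (0, 2);
drop table dist_example;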
DROP TABLE 01760_db.points; DROP TABLE 01760_db.polygons; + +DROP DATABASE 01760_db; diff --git a/tests/queries/0_stateless/01763_max_distributed_depth.reference b/tests/queries/0_stateless/01763_max_distributed_depth.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01763_max_distributed_depth.sql b/tests/queries/0_stateless/01763_max_distributed_depth.sql new file mode 100644 index 00000000000..d1bb9e4be90 --- /dev/null +++ b/tests/queries/0_stateless/01763_max_distributed_depth.sql @@ -0,0 +1,26 @@ +DROP TABLE IF EXISTS tt6; + +CREATE TABLE tt6 +( + `id` UInt32, + `first_column` UInt32, + `second_column` UInt32, + `third_column` UInt32, + `status` String + +) +ENGINE = Distributed('test_shard_localhost', '', 'tt6', rand()); + +INSERT INTO tt6 VALUES (1, 1, 1, 1, 'ok'); -- { serverError 581 } + +SELECT * FROM tt6; -- { serverError 581 } + +SET max_distributed_depth = 0; + +-- stack overflow +INSERT INTO tt6 VALUES (1, 1, 1, 1, 'ok'); -- { serverError 306} + +-- stack overflow +SELECT * FROM tt6; -- { serverError 306 } + +DROP TABLE tt6; diff --git a/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.reference b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.reference new file mode 100644 index 00000000000..0cfb83aa2f2 --- /dev/null +++ b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.reference @@ -0,0 +1,3 @@ +1 1 +2 2 +3 3 diff --git a/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql new file mode 100644 index 00000000000..5673e646a47 --- /dev/null +++ b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql @@ -0,0 +1,53 @@ +DROP DATABASE IF EXISTS 01780_db; +CREATE DATABASE 01780_db; + +DROP DICTIONARY IF EXISTS dict1; +CREATE DICTIONARY dict1 +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 TABLE 'dict1')) +LAYOUT(DIRECT()); + +SELECT * FROM dict1; --{serverError 36} + +DROP DICTIONARY dict1; + +DROP DICTIONARY IF EXISTS dict2; +CREATE DICTIONARY 01780_db.dict2 +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 DATABASE '01780_db' TABLE 'dict2')) +LAYOUT(DIRECT()); + +SELECT * FROM 01780_db.dict2; --{serverError 36} +DROP DICTIONARY 01780_db.dict2; + +DROP TABLE IF EXISTS 01780_db.dict3_source; +CREATE TABLE 01780_db.dict3_source +( + id UInt64, + value String +) ENGINE = TinyLog; + +INSERT INTO 01780_db.dict3_source VALUES (1, '1'), (2, '2'), (3, '3'); + +CREATE DICTIONARY 01780_db.dict3 +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 TABLE 'dict3_source' DATABASE '01780_db')) +LAYOUT(DIRECT()); + +SELECT * FROM 01780_db.dict3; + +DROP DICTIONARY 01780_db.dict3; + +DROP DATABASE 01780_db; diff --git a/tests/queries/0_stateless/01781_merge_tree_deduplication.reference b/tests/queries/0_stateless/01781_merge_tree_deduplication.reference new file mode 100644 index 00000000000..cb5a3f1ff52 --- /dev/null +++ b/tests/queries/0_stateless/01781_merge_tree_deduplication.reference @@ -0,0 +1,85 @@ +1 1 +1 1 +=============== +1 1 +1 1 +2 2 +3 3 +4 4 +=============== +1 1 +1 1 +2 2 +3 3 +4 4 +5 5 +6 6 +7 7 +=============== +1 1 +1 1 +2 2 +3 3 +4 4 +5 5 +6 6 +7 7 +8 8 +9 9 +10 10 +11 11 +12 12 +=============== +10 10 +12 12 +=============== +1 1 +1 1 +2 2 +3 3 +4 4 +5 5 +6 6 +8 8 +9 9 +11 11 +12 12 +=============== +88 11 11 +77 11 11 +77 12 12 
+=============== +1 1 33 +1 1 33 +2 2 33 +3 3 33 +=============== +1 1 33 +1 1 33 +1 1 33 +1 1 33 +2 2 33 +3 3 33 +=============== +1 1 33 +1 1 33 +1 1 33 +1 1 33 +1 1 33 +2 2 33 +3 3 33 +=============== +1 1 44 +2 2 44 +3 3 44 +4 4 44 +=============== +1 1 +1 1 +=============== +1 1 +1 1 +1 1 +2 2 +3 3 +4 4 diff --git a/tests/queries/0_stateless/01781_merge_tree_deduplication.sql b/tests/queries/0_stateless/01781_merge_tree_deduplication.sql new file mode 100644 index 00000000000..236f7b35b80 --- /dev/null +++ b/tests/queries/0_stateless/01781_merge_tree_deduplication.sql @@ -0,0 +1,187 @@ +DROP TABLE IF EXISTS merge_tree_deduplication; + +CREATE TABLE merge_tree_deduplication +( + key UInt64, + value String, + part UInt8 DEFAULT 77 +) +ENGINE=MergeTree() +ORDER BY key +PARTITION BY part +SETTINGS non_replicated_deduplication_window=3; + +SYSTEM STOP MERGES merge_tree_deduplication; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (1, '1'); + +SELECT key, value FROM merge_tree_deduplication; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (1, '1'); + +SELECT key, value FROM merge_tree_deduplication; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (2, '2'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (3, '3'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (4, '4'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (1, '1'); + +SELECT key, value FROM merge_tree_deduplication ORDER BY key; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (5, '5'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (6, '6'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (7, '7'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (5, '5'); + +SELECT key, value FROM merge_tree_deduplication ORDER BY key; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (8, '8'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (9, '9'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (10, '10'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (11, '11'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); + +INSERT INTO merge_tree_deduplication (key, value) VALUES (10, '10'); +INSERT INTO merge_tree_deduplication (key, value) VALUES (11, '11'); +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); + +SELECT key, value FROM merge_tree_deduplication ORDER BY key; + +SELECT '==============='; + +ALTER TABLE merge_tree_deduplication DROP PART '77_9_9_0'; -- some old part + +INSERT INTO merge_tree_deduplication (key, value) VALUES (10, '10'); + +SELECT key, value FROM merge_tree_deduplication WHERE key = 10; + +ALTER TABLE merge_tree_deduplication DROP PART '77_13_13_0'; -- fresh part + +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); + +SELECT key, value FROM merge_tree_deduplication WHERE key = 12; + +DETACH TABLE merge_tree_deduplication; +ATTACH TABLE merge_tree_deduplication; + +OPTIMIZE TABLE merge_tree_deduplication FINAL; + +INSERT INTO merge_tree_deduplication (key, value) VALUES (11, '11'); -- deduplicated +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); -- deduplicated + +SELECT '==============='; + +SELECT key, value FROM merge_tree_deduplication ORDER BY key; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (11, '11', 88); + +ALTER TABLE 
merge_tree_deduplication DROP PARTITION 77; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (11, '11', 88); --deduplicated + +INSERT INTO merge_tree_deduplication (key, value) VALUES (11, '11'); -- not deduplicated +INSERT INTO merge_tree_deduplication (key, value) VALUES (12, '12'); -- not deduplicated + +SELECT part, key, value FROM merge_tree_deduplication ORDER BY key; + +-- Alters.... + +ALTER TABLE merge_tree_deduplication MODIFY SETTING non_replicated_deduplication_window = 2; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (2, '2', 33); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (3, '3', 33); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); + +SELECT * FROM merge_tree_deduplication WHERE part = 33 ORDER BY key; + +SELECT '==============='; + +ALTER TABLE merge_tree_deduplication MODIFY SETTING non_replicated_deduplication_window = 0; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); + +DETACH TABLE merge_tree_deduplication; +ATTACH TABLE merge_tree_deduplication; + +SELECT * FROM merge_tree_deduplication WHERE part = 33 ORDER BY key; + +SELECT '==============='; + +ALTER TABLE merge_tree_deduplication MODIFY SETTING non_replicated_deduplication_window = 3; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 33); + +SELECT * FROM merge_tree_deduplication WHERE part = 33 ORDER BY key; + +SELECT '==============='; + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 44); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (2, '2', 44); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (3, '3', 44); +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (1, '1', 44); + +INSERT INTO merge_tree_deduplication (key, value, part) VALUES (4, '4', 44); + +DETACH TABLE merge_tree_deduplication; +ATTACH TABLE merge_tree_deduplication; + +SELECT * FROM merge_tree_deduplication WHERE part = 44 ORDER BY key; + +DROP TABLE IF EXISTS merge_tree_deduplication; + +SELECT '==============='; + +DROP TABLE IF EXISTS merge_tree_no_deduplication; + +CREATE TABLE merge_tree_no_deduplication +( + key UInt64, + value String +) +ENGINE=MergeTree() +ORDER BY key; + +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (1, '1'); +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (1, '1'); + +SELECT * FROM merge_tree_no_deduplication ORDER BY key; + +SELECT '==============='; + +ALTER TABLE merge_tree_no_deduplication MODIFY SETTING non_replicated_deduplication_window = 3; + +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (1, '1'); +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (2, '2'); +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (3, '3'); + +DETACH TABLE merge_tree_no_deduplication; +ATTACH TABLE merge_tree_no_deduplication; + +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (1, '1'); +INSERT INTO merge_tree_no_deduplication (key, value) VALUES (4, '4'); + +SELECT * FROM merge_tree_no_deduplication ORDER BY key; + +DROP TABLE IF EXISTS merge_tree_no_deduplication; diff --git a/tests/queries/0_stateless/01783_merge_engine_join_key_condition.reference b/tests/queries/0_stateless/01783_merge_engine_join_key_condition.reference new file mode 100644 index 
00000000000..4068a6e00dd --- /dev/null +++ b/tests/queries/0_stateless/01783_merge_engine_join_key_condition.reference @@ -0,0 +1,5 @@ +3 3 +1 4 +1 4 +1 4 +1 4 diff --git a/tests/queries/0_stateless/01783_merge_engine_join_key_condition.sql b/tests/queries/0_stateless/01783_merge_engine_join_key_condition.sql new file mode 100644 index 00000000000..372c1bd3572 --- /dev/null +++ b/tests/queries/0_stateless/01783_merge_engine_join_key_condition.sql @@ -0,0 +1,23 @@ +DROP TABLE IF EXISTS foo; +DROP TABLE IF EXISTS foo_merge; +DROP TABLE IF EXISTS t2; + +CREATE TABLE foo(Id Int32, Val Int32) Engine=MergeTree PARTITION BY Val ORDER BY Id; +INSERT INTO foo SELECT number, number%5 FROM numbers(100000); + +CREATE TABLE foo_merge as foo ENGINE=Merge(currentDatabase(), '^foo'); + +CREATE TABLE t2 (Id Int32, Val Int32, X Int32) Engine=Memory; +INSERT INTO t2 values (4, 3, 4); + +SET force_primary_key = 1, force_index_by_date=1; + +SELECT * FROM foo_merge WHERE Val = 3 AND Id = 3; +SELECT count(), X FROM foo_merge JOIN t2 USING Val WHERE Val = 3 AND Id = 3 AND t2.X == 4 GROUP BY X; +SELECT count(), X FROM foo_merge JOIN t2 USING Val WHERE Val = 3 AND (Id = 3 AND t2.X == 4) GROUP BY X; +SELECT count(), X FROM foo_merge JOIN t2 USING Val WHERE Val = 3 AND Id = 3 GROUP BY X; +SELECT count(), X FROM (SELECT * FROM foo_merge) f JOIN t2 USING Val WHERE Val = 3 AND Id = 3 GROUP BY X; + +DROP TABLE IF EXISTS foo; +DROP TABLE IF EXISTS foo_merge; +DROP TABLE IF EXISTS t2; diff --git a/tests/queries/0_stateless/01785_dictionary_element_count.reference b/tests/queries/0_stateless/01785_dictionary_element_count.reference new file mode 100644 index 00000000000..4b79788b4d4 --- /dev/null +++ b/tests/queries/0_stateless/01785_dictionary_element_count.reference @@ -0,0 +1,8 @@ +1 First +simple_key_flat_dictionary 01785_db 1 +1 First +simple_key_hashed_dictionary 01785_db 1 +1 First +simple_key_cache_dictionary 01785_db 1 +1 FirstKey First +complex_key_hashed_dictionary 01785_db 1 diff --git a/tests/queries/0_stateless/01785_dictionary_element_count.sql b/tests/queries/0_stateless/01785_dictionary_element_count.sql new file mode 100644 index 00000000000..6db65152a56 --- /dev/null +++ b/tests/queries/0_stateless/01785_dictionary_element_count.sql @@ -0,0 +1,91 @@ +DROP DATABASE IF EXISTS 01785_db; +CREATE DATABASE 01785_db; + +DROP TABLE IF EXISTS 01785_db.simple_key_source_table; +CREATE TABLE 01785_db.simple_key_source_table +( + id UInt64, + value String +) ENGINE = TinyLog(); + +INSERT INTO 01785_db.simple_key_source_table VALUES (1, 'First'); +INSERT INTO 01785_db.simple_key_source_table VALUES (1, 'First'); + +DROP DICTIONARY IF EXISTS 01785_db.simple_key_flat_dictionary; +CREATE DICTIONARY 01785_db.simple_key_flat_dictionary +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() DB '01785_db' TABLE 'simple_key_source_table')) +LAYOUT(FLAT()) +LIFETIME(MIN 0 MAX 1000); + +SELECT * FROM 01785_db.simple_key_flat_dictionary; +SELECT name, database, element_count FROM system.dictionaries WHERE database = '01785_db' AND name = 'simple_key_flat_dictionary'; + +DROP DICTIONARY 01785_db.simple_key_flat_dictionary; + +CREATE DICTIONARY 01785_db.simple_key_hashed_dictionary +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() DB '01785_db' TABLE 'simple_key_source_table')) +LAYOUT(HASHED()) +LIFETIME(MIN 0 MAX 1000); + +SELECT * FROM 01785_db.simple_key_hashed_dictionary; +SELECT name, database, element_count FROM 
system.dictionaries WHERE database = '01785_db' AND name = 'simple_key_hashed_dictionary'; + +DROP DICTIONARY 01785_db.simple_key_hashed_dictionary; + +CREATE DICTIONARY 01785_db.simple_key_cache_dictionary +( + id UInt64, + value String +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() DB '01785_db' TABLE 'simple_key_source_table')) +LAYOUT(CACHE(SIZE_IN_CELLS 100000)) +LIFETIME(MIN 0 MAX 1000); + +SELECT toUInt64(1) as key, dictGet('01785_db.simple_key_cache_dictionary', 'value', key); +SELECT name, database, element_count FROM system.dictionaries WHERE database = '01785_db' AND name = 'simple_key_cache_dictionary'; + +DROP DICTIONARY 01785_db.simple_key_cache_dictionary; + +DROP TABLE 01785_db.simple_key_source_table; + +DROP TABLE IF EXISTS 01785_db.complex_key_source_table; +CREATE TABLE 01785_db.complex_key_source_table +( + id UInt64, + id_key String, + value String +) ENGINE = TinyLog(); + +INSERT INTO 01785_db.complex_key_source_table VALUES (1, 'FirstKey', 'First'); +INSERT INTO 01785_db.complex_key_source_table VALUES (1, 'FirstKey', 'First'); + +CREATE DICTIONARY 01785_db.complex_key_hashed_dictionary +( + id UInt64, + id_key String, + value String +) +PRIMARY KEY id, id_key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() DB '01785_db' TABLE 'complex_key_source_table')) +LAYOUT(COMPLEX_KEY_HASHED()) +LIFETIME(MIN 0 MAX 1000); + +SELECT * FROM 01785_db.complex_key_hashed_dictionary; +SELECT name, database, element_count FROM system.dictionaries WHERE database = '01785_db' AND name = 'complex_key_hashed_dictionary'; + +DROP DICTIONARY 01785_db.complex_key_hashed_dictionary; + +DROP TABLE 01785_db.complex_key_source_table; + +DROP DATABASE 01785_db; diff --git a/tests/queries/0_stateless/01786_explain_merge_tree.reference b/tests/queries/0_stateless/01786_explain_merge_tree.reference new file mode 100644 index 00000000000..51eb52688a3 --- /dev/null +++ b/tests/queries/0_stateless/01786_explain_merge_tree.reference @@ -0,0 +1,51 @@ + ReadFromMergeTree + Indexes: + MinMax + Keys: + y + Condition: (y in [1, +inf)) + Parts: 4/5 + Granules: 11/12 + Partition + Keys: + y + bitAnd(z, 3) + Condition: and((bitAnd(z, 3) not in [1, 1]), and((y in [1, +inf)), (bitAnd(z, 3) not in [1, 1]))) + Parts: 3/4 + Granules: 10/11 + PrimaryKey + Keys: + x + y + Condition: and((x in [11, +inf)), (y in [1, +inf))) + Parts: 2/3 + Granules: 6/10 + Skip + Name: t_minmax + Description: minmax GRANULARITY 2 + Parts: 1/2 + Granules: 2/6 + Skip + Name: t_set + Description: set GRANULARITY 2 + Parts: 1/1 + Granules: 1/2 +----------------- + ReadFromMergeTree + ReadType: InOrder + Parts: 1 + Granules: 3 +----------------- + ReadFromMergeTree + ReadType: InReverseOrder + Parts: 1 + Granules: 3 + ReadFromMergeTree + Indexes: + PrimaryKey + Keys: + x + plus(x, y) + Condition: or((x in 2-element set), (plus(plus(x, y), 1) in (-inf, 2])) + Parts: 1/1 + Granules: 1/1 diff --git a/tests/queries/0_stateless/01786_explain_merge_tree.sh b/tests/queries/0_stateless/01786_explain_merge_tree.sh new file mode 100755 index 00000000000..2791d0c6921 --- /dev/null +++ b/tests/queries/0_stateless/01786_explain_merge_tree.sh @@ -0,0 +1,37 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CURDIR"/../shell_config.sh + +$CLICKHOUSE_CLIENT -q "drop table if exists test_index" +$CLICKHOUSE_CLIENT -q "drop table if exists idx" + +$CLICKHOUSE_CLIENT -q "create table test_index (x UInt32, y UInt32, z UInt32, t UInt32, index t_minmax t % 20 TYPE minmax GRANULARITY 2, index t_set t % 19 type set(4) granularity 2) engine = MergeTree order by (x, y) partition by (y, bitAnd(z, 3), intDiv(t, 15)) settings index_granularity = 2, min_bytes_for_wide_part = 0" +$CLICKHOUSE_CLIENT -q "insert into test_index select number, number > 3 ? 3 : number, number = 1 ? 1 : 0, number from numbers(20)" + +$CLICKHOUSE_CLIENT -q " + explain indexes = 1 select *, _part from test_index where t % 19 = 16 and y > 0 and bitAnd(z, 3) != 1 and x > 10 and t % 20 > 14; + " | grep -A 100 "ReadFromMergeTree" # | grep -v "Description" + +echo "-----------------" + +$CLICKHOUSE_CLIENT -q " + explain actions = 1 select x from test_index where x > 15 order by x; + " | grep -A 100 "ReadFromMergeTree" + +echo "-----------------" + +$CLICKHOUSE_CLIENT -q " + explain actions = 1 select x from test_index where x > 15 order by x desc; + " | grep -A 100 "ReadFromMergeTree" + +$CLICKHOUSE_CLIENT -q "CREATE TABLE idx (x UInt32, y UInt32, z UInt32) ENGINE = MergeTree ORDER BY (x, x + y) settings min_bytes_for_wide_part = 0" +$CLICKHOUSE_CLIENT -q "insert into idx select number, number, number from numbers(10)" + +$CLICKHOUSE_CLIENT -q " + explain indexes = 1 select z from idx where not(x + y + 1 > 2 and x not in (4, 5)) + " | grep -A 100 "ReadFromMergeTree" + +$CLICKHOUSE_CLIENT -q "drop table if exists test_index" +$CLICKHOUSE_CLIENT -q "drop table if exists idx" diff --git a/tests/queries/0_stateless/01786_group_by_pk_many_streams.reference b/tests/queries/0_stateless/01786_group_by_pk_many_streams.reference new file mode 100644 index 00000000000..b8809e746a5 --- /dev/null +++ b/tests/queries/0_stateless/01786_group_by_pk_many_streams.reference @@ -0,0 +1,11 @@ +94950 +84950 +74950 +64950 +54950 +======= +94950 +84950 +74950 +64950 +54950 diff --git a/tests/queries/0_stateless/01786_group_by_pk_many_streams.sql b/tests/queries/0_stateless/01786_group_by_pk_many_streams.sql new file mode 100644 index 00000000000..e555aa4d6e6 --- /dev/null +++ b/tests/queries/0_stateless/01786_group_by_pk_many_streams.sql @@ -0,0 +1,16 @@ +DROP TABLE IF EXISTS group_by_pk; + +CREATE TABLE group_by_pk (k UInt64, v UInt64) +ENGINE = MergeTree ORDER BY k PARTITION BY v % 50; + +INSERT INTO group_by_pk SELECT number / 100, number FROM numbers(1000); + +SELECT sum(v) AS s FROM group_by_pk GROUP BY k ORDER BY s DESC LIMIT 5 +SETTINGS optimize_aggregation_in_order = 1, max_block_size = 1; + +SELECT '======='; + +SELECT sum(v) AS s FROM group_by_pk GROUP BY k ORDER BY s DESC LIMIT 5 +SETTINGS optimize_aggregation_in_order = 0, max_block_size = 1; + +DROP TABLE IF EXISTS group_by_pk; diff --git a/tests/queries/0_stateless/01787_arena_assert_column_nothing.reference b/tests/queries/0_stateless/01787_arena_assert_column_nothing.reference new file mode 100644 index 00000000000..d00491fd7e5 --- /dev/null +++ b/tests/queries/0_stateless/01787_arena_assert_column_nothing.reference @@ -0,0 +1 @@ +1 diff --git a/tests/queries/0_stateless/01787_arena_assert_column_nothing.sql b/tests/queries/0_stateless/01787_arena_assert_column_nothing.sql new file mode 100644 index 00000000000..de6374a1bc3 --- /dev/null +++ b/tests/queries/0_stateless/01787_arena_assert_column_nothing.sql @@ -0,0 +1 @@ +SELECT 1 GROUP BY emptyArrayToSingle(arrayFilter(x -> 1, [])); diff --git 
a/tests/queries/0_stateless/01787_map_remote.reference b/tests/queries/0_stateless/01787_map_remote.reference new file mode 100644 index 00000000000..1c488d4418e --- /dev/null +++ b/tests/queries/0_stateless/01787_map_remote.reference @@ -0,0 +1,2 @@ +{'a':1,'b':2} +{'a':1,'b':2} diff --git a/tests/queries/0_stateless/01787_map_remote.sql b/tests/queries/0_stateless/01787_map_remote.sql new file mode 100644 index 00000000000..854eafa0a50 --- /dev/null +++ b/tests/queries/0_stateless/01787_map_remote.sql @@ -0,0 +1 @@ +SELECT map('a', 1, 'b', 2) FROM remote('127.0.0.{1,2}', system, one); \ No newline at end of file diff --git a/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.reference b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.reference new file mode 100644 index 00000000000..c6f75cab8b7 --- /dev/null +++ b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.reference @@ -0,0 +1,21 @@ +1 [100,200] ['aa','bb'] [1,2] +0 [0,1] ['aa','bb'] [0,0] +1 [100,200] ['aa','bb'] [1,2] +2 [100,200,300] ['a','b','c'] [10,20,30] +3 [3,4] ['aa','bb'] [3,6] +4 [4,5] ['aa','bb'] [4,8] +0 [0,1] ['aa','bb'] [0,0] +1 [100,200] ['aa','bb'] [1,2] +2 [100,200,300] ['a','b','c'] [100,200,300] +3 [3,4] ['aa','bb'] [3,6] +4 [4,5] ['aa','bb'] [4,8] +0 [0,1] ['aa','bb'] [0,0] +1 [100,200] ['aa','bb'] [1,2] +2 [100,200,300] ['a','b','c'] [100,200,300] +3 [68,72] ['aa','bb'] [68,72] +4 [4,5] ['aa','bb'] [4,8] +0 0 aa 0 +1 1 bb 2 +2 2 aa 4 +3 3 aa 6 +4 4 aa 8 diff --git a/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql new file mode 100644 index 00000000000..8e850b70c24 --- /dev/null +++ b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql @@ -0,0 +1,70 @@ +DROP TABLE IF EXISTS test_wide_nested; + +CREATE TABLE test_wide_nested +( + `id` Int, + `info.id` Array(Int), + `info.name` Array(String), + `info.age` Array(Int) +) +ENGINE = MergeTree +ORDER BY tuple() +SETTINGS min_bytes_for_wide_part = 0; + +set mutations_sync = 1; + +INSERT INTO test_wide_nested SELECT number, [number,number + 1], ['aa','bb'], [number,number * 2] FROM numbers(5); + +alter table test_wide_nested update `info.id` = [100,200] where id = 1; +select * from test_wide_nested where id = 1 order by id; + +alter table test_wide_nested update `info.id` = [100,200,300], `info.age` = [10,20,30], `info.name` = ['a','b','c'] where id = 2; +select * from test_wide_nested; + +alter table test_wide_nested update `info.id` = [100,200,300], `info.age` = `info.id`, `info.name` = ['a','b','c'] where id = 2; +select * from test_wide_nested; + +alter table test_wide_nested update `info.id` = [100,200], `info.age`=[68,72] where id = 3; +alter table test_wide_nested update `info.id` = `info.age` where id = 3; +select * from test_wide_nested; + +alter table test_wide_nested update `info.id` = [100,200], `info.age` = [10,20,30], `info.name` = ['a','b','c'] where id = 0; -- { serverError 341 } + +-- Recreate table, because KILL MUTATION is not suitable for parallel tests execution. 
+DROP TABLE test_wide_nested; + +CREATE TABLE test_wide_nested +( + `id` Int, + `info.id` Array(Int), + `info.name` Array(String), + `info.age` Array(Int) +) +ENGINE = MergeTree +ORDER BY tuple() +SETTINGS min_bytes_for_wide_part = 0; + +INSERT INTO test_wide_nested SELECT number, [number,number + 1], ['aa','bb'], [number,number * 2] FROM numbers(5); + +alter table test_wide_nested update `info.id` = [100,200,300], `info.age` = [10,20,30] where id = 1; -- { serverError 341 } + +DROP TABLE test_wide_nested; + +DROP TABLE IF EXISTS test_wide_not_nested; + +CREATE TABLE test_wide_not_nested +( + `id` Int, + `info.id` Int, + `info.name` String, + `info.age` Int +) +ENGINE = MergeTree +ORDER BY tuple() +SETTINGS min_bytes_for_wide_part = 0; + +INSERT INTO test_wide_not_nested SELECT number, number, 'aa', number * 2 FROM numbers(5); +ALTER TABLE test_wide_not_nested UPDATE `info.name` = 'bb' WHERE id = 1; +SELECT * FROM test_wide_not_nested ORDER BY id; + +DROP TABLE test_wide_not_nested; diff --git a/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.reference b/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.sql b/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.sql new file mode 100644 index 00000000000..e921460ccfc --- /dev/null +++ b/tests/queries/0_stateless/01790_dist_INSERT_block_structure_mismatch_types_and_names.sql @@ -0,0 +1,22 @@ +DROP TABLE IF EXISTS tmp_01781; +DROP TABLE IF EXISTS dist_01781; + +SET prefer_localhost_replica=0; + +CREATE TABLE tmp_01781 (n LowCardinality(String)) ENGINE=Memory; +CREATE TABLE dist_01781 (n LowCardinality(String)) Engine=Distributed(test_cluster_two_shards, currentDatabase(), tmp_01781, cityHash64(n)); + +SET insert_distributed_sync=1; +INSERT INTO dist_01781 VALUES ('1'),('2'); +-- different LowCardinality size +INSERT INTO dist_01781 SELECT * FROM numbers(1000); + +SET insert_distributed_sync=0; +SYSTEM STOP DISTRIBUTED SENDS dist_01781; +INSERT INTO dist_01781 VALUES ('1'),('2'); +-- different LowCardinality size +INSERT INTO dist_01781 SELECT * FROM numbers(1000); +SYSTEM FLUSH DISTRIBUTED dist_01781; + +DROP TABLE tmp_01781; +DROP TABLE dist_01781; diff --git a/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.reference b/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.reference new file mode 100644 index 00000000000..3bba1ac23c0 --- /dev/null +++ b/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.reference @@ -0,0 +1,6 @@ + DistributedBlockOutputStream: Structure does not match (remote: n Int8 Int8(size = 0), local: n UInt64 UInt64(size = 1)), implicit conversion will be done. + DistributedBlockOutputStream: Structure does not match (remote: n Int8 Int8(size = 0), local: n UInt64 UInt64(size = 1)), implicit conversion will be done. 
+1 +1 +2 +2 diff --git a/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.sh b/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.sh new file mode 100755 index 00000000000..e989696da03 --- /dev/null +++ b/tests/queries/0_stateless/01791_dist_INSERT_block_structure_mismatch.sh @@ -0,0 +1,30 @@ +#!/usr/bin/env bash + +# NOTE: this is a partial copy of the 01683_dist_INSERT_block_structure_mismatch, +# but this test also checks the log messages + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +$CLICKHOUSE_CLIENT --prefer_localhost_replica=0 -nm -q " + DROP TABLE IF EXISTS tmp_01683; + DROP TABLE IF EXISTS dist_01683; + + CREATE TABLE tmp_01683 (n Int8) ENGINE=Memory; + CREATE TABLE dist_01683 (n UInt64) Engine=Distributed(test_cluster_two_shards, currentDatabase(), tmp_01683, n); + + SET insert_distributed_sync=1; + INSERT INTO dist_01683 VALUES (1),(2); + + SET insert_distributed_sync=0; + INSERT INTO dist_01683 VALUES (1),(2); + SYSTEM FLUSH DISTRIBUTED dist_01683; + + -- TODO: cover distributed_directory_monitor_batch_inserts=1 + + SELECT * FROM tmp_01683 ORDER BY n; + + DROP TABLE tmp_01683; + DROP TABLE dist_01683; +" |& sed 's/^.*&1 \ + | grep -q "Code: 27" + +echo $?; + +$CLICKHOUSE_CLIENT --query="DROP TABLE nullable_low_cardinality_tsv_test"; diff --git a/tests/queries/0_stateless/01801_s3_cluster.reference b/tests/queries/0_stateless/01801_s3_cluster.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01801_s3_cluster.sh b/tests/queries/0_stateless/01801_s3_cluster.sh new file mode 100755 index 00000000000..215d5500be5 --- /dev/null +++ b/tests/queries/0_stateless/01801_s3_cluster.sh @@ -0,0 +1,12 @@ +#!/usr/bin/env bash + +# NOTE: this is a partial copy of the 01683_dist_INSERT_block_structure_mismatch, +# but this test also checks the log messages + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + + +${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3('https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" +${CLICKHOUSE_CLIENT_BINARY} --send_logs_level="none" -q "SELECT * FROM s3Cluster('test_cluster_two_shards', 'https://s3.mds.yandex.net/clickhouse-test-reports/*/*/functional_stateless_tests_(ubsan)/test_results.tsv', '$S3_ACCESS_KEY_ID', '$S3_SECRET_ACCESS', 'LineAsString', 'line String') limit 100 FORMAT Null;" diff --git a/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.reference b/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.reference new file mode 100644 index 00000000000..75c114cdd74 --- /dev/null +++ b/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.reference @@ -0,0 +1,27 @@ +-- { echo } + +SELECT formatDateTime(toDateTime64('1935-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('1969-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('1989-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('2019-09-16 19:20:12', 0, 'Europe/Moscow'), '%C'); +20 +SELECT formatDateTime(toDateTime64('2105-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +21 +SELECT formatDateTime(toDateTime64('2205-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +22 +-- non-zero scale +SELECT formatDateTime(toDateTime64('1935-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('1969-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('1989-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +19 +SELECT formatDateTime(toDateTime64('2019-09-16 19:20:12', 0, 'Europe/Moscow'), '%C'); +20 +SELECT formatDateTime(toDateTime64('2105-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +21 +SELECT formatDateTime(toDateTime64('2205-01-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +22 diff --git a/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.sql b/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.sql new file mode 100644 index 00000000000..e368f45cbda --- /dev/null +++ b/tests/queries/0_stateless/01802_formatDateTime_DateTime64_century.sql @@ -0,0 +1,16 @@ +-- { echo } + +SELECT formatDateTime(toDateTime64('1935-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('1969-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('1989-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2019-09-16 19:20:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2105-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2205-12-12 12:12:12', 0, 'Europe/Moscow'), '%C'); + +-- non-zero scale +SELECT formatDateTime(toDateTime64('1935-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('1969-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('1989-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2019-09-16 19:20:12', 0, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2105-12-12 12:12:12', 6, 'Europe/Moscow'), '%C'); +SELECT formatDateTime(toDateTime64('2205-01-12 12:12:12', 6, 'Europe/Moscow'), '%C'); \ No newline at end of file diff --git 
a/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.reference b/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.reference new file mode 100644 index 00000000000..42acbe4fbaf --- /dev/null +++ b/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.reference @@ -0,0 +1,10 @@ +0.5060606060606061 +0.5083333333333333 +0.5119047619047619 +0.5178571428571428 +0.5285714285714286 +0.525 +0.55 +0.625 +0.5 +nan diff --git a/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql b/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql new file mode 100644 index 00000000000..3c1746a30f8 --- /dev/null +++ b/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql @@ -0,0 +1,22 @@ +DROP TABLE IF EXISTS 01802_empsalary; + +SET allow_experimental_window_functions=1; + +CREATE TABLE 01802_empsalary +( + `depname` LowCardinality(String), + `empno` UInt64, + `salary` Int32, + `enroll_date` Date +) +ENGINE = MergeTree +ORDER BY enroll_date +SETTINGS index_granularity = 8192; + +INSERT INTO 01802_empsalary VALUES ('sales', 1, 5000, '2006-10-01'), ('develop', 8, 6000, '2006-10-01'), ('personnel', 2, 3900, '2006-12-23'), ('develop', 10, 5200, '2007-08-01'), ('sales', 3, 4800, '2007-08-01'), ('sales', 4, 4800, '2007-08-08'), ('develop', 11, 5200, '2007-08-15'), ('personnel', 5, 3500, '2007-12-10'), ('develop', 7, 4200, '2008-01-01'), ('develop', 9, 4500, '2008-01-01'); + +SELECT mannWhitneyUTest(salary, salary) OVER (ORDER BY salary ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func FROM 01802_empsalary; -- {serverError 36} + +SELECT rankCorr(salary, 0.5) OVER (ORDER BY salary ASC ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func FROM 01802_empsalary; + +DROP TABLE IF EXISTS 01802_empsalary; diff --git a/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.reference b/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.reference new file mode 100644 index 00000000000..729d93bf322 --- /dev/null +++ b/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.reference @@ -0,0 +1,24 @@ +before row policy +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 + +after row policy with no password + val +----- + 2 +(1 row) + +after row policy with plaintext_password + val +----- + 2 +(1 row) + diff --git a/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.sh b/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.sh new file mode 100755 index 00000000000..edd73131020 --- /dev/null +++ b/tests/queries/0_stateless/01802_test_postgresql_protocol_with_row_policy.sh @@ -0,0 +1,43 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +echo " +CREATE DATABASE IF NOT EXISTS db01802; +DROP TABLE IF EXISTS db01802.postgresql; +DROP ROW POLICY IF EXISTS test_policy ON db01802.postgresql; + +CREATE TABLE db01802.postgresql (val UInt32) ENGINE=MergeTree ORDER BY val; +INSERT INTO db01802.postgresql SELECT number FROM numbers(10); + +SELECT 'before row policy'; +SELECT * FROM db01802.postgresql; +" | $CLICKHOUSE_CLIENT -n + + +echo " +DROP USER IF EXISTS postgresql_user; +CREATE USER postgresql_user HOST IP '127.0.0.1' IDENTIFIED WITH no_password; +GRANT SELECT(val) ON db01802.postgresql TO postgresql_user; +CREATE ROW POLICY IF NOT EXISTS test_policy ON db01802.postgresql FOR SELECT USING val = 2 TO postgresql_user; + +SELECT ''; +SELECT 'after row policy with no password'; +" | $CLICKHOUSE_CLIENT -n + +psql --host localhost --port ${CLICKHOUSE_PORT_POSTGRESQL} db01802 --user postgresql_user -c "SELECT * FROM postgresql;" + +echo " +DROP USER IF EXISTS postgresql_user; +DROP ROW POLICY IF EXISTS test_policy ON db01802.postgresql; +CREATE USER postgresql_user HOST IP '127.0.0.1' IDENTIFIED WITH plaintext_password BY 'qwerty'; +GRANT SELECT(val) ON db01802.postgresql TO postgresql_user; +CREATE ROW POLICY IF NOT EXISTS test_policy ON db01802.postgresql FOR SELECT USING val = 2 TO postgresql_user; + +SELECT 'after row policy with plaintext_password'; +" | $CLICKHOUSE_CLIENT -n + +psql "postgresql://postgresql_user:qwerty@localhost:${CLICKHOUSE_PORT_POSTGRESQL}/db01802" -c "SELECT * FROM postgresql;" + diff --git a/tests/queries/0_stateless/01802_toDateTime64_large_values.reference b/tests/queries/0_stateless/01802_toDateTime64_large_values.reference new file mode 100644 index 00000000000..c44c61ab93a --- /dev/null +++ b/tests/queries/0_stateless/01802_toDateTime64_large_values.reference @@ -0,0 +1,10 @@ +-- { echo } + +SELECT toDateTime64('2205-12-12 12:12:12', 0, 'UTC'); +2205-12-12 12:12:12 +SELECT toDateTime64('2205-12-12 12:12:12', 0, 'Europe/Moscow'); +2205-12-12 12:12:12 +SELECT toDateTime64('2205-12-12 12:12:12', 6, 'Europe/Moscow'); +2205-12-12 12:12:12.000000 +SELECT toDateTime64('2205-12-12 12:12:12', 6, 'Europe/Moscow'); +2205-12-12 12:12:12.000000 diff --git a/tests/queries/0_stateless/01802_toDateTime64_large_values.sql b/tests/queries/0_stateless/01802_toDateTime64_large_values.sql new file mode 100644 index 00000000000..299111f43bc --- /dev/null +++ b/tests/queries/0_stateless/01802_toDateTime64_large_values.sql @@ -0,0 +1,7 @@ +-- { echo } + +SELECT toDateTime64('2205-12-12 12:12:12', 0, 'UTC'); +SELECT toDateTime64('2205-12-12 12:12:12', 0, 'Europe/Moscow'); + +SELECT toDateTime64('2205-12-12 12:12:12', 6, 'Europe/Moscow'); +SELECT toDateTime64('2205-12-12 12:12:12', 6, 'Europe/Moscow'); \ No newline at end of file diff --git a/tests/queries/0_stateless/01803_const_nullable_map.reference b/tests/queries/0_stateless/01803_const_nullable_map.reference new file mode 100644 index 00000000000..573541ac970 --- /dev/null +++ b/tests/queries/0_stateless/01803_const_nullable_map.reference @@ -0,0 +1 @@ +0 diff --git a/tests/queries/0_stateless/01803_const_nullable_map.sql b/tests/queries/0_stateless/01803_const_nullable_map.sql new file mode 100644 index 00000000000..4ac9f925e24 --- /dev/null +++ b/tests/queries/0_stateless/01803_const_nullable_map.sql @@ -0,0 +1,9 @@ +DROP TABLE IF EXISTS t_map_null; + +SET allow_experimental_map_type = 1; + +CREATE TABLE t_map_null (a Map(String, String), b String) engine = MergeTree() ORDER BY a; +INSERT INTO t_map_null VALUES (map('a', 'b', 'c', 'd'), 
'foo'); +SELECT count() FROM t_map_null WHERE a = map('name', NULL, '', NULL); + +DROP TABLE t_map_null; diff --git a/tests/queries/0_stateless/01803_untuple_subquery.reference b/tests/queries/0_stateless/01803_untuple_subquery.reference new file mode 100644 index 00000000000..838ff3aa952 --- /dev/null +++ b/tests/queries/0_stateless/01803_untuple_subquery.reference @@ -0,0 +1,2 @@ +(0.5,'92233720368547758.07',NULL) 1.00 256 \N \N +\N diff --git a/tests/queries/0_stateless/01803_untuple_subquery.sql b/tests/queries/0_stateless/01803_untuple_subquery.sql new file mode 100644 index 00000000000..512b4c561af --- /dev/null +++ b/tests/queries/0_stateless/01803_untuple_subquery.sql @@ -0,0 +1,3 @@ +SELECT (0.5, '92233720368547758.07', NULL), '', '1.00', untuple(('256', NULL)), NULL FROM (SELECT untuple(((NULL, untuple((('0.0000000100', (65536, NULL, (65535, 9223372036854775807), '25.7', (0.00009999999747378752, '10.25', 1048577), 65536)), '0.0000001024', '65537', NULL))), untuple((9223372036854775807, -inf, 0.5)), NULL, -9223372036854775808)), 257, 7, ('0.0001048575', (1024, NULL, (7, 3), (untuple(tuple(-NULL)), NULL, '0.0001048577', NULL), 0)), 0, (0, 0.9998999834060669, '65537'), untuple(tuple('10.25'))); + +SELECT NULL FROM (SELECT untuple((NULL, dummy))); diff --git a/tests/queries/0_stateless/01804_dictionary_decimal256_type.reference b/tests/queries/0_stateless/01804_dictionary_decimal256_type.reference new file mode 100644 index 00000000000..1af9d45f72b --- /dev/null +++ b/tests/queries/0_stateless/01804_dictionary_decimal256_type.reference @@ -0,0 +1,14 @@ +Flat dictionary +5.00000 +Hashed dictionary +5.00000 +Cache dictionary +5.00000 +SSDCache dictionary +5.00000 +Direct dictionary +5.00000 +IPTrie dictionary +5.00000 +Polygon dictionary +5.00000 diff --git a/tests/queries/0_stateless/01804_dictionary_decimal256_type.sql b/tests/queries/0_stateless/01804_dictionary_decimal256_type.sql new file mode 100644 index 00000000000..cc0ec598b70 --- /dev/null +++ b/tests/queries/0_stateless/01804_dictionary_decimal256_type.sql @@ -0,0 +1,141 @@ +SET allow_experimental_bigint_types = 1; + +DROP TABLE IF EXISTS dictionary_decimal_source_table; +CREATE TABLE dictionary_decimal_source_table +( + id UInt64, + decimal_value Decimal256(5) +) ENGINE = TinyLog; + +INSERT INTO dictionary_decimal_source_table VALUES (1, 5.0); + +DROP DICTIONARY IF EXISTS flat_dictionary; +CREATE DICTIONARY flat_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(FLAT()); + +SELECT 'Flat dictionary'; +SELECT dictGet('flat_dictionary', 'decimal_value', toUInt64(1)); + +DROP DICTIONARY IF EXISTS hashed_dictionary; +CREATE DICTIONARY hashed_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(HASHED()); + +SELECT 'Hashed dictionary'; +SELECT dictGet('hashed_dictionary', 'decimal_value', toUInt64(1)); + +DROP DICTIONARY hashed_dictionary; + +DROP DICTIONARY IF EXISTS cache_dictionary; +CREATE DICTIONARY cache_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(CACHE(SIZE_IN_CELLS 10)); + +SELECT 'Cache dictionary'; +SELECT dictGet('cache_dictionary', 'decimal_value', toUInt64(1)); + 
+DROP DICTIONARY cache_dictionary; + +DROP DICTIONARY IF EXISTS ssd_cache_dictionary; +CREATE DICTIONARY ssd_cache_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(SSD_CACHE(BLOCK_SIZE 4096 FILE_SIZE 8192 PATH '/var/lib/clickhouse/clickhouse_dicts/0d')); + +SELECT 'SSDCache dictionary'; +SELECT dictGet('ssd_cache_dictionary', 'decimal_value', toUInt64(1)); + +DROP DICTIONARY ssd_cache_dictionary; + +DROP DICTIONARY IF EXISTS direct_dictionary; +CREATE DICTIONARY direct_dictionary +( + id UInt64, + decimal_value Decimal256(5) +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_source_table')) +LAYOUT(DIRECT()); + +SELECT 'Direct dictionary'; +SELECT dictGet('direct_dictionary', 'decimal_value', toUInt64(1)); + +DROP DICTIONARY direct_dictionary; + +DROP TABLE dictionary_decimal_source_table; + +DROP TABLE IF EXISTS ip_trie_dictionary_decimal_source_table; +CREATE TABLE ip_trie_dictionary_decimal_source_table +( + prefix String, + decimal_value Decimal256(5) +) ENGINE = TinyLog; + +INSERT INTO ip_trie_dictionary_decimal_source_table VALUES ('127.0.0.0', 5.0); + +DROP DICTIONARY IF EXISTS ip_trie_dictionary; +CREATE DICTIONARY ip_trie_dictionary +( + prefix String, + decimal_value Decimal256(5) +) +PRIMARY KEY prefix +SOURCE(CLICKHOUSE(HOST 'localhost' port tcpPort() TABLE 'ip_trie_dictionary_decimal_source_table')) +LIFETIME(MIN 10 MAX 1000) +LAYOUT(IP_TRIE()); + +SELECT 'IPTrie dictionary'; +SELECT dictGet('ip_trie_dictionary', 'decimal_value', tuple(IPv4StringToNum('127.0.0.0'))); + +DROP DICTIONARY ip_trie_dictionary; +DROP TABLE ip_trie_dictionary_decimal_source_table; + +DROP TABLE IF EXISTS dictionary_decimal_polygons_source_table; +CREATE TABLE dictionary_decimal_polygons_source_table +( + key Array(Array(Array(Tuple(Float64, Float64)))), + decimal_value Decimal256(5) +) ENGINE = TinyLog; + +INSERT INTO dictionary_decimal_polygons_source_table VALUES ([[[(0, 0), (0, 1), (1, 1), (1, 0)]]], 5.0); + +DROP DICTIONARY IF EXISTS polygon_dictionary; +CREATE DICTIONARY polygon_dictionary +( + key Array(Array(Array(Tuple(Float64, Float64)))), + decimal_value Decimal256(5) +) +PRIMARY KEY key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_decimal_polygons_source_table')) +LIFETIME(MIN 0 MAX 1000) +LAYOUT(POLYGON()); + +SELECT 'Polygon dictionary'; +SELECT dictGet('polygon_dictionary', 'decimal_value', tuple(0.5, 0.5)); + +DROP DICTIONARY polygon_dictionary; +DROP TABLE dictionary_decimal_polygons_source_table; diff --git a/tests/queries/0_stateless/01804_uniq_up_to_ubsan.reference b/tests/queries/0_stateless/01804_uniq_up_to_ubsan.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql b/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql new file mode 100644 index 00000000000..fcbe585b70a --- /dev/null +++ b/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql @@ -0,0 +1,2 @@ +SELECT uniqUpTo(1e100)(number) FROM numbers(5); -- { serverError 70 } +SELECT uniqUpTo(-1e100)(number) FROM numbers(5); -- { serverError 70 } diff --git a/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.reference b/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git 
a/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.sql b/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.sql new file mode 100644 index 00000000000..e9bbfe69421 --- /dev/null +++ b/tests/queries/0_stateless/01809_inactive_parts_to_delay_throw_insert.sql @@ -0,0 +1,12 @@ +drop table if exists data_01809; + +create table data_01809 (i int) engine MergeTree order by i settings old_parts_lifetime = 10000000000, min_bytes_for_wide_part = 0, inactive_parts_to_throw_insert = 0, inactive_parts_to_delay_insert = 1; + +insert into data_01809 values (1); +insert into data_01809 values (2); + +optimize table data_01809 final; + +insert into data_01809 values (3); + +drop table data_01809; diff --git a/tests/queries/0_stateless/01810_max_part_removal_threads_long.reference b/tests/queries/0_stateless/01810_max_part_removal_threads_long.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01810_max_part_removal_threads_long.sh b/tests/queries/0_stateless/01810_max_part_removal_threads_long.sh new file mode 100755 index 00000000000..f2aa1f63197 --- /dev/null +++ b/tests/queries/0_stateless/01810_max_part_removal_threads_long.sh @@ -0,0 +1,36 @@ +#!/usr/bin/env bash + +# NOTE: this is done as a .sh test rather than .sql since we need an Ordinary database +# (to account for threads in query_log for the DROP TABLE query) +# and we can make it compatible with parallel runs only in .sh +# (via $CLICKHOUSE_DATABASE) + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +$CLICKHOUSE_CLIENT -nm -q "create database ordinary_$CLICKHOUSE_DATABASE engine=Ordinary" + +# MergeTree +$CLICKHOUSE_CLIENT -nm -q """ + use ordinary_$CLICKHOUSE_DATABASE; + drop table if exists data_01810; + create table data_01810 (key Int) Engine=MergeTree() order by key partition by key settings max_part_removal_threads=10, concurrent_part_removal_threshold=49; + insert into data_01810 select * from numbers(50); + drop table data_01810 settings log_queries=1; + system flush logs; + select throwIf(length(thread_ids)<50) from system.query_log where event_date = today() and current_database = currentDatabase() and query = 'drop table data_01810 settings log_queries=1;' and type = 'QueryFinish' format Null; +""" + +# ReplicatedMergeTree +$CLICKHOUSE_CLIENT -nm -q """ + use ordinary_$CLICKHOUSE_DATABASE; + drop table if exists rep_data_01810; + create table rep_data_01810 (key Int) Engine=ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/rep_data_01810', '1') order by key partition by key settings max_part_removal_threads=10, concurrent_part_removal_threshold=49; + insert into rep_data_01810 select * from numbers(50); + drop table rep_data_01810 settings log_queries=1; + system flush logs; + select throwIf(length(thread_ids)<50) from system.query_log where event_date = today() and current_database = currentDatabase() and query = 'drop table rep_data_01810 settings log_queries=1;' and type = 'QueryFinish' format Null; +""" + +$CLICKHOUSE_CLIENT -nm -q "drop database ordinary_$CLICKHOUSE_DATABASE" diff --git a/tests/queries/0_stateless/01811_filter_by_null.reference b/tests/queries/0_stateless/01811_filter_by_null.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01811_filter_by_null.sql b/tests/queries/0_stateless/01811_filter_by_null.sql new file mode 100644 index 00000000000..496faf428ab --- /dev/null +++ 
b/tests/queries/0_stateless/01811_filter_by_null.sql @@ -0,0 +1,9 @@ +DROP TABLE IF EXISTS test_01344; + +CREATE TABLE test_01344 (x String, INDEX idx (x) TYPE set(10) GRANULARITY 1) ENGINE = MergeTree ORDER BY tuple() SETTINGS min_bytes_for_wide_part = 0; +INSERT INTO test_01344 VALUES ('Hello, world'); +SELECT NULL FROM test_01344 WHERE ignore(1) = NULL; +SELECT NULL FROM test_01344 WHERE encrypt(ignore(encrypt(NULL, '0.0001048577', lcm(2, 65537), NULL, inf, NULL), lcm(-2, 1048575)), '-0.0000000001', lcm(NULL, NULL)) = NULL; +SELECT NULL FROM test_01344 WHERE ignore(x, lcm(NULL, 1048576), -2) = NULL; + +DROP TABLE test_01344; diff --git a/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.reference b/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.reference new file mode 100644 index 00000000000..209e3ef4b62 --- /dev/null +++ b/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.reference @@ -0,0 +1 @@ +20 diff --git a/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.sql b/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.sql new file mode 100644 index 00000000000..dac68ad4ae8 --- /dev/null +++ b/tests/queries/0_stateless/01811_storage_buffer_flush_parameters.sql @@ -0,0 +1,22 @@ +drop table if exists data_01811; +drop table if exists buffer_01811; + +create table data_01811 (key Int) Engine=Memory(); +/* Buffer with flush_rows=10 */ +create table buffer_01811 (key Int) Engine=Buffer(currentDatabase(), data_01811, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0, /* max_bytes= */ 4e6, + /* flush_time= */ 86400, /* flush_rows= */ 10, /* flush_bytes= */0 +); + +insert into buffer_01811 select * from numbers(10); +insert into buffer_01811 select * from numbers(10); + +-- wait for background buffer flush +select sleep(3) format Null; +select count() from data_01811; + +drop table buffer_01811; +drop table data_01811; diff --git a/tests/queries/0_stateless/01812_basic_auth_http_server.reference b/tests/queries/0_stateless/01812_basic_auth_http_server.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01812_basic_auth_http_server.sh b/tests/queries/0_stateless/01812_basic_auth_http_server.sh new file mode 100755 index 00000000000..4b993137bbd --- /dev/null +++ b/tests/queries/0_stateless/01812_basic_auth_http_server.sh @@ -0,0 +1,19 @@ +#!/usr/bin/env bash +# shellcheck disable=SC2046 + +# In very old (e.g. 1.1.54385) versions of ClickHouse there was a bug in the Poco HTTP library: +# Basic HTTP authentication headers were not parsed if the size of the URL was exactly 4077 + something bytes. +# So, the user may get an authentication error even if valid credentials are passed. +# This is a minor issue because it does not have security implications (at worst the user will not be allowed access). + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +# In this test we do the opposite: we pass invalid credentials while the server accepts the default user without a password. +# And if the bug exists, they will be ignored (treated as empty credentials) and the query will succeed. 
+ +for i in {3950..4100}; do ${CLICKHOUSE_CURL} --user default:12345 "${CLICKHOUSE_URL}&query=SELECT+1"$(perl -e "print '+'x$i") | grep -v -F 'password' ||:; done + +# You can check that the bug exists in old versions by running the old server in Docker: +# docker run --network host -it --rm yandex/clickhouse-server:1.1.54385 diff --git a/tests/queries/0_stateless/01812_has_generic.reference b/tests/queries/0_stateless/01812_has_generic.reference new file mode 100644 index 00000000000..e8183f05f5d --- /dev/null +++ b/tests/queries/0_stateless/01812_has_generic.reference @@ -0,0 +1,3 @@ +1 +1 +1 diff --git a/tests/queries/0_stateless/01812_has_generic.sql b/tests/queries/0_stateless/01812_has_generic.sql new file mode 100644 index 00000000000..9ab5b655102 --- /dev/null +++ b/tests/queries/0_stateless/01812_has_generic.sql @@ -0,0 +1,3 @@ +SELECT has([(1, 2), (3, 4)], (toUInt16(3), 4)); +SELECT hasAny([(1, 2), (3, 4)], [(toUInt16(3), 4)]); +SELECT hasAll([(1, 2), (3, 4)], [(toNullable(1), toUInt64(2)), (toUInt16(3), 4)]); diff --git a/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.reference b/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.sql b/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.sql new file mode 100644 index 00000000000..c39947f2c04 --- /dev/null +++ b/tests/queries/0_stateless/01812_optimize_skip_unused_shards_single_node.sql @@ -0,0 +1,3 @@ +-- remote() does not have a sharding key, while force_optimize_skip_unused_shards=2 requires the table to have one. +-- But since there is only one node, everything works. +select * from remote('127.1', system.one) settings optimize_skip_unused_shards=1, force_optimize_skip_unused_shards=2 format Null; diff --git a/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.reference b/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.reference new file mode 100644 index 00000000000..5565ed6787f --- /dev/null +++ b/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.reference @@ -0,0 +1,4 @@ +0 +1 +0 +1 diff --git a/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.sql b/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.sql new file mode 100644 index 00000000000..722bd4af5bb --- /dev/null +++ b/tests/queries/0_stateless/01813_distributed_scalar_subqueries_alias.sql @@ -0,0 +1,18 @@ +DROP TABLE IF EXISTS data; +CREATE TABLE data (a Int64, b Int64) ENGINE = TinyLog(); + +DROP TABLE IF EXISTS data_distributed; +CREATE TABLE data_distributed (a Int64, b Int64) ENGINE = Distributed(test_shard_localhost, currentDatabase(), 'data'); + +INSERT INTO data VALUES (0, 0); + +SET prefer_localhost_replica = 1; +SELECT a / (SELECT sum(number) FROM numbers(10)) FROM data_distributed; +SELECT a < (SELECT 1) FROM data_distributed; + +SET prefer_localhost_replica = 0; +SELECT a / (SELECT sum(number) FROM numbers(10)) FROM data_distributed; +SELECT a < (SELECT 1) FROM data_distributed; + +DROP TABLE data_distributed; +DROP TABLE data; diff --git a/tests/queries/0_stateless/01817_storage_buffer_parameters.reference b/tests/queries/0_stateless/01817_storage_buffer_parameters.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01817_storage_buffer_parameters.sql b/tests/queries/0_stateless/01817_storage_buffer_parameters.sql new 
file mode 100644 index 00000000000..84727bc5d6b --- /dev/null +++ b/tests/queries/0_stateless/01817_storage_buffer_parameters.sql @@ -0,0 +1,42 @@ +drop table if exists data_01817; +drop table if exists buffer_01817; + +create table data_01817 (key Int) Engine=Null(); + +-- w/ flush_* +create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0, /* max_bytes= */ 4e6, + /* flush_time= */ 86400, /* flush_rows= */ 10, /* flush_bytes= */0 +); +drop table buffer_01817; + +-- w/o flush_* +create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0, /* max_bytes= */ 4e6 +); +drop table buffer_01817; + +-- not enough args +create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0 /* max_bytes= 4e6 */ +); -- { serverError 42 } +-- too much args +create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, + /* num_layers= */ 1, + /* min_time= */ 1, /* max_time= */ 86400, + /* min_rows= */ 1e9, /* max_rows= */ 1e6, + /* min_bytes= */ 0, /* max_bytes= */ 4e6, + /* flush_time= */ 86400, /* flush_rows= */ 10, /* flush_bytes= */0, + 0 +); -- { serverError 42 } + +drop table data_01817; diff --git a/tests/queries/0_stateless/01818_case_float_value_fangyc.reference b/tests/queries/0_stateless/01818_case_float_value_fangyc.reference new file mode 100644 index 00000000000..61780798228 --- /dev/null +++ b/tests/queries/0_stateless/01818_case_float_value_fangyc.reference @@ -0,0 +1 @@ +b diff --git a/tests/queries/0_stateless/01818_case_float_value_fangyc.sql b/tests/queries/0_stateless/01818_case_float_value_fangyc.sql new file mode 100644 index 00000000000..3cdb8503e64 --- /dev/null +++ b/tests/queries/0_stateless/01818_case_float_value_fangyc.sql @@ -0,0 +1 @@ +select case 1.1 when 0.1 then 'a' when 1.1 then 'b' when 2.1 then 'c' else 'default' end as f; diff --git a/tests/queries/0_stateless/01818_input_format_with_names_use_header.reference b/tests/queries/0_stateless/01818_input_format_with_names_use_header.reference new file mode 100644 index 00000000000..b7b577c4685 --- /dev/null +++ b/tests/queries/0_stateless/01818_input_format_with_names_use_header.reference @@ -0,0 +1,2 @@ +testdata1 +testdata2 diff --git a/tests/queries/0_stateless/01818_input_format_with_names_use_header.sh b/tests/queries/0_stateless/01818_input_format_with_names_use_header.sh new file mode 100755 index 00000000000..953c39a40a2 --- /dev/null +++ b/tests/queries/0_stateless/01818_input_format_with_names_use_header.sh @@ -0,0 +1,15 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +${CLICKHOUSE_CLIENT} -q "DROP TABLE IF EXISTS \`01818_with_names\`;" + +${CLICKHOUSE_CLIENT} -q "CREATE TABLE \`01818_with_names\` (t String) ENGINE = MergeTree ORDER BY t;" + +echo -ne "t\ntestdata1\ntestdata2" | ${CLICKHOUSE_CLIENT} --input_format_with_names_use_header 0 --query "INSERT INTO \`01818_with_names\` FORMAT CSVWithNames" + +${CLICKHOUSE_CLIENT} -q "SELECT * FROM \`01818_with_names\`;" + +${CLICKHOUSE_CLIENT} -q "DROP TABLE IF EXISTS \`01818_with_names\`;" diff --git a/tests/queries/0_stateless/01820_unhex_case_insensitive.reference b/tests/queries/0_stateless/01820_unhex_case_insensitive.reference new file mode 100644 index 00000000000..e692ee54787 --- /dev/null +++ b/tests/queries/0_stateless/01820_unhex_case_insensitive.reference @@ -0,0 +1 @@ +012 MySQL diff --git a/tests/queries/0_stateless/01820_unhex_case_insensitive.sql b/tests/queries/0_stateless/01820_unhex_case_insensitive.sql new file mode 100644 index 00000000000..99d8031eeda --- /dev/null +++ b/tests/queries/0_stateless/01820_unhex_case_insensitive.sql @@ -0,0 +1,2 @@ +-- MySQL has function `unhex`, so we will make our function `unhex` also case insensitive for compatibility. +SELECT unhex('303132'), UNHEX('4D7953514C'); diff --git a/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.reference b/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.reference new file mode 100644 index 00000000000..9833cbcc9b6 --- /dev/null +++ b/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.reference @@ -0,0 +1 @@ +1 20 diff --git a/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.sql b/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.sql new file mode 100644 index 00000000000..636426fcc91 --- /dev/null +++ b/tests/queries/0_stateless/01821_dictionary_primary_key_wrong_order.sql @@ -0,0 +1,24 @@ +DROP TABLE IF EXISTS dictionary_primary_key_source_table; +CREATE TABLE dictionary_primary_key_source_table +( + identifier UInt64, + v UInt64 +) ENGINE = TinyLog; + +INSERT INTO dictionary_primary_key_source_table VALUES (20, 1); + +DROP DICTIONARY IF EXISTS flat_dictionary; +CREATE DICTIONARY flat_dictionary +( + identifier UInt64, + v UInt64 +) +PRIMARY KEY v +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_primary_key_source_table')) +LIFETIME(MIN 1 MAX 1000) +LAYOUT(FLAT()); + +SELECT * FROM flat_dictionary; + +DROP DICTIONARY flat_dictionary; +DROP TABLE dictionary_primary_key_source_table; diff --git a/tests/queries/0_stateless/01821_to_date_time_ubsan.reference b/tests/queries/0_stateless/01821_to_date_time_ubsan.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01821_to_date_time_ubsan.sql b/tests/queries/0_stateless/01821_to_date_time_ubsan.sql new file mode 100644 index 00000000000..74226fc221f --- /dev/null +++ b/tests/queries/0_stateless/01821_to_date_time_ubsan.sql @@ -0,0 +1,2 @@ +SELECT toDateTime('9223372036854775806', 7); -- { serverError 407 } +SELECT toDateTime('9223372036854775806', 8); -- { serverError 407 } diff --git a/tests/queries/0_stateless/01822_async_read_from_socket_crash.reference b/tests/queries/0_stateless/01822_async_read_from_socket_crash.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01822_async_read_from_socket_crash.sh b/tests/queries/0_stateless/01822_async_read_from_socket_crash.sh new file mode 100755 index 00000000000..b4bb2228a91 --- /dev/null +++ 
b/tests/queries/0_stateless/01822_async_read_from_socket_crash.sh @@ -0,0 +1,9 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh + + + +for _ in {1..10}; do $CLICKHOUSE_CLIENT -q "select number from remote('127.0.0.{2,3}', numbers(20)) limit 8 settings max_block_size = 2, unknown_packet_in_send_data=4, sleep_in_send_data_ms=100, async_socket_for_remote=1 format Null" > /dev/null 2>&1 || true; done diff --git a/tests/queries/0_stateless/01822_union_and_constans_error.reference b/tests/queries/0_stateless/01822_union_and_constans_error.reference new file mode 100644 index 00000000000..d00491fd7e5 --- /dev/null +++ b/tests/queries/0_stateless/01822_union_and_constans_error.reference @@ -0,0 +1 @@ +1 diff --git a/tests/queries/0_stateless/01822_union_and_constans_error.sql b/tests/queries/0_stateless/01822_union_and_constans_error.sql new file mode 100644 index 00000000000..38b7df700cd --- /dev/null +++ b/tests/queries/0_stateless/01822_union_and_constans_error.sql @@ -0,0 +1,20 @@ +drop table if exists t0; +CREATE TABLE t0 (c0 String) ENGINE = Log(); + +SELECT isNull(t0.c0) OR COUNT('\n?pVa') +FROM t0 +GROUP BY t0.c0 +HAVING isNull(t0.c0) +UNION ALL +SELECT isNull(t0.c0) OR COUNT('\n?pVa') +FROM t0 +GROUP BY t0.c0 +HAVING NOT isNull(t0.c0) +UNION ALL +SELECT isNull(t0.c0) OR COUNT('\n?pVa') +FROM t0 +GROUP BY t0.c0 +HAVING isNull(isNull(t0.c0)) +SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0; + +drop table if exists t0; diff --git a/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.reference b/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.reference new file mode 100644 index 00000000000..2439021d2e0 --- /dev/null +++ b/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.reference @@ -0,0 +1 @@ +[['a'],['b','c']] diff --git a/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.sql b/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.sql new file mode 100644 index 00000000000..528a3b464b3 --- /dev/null +++ b/tests/queries/0_stateless/01823_array_low_cardinality_KuliginStepan.sql @@ -0,0 +1,7 @@ +create temporary table test ( + arr Array(Array(LowCardinality(String))) +); + +insert into test(arr) values ([['a'], ['b', 'c']]); + +select arrayFilter(x -> 1, arr) from test; diff --git a/tests/queries/0_stateless/01831_max_streams.reference b/tests/queries/0_stateless/01831_max_streams.reference new file mode 100644 index 00000000000..573541ac970 --- /dev/null +++ b/tests/queries/0_stateless/01831_max_streams.reference @@ -0,0 +1 @@ +0 diff --git a/tests/queries/0_stateless/01831_max_streams.sql b/tests/queries/0_stateless/01831_max_streams.sql new file mode 100644 index 00000000000..aa835dea5ac --- /dev/null +++ b/tests/queries/0_stateless/01831_max_streams.sql @@ -0,0 +1 @@ +select * from remote('127.1', system.one) settings max_distributed_connections=0; diff --git a/tests/queries/0_stateless/01833_test_collation_alvarotuso.reference b/tests/queries/0_stateless/01833_test_collation_alvarotuso.reference new file mode 100644 index 00000000000..c55134e07d3 --- /dev/null +++ b/tests/queries/0_stateless/01833_test_collation_alvarotuso.reference @@ -0,0 +1,6 @@ +a a +A A +b b +B B +c c +C C diff --git a/tests/queries/0_stateless/01833_test_collation_alvarotuso.sql b/tests/queries/0_stateless/01833_test_collation_alvarotuso.sql new file mode 100644 index 
00000000000..65422731711 --- /dev/null +++ b/tests/queries/0_stateless/01833_test_collation_alvarotuso.sql @@ -0,0 +1,21 @@ +DROP TABLE IF EXISTS test_collation; + +CREATE TABLE test_collation +( + `v` String, + `v2` String +) +ENGINE = MergeTree +ORDER BY v +SETTINGS index_granularity = 8192; + +insert into test_collation values ('A', 'A'); +insert into test_collation values ('B', 'B'); +insert into test_collation values ('C', 'C'); +insert into test_collation values ('a', 'a'); +insert into test_collation values ('b', 'b'); +insert into test_collation values ('c', 'c'); + +SELECT * FROM test_collation ORDER BY v ASC COLLATE 'en'; + +DROP TABLE test_collation; diff --git a/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.reference b/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.reference new file mode 100644 index 00000000000..7326d960397 --- /dev/null +++ b/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.reference @@ -0,0 +1 @@ +Ok diff --git a/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.sh b/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.sh new file mode 100755 index 00000000000..793f477b3cb --- /dev/null +++ b/tests/queries/0_stateless/01834_alias_columns_laziness_filimonov.sh @@ -0,0 +1,27 @@ +#!/usr/bin/env bash + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CUR_DIR"/../shell_config.sh + +${CLICKHOUSE_CLIENT} --multiquery --query " +drop table if exists aliases_lazyness; +create table aliases_lazyness (x UInt32, y ALIAS sleepEachRow(0.1)) Engine=MergeTree ORDER BY x; +insert into aliases_lazyness(x) select * from numbers(40); +" + +# In very old ClickHouse versions the alias column was calculated for every row. +# If it works this way, the query will take at least 0.1 * 40 = 4 seconds. +# If the issue does not exist, the query should take slightly more than 0.1 seconds. +# The exact time is not guaranteed, so we check in a loop that at least once +# the query is processed in less than one second, which proves that the behaviour is not like it was a long time ago.
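+# A note on the loop below (an added clarification, not part of the original test logic): `timeout 1` kills the +# client and exits with a non-zero status when the query takes longer than one second, so `&& break` only +# leaves the loop once the SELECT has completed within the time limit at least once.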
+ +while true +do + timeout 1 ${CLICKHOUSE_CLIENT} --query "SELECT x, y FROM aliases_lazyness WHERE x = 1 FORMAT Null" && break +done + +${CLICKHOUSE_CLIENT} --multiquery --query " +drop table aliases_lazyness; +SELECT 'Ok'; +" diff --git a/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.reference b/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.reference new file mode 100644 index 00000000000..1f49e6b362b --- /dev/null +++ b/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.reference @@ -0,0 +1,2 @@ +2017-12-15 1 1 +2017-12-15 1 1 diff --git a/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.sql b/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.sql new file mode 100644 index 00000000000..54ffb7b4c1f --- /dev/null +++ b/tests/queries/0_stateless/01835_alias_to_primary_key_cyfdecyf.sql @@ -0,0 +1,21 @@ +DROP TABLE IF EXISTS db; + +CREATE TABLE tb +( + date Date, + `index` Int32, + value Int32, + idx Int32 ALIAS `index` +) +ENGINE = MergeTree +PARTITION BY date +ORDER BY (date, `index`); + +insert into tb values ('2017-12-15', 1, 1); + +SET force_primary_key = 1; + +select * from tb where `index` >= 0 AND `index` <= 2; +select * from tb where idx >= 0 AND idx <= 2; + +DROP TABLE tb; diff --git a/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.reference b/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.reference new file mode 100644 index 00000000000..fc624e3510f --- /dev/null +++ b/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.reference @@ -0,0 +1,6 @@ +DateTime +DateTime +DateTime(\'UTC\') +DateTime64(3) +DateTime64(3) +DateTime64(3, \'UTC\') diff --git a/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.sql b/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.sql new file mode 100644 index 00000000000..be47cfb0411 --- /dev/null +++ b/tests/queries/0_stateless/01836_date_time_keep_default_timezone_on_operations_den_crane.sql @@ -0,0 +1,26 @@ +SELECT toTypeName(now()); +SELECT toTypeName(now() - 1); +SELECT toTypeName(now('UTC') - 1); + +SELECT toTypeName(now64(3)); +SELECT toTypeName(now64(3) - 1); +SELECT toTypeName(toTimeZone(now64(3), 'UTC') - 1); + +DROP TABLE IF EXISTS tt_null; +DROP TABLE IF EXISTS tt; +DROP TABLE IF EXISTS tt_mv; + +create table tt_null(p String) engine = Null; + +create table tt(p String,tmin AggregateFunction(min, DateTime)) +engine = AggregatingMergeTree order by p; + +create materialized view tt_mv to tt as +select p, minState(now() - interval 30 minute) as tmin +from tt_null group by p; + +insert into tt_null values('x'); + +DROP TABLE tt_null; +DROP TABLE tt; +DROP TABLE tt_mv; diff --git a/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.reference b/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.reference new file mode 100644 index 00000000000..c71bf50e82f --- /dev/null +++ b/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.reference @@ -0,0 +1,2 @@ +[] +[] diff --git a/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.sql b/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.sql new file mode 100644 index 00000000000..f3aa595f6d5 --- /dev/null +++ b/tests/queries/0_stateless/01837_cast_to_array_from_empty_array.sql @@ -0,0 +1,2 @@ +SELECT CAST([] AS Array(Array(String))); +SELECT CAST([] AS Array(Array(Array(String)))); diff --git 
a/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.reference b/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.reference new file mode 100644 index 00000000000..f0543d9221e --- /dev/null +++ b/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.reference @@ -0,0 +1,4 @@ +simple key +example_simple_key_dictionary UInt64 +complex key +example_complex_key_dictionary (UInt64, String) diff --git a/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.sql b/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.sql new file mode 100644 index 00000000000..97d96f643cf --- /dev/null +++ b/tests/queries/0_stateless/01838_system_dictionaries_virtual_key_column.sql @@ -0,0 +1,26 @@ +DROP DICTIONARY IF EXISTS example_simple_key_dictionary; +CREATE DICTIONARY example_simple_key_dictionary ( + id UInt64, + value UInt64 +) +PRIMARY KEY id +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE '' DATABASE currentDatabase())) +LAYOUT(DIRECT()); + +SELECT 'simple key'; + +SELECT name, key FROM system.dictionaries WHERE name='example_simple_key_dictionary' AND database=currentDatabase(); + +DROP DICTIONARY IF EXISTS example_complex_key_dictionary; +CREATE DICTIONARY example_complex_key_dictionary ( + id UInt64, + id_key String, + value UInt64 +) +PRIMARY KEY id, id_key +SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE '' DATABASE currentDatabase())) +LAYOUT(COMPLEX_KEY_DIRECT()); + +SELECT 'complex key'; + +SELECT name, key FROM system.dictionaries WHERE name='example_complex_key_dictionary' AND database=currentDatabase(); diff --git a/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.reference b/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.sql b/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.sql new file mode 100644 index 00000000000..6aeb71b8511 --- /dev/null +++ b/tests/queries/0_stateless/01845_add_testcase_for_arrayElement.sql @@ -0,0 +1,13 @@ +DROP TABLE IF EXISTS test; +CREATE TABLE test (`key` UInt32, `arr` ALIAS [1, 2], `xx` MATERIALIZED arr[1]) ENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); +DROP TABLE test; + +CREATE TABLE test (`key` UInt32, `arr` Array(UInt32) ALIAS [1, 2], `xx` MATERIALIZED arr[1]) ENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); +DROP TABLE test; + +CREATE TABLE test (`key` UInt32, `arr` Array(UInt32) ALIAS [1, 2], `xx` UInt32 MATERIALIZED arr[1]) ENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); +DROP TABLE test; + +CREATE TABLE test (`key` UInt32, `arr` ALIAS [1, 2]) ENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple(); +ALTER TABLE test ADD COLUMN `xx` UInt32 MATERIALIZED arr[1]; +DROP TABLE test; diff --git a/tests/queries/0_stateless/arcadia_skip_list.txt b/tests/queries/0_stateless/arcadia_skip_list.txt index 1b333a6baec..f7068c16edd 100644 --- a/tests/queries/0_stateless/arcadia_skip_list.txt +++ b/tests/queries/0_stateless/arcadia_skip_list.txt @@ -91,6 +91,7 @@ 01125_dict_ddl_cannot_add_column 01129_dict_get_join_lose_constness 01138_join_on_distributed_and_tmp +01153_attach_mv_uuid 01191_rename_dictionary 01200_mutations_memory_consumption 01211_optimize_skip_unused_shards_type_mismatch @@ -224,3 +225,10 @@ 01306_polygons_intersection 01702_system_query_log 01759_optimize_skip_unused_shards_zero_shards 
+01780_clickhouse_dictionary_source_loop +01790_dist_INSERT_block_structure_mismatch_types_and_names +01791_dist_INSERT_block_structure_mismatch +01801_distinct_group_by_shard +01804_dictionary_decimal256_type +01801_s3_distributed +01833_test_collation_alvarotuso diff --git a/tests/queries/1_stateful/00163_column_oriented_formats.reference b/tests/queries/1_stateful/00163_column_oriented_formats.reference new file mode 100644 index 00000000000..cb20aca4392 --- /dev/null +++ b/tests/queries/1_stateful/00163_column_oriented_formats.reference @@ -0,0 +1,12 @@ +Parquet +6b397d4643bc1f920f3eb8aa87ee180c - +7fe6d8c57ddc5fe37bbdcb7f73c5fa78 - +d8746733270cbeff7ab3550c9b944fb6 - +Arrow +6b397d4643bc1f920f3eb8aa87ee180c - +7fe6d8c57ddc5fe37bbdcb7f73c5fa78 - +d8746733270cbeff7ab3550c9b944fb6 - +ORC +6b397d4643bc1f920f3eb8aa87ee180c - +7fe6d8c57ddc5fe37bbdcb7f73c5fa78 - +d8746733270cbeff7ab3550c9b944fb6 - diff --git a/tests/queries/1_stateful/00163_column_oriented_formats.sh b/tests/queries/1_stateful/00163_column_oriented_formats.sh new file mode 100755 index 00000000000..1363ccf3c00 --- /dev/null +++ b/tests/queries/1_stateful/00163_column_oriented_formats.sh @@ -0,0 +1,20 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh + + +FORMATS=('Parquet' 'Arrow' 'ORC') + +for format in "${FORMATS[@]}" +do + echo $format + $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS 00163_column_oriented SYNC" + $CLICKHOUSE_CLIENT -q "CREATE TABLE 00163_column_oriented(ClientEventTime DateTime, MobilePhoneModel String, ClientIP6 FixedString(16)) ENGINE=File($format)" + $CLICKHOUSE_CLIENT -q "INSERT INTO 00163_column_oriented SELECT ClientEventTime, MobilePhoneModel, ClientIP6 FROM test.hits ORDER BY ClientEventTime, MobilePhoneModel, ClientIP6 LIMIT 100" + $CLICKHOUSE_CLIENT -q "SELECT ClientEventTime from 00163_column_oriented" | md5sum + $CLICKHOUSE_CLIENT -q "SELECT MobilePhoneModel from 00163_column_oriented" | md5sum + $CLICKHOUSE_CLIENT -q "SELECT ClientIP6 from 00163_column_oriented" | md5sum + $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS 00163_column_oriented SYNC" +done diff --git a/tests/queries/query_test.py b/tests/queries/query_test.py index b747ac2944e..6ebeccbeeac 100644 --- a/tests/queries/query_test.py +++ b/tests/queries/query_test.py @@ -1,5 +1,3 @@ -import pytest - import difflib import os import random @@ -7,6 +5,8 @@ import string import subprocess import sys +import pytest + SKIP_LIST = [ # these couple of tests hangs everything @@ -14,44 +14,63 @@ SKIP_LIST = [ "00987_distributed_stack_overflow", # just fail + "00133_long_shard_memory_tracker_and_exception_safety", "00505_secure", "00505_shard_secure", "00646_url_engine", "00725_memory_tracking", # BROKEN + "00738_lock_for_inner_table", + "00821_distributed_storage_with_join_on", + "00825_protobuf_format_array_3dim", + "00825_protobuf_format_array_of_arrays", + "00825_protobuf_format_enum_mapping", + "00825_protobuf_format_nested_in_nested", + "00825_protobuf_format_nested_optional", + "00825_protobuf_format_no_length_delimiter", + "00825_protobuf_format_persons", + "00825_protobuf_format_squares", + "00825_protobuf_format_table_default", "00834_cancel_http_readonly_queries_on_client_close", + "00877_memory_limit_for_new_delete", + "00900_parquet_load", "00933_test_fix_extra_seek_on_compressed_cache", "00965_logs_level_bugfix", "00965_send_logs_level_concurrent_queries", + "00974_query_profiler", "00990_hasToken", "00990_metric_log_table_not_empty", 
"01014_lazy_database_concurrent_recreate_reattach_and_show_tables", + "01017_uniqCombined_memory_usage", "01018_Distributed__shard_num", "01018_ip_dictionary_long", + "01035_lc_empty_part_bug", # FLAKY "01050_clickhouse_dict_source_with_subquery", "01053_ssd_dictionary", "01054_cache_dictionary_overflow_cell", "01057_http_compression_prefer_brotli", "01080_check_for_error_incorrect_size_of_nested_column", "01083_expressions_in_engine_arguments", - # "01086_odbc_roundtrip", + "01086_odbc_roundtrip", "01088_benchmark_query_id", + "01092_memory_profiler", "01098_temporary_and_external_tables", "01099_parallel_distributed_insert_select", "01103_check_cpu_instructions_at_startup", "01114_database_atomic", "01148_zookeeper_path_macros_unfolding", + "01175_distributed_ddl_output_mode_long", "01181_db_atomic_drop_on_cluster", # tcp port in reference "01280_ssd_complex_key_dictionary", "01293_client_interactive_vertical_multiline", # expect-test "01293_client_interactive_vertical_singleline", # expect-test - "01293_system_distribution_queue", # FLAKY "01293_show_clusters", + "01293_show_settings", + "01293_system_distribution_queue", # FLAKY "01294_lazy_database_concurrent_recreate_reattach_and_show_tables_long", "01294_system_distributed_on_cluster", "01300_client_save_history_when_terminated", # expect-test "01304_direct_io", "01306_benchmark_json", - "01035_lc_empty_part_bug", # FLAKY "01320_create_sync_race_condition_zookeeper", "01355_CSV_input_format_allow_errors", "01370_client_autocomplete_word_break_characters", # expect-test @@ -66,18 +85,33 @@ SKIP_LIST = [ "01514_distributed_cancel_query_on_error", "01520_client_print_query_id", # expect-test "01526_client_start_and_exit", # expect-test + "01526_max_untracked_memory", "01527_dist_sharding_key_dictGet_reload", + "01528_play", "01545_url_file_format_settings", "01553_datetime64_comparison", "01555_system_distribution_queue_mask", "01558_ttest_scipy", "01561_mann_whitney_scipy", "01582_distinct_optimization", + "01591_window_functions", "01599_multiline_input_and_singleline_comments", # expect-test "01601_custom_tld", + "01606_git_import", "01610_client_spawn_editor", # expect-test + "01658_read_file_to_stringcolumn", + "01666_merge_tree_max_query_limit", + "01674_unicode_asan", "01676_clickhouse_client_autocomplete", # expect-test (partially) "01683_text_log_deadlock", # secure tcp + "01684_ssd_cache_dictionary_simple_key", + "01685_ssd_cache_dictionary_complex_key", + "01746_executable_pool_dictionary", + "01747_executable_pool_dictionary_implicit_key", + "01747_join_view_filter_dictionary", + "01748_dictionary_table_dot", + "01754_cluster_all_replicas_shard_num", + "01759_optimize_skip_unused_shards_zero_shards", ] diff --git a/tests/queries/shell_config.sh b/tests/queries/shell_config.sh index 5b942a95d02..ea7fa2e7921 100644 --- a/tests/queries/shell_config.sh +++ b/tests/queries/shell_config.sh @@ -23,14 +23,21 @@ export CLICKHOUSE_TEST_ZOOKEEPER_PREFIX="${CLICKHOUSE_TEST_NAME}_${CLICKHOUSE_DA [ -v CLICKHOUSE_LOG_COMMENT ] && CLICKHOUSE_BENCHMARK_OPT0+=" --log_comment='${CLICKHOUSE_LOG_COMMENT}' " export CLICKHOUSE_BINARY=${CLICKHOUSE_BINARY:="clickhouse"} +# client [ -x "$CLICKHOUSE_BINARY-client" ] && CLICKHOUSE_CLIENT_BINARY=${CLICKHOUSE_CLIENT_BINARY:=$CLICKHOUSE_BINARY-client} [ -x "$CLICKHOUSE_BINARY" ] && CLICKHOUSE_CLIENT_BINARY=${CLICKHOUSE_CLIENT_BINARY:=$CLICKHOUSE_BINARY client} export CLICKHOUSE_CLIENT_BINARY=${CLICKHOUSE_CLIENT_BINARY:=$CLICKHOUSE_BINARY-client} export CLICKHOUSE_CLIENT_OPT="${CLICKHOUSE_CLIENT_OPT0:-} 
${CLICKHOUSE_CLIENT_OPT:-}" export CLICKHOUSE_CLIENT=${CLICKHOUSE_CLIENT:="$CLICKHOUSE_CLIENT_BINARY ${CLICKHOUSE_CLIENT_OPT:-}"} +# local [ -x "${CLICKHOUSE_BINARY}-local" ] && CLICKHOUSE_LOCAL=${CLICKHOUSE_LOCAL:="${CLICKHOUSE_BINARY}-local"} [ -x "${CLICKHOUSE_BINARY}" ] && CLICKHOUSE_LOCAL=${CLICKHOUSE_LOCAL:="${CLICKHOUSE_BINARY} local"} export CLICKHOUSE_LOCAL=${CLICKHOUSE_LOCAL:="${CLICKHOUSE_BINARY}-local"} +# server +[ -x "${CLICKHOUSE_BINARY}-server" ] && CLICKHOUSE_SERVER_BINARY=${CLICKHOUSE_SERVER_BINARY:="${CLICKHOUSE_BINARY}-server"} +[ -x "${CLICKHOUSE_BINARY}" ] && CLICKHOUSE_SERVER_BINARY=${CLICKHOUSE_SERVER_BINARY:="${CLICKHOUSE_BINARY} server"} +export CLICKHOUSE_SERVER_BINARY=${CLICKHOUSE_SERVER_BINARY:="${CLICKHOUSE_BINARY}-server"} +# others export CLICKHOUSE_OBFUSCATOR=${CLICKHOUSE_OBFUSCATOR:="${CLICKHOUSE_BINARY}-obfuscator"} export CLICKHOUSE_COMPRESSOR=${CLICKHOUSE_COMPRESSOR:="${CLICKHOUSE_BINARY}-compressor"} export CLICKHOUSE_BENCHMARK=${CLICKHOUSE_BENCHMARK:="${CLICKHOUSE_BINARY}-benchmark ${CLICKHOUSE_BENCHMARK_OPT0:-}"} @@ -63,6 +70,8 @@ export CLICKHOUSE_PORT_HTTPS=${CLICKHOUSE_PORT_HTTPS:="8443"} export CLICKHOUSE_PORT_HTTP_PROTO=${CLICKHOUSE_PORT_HTTP_PROTO:="http"} export CLICKHOUSE_PORT_MYSQL=${CLICKHOUSE_PORT_MYSQL:=$(${CLICKHOUSE_EXTRACT_CONFIG} --try --key=mysql_port 2>/dev/null)} 2>/dev/null export CLICKHOUSE_PORT_MYSQL=${CLICKHOUSE_PORT_MYSQL:="9004"} +export CLICKHOUSE_PORT_POSTGRESQL=${CLICKHOUSE_PORT_POSTGRESQL:=$(${CLICKHOUSE_EXTRACT_CONFIG} --try --key=postgresql_port 2>/dev/null)} 2>/dev/null +export CLICKHOUSE_PORT_POSTGRESQL=${CLICKHOUSE_PORT_POSTGRESQL:="9005"} # Add database and log comment to url params if [ -v CLICKHOUSE_URL_PARAMS ] diff --git a/tests/queries/skip_list.json b/tests/queries/skip_list.json index 4759fb95602..08a66c7499d 100644 --- a/tests/queries/skip_list.json +++ b/tests/queries/skip_list.json @@ -105,7 +105,8 @@ "00604_show_create_database", "00609_mv_index_in_in", "00510_materizlized_view_and_deduplication_zookeeper", - "00738_lock_for_inner_table" + "00738_lock_for_inner_table", + "01153_attach_mv_uuid" ], "database-replicated": [ "memory_tracking", @@ -148,6 +149,7 @@ "00626_replace_partition_from_table", "00152_insert_different_granularity", "00054_merge_tree_partitions", + "01781_merge_tree_deduplication", /// Old syntax is not allowed "01062_alter_on_mutataion_zookeeper", "00925_zookeeper_empty_replicated_merge_tree_optimize_final", @@ -389,7 +391,10 @@ "01655_plan_optimizations", "01475_read_subcolumns_storages", "01674_clickhouse_client_query_param_cte", - "01666_merge_tree_max_query_limit" + "01666_merge_tree_max_query_limit", + "01786_explain_merge_tree", + "01666_merge_tree_max_query_limit", + "01802_test_postgresql_protocol_with_row_policy" /// It cannot parse DROP ROW POLICY ], "parallel": [ @@ -556,6 +561,8 @@ "01135_default_and_alter_zookeeper", "01148_zookeeper_path_macros_unfolding", "01150_ddl_guard_rwr", + "01153_attach_mv_uuid", + "01152_cross_replication", "01185_create_or_replace_table", "01190_full_attach_syntax", "01191_rename_dictionary", @@ -675,6 +682,8 @@ "01760_polygon_dictionaries", "01760_system_dictionaries", "01761_alter_decimal_zookeeper", + "01360_materialized_view_with_join_on_query_log", // creates and drops MVs on query_log, which may interrupt flushes. + "01509_parallel_quorum_insert_no_replicas", // It's ok to execute in parallel with oter tests but not several instances of the same test. 
"attach", "ddl_dictionaries", "dictionary", @@ -693,8 +702,13 @@ "01682_cache_dictionary_complex_key", "01684_ssd_cache_dictionary_simple_key", "01685_ssd_cache_dictionary_complex_key", + "01737_clickhouse_server_wait_server_pool_long", // This test is fully compatible to run in parallel, however under ASAN processes are pretty heavy and may fail under flaky adress check. "01760_system_dictionaries", "01760_polygon_dictionaries", - "01778_hierarchical_dictionaries" + "01778_hierarchical_dictionaries", + "01780_clickhouse_dictionary_source_loop", + "01785_dictionary_element_count", + "01802_test_postgresql_protocol_with_row_policy", /// Creates database and users + "01804_dictionary_decimal256_type" ] } diff --git a/tests/testflows/map_type/__init__.py b/tests/testflows/map_type/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/testflows/map_type/configs/clickhouse/config.d/logs.xml b/tests/testflows/map_type/configs/clickhouse/config.d/logs.xml new file mode 100644 index 00000000000..e5077af3f49 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse/config.d/logs.xml @@ -0,0 +1,16 @@ + + + trace + /var/log/clickhouse-server/log.log + /var/log/clickhouse-server/log.err.log + 1000M + 10 + /var/log/clickhouse-server/stderr.log + /var/log/clickhouse-server/stdout.log + + + system + part_log
+ 500 +
+
diff --git a/tests/testflows/map_type/configs/clickhouse/config.d/macros.xml b/tests/testflows/map_type/configs/clickhouse/config.d/macros.xml new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/testflows/map_type/configs/clickhouse/config.d/remote.xml b/tests/testflows/map_type/configs/clickhouse/config.d/remote.xml new file mode 100644 index 00000000000..b7d02ceeec1 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse/config.d/remote.xml @@ -0,0 +1,42 @@ + + + + + + true + + clickhouse1 + 9000 + + + clickhouse2 + 9000 + + + clickhouse3 + 9000 + + + + + + + clickhouse1 + 9000 + + + + + clickhouse2 + 9000 + + + + + clickhouse3 + 9000 + + + + + diff --git a/tests/testflows/map_type/configs/clickhouse/config.d/zookeeper.xml b/tests/testflows/map_type/configs/clickhouse/config.d/zookeeper.xml new file mode 100644 index 00000000000..96270e7b645 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse/config.d/zookeeper.xml @@ -0,0 +1,10 @@ + + + + + zookeeper + 2181 + + 15000 + + diff --git a/tests/testflows/map_type/configs/clickhouse/config.xml b/tests/testflows/map_type/configs/clickhouse/config.xml new file mode 100644 index 00000000000..4ec12232539 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse/config.xml @@ -0,0 +1,448 @@ + + + + + + trace + /var/log/clickhouse-server/clickhouse-server.log + /var/log/clickhouse-server/clickhouse-server.err.log + 1000M + 10 + + + + 8123 + 9000 + + + + + + + + + /etc/clickhouse-server/server.crt + /etc/clickhouse-server/server.key + + /etc/clickhouse-server/dhparam.pem + none + true + true + sslv2,sslv3 + true + + + + true + true + sslv2,sslv3 + true + + + + RejectCertificateHandler + + + + + + + + + 9009 + + + + + + + + 0.0.0.0 + + + + + + + + + + + + 4096 + 3 + + + 100 + + + + + + 8589934592 + + + 5368709120 + + + + /var/lib/clickhouse/ + + + /var/lib/clickhouse/tmp/ + + + /var/lib/clickhouse/user_files/ + + + /var/lib/clickhouse/access/ + + + + + + users.xml + + + + /var/lib/clickhouse/access/ + + + + + users.xml + + + default + + + + + + default + + + + + + + + + false + + + + + + + + localhost + 9000 + + + + + + + localhost + 9000 + + + + + localhost + 9000 + + + + + + + localhost + 9440 + 1 + + + + + + + localhost + 9000 + + + + + localhost + 1 + + + + + + + + + + + + + + + + + 3600 + + + + 3600 + + + 60 + + + + + + + + + + system + query_log
+ + toYYYYMM(event_date) + + 7500 +
+ + + + system + trace_log
+ + toYYYYMM(event_date) + 7500 +
+ + + + system + query_thread_log
+ toYYYYMM(event_date) + 7500 +
+ + + + + + + + + + + + + + + + *_dictionary.xml + + + + + + + + + + /clickhouse/task_queue/ddl + + + + + + + + + + + + + + + + click_cost + any + + 0 + 3600 + + + 86400 + 60 + + + + max + + 0 + 60 + + + 3600 + 300 + + + 86400 + 3600 + + + + + + /var/lib/clickhouse/format_schemas/ + + + +
diff --git a/tests/testflows/map_type/configs/clickhouse/users.xml b/tests/testflows/map_type/configs/clickhouse/users.xml new file mode 100644 index 00000000000..86b2cd9e1e3 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse/users.xml @@ -0,0 +1,133 @@ + + + + + + + + 10000000000 + + + 0 + + + random + + + + + 1 + + + + + + + + + + + + + ::/0 + + + + default + + + default + + + 1 + + + + + + + + + + + + + + + + + 3600 + + + 0 + 0 + 0 + 0 + 0 + + + + diff --git a/tests/testflows/map_type/configs/clickhouse1/config.d/macros.xml b/tests/testflows/map_type/configs/clickhouse1/config.d/macros.xml new file mode 100644 index 00000000000..6cdcc1b440c --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse1/config.d/macros.xml @@ -0,0 +1,8 @@ + + + + clickhouse1 + 01 + 01 + + diff --git a/tests/testflows/map_type/configs/clickhouse2/config.d/macros.xml b/tests/testflows/map_type/configs/clickhouse2/config.d/macros.xml new file mode 100644 index 00000000000..a114a9ce4ab --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse2/config.d/macros.xml @@ -0,0 +1,8 @@ + + + + clickhouse2 + 01 + 02 + + diff --git a/tests/testflows/map_type/configs/clickhouse3/config.d/macros.xml b/tests/testflows/map_type/configs/clickhouse3/config.d/macros.xml new file mode 100644 index 00000000000..904a27b0172 --- /dev/null +++ b/tests/testflows/map_type/configs/clickhouse3/config.d/macros.xml @@ -0,0 +1,8 @@ + + + + clickhouse3 + 01 + 03 + + diff --git a/tests/testflows/map_type/docker-compose/clickhouse-service.yml b/tests/testflows/map_type/docker-compose/clickhouse-service.yml new file mode 100755 index 00000000000..fdd4a8057a9 --- /dev/null +++ b/tests/testflows/map_type/docker-compose/clickhouse-service.yml @@ -0,0 +1,27 @@ +version: '2.3' + +services: + clickhouse: + image: yandex/clickhouse-integration-test + expose: + - "9000" + - "9009" + - "8123" + volumes: + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/config.d:/etc/clickhouse-server/config.d" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/users.d:/etc/clickhouse-server/users.d" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/config.xml:/etc/clickhouse-server/config.xml" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse/users.xml:/etc/clickhouse-server/users.xml" + - "${CLICKHOUSE_TESTS_SERVER_BIN_PATH:-/usr/bin/clickhouse}:/usr/bin/clickhouse" + - "${CLICKHOUSE_TESTS_ODBC_BRIDGE_BIN_PATH:-/usr/bin/clickhouse-odbc-bridge}:/usr/bin/clickhouse-odbc-bridge" + entrypoint: bash -c "clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log" + healthcheck: + test: clickhouse client --query='select 1' + interval: 10s + timeout: 10s + retries: 3 + start_period: 300s + cap_add: + - SYS_PTRACE + security_opt: + - label:disable diff --git a/tests/testflows/map_type/docker-compose/docker-compose.yml b/tests/testflows/map_type/docker-compose/docker-compose.yml new file mode 100755 index 00000000000..29f2ef52470 --- /dev/null +++ b/tests/testflows/map_type/docker-compose/docker-compose.yml @@ -0,0 +1,60 @@ +version: '2.3' + +services: + zookeeper: + extends: + file: zookeeper-service.yml + service: zookeeper + + clickhouse1: + extends: + file: clickhouse-service.yml + service: clickhouse + hostname: clickhouse1 + volumes: + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse1/database/:/var/lib/clickhouse/" + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse1/logs/:/var/log/clickhouse-server/" + - 
"${CLICKHOUSE_TESTS_DIR}/configs/clickhouse1/config.d/macros.xml:/etc/clickhouse-server/config.d/macros.xml" + depends_on: + zookeeper: + condition: service_healthy + + clickhouse2: + extends: + file: clickhouse-service.yml + service: clickhouse + hostname: clickhouse2 + volumes: + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse2/database/:/var/lib/clickhouse/" + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse2/logs/:/var/log/clickhouse-server/" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse2/config.d/macros.xml:/etc/clickhouse-server/config.d/macros.xml" + depends_on: + zookeeper: + condition: service_healthy + + clickhouse3: + extends: + file: clickhouse-service.yml + service: clickhouse + hostname: clickhouse3 + volumes: + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse3/database/:/var/lib/clickhouse/" + - "${CLICKHOUSE_TESTS_DIR}/_instances/clickhouse3/logs/:/var/log/clickhouse-server/" + - "${CLICKHOUSE_TESTS_DIR}/configs/clickhouse3/config.d/macros.xml:/etc/clickhouse-server/config.d/macros.xml" + depends_on: + zookeeper: + condition: service_healthy + + # dummy service which does nothing, but allows to postpone + # 'docker-compose up -d' till all dependecies will go healthy + all_services_ready: + image: hello-world + depends_on: + clickhouse1: + condition: service_healthy + clickhouse2: + condition: service_healthy + clickhouse3: + condition: service_healthy + zookeeper: + condition: service_healthy diff --git a/tests/testflows/map_type/docker-compose/zookeeper-service.yml b/tests/testflows/map_type/docker-compose/zookeeper-service.yml new file mode 100755 index 00000000000..f3df33358be --- /dev/null +++ b/tests/testflows/map_type/docker-compose/zookeeper-service.yml @@ -0,0 +1,18 @@ +version: '2.3' + +services: + zookeeper: + image: zookeeper:3.4.12 + expose: + - "2181" + environment: + ZOO_TICK_TIME: 500 + ZOO_MY_ID: 1 + healthcheck: + test: echo stat | nc localhost 2181 + interval: 3s + timeout: 2s + retries: 5 + start_period: 2s + security_opt: + - label:disable diff --git a/tests/testflows/map_type/regression.py b/tests/testflows/map_type/regression.py new file mode 100755 index 00000000000..54d713347c6 --- /dev/null +++ b/tests/testflows/map_type/regression.py @@ -0,0 +1,121 @@ +#!/usr/bin/env python3 +import sys + +from testflows.core import * + +append_path(sys.path, "..") + +from helpers.cluster import Cluster +from helpers.argparser import argparser +from map_type.requirements import SRS018_ClickHouse_Map_Data_Type + +xfails = { + "tests/table map with key integer/Int:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21032")], + "tests/table map with value integer/Int:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21032")], + "tests/table map with key integer/UInt256": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21031")], + "tests/table map with value integer/UInt256": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21031")], + "tests/select map with key integer/Int64": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21030")], + "tests/select map with value integer/Int64": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21030")], + "tests/cast tuple of two arrays to map/string -> int": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21029")], + "tests/mapcontains/null key in map": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21028")], + "tests/mapcontains/null key not in map": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21028")], + "tests/mapkeys/null key 
not in map": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21028")], + "tests/mapkeys/null key in map": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21028")], + "tests/mapcontains/select nullable key": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21026")], + "tests/mapkeys/select keys from column": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21026")], + "tests/table map select key with value string/LowCardinality:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key string/FixedString": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key string/Nullable": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key string/Nullable(NULL)": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21026")], + "tests/table map select key with key string/LowCardinality:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key integer/Int:": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21032")], + "tests/table map select key with key integer/UInt256": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21031")], + "tests/table map select key with key integer/toNullable": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21406")], + "tests/table map select key with key integer/toNullable(NULL)": + [(Fail, "https://github.com/ClickHouse/ClickHouse/issues/21026")], + "tests/select map with key integer/Int128": + [(Fail, "large Int128 as key not supported")], + "tests/select map with key integer/Int256": + [(Fail, "large Int256 as key not supported")], + "tests/select map with key integer/UInt256": + [(Fail, "large UInt256 as key not supported")], + "tests/select map with key integer/toNullable": + [(Fail, "Nullable type as key not supported")], + "tests/select map with key integer/toNullable(NULL)": + [(Fail, "Nullable type as key not supported")], + "tests/select map with key string/Nullable": + [(Fail, "Nullable type as key not supported")], + "tests/select map with key string/Nullable(NULL)": + [(Fail, "Nullable type as key not supported")], + "tests/table map queries/select map with nullable value": + [(Fail, "Nullable value not supported")], + "tests/table map with key integer/toNullable": + [(Fail, "Nullable type as key not supported")], + "tests/table map with key integer/toNullable(NULL)": + [(Fail, "Nullable type as key not supported")], + "tests/table map with key string/Nullable": + [(Fail, "Nullable type as key not supported")], + "tests/table map with key string/Nullable(NULL)": + [(Fail, "Nullable type as key not supported")], + "tests/table map with key string/LowCardinality(String)": + [(Fail, "LowCardinality(String) as key not supported")], + "tests/table map with key string/LowCardinality(String) cast from String": + [(Fail, "LowCardinality(String) as key not supported")], + "tests/table map with key string/LowCardinality(String) for key and value": + [(Fail, "LowCardinality(String) as key not supported")], + "tests/table map with key string/LowCardinality(FixedString)": + [(Fail, "LowCardinality(FixedString) as key not supported")], + "tests/table map with value string/LowCardinality(String) for key and value": + [(Fail, "LowCardinality(String) as key not supported")], +} + +xflags = { +} + +@TestModule +@ArgumentParser(argparser) +@XFails(xfails) +@XFlags(xflags) +@Name("map type") 
+@Specifications( + SRS018_ClickHouse_Map_Data_Type +) +def regression(self, local, clickhouse_binary_path, stress=None, parallel=None): + """Map type regression. + """ + nodes = { + "clickhouse": + ("clickhouse1", "clickhouse2", "clickhouse3") + } + with Cluster(local, clickhouse_binary_path, nodes=nodes) as cluster: + self.context.cluster = cluster + self.context.stress = stress + + if parallel is not None: + self.context.parallel = parallel + + Feature(run=load("map_type.tests.feature", "feature")) + +if main(): + regression() diff --git a/tests/testflows/map_type/requirements/__init__.py b/tests/testflows/map_type/requirements/__init__.py new file mode 100644 index 00000000000..02f7d430154 --- /dev/null +++ b/tests/testflows/map_type/requirements/__init__.py @@ -0,0 +1 @@ +from .requirements import * diff --git a/tests/testflows/map_type/requirements/requirements.md b/tests/testflows/map_type/requirements/requirements.md new file mode 100644 index 00000000000..f19f5a7f7bd --- /dev/null +++ b/tests/testflows/map_type/requirements/requirements.md @@ -0,0 +1,512 @@ +# SRS018 ClickHouse Map Data Type +# Software Requirements Specification + +## Table of Contents + +* 1 [Revision History](#revision-history) +* 2 [Introduction](#introduction) +* 3 [Requirements](#requirements) + * 3.1 [General](#general) + * 3.1.1 [RQ.SRS-018.ClickHouse.Map.DataType](#rqsrs-018clickhousemapdatatype) + * 3.2 [Performance](#performance) + * 3.2.1 [RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples](#rqsrs-018clickhousemapdatatypeperformancevsarrayoftuples) + * 3.2.2 [RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays](#rqsrs-018clickhousemapdatatypeperformancevstupleofarrays) + * 3.3 [Key Types](#key-types) + * 3.3.1 [RQ.SRS-018.ClickHouse.Map.DataType.Key.String](#rqsrs-018clickhousemapdatatypekeystring) + * 3.3.2 [RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer](#rqsrs-018clickhousemapdatatypekeyinteger) + * 3.4 [Value Types](#value-types) + * 3.4.1 [RQ.SRS-018.ClickHouse.Map.DataType.Value.String](#rqsrs-018clickhousemapdatatypevaluestring) + * 3.4.2 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer](#rqsrs-018clickhousemapdatatypevalueinteger) + * 3.4.3 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Array](#rqsrs-018clickhousemapdatatypevaluearray) + * 3.5 [Invalid Types](#invalid-types) + * 3.5.1 [RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable](#rqsrs-018clickhousemapdatatypeinvalidnullable) + * 3.5.2 [RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing](#rqsrs-018clickhousemapdatatypeinvalidnothingnothing) + * 3.6 [Duplicated Keys](#duplicated-keys) + * 3.6.1 [RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys](#rqsrs-018clickhousemapdatatypeduplicatedkeys) + * 3.7 [Array of Maps](#array-of-maps) + * 3.7.1 [RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps](#rqsrs-018clickhousemapdatatypearrayofmaps) + * 3.8 [Nested With Maps](#nested-with-maps) + * 3.8.1 [RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps](#rqsrs-018clickhousemapdatatypenestedwithmaps) + * 3.9 [Value Retrieval](#value-retrieval) + * 3.9.1 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval](#rqsrs-018clickhousemapdatatypevalueretrieval) + * 3.9.2 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid](#rqsrs-018clickhousemapdatatypevalueretrievalkeyinvalid) + * 3.9.3 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound](#rqsrs-018clickhousemapdatatypevalueretrievalkeynotfound) + * 3.10 [Converting Tuple(Array, Array) to Map](#converting-tuplearray-array-to-map) + * 3.10.1 
[RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap](#rqsrs-018clickhousemapdatatypeconversionfromtupleofarraystomap) + * 3.10.2 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid](#rqsrs-018clickhousemapdatatypeconversionfromtupleofarraysmapinvalid) + * 3.11 [Converting Array(Tuple(K,V)) to Map](#converting-arraytuplekv-to-map) + * 3.11.1 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap](#rqsrs-018clickhousemapdatatypeconversionfromarrayoftuplestomap) + * 3.11.2 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid](#rqsrs-018clickhousemapdatatypeconversionfromarrayoftuplestomapinvalid) + * 3.12 [Keys and Values Subcolumns](#keys-and-values-subcolumns) + * 3.12.1 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys](#rqsrs-018clickhousemapdatatypesubcolumnskeys) + * 3.12.2 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions](#rqsrs-018clickhousemapdatatypesubcolumnskeysarrayfunctions) + * 3.12.3 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap](#rqsrs-018clickhousemapdatatypesubcolumnskeysinlinedefinedmap) + * 3.12.4 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values](#rqsrs-018clickhousemapdatatypesubcolumnsvalues) + * 3.12.5 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions](#rqsrs-018clickhousemapdatatypesubcolumnsvaluesarrayfunctions) + * 3.12.6 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap](#rqsrs-018clickhousemapdatatypesubcolumnsvaluesinlinedefinedmap) + * 3.13 [Functions](#functions) + * 3.13.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap](#rqsrs-018clickhousemapdatatypefunctionsinlinedefinedmap) + * 3.13.2 [`length`](#length) + * 3.13.2.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length](#rqsrs-018clickhousemapdatatypefunctionslength) + * 3.13.3 [`empty`](#empty) + * 3.13.3.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty](#rqsrs-018clickhousemapdatatypefunctionsempty) + * 3.13.4 [`notEmpty`](#notempty) + * 3.13.4.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty](#rqsrs-018clickhousemapdatatypefunctionsnotempty) + * 3.13.5 [`map`](#map) + * 3.13.5.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map](#rqsrs-018clickhousemapdatatypefunctionsmap) + * 3.13.5.2 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments](#rqsrs-018clickhousemapdatatypefunctionsmapinvalidnumberofarguments) + * 3.13.5.3 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes](#rqsrs-018clickhousemapdatatypefunctionsmapmixedkeyorvaluetypes) + * 3.13.5.4 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd](#rqsrs-018clickhousemapdatatypefunctionsmapmapadd) + * 3.13.5.5 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract](#rqsrs-018clickhousemapdatatypefunctionsmapmapsubstract) + * 3.13.5.6 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries](#rqsrs-018clickhousemapdatatypefunctionsmapmappopulateseries) + * 3.13.6 [`mapContains`](#mapcontains) + * 3.13.6.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains](#rqsrs-018clickhousemapdatatypefunctionsmapcontains) + * 3.13.7 [`mapKeys`](#mapkeys) + * 3.13.7.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys](#rqsrs-018clickhousemapdatatypefunctionsmapkeys) + * 3.13.8 [`mapValues`](#mapvalues) + * 3.13.8.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues](#rqsrs-018clickhousemapdatatypefunctionsmapvalues) + +## Revision History + +This document is stored in an electronic form 
using [Git] source control management software +hosted in a [GitHub Repository]. +All the updates are tracked using the [Revision History]. + +## Introduction + +This software requirements specification covers requirements for `Map(key, value)` data type in [ClickHouse]. + +## Requirements + +### General + +#### RQ.SRS-018.ClickHouse.Map.DataType +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type that stores `key:value` pairs. + +### Performance + +#### RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples +version:1.0 + +[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as +compared to `Array(Tuple(K,V))` data type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays +version:1.0 + +[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as +compared to `Tuple(Array(String), Array(String))` data type where the first +array defines an array of keys and the second array defines an array of values. + +### Key Types + +#### RQ.SRS-018.ClickHouse.Map.DataType.Key.String +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where key is of a [String] type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where key is of an [Integer] type. + +### Value Types + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.String +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [String] type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [Integer] type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Array +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [Array] type. + +### Invalid Types + +#### RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable +version: 1.0 + +[ClickHouse] SHALL not support creating table columns that have `Nullable(Map(key, value))` data type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing +version: 1.0 + +[ClickHouse] SHALL not support creating table columns that have `Map(Nothing, Nothing))` data type. + +### Duplicated Keys + +#### RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys +version: 1.0 + +[ClickHouse] MAY support `Map(key, value)` data type with duplicated keys. + +### Array of Maps + +#### RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps +version: 1.0 + +[ClickHouse] SHALL support `Array(Map(key, value))` data type. + +### Nested With Maps + +#### RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps +version: 1.0 + +[ClickHouse] SHALL support defining `Map(key, value)` data type inside the [Nested] data type. + +### Value Retrieval + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval +version: 1.0 + +[ClickHouse] SHALL support getting the value from a `Map(key, value)` data type using `map[key]` syntax. +If `key` has duplicates then the first `key:value` pair MAY be returned. + +For example, + +```sql +SELECT a['key2'] FROM table_map; +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid +version: 1.0 + +[ClickHouse] SHALL return an error when key does not match the key type. 
+ +For example, + +```sql +SELECT map(1,2) AS m, m[1024] +``` + +Exceptions: + +* when key is `NULL` the return value MAY be `NULL` +* when key value is not valid for the key type, for example it is out of range for [Integer] type, + when reading from a table column it MAY return the default value for key data type + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound +version: 1.0 + +[ClickHouse] SHALL return default value for the data type of the value +when there's no corresponding `key` defined in the `Map(key, value)` data type. + + +### Converting Tuple(Array, Array) to Map + +#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap +version: 1.0 + +[ClickHouse] SHALL support converting [Tuple(Array, Array)] to `Map(key, value)` using the [CAST] function. + +``` sql +SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map; +``` + +``` text +┌─map───────────────────────────┐ +│ {1:'Ready',2:'Steady',3:'Go'} │ +└───────────────────────────────┘ +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid +version: 1.0 + +[ClickHouse] MAY return an error when casting [Tuple(Array, Array)] to `Map(key, value)` + +* when arrays are not of equal size + + For example, + + ```sql + SELECT CAST(([2, 1, 1023], ['', '']), 'Map(UInt8, String)') AS map, map[10] + ``` + +### Converting Array(Tuple(K,V)) to Map + +#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap +version: 1.0 + +[ClickHouse] SHALL support converting [Array(Tuple(K,V))] to `Map(key, value)` using the [CAST] function. + +For example, + +```sql +SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid +version: 1.0 + +[ClickHouse] MAY return an error when casting [Array(Tuple(K, V))] to `Map(key, value)` + +* when element is not a [Tuple] + + ```sql + SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map + ``` + +* when [Tuple] does not contain two elements + + ```sql + SELECT CAST(([(1,2),(3,)]), 'Map(UInt8, UInt8)') AS map + ``` + +### Keys and Values Subcolumns + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys +version: 1.0 + +[ClickHouse] SHALL support `keys` subcolumn in the `Map(key, value)` type that can be used +to retrieve an [Array] of map keys. + +```sql +SELECT m.keys FROM t_map; +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions +version: 1.0 + +[ClickHouse] SHALL support applying [Array] functions to the `keys` subcolumn in the `Map(key, value)` type. + +For example, + +```sql +SELECT * FROM t_map WHERE has(m.keys, 'a'); +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap +version: 1.0 + +[ClickHouse] MAY not support using inline defined map to get `keys` subcolumn. + +For example, + +```sql +SELECT map( 'aa', 4, '44' , 5) as c, c.keys +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values +version: 1.0 + +[ClickHouse] SHALL support `values` subcolumn in the `Map(key, value)` type that can be used +to retrieve an [Array] of map values. + +```sql +SELECT m.values FROM t_map; +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions +version: 1.0 + +[ClickHouse] SHALL support applying [Array] functions to the `values` subcolumn in the `Map(key, value)` type. 
+ +For example, + +```sql +SELECT * FROM t_map WHERE has(m.values, 'a'); +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap +version: 1.0 + +[ClickHouse] MAY not support using inline defined map to get `values` subcolumn. + +For example, + +```sql +SELECT map( 'aa', 4, '44' , 5) as c, c.values +``` + +### Functions + +#### RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap +version: 1.0 + +[ClickHouse] SHALL support using inline defined maps as an argument to map functions. + +For example, + +```sql +SELECT map( 'aa', 4, '44' , 5) as c, mapKeys(c) +SELECT map( 'aa', 4, '44' , 5) as c, mapValues(c) +``` + +#### `length` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [length] function +that SHALL return the number of keys in the map. + +For example, + +```sql +SELECT length(map(1,2,3,4)) +SELECT length(map()) +``` + +#### `empty` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [empty] function +that SHALL return 1 if the number of keys in the map is 0; otherwise, if the number of keys is +greater than or equal to 1, it SHALL return 0. + +For example, + +```sql +SELECT empty(map(1,2,3,4)) +SELECT empty(map()) +``` + +#### `notEmpty` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [notEmpty] function +that SHALL return 0 if the number of keys in the map is 0; otherwise, if the number of keys is +greater than or equal to 1, it SHALL return 1. + +For example, + +```sql +SELECT notEmpty(map(1,2,3,4)) +SELECT notEmpty(map()) +``` + +#### `map` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map +version: 1.0 + +[ClickHouse] SHALL support arranging `key, value` pairs into `Map(key, value)` data type +using the `map` function. + +**Syntax** + +``` sql +map(key1, value1[, key2, value2, ...]) +``` + +For example, + +``` sql +SELECT map('key1', number, 'key2', number * 2) FROM numbers(3); + +┌─map('key1', number, 'key2', multiply(number, 2))─┐ +│ {'key1':0,'key2':0} │ +│ {'key1':1,'key2':2} │ +│ {'key1':2,'key2':4} │ +└──────────────────────────────────────────────────┘ +``` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments +version: 1.0 + +[ClickHouse] SHALL return an error when the `map` function is called with a non-even number of arguments. + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes +version: 1.0 + +[ClickHouse] SHALL return an error when the `map` function is called with mixed key or value types. + + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd +version: 1.0 + +[ClickHouse] SHALL support converting the results of the `mapAdd` function to a `Map(key, value)` data type. + +For example, + +``` sql +SELECT CAST(mapAdd(([toUInt8(1), 2], [1, 1]), ([toUInt8(1), 2], [1, 1])), "Map(Int8,Int8)") +``` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract +version: 1.0 + +[ClickHouse] SHALL support converting the results of the `mapSubtract` function to a `Map(key, value)` data type.
+ +For example, + +```sql +SELECT CAST(mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt32(2), 1])), "Map(Int8,Int8)") +``` +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries +version: 1.0 + +[ClickHouse] SHALL support converting the results of the `mapPopulateSeries` function to a `Map(key, value)` data type. + +For example, + +```sql +SELECT CAST(mapPopulateSeries([1,2,4], [11,22,44], 5), "Map(Int8,Int8)") +``` + +#### `mapContains` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains +version: 1.0 + +[ClickHouse] SHALL support `mapContains(map, key)` function to check whether `map.keys` contains the `key`. + +For example, + +```sql +SELECT mapContains(a, 'abc') from table_map; +``` + +#### `mapKeys` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys +version: 1.0 + +[ClickHouse] SHALL support `mapKeys(map)` function to return all the map keys in the [Array] format. + +For example, + +```sql +SELECT mapKeys(a) from table_map; +``` + +#### `mapValues` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues +version: 1.0 + +[ClickHouse] SHALL support `mapValues(map)` function to return all the map values in the [Array] format. + +For example, + +```sql +SELECT mapValues(a) from table_map; +``` + +[Nested]: https://clickhouse.tech/docs/en/sql-reference/data-types/nested-data-structures/nested/ +[length]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#array_functions-length +[empty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-empty +[notEmpty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-notempty +[CAST]: https://clickhouse.tech/docs/en/sql-reference/functions/type-conversion-functions/#type_conversion_function-cast +[Tuple]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/ +[Tuple(Array, Array)]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/ +[Array]: https://clickhouse.tech/docs/en/sql-reference/data-types/array/ +[String]: https://clickhouse.tech/docs/en/sql-reference/data-types/string/ +[Integer]: https://clickhouse.tech/docs/en/sql-reference/data-types/int-uint/ +[ClickHouse]: https://clickhouse.tech +[GitHub Repository]: https://github.com/ClickHouse/ClickHouse/blob/master/tests/testflows/map_type/requirements/requirements.md +[Revision History]: https://github.com/ClickHouse/ClickHouse/commits/master/tests/testflows/map_type/requirements/requirements.md +[Git]: https://git-scm.com/ +[GitHub]: https://github.com diff --git a/tests/testflows/map_type/requirements/requirements.py b/tests/testflows/map_type/requirements/requirements.py new file mode 100644 index 00000000000..24e8abdf15f --- /dev/null +++ b/tests/testflows/map_type/requirements/requirements.py @@ -0,0 +1,1427 @@ +# These requirements were auto generated +# from software requirements specification (SRS) +# document by TestFlows v1.6.210226.1200017. +# Do not edit by hand but re-generate instead +# using 'tfs requirements generate' command.
+from testflows.core import Specification +from testflows.core import Requirement + +Heading = Specification.Heading + +RQ_SRS_018_ClickHouse_Map_DataType = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Map(key, value)` data type that stores `key:value` pairs.\n' + '\n' + ), + link=None, + level=3, + num='3.1.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_ArrayOfTuples = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as\n' + 'compared to `Array(Tuple(K,V))` data type.\n' + '\n' + ), + link=None, + level=3, + num='3.2.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_TupleOfArrays = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as\n' + 'compared to `Tuple(Array(String), Array(String))` data type where the first\n' + 'array defines an array of keys and the second array defines an array of values.\n' + '\n' + ), + link=None, + level=3, + num='3.2.2') + +RQ_SRS_018_ClickHouse_Map_DataType_Key_String = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Key.String', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Map(key, value)` data type where key is of a [String] type.\n' + '\n' + ), + link=None, + level=3, + num='3.3.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Map(key, value)` data type where key is of an [Integer] type.\n' + '\n' + ), + link=None, + level=3, + num='3.3.2') + +RQ_SRS_018_ClickHouse_Map_DataType_Value_String = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Value.String', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [String] type.\n' + '\n' + ), + link=None, + level=3, + num='3.4.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Value_Integer = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [Integer] type.\n' + '\n' + ), + link=None, + level=3, + num='3.4.2') + +RQ_SRS_018_ClickHouse_Map_DataType_Value_Array = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Array', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [Array] type.\n' + '\n' + ), + link=None, + level=3, + num='3.4.3') + +RQ_SRS_018_ClickHouse_Map_DataType_Invalid_Nullable = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL not support creating table columns that have `Nullable(Map(key, value))` data 
type.\n' + '\n' + ), + link=None, + level=3, + num='3.5.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Invalid_NothingNothing = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL not support creating table columns that have `Map(Nothing, Nothing))` data type.\n' + '\n' + ), + link=None, + level=3, + num='3.5.2') + +RQ_SRS_018_ClickHouse_Map_DataType_DuplicatedKeys = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] MAY support `Map(key, value)` data type with duplicated keys.\n' + '\n' + ), + link=None, + level=3, + num='3.6.1') + +RQ_SRS_018_ClickHouse_Map_DataType_ArrayOfMaps = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Array(Map(key, value))` data type.\n' + '\n' + ), + link=None, + level=3, + num='3.7.1') + +RQ_SRS_018_ClickHouse_Map_DataType_NestedWithMaps = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support defining `Map(key, value)` data type inside the [Nested] data type.\n' + '\n' + ), + link=None, + level=3, + num='3.8.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support getting the value from a `Map(key, value)` data type using `map[key]` syntax.\n' + 'If `key` has duplicates then the first `key:value` pair MAY be returned. \n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT a['key2'] FROM table_map;\n" + '```\n' + '\n' + ), + link=None, + level=3, + num='3.9.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyInvalid = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when key does not match the key type.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT map(1,2) AS m, m[1024]\n' + '```\n' + '\n' + 'Exceptions:\n' + '\n' + '* when key is `NULL` the return value MAY be `NULL`\n' + '* when key value is not valid for the key type, for example it is out of range for [Integer] type, \n' + ' when reading from a table column it MAY return the default value for key data type\n' + '\n' + ), + link=None, + level=3, + num='3.9.2') + +RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyNotFound = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return default value for the data type of the value\n' + "when there's no corresponding `key` defined in the `Map(key, value)` data type. 
\n" + '\n' + '\n' + ), + link=None, + level=3, + num='3.9.3') + +RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysToMap = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support converting [Tuple(Array, Array)] to `Map(key, value)` using the [CAST] function.\n' + '\n' + '``` sql\n' + "SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map;\n" + '```\n' + '\n' + '``` text\n' + '┌─map───────────────────────────┐\n' + "│ {1:'Ready',2:'Steady',3:'Go'} │\n" + '└───────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.10.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysMap_Invalid = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] MAY return an error when casting [Tuple(Array, Array)] to `Map(key, value)`\n' + '\n' + '* when arrays are not of equal size\n' + '\n' + ' For example,\n' + '\n' + ' ```sql\n' + " SELECT CAST(([2, 1, 1023], ['', '']), 'Map(UInt8, String)') AS map, map[10]\n" + ' ```\n' + '\n' + ), + link=None, + level=3, + num='3.10.2') + +RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support converting [Array(Tuple(K,V))] to `Map(key, value)` using the [CAST] function.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map\n" + '```\n' + '\n' + ), + link=None, + level=3, + num='3.11.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap_Invalid = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] MAY return an error when casting [Array(Tuple(K, V))] to `Map(key, value)`\n' + '\n' + '* when element is not a [Tuple]\n' + '\n' + ' ```sql\n' + " SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map\n" + ' ```\n' + '\n' + '* when [Tuple] does not contain two elements\n' + '\n' + ' ```sql\n' + " SELECT CAST(([(1,2),(3,)]), 'Map(UInt8, UInt8)') AS map\n" + ' ```\n' + '\n' + ), + link=None, + level=3, + num='3.11.2') + +RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `keys` subcolumn in the `Map(key, value)` type that can be used \n' + 'to retrieve an [Array] of map keys.\n' + '\n' + '```sql\n' + 'SELECT m.keys FROM t_map;\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.12.1') + +RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_ArrayFunctions = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support applying [Array] functions to the `keys` subcolumn in the `Map(key, value)` type.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT * FROM t_map WHERE has(m.keys, 'a');\n" + '```\n' 
+ '\n' + ), + link=None, + level=3, + num='3.12.2') + +RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_InlineDefinedMap = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] MAY not support using inline defined map to get `keys` subcolumn.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT map( 'aa', 4, '44' , 5) as c, c.keys\n" + '```\n' + '\n' + ), + link=None, + level=3, + num='3.12.3') + +RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `values` subcolumn in the `Map(key, value)` type that can be used \n' + 'to retrieve an [Array] of map values.\n' + '\n' + '```sql\n' + 'SELECT m.values FROM t_map;\n' + '```\n' + '\n' + ), + link=None, + level=3, + num='3.12.4') + +RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_ArrayFunctions = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support applying [Array] functions to the `values` subcolumn in the `Map(key, value)` type.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT * FROM t_map WHERE has(m.values, 'a');\n" + '```\n' + '\n' + ), + link=None, + level=3, + num='3.12.5') + +RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_InlineDefinedMap = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] MAY not support using inline defined map to get `values` subcolumn.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT map( 'aa', 4, '44' , 5) as c, c.values\n" + '```\n' + '\n' + ), + link=None, + level=3, + num='3.12.6') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_InlineDefinedMap = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support using inline defined maps as an argument to map functions.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT map( 'aa', 4, '44' , 5) as c, mapKeys(c)\n" + "SELECT map( 'aa', 4, '44' , 5) as c, mapValues(c)\n" + '```\n' + '\n' + ), + link=None, + level=3, + num='3.13.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_Length = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [length] function\n' + 'that SHALL return number of keys in the map.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT length(map(1,2,3,4))\n' + 'SELECT length(map())\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.13.2.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_Empty = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [empty] function\n' + 'that SHALL return 1 if number of keys in the map is 0 otherwise if the 
number of keys is \n' + 'greater or equal to 1 it SHALL return 0.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT empty(map(1,2,3,4))\n' + 'SELECT empty(map())\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.13.3.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_NotEmpty = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [notEmpty] function\n' + 'that SHALL return 0 if number if keys in the map is 0 otherwise if the number of keys is\n' + 'greater or equal to 1 it SHALL return 1.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT notEmpty(map(1,2,3,4))\n' + 'SELECT notEmpty(map())\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.13.4.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support arranging `key, value` pairs into `Map(key, value)` data type\n' + 'using `map` function.\n' + '\n' + '**Syntax** \n' + '\n' + '``` sql\n' + 'map(key1, value1[, key2, value2, ...])\n' + '```\n' + '\n' + 'For example,\n' + '\n' + '``` sql\n' + "SELECT map('key1', number, 'key2', number * 2) FROM numbers(3);\n" + '\n' + "┌─map('key1', number, 'key2', multiply(number, 2))─┐\n" + "│ {'key1':0,'key2':0} │\n" + "│ {'key1':1,'key2':2} │\n" + "│ {'key1':2,'key2':4} │\n" + '└──────────────────────────────────────────────────┘\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.13.5.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_InvalidNumberOfArguments = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `map` function is called with non even number of arguments.\n' + '\n' + ), + link=None, + level=4, + num='3.13.5.2') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MixedKeyOrValueTypes = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL return an error when `map` function is called with mixed key or value types.\n' + '\n' + '\n' + ), + link=None, + level=4, + num='3.13.5.3') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapAdd = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support converting the results of `mapAdd` function to a `Map(key, value)` data type.\n' + '\n' + 'For example,\n' + '\n' + '``` sql\n' + 'SELECT CAST(mapAdd(([toUInt8(1), 2], [1, 1]), ([toUInt8(1), 2], [1, 1])), "Map(Int8,Int8)")\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.13.5.4') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapSubstract = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support converting the results of `mapSubstract` function to a `Map(key, value)` data type.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT 
CAST(mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt32(2), 1])), "Map(Int8,Int8)")\n' + '```\n' + ), + link=None, + level=4, + num='3.13.5.5') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapPopulateSeries = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support converting the results of `mapPopulateSeries` function to a `Map(key, value)` data type.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT CAST(mapPopulateSeries([1,2,4], [11,22,44], 5), "Map(Int8,Int8)")\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.13.5.6') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapContains = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `mapContains(map, key)` function to check weather `map.keys` contains the `key`.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + "SELECT mapContains(a, 'abc') from table_map;\n" + '```\n' + '\n' + ), + link=None, + level=4, + num='3.13.6.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapKeys = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `mapKeys(map)` function to return all the map keys in the [Array] format.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT mapKeys(a) from table_map;\n' + '```\n' + '\n' + ), + link=None, + level=4, + num='3.13.7.1') + +RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapValues = Requirement( + name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues', + version='1.0', + priority=None, + group=None, + type=None, + uid=None, + description=( + '[ClickHouse] SHALL support `mapValues(map)` function to return all the map values in the [Array] format.\n' + '\n' + 'For example,\n' + '\n' + '```sql\n' + 'SELECT mapValues(a) from table_map;\n' + '```\n' + '\n' + '[Nested]: https://clickhouse.tech/docs/en/sql-reference/data-types/nested-data-structures/nested/\n' + '[length]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#array_functions-length\n' + '[empty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-empty\n' + '[notEmpty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-notempty\n' + '[CAST]: https://clickhouse.tech/docs/en/sql-reference/functions/type-conversion-functions/#type_conversion_function-cast\n' + '[Tuple]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/\n' + '[Tuple(Array,Array)]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/\n' + '[Array]: https://clickhouse.tech/docs/en/sql-reference/data-types/array/ \n' + '[String]: https://clickhouse.tech/docs/en/sql-reference/data-types/string/\n' + '[Integer]: https://clickhouse.tech/docs/en/sql-reference/data-types/int-uint/\n' + '[ClickHouse]: https://clickhouse.tech\n' + '[GitHub Repository]: https://github.com/ClickHouse/ClickHouse/blob/master/tests/testflows/map_type/requirements/requirements.md \n' + '[Revision History]: https://github.com/ClickHouse/ClickHouse/commits/master/tests/testflows/map_type/requirements/requirements.md\n' + '[Git]: https://git-scm.com/\n' + '[GitHub]: https://github.com\n' + ), + link=None, + level=4, + 
num='3.13.8.1') + +SRS018_ClickHouse_Map_Data_Type = Specification( + name='SRS018 ClickHouse Map Data Type', + description=None, + author=None, + date=None, + status=None, + approved_by=None, + approved_date=None, + approved_version=None, + version=None, + group=None, + type=None, + link=None, + uid=None, + parent=None, + children=None, + headings=( + Heading(name='Revision History', level=1, num='1'), + Heading(name='Introduction', level=1, num='2'), + Heading(name='Requirements', level=1, num='3'), + Heading(name='General', level=2, num='3.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType', level=3, num='3.1.1'), + Heading(name='Performance', level=2, num='3.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples', level=3, num='3.2.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays', level=3, num='3.2.2'), + Heading(name='Key Types', level=2, num='3.3'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Key.String', level=3, num='3.3.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer', level=3, num='3.3.2'), + Heading(name='Value Types', level=2, num='3.4'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.String', level=3, num='3.4.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer', level=3, num='3.4.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Array', level=3, num='3.4.3'), + Heading(name='Invalid Types', level=2, num='3.5'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable', level=3, num='3.5.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing', level=3, num='3.5.2'), + Heading(name='Duplicated Keys', level=2, num='3.6'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys', level=3, num='3.6.1'), + Heading(name='Array of Maps', level=2, num='3.7'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps', level=3, num='3.7.1'), + Heading(name='Nested With Maps', level=2, num='3.8'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps', level=3, num='3.8.1'), + Heading(name='Value Retrieval', level=2, num='3.9'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval', level=3, num='3.9.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid', level=3, num='3.9.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound', level=3, num='3.9.3'), + Heading(name='Converting Tuple(Array, Array) to Map', level=2, num='3.10'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap', level=3, num='3.10.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid', level=3, num='3.10.2'), + Heading(name='Converting Array(Tuple(K,V)) to Map', level=2, num='3.11'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap', level=3, num='3.11.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid', level=3, num='3.11.2'), + Heading(name='Keys and Values Subcolumns', level=2, num='3.12'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys', level=3, num='3.12.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions', level=3, num='3.12.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap', level=3, num='3.12.3'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values', level=3, num='3.12.4'), 
+ Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions', level=3, num='3.12.5'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap', level=3, num='3.12.6'), + Heading(name='Functions', level=2, num='3.13'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap', level=3, num='3.13.1'), + Heading(name='`length`', level=3, num='3.13.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length', level=4, num='3.13.2.1'), + Heading(name='`empty`', level=3, num='3.13.3'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty', level=4, num='3.13.3.1'), + Heading(name='`notEmpty`', level=3, num='3.13.4'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty', level=4, num='3.13.4.1'), + Heading(name='`map`', level=3, num='3.13.5'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map', level=4, num='3.13.5.1'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments', level=4, num='3.13.5.2'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes', level=4, num='3.13.5.3'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd', level=4, num='3.13.5.4'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract', level=4, num='3.13.5.5'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries', level=4, num='3.13.5.6'), + Heading(name='`mapContains`', level=3, num='3.13.6'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains', level=4, num='3.13.6.1'), + Heading(name='`mapKeys`', level=3, num='3.13.7'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys', level=4, num='3.13.7.1'), + Heading(name='`mapValues`', level=3, num='3.13.8'), + Heading(name='RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues', level=4, num='3.13.8.1'), + ), + requirements=( + RQ_SRS_018_ClickHouse_Map_DataType, + RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_ArrayOfTuples, + RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_TupleOfArrays, + RQ_SRS_018_ClickHouse_Map_DataType_Key_String, + RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer, + RQ_SRS_018_ClickHouse_Map_DataType_Value_String, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Integer, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Array, + RQ_SRS_018_ClickHouse_Map_DataType_Invalid_Nullable, + RQ_SRS_018_ClickHouse_Map_DataType_Invalid_NothingNothing, + RQ_SRS_018_ClickHouse_Map_DataType_DuplicatedKeys, + RQ_SRS_018_ClickHouse_Map_DataType_ArrayOfMaps, + RQ_SRS_018_ClickHouse_Map_DataType_NestedWithMaps, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyInvalid, + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyNotFound, + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysToMap, + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysMap_Invalid, + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap, + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap_Invalid, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_ArrayFunctions, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_InlineDefinedMap, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values, + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_ArrayFunctions, + 
RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_InlineDefinedMap, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_InlineDefinedMap, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Length, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Empty, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_NotEmpty, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_InvalidNumberOfArguments, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MixedKeyOrValueTypes, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapAdd, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapSubstract, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapPopulateSeries, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapContains, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapKeys, + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapValues, + ), + content=''' +# SRS018 ClickHouse Map Data Type +# Software Requirements Specification + +## Table of Contents + +* 1 [Revision History](#revision-history) +* 2 [Introduction](#introduction) +* 3 [Requirements](#requirements) + * 3.1 [General](#general) + * 3.1.1 [RQ.SRS-018.ClickHouse.Map.DataType](#rqsrs-018clickhousemapdatatype) + * 3.2 [Performance](#performance) + * 3.2.1 [RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples](#rqsrs-018clickhousemapdatatypeperformancevsarrayoftuples) + * 3.2.2 [RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays](#rqsrs-018clickhousemapdatatypeperformancevstupleofarrays) + * 3.3 [Key Types](#key-types) + * 3.3.1 [RQ.SRS-018.ClickHouse.Map.DataType.Key.String](#rqsrs-018clickhousemapdatatypekeystring) + * 3.3.2 [RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer](#rqsrs-018clickhousemapdatatypekeyinteger) + * 3.4 [Value Types](#value-types) + * 3.4.1 [RQ.SRS-018.ClickHouse.Map.DataType.Value.String](#rqsrs-018clickhousemapdatatypevaluestring) + * 3.4.2 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer](#rqsrs-018clickhousemapdatatypevalueinteger) + * 3.4.3 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Array](#rqsrs-018clickhousemapdatatypevaluearray) + * 3.5 [Invalid Types](#invalid-types) + * 3.5.1 [RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable](#rqsrs-018clickhousemapdatatypeinvalidnullable) + * 3.5.2 [RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing](#rqsrs-018clickhousemapdatatypeinvalidnothingnothing) + * 3.6 [Duplicated Keys](#duplicated-keys) + * 3.6.1 [RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys](#rqsrs-018clickhousemapdatatypeduplicatedkeys) + * 3.7 [Array of Maps](#array-of-maps) + * 3.7.1 [RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps](#rqsrs-018clickhousemapdatatypearrayofmaps) + * 3.8 [Nested With Maps](#nested-with-maps) + * 3.8.1 [RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps](#rqsrs-018clickhousemapdatatypenestedwithmaps) + * 3.9 [Value Retrieval](#value-retrieval) + * 3.9.1 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval](#rqsrs-018clickhousemapdatatypevalueretrieval) + * 3.9.2 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid](#rqsrs-018clickhousemapdatatypevalueretrievalkeyinvalid) + * 3.9.3 [RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound](#rqsrs-018clickhousemapdatatypevalueretrievalkeynotfound) + * 3.10 [Converting Tuple(Array, Array) to Map](#converting-tuplearray-array-to-map) + * 3.10.1 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap](#rqsrs-018clickhousemapdatatypeconversionfromtupleofarraystomap) + * 3.10.2 
[RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid](#rqsrs-018clickhousemapdatatypeconversionfromtupleofarraysmapinvalid) + * 3.11 [Converting Array(Tuple(K,V)) to Map](#converting-arraytuplekv-to-map) + * 3.11.1 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap](#rqsrs-018clickhousemapdatatypeconversionfromarrayoftuplestomap) + * 3.11.2 [RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid](#rqsrs-018clickhousemapdatatypeconversionfromarrayoftuplestomapinvalid) + * 3.12 [Keys and Values Subcolumns](#keys-and-values-subcolumns) + * 3.12.1 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys](#rqsrs-018clickhousemapdatatypesubcolumnskeys) + * 3.12.2 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions](#rqsrs-018clickhousemapdatatypesubcolumnskeysarrayfunctions) + * 3.12.3 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap](#rqsrs-018clickhousemapdatatypesubcolumnskeysinlinedefinedmap) + * 3.12.4 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values](#rqsrs-018clickhousemapdatatypesubcolumnsvalues) + * 3.12.5 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions](#rqsrs-018clickhousemapdatatypesubcolumnsvaluesarrayfunctions) + * 3.12.6 [RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap](#rqsrs-018clickhousemapdatatypesubcolumnsvaluesinlinedefinedmap) + * 3.13 [Functions](#functions) + * 3.13.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap](#rqsrs-018clickhousemapdatatypefunctionsinlinedefinedmap) + * 3.13.2 [`length`](#length) + * 3.13.2.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length](#rqsrs-018clickhousemapdatatypefunctionslength) + * 3.13.3 [`empty`](#empty) + * 3.13.3.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty](#rqsrs-018clickhousemapdatatypefunctionsempty) + * 3.13.4 [`notEmpty`](#notempty) + * 3.13.4.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty](#rqsrs-018clickhousemapdatatypefunctionsnotempty) + * 3.13.5 [`map`](#map) + * 3.13.5.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map](#rqsrs-018clickhousemapdatatypefunctionsmap) + * 3.13.5.2 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments](#rqsrs-018clickhousemapdatatypefunctionsmapinvalidnumberofarguments) + * 3.13.5.3 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes](#rqsrs-018clickhousemapdatatypefunctionsmapmixedkeyorvaluetypes) + * 3.13.5.4 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd](#rqsrs-018clickhousemapdatatypefunctionsmapmapadd) + * 3.13.5.5 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract](#rqsrs-018clickhousemapdatatypefunctionsmapmapsubstract) + * 3.13.5.6 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries](#rqsrs-018clickhousemapdatatypefunctionsmapmappopulateseries) + * 3.13.6 [`mapContains`](#mapcontains) + * 3.13.6.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains](#rqsrs-018clickhousemapdatatypefunctionsmapcontains) + * 3.13.7 [`mapKeys`](#mapkeys) + * 3.13.7.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys](#rqsrs-018clickhousemapdatatypefunctionsmapkeys) + * 3.13.8 [`mapValues`](#mapvalues) + * 3.13.8.1 [RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues](#rqsrs-018clickhousemapdatatypefunctionsmapvalues) + +## Revision History + +This document is stored in an electronic form using [Git] source control management software +hosted in a [GitHub Repository]. +All the updates are tracked using the [Revision History]. 
+ +## Introduction + +This software requirements specification covers requirements for `Map(key, value)` data type in [ClickHouse]. + +## Requirements + +### General + +#### RQ.SRS-018.ClickHouse.Map.DataType +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type that stores `key:value` pairs. + +### Performance + +#### RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.ArrayOfTuples +version:1.0 + +[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as +compared to `Array(Tuple(K,V))` data type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Performance.Vs.TupleOfArrays +version:1.0 + +[ClickHouse] SHALL provide comparable performance for `Map(key, value)` data type as +compared to `Tuple(Array(String), Array(String))` data type where the first +array defines an array of keys and the second array defines an array of values. + +### Key Types + +#### RQ.SRS-018.ClickHouse.Map.DataType.Key.String +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where key is of a [String] type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Key.Integer +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where key is of an [Integer] type. + +### Value Types + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.String +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [String] type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Integer +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [Integer] type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Array +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type where value is of a [Array] type. + +### Invalid Types + +#### RQ.SRS-018.ClickHouse.Map.DataType.Invalid.Nullable +version: 1.0 + +[ClickHouse] SHALL not support creating table columns that have `Nullable(Map(key, value))` data type. + +#### RQ.SRS-018.ClickHouse.Map.DataType.Invalid.NothingNothing +version: 1.0 + +[ClickHouse] SHALL not support creating table columns that have `Map(Nothing, Nothing))` data type. + +### Duplicated Keys + +#### RQ.SRS-018.ClickHouse.Map.DataType.DuplicatedKeys +version: 1.0 + +[ClickHouse] MAY support `Map(key, value)` data type with duplicated keys. + +### Array of Maps + +#### RQ.SRS-018.ClickHouse.Map.DataType.ArrayOfMaps +version: 1.0 + +[ClickHouse] SHALL support `Array(Map(key, value))` data type. + +### Nested With Maps + +#### RQ.SRS-018.ClickHouse.Map.DataType.NestedWithMaps +version: 1.0 + +[ClickHouse] SHALL support defining `Map(key, value)` data type inside the [Nested] data type. + +### Value Retrieval + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval +version: 1.0 + +[ClickHouse] SHALL support getting the value from a `Map(key, value)` data type using `map[key]` syntax. +If `key` has duplicates then the first `key:value` pair MAY be returned. + +For example, + +```sql +SELECT a['key2'] FROM table_map; +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyInvalid +version: 1.0 + +[ClickHouse] SHALL return an error when key does not match the key type. 
+ +For example, + +```sql +SELECT map(1,2) AS m, m[1024] +``` + +Exceptions: + +* when key is `NULL` the return value MAY be `NULL` +* when key value is not valid for the key type, for example it is out of range for [Integer] type, + when reading from a table column it MAY return the default value for key data type + +#### RQ.SRS-018.ClickHouse.Map.DataType.Value.Retrieval.KeyNotFound +version: 1.0 + +[ClickHouse] SHALL return default value for the data type of the value +when there's no corresponding `key` defined in the `Map(key, value)` data type. + + +### Converting Tuple(Array, Array) to Map + +#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysToMap +version: 1.0 + +[ClickHouse] SHALL support converting [Tuple(Array, Array)] to `Map(key, value)` using the [CAST] function. + +``` sql +SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map; +``` + +``` text +┌─map───────────────────────────┐ +│ {1:'Ready',2:'Steady',3:'Go'} │ +└───────────────────────────────┘ +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.TupleOfArraysMap.Invalid +version: 1.0 + +[ClickHouse] MAY return an error when casting [Tuple(Array, Array)] to `Map(key, value)` + +* when arrays are not of equal size + + For example, + + ```sql + SELECT CAST(([2, 1, 1023], ['', '']), 'Map(UInt8, String)') AS map, map[10] + ``` + +### Converting Array(Tuple(K,V)) to Map + +#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap +version: 1.0 + +[ClickHouse] SHALL support converting [Array(Tuple(K,V))] to `Map(key, value)` using the [CAST] function. + +For example, + +```sql +SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.Conversion.From.ArrayOfTuplesToMap.Invalid +version: 1.0 + +[ClickHouse] MAY return an error when casting [Array(Tuple(K, V))] to `Map(key, value)` + +* when element is not a [Tuple] + + ```sql + SELECT CAST(([(1,2),(3)]), 'Map(UInt8, UInt8)') AS map + ``` + +* when [Tuple] does not contain two elements + + ```sql + SELECT CAST(([(1,2),(3,)]), 'Map(UInt8, UInt8)') AS map + ``` + +### Keys and Values Subcolumns + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys +version: 1.0 + +[ClickHouse] SHALL support `keys` subcolumn in the `Map(key, value)` type that can be used +to retrieve an [Array] of map keys. + +```sql +SELECT m.keys FROM t_map; +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.ArrayFunctions +version: 1.0 + +[ClickHouse] SHALL support applying [Array] functions to the `keys` subcolumn in the `Map(key, value)` type. + +For example, + +```sql +SELECT * FROM t_map WHERE has(m.keys, 'a'); +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Keys.InlineDefinedMap +version: 1.0 + +[ClickHouse] MAY not support using inline defined map to get `keys` subcolumn. + +For example, + +```sql +SELECT map( 'aa', 4, '44' , 5) as c, c.keys +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values +version: 1.0 + +[ClickHouse] SHALL support `values` subcolumn in the `Map(key, value)` type that can be used +to retrieve an [Array] of map values. + +```sql +SELECT m.values FROM t_map; +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.ArrayFunctions +version: 1.0 + +[ClickHouse] SHALL support applying [Array] functions to the `values` subcolumn in the `Map(key, value)` type. 
+ +For example, + +```sql +SELECT * FROM t_map WHERE has(m.values, 'a'); +``` + +#### RQ.SRS-018.ClickHouse.Map.DataType.SubColumns.Values.InlineDefinedMap +version: 1.0 + +[ClickHouse] MAY not support using inline defined map to get `values` subcolumn. + +For example, + +```sql +SELECT map( 'aa', 4, '44' , 5) as c, c.values +``` + +### Functions + +#### RQ.SRS-018.ClickHouse.Map.DataType.Functions.InlineDefinedMap +version: 1.0 + +[ClickHouse] SHALL support using inline defined maps as an argument to map functions. + +For example, + +```sql +SELECT map( 'aa', 4, '44' , 5) as c, mapKeys(c) +SELECT map( 'aa', 4, '44' , 5) as c, mapValues(c) +``` + +#### `length` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Length +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [length] function +that SHALL return number of keys in the map. + +For example, + +```sql +SELECT length(map(1,2,3,4)) +SELECT length(map()) +``` + +#### `empty` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Empty +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [empty] function +that SHALL return 1 if number of keys in the map is 0 otherwise if the number of keys is +greater or equal to 1 it SHALL return 0. + +For example, + +```sql +SELECT empty(map(1,2,3,4)) +SELECT empty(map()) +``` + +#### `notEmpty` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.NotEmpty +version: 1.0 + +[ClickHouse] SHALL support `Map(key, value)` data type as an argument to the [notEmpty] function +that SHALL return 0 if number if keys in the map is 0 otherwise if the number of keys is +greater or equal to 1 it SHALL return 1. + +For example, + +```sql +SELECT notEmpty(map(1,2,3,4)) +SELECT notEmpty(map()) +``` + +#### `map` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map +version: 1.0 + +[ClickHouse] SHALL support arranging `key, value` pairs into `Map(key, value)` data type +using `map` function. + +**Syntax** + +``` sql +map(key1, value1[, key2, value2, ...]) +``` + +For example, + +``` sql +SELECT map('key1', number, 'key2', number * 2) FROM numbers(3); + +┌─map('key1', number, 'key2', multiply(number, 2))─┐ +│ {'key1':0,'key2':0} │ +│ {'key1':1,'key2':2} │ +│ {'key1':2,'key2':4} │ +└──────────────────────────────────────────────────┘ +``` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.InvalidNumberOfArguments +version: 1.0 + +[ClickHouse] SHALL return an error when `map` function is called with non even number of arguments. + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MixedKeyOrValueTypes +version: 1.0 + +[ClickHouse] SHALL return an error when `map` function is called with mixed key or value types. + + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapAdd +version: 1.0 + +[ClickHouse] SHALL support converting the results of `mapAdd` function to a `Map(key, value)` data type. + +For example, + +``` sql +SELECT CAST(mapAdd(([toUInt8(1), 2], [1, 1]), ([toUInt8(1), 2], [1, 1])), "Map(Int8,Int8)") +``` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapSubstract +version: 1.0 + +[ClickHouse] SHALL support converting the results of `mapSubstract` function to a `Map(key, value)` data type. 
+ +For example, + +```sql +SELECT CAST(mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt32(2), 1])), "Map(Int8,Int8)") +``` +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.Map.MapPopulateSeries +version: 1.0 + +[ClickHouse] SHALL support converting the results of `mapPopulateSeries` function to a `Map(key, value)` data type. + +For example, + +```sql +SELECT CAST(mapPopulateSeries([1,2,4], [11,22,44], 5), "Map(Int8,Int8)") +``` + +#### `mapContains` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapContains +version: 1.0 + +[ClickHouse] SHALL support `mapContains(map, key)` function to check weather `map.keys` contains the `key`. + +For example, + +```sql +SELECT mapContains(a, 'abc') from table_map; +``` + +#### `mapKeys` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapKeys +version: 1.0 + +[ClickHouse] SHALL support `mapKeys(map)` function to return all the map keys in the [Array] format. + +For example, + +```sql +SELECT mapKeys(a) from table_map; +``` + +#### `mapValues` + +##### RQ.SRS-018.ClickHouse.Map.DataType.Functions.MapValues +version: 1.0 + +[ClickHouse] SHALL support `mapValues(map)` function to return all the map values in the [Array] format. + +For example, + +```sql +SELECT mapValues(a) from table_map; +``` + +[Nested]: https://clickhouse.tech/docs/en/sql-reference/data-types/nested-data-structures/nested/ +[length]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#array_functions-length +[empty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-empty +[notEmpty]: https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#function-notempty +[CAST]: https://clickhouse.tech/docs/en/sql-reference/functions/type-conversion-functions/#type_conversion_function-cast +[Tuple]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/ +[Tuple(Array,Array)]: https://clickhouse.tech/docs/en/sql-reference/data-types/tuple/ +[Array]: https://clickhouse.tech/docs/en/sql-reference/data-types/array/ +[String]: https://clickhouse.tech/docs/en/sql-reference/data-types/string/ +[Integer]: https://clickhouse.tech/docs/en/sql-reference/data-types/int-uint/ +[ClickHouse]: https://clickhouse.tech +[GitHub Repository]: https://github.com/ClickHouse/ClickHouse/blob/master/tests/testflows/map_type/requirements/requirements.md +[Revision History]: https://github.com/ClickHouse/ClickHouse/commits/master/tests/testflows/map_type/requirements/requirements.md +[Git]: https://git-scm.com/ +[GitHub]: https://github.com +''') diff --git a/tests/testflows/map_type/tests/__init__.py b/tests/testflows/map_type/tests/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/testflows/map_type/tests/common.py b/tests/testflows/map_type/tests/common.py new file mode 100644 index 00000000000..a3a0d0ef0b1 --- /dev/null +++ b/tests/testflows/map_type/tests/common.py @@ -0,0 +1,49 @@ +import uuid +from collections import namedtuple + +from testflows.core import * +from testflows.core.name import basename, parentname +from testflows._core.testtype import TestSubType + +def getuid(): + if current().subtype == TestSubType.Example: + testname = f"{basename(parentname(current().name)).replace(' ', '_').replace(',','')}" + else: + testname = f"{basename(current().name).replace(' ', '_').replace(',','')}" + return testname + "_" + str(uuid.uuid1()).replace('-', '_') + +@TestStep(Given) +def allow_experimental_map_type(self): + """Set allow_experimental_map_type = 1 + """ + setting 
= ("allow_experimental_map_type", 1) + default_query_settings = None + + try: + with By("adding allow_experimental_map_type to the default query settings"): + default_query_settings = getsattr(current().context, "default_query_settings", []) + default_query_settings.append(setting) + yield + finally: + with Finally("I remove allow_experimental_map_type from the default query settings"): + if default_query_settings: + try: + default_query_settings.pop(default_query_settings.index(setting)) + except ValueError: + pass + +@TestStep(Given) +def create_table(self, name, statement, on_cluster=False): + """Create table. + """ + node = current().context.node + try: + with Given(f"I have a {name} table"): + node.query(statement.format(name=name)) + yield name + finally: + with Finally("I drop the table"): + if on_cluster: + node.query(f"DROP TABLE IF EXISTS {name} ON CLUSTER {on_cluster}") + else: + node.query(f"DROP TABLE IF EXISTS {name}") diff --git a/tests/testflows/map_type/tests/feature.py b/tests/testflows/map_type/tests/feature.py new file mode 100755 index 00000000000..5fd48844825 --- /dev/null +++ b/tests/testflows/map_type/tests/feature.py @@ -0,0 +1,1195 @@ +# -*- coding: utf-8 -*- +import time + +from testflows.core import * +from testflows.asserts import error + +from map_type.requirements import * +from map_type.tests.common import * + +@TestOutline +def select_map(self, map, output, exitcode=0, message=None): + """Create a map using select statement. + """ + node = self.context.node + + with When("I create a map using select", description=map): + r = node.query(f"SELECT {map}", exitcode=exitcode, message=message) + + with Then("I expect output to match", description=output): + assert r.output == output, error() + +@TestOutline +def table_map(self, type, data, select, filter, exitcode, message, check_insert=False, order_by=None): + """Check using a map column in a table. 
+ """ + uid = getuid() + node = self.context.node + + if order_by is None: + order_by = "m" + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (m " + type + ") ENGINE = MergeTree() ORDER BY " + order_by + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data into the map column"): + if check_insert: + node.query(f"INSERT INTO {table} VALUES {data}", exitcode=exitcode, message=message) + else: + node.query(f"INSERT INTO {table} VALUES {data}") + + if not check_insert: + with And("I try to read from the table"): + node.query(f"SELECT {select} FROM {table} WHERE {filter} FORMAT JSONEachRow", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_String("1.0") +) +@Examples("map output", [ + ("map('',1)", "{'':1}", Name("empty string")), + ("map('hello',1)", "{'hello':1}", Name("non-empty string")), + ("map('Gãńdåłf_Thê_Gręât',1)", "{'Gãńdåłf_Thê_Gręât':1}", Name("utf-8 string")), + ("map('hello there',1)", "{'hello there':1}", Name("multi word string")), + ("map('hello',1,'there',2)", "{'hello':1,'there':2}", Name("multiple keys")), + ("map(toString(1),1)", "{'1':1}", Name("toString")), + ("map(toFixedString('1',1),1)", "{'1':1}", Name("toFixedString")), + ("map(toNullable('1'),1)", "{'1':1}", Name("Nullable")), + ("map(toNullable(NULL),1)", "{NULL:1}", Name("Nullable(NULL)")), + ("map(toLowCardinality('1'),1)", "{'1':1}", Name("LowCardinality(String)")), + ("map(toLowCardinality(toFixedString('1',1)),1)", "{'1':1}", Name("LowCardinality(FixedString)")), +], row_format="%20s,%20s") +def select_map_with_key_string(self, map, output): + """Create a map using select that has key string type. + """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_String("1.0") +) +@Examples("map output", [ + ("map('key','')", "{'key':''}", Name("empty string")), + ("map('key','hello')", "{'key':'hello'}", Name("non-empty string")), + ("map('key','Gãńdåłf_Thê_Gręât')", "{'key':'Gãńdåłf_Thê_Gręât'}", Name("utf-8 string")), + ("map('key','hello there')", "{'key':'hello there'}", Name("multi word string")), + ("map('key','hello','key2','there')", "{'key':'hello','key2':'there'}", Name("multiple keys")), + ("map('key',toString(1))", "{'key':'1'}", Name("toString")), + ("map('key',toFixedString('1',1))", "{'key':'1'}", Name("toFixedString")), + ("map('key',toNullable('1'))", "{'key':'1'}", Name("Nullable")), + ("map('key',toNullable(NULL))", "{'key':NULL}", Name("Nullable(NULL)")), + ("map('key',toLowCardinality('1'))", "{'key':'1'}", Name("LowCardinality(String)")), + ("map('key',toLowCardinality(toFixedString('1',1)))", "{'key':'1'}", Name("LowCardinality(FixedString)")), +], row_format="%20s,%20s") +def select_map_with_value_string(self, map, output): + """Create a map using select that has value string type. 
+ """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Array("1.0") +) +@Examples("map output", [ + ("map('key',[])", "{'key':[]}", Name("empty Array")), + ("map('key',[1,2,3])", "{'key':[1,2,3]}", Name("non-empty array of ints")), + ("map('key',['1','2','3'])", "{'key':['1','2','3']}", Name("non-empty array of strings")), + ("map('key',[map(1,2),map(2,3)])", "{'key':[{1:2},{2:3}]}", Name("non-empty array of maps")), + ("map('key',[map(1,[map(1,[1])]),map(2,[map(2,[3])])])", "{'key':[{1:[{1:[1]}]},{2:[{2:[3]}]}]}", Name("non-empty array of maps of array of maps")), +]) +def select_map_with_value_array(self, map, output): + """Create a map using select that has value array type. + """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Integer("1.0") +) +@Examples("map output", [ + ("(map(1,127,2,0,3,-128))", '{1:127,2:0,3:-128}', Name("Int8")), + ("(map(1,0,2,255))", '{1:0,2:255}', Name("UInt8")), + ("(map(1,32767,2,0,3,-32768))", '{1:32767,2:0,3:-32768}', Name("Int16")), + ("(map(1,0,2,65535))", '{1:0,2:65535}', Name("UInt16")), + ("(map(1,2147483647,2,0,3,-2147483648))", '{1:2147483647,2:0,3:-2147483648}', Name("Int32")), + ("(map(1,0,2,4294967295))", '{1:0,2:4294967295}', Name("UInt32")), + ("(map(1,9223372036854775807,2,0,3,-9223372036854775808))", '{1:"9223372036854775807",2:"0",3:"-9223372036854775808"}', Name("Int64")), + ("(map(1,0,2,18446744073709551615))", '{1:0,2:18446744073709551615}', Name("UInt64")), + ("(map(1,170141183460469231731687303715884105727,2,0,3,-170141183460469231731687303715884105728))", '{1:1.7014118346046923e38,2:0,3:-1.7014118346046923e38}', Name("Int128")), + ("(map(1,57896044618658097711785492504343953926634992332820282019728792003956564819967,2,0,3,-57896044618658097711785492504343953926634992332820282019728792003956564819968))", '{1:5.78960446186581e76,2:0,3:-5.78960446186581e76}', Name("Int256")), + ("(map(1,0,2,115792089237316195423570985008687907853269984665640564039457584007913129639935))", '{1:0,2:1.157920892373162e77}', Name("UInt256")), + ("(map(1,toNullable(1)))", '{1:1}', Name("toNullable")), + ("(map(1,toNullable(NULL)))", '{1:NULL}', Name("toNullable(NULL)")), +]) +def select_map_with_value_integer(self, map, output): + """Create a map using select that has value integer type. 
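+
+    Examples cover the boundary values of the integer types from Int8 up to
+    UInt256 as well as Nullable values, including NULL.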
+ """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer("1.0") +) +@Examples("map output", [ + ("(map(127,1,0,1,-128,1))", '{127:1,0:1,-128:1}', Name("Int8")), + ("(map(0,1,255,1))", '{0:1,255:1}', Name("UInt8")), + ("(map(32767,1,0,1,-32768,1))", '{32767:1,0:1,-32768:1}', Name("Int16")), + ("(map(0,1,65535,1))", '{0:1,65535:1}', Name("UInt16")), + ("(map(2147483647,1,0,1,-2147483648,1))", '{2147483647:1,0:1,-2147483648:1}', Name("Int32")), + ("(map(0,1,4294967295,1))", '{0:1,4294967295:1}', Name("UInt32")), + ("(map(9223372036854775807,1,0,1,-9223372036854775808,1))", '{"9223372036854775807":1,"0":1,"-9223372036854775808":1}', Name("Int64")), + ("(map(0,1,18446744073709551615,1))", '{0:1,18446744073709551615:1}', Name("UInt64")), + ("(map(170141183460469231731687303715884105727,1,0,1,-170141183460469231731687303715884105728,1))", '{1.7014118346046923e38:1,0:1,-1.7014118346046923e38:1}', Name("Int128")), + ("(map(57896044618658097711785492504343953926634992332820282019728792003956564819967,1,0,1,-57896044618658097711785492504343953926634992332820282019728792003956564819968,1))", '{5.78960446186581e76:1,0:1,-5.78960446186581e76:1}', Name("Int256")), + ("(map(0,1,115792089237316195423570985008687907853269984665640564039457584007913129639935,1))", '{0:1,1.157920892373162e77:1}', Name("UInt256")), + ("(map(toNullable(1),1))", '{1:1}', Name("toNullable")), + ("(map(toNullable(NULL),1))", '{NULL:1}', Name("toNullable(NULL)")), +]) +def select_map_with_key_integer(self, map, output): + """Create a map using select that has key integer type. + """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_String("1.0") +) +@Examples("type data output", [ + ("Map(String, Int8)", "('2020-01-01', map('',1))", '{"d":"2020-01-01","m":{"":1}}', Name("empty string")), + ("Map(String, Int8)", "('2020-01-01', map('hello',1))", '{"d":"2020-01-01","m":{"hello":1}}', Name("non-empty string")), + ("Map(String, Int8)", "('2020-01-01', map('Gãńdåłf_Thê_Gręât',1))", '{"d":"2020-01-01","m":{"Gãńdåłf_Thê_Gręât":1}}', Name("utf-8 string")), + ("Map(String, Int8)", "('2020-01-01', map('hello there',1))", '{"d":"2020-01-01","m":{"hello there":1}}', Name("multi word string")), + ("Map(String, Int8)", "('2020-01-01', map('hello',1,'there',2))", '{"d":"2020-01-01","m":{"hello":1,"there":2}}', Name("multiple keys")), + ("Map(String, Int8)", "('2020-01-01', map(toString(1),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("toString")), + ("Map(FixedString(1), Int8)", "('2020-01-01', map(toFixedString('1',1),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("FixedString")), + ("Map(Nullable(String), Int8)", "('2020-01-01', map(toNullable('1'),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("Nullable")), + ("Map(Nullable(String), Int8)", "('2020-01-01', map(toNullable(NULL),1))", '{"d":"2020-01-01","m":{null:1}}', Name("Nullable(NULL)")), + ("Map(LowCardinality(String), Int8)", "('2020-01-01', map(toLowCardinality('1'),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("LowCardinality(String)")), + ("Map(LowCardinality(String), Int8)", "('2020-01-01', map('1',1))", '{"d":"2020-01-01","m":{"1":1}}', Name("LowCardinality(String) cast from String")), + ("Map(LowCardinality(String), LowCardinality(String))", "('2020-01-01', map('1','1'))", '{"d":"2020-01-01","m":{"1":"1"}}', Name("LowCardinality(String) for key and value")), + ("Map(LowCardinality(FixedString(1)), Int8)", "('2020-01-01', 
map(toLowCardinality(toFixedString('1',1)),1))", '{"d":"2020-01-01","m":{"1":1}}', Name("LowCardinality(FixedString)")), +]) +def table_map_with_key_string(self, type, data, output): + """Check what values we can insert into map type column with key string. + """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_String("1.0") +) +@Examples("type data output select", [ + ("Map(String, Int8)", "('2020-01-01', map('',1))", '{"m":1}', "m[''] AS m", Name("empty string")), + ("Map(String, Int8)", "('2020-01-01', map('hello',1))", '{"m":1}', "m['hello'] AS m", Name("non-empty string")), + ("Map(String, Int8)", "('2020-01-01', map('Gãńdåłf_Thê_Gręât',1))", '{"m":1}', "m['Gãńdåłf_Thê_Gręât'] AS m", Name("utf-8 string")), + ("Map(String, Int8)", "('2020-01-01', map('hello there',1))", '{"m":1}', "m['hello there'] AS m", Name("multi word string")), + ("Map(String, Int8)", "('2020-01-01', map('hello',1,'there',2))", '{"m":1}', "m['hello'] AS m", Name("multiple keys")), + ("Map(String, Int8)", "('2020-01-01', map(toString(1),1))", '{"m":1}', "m['1'] AS m", Name("toString")), + ("Map(FixedString(1), Int8)", "('2020-01-01', map(toFixedString('1',1),1))", '{"m":1}', "m['1'] AS m", Name("FixedString")), + ("Map(Nullable(String), Int8)", "('2020-01-01', map(toNullable('1'),1))", '{"m":1}}', "m['1'] AS m", Name("Nullable")), + ("Map(Nullable(String), Int8)", "('2020-01-01', map(toNullable(NULL),1))", '{"m":1}', "m[null] AS m", Name("Nullable(NULL)")), + ("Map(LowCardinality(String), Int8)", "('2020-01-01', map(toLowCardinality('1'),1))", '{"m":1}}', "m['1'] AS m", Name("LowCardinality(String)")), + ("Map(LowCardinality(String), Int8)", "('2020-01-01', map('1',1))", '{"m":1}', "m['1'] AS m", Name("LowCardinality(String) cast from String")), + ("Map(LowCardinality(String), LowCardinality(String))", "('2020-01-01', map('1','1'))", '{"m":"1"}', "m['1'] AS m", Name("LowCardinality(String) for key and value")), + ("Map(LowCardinality(FixedString(1)), Int8)", "('2020-01-01', map(toLowCardinality(toFixedString('1',1)),1))", '{"m":1}', "m['1'] AS m", Name("LowCardinality(FixedString)")), +]) +def table_map_select_key_with_key_string(self, type, data, output, select): + """Check what values we can insert into map type column with key string and if key can be selected. 
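+
+    Each example supplies a `select` expression that reads the value back by its
+    string key, for example `m['hello'] AS m`.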
+ """ + insert_into_table(type=type, data=data, output=output, select=select) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_String("1.0") +) +@Examples("type data output", [ + ("Map(String, String)", "('2020-01-01', map('key',''))", '{"d":"2020-01-01","m":{"key":""}}', Name("empty string")), + ("Map(String, String)", "('2020-01-01', map('key','hello'))", '{"d":"2020-01-01","m":{"key":"hello"}}', Name("non-empty string")), + ("Map(String, String)", "('2020-01-01', map('key','Gãńdåłf_Thê_Gręât'))", '{"d":"2020-01-01","m":{"key":"Gãńdåłf_Thê_Gręât"}}', Name("utf-8 string")), + ("Map(String, String)", "('2020-01-01', map('key', 'hello there'))", '{"d":"2020-01-01","m":{"key":"hello there"}}', Name("multi word string")), + ("Map(String, String)", "('2020-01-01', map('key','hello','key2','there'))", '{"d":"2020-01-01","m":{"key":"hello","key2":"there"}}', Name("multiple keys")), + ("Map(String, String)", "('2020-01-01', map('key', toString(1)))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("toString")), + ("Map(String, FixedString(1))", "('2020-01-01', map('key',toFixedString('1',1)))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("FixedString")), + ("Map(String, Nullable(String))", "('2020-01-01', map('key',toNullable('1')))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("Nullable")), + ("Map(String, Nullable(String))", "('2020-01-01', map('key',toNullable(NULL)))", '{"d":"2020-01-01","m":{"key":null}}', Name("Nullable(NULL)")), + ("Map(String, LowCardinality(String))", "('2020-01-01', map('key',toLowCardinality('1')))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("LowCardinality(String)")), + ("Map(String, LowCardinality(String))", "('2020-01-01', map('key','1'))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("LowCardinality(String) cast from String")), + ("Map(LowCardinality(String), LowCardinality(String))", "('2020-01-01', map('1','1'))", '{"d":"2020-01-01","m":{"1":"1"}}', Name("LowCardinality(String) for key and value")), + ("Map(String, LowCardinality(FixedString(1)))", "('2020-01-01', map('key',toLowCardinality(toFixedString('1',1))))", '{"d":"2020-01-01","m":{"key":"1"}}', Name("LowCardinality(FixedString)")) +]) +def table_map_with_value_string(self, type, data, output): + """Check what values we can insert into map type column with value string. 
+ """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_String("1.0") +) +@Examples("type data output", [ + ("Map(String, String)", "('2020-01-01', map('key',''))", '{"m":""}', Name("empty string")), + ("Map(String, String)", "('2020-01-01', map('key','hello'))", '{"m":"hello"}', Name("non-empty string")), + ("Map(String, String)", "('2020-01-01', map('key','Gãńdåłf_Thê_Gręât'))", '{"m":"Gãńdåłf_Thê_Gręât"}', Name("utf-8 string")), + ("Map(String, String)", "('2020-01-01', map('key', 'hello there'))", '{"m":"hello there"}', Name("multi word string")), + ("Map(String, String)", "('2020-01-01', map('key','hello','key2','there'))", '{"m":"hello"}', Name("multiple keys")), + ("Map(String, String)", "('2020-01-01', map('key', toString(1)))", '{"m":"1"}', Name("toString")), + ("Map(String, FixedString(1))", "('2020-01-01', map('key',toFixedString('1',1)))", '{"m":"1"}', Name("FixedString")), + ("Map(String, Nullable(String))", "('2020-01-01', map('key',toNullable('1')))", '{"m":"1"}', Name("Nullable")), + ("Map(String, Nullable(String))", "('2020-01-01', map('key',toNullable(NULL)))", '{"m":null}', Name("Nullable(NULL)")), + ("Map(String, LowCardinality(String))", "('2020-01-01', map('key',toLowCardinality('1')))", '{"m":"1"}', Name("LowCardinality(String)")), + ("Map(String, LowCardinality(String))", "('2020-01-01', map('key','1'))", '{"m":"1"}', Name("LowCardinality(String) cast from String")), + ("Map(LowCardinality(String), LowCardinality(String))", "('2020-01-01', map('key','1'))", '{"m":"1"}', Name("LowCardinality(String) for key and value")), + ("Map(String, LowCardinality(FixedString(1)))", "('2020-01-01', map('key',toLowCardinality(toFixedString('1',1))))", '{"m":"1"}', Name("LowCardinality(FixedString)")) +]) +def table_map_select_key_with_value_string(self, type, data, output): + """Check what values we can insert into map type column with value string and if it can be selected by key. 
+ """ + insert_into_table(type=type, data=data, output=output, select="m['key'] AS m") + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Integer("1.0") +) +@Examples("type data output", [ + ("Map(Int8, Int8)", "('2020-01-01', map(1,127,2,0,3,-128))", '{"d":"2020-01-01","m":{1:127,2:0,3:-128}}', Name("Int8")), + ("Map(Int8, UInt8)", "('2020-01-01', map(1,0,2,255))", '{"d":"2020-01-01","m":{1:0,2:255}}', Name("UInt8")), + ("Map(Int8, Int16)", "('2020-01-01', map(1,127,2,0,3,-128))", '{"d":"2020-01-01","m":{1:32767,2:0,3:-32768}}', Name("Int16")), + ("Map(Int8, UInt16)", "('2020-01-01', map(1,0,2,65535))", '{"d":"2020-01-01","m":{1:0,2:65535}}', Name("UInt16")), + ("Map(Int8, Int32)", "('2020-01-01', map(1,127,2,0,3,-128))", '{"d":"2020-01-01","m":{1:2147483647,2:0,3:-2147483648}}', Name("Int32")), + ("Map(Int8, UInt32)", "('2020-01-01', map(1,0,2,4294967295))", '{"d":"2020-01-01","m":{1:0,2:4294967295}}', Name("UInt32")), + ("Map(Int8, Int64)", "('2020-01-01', map(1,9223372036854775807,2,0,3,-9223372036854775808))", '{"d":"2020-01-01","m":{1:"9223372036854775807",2:"0",3:"-9223372036854775808"}}', Name("Int64")), + ("Map(Int8, UInt64)", "('2020-01-01', map(1,0,2,18446744073709551615))", '{"d":"2020-01-01","m":{1:"0",2:"18446744073709551615"}}', Name("UInt64")), + ("Map(Int8, Int128)", "('2020-01-01', map(1,170141183460469231731687303715884105727,2,0,3,-170141183460469231731687303715884105728))", '{"d":"2020-01-01","m":{1:"170141183460469231731687303715884105727",2:"0",3:"-170141183460469231731687303715884105728"}}', Name("Int128")), + ("Map(Int8, Int256)", "('2020-01-01', map(1,57896044618658097711785492504343953926634992332820282019728792003956564819967,2,0,3,-57896044618658097711785492504343953926634992332820282019728792003956564819968))", '{"d":"2020-01-01","m":{1:"57896044618658097711785492504343953926634992332820282019728792003956564819967",2:"0",3:"-57896044618658097711785492504343953926634992332820282019728792003956564819968"}}', Name("Int256")), + ("Map(Int8, UInt256)", "('2020-01-01', map(1,0,2,115792089237316195423570985008687907853269984665640564039457584007913129639935))", '{"d":"2020-01-01","m":{1:"0",2:"115792089237316195423570985008687907853269984665640564039457584007913129639935"}}', Name("UInt256")), + ("Map(Int8, Nullable(Int8))", "('2020-01-01', map(1,toNullable(1)))", '{"d":"2020-01-01","m":{1:1}}', Name("toNullable")), + ("Map(Int8, Nullable(Int8))", "('2020-01-01', map(1,toNullable(NULL)))", '{"d":"2020-01-01","m":{1:null}}', Name("toNullable(NULL)")), +]) +def table_map_with_value_integer(self, type, data, output): + """Check what values we can insert into map type column with value integer. 
+ """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Array("1.0") +) +@Examples("type data output", [ + ("Map(String, Array(Int8))", "('2020-01-01', map('key',[]))", '{"d":"2020-01-01","m":{"key":[]}}', Name("empty array")), + ("Map(String, Array(Int8))", "('2020-01-01', map('key',[1,2,3]))", '{"d":"2020-01-01","m":{"key":[1,2,3]}}', Name("non-empty array of ints")), + ("Map(String, Array(String))", "('2020-01-01', map('key',['1','2','3']))", '{"d":"2020-01-01","m":{"key":["1","2","3"]}}', Name("non-empty array of strings")), + ("Map(String, Array(Map(Int8, Int8)))", "('2020-01-01', map('key',[map(1,2),map(2,3)]))", '{"d":"2020-01-01","m":{"key":[{1:2},{2:3}]}}', Name("non-empty array of maps")), + ("Map(String, Array(Map(Int8, Array(Map(Int8, Array(Int8))))))", "('2020-01-01', map('key',[map(1,[map(1,[1])]),map(2,[map(2,[3])])]))", '{"d":"2020-01-01","m":{"key":[{1:[{1:[1]}]},{2:[{2:[3]}]}]}}', Name("non-empty array of maps of array of maps")), +]) +def table_map_with_value_array(self, type, data, output): + """Check what values we can insert into map type column with value Array. + """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer("1.0") +) +@Examples("type data output", [ + ("Map(Int8, Int8)", "('2020-01-01', map(127,1,0,1,-128,1))", '{"d":"2020-01-01","m":{127:1,0:1,-128:1}}', Name("Int8")), + ("Map(UInt8, Int8)", "('2020-01-01', map(0,1,255,1))", '{"d":"2020-01-01","m":{0:1,255:1}}', Name("UInt8")), + ("Map(Int16, Int8)", "('2020-01-01', map(127,1,0,1,-128,1))", '{"d":"2020-01-01","m":{32767:1,0:1,-32768:1}}', Name("Int16")), + ("Map(UInt16, Int8)", "('2020-01-01', map(0,1,65535,1))", '{"d":"2020-01-01","m":{0:1,65535:1}}', Name("UInt16")), + ("Map(Int32, Int8)", "('2020-01-01', map(2147483647,1,0,1,-2147483648,1))", '{"d":"2020-01-01","m":{2147483647:1,0:1,-2147483648:1}}', Name("Int32")), + ("Map(UInt32, Int8)", "('2020-01-01', map(0,1,4294967295,1))", '{"d":"2020-01-01","m":{0:1,4294967295:1}}', Name("UInt32")), + ("Map(Int64, Int8)", "('2020-01-01', map(9223372036854775807,1,0,1,-9223372036854775808,1))", '{"d":"2020-01-01","m":{"9223372036854775807":1,"0":1,"-9223372036854775808":1}}', Name("Int64")), + ("Map(UInt64, Int8)", "('2020-01-01', map(0,1,18446744073709551615,1))", '{"d":"2020-01-01","m":{"0":1,"18446744073709551615":1}}', Name("UInt64")), + ("Map(Int128, Int8)", "('2020-01-01', map(170141183460469231731687303715884105727,1,0,1,-170141183460469231731687303715884105728,1))", '{"d":"2020-01-01","m":{170141183460469231731687303715884105727:1,0:1,"-170141183460469231731687303715884105728":1}}', Name("Int128")), + ("Map(Int256, Int8)", "('2020-01-01', map(57896044618658097711785492504343953926634992332820282019728792003956564819967,1,0,1,-57896044618658097711785492504343953926634992332820282019728792003956564819968,1))", '{"d":"2020-01-01","m":{"57896044618658097711785492504343953926634992332820282019728792003956564819967":1,"0":1,"-57896044618658097711785492504343953926634992332820282019728792003956564819968":1}}', Name("Int256")), + ("Map(UInt256, Int8)", "('2020-01-01', map(0,1,115792089237316195423570985008687907853269984665640564039457584007913129639935,1))", '{"d":"2020-01-01","m":{"0":1,"115792089237316195423570985008687907853269984665640564039457584007913129639935":1}}', Name("UInt256")), + ("Map(Nullable(Int8), Int8)", "('2020-01-01', map(toNullable(1),1))", 
'{"d":"2020-01-01","m":{1:1}}', Name("toNullable")), + ("Map(Nullable(Int8), Int8)", "('2020-01-01', map(toNullable(NULL),1))", '{"d":"2020-01-01","m":{null:1}}', Name("toNullable(NULL)")), +]) +def table_map_with_key_integer(self, type, data, output): + """Check what values we can insert into map type column with key integer. + """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Key_Integer("1.0") +) +@Examples("type data output select", [ + ("Map(Int8, Int8)", "('2020-01-01', map(127,1,0,1,-128,1))", '{"m":1}', "m[127] AS m", Name("Int8")), + ("Map(UInt8, Int8)", "('2020-01-01', map(0,1,255,1))", '{"m":2}', "(m[255] + m[0]) AS m", Name("UInt8")), + ("Map(Int16, Int8)", "('2020-01-01', map(127,1,0,1,-128,1))", '{"m":3}', "(m[-128] + m[0] + m[-128]) AS m", Name("Int16")), + ("Map(UInt16, Int8)", "('2020-01-01', map(0,1,65535,1))", '{"m":2}', "(m[0] + m[65535]) AS m", Name("UInt16")), + ("Map(Int32, Int8)", "('2020-01-01', map(2147483647,1,0,1,-2147483648,1))", '{"m":3}', "(m[2147483647] + m[0] + m[-2147483648]) AS m", Name("Int32")), + ("Map(UInt32, Int8)", "('2020-01-01', map(0,1,4294967295,1))", '{"m":2}', "(m[0] + m[4294967295]) AS m", Name("UInt32")), + ("Map(Int64, Int8)", "('2020-01-01', map(9223372036854775807,1,0,1,-9223372036854775808,1))", '{"m":3}', "(m[9223372036854775807] + m[0] + m[-9223372036854775808]) AS m", Name("Int64")), + ("Map(UInt64, Int8)", "('2020-01-01', map(0,1,18446744073709551615,1))", '{"m":2}', "(m[0] + m[18446744073709551615]) AS m", Name("UInt64")), + ("Map(Int128, Int8)", "('2020-01-01', map(170141183460469231731687303715884105727,1,0,1,-170141183460469231731687303715884105728,1))", '{"m":3}', "(m[170141183460469231731687303715884105727] + m[0] + m[-170141183460469231731687303715884105728]) AS m", Name("Int128")), + ("Map(Int256, Int8)", "('2020-01-01', map(57896044618658097711785492504343953926634992332820282019728792003956564819967,1,0,1,-57896044618658097711785492504343953926634992332820282019728792003956564819968,1))", '{"m":3}', "(m[57896044618658097711785492504343953926634992332820282019728792003956564819967] + m[0] + m[-57896044618658097711785492504343953926634992332820282019728792003956564819968]) AS m", Name("Int256")), + ("Map(UInt256, Int8)", "('2020-01-01', map(0,1,115792089237316195423570985008687907853269984665640564039457584007913129639935,1))", '{"m":2}', "(m[0] + m[115792089237316195423570985008687907853269984665640564039457584007913129639935]) AS m", Name("UInt256")), + ("Map(Nullable(Int8), Int8)", "('2020-01-01', map(toNullable(1),1))", '{"m":1}', "m[1] AS m", Name("toNullable")), + ("Map(Nullable(Int8), Int8)", "('2020-01-01', map(toNullable(NULL),1))", '{"m":1}', "m[null] AS m", Name("toNullable(NULL)")), +]) +def table_map_select_key_with_key_integer(self, type, data, output, select): + """Check what values we can insert into map type column with key integer and if we can use the key to select the value. 
+ """ + insert_into_table(type=type, data=data, output=output, select=select) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_ArrayOfMaps("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_NestedWithMaps("1.0") +) +@Examples("type data output partition_by", [ + ("Array(Map(String, Int8))", + "('2020-01-01', [map('hello',1),map('hello',1,'there',2)])", + '{"d":"2020-01-01","m":[{"hello":1},{"hello":1,"there":2}]}', + "m", + Name("Array(Map(String, Int8))")), + ("Nested(x Map(String, Int8))", + "('2020-01-01', [map('hello',1)])", + '{"d":"2020-01-01","m.x":[{"hello":1}]}', + "m.x", + Name("Nested(x Map(String, Int8)")) +]) +def table_with_map_inside_another_type(self, type, data, output, partition_by): + """Check what values we can insert into a type that has map type. + """ + insert_into_table(type=type, data=data, output=output, partition_by=partition_by) + +@TestOutline +def insert_into_table(self, type, data, output, partition_by="m", select="*"): + """Check we can insert data into a table. + """ + uid = getuid() + node = self.context.node + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (d DATE, m " + type + ") ENGINE = MergeTree() PARTITION BY " + partition_by + " ORDER BY d" + + with Given(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data", description=data): + sql = f"INSERT INTO {table} VALUES {data}" + node.query(sql) + + with And("I select rows from the table"): + r = node.query(f"SELECT {select} FROM {table} FORMAT JSONEachRow") + + with Then("I expect output to match", description=output): + assert r.output == output, error() + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MixedKeyOrValueTypes("1.0") +) +def select_map_with_invalid_mixed_key_and_value_types(self): + """Check that creating a map with mixed key types fails. + """ + node = self.context.node + exitcode = 130 + message = "DB::Exception: There is no supertype for types String, UInt8 because some of them are String/FixedString and some of them are not" + + with Check("attempt to create a map using SELECT with mixed key types then it fails"): + node.query("SELECT map('hello',1,2,3)", exitcode=exitcode, message=message) + + with Check("attempt to create a map using SELECT with mixed value types then it fails"): + node.query("SELECT map(1,'hello',2,2)", exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_InvalidNumberOfArguments("1.0") +) +def select_map_with_invalid_number_of_arguments(self): + """Check that creating a map with invalid number of arguments fails. + """ + node = self.context.node + exitcode = 42 + message = "DB::Exception: Function map requires even number of arguments" + + with When("I create a map using SELECT with invalid number of arguments"): + node.query("SELECT map(1,2,3)", exitcode=exitcode, message=message) + +@TestScenario +def select_map_empty(self): + """Check that we can can create a empty map by not passing any arguments. + """ + node = self.context.node + + with When("I create a map using SELECT with no arguments"): + r = node.query("SELECT map()") + + with Then("it should create an empty map"): + assert r.output == "{}", error() + +@TestScenario +def insert_invalid_mixed_key_and_value_types(self): + """Check that inserting a map with mixed key or value types fails. 
+ """ + uid = getuid() + node = self.context.node + exitcode = 130 + message = "DB::Exception: There is no supertype for types String, UInt8 because some of them are String/FixedString and some of them are not" + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (d DATE, m Map(String, Int8)) ENGINE = MergeTree() PARTITION BY m ORDER BY d" + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert a map with mixed key types then it should fail"): + sql = f"INSERT INTO {table} VALUES ('2020-01-01', map('hello',1,2,3))" + node.query(sql, exitcode=exitcode, message=message) + + with When("I insert a map with mixed value types then it should fail"): + sql = f"INSERT INTO {table} VALUES ('2020-01-01', map(1,'hello',2,2))" + node.query(sql, exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_DuplicatedKeys("1.0") +) +@Examples("type data output", [ + ("Map(String, String)", + "('2020-01-01', map('hello','there','hello','over there'))", + '{"d":"2020-01-01","m":{"hello":"there","hello":"over there"}}', + Name("Map(String, String))")), + ("Map(Int64, String)", + "('2020-01-01', map(12345,'there',12345,'over there'))", + '{"d":"2020-01-01","m":{"12345":"there","12345":"over there"}}', + Name("Map(Int64, String))")), +]) +def table_map_with_duplicated_keys(self, type, data, output): + """Check that map supports duplicated keys. + """ + insert_into_table(type=type, data=data, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_DuplicatedKeys("1.0") +) +@Examples("map output", [ + ("map('hello','there','hello','over there')", "{'hello':'there','hello':'over there'}", Name("String")), + ("map(12345,'there',12345,'over there')", "{12345:'there',12345:'over there'}", Name("Integer")) +]) +def select_map_with_duplicated_keys(self, map, output): + """Check creating a map with duplicated keys. 
+ """ + select_map(map=map, output=output) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyNotFound("1.0") +) +def select_map_key_not_found(self): + node = self.context.node + + with When("map is empty"): + node.query("SELECT map() AS m, m[1]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("map has integer values"): + r = node.query("SELECT map(1,2) AS m, m[2] FORMAT Values") + with Then("zero should be returned for key that is not found"): + assert r.output == "({1:2},0)", error() + + with When("map has string values"): + r = node.query("SELECT map(1,'2') AS m, m[2] FORMAT Values") + with Then("empty string should be returned for key that is not found"): + assert r.output == "({1:'2'},'')", error() + + with When("map has array values"): + r = node.query("SELECT map(1,[2]) AS m, m[2] FORMAT Values") + with Then("empty array be returned for key that is not found"): + assert r.output == "({1:[2]},[])", error() + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyNotFound("1.0") +) +@Examples("type data select exitcode message", [ + ("Map(UInt8, UInt8), y Int8", "(y) VALUES (1)", "m[1] AS v", 0, '{"v":0}', Name("empty map")), + ("Map(UInt8, UInt8)", "VALUES (map(1,2))", "m[2] AS v", 0, '{"v":0}', Name("map has integer values")), + ("Map(UInt8, String)", "VALUES (map(1,'2'))", "m[2] AS v", 0, '{"v":""}', Name("map has string values")), + ("Map(UInt8, Array(Int8))", "VALUES (map(1,[2]))", "m[2] AS v", 0, '{"v":[]}', Name("map has array values")), +]) +def table_map_key_not_found(self, type, data, select, exitcode, message, order_by=None): + """Check values returned from a map column when key is not found. + """ + uid = getuid() + node = self.context.node + + if order_by is None: + order_by = "m" + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (m " + type + ") ENGINE = MergeTree() ORDER BY " + order_by + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data into the map column"): + node.query(f"INSERT INTO {table} {data}") + + with And("I try to read from the table"): + node.query(f"SELECT {select} FROM {table} FORMAT JSONEachRow", exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyInvalid("1.0") +) +def invalid_key(self): + """Check when key is not valid. 
+ """ + node = self.context.node + + with When("I try to use an integer key that is too large"): + node.query("SELECT map(1,2) AS m, m[256]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use an integer key that is negative when key is unsigned"): + node.query("SELECT map(1,2) AS m, m[-1]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use a string key when key is an integer"): + node.query("SELECT map(1,2) AS m, m['1']", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use an integer key when key is a string"): + r = node.query("SELECT map('1',2) AS m, m[1]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use an empty key when key is a string"): + r = node.query("SELECT map('1',2) AS m, m[]", exitcode=62, message="DB::Exception: Syntax error: failed at position") + + with When("I try to use wrong type conversion in key"): + r = node.query("SELECT map(1,2) AS m, m[toInt8('1')]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("in array of maps I try to use an integer key that is negative when key is unsigned"): + node.query("SELECT [map(1,2)] AS m, m[1][-1]", exitcode=43, message="DB::Exception: Illegal types of arguments") + + with When("I try to use a NULL key when key is not nullable"): + r = node.query("SELECT map(1,2) AS m, m[NULL] FORMAT Values") + with Then("it should return NULL"): + assert r.output == "({1:2},NULL)", error() + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval_KeyInvalid("1.0") +) +@Examples("type data select exitcode message order_by", [ + ("Map(UInt8, UInt8)", "(map(1,2))", "m[256] AS v", 0, '{"v":0}', "m", Name("key too large)")), + ("Map(UInt8, UInt8)", "(map(1,2))", "m[-1] AS v", 0, '{"v":0}', "m", Name("key is negative")), + ("Map(UInt8, UInt8)", "(map(1,2))", "m['1'] AS v", 43, "DB::Exception: Illegal types of arguments", "m", Name("string when key is integer")), + ("Map(String, UInt8)", "(map('1',2))", "m[1] AS v", 43, "DB::Exception: Illegal types of arguments", "m", Name("integer when key is string")), + ("Map(String, UInt8)", "(map('1',2))", "m[] AS v", 62, "DB::Exception: Syntax error: failed at position", "m", Name("empty when key is string")), + ("Map(UInt8, UInt8)", "(map(1,2))", "m[toInt8('1')] AS v", 0, '{"v":2}', "m", Name("wrong type conversion when key is integer")), + ("Map(String, UInt8)", "(map('1',2))", "m[toFixedString('1',1)] AS v", 0, '{"v":2}', "m", Name("wrong type conversion when key is string")), + ("Map(UInt8, UInt8)", "(map(1,2))", "m[NULL] AS v", 0, '{"v":null}', "m", Name("NULL key when key is not nullable")), + ("Array(Map(UInt8, UInt8))", "([map(1,2)])", "m[1]['1'] AS v", 43, "DB::Exception: Illegal types of arguments", "m", Name("string when key is integer in array of maps")), + ("Nested(x Map(UInt8, UInt8))", "([map(1,2)])", "m.x[1]['1'] AS v", 43, "DB::Exception: Illegal types of arguments", "m.x", Name("string when key is integer in nested map")), +]) +def table_map_invalid_key(self, type, data, select, exitcode, message, order_by="m"): + """Check selecting values from a map column using an invalid key. 
+ """ + uid = getuid() + node = self.context.node + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (m " + type + ") ENGINE = MergeTree() ORDER BY " + order_by + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data into the map column"): + node.query(f"INSERT INTO {table} VALUES {data}") + + with And("I try to read from the table"): + node.query(f"SELECT {select} FROM {table} FORMAT JSONEachRow", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Value_Retrieval("1.0") +) +@Examples("type data select filter exitcode message order_by", [ + ("Map(UInt8, UInt8)", "(map(1,1)),(map(1,2)),(map(2,3))", "m[1] AS v", "1=1 ORDER BY m[1]", 0, '{"v":0}\n{"v":1}\n{"v":2}', None, + Name("select the same key from all the rows")), + ("Map(String, String)", "(map('a','b')),(map('c','d','e','f')),(map('e','f'))", "m", "m = map('e','f','c','d')", 0, '', None, + Name("filter rows by map having different pair order")), + ("Map(String, String)", "(map('a','b')),(map('c','d','e','f')),(map('e','f'))", "m", "m = map('c','d','e','f')", 0, '{"m":{"c":"d","e":"f"}}', None, + Name("filter rows by map having the same pair order")), + ("Map(String, String)", "(map('a','b')),(map('e','f'))", "m", "m = map()", 0, '', None, + Name("filter rows by empty map")), + ("Map(String, Int8)", "(map('a',1,'b',2)),(map('a',2)),(map('b',3))", "m", "m['a'] = 1", 0, '{"m":{"a":1,"b":2}}', None, + Name("filter rows by map key value")), + ("Map(String, Int8)", "(map('a',1,'b',2)),(map('a',2)),(map('b',3))", "m", "m['a'] = 1 AND m['b'] = 2", 0, '{"m":{"a":1,"b":2}}', None, + Name("filter rows by map multiple key value combined with AND")), + ("Map(String, Int8)", "(map('a',1,'b',2)),(map('a',2)),(map('b',3))", "m", "m['a'] = 1 OR m['b'] = 3", 0, '{"m":{"a":1,"b":2}}\n{"m":{"b":3}}', None, + Name("filter rows by map multiple key value combined with OR")), + ("Map(String, Array(Int8))", "(map('a',[])),(map('b',[1])),(map('c',[2]))", "m['b'] AS v", "m['b'] IN ([1],[2])", 0, '{"v":[1]}', None, + Name("filter rows by map array value using IN")), + ("Map(String, Nullable(String))", "(map('a',NULL)),(map('a',1))", "m", "isNull(m['a']) = 1", 0, '{"m":{"a":null}}', None, + Name("select map with nullable value")) +]) +def table_map_queries(self, type, data, select, filter, exitcode, message, order_by=None): + """Check retrieving map values and using maps in queries. 
+ """ + uid = getuid() + node = self.context.node + + if order_by is None: + order_by = "m" + + with Given(f"table definition with {type}"): + sql = "CREATE TABLE {name} (m " + type + ") ENGINE = MergeTree() ORDER BY " + order_by + + with And(f"I create a table", description=sql): + table = create_table(name=uid, statement=sql) + + with When("I insert data into the map column"): + node.query(f"INSERT INTO {table} VALUES {data}") + + with And("I try to read from the table"): + node.query(f"SELECT {select} FROM {table} WHERE {filter} FORMAT JSONEachRow", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Invalid_Nullable("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Invalid_NothingNothing("1.0") +) +@Examples("type exitcode message", [ + ("Nullable(Map(String, String))", + 43, "DB::Exception: Nested type Map(String,String) cannot be inside Nullable type", + Name("nullable map")), + ("Map(Nothing, Nothing)", + 37, "DB::Exception: Column `m` with type Map(Nothing,Nothing) is not allowed in key expression, it's not comparable", + Name("map with nothing type for key and value")) +]) +def table_map_unsupported_types(self, type, exitcode, message): + """Check creating a table with unsupported map column types. + """ + uid = getuid() + node = self.context.node + + try: + with When(f"I create a table definition with {type}"): + sql = f"CREATE TABLE {uid} (m " + type + ") ENGINE = MergeTree() ORDER BY m" + node.query(sql, exitcode=exitcode, message=message) + finally: + with Finally("drop table if any"): + node.query(f"DROP TABLE IF EXISTS {uid}") + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysToMap("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysMap_Invalid("1.0") +) +@Examples("tuple type exitcode message", [ + ("([1, 2, 3], ['Ready', 'Steady', 'Go'])", "Map(UInt8, String)", + 0, "{1:'Ready',2:'Steady',3:'Go'}", Name("int -> int")), + ("([1, 2, 3], ['Ready', 'Steady', 'Go'])", "Map(String, String)", + 0, "{'1':'Ready','2':'Steady','3':'Go'}", Name("int -> string")), + ("(['1', '2', '3'], ['Ready', 'Steady', 'Go'])", "Map(UInt8, String)", + 0, "{1:'Ready',187:'Steady',143:'Go'}", Name("string -> int")), + ("([],[])", "Map(String, String)", + 0, "{}", Name("empty arrays to map str:str")), + ("([],[])", "Map(UInt8, Array(Int8))", + 0, "{}", Name("empty arrays to map uint8:array")), + ("([[1]],['hello'])", "Map(String, String)", + 0, "{'[1]':'hello'}", Name("array -> string")), + ("([(1,2),(3,4)])", "Map(UInt8, UInt8)", + 0, "{1:2,3:4}", Name("array of two tuples")), + ("([1, 2], ['Ready', 'Steady', 'Go'])", "Map(UInt8, String)", + 53, "DB::Exception: CAST AS Map can only be performed from tuple of arrays with equal sizes", + Name("unequal array sizes")), +]) +def cast_tuple_of_two_arrays_to_map(self, tuple, type, exitcode, message): + """Check casting Tuple(Array, Array) to a map type. 
+ """ + node = self.context.node + + with When("I try to cast tuple", description=tuple): + node.query(f"SELECT CAST({tuple}, '{type}') AS map", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysToMap("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_TupleOfArraysMap_Invalid("1.0") +) +@Examples("tuple type exitcode message check_insert", [ + ("(([1, 2, 3], ['Ready', 'Steady', 'Go']))", "Map(UInt8, String)", + 0, '{"m":{1:"Ready",2:"Steady",3:"Go"}}', False, Name("int -> int")), + ("(([1, 2, 3], ['Ready', 'Steady', 'Go']))", "Map(String, String)", + 0, '{"m":{"1":"Ready","2":"Steady","3":"Go"}}', False, Name("int -> string")), + ("((['1', '2', '3'], ['Ready', 'Steady', 'Go']))", "Map(UInt8, String)", + 0, '', True, Name("string -> int")), + ("(([],[]))", "Map(String, String)", + 0, '{"m":{}}', False, Name("empty arrays to map str:str")), + ("(([],[]))", "Map(UInt8, Array(Int8))", + 0, '{"m":{}}', False, Name("empty arrays to map uint8:array")), + ("(([[1]],['hello']))", "Map(String, String)", + 53, 'DB::Exception: Type mismatch in IN or VALUES section', True, Name("array -> string")), + ("(([(1,2),(3,4)]))", "Map(UInt8, UInt8)", + 0, '{"m":{1:2,3:4}}', False, Name("array of two tuples")), + ("(([1, 2], ['Ready', 'Steady', 'Go']))", "Map(UInt8, String)", + 53, "DB::Exception: CAST AS Map can only be performed from tuple of arrays with equal sizes", True, + Name("unequal array sizes")), +]) +def table_map_cast_tuple_of_arrays_to_map(self, tuple, type, exitcode, message, check_insert): + """Check converting Tuple(Array, Array) into map on insert into a map type column. + """ + table_map(type=type, data=tuple, select="*", filter="1=1", exitcode=exitcode, message=message, check_insert=check_insert) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap_Invalid("1.0") +) +@Examples("tuple type exitcode message", [ + ("([(1,2),(3,4)])", "Map(UInt8, UInt8)", 0, "{1:2,3:4}", + Name("array of two tuples")), + ("([(1,2),(3)])", "Map(UInt8, UInt8)", 130, + "DB::Exception: There is no supertype for types Tuple(UInt8, UInt8), UInt8 because some of them are Tuple and some of them are not", + Name("not a tuple")), + ("([(1,2),(3,)])", "Map(UInt8, UInt8)", 130, + "DB::Exception: There is no supertype for types Tuple(UInt8, UInt8), Tuple(UInt8) because Tuples have different sizes", + Name("invalid tuple")), +]) +def cast_array_of_two_tuples_to_map(self, tuple, type, exitcode, message): + """Check casting Array(Tuple(K,V)) to a map type. 
+ """ + node = self.context.node + + with When("I try to cast tuple", description=tuple): + node.query(f"SELECT CAST({tuple}, '{type}') AS map", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Conversion_From_ArrayOfTuplesToMap_Invalid("1.0") +) +@Examples("tuple type exitcode message check_insert", [ + ("(([(1,2),(3,4)]))", "Map(UInt8, UInt8)", 0, '{"m":{1:2,3:4}}', False, + Name("array of two tuples")), + ("(([(1,2),(3)]))", "Map(UInt8, UInt8)", 130, + "DB::Exception: There is no supertype for types Tuple(UInt8, UInt8), UInt8 because some of them are Tuple and some of them are not", True, + Name("not a tuple")), + ("(([(1,2),(3,)]))", "Map(UInt8, UInt8)", 130, + "DB::Exception: There is no supertype for types Tuple(UInt8, UInt8), Tuple(UInt8) because Tuples have different sizes", True, + Name("invalid tuple")), +]) +def table_map_cast_array_of_two_tuples_to_map(self, tuple, type, exitcode, message, check_insert): + """Check converting Array(Tuple(K,V),...) into map on insert into a map type column. + """ + table_map(type=type, data=tuple, select="*", filter="1=1", exitcode=exitcode, message=message, check_insert=check_insert) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_InlineDefinedMap("1.0") +) +def subcolumns_keys_using_inline_defined_map(self): + node = self.context.node + exitcode = 47 + message = "DB::Exception: Missing columns: 'c.keys'" + + with When("I try to access keys sub-column using an inline defined map"): + node.query("SELECT map( 'aa', 4, '44' , 5) as c, c.keys", exitcode=exitcode, message=message) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_InlineDefinedMap("1.0") +) +def subcolumns_values_using_inline_defined_map(self): + node = self.context.node + exitcode = 47 + message = "DB::Exception: Missing columns: 'c.values'" + + with When("I try to access values sub-column using an inline defined map"): + node.query("SELECT map( 'aa', 4, '44' , 5) as c, c.values", exitcode=exitcode, message=message) + +@TestOutline(Scenario) +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Keys_ArrayFunctions("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_SubColumns_Values_ArrayFunctions("1.0") +) +@Examples("type data select filter exitcode message", [ + # keys + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "m.keys AS keys", "1=1", + 0, '{"keys":["a","c"]}\n{"keys":["e"]}', Name("select keys")), + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "m.keys AS keys", "has(m.keys, 'e')", + 0, '{"keys":["e"]}', Name("filter by using keys in an array function")), + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "has(m.keys, 'e') AS r", "1=1", + 0, '{"r":0}\n{"r":1}', Name("column that uses keys in an array function")), + # values + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "m.values AS values", "1=1", + 0, '{"values":["b","d"]}\n{"values":["f"]}', Name("select values")), + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "m.values AS values", "has(m.values, 'f')", + 0, '{"values":["f"]}', Name("filter by using values in an array function")), + ("Map(String, String)", "(map('a','b','c','d')),(map('e','f'))", "has(m.values, 'f') AS r", "1=1", + 0, 
'{"r":0}\n{"r":1}', Name("column that uses values in an array function")) +]) +def subcolumns(self, type, data, select, filter, exitcode, message, order_by=None): + """Check usage of sub-columns in queries. + """ + table_map(type=type, data=data, select=select, filter=filter, exitcode=exitcode, message=message, order_by=order_by) + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Length("1.0") +) +def length(self): + """Check usage of length function with map data type. + """ + table_map(type="Map(String, String)", + data="(map('a','b','c','d')),(map('e','f'))", + select="length(m) AS len, m", + filter="length(m) = 1", + exitcode=0, message='{"len":"1","m":{"e":"f"}}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Empty("1.0") +) +def empty(self): + """Check usage of empty function with map data type. + """ + table_map(type="Map(String, String)", + data="(map('e','f'))", + select="empty(m) AS em, m", + filter="empty(m) <> 1", + exitcode=0, message='{"em":0,"m":{"e":"f"}}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_NotEmpty("1.0") +) +def notempty(self): + """Check usage of notEmpty function with map data type. + """ + table_map(type="Map(String, String)", + data="(map('e','f'))", + select="notEmpty(m) AS em, m", + filter="notEmpty(m) = 1", + exitcode=0, message='{"em":1,"m":{"e":"f"}}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapAdd("1.0") +) +def cast_from_mapadd(self): + """Check converting the result of mapAdd function to a map data type. + """ + select_map(map="CAST(mapAdd(([toUInt8(1), 2], [1, 1]), ([toUInt8(1), 2], [1, 1])), 'Map(Int8, Int8)')", output="{1:2,2:2}") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapSubstract("1.0") +) +def cast_from_mapsubstract(self): + """Check converting the result of mapSubstract function to a map data type. + """ + select_map(map="CAST(mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt32(2), 1])), 'Map(Int8, Int8)')", output="{1:-1,2:0}") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map_MapPopulateSeries("1.0") +) +def cast_from_mappopulateseries(self): + """Check converting the result of mapPopulateSeries function to a map data type. + """ + select_map(map="CAST(mapPopulateSeries([1,2,4], [11,22,44], 5), 'Map(Int8, Int8)')", output="{1:11,2:22,3:0,4:44,5:0}") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapContains("1.0") +) +def mapcontains(self): + """Check usages of mapContains function with map data type. 
+ """ + node = self.context.node + + with Example("key in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="mapContains(m, 'a')", + exitcode=0, message='{"m":{"a":"b"}}') + + with Example("key not in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="NOT mapContains(m, 'a')", + exitcode=0, message='{"m":{"e":"f"}}') + + with Example("null key not in map"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="mapContains(m, NULL)", + exitcode=0, message='') + + with Example("null key in map"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b')),(map(NULL,'c'))", + select="m", + filter="mapContains(m, NULL)", + exitcode=0, message='{null:"c"}') + + with Example("select nullable key"): + node.query("SELECT map(NULL, 1, 2, 3) AS m, mapContains(m, toNullable(toUInt8(2)))", exitcode=0, message="{2:3}") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapKeys("1.0") +) +def mapkeys(self): + """Check usages of mapKeys function with map data type. + """ + with Example("key in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="has(mapKeys(m), 'a')", + exitcode=0, message='{"m":{"a":"b"}}') + + with Example("key not in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="NOT has(mapKeys(m), 'a')", + exitcode=0, message='{"m":{"e":"f"}}') + + with Example("null key not in map"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="has(mapKeys(m), NULL)", + exitcode=0, message='') + + with Example("null key in map"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b')),(map(NULL,'c'))", + select="m", + filter="has(mapKeys(m), NULL)", + exitcode=0, message='{"m":{null:"c"}}') + + with Example("select keys from column"): + table_map(type="Map(Nullable(String), String)", + data="(map('e','f')),(map('a','b')),(map(NULL,'c'))", + select="mapKeys(m) AS keys", + filter="1 = 1", + exitcode=0, message='{"keys":["a"]}\n{"keys":["e"]}\n{"keys":[null]}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_MapValues("1.0") +) +def mapvalues(self): + """Check usages of mapValues function with map data type. 
+ """ + with Example("value in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="has(mapValues(m), 'b')", + exitcode=0, message='{"m":{"a":"b"}}') + + with Example("value not in map"): + table_map(type="Map(String, String)", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="NOT has(mapValues(m), 'b')", + exitcode=0, message='{"m":{"e":"f"}}') + + with Example("null value not in map"): + table_map(type="Map(String, Nullable(String))", + data="(map('e','f')),(map('a','b'))", + select="m", + filter="has(mapValues(m), NULL)", + exitcode=0, message='') + + with Example("null value in map"): + table_map(type="Map(String, Nullable(String))", + data="(map('e','f')),(map('a','b')),(map('c',NULL))", + select="m", + filter="has(mapValues(m), NULL)", + exitcode=0, message='{"m":{"c":null}}') + + with Example("select values from column"): + table_map(type="Map(String, Nullable(String))", + data="(map('e','f')),(map('a','b')),(map('c',NULL))", + select="mapValues(m) AS values", + filter="1 = 1", + exitcode=0, message='{"values":["b"]}\n{"values":[null]}\n{"values":["f"]}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Functions_InlineDefinedMap("1.0") +) +def functions_with_inline_defined_map(self): + """Check that a map defined inline inside the select statement + can be used with functions that work with maps. + """ + with Example("mapKeys"): + select_map(map="map(1,2,3,4) as map, mapKeys(map) AS keys", output="{1:2,3:4}\t[1,3]") + + with Example("mapValyes"): + select_map(map="map(1,2,3,4) as map, mapValues(map) AS values", output="{1:2,3:4}\t[2,4]") + + with Example("mapContains"): + select_map(map="map(1,2,3,4) as map, mapContains(map, 1) AS contains", output="{1:2,3:4}\t1") + +@TestScenario +def empty_map(self): + """Check creating of an empty map `{}` using the map() function + when inserting data into a map type table column. + """ + table_map(type="Map(String, String)", + data="(map('e','f')),(map())", + select="m", + filter="1=1", + exitcode=0, message='{"m":{}}\n{"m":{"e":"f"}}') + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_TupleOfArrays("1.0") +) +def performance_vs_two_tuple_of_arrays(self, len=10, rows=6000000): + """Check performance of using map data type vs Tuple(Array, Array). 
+ """ + uid = getuid() + node = self.context.node + + with Given(f"table with Tuple(Array(Int8),Array(Int8))"): + sql = "CREATE TABLE {name} (pairs Tuple(Array(Int8),Array(Int8))) ENGINE = MergeTree() ORDER BY pairs" + tuple_table = create_table(name=f"tuple_{uid}", statement=sql) + + with And(f"table with Map(Int8,Int8)"): + sql = "CREATE TABLE {name} (pairs Map(Int8,Int8)) ENGINE = MergeTree() ORDER BY pairs" + map_table = create_table(name=f"map_{uid}", statement=sql) + + with When("I insert data into table with tuples"): + keys = range(len) + values = range(len) + start_time = time.time() + node.query(f"INSERT INTO {tuple_table} SELECT ({keys},{values}) FROM numbers({rows})") + tuple_insert_time = time.time() - start_time + metric("tuple insert time", tuple_insert_time, "sec") + + with When("I insert data into table with a map"): + keys = range(len) + values = range(len) + start_time = time.time() + node.query(f"INSERT INTO {map_table} SELECT ({keys},{values}) FROM numbers({rows})") + map_insert_time = time.time() - start_time + metric("map insert time", map_insert_time, "sec") + + with And("I retrieve particular key value from table with tuples"): + start_time = time.time() + node.query(f"SELECT sum(arrayFirst((v, k) -> k = {len-1}, tupleElement(pairs, 2), tupleElement(pairs, 1))) AS sum FROM {tuple_table}", + exitcode=0, message=f"{rows*(len-1)}") + tuple_select_time = time.time() - start_time + metric("tuple(array, array) select time", tuple_select_time, "sec") + + with And("I retrieve particular key value from table with map"): + start_time = time.time() + node.query(f"SELECT sum(pairs[{len-1}]) AS sum FROM {map_table}", + exitcode=0, message=f"{rows*(len-1)}") + map_select_time = time.time() - start_time + metric("map select time", map_select_time, "sec") + + metric("insert difference", (1 - map_insert_time/tuple_insert_time) * 100, "%") + metric("select difference", (1 - map_select_time/tuple_select_time) * 100, "%") + +@TestScenario +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType_Performance_Vs_ArrayOfTuples("1.0") +) +def performance_vs_array_of_tuples(self, len=10, rows=6000000): + """Check performance of using map data type vs Array(Tuple(K,V)). 
+ """ + uid = getuid() + node = self.context.node + + with Given(f"table with Array(Tuple(K,V))"): + sql = "CREATE TABLE {name} (pairs Array(Tuple(Int8, Int8))) ENGINE = MergeTree() ORDER BY pairs" + array_table = create_table(name=f"tuple_{uid}", statement=sql) + + with And(f"table with Map(Int8,Int8)"): + sql = "CREATE TABLE {name} (pairs Map(Int8,Int8)) ENGINE = MergeTree() ORDER BY pairs" + map_table = create_table(name=f"map_{uid}", statement=sql) + + with When("I insert data into table with an array of tuples"): + pairs = list(zip(range(len),range(len))) + start_time = time.time() + node.query(f"INSERT INTO {array_table} SELECT ({pairs}) FROM numbers({rows})") + array_insert_time = time.time() - start_time + metric("array insert time", array_insert_time, "sec") + + with When("I insert data into table with a map"): + keys = range(len) + values = range(len) + start_time = time.time() + node.query(f"INSERT INTO {map_table} SELECT ({keys},{values}) FROM numbers({rows})") + map_insert_time = time.time() - start_time + metric("map insert time", map_insert_time, "sec") + + with And("I retrieve particular key value from table with an array of tuples"): + start_time = time.time() + node.query(f"SELECT sum(arrayFirst((v) -> v.1 = {len-1}, pairs).2) AS sum FROM {array_table}", + exitcode=0, message=f"{rows*(len-1)}") + array_select_time = time.time() - start_time + metric("array(tuple(k,v)) select time", array_select_time, "sec") + + with And("I retrieve particular key value from table with map"): + start_time = time.time() + node.query(f"SELECT sum(pairs[{len-1}]) AS sum FROM {map_table}", + exitcode=0, message=f"{rows*(len-1)}") + map_select_time = time.time() - start_time + metric("map select time", map_select_time, "sec") + + metric("insert difference", (1 - map_insert_time/array_insert_time) * 100, "%") + metric("select difference", (1 - map_select_time/array_select_time) * 100, "%") + +@TestScenario +def performance(self, len=10, rows=6000000): + """Check insert and select performance of using map data type. 
+ """ + uid = getuid() + node = self.context.node + + with Given("table with Map(Int8,Int8)"): + sql = "CREATE TABLE {name} (pairs Map(Int8,Int8)) ENGINE = MergeTree() ORDER BY pairs" + map_table = create_table(name=f"map_{uid}", statement=sql) + + with When("I insert data into table with a map"): + values = [x for pair in zip(range(len),range(len)) for x in pair] + start_time = time.time() + node.query(f"INSERT INTO {map_table} SELECT (map({','.join([str(v) for v in values])})) FROM numbers({rows})") + map_insert_time = time.time() - start_time + metric("map insert time", map_insert_time, "sec") + + with And("I retrieve particular key value from table with map"): + start_time = time.time() + node.query(f"SELECT sum(pairs[{len-1}]) AS sum FROM {map_table}", + exitcode=0, message=f"{rows*(len-1)}") + map_select_time = time.time() - start_time + metric("map select time", map_select_time, "sec") + +# FIXME: add tests for different table engines + +@TestFeature +@Name("tests") +@Requirements( + RQ_SRS_018_ClickHouse_Map_DataType("1.0"), + RQ_SRS_018_ClickHouse_Map_DataType_Functions_Map("1.0") +) +def feature(self, node="clickhouse1"): + self.context.node = self.context.cluster.node(node) + + with Given("I allow experimental map type"): + allow_experimental_map_type() + + for scenario in loads(current_module(), Scenario): + scenario() diff --git a/tests/testflows/rbac/tests/privileges/grant_option.py b/tests/testflows/rbac/tests/privileges/grant_option.py index f337aec2619..bc8b73eb32f 100644 --- a/tests/testflows/rbac/tests/privileges/grant_option.py +++ b/tests/testflows/rbac/tests/privileges/grant_option.py @@ -89,7 +89,7 @@ def grant_option_check(grant_option_target, grant_target, user_name, table_type, @Examples("privilege", [ ("ALTER MOVE PARTITION",), ("ALTER MOVE PART",), ("MOVE PARTITION",), ("MOVE PART",), ("ALTER DELETE",), ("DELETE",), - ("ALTER FETCH PARTITION",), ("FETCH PARTITION",), + ("ALTER FETCH PARTITION",), ("ALTER FETCH PART",), ("FETCH PARTITION",), ("ALTER FREEZE PARTITION",), ("FREEZE PARTITION",), ("ALTER UPDATE",), ("UPDATE",), ("ALTER ADD COLUMN",), ("ADD COLUMN",), diff --git a/tests/testflows/regression.py b/tests/testflows/regression.py index 45f1ed64a6c..13a24f97f9f 100755 --- a/tests/testflows/regression.py +++ b/tests/testflows/regression.py @@ -18,6 +18,7 @@ def regression(self, local, clickhouse_binary_path, stress=None, parallel=None): # Feature(test=load("ldap.regression", "regression"))(**args) # Feature(test=load("rbac.regression", "regression"))(**args) # Feature(test=load("aes_encryption.regression", "regression"))(**args) + Feature(test=load("map_type.regression", "regression"))(**args) # Feature(test=load("kerberos.regression", "regression"))(**args) if main(): diff --git a/utils/CMakeLists.txt b/utils/CMakeLists.txt index afeda751ea5..5b98e28c0c8 100644 --- a/utils/CMakeLists.txt +++ b/utils/CMakeLists.txt @@ -32,12 +32,14 @@ if (NOT DEFINED ENABLE_UTILS OR ENABLE_UTILS) add_subdirectory (db-generator) add_subdirectory (wal-dump) add_subdirectory (check-mysql-binlog) + add_subdirectory (keeper-bench) if (USE_NURAFT) add_subdirectory (keeper-data-dumper) endif () - if (NOT OS_DARWIN) + # memcpy_jart.S contains position dependent code + if (NOT CMAKE_POSITION_INDEPENDENT_CODE AND NOT OS_DARWIN) add_subdirectory (memcpy-bench) endif () endif () diff --git a/utils/github/backport.py b/utils/github/backport.py index 7fddbbee241..589124324b1 100644 --- a/utils/github/backport.py +++ b/utils/github/backport.py @@ -25,24 +25,23 @@ class Backport: def 
getPullRequests(self, from_commit): return self._gh.get_pull_requests(from_commit) - def getBranchesWithLTS(self): - branches = [] - for pull_request in self._gh.find_pull_requests("release-lts"): + def getBranchesWithRelease(self): + branches = set() + for pull_request in self._gh.find_pull_requests("release"): if not pull_request['merged'] and not pull_request['closed']: - branches.append(pull_request['headRefName']) + branches.add(pull_request['headRefName']) return branches - def execute(self, repo, upstream, until_commit, number, run_cherrypick, find_lts=False): + def execute(self, repo, upstream, until_commit, run_cherrypick): repo = LocalRepo(repo, upstream, self.default_branch_name) all_branches = repo.get_release_branches() # [(branch_name, base_commit)] - last_branches = set([branch[0] for branch in all_branches[-number:]]) - lts_branches = set(self.getBranchesWithLTS() if find_lts else []) + release_branches = self.getBranchesWithRelease() branches = [] # iterate over all branches to preserve their precedence. for branch in all_branches: - if branch[0] in last_branches or branch[0] in lts_branches: + if branch[0] in release_branches: branches.append(branch) if not branches: @@ -76,7 +75,7 @@ class Backport: # First pass. Find all must-backports for label in pr['labels']['nodes']: - if label['name'] == 'pr-bugfix': + if label['name'] == 'pr-bugfix' or label['name'] == 'pr-must-backport': backport_map[pr['number']] = branch_set.copy() continue matched = RE_MUST_BACKPORT.match(label['name']) @@ -115,8 +114,6 @@ if __name__ == "__main__": parser.add_argument('--token', type=str, required=True, help='token for Github access') parser.add_argument('--repo', type=str, required=True, help='path to full repository', metavar='PATH') parser.add_argument('--til', type=str, help='check PRs from HEAD til this commit', metavar='COMMIT') - parser.add_argument('-n', type=int, dest='number', help='number of last release branches to consider') - parser.add_argument('--lts', action='store_true', help='consider branches with LTS') parser.add_argument('--dry-run', action='store_true', help='do not create or merge any PRs', default=False) parser.add_argument('--verbose', '-v', action='store_true', help='more verbose output', default=False) parser.add_argument('--upstream', '-u', type=str, help='remote name of upstream in repository', default='origin') @@ -129,4 +126,4 @@ if __name__ == "__main__": cherrypick_run = lambda token, pr, branch: CherryPick(token, 'ClickHouse', 'ClickHouse', 'core', pr, branch).execute(args.repo, args.dry_run) bp = Backport(args.token, 'ClickHouse', 'ClickHouse', 'core') - bp.execute(args.repo, args.upstream, args.til, args.number, cherrypick_run, args.lts) + bp.execute(args.repo, args.upstream, args.til, cherrypick_run) diff --git a/utils/keeper-bench/CMakeLists.txt b/utils/keeper-bench/CMakeLists.txt new file mode 100644 index 00000000000..2f12194d1b7 --- /dev/null +++ b/utils/keeper-bench/CMakeLists.txt @@ -0,0 +1,2 @@ +add_executable(keeper-bench Generator.cpp Runner.cpp Stats.cpp main.cpp) +target_link_libraries(keeper-bench PRIVATE clickhouse_common_zookeeper) diff --git a/utils/keeper-bench/Generator.cpp b/utils/keeper-bench/Generator.cpp new file mode 100644 index 00000000000..852de07f2e1 --- /dev/null +++ b/utils/keeper-bench/Generator.cpp @@ -0,0 +1,238 @@ +#include "Generator.h" +#include +#include + +using namespace Coordination; +using namespace zkutil; + +namespace DB +{ +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} +} + +namespace +{ 
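+/// Builds a random alphanumeric string of the requested length, used for znode names and payloads.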
+std::string generateRandomString(size_t length)
+{
+    if (length == 0)
+        return "";
+
+    static const auto & chars = "0123456789"
+        "abcdefghijklmnopqrstuvwxyz"
+        "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
+
+    static pcg64 rng(randomSeed());
+    static std::uniform_int_distribution<size_t> pick(0, sizeof(chars) - 2);
+
+    std::string s;
+
+    s.reserve(length);
+
+    while (length--)
+        s += chars[pick(rng)];
+
+    return s;
+}
+}
+
+std::string generateRandomPath(const std::string & prefix, size_t length)
+{
+    return std::filesystem::path(prefix) / generateRandomString(length);
+}
+
+std::string generateRandomData(size_t size)
+{
+    return generateRandomString(size);
+}
+
+void CreateRequestGenerator::startup(Coordination::ZooKeeper & zookeeper)
+{
+    auto promise = std::make_shared<std::promise<void>>();
+    auto future = promise->get_future();
+    auto create_callback = [promise] (const CreateResponse & response)
+    {
+        if (response.error != Coordination::Error::ZOK)
+            promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error)));
+        else
+            promise->set_value();
+    };
+    zookeeper.create(path_prefix, "", false, false, default_acls, create_callback);
+    future.get();
+}
+
+ZooKeeperRequestPtr CreateRequestGenerator::generate()
+{
+    auto request = std::make_shared<ZooKeeperCreateRequest>();
+    request->acls = default_acls;
+    size_t plength = 5;
+    if (path_length)
+        plength = *path_length;
+    auto path_candidate = generateRandomPath(path_prefix, plength);
+
+    while (paths_created.count(path_candidate))
+        path_candidate = generateRandomPath(path_prefix, plength);
+
+    paths_created.insert(path_candidate);
+
+    request->path = path_candidate;
+    if (data_size)
+        request->data = generateRandomData(*data_size);
+
+    return request;
+}
+
+
+void GetRequestGenerator::startup(Coordination::ZooKeeper & zookeeper)
+{
+    auto promise = std::make_shared<std::promise<void>>();
+    auto future = promise->get_future();
+    auto create_callback = [promise] (const CreateResponse & response)
+    {
+        if (response.error != Coordination::Error::ZOK)
+            promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error)));
+        else
+            promise->set_value();
+    };
+    zookeeper.create(path_prefix, "", false, false, default_acls, create_callback);
+    future.get();
+    size_t total_nodes = 1;
+    if (num_nodes)
+        total_nodes = *num_nodes;
+
+    for (size_t i = 0; i < total_nodes; ++i)
+    {
+        auto path = generateRandomPath(path_prefix, 5);
+        while (std::find(paths_to_get.begin(), paths_to_get.end(), path) != paths_to_get.end())
+            path = generateRandomPath(path_prefix, 5);
+
+        auto create_promise = std::make_shared<std::promise<void>>();
+        auto create_future = create_promise->get_future();
+        auto callback = [create_promise] (const CreateResponse & response)
+        {
+            if (response.error != Coordination::Error::ZOK)
+                create_promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error)));
+            else
+                create_promise->set_value();
+        };
+        std::string data;
+        if (nodes_data_size)
+            data = generateRandomString(*nodes_data_size);
+
+        zookeeper.create(path, data, false, false, default_acls, callback);
+        create_future.get();
+        paths_to_get.push_back(path);
+    }
+}
+
+Coordination::ZooKeeperRequestPtr GetRequestGenerator::generate()
+{
+    auto request = std::make_shared<ZooKeeperGetRequest>();
+
+    size_t path_index = distribution(rng);
+    request->path = paths_to_get[path_index];
+    return request;
+}
+
+void ListRequestGenerator::startup(Coordination::ZooKeeper & zookeeper)
+{
+    auto promise = std::make_shared<std::promise<void>>();
+    auto future = promise->get_future();
+    auto create_callback = [promise] (const CreateResponse & response)
+    {
+        if (response.error != Coordination::Error::ZOK)
+            promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error)));
+        else
+            promise->set_value();
+    };
+    zookeeper.create(path_prefix, "", false, false, default_acls, create_callback);
+    future.get();
+
+    size_t total_nodes = 1;
+    if (num_nodes)
+        total_nodes = *num_nodes;
+
+    size_t path_length = 5;
+    if (paths_length)
+        path_length = *paths_length;
+
+    for (size_t i = 0; i < total_nodes; ++i)
+    {
+        auto path = generateRandomPath(path_prefix, path_length);
+
+        auto create_promise = std::make_shared<std::promise<void>>();
+        auto create_future = create_promise->get_future();
+        auto callback = [create_promise] (const CreateResponse & response)
+        {
+            if (response.error != Coordination::Error::ZOK)
+                create_promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error)));
+            else
+                create_promise->set_value();
+        };
+        zookeeper.create(path, "", false, false, default_acls, callback);
+        create_future.get();
+    }
+}
+
+Coordination::ZooKeeperRequestPtr ListRequestGenerator::generate()
+{
+    auto request = std::make_shared<ZooKeeperListRequest>();
+    request->path = path_prefix;
+    return request;
+}
+
+std::unique_ptr<IGenerator> getGenerator(const std::string & name)
+{
+    if (name == "create_no_data")
+    {
+        return std::make_unique<CreateRequestGenerator>();
+    }
+    else if (name == "create_small_data")
+    {
+        return std::make_unique<CreateRequestGenerator>("/create_generator", 5, 32);
+    }
+    else if (name == "create_medium_data")
+    {
+        return std::make_unique<CreateRequestGenerator>("/create_generator", 5, 1024);
+    }
+    else if (name == "create_big_data")
+    {
+        return std::make_unique<CreateRequestGenerator>("/create_generator", 5, 512 * 1024);
+    }
+    else if (name == "get_no_data")
+    {
+        return std::make_unique<GetRequestGenerator>("/get_generator", 10, 0);
+    }
+    else if (name == "get_small_data")
+    {
+        return std::make_unique<GetRequestGenerator>("/get_generator", 10, 32);
+    }
+    else if (name == "get_medium_data")
+    {
+        return std::make_unique<GetRequestGenerator>("/get_generator", 10, 1024);
+    }
+    else if (name == "get_big_data")
+    {
+        return std::make_unique<GetRequestGenerator>("/get_generator", 10, 512 * 1024);
+    }
+    else if (name == "list_no_nodes")
+    {
+        return std::make_unique<ListRequestGenerator>("/list_generator", 0, 1);
+    }
+    else if (name == "list_few_nodes")
+    {
+        return std::make_unique<ListRequestGenerator>("/list_generator", 10, 5);
+    }
+    else if (name == "list_medium_nodes")
+    {
+        return std::make_unique<ListRequestGenerator>("/list_generator", 1000, 5);
+    }
+    else if (name == "list_a_lot_nodes")
+    {
+        return std::make_unique<ListRequestGenerator>("/list_generator", 100000, 5);
+    }
+
+    throw DB::Exception(DB::ErrorCodes::LOGICAL_ERROR, "Unknown generator {}", name);
+}
diff --git a/utils/keeper-bench/Generator.h b/utils/keeper-bench/Generator.h
new file mode 100644
index 00000000000..d6cc0eec335
--- /dev/null
+++ b/utils/keeper-bench/Generator.h
@@ -0,0 +1,107 @@
+#pragma once
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+
+std::string generateRandomPath(const std::string & prefix, size_t length = 5);
+
+std::string generateRandomData(size_t size);
+
+class IGenerator
+{
+public:
+    IGenerator()
+    {
+        Coordination::ACL acl;
+        acl.permissions = Coordination::ACL::All;
+        acl.scheme = "world";
+        acl.id = "anyone";
+        default_acls.emplace_back(std::move(acl));
+    }
+    virtual void startup(Coordination::ZooKeeper & /*zookeeper*/) {}
+    virtual Coordination::ZooKeeperRequestPtr generate() = 0;
+
+    virtual ~IGenerator() = default;
+
+    Coordination::ACLs default_acls;
+
+};
+
+class CreateRequestGenerator final : public IGenerator
+{
+public:
+    explicit CreateRequestGenerator(
+        std::string path_prefix_ = "/create_generator",
+        std::optional<size_t> path_length_ = std::nullopt,
+        std::optional<size_t> data_size_ = std::nullopt)
+        : path_prefix(path_prefix_)
+        , path_length(path_length_)
+        , data_size(data_size_)
+    {}
+
+    void startup(Coordination::ZooKeeper & zookeeper) override;
+    Coordination::ZooKeeperRequestPtr generate() override;
+
+private:
+    std::string path_prefix;
+    std::optional<size_t> path_length;
+    std::optional<size_t> data_size;
+    std::unordered_set<std::string> paths_created;
+};
+
+
+class GetRequestGenerator final : public IGenerator
+{
+public:
+    explicit GetRequestGenerator(
+        std::string path_prefix_ = "/get_generator",
+        std::optional<size_t> num_nodes_ = std::nullopt,
+        std::optional<size_t> nodes_data_size_ = std::nullopt)
+        : path_prefix(path_prefix_)
+        , num_nodes(num_nodes_)
+        , nodes_data_size(nodes_data_size_)
+        , rng(randomSeed())
+        , distribution(0, num_nodes ? *num_nodes - 1 : 0)
+    {}
+
+    void startup(Coordination::ZooKeeper & zookeeper) override;
+    Coordination::ZooKeeperRequestPtr generate() override;
+
+private:
+    std::string path_prefix;
+    std::optional<size_t> num_nodes;
+    std::optional<size_t> nodes_data_size;
+    std::vector<std::string> paths_to_get;
+
+    pcg64 rng;
+    std::uniform_int_distribution<size_t> distribution;
+};
+
+class ListRequestGenerator final : public IGenerator
+{
+public:
+    explicit ListRequestGenerator(
+        std::string path_prefix_ = "/list_generator",
+        std::optional<size_t> num_nodes_ = std::nullopt,
+        std::optional<size_t> paths_length_ = std::nullopt)
+        : path_prefix(path_prefix_)
+        , num_nodes(num_nodes_)
+        , paths_length(paths_length_)
+    {}
+
+    void startup(Coordination::ZooKeeper & zookeeper) override;
+    Coordination::ZooKeeperRequestPtr generate() override;
+
+private:
+    std::string path_prefix;
+    std::optional<size_t> num_nodes;
+    std::optional<size_t> paths_length;
+};
+
+std::unique_ptr<IGenerator> getGenerator(const std::string & name);
diff --git a/utils/keeper-bench/Runner.cpp b/utils/keeper-bench/Runner.cpp
new file mode 100644
index 00000000000..d3f51fb2356
--- /dev/null
+++ b/utils/keeper-bench/Runner.cpp
@@ -0,0 +1,188 @@
+#include "Runner.h"
+
+namespace DB
+{
+namespace ErrorCodes
+{
+    extern const int CANNOT_BLOCK_SIGNAL;
+}
+}
+
+void Runner::thread(std::vector<std::shared_ptr<Coordination::ZooKeeper>> & zookeepers)
+{
+    Coordination::ZooKeeperRequestPtr request;
+    /// Randomly choosing connection index
+    pcg64 rng(randomSeed());
+    std::uniform_int_distribution<size_t> distribution(0, zookeepers.size() - 1);
+
+    /// In these threads we do not accept INT signal.
+ sigset_t sig_set; + if (sigemptyset(&sig_set) + || sigaddset(&sig_set, SIGINT) + || pthread_sigmask(SIG_BLOCK, &sig_set, nullptr)) + { + DB::throwFromErrno("Cannot block signal.", DB::ErrorCodes::CANNOT_BLOCK_SIGNAL); + } + + while (true) + { + bool extracted = false; + + while (!extracted) + { + extracted = queue.tryPop(request, 100); + + if (shutdown + || (max_iterations && requests_executed >= max_iterations)) + { + return; + } + } + + const auto connection_index = distribution(rng); + auto & zk = zookeepers[connection_index]; + + auto promise = std::make_shared>(); + auto future = promise->get_future(); + Coordination::ResponseCallback callback = [promise](const Coordination::Response & response) + { + if (response.error != Coordination::Error::ZOK) + promise->set_exception(std::make_exception_ptr(zkutil::KeeperException(response.error))); + else + promise->set_value(response.bytesSize()); + }; + + Stopwatch watch; + + zk->executeGenericRequest(request, callback); + + try + { + auto response_size = future.get(); + double seconds = watch.elapsedSeconds(); + + std::lock_guard lock(mutex); + + if (request->isReadRequest()) + info->addRead(seconds, 1, request->bytesSize() + response_size); + else + info->addWrite(seconds, 1, request->bytesSize() + response_size); + } + catch (...) + { + if (!continue_on_error) + { + shutdown = true; + throw; + } + std::cerr << DB::getCurrentExceptionMessage(true, true /*check embedded stack trace*/) << std::endl; + } + + ++requests_executed; + } +} + +bool Runner::tryPushRequestInteractively(const Coordination::ZooKeeperRequestPtr & request, DB::InterruptListener & interrupt_listener) +{ + bool inserted = false; + + while (!inserted) + { + inserted = queue.tryPush(request, 100); + + if (shutdown) + { + /// An exception occurred in a worker + return false; + } + + if (max_time > 0 && total_watch.elapsedSeconds() >= max_time) + { + std::cout << "Stopping launch of queries. Requested time limit is exhausted.\n"; + return false; + } + + if (interrupt_listener.check()) + { + std::cout << "Stopping launch of queries. SIGINT received." << std::endl; + return false; + } + + if (delay > 0 && delay_watch.elapsedSeconds() > delay) + { + printNumberOfRequestsExecuted(requests_executed); + + std::lock_guard lock(mutex); + report(info, concurrency); + delay_watch.restart(); + } + } + + return true; +} + + +void Runner::runBenchmark() +{ + auto aux_connections = getConnections(); + + std::cerr << "Preparing to run\n"; + generator->startup(*aux_connections[0]); + std::cerr << "Prepared\n"; + try + { + for (size_t i = 0; i < concurrency; ++i) + { + auto connections = getConnections(); + pool.scheduleOrThrowOnError([this, connections]() mutable { thread(connections); }); + } + } + catch (...) 
+ { + pool.wait(); + throw; + } + + DB::InterruptListener interrupt_listener; + delay_watch.restart(); + + /// Push queries into queue + for (size_t i = 0; !max_iterations || i < max_iterations; ++i) + { + if (!tryPushRequestInteractively(generator->generate(), interrupt_listener)) + { + shutdown = true; + break; + } + } + + pool.wait(); + total_watch.stop(); + + printNumberOfRequestsExecuted(requests_executed); + + std::lock_guard lock(mutex); + report(info, concurrency); +} + + +std::vector> Runner::getConnections() +{ + std::vector> zookeepers; + for (const auto & host_string : hosts_strings) + { + Coordination::ZooKeeper::Node node{Poco::Net::SocketAddress{host_string}, false}; + std::vector nodes; + nodes.push_back(node); + zookeepers.emplace_back(std::make_shared( + nodes, + "", /*chroot*/ + "", /*identity type*/ + "", /*identity*/ + Poco::Timespan(0, 30000 * 1000), + Poco::Timespan(0, 1000 * 1000), + Poco::Timespan(0, 10000 * 1000))); + } + + return zookeepers; +} diff --git a/utils/keeper-bench/Runner.h b/utils/keeper-bench/Runner.h new file mode 100644 index 00000000000..bb83e790214 --- /dev/null +++ b/utils/keeper-bench/Runner.h @@ -0,0 +1,79 @@ +#pragma once +#include +#include "Generator.h" +#include +#include +#include +#include +#include +#include +#include + +#include +#include "Stats.h" + +using Ports = std::vector; +using Strings = std::vector; + +class Runner +{ +public: + Runner( + size_t concurrency_, + const std::string & generator_name, + const Strings & hosts_strings_, + double max_time_, + double delay_, + bool continue_on_error_, + size_t max_iterations_) + : concurrency(concurrency_) + , pool(concurrency) + , hosts_strings(hosts_strings_) + , generator(getGenerator(generator_name)) + , max_time(max_time_) + , delay(delay_) + , continue_on_error(continue_on_error_) + , max_iterations(max_iterations_) + , info(std::make_shared()) + , queue(concurrency) + { + } + + void thread(std::vector> & zookeepers); + + void printNumberOfRequestsExecuted(size_t num) + { + std::cerr << "Requests executed: " << num << ".\n"; + } + + bool tryPushRequestInteractively(const Coordination::ZooKeeperRequestPtr & request, DB::InterruptListener & interrupt_listener); + + void runBenchmark(); + + +private: + + size_t concurrency = 1; + + ThreadPool pool; + Strings hosts_strings; + std::unique_ptr generator; + double max_time = 0; + double delay = 1; + bool continue_on_error = false; + std::atomic max_iterations = 0; + std::atomic requests_executed = 0; + std::atomic shutdown = false; + + std::shared_ptr info; + + Stopwatch total_watch; + Stopwatch delay_watch; + + std::mutex mutex; + + using Queue = ConcurrentBoundedQueue; + Queue queue; + + std::vector> getConnections(); +}; diff --git a/utils/keeper-bench/Stats.cpp b/utils/keeper-bench/Stats.cpp new file mode 100644 index 00000000000..1f8b02ed09d --- /dev/null +++ b/utils/keeper-bench/Stats.cpp @@ -0,0 +1,67 @@ +#include "Stats.h" +#include + +void report(std::shared_ptr & info, size_t concurrency) +{ + std::cerr << "\n"; + + /// Avoid zeros, nans or exceptions + if (0 == info->read_requests && 0 == info->write_requests) + return; + + double read_seconds = info->read_work_time / concurrency; + double write_seconds = info->write_work_time / concurrency; + + std::cerr << "read requests " << info->read_requests << ", write requests " << info->write_requests << ", "; + if (info->errors) + { + std::cerr << "errors " << info->errors << ", "; + } + if (0 != info->read_requests) + { + std::cerr + << "Read RPS: " << (info->read_requests / 
read_seconds) << ", " + << "Read MiB/s: " << (info->requests_read_bytes / read_seconds / 1048576); + if (0 != info->write_requests) + std::cerr << ", "; + } + if (0 != info->write_requests) + { + std::cerr + << "Write RPS: " << (info->write_requests / write_seconds) << ", " + << "Write MiB/s: " << (info->requests_write_bytes / write_seconds / 1048576) << ". " + << "\n"; + } + std::cerr << "\n"; + + auto print_percentile = [&](double percent, Stats::Sampler & sampler) + { + std::cerr << percent << "%\t\t"; + std::cerr << sampler.quantileNearest(percent / 100.0) << " sec.\t"; + std::cerr << "\n"; + }; + + if (0 != info->read_requests) + { + std::cerr << "Read sampler:\n"; + for (int percent = 0; percent <= 90; percent += 10) + print_percentile(percent, info->read_sampler); + + print_percentile(95, info->read_sampler); + print_percentile(99, info->read_sampler); + print_percentile(99.9, info->read_sampler); + print_percentile(99.99, info->read_sampler); + } + + if (0 != info->write_requests) + { + std::cerr << "Write sampler:\n"; + for (int percent = 0; percent <= 90; percent += 10) + print_percentile(percent, info->write_sampler); + + print_percentile(95, info->write_sampler); + print_percentile(99, info->write_sampler); + print_percentile(99.9, info->write_sampler); + print_percentile(99.99, info->write_sampler); + } +} diff --git a/utils/keeper-bench/Stats.h b/utils/keeper-bench/Stats.h new file mode 100644 index 00000000000..1b9a31bb734 --- /dev/null +++ b/utils/keeper-bench/Stats.h @@ -0,0 +1,52 @@ +#pragma once + +#include +#include + +#include + +struct Stats +{ + std::atomic read_requests{0}; + std::atomic write_requests{0}; + size_t errors = 0; + size_t requests_write_bytes = 0; + size_t requests_read_bytes = 0; + double read_work_time = 0; + double write_work_time = 0; + + using Sampler = ReservoirSampler; + Sampler read_sampler {1 << 16}; + Sampler write_sampler {1 << 16}; + + void addRead(double seconds, size_t requests_inc, size_t bytes_inc) + { + read_work_time += seconds; + read_requests += requests_inc; + requests_read_bytes += bytes_inc; + read_sampler.insert(seconds); + } + + void addWrite(double seconds, size_t requests_inc, size_t bytes_inc) + { + write_work_time += seconds; + write_requests += requests_inc; + requests_write_bytes += bytes_inc; + write_sampler.insert(seconds); + } + + void clear() + { + read_requests = 0; + write_requests = 0; + read_work_time = 0; + write_work_time = 0; + requests_read_bytes = 0; + requests_write_bytes = 0; + read_sampler.clear(); + write_sampler.clear(); + } +}; + + +void report(std::shared_ptr & info, size_t concurrency); diff --git a/utils/keeper-bench/main.cpp b/utils/keeper-bench/main.cpp new file mode 100644 index 00000000000..378d7c2f6e4 --- /dev/null +++ b/utils/keeper-bench/main.cpp @@ -0,0 +1,61 @@ +#include +#include +#include "Runner.h" +#include "Stats.h" +#include "Generator.h" +#include +#include + +using namespace std; + +int main(int argc, char *argv[]) +{ + + bool print_stacktrace = true; + + try + { + using boost::program_options::value; + + boost::program_options::options_description desc = createOptionsDescription("Allowed options", getTerminalWidth()); + desc.add_options() + ("help", "produce help message") + ("generator", value()->default_value("create_small_data"), "query to execute") + ("concurrency,c", value()->default_value(1), "number of parallel queries") + ("delay,d", value()->default_value(1), "delay between intermediate reports in seconds (set 0 to disable reports)") + ("iterations,i", 
value()->default_value(0), "amount of queries to be executed") + ("timelimit,t", value()->default_value(0.), "stop launch of queries after specified time limit") + ("hosts,h", value()->multitoken(), "") + ("continue_on_errors", "continue testing even if a query fails") + ("reconnect", "establish new connection for every query") + ; + + boost::program_options::variables_map options; + boost::program_options::store(boost::program_options::parse_command_line(argc, argv, desc), options); + boost::program_options::notify(options); + + if (options.count("help")) + { + std::cout << "Usage: " << argv[0] << " [options] < queries.txt\n"; + std::cout << desc << "\n"; + return 1; + } + + Runner runner(options["concurrency"].as(), + options["generator"].as(), + options["hosts"].as(), + options["timelimit"].as(), + options["delay"].as(), + options.count("continue_on_errors"), + options["iterations"].as()); + + runner.runBenchmark(); + + return 0; + } + catch (...) + { + std::cerr << DB::getCurrentExceptionMessage(print_stacktrace, true) << std::endl; + return DB::getCurrentExceptionCode(); + } +} diff --git a/utils/list-versions/version_date.tsv b/utils/list-versions/version_date.tsv index 3e1073b8529..a69f96970bb 100644 --- a/utils/list-versions/version_date.tsv +++ b/utils/list-versions/version_date.tsv @@ -1,12 +1,22 @@ +v21.4.4.30-stable 2021-04-16 +v21.4.3.21-stable 2021-04-12 +v21.3.7.62-stable 2021-04-16 +v21.3.6.55-lts 2021-04-12 +v21.3.5.42-lts 2021-04-07 v21.3.4.25-lts 2021-03-28 v21.3.3.14-lts 2021-03-19 v21.3.2.5-lts 2021-03-12 +v21.2.10.48-stable 2021-04-16 +v21.2.9.41-stable 2021-04-12 +v21.2.8.31-stable 2021-04-07 v21.2.7.11-stable 2021-03-28 v21.2.6.1-stable 2021-03-15 v21.2.5.5-stable 2021-03-02 v21.2.4.6-stable 2021-02-20 v21.2.3.15-stable 2021-02-14 v21.2.2.8-stable 2021-02-07 +v21.1.9.41-stable 2021-04-13 +v21.1.8.30-stable 2021-04-07 v21.1.7.1-stable 2021-03-15 v21.1.6.13-stable 2021-03-02 v21.1.5.4-stable 2021-02-20 @@ -39,6 +49,9 @@ v20.9.5.5-stable 2020-11-13 v20.9.4.76-stable 2020-10-29 v20.9.3.45-stable 2020-10-09 v20.9.2.20-stable 2020-09-22 +v20.8.18.32-lts 2021-04-16 +v20.8.17.25-lts 2021-04-08 +v20.8.16.20-lts 2021-04-06 v20.8.15.11-lts 2021-04-01 v20.8.14.4-lts 2021-03-03 v20.8.13.15-lts 2021-02-20 diff --git a/utils/memcpy-bench/memcpy-bench.cpp b/utils/memcpy-bench/memcpy-bench.cpp index b607c45370d..025c9100f75 100644 --- a/utils/memcpy-bench/memcpy-bench.cpp +++ b/utils/memcpy-bench/memcpy-bench.cpp @@ -299,8 +299,8 @@ static void * memcpySSE2Unrolled8(void * __restrict destination, const void * __ //static __attribute__((__always_inline__, __target__("sse2"))) -__attribute__((__always_inline__)) -void memcpy_my_medium_sse(uint8_t * __restrict & dst, const uint8_t * __restrict & src, size_t & size) +__attribute__((__always_inline__)) inline void +memcpy_my_medium_sse(uint8_t * __restrict & dst, const uint8_t * __restrict & src, size_t & size) { /// Align destination to 16 bytes boundary. 
size_t padding = (16 - (reinterpret_cast<size_t>(dst) & 15)) & 15;
diff --git a/website/benchmark/dbms/queries.js b/website/benchmark/dbms/queries.js
index c92353ab0f2..f3cf25f2c8d 100644
--- a/website/benchmark/dbms/queries.js
+++ b/website/benchmark/dbms/queries.js
@@ -1,4 +1,4 @@
-var current_data_size = 1000000000;
+var current_data_size = 100000000;
var current_systems = ["ClickHouse", "Vertica", "Greenplum"];
diff --git a/website/benchmark/hardware/index.html b/website/benchmark/hardware/index.html
index a57930b279d..71d3333432e 100644
--- a/website/benchmark/hardware/index.html
+++ b/website/benchmark/hardware/index.html
@@ -76,6 +76,7 @@ Results for Digitalocean (Storage-intesinve VMs) + (CPU/GP) are from Yiğit K
Results for 2x AMD EPYC 7F72 3.2 Ghz (Total 96 Cores, IBM Cloud's Bare Metal Service) from Yiğit Konur and Metehan Çetinkaya of seo.do.
Results for 2x AMD EPYC 7742 (128 physical cores, 1 TB DDR4-3200 RAM) from Yedige Davletgaliyev and Nikita Zhavoronkov of blockchair.com.
Results for ASUS A15 (Ryzen laptop) are from Kimmo Linna.
+Results for MacBook Air M1 are from Denis Glazachev.

diff --git a/website/benchmark/hardware/results/amd_ryzen_9_3950x.json b/website/benchmark/hardware/results/amd_ryzen_9_3950x.json index 8760a235521..caa5a443e54 100644 --- a/website/benchmark/hardware/results/amd_ryzen_9_3950x.json +++ b/website/benchmark/hardware/results/amd_ryzen_9_3950x.json @@ -1,6 +1,6 @@ [ { - "system": "AMD Ryzen 9", + "system": "AMD Ryzen 9 (2020)", "system_full": "AMD Ryzen 9 3950X 16-Core Processor, 64 GiB RAM, Intel Optane 900P 280 GB", "time": "2020-03-14 00:00:00", "kind": "desktop", @@ -52,7 +52,7 @@ ] }, { - "system": "AMD Ryzen 9", + "system": "AMD Ryzen 9 (2021)", "system_full": "AMD Ryzen 9 3950X 16-Core Processor, 64 GiB RAM, Samsung evo 970 plus 1TB", "time": "2021-03-08 00:00:00", "kind": "desktop", diff --git a/website/benchmark/hardware/results/macbook_air_m1.json b/website/benchmark/hardware/results/macbook_air_m1.json new file mode 100644 index 00000000000..33f15d02480 --- /dev/null +++ b/website/benchmark/hardware/results/macbook_air_m1.json @@ -0,0 +1,54 @@ +[ + { + "system": "MacBook Air M1", + "system_full": "MacBook Air M1 13\" 2020, 8‑core CPU, 16 GiB RAM, 512 GB SSD", + "time": "2021-04-11 00:00:00", + "kind": "laptop", + "result": + [ +[0.003, 0.001, 0.001], +[0.019, 0.014, 0.014], +[0.042, 0.034, 0.033], +[0.101, 0.043, 0.041], +[0.100, 0.102, 0.101], +[0.394, 0.283, 0.289], +[0.029, 0.027, 0.027], +[0.018, 0.018, 0.018], +[0.511, 0.489, 0.494], +[0.620, 0.615, 0.618], +[0.217, 0.200, 0.197], +[0.237, 0.235, 0.242], +[0.774, 0.762, 0.761], +[0.969, 0.982, 0.969], +[0.896, 0.887, 0.861], +[0.999, 0.943, 0.945], +[3.343, 2.426, 2.366], +[1.463, 1.414, 1.382], +[4.958, 4.268, 4.257], +[0.056, 0.050, 0.049], +[1.696, 0.851, 0.846], +[1.036, 1.104, 1.174], +[4.326, 2.224, 2.255], +[1.397, 1.038, 1.055], +[0.317, 0.310, 0.305], +[0.274, 0.284, 0.269], +[0.317, 0.316, 0.313], +[0.943, 0.952, 0.951], +[2.794, 1.427, 1.433], +[1.606, 1.600, 1.605], +[0.751, 0.691, 0.679], +[1.532, 1.000, 0.952], +[9.679, 8.895, 7.967], +[7.001, 4.472, 4.050], +[4.790, 3.971, 3.987], +[1.215, 1.204, 1.256], +[0.129, 0.125, 0.119], +[0.057, 0.061, 0.056], +[0.045, 0.043, 0.043], +[0.256, 0.247, 0.249], +[0.020, 0.014, 0.013], +[0.013, 0.011, 0.012], +[0.009, 0.009, 0.009] + ] + } +] diff --git a/website/benchmark/hardware/results/qemu_aarch64_cascade_lake_80_vcpu.json b/website/benchmark/hardware/results/qemu_aarch64_cascade_lake_80_vcpu.json new file mode 100644 index 00000000000..ed25794c77b --- /dev/null +++ b/website/benchmark/hardware/results/qemu_aarch64_cascade_lake_80_vcpu.json @@ -0,0 +1,55 @@ +[ + { + "system": "Intel 80vCPU, QEMU, AArch64", + "system_full": "Intel Cascade Lake 80vCPU running AArch64 ClickHouse under qemu-aarch64 version 4.2.1 (userspace emulation)", + "cpu_vendor": "Intel", + "time": "2021-04-05 00:00:00", + "kind": "cloud", + "result": + [ +[0.045, 0.006, 0.006], +[0.366, 0.201, 0.576], +[0.314, 0.144, 0.152], +[0.701, 0.111, 0.110], +[0.308, 0.259, 0.261], +[1.009, 0.642, 0.658], +[0.160, 0.087, 0.086], +[0.123, 0.079, 0.080], +[0.570, 0.458, 0.454], +[0.708, 0.540, 0.547], +[0.541, 0.460, 0.464], +[0.578, 0.524, 0.531], +[0.927, 0.908, 0.906], +[1.075, 0.992, 1.051], +[1.055, 0.965, 0.991], +[0.904, 0.790, 0.781], +[2.076, 2.134, 2.121], +[1.668, 1.648, 1.615], +[4.134, 3.879, 4.002], +[0.142, 0.103, 0.105], +[7.018, 1.479, 1.515], +[1.618, 1.643, 1.680], +[6.516, 3.172, 3.182], +[6.028, 2.070, 2.076], +[0.608, 0.559, 0.577], +[0.548, 0.515, 0.516], +[0.598, 0.564, 0.563], +[1.562, 1.529, 1.537], +[5.968, 2.311, 2.375], +[3.263, 3.239, 
3.279], +[1.134, 0.903, 0.928], +[2.987, 1.270, 1.284], +[6.256, 5.665, 5.320], +[3.020, 3.148, 3.109], +[3.092, 3.131, 3.146], +[1.183, 1.140, 1.185], +[0.762, 0.704, 0.715], +[0.412, 0.380, 0.385], +[0.376, 0.330, 0.327], +[1.505, 1.532, 1.503], +[0.201, 0.133, 0.130], +[0.173, 0.123, 0.150], +[0.070, 0.028, 0.028] + ] + } +] diff --git a/website/blog/en/2021/code-review.md b/website/blog/en/2021/code-review.md new file mode 100644 index 00000000000..dcde371629b --- /dev/null +++ b/website/blog/en/2021/code-review.md @@ -0,0 +1,83 @@ +--- +title: 'The Tests Are Passing, Why Would I Read The Diff Again?' +image: 'https://blog-images.clickhouse.tech/en/2021/code-review/two-ducks.jpg' +date: '2021-04-14' +author: '[Alexander Kuzmenkov](https://github.com/akuzm)' +tags: ['code review', 'development'] +--- + + +Code review is one of the few software development techniques that are consistently found to reduce the incidence of defects. Why is it effective? This article offers some wild conjecture on this topic, complete with practical advice on getting the most out of your code review. + + +## Understanding Why Your Program Works + +As software developers, we routinely have to reason about the behaviour of software. For example, to fix a bug, we start with a test case that exhibits the behavior in question, and then read the source code to see how this behavior arises. Often we find ourselves unable to understand anything, having to resort to forensic techniques such as using a debugger or interrogating the author of the code. This situation is far from ideal. After all, if we have trouble understanding our software, how can we be sure it works at all? No surprise that it doesn't. + +The correct understanding is also important when modifying and extending software. A programmer must always have a precise mental model on what is going on in the program, how exactly it maps to the domain, and so on. If there are flaws in this model, the code they write won't match the domain and won't solve the problem correctly. Wrong understanding directly causes bugs. + +How can we make our software easier to understand? It is often said that to see if you really understand something, you have to try explaining it to somebody. For example, as a science student taking an exam, you might be expected to give an explanation to some well-known observed effect, deriving it from the basic laws of this domain. In a similar way, if we are modeling some problem in software, we can start from domain knowledge and general programming knowledge, and build an argument as to why our model is applicable to the problem, why it is correct, has optimal performance and so on. This explanation takes the form of code comments, or, at a higher level, design documents. + +If you have a habit of thoroughly commenting your code, you might have noticed that writing the comments is often much harder than writing the code itself. It also has an unpleasant side effect — at times, while writing a comment, it becomes increasingly clear to you that the code is incomprehensible and takes forever to explain, or maybe is downright wrong, and you have to rewrite it. This is exactly the major positive effect of writing the comments. It helps you find bugs and make the code more understandable, and you wouldn't have noticed these problems unless you tried to explain the code. 
+
+Understanding why your program works is inseparable from understanding why it fails, so it's no surprise that there is a similar process for the latter, called "rubber duck debugging". To debug a particularly nasty bug, you start explaining the program logic step by step to an imaginary partner or even to an inanimate object such as a yellow rubber duck. This process is often very effective, much in excess of what one would expect given the limited conversational abilities of rubber ducks. The underlying mechanism is probably the same as with comments — you start to understand your program better by just trying to explain it, and this lets you find bugs.
+
+When working in a team, you even have the luxury of explaining your code to another developer who works on the same project. It's probably more entertaining than talking to a duck. More importantly, they are going to maintain the code you wrote, so you'd better make sure that _they_ can understand it as well. A good formal occasion for explaining how your code works is the code review process. Let's see how you can get the most out of it, in terms of making your code understandable.
+
+## Reviewing Others' Code
+
+Code review is often framed as a gatekeeping process, where each contribution is vetted by maintainers to ensure that it is in line with project direction, has acceptable quality, meets the coding guidelines and so on. This perspective might seem natural when dealing with external contributions, but it makes less sense when applied to internal ones. After all, our fellow maintainers have a perfect understanding of project goals and guidelines, are probably more talented and experienced than us, and can be trusted to produce the best solution possible. How can an additional review be helpful?
+
+A less obvious, but very important, part of reviewing the code is just seeing whether it can be understood by another person. It is helpful regardless of the administrative roles and programming proficiency of the parties. What should you do as a reviewer if ease of understanding is your main priority?
+
+You probably don't need to be concerned with trivia such as code style. There are automated tools for that. You might find some bugs, but this is probably a side effect. Your main task is making sense of the code.
+
+Start by checking the high-level description of the problem that the pull request is trying to solve. Read the description of the bug it fixes, or the docs for the feature it adds. For bigger features, there is normally a design document that describes the overall implementation without getting too deep into the code details. After you understand the problem, start reading the code. Does it make sense to you? You shouldn't have to try too hard to understand it. Imagine that you are tired and under time pressure. If you feel you have to make a lot of effort to understand the code, ask the author for clarifications. As you talk, you might discover that the code is not correct, or that it could be rewritten in a more straightforward way, or that it needs more comments.
+
+
+
+After you get the answers, don't forget to update the code and the comments to reflect them. Don't just stop after getting it explained to you personally. If you had a question as a reviewer, chances are that other people will also have this question later, but there might be nobody around to ask. They will have to resort to `git blame` and re-reading the entire pull request or several of them.
Code archaeology is sometimes fun, but it's the last thing you want to do when you are investigating an urgent bug. All the answers should be on the surface.
+
+Working with the author, you should ensure that the code is mostly obvious to anyone with basic domain and programming knowledge, and that all non-obvious parts are clearly explained.
+
+### Preparing Your Code For Review
+
+As an author, you can also do some things to make your code easier to understand for the reviewer.
+
+First of all, if you are implementing a major feature, it probably needs a round of design review before you even start writing code. Skipping a design review and jumping right into the code review can be a major source of frustration, because it might turn out that even the problem you are solving was formulated incorrectly, and all your work has to be thrown away. Of course, this is not completely prevented by design review, either. Programming is an iterative, exploratory activity, and in complex cases you only begin to grasp the problem after implementing a first solution, which you then realize is incorrect and has to be thrown away.
+
+When preparing your code for review, your major objective is to make your problem and its solution clear to the reviewer. A good tool for this is code comments. Any sizable piece of logic should have an introductory comment describing its general purpose and outlining the implementation. This description can reference similar features, explain the differences from them, and explain how the feature interfaces with other subsystems. A good place to put this general description is the function that serves as the main entry point for the feature, or another form of its public interface, or the most significant class, or the file containing the implementation, and so on.
+
+Drilling down to each block of code, you should be able to explain what it does, why it does that, and why this way and not another. If there are several ways of doing the thing, why did you choose this one? Of course, for some code these things follow from the more general comments and don't have to be restated. The mechanics of data manipulation should be apparent from the code itself. If you find yourself explaining a particular feature of the language, it's probably best not to use it.
+
+Pay special attention to making the data structures apparent in the code, and their meaning and invariants well commented. The choice of data structures ultimately determines which algorithms you can apply, and sets the limits of performance, which is another reason why we should care about it as ClickHouse developers.
+
+When explaining the code, it is important to give your reader enough context, so that they can understand you without a deep investigation of the surrounding systems and obscure test cases. Give pointers to all the things that might be relevant to the task. If you know some corner cases that your code has to handle, describe them in enough detail so that they can be reproduced. If there is a relevant standard or a design document, reference it, or even quote it inline. If you're relying on some invariant in another system, mention it. It is good practice to add programmatic checks that mirror your comments, when it is easy to do so. Your comment about an invariant should be accompanied by an assertion, and an important scenario should be reproduced by a test case (see the sketch below).
+
+Don't worry about being too verbose. There are often not enough comments, but almost never too many of them.
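To make that last point concrete, here is a minimal, hypothetical C++ sketch (it is not taken from the ClickHouse code base; the function name and its logic are purely illustrative). The comment states the invariant in domain terms, and the assertions right next to it mirror the same invariant, so a violation fails loudly in debug builds instead of silently producing a wrong result:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

/// Merges two sorted runs into one sorted vector.
/// Invariant: both inputs must already be sorted in ascending order;
/// the merge below relies on this and only ever compares the front elements.
std::vector<int> mergeSortedRuns(const std::vector<int> & left, const std::vector<int> & right)
{
    /// These checks mirror the comment above: if a caller breaks the
    /// documented invariant, we fail here rather than return unsorted data.
    assert(std::is_sorted(left.begin(), left.end()));
    assert(std::is_sorted(right.begin(), right.end()));

    std::vector<int> result;
    result.reserve(left.size() + right.size());

    size_t i = 0;
    size_t j = 0;
    while (i < left.size() && j < right.size())
        result.push_back(left[i] <= right[j] ? left[i++] : right[j++]);
    while (i < left.size())
        result.push_back(left[i++]);
    while (j < right.size())
        result.push_back(right[j++]);

    return result;
}
```

A unit test that feeds in two sorted runs and checks the merged output would then pin down the important scenario, so the comment, the assertion and the test all describe the same contract and tend to be updated together.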
+
+## Common Concerns about Code Comments
+
+It is common to hear objections to the idea of commenting the code, so let's discuss a couple of the usual ones.
+
+### Self-documenting Code
+
+You can often see the perplexing idea that the source code can somehow be "self-documenting", or that comments are a "code smell" whose presence indicates that the code is badly written. I have trouble imagining how this belief can be compatible with any experience in maintaining sufficiently complex and large software, over the years, in collaboration with others. The code and the comments describe different parts of the solution. The code describes the data structures and their transformations, but it cannot convey meaning. The names in the code serve as pointers that map the data and its transformations to the domain concepts, but they are schematic and lack nuance. It is not so difficult to write code that makes it easy to understand what's going on in terms of data manipulation. What it takes is mostly moderation, that is, stopping yourself from being too clever. For most code, it is easy to see what it does, but why? Why this way and not that way? Why is it correct? Why does this fast path help? Why did you choose this data layout? How is this invariant guaranteed? And so on. This might not be so evident for a developer who is working alone on a short-lived project, because they have all the necessary context in their head. But when they have to work with other people (or even with their past and future selves), or in an unfamiliar area, the importance of non-code, higher-level context becomes painfully clear. The idea that we should, or even can, somehow encode comments such as [this one](https://github.com/ClickHouse/ClickHouse/blob/26d5db32ae5c9f54b8825e2eca1f077a3b17c84a/src/Storages/MergeTree/KeyCondition.cpp#L1312-L1347) into names or control flow is just absurd.
+
+### Obsolete Comments
+
+The comments can't be checked by the compiler or the tests, so there is no automated way to make sure that they are up to date with the rest of the comments and the code. The possibility of comments gradually becoming incorrect is sometimes used as an argument against having any comments at all.
+
+This problem is not exclusive to the comments — the code also can and does become obsolete. Simple cases such as dead code can be detected by static analysis or by studying the test coverage of the code. More complex cases can only be found by proofreading, such as maintaining an invariant that is not important anymore, or preparing some data that is not needed.
+
+While an obsolete comment can lead to a mistake, the same applies, perhaps more strongly, to the lack of comments. When you need some higher-level knowledge about the code, but it is not written down, you are forced to perform an entire investigation from first principles to understand what's going on, and this is error-prone. Even an obsolete comment likely gives a better starting point than nothing. Moreover, in a code base that makes active use of comments, they tend to be mostly correct. This is because the developers rely on comments, read and write them, and pay attention to them during code review. The comments are routinely changed along with the code, and outdated comments are soon noticed and fixed. This does require some habit. A lone comment in a vast desert of impenetrable self-documenting code is not going to fare well.
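To make the earlier "what versus why" distinction concrete, here is a small, hypothetical C++ sketch (not from the ClickHouse sources; the function, the constant and the numbers are invented for illustration). The code alone is perfectly readable and "self-documenting" about what it does, yet only the comment preserves the reasoning behind the choice, which no identifier could carry, and keeping that comment next to the code it explains is also what keeps it from going stale:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

/// Returns true if `value` is present in the sorted vector `haystack`.
bool contains(const std::vector<int> & haystack, int value)
{
    /// Why a linear scan for small inputs? For short vectors the branchy
    /// binary search tends to be slower in practice: the data fits in a
    /// couple of cache lines and a straight scan is friendlier to the
    /// branch predictor. The threshold is an invented, illustrative value,
    /// not a measured one; a real comment would cite the benchmark.
    constexpr size_t linear_scan_threshold = 64;

    if (haystack.size() <= linear_scan_threshold)
        return std::find(haystack.begin(), haystack.end(), value) != haystack.end();

    return std::binary_search(haystack.begin(), haystack.end(), value);
}
```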
+
+
+## Conclusion
+
+Code review makes your software better, and a significant part of this probably comes from trying to understand what your software actually does. By paying attention specifically to this aspect of code review, you can make it even more efficient. You'll have fewer bugs, and your code will be easier to maintain — and what else could we ask for as software developers?
+
+
+_2021-04-13 [Alexander Kuzmenkov](https://github.com/akuzm). Title photo by [Nikita Mikhaylov](https://github.com/nikitamikhaylov)_
+
+_P.S. This text contains the personal opinions of the author, and is not an authoritative manual for ClickHouse maintainers._
diff --git a/website/templates/index/hero.html b/website/templates/index/hero.html
index 55d0111ac61..efa4643e841 100644
--- a/website/templates/index/hero.html
+++ b/website/templates/index/hero.html
@@ -22,12 +22,8 @@
Quick start diff --git a/website/templates/index/quickstart.html b/website/templates/index/quickstart.html index 454fc68151d..0d967e7b96c 100644 --- a/website/templates/index/quickstart.html +++ b/website/templates/index/quickstart.html @@ -36,7 +36,7 @@ target="_blank"> official Docker images of ClickHouse, this is not the only option though. Alternatively, you can easily get a running ClickHouse instance or cluster at - + Yandex Managed Service for ClickHouse.