Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-21 15:12:02 +00:00)

Commit 4b143c3e0f: Merge remote-tracking branch 'upstream/master' into issue-6459

CHANGELOG.md
@@ -1,3 +1,27 @@
## ClickHouse release 19.13.5.44, 2019-09-20

### Bug Fix
* This release also contains all bug fixes from 19.14.6.12.
* Fixed possible inconsistent state of the table while executing a `DROP` query for a replicated table while ZooKeeper is not accessible. [#6045](https://github.com/yandex/ClickHouse/issues/6045) [#6413](https://github.com/yandex/ClickHouse/pull/6413) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Fixed a data race in StorageMerge. [#6717](https://github.com/yandex/ClickHouse/pull/6717) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a bug introduced in the query profiler which led to endless `recv` from a socket. [#6386](https://github.com/yandex/ClickHouse/pull/6386) ([alesapin](https://github.com/alesapin))
* Fixed excessive CPU usage while executing the `JSONExtractRaw` function over a boolean value. [#6208](https://github.com/yandex/ClickHouse/pull/6208) ([Vitaly Baranov](https://github.com/vitlibar))
* Fixed a regression while pushing to a materialized view. [#6415](https://github.com/yandex/ClickHouse/pull/6415) ([Ivan](https://github.com/abyss7))
* Table function `url` had a vulnerability that allowed an attacker to inject arbitrary HTTP headers into the request. This issue was found by [Nikita Tikhomirov](https://github.com/NSTikhomirov). [#6466](https://github.com/yandex/ClickHouse/pull/6466) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a useless `AST` check in the Set index. [#6510](https://github.com/yandex/ClickHouse/issues/6510) [#6651](https://github.com/yandex/ClickHouse/pull/6651) ([Nikita Vasilev](https://github.com/nikvas0))
* Fixed parsing of `AggregateFunction` values embedded in a query. [#6575](https://github.com/yandex/ClickHouse/issues/6575) [#6773](https://github.com/yandex/ClickHouse/pull/6773) ([Zhichang Yu](https://github.com/yuzhichang))
* Fixed wrong behaviour of the `trim` family of functions. [#6647](https://github.com/yandex/ClickHouse/pull/6647) ([alexey-milovidov](https://github.com/alexey-milovidov))

## ClickHouse release 19.14.6.12, 2019-09-19

### Bug Fix
* Fixed the function `arrayEnumerateUniqRanked` with empty arrays in parameters. [#6928](https://github.com/yandex/ClickHouse/pull/6928) ([proller](https://github.com/proller))
* Fixed the subquery name in queries with `ARRAY JOIN` and `GLOBAL IN subquery` with an alias. Use the subquery alias for the external table name if it is specified. [#6934](https://github.com/yandex/ClickHouse/pull/6934) ([Ivan](https://github.com/abyss7))

### Build/Testing/Packaging Improvement
* Fixed the [flapping](https://clickhouse-test-reports.s3.yandex.net/6944/aab95fd5175a513413c7395a73a82044bdafb906/functional_stateless_tests_(debug).html) test `00715_fetch_merged_or_mutated_part_zookeeper` by rewriting it as a shell script, because it needs to wait for mutations to apply. [#6977](https://github.com/yandex/ClickHouse/pull/6977) ([Alexander Kazakov](https://github.com/Akazz))
* Fixed UBSan and MemSan failures in the function `groupUniqArray` with an empty array argument. They were caused by placing an empty `PaddedPODArray` into the hash table zero cell, because the constructor for the zero cell value was not called. [#6937](https://github.com/yandex/ClickHouse/pull/6937) ([Amos Bird](https://github.com/amosbird))

## ClickHouse release 19.14.3.3, 2019-09-10

### New Feature
@@ -31,6 +55,7 @@
* Implementation of `LIVE VIEW` tables that were originally proposed in [#2898](https://github.com/yandex/ClickHouse/pull/2898), prepared in [#3925](https://github.com/yandex/ClickHouse/issues/3925), and then updated in [#5541](https://github.com/yandex/ClickHouse/issues/5541). See [#5541](https://github.com/yandex/ClickHouse/issues/5541) for a detailed description. [#5541](https://github.com/yandex/ClickHouse/issues/5541) ([vzakaznikov](https://github.com/vzakaznikov)) [#6425](https://github.com/yandex/ClickHouse/pull/6425) ([Nikolai Kochetov](https://github.com/KochetovNicolai)) [#6656](https://github.com/yandex/ClickHouse/pull/6656) ([vzakaznikov](https://github.com/vzakaznikov)) Note that the `LIVE VIEW` feature may be removed in future versions.

### Bug Fix
* This release also contains all bug fixes from 19.13 and 19.11.
* Fix segmentation fault when the table has skip indices and a vertical merge happens. [#6723](https://github.com/yandex/ClickHouse/pull/6723) ([alesapin](https://github.com/alesapin))
* Fix per-column TTL with non-trivial column defaults. Previously, in case of a forced TTL merge with the `OPTIMIZE ... FINAL` query, expired values were replaced by type defaults instead of user-specified column defaults. [#6796](https://github.com/yandex/ClickHouse/pull/6796) ([Anton Popov](https://github.com/CurtizJ))
* Fix Kafka messages duplication problem on normal server restart. [#6597](https://github.com/yandex/ClickHouse/pull/6597) ([Ivan](https://github.com/abyss7))
@@ -38,22 +63,15 @@
* Fix `Key expression contains comparison between inconvertible types` exception in the `bitmapContains` function. [#6136](https://github.com/yandex/ClickHouse/issues/6136) [#6146](https://github.com/yandex/ClickHouse/issues/6146) [#6156](https://github.com/yandex/ClickHouse/pull/6156) ([dimarub2000](https://github.com/dimarub2000))
* Fix segfault with enabled `optimize_skip_unused_shards` and a missing sharding key. [#6384](https://github.com/yandex/ClickHouse/pull/6384) ([Anton Popov](https://github.com/CurtizJ))
* Fixed wrong code in mutations that may lead to memory corruption. Fixed a segfault with read of address `0x14c0` that may happen due to concurrent `DROP TABLE` and `SELECT` from `system.parts` or `system.parts_columns`. Fixed a race condition in preparation of mutation queries. Fixed a deadlock caused by `OPTIMIZE` of Replicated tables and concurrent modification operations like ALTERs. [#6514](https://github.com/yandex/ClickHouse/pull/6514) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix a bug introduced in the query profiler which leads to endless `recv` from a socket. [#6386](https://github.com/yandex/ClickHouse/pull/6386) ([alesapin](https://github.com/alesapin))
* Removed extra verbose logging in the MySQL interface. [#6389](https://github.com/yandex/ClickHouse/pull/6389) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Return the ability to parse boolean settings from 'true' and 'false' in the configuration file. [#6278](https://github.com/yandex/ClickHouse/pull/6278) ([alesapin](https://github.com/alesapin))
* Fix crash in the `quantile` and `median` functions over `Nullable(Decimal128)`. [#6378](https://github.com/yandex/ClickHouse/pull/6378) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed possible incomplete result returned by a `SELECT` query with a `WHERE` condition on a primary key that contained a conversion to Float type. It was caused by incorrect checking of monotonicity in the `toFloat` function. [#6248](https://github.com/yandex/ClickHouse/issues/6248) [#6374](https://github.com/yandex/ClickHouse/pull/6374) ([dimarub2000](https://github.com/dimarub2000))
* Check the `max_expanded_ast_elements` setting for mutations. Clear mutations after `TRUNCATE TABLE`. [#6205](https://github.com/yandex/ClickHouse/pull/6205) ([Winter Zhang](https://github.com/zhang2014))
* Fix excessive CPU usage while executing the `JSONExtractRaw` function over a boolean value. [#6208](https://github.com/yandex/ClickHouse/pull/6208) ([Vitaly Baranov](https://github.com/vitlibar))
* Fixed an issue when a long `ALTER UPDATE` or `ALTER DELETE` may prevent regular merges from running. Prevent mutations from executing if there are not enough free threads available. [#6502](https://github.com/yandex/ClickHouse/issues/6502) [#6617](https://github.com/yandex/ClickHouse/pull/6617) ([tavplubix](https://github.com/tavplubix))
* Fix JOIN results for key columns when used with `join_use_nulls`. Attach Nulls instead of column defaults. [#6249](https://github.com/yandex/ClickHouse/pull/6249) ([Artem Zuikov](https://github.com/4ertus2))
* Fix the `JSONExtract` function while extracting a `Tuple` from JSON. [#6718](https://github.com/yandex/ClickHouse/pull/6718) ([Vitaly Baranov](https://github.com/vitlibar))
* Fix a data race in StorageMerge. [#6717](https://github.com/yandex/ClickHouse/pull/6717) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix skip indices with vertical merge and alter. Fix the `Bad size of marks file` exception. [#6594](https://github.com/yandex/ClickHouse/issues/6594) [#6713](https://github.com/yandex/ClickHouse/pull/6713) ([alesapin](https://github.com/alesapin))
* Fix a rare crash in `ALTER MODIFY COLUMN` and vertical merge when one of the merged/altered parts is empty (0 rows). [#6746](https://github.com/yandex/ClickHouse/issues/6746) [#6780](https://github.com/yandex/ClickHouse/pull/6780) ([alesapin](https://github.com/alesapin))
* Fixed wrong behaviour of the `nullIf` function for constant arguments. [#6518](https://github.com/yandex/ClickHouse/pull/6518) ([Guillaume Tassery](https://github.com/YiuRULE)) [#6580](https://github.com/yandex/ClickHouse/pull/6580) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a bug in conversion of `LowCardinality` types in `AggregateFunctionFactory`. This fixes [#6257](https://github.com/yandex/ClickHouse/issues/6257). [#6281](https://github.com/yandex/ClickHouse/pull/6281) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed possible data loss after an `ALTER DELETE` query on a table with a skipping index. [#6224](https://github.com/yandex/ClickHouse/issues/6224) [#6282](https://github.com/yandex/ClickHouse/pull/6282) ([Nikita Vasilev](https://github.com/nikvas0))
* Fix wrong behavior and possible segfaults in the `topK` and `topKWeighted` aggregate functions. [#6404](https://github.com/yandex/ClickHouse/pull/6404) ([Anton Popov](https://github.com/CurtizJ))
* Fixed unsafe code around the `getIdentifier` function. [#6401](https://github.com/yandex/ClickHouse/issues/6401) [#6409](https://github.com/yandex/ClickHouse/pull/6409) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a bug in the MySQL wire protocol (used while connecting to ClickHouse from a MySQL client), caused by a heap buffer overflow in `PacketPayloadWriteBuffer`. [#6212](https://github.com/yandex/ClickHouse/pull/6212) ([Yuriy Baranov](https://github.com/yurriy))
@@ -63,50 +81,32 @@
* Resolve a bug with the `nullIf` function when we send a `NULL` argument as the second argument. [#6446](https://github.com/yandex/ClickHouse/pull/6446) ([Guillaume Tassery](https://github.com/YiuRULE))
* Fix a rare bug with wrong memory allocation/deallocation in complex key cache dictionaries with string fields which leads to infinite memory consumption (looks like a memory leak). The bug reproduces when the string size is a power of two starting from eight (8, 16, 32, etc.). [#6447](https://github.com/yandex/ClickHouse/pull/6447) ([alesapin](https://github.com/alesapin))
* Fixed Gorilla encoding on small sequences which caused the exception `Cannot write after end of buffer`. [#6398](https://github.com/yandex/ClickHouse/issues/6398) [#6444](https://github.com/yandex/ClickHouse/pull/6444) ([Vasily Nemkov](https://github.com/Enmk))
* Fixed an error with processing "timezone" in the server configuration file. [#6709](https://github.com/yandex/ClickHouse/pull/6709) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Allow using non-nullable types in JOINs with `join_use_nulls` enabled. [#6705](https://github.com/yandex/ClickHouse/pull/6705) ([Artem Zuikov](https://github.com/4ertus2))
* Disable `Poco::AbstractConfiguration` substitutions in queries in `clickhouse-client`. [#6706](https://github.com/yandex/ClickHouse/pull/6706) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a mismatched header in streams that happened in case of reading from an empty distributed table with sample and prewhere. [#6167](https://github.com/yandex/ClickHouse/issues/6167) ([Lixiang Qian](https://github.com/fancyqlx)) [#6823](https://github.com/yandex/ClickHouse/pull/6823) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Avoid deadlock in `REPLACE PARTITION`. [#6677](https://github.com/yandex/ClickHouse/pull/6677) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Query transformation for the `MySQL`, `ODBC`, `JDBC` table functions now works properly for `SELECT WHERE` queries with multiple `AND` expressions. [#6381](https://github.com/yandex/ClickHouse/issues/6381) [#6676](https://github.com/yandex/ClickHouse/pull/6676) ([dimarub2000](https://github.com/dimarub2000))
* Fixed a bug in the function `arrayEnumerateUniqRanked`. [#6779](https://github.com/yandex/ClickHouse/pull/6779) ([proller](https://github.com/proller))
* Fixed parsing of `AggregateFunction` values embedded in a query. [#6575](https://github.com/yandex/ClickHouse/issues/6575) [#6773](https://github.com/yandex/ClickHouse/pull/6773) ([Zhichang Yu](https://github.com/yuzhichang))
* Using `arrayReduce` for constant arguments may lead to a segfault. [#6242](https://github.com/yandex/ClickHouse/issues/6242) [#6326](https://github.com/yandex/ClickHouse/pull/6326) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix inconsistent parts which can appear if a replica was restored after `DROP PARTITION`. [#6522](https://github.com/yandex/ClickHouse/issues/6522) [#6523](https://github.com/yandex/ClickHouse/pull/6523) ([tavplubix](https://github.com/tavplubix))
* Fix crash when casting types to `Decimal` that do not support it. Throw an exception instead. [#6297](https://github.com/yandex/ClickHouse/pull/6297) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a hang in the `JSONExtractRaw` function. [#6195](https://github.com/yandex/ClickHouse/issues/6195) [#6198](https://github.com/yandex/ClickHouse/pull/6198) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a crash when using the `IN` clause with a subquery with a tuple. [#6125](https://github.com/yandex/ClickHouse/issues/6125) [#6550](https://github.com/yandex/ClickHouse/pull/6550) ([tavplubix](https://github.com/tavplubix))
* Fixed a regression while pushing to a materialized view. [#6415](https://github.com/yandex/ClickHouse/pull/6415) ([Ivan](https://github.com/abyss7))
* Fixed possible inconsistent state of the table while executing a `DROP` query for a replicated table while ZooKeeper is not accessible. [#6045](https://github.com/yandex/ClickHouse/issues/6045) [#6413](https://github.com/yandex/ClickHouse/pull/6413) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Fix a bug with incorrect skip index serialization and aggregation with adaptive granularity. [#6594](https://github.com/yandex/ClickHouse/issues/6594) [#6748](https://github.com/yandex/ClickHouse/pull/6748) ([alesapin](https://github.com/alesapin))
* Fix `WITH ROLLUP` and `WITH CUBE` modifiers of `GROUP BY` with two-level aggregation. [#6225](https://github.com/yandex/ClickHouse/pull/6225) ([Anton Popov](https://github.com/CurtizJ))
* Improve error handling in cache dictionaries. [#6737](https://github.com/yandex/ClickHouse/pull/6737) ([Vitaly Baranov](https://github.com/vitlibar))
* Parquet: fix reading boolean columns. [#6579](https://github.com/yandex/ClickHouse/pull/6579) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix a bug with writing secondary index marks with adaptive granularity. [#6126](https://github.com/yandex/ClickHouse/pull/6126) ([alesapin](https://github.com/alesapin))
* Fix initialization order during server startup. Since `StorageMergeTree::background_task_handle` is initialized in `startup()`, `MergeTreeBlockOutputStream::write()` may try to use it before initialization. Just check if it is initialized. [#6080](https://github.com/yandex/ClickHouse/pull/6080) ([Ivan](https://github.com/abyss7))
* Fixed a crash in the `extractAll()` function. [#6644](https://github.com/yandex/ClickHouse/pull/6644) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed wrong behaviour of the `trim` family of functions. [#6647](https://github.com/yandex/ClickHouse/pull/6647) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Clear the data buffer from the previous read operation if it was completed with an error. [#6026](https://github.com/yandex/ClickHouse/pull/6026) ([Nikolay](https://github.com/bopohaa))
* Fix a bug with enabling adaptive granularity when creating a new replica for a Replicated*MergeTree table. [#6394](https://github.com/yandex/ClickHouse/issues/6394) [#6452](https://github.com/yandex/ClickHouse/pull/6452) ([alesapin](https://github.com/alesapin))
* Fixed a possible crash during server startup in case an exception happened in `libunwind` during an exception at access to an uninitialised `ThreadStatus` structure. [#6456](https://github.com/yandex/ClickHouse/pull/6456) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov))
* Fixed a data race in the `system.parts` table and `ALTER` query. [#6245](https://github.com/yandex/ClickHouse/issues/6245) [#6513](https://github.com/yandex/ClickHouse/pull/6513) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix crash in the `yandexConsistentHash` function. Found by fuzz test. [#6304](https://github.com/yandex/ClickHouse/issues/6304) [#6305](https://github.com/yandex/ClickHouse/pull/6305) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed the possibility of hanging queries when the server is overloaded and the global thread pool becomes nearly full. This has a higher chance of happening on clusters with a large number of shards (hundreds), because distributed queries allocate a thread per connection to each shard. For example, this issue may reproduce if a cluster of 330 shards is processing 30 concurrent distributed queries. This issue affects all versions starting from 19.2. [#6301](https://github.com/yandex/ClickHouse/pull/6301) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed the logic of the `arrayEnumerateUniqRanked` function. [#6423](https://github.com/yandex/ClickHouse/pull/6423) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix segfault when decoding the symbol table. [#6603](https://github.com/yandex/ClickHouse/pull/6603) ([Amos Bird](https://github.com/amosbird))
* Fixed a mismatched header in streams that happened in case of reading from an empty distributed table with sample and prewhere. [#6167](https://github.com/yandex/ClickHouse/pull/6167) ([Lixiang Qian](https://github.com/fancyqlx))
* Fixed an irrelevant exception in the cast of `LowCardinality(Nullable)` to a non-Nullable column in case it doesn't contain Nulls (e.g. in a query like `SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String)`). [#6094](https://github.com/yandex/ClickHouse/issues/6094) [#6119](https://github.com/yandex/ClickHouse/pull/6119) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Removed extra quoting of descriptions in the `system.settings` table. [#6696](https://github.com/yandex/ClickHouse/issues/6696) [#6699](https://github.com/yandex/ClickHouse/pull/6699) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Avoid a possible deadlock in `TRUNCATE` of a Replicated table. [#6695](https://github.com/yandex/ClickHouse/pull/6695) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix the case of identical column names in the `GLOBAL JOIN ON` section. [#6181](https://github.com/yandex/ClickHouse/pull/6181) ([Artem Zuikov](https://github.com/4ertus2))
* Fix reading in order of the sorting key. [#6189](https://github.com/yandex/ClickHouse/pull/6189) ([Anton Popov](https://github.com/CurtizJ))
* Fix `ALTER TABLE ... UPDATE` query for tables with `enable_mixed_granularity_parts=1`. [#6543](https://github.com/yandex/ClickHouse/pull/6543) ([alesapin](https://github.com/alesapin))
* Fixed the case when the server may close listening sockets but not shut down, and continue serving remaining queries. You may end up with two running clickhouse-server processes. Sometimes the server may return an error `bad_function_call` for remaining queries. [#6231](https://github.com/yandex/ClickHouse/pull/6231) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Table function `url` had a vulnerability that allowed an attacker to inject arbitrary HTTP headers into the request. This issue was found by [Nikita Tikhomirov](https://github.com/NSTikhomirov). [#6466](https://github.com/yandex/ClickHouse/pull/6466) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix a bug opened by [#4405](https://github.com/yandex/ClickHouse/pull/4405) (since 19.4.0). Reproduces in queries to Distributed tables over MergeTree tables when we don't query any columns (`SELECT 1`). [#6236](https://github.com/yandex/ClickHouse/pull/6236) ([alesapin](https://github.com/alesapin))
* Fixed overflow in integer division of a signed type by an unsigned type. The behaviour was exactly as in the C or C++ language (integer promotion rules), which may be surprising; see the short illustration after this list. Please note that the overflow is still possible when dividing a large signed number by a large unsigned number or vice-versa (but that case is less usual). The issue existed in all server versions. [#6214](https://github.com/yandex/ClickHouse/issues/6214) [#6233](https://github.com/yandex/ClickHouse/pull/6233) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Limit the maximum sleep time for throttling when `max_execution_speed` or `max_execution_speed_bytes` is set. Fixed false errors like `Estimated query execution time (inf seconds) is too long`. [#5547](https://github.com/yandex/ClickHouse/issues/5547) [#6232](https://github.com/yandex/ClickHouse/pull/6232) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix a useless `AST` check in the Set index. [#6510](https://github.com/yandex/ClickHouse/issues/6510) [#6651](https://github.com/yandex/ClickHouse/pull/6651) ([Nikita Vasilev](https://github.com/nikvas0))
* Fixed issues with using `MATERIALIZED` columns and aliases in `MaterializedView`. [#448](https://github.com/yandex/ClickHouse/issues/448) [#3484](https://github.com/yandex/ClickHouse/issues/3484) [#3450](https://github.com/yandex/ClickHouse/issues/3450) [#2878](https://github.com/yandex/ClickHouse/issues/2878) [#2285](https://github.com/yandex/ClickHouse/issues/2285) [#3796](https://github.com/yandex/ClickHouse/pull/3796) ([Amos Bird](https://github.com/amosbird)) [#6316](https://github.com/yandex/ClickHouse/pull/6316) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix `FormatFactory` behaviour for input streams which are not implemented as processors. [#6495](https://github.com/yandex/ClickHouse/pull/6495) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed typo. [#6631](https://github.com/yandex/ClickHouse/pull/6631) ([Alex Ryndin](https://github.com/alexryndin))
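A minimal standalone illustration of the C++ integer promotion behaviour referenced in the signed/unsigned division fix above (illustration only, not part of this commit):

```cpp
#include <iostream>

int main()
{
    int signed_value = -1;
    unsigned int divisor = 2;

    // The usual arithmetic conversions turn the signed operand into an unsigned one,
    // so -1 becomes 4294967295 (for a 32-bit unsigned int) before the division.
    std::cout << signed_value / divisor << '\n'; // prints 2147483647, not 0
    std::cout << signed_value / 2 << '\n';       // plain signed division prints 0
}
```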

@@ -114,8 +114,7 @@
* Fixed an error while parsing a column list from a string if the type contained a comma (this issue was relevant for the `File`, `URL`, `HDFS` storages) [#6217](https://github.com/yandex/ClickHouse/issues/6217). [#6209](https://github.com/yandex/ClickHouse/pull/6209) ([dimarub2000](https://github.com/dimarub2000))

### Security Fix
* Fix two vulnerabilities in codecs in the decompression phase (a malicious user can fabricate compressed data that will lead to a buffer overflow in decompression). [#6670](https://github.com/yandex/ClickHouse/pull/6670) ([Artem Zuikov](https://github.com/4ertus2))
* If an attacker has write access to ZooKeeper and is able to run a custom server available from the network where ClickHouse runs, they can create a custom-built malicious server that will act as a ClickHouse replica and register it in ZooKeeper. When another replica fetches a data part from the malicious replica, it can force clickhouse-server to write to an arbitrary path on the filesystem. Found by Eldar Zaitov, information security team at Yandex. [#6247](https://github.com/yandex/ClickHouse/pull/6247) ([alexey-milovidov](https://github.com/alexey-milovidov))
* This release also contains all security bug fixes from 19.13 and 19.11.
* Fixed the possibility of a fabricated query causing a server crash due to stack overflow in the SQL parser. Fixed the possibility of stack overflow in Merge and Distributed tables, materialized views and conditions for row-level security that involve subqueries. [#6433](https://github.com/yandex/ClickHouse/pull/6433) ([alexey-milovidov](https://github.com/alexey-milovidov))

### Improvement
@@ -230,7 +229,6 @@
* `odbc-bridge.cpp` defines `main()`, so it should not be included in `clickhouse-lib`. [#6538](https://github.com/yandex/ClickHouse/pull/6538) ([Orivej Desh](https://github.com/orivej))
* Test for crash in `FULL|RIGHT JOIN` with nulls in the right table's keys. [#6362](https://github.com/yandex/ClickHouse/pull/6362) ([Artem Zuikov](https://github.com/4ertus2))
* Added a test for the limit on expansion of aliases, just in case. [#6442](https://github.com/yandex/ClickHouse/pull/6442) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added previous declaration checks for MySQL 8 integration. [#6569](https://github.com/yandex/ClickHouse/pull/6569) ([Rafael David Tinoco](https://github.com/rafaeldtinoco))
* Switched from `boost::filesystem` to `std::filesystem` where appropriate. [#6253](https://github.com/yandex/ClickHouse/pull/6253) [#6385](https://github.com/yandex/ClickHouse/pull/6385) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added RPM packages to the website. [#6251](https://github.com/yandex/ClickHouse/pull/6251) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Add a test for the fixed `Unknown identifier` exception in the `IN` section. [#6708](https://github.com/yandex/ClickHouse/pull/6708) ([Artem Zuikov](https://github.com/4ertus2))
@@ -258,7 +256,6 @@
* Fixed tests affected by slow stack trace printing. [#6315](https://github.com/yandex/ClickHouse/pull/6315) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Add a test case for the crash in `groupUniqArray` fixed in [#6029](https://github.com/yandex/ClickHouse/pull/6029). [#4402](https://github.com/yandex/ClickHouse/issues/4402) [#6129](https://github.com/yandex/ClickHouse/pull/6129) ([akuzm](https://github.com/akuzm))
* Fixed index mutation tests. [#6645](https://github.com/yandex/ClickHouse/pull/6645) ([Nikita Vasilev](https://github.com/nikvas0))
* Attempt to fix a performance test. [#6392](https://github.com/yandex/ClickHouse/pull/6392) ([alexey-milovidov](https://github.com/alexey-milovidov))
* In performance tests, do not read the query log for queries we didn't run. [#6427](https://github.com/yandex/ClickHouse/pull/6427) ([akuzm](https://github.com/akuzm))
* A materialized view can now be created with any low cardinality types regardless of the setting about suspicious low cardinality types. [#6428](https://github.com/yandex/ClickHouse/pull/6428) ([Olga Khvostikova](https://github.com/stavrolia))
* Updated tests for the `send_logs_level` setting. [#6207](https://github.com/yandex/ClickHouse/pull/6207) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
@@ -268,12 +265,51 @@
* Fixes for Mac OS build (incomplete). [#6390](https://github.com/yandex/ClickHouse/pull/6390) ([alexey-milovidov](https://github.com/alexey-milovidov)) [#6429](https://github.com/yandex/ClickHouse/pull/6429) ([alex-zaitsev](https://github.com/alex-zaitsev))
* Fix "splitted" build. [#6618](https://github.com/yandex/ClickHouse/pull/6618) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Other build fixes: [#6186](https://github.com/yandex/ClickHouse/pull/6186) ([Amos Bird](https://github.com/amosbird)) [#6486](https://github.com/yandex/ClickHouse/pull/6486) [#6348](https://github.com/yandex/ClickHouse/pull/6348) ([vxider](https://github.com/Vxider)) [#6744](https://github.com/yandex/ClickHouse/pull/6744) ([Ivan](https://github.com/abyss7)) [#6016](https://github.com/yandex/ClickHouse/pull/6016) [#6421](https://github.com/yandex/ClickHouse/pull/6421) [#6491](https://github.com/yandex/ClickHouse/pull/6491) ([proller](https://github.com/proller))
* Fix Kafka tests. [#6805](https://github.com/yandex/ClickHouse/pull/6805) ([Ivan](https://github.com/abyss7))

### Backward Incompatible Change
* Removed the rarely used table function `catBoostPool` and storage `CatBoostPool`. If you have used this table function, please write an email to `clickhouse-feedback@yandex-team.com`. Note that CatBoost integration remains and will be supported. [#6279](https://github.com/yandex/ClickHouse/pull/6279) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Disable `ANY RIGHT JOIN` and `ANY FULL JOIN` by default. Set the `any_join_get_any_from_right_table` setting to enable them. [#5126](https://github.com/yandex/ClickHouse/issues/5126) [#6351](https://github.com/yandex/ClickHouse/pull/6351) ([Artem Zuikov](https://github.com/4ertus2))

## ClickHouse release 19.11.11.57, 2019-09-13
* Fix a logical error causing segfaults when selecting from an empty Kafka topic. [#6902](https://github.com/yandex/ClickHouse/issues/6902) [#6909](https://github.com/yandex/ClickHouse/pull/6909) ([Ivan](https://github.com/abyss7))
* Fixed the function `arrayEnumerateUniqRanked` with empty arrays in parameters. [#6928](https://github.com/yandex/ClickHouse/pull/6928) ([proller](https://github.com/proller))

## ClickHouse release 19.13.4.32, 2019-09-10

### Bug Fix
* This release also contains all security bug fixes from 19.11.9.52 and 19.11.10.54.
* Fixed a data race in the `system.parts` table and `ALTER` query. [#6245](https://github.com/yandex/ClickHouse/issues/6245) [#6513](https://github.com/yandex/ClickHouse/pull/6513) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a mismatched header in streams that happened in case of reading from an empty distributed table with sample and prewhere. [#6167](https://github.com/yandex/ClickHouse/issues/6167) ([Lixiang Qian](https://github.com/fancyqlx)) [#6823](https://github.com/yandex/ClickHouse/pull/6823) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Fixed a crash when using the `IN` clause with a subquery with a tuple. [#6125](https://github.com/yandex/ClickHouse/issues/6125) [#6550](https://github.com/yandex/ClickHouse/pull/6550) ([tavplubix](https://github.com/tavplubix))
* Fix the case of identical column names in the `GLOBAL JOIN ON` section. [#6181](https://github.com/yandex/ClickHouse/pull/6181) ([Artem Zuikov](https://github.com/4ertus2))
* Fix crash when casting types to `Decimal` that do not support it. Throw an exception instead. [#6297](https://github.com/yandex/ClickHouse/pull/6297) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a crash in the `extractAll()` function. [#6644](https://github.com/yandex/ClickHouse/pull/6644) ([Artem Zuikov](https://github.com/4ertus2))
* Query transformation for the `MySQL`, `ODBC`, `JDBC` table functions now works properly for `SELECT WHERE` queries with multiple `AND` expressions. [#6381](https://github.com/yandex/ClickHouse/issues/6381) [#6676](https://github.com/yandex/ClickHouse/pull/6676) ([dimarub2000](https://github.com/dimarub2000))
* Added previous declaration checks for MySQL 8 integration. [#6569](https://github.com/yandex/ClickHouse/pull/6569) ([Rafael David Tinoco](https://github.com/rafaeldtinoco))

### Security Fix
* Fix two vulnerabilities in codecs in the decompression phase (a malicious user can fabricate compressed data that will lead to a buffer overflow in decompression). [#6670](https://github.com/yandex/ClickHouse/pull/6670) ([Artem Zuikov](https://github.com/4ertus2))

## ClickHouse release 19.11.10.54, 2019-09-10

### Bug Fix
* Store offsets for Kafka messages manually to be able to commit them all at once for all partitions. Fixes potential duplication in the "one consumer - many partitions" scenario. [#6872](https://github.com/yandex/ClickHouse/pull/6872) ([Ivan](https://github.com/abyss7))

## ClickHouse release 19.11.9.52, 2019-09-06
* Improve error handling in cache dictionaries. [#6737](https://github.com/yandex/ClickHouse/pull/6737) ([Vitaly Baranov](https://github.com/vitlibar))
* Fixed a bug in the function `arrayEnumerateUniqRanked`. [#6779](https://github.com/yandex/ClickHouse/pull/6779) ([proller](https://github.com/proller))
* Fix the `JSONExtract` function while extracting a `Tuple` from JSON. [#6718](https://github.com/yandex/ClickHouse/pull/6718) ([Vitaly Baranov](https://github.com/vitlibar))
* Fixed possible data loss after an `ALTER DELETE` query on a table with a skipping index. [#6224](https://github.com/yandex/ClickHouse/issues/6224) [#6282](https://github.com/yandex/ClickHouse/pull/6282) ([Nikita Vasilev](https://github.com/nikvas0))
* Fixed a performance test. [#6392](https://github.com/yandex/ClickHouse/pull/6392) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Parquet: fix reading boolean columns. [#6579](https://github.com/yandex/ClickHouse/pull/6579) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed wrong behaviour of the `nullIf` function for constant arguments. [#6518](https://github.com/yandex/ClickHouse/pull/6518) ([Guillaume Tassery](https://github.com/YiuRULE)) [#6580](https://github.com/yandex/ClickHouse/pull/6580) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix Kafka messages duplication problem on normal server restart. [#6597](https://github.com/yandex/ClickHouse/pull/6597) ([Ivan](https://github.com/abyss7))
* Fixed an issue when a long `ALTER UPDATE` or `ALTER DELETE` may prevent regular merges from running. Prevent mutations from executing if there are not enough free threads available. [#6502](https://github.com/yandex/ClickHouse/issues/6502) [#6617](https://github.com/yandex/ClickHouse/pull/6617) ([tavplubix](https://github.com/tavplubix))
* Fixed an error with processing "timezone" in the server configuration file. [#6709](https://github.com/yandex/ClickHouse/pull/6709) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix Kafka tests. [#6805](https://github.com/yandex/ClickHouse/pull/6805) ([Ivan](https://github.com/abyss7))

### Security Fix
* If an attacker has write access to ZooKeeper and is able to run a custom server available from the network where ClickHouse runs, they can create a custom-built malicious server that will act as a ClickHouse replica and register it in ZooKeeper. When another replica fetches a data part from the malicious replica, it can force clickhouse-server to write to an arbitrary path on the filesystem. Found by Eldar Zaitov, information security team at Yandex. [#6247](https://github.com/yandex/ClickHouse/pull/6247) ([alexey-milovidov](https://github.com/alexey-milovidov))

## ClickHouse release 19.13.3.26, 2019-08-22

@@ -284,6 +320,10 @@
* Fixed an issue with parsing CSV. [#6426](https://github.com/yandex/ClickHouse/issues/6426) [#6559](https://github.com/yandex/ClickHouse/pull/6559) ([tavplubix](https://github.com/tavplubix))
* Fixed a data race in the system.parts table and ALTER query. This fixes [#6245](https://github.com/yandex/ClickHouse/issues/6245). [#6513](https://github.com/yandex/ClickHouse/pull/6513) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed wrong code in mutations that may lead to memory corruption. Fixed a segfault with read of address `0x14c0` that may happen due to concurrent `DROP TABLE` and `SELECT` from `system.parts` or `system.parts_columns`. Fixed a race condition in preparation of mutation queries. Fixed a deadlock caused by `OPTIMIZE` of Replicated tables and concurrent modification operations like ALTERs. [#6514](https://github.com/yandex/ClickHouse/pull/6514) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed possible data loss after an `ALTER DELETE` query on a table with a skipping index. [#6224](https://github.com/yandex/ClickHouse/issues/6224) [#6282](https://github.com/yandex/ClickHouse/pull/6282) ([Nikita Vasilev](https://github.com/nikvas0))

### Security Fix
* If an attacker has write access to ZooKeeper and is able to run a custom server available from the network where ClickHouse runs, they can create a custom-built malicious server that will act as a ClickHouse replica and register it in ZooKeeper. When another replica fetches a data part from the malicious replica, it can force clickhouse-server to write to an arbitrary path on the filesystem. Found by Eldar Zaitov, information security team at Yandex. [#6247](https://github.com/yandex/ClickHouse/pull/6247) ([alexey-milovidov](https://github.com/alexey-milovidov))

## ClickHouse release 19.13.2.19, 2019-08-14

@@ -77,6 +77,14 @@ if (USE_STATIC_LIBRARIES)
list(REVERSE CMAKE_FIND_LIBRARY_SUFFIXES)
endif ()

option (ENABLE_FUZZING "Enables fuzzing instrumentation" OFF)

if (ENABLE_FUZZING)
message (STATUS "Fuzzing instrumentation enabled")
set (WITH_COVERAGE ON)
set (SANITIZE "libfuzzer")
endif()

include (cmake/sanitize.cmake)

@@ -42,6 +42,19 @@ if (SANITIZE)
if (MAKE_STATIC_LIBRARIES AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libubsan")
endif ()

elseif (SANITIZE STREQUAL "libfuzzer")
# NOTE: Eldar Zaitov decided to name it "libfuzzer" instead of "fuzzer" to keep in mind another possible fuzzer backends.
# NOTE: no-link means that all the targets are built with instrumentation for fuzzer, but only some of them (tests) have entry point for fuzzer and it's not checked.
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_FLAGS} -fsanitize=fuzzer-no-link,address,undefined -fsanitize-address-use-after-scope")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_FLAGS} -fsanitize=fuzzer-no-link,address,undefined -fsanitize-address-use-after-scope")
if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=fuzzer-no-link,address,undefined -fsanitize-address-use-after-scope")
endif()
if (MAKE_STATIC_LIBRARIES AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -static-libasan -static-libubsan")
endif ()
set (LIBFUZZER_CMAKE_CXX_FLAGS "-fsanitize=fuzzer,address,undefined -fsanitize-address-use-after-scope")
else ()
message (FATAL_ERROR "Unknown sanitizer type: ${SANITIZE}")
endif ()

@@ -41,9 +41,9 @@ endif()
option (LINKER_NAME "Linker name or full path")
if (NOT LINKER_NAME)
if (COMPILER_CLANG AND LLD_PATH)
set (LINKER_NAME "lld")
set (LINKER_NAME NAMES "lld")
elseif (GOLD_PATH)
set (LINKER_NAME "gold")
set (LINKER_NAME NAMES "ld.gold" "gold")
endif ()
endif ()

@@ -21,6 +21,7 @@ MetricsTransmitter::MetricsTransmitter(
{
interval_seconds = config.getInt(config_name + ".interval", 60);
send_events = config.getBool(config_name + ".events", true);
send_events_cumulative = config.getBool(config_name + ".events_cumulative", false);
send_metrics = config.getBool(config_name + ".metrics", true);
send_asynchronous_metrics = config.getBool(config_name + ".asynchronous_metrics", true);
}
@@ -95,6 +96,16 @@ void MetricsTransmitter::transmit(std::vector<ProfileEvents::Count> & prev_count
}
}

if (send_events_cumulative)
{
for (size_t i = 0, end = ProfileEvents::end(); i < end; ++i)
{
const auto counter = ProfileEvents::global_counters[i].load(std::memory_order_relaxed);
std::string key{ProfileEvents::getName(static_cast<ProfileEvents::Event>(i))};
key_vals.emplace_back(profile_events_cumulative_path_prefix + key, counter);
}
}

if (send_metrics)
{
for (size_t i = 0, end = CurrentMetrics::end(); i < end; ++i)

@@ -24,7 +24,8 @@ class AsynchronousMetrics;

/** Automatically sends
* - difference of ProfileEvents;
* - delta values of ProfileEvents;
* - cumulative values of ProfileEvents;
* - values of CurrentMetrics;
* - values of AsynchronousMetrics;
* to Graphite at beginning of every minute.
@@ -44,6 +45,7 @@ private:
std::string config_name;
UInt32 interval_seconds;
bool send_events;
bool send_events_cumulative;
bool send_metrics;
bool send_asynchronous_metrics;

@@ -53,6 +55,7 @@ private:
ThreadFromGlobalPool thread{&MetricsTransmitter::run, this};

static inline constexpr auto profile_events_path_prefix = "ClickHouse.ProfileEvents.";
static inline constexpr auto profile_events_cumulative_path_prefix = "ClickHouse.ProfileEventsCumulative.";
static inline constexpr auto current_metrics_path_prefix = "ClickHouse.Metrics.";
static inline constexpr auto asynchronous_metrics_path_prefix = "ClickHouse.AsynchronousMetrics.";
};
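// Illustrative sketch (not part of this commit) of what the new events_cumulative option selects.
// ProfileEvents counters only ever grow, so the per-interval figure is the difference against the
// previously transmitted value, while the cumulative figure is the raw counter itself, as in the
// transmit() loop above. The event name "Query" and the numbers are made up for the example.
#include <atomic>
#include <iostream>

int main()
{
    std::atomic<unsigned long long> query_counter{0}; // stands in for ProfileEvents::global_counters[i]
    unsigned long long prev = 0;                      // stands in for prev_counters[i]

    for (int interval = 0; interval < 3; ++interval)
    {
        query_counter += 10 * (interval + 1); // pretend some queries ran during this interval

        const auto current = query_counter.load(std::memory_order_relaxed);
        std::cout << "ClickHouse.ProfileEvents.Query " << current - prev << '\n';    // delta
        std::cout << "ClickHouse.ProfileEventsCumulative.Query " << current << '\n'; // cumulative
        prev = current;
    }
}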

@@ -258,6 +258,7 @@

<metrics>true</metrics>
<events>true</events>
<events_cumulative>false</events_cumulative>
<asynchronous_metrics>true</asynchronous_metrics>
</graphite>
<graphite>
@@ -269,6 +270,7 @@

<metrics>true</metrics>
<events>true</events>
<events_cumulative>false</events_cumulative>
<asynchronous_metrics>false</asynchronous_metrics>
</graphite>
-->

@@ -63,7 +63,12 @@ public:
roaring_bitmap_add(rb, value);
}

UInt64 size() const { return isSmall() ? small.size() : roaring_bitmap_get_cardinality(rb); }
UInt64 size() const
{
return isSmall()
? small.size()
: roaring_bitmap_get_cardinality(rb);
}

void merge(const RoaringBitmapWithSmallSet & r1)
{
@@ -91,7 +96,7 @@ public:
std::string s;
readStringBinary(s,in);
rb = roaring_bitmap_portable_deserialize(s.c_str());
for (const auto & x : small) //merge from small
for (const auto & x : small) // merge from small
roaring_bitmap_add(rb, x.getValue());
}
else
@@ -245,13 +250,13 @@ public:
{
for (const auto & x : small)
if (r1.small.find(x.getValue()) != r1.small.end())
retSize++;
++retSize;
}
else if (isSmall() && r1.isLarge())
{
for (const auto & x : small)
if (roaring_bitmap_contains(r1.rb, x.getValue()))
retSize++;
++retSize;
}
else
{
@@ -391,8 +396,7 @@ public:
*/
UInt8 rb_contains(const UInt32 x) const
{
return isSmall() ? small.find(x) != small.end() :
roaring_bitmap_contains(rb, x);
return isSmall() ? small.find(x) != small.end() : roaring_bitmap_contains(rb, x);
}

/**
@@ -460,21 +464,20 @@ public:
/**
* Return new set with specified range (not include the range_end)
*/
UInt64 rb_range(UInt32 range_start, UInt32 range_end, RoaringBitmapWithSmallSet& r1) const
UInt64 rb_range(UInt32 range_start, UInt32 range_end, RoaringBitmapWithSmallSet & r1) const
{
UInt64 count = 0;
if (range_start >= range_end)
return count;
if (isSmall())
{
std::vector<T> ans;
for (const auto & x : small)
{
T val = x.getValue();
if ((UInt32)val >= range_start && (UInt32)val < range_end)
if (UInt32(val) >= range_start && UInt32(val) < range_end)
{
r1.add(val);
count++;
++count;
}
}
}
@@ -483,13 +486,50 @@ public:
roaring_uint32_iterator_t iterator;
roaring_init_iterator(rb, &iterator);
roaring_move_uint32_iterator_equalorlarger(&iterator, range_start);
while (iterator.has_value)
while (iterator.has_value && UInt32(iterator.current_value) < range_end)
{
if ((UInt32)iterator.current_value >= range_end)
break;
r1.add(iterator.current_value);
roaring_advance_uint32_iterator(&iterator);
count++;
++count;
}
}
return count;
}

/**
* Return new set of the smallest `limit` values in set which is no less than `range_start`.
*/
UInt64 rb_limit(UInt32 range_start, UInt32 limit, RoaringBitmapWithSmallSet & r1) const
{
UInt64 count = 0;
if (isSmall())
{
std::vector<T> ans;
for (const auto & x : small)
{
T val = x.getValue();
if (UInt32(val) >= range_start)
{
ans.push_back(val);
}
}
sort(ans.begin(), ans.end());
if (limit > ans.size())
limit = ans.size();
for (size_t i = 0; i < limit; ++i)
r1.add(ans[i]);
count = UInt64(limit);
}
else
{
roaring_uint32_iterator_t iterator;
roaring_init_iterator(rb, &iterator);
roaring_move_uint32_iterator_equalorlarger(&iterator, range_start);
while (UInt32(count) < limit && iterator.has_value)
{
r1.add(iterator.current_value);
roaring_advance_uint32_iterator(&iterator);
++count;
}
}
return count;
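// Behavioural sketch (not ClickHouse code) of the semantics of rb_range and rb_limit above,
// restated with std::set so it can be compiled standalone: rb_range collects the values in
// [range_start, range_end), rb_limit collects the smallest `limit` values >= range_start.
#include <cstdint>
#include <iostream>
#include <set>

std::set<uint32_t> range_of(const std::set<uint32_t> & src, uint32_t range_start, uint32_t range_end)
{
    std::set<uint32_t> result;
    for (auto it = src.lower_bound(range_start); it != src.end() && *it < range_end; ++it)
        result.insert(*it);
    return result;
}

std::set<uint32_t> limit_of(const std::set<uint32_t> & src, uint32_t range_start, uint32_t limit)
{
    std::set<uint32_t> result;
    auto it = src.lower_bound(range_start);
    for (uint32_t taken = 0; taken < limit && it != src.end(); ++taken, ++it)
        result.insert(*it);
    return result;
}

int main()
{
    const std::set<uint32_t> values{1, 5, 7, 12, 40};
    for (uint32_t v : range_of(values, 5, 13))
        std::cout << v << ' ';   // prints: 5 7 12
    std::cout << '\n';
    for (uint32_t v : limit_of(values, 5, 2))
        std::cout << v << ' ';   // prints: 5 7
    std::cout << '\n';
}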

@@ -552,8 +592,8 @@ private:
readBinary(val, dbBuf);
container = containerptr_roaring_bitmap_add(r, val, &typecode, &containerindex);
prev = val;
i++;
for (; i < n_args; i++)
++i;
for (; i < n_args; ++i)
{
readBinary(val, dbBuf);
if (((prev ^ val) >> 16) == 0)

@@ -168,6 +168,12 @@ void ColumnNullable::insertRangeFromNotNullable(const IColumn & src, size_t star
getNullMapData().resize_fill(getNullMapData().size() + length, 0);
}

void ColumnNullable::insertManyFromNotNullable(const IColumn & src, size_t position, size_t length)
{
for (size_t i = 0; i < length; ++i)
insertFromNotNullable(src, position);
}

void ColumnNullable::popBack(size_t n)
{
getNestedColumn().popBack(n);

@@ -63,6 +63,7 @@ public:

void insertFromNotNullable(const IColumn & src, size_t n);
void insertRangeFromNotNullable(const IColumn & src, size_t start, size_t length);
void insertManyFromNotNullable(const IColumn & src, size_t position, size_t length);

void insertDefault() override
{

@@ -146,6 +146,13 @@ public:
/// Could be used to concatenate columns.
virtual void insertRangeFrom(const IColumn & src, size_t start, size_t length) = 0;

/// Appends one element from other column with the same type multiple times.
virtual void insertManyFrom(const IColumn & src, size_t position, size_t length)
{
for (size_t i = 0; i < length; ++i)
insertFrom(src, position);
}

/// Appends data located in specified memory chunk if it is possible (throws an exception if it cannot be implemented).
/// Is used to optimize some computations (in aggregation, for example).
/// Parameter length could be ignored if column values have fixed size.
@@ -157,6 +164,13 @@ public:
/// For example, ColumnNullable(Nested) absolutely ignores values of nested column if it is marked as NULL.
virtual void insertDefault() = 0;

/// Appends "default value" multiple times.
virtual void insertManyDefaults(size_t length)
{
for (size_t i = 0; i < length; ++i)
insertDefault();
}
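// Standalone sketch (not ClickHouse code) of the pattern used by insertManyFrom and
// insertManyDefaults above: the interface ships a generic loop as the default behaviour,
// and a concrete column may override it with a bulk operation when repetition is cheap.
#include <cstddef>
#include <iostream>
#include <vector>

struct ExampleColumn
{
    virtual ~ExampleColumn() = default;
    virtual void insertValue(int value) = 0;

    /// Default: append the same value `length` times, one element at a time.
    virtual void insertManyValues(int value, std::size_t length)
    {
        for (std::size_t i = 0; i < length; ++i)
            insertValue(value);
    }
};

struct VectorColumn : ExampleColumn
{
    std::vector<int> data;

    void insertValue(int value) override { data.push_back(value); }

    /// Override: one bulk insert instead of `length` virtual calls.
    void insertManyValues(int value, std::size_t length) override { data.insert(data.end(), length, value); }
};

int main()
{
    VectorColumn column;
    column.insertManyValues(42, 5);
    std::cout << column.data.size() << '\n'; // prints 5
}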

/** Removes last n elements.
* Is used to support exception-safety of several operations.
* For example, sometimes insertion should be reverted if we catch an exception during operation processing.

@@ -17,26 +17,41 @@ namespace DB
}
}

#if defined(OS_LINUX)
static thread_local void * stack_address = nullptr;
static thread_local size_t max_stack_size = 0;
#endif

void checkStackSize()
{
#if defined(OS_LINUX)
using namespace DB;

if (!stack_address)
{
#if defined(OS_DARWIN)
// pthread_get_stacksize_np() returns a value too low for the main thread on
// OSX 10.9, http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-October/011369.html
//
// Multiple workarounds possible, adopt the one made by https://github.com/robovm/robovm/issues/274
// https://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/Multithreading/CreatingThreads/CreatingThreads.html
// Stack size for the main thread is 8MB on OSX excluding the guard page size.
pthread_t thread = pthread_self();
max_stack_size = pthread_main_np() ? (8 * 1024 * 1024) : pthread_get_stacksize_np(thread);
stack_address = pthread_get_stackaddr_np(thread);
#else
pthread_attr_t attr;
# if defined(__FreeBSD__)
pthread_attr_init(&attr);
if (0 != pthread_attr_get_np(pthread_self(), &attr))
throwFromErrno("Cannot pthread_attr_get_np", ErrorCodes::CANNOT_PTHREAD_ATTR);
# else
if (0 != pthread_getattr_np(pthread_self(), &attr))
throwFromErrno("Cannot pthread_getattr_np", ErrorCodes::CANNOT_PTHREAD_ATTR);
# endif

SCOPE_EXIT({ pthread_attr_destroy(&attr); });

if (0 != pthread_attr_getstack(&attr, &stack_address, &max_stack_size))
throwFromErrno("Cannot pthread_getattr_np", ErrorCodes::CANNOT_PTHREAD_ATTR);
#endif // OS_DARWIN
}

const void * frame_address = __builtin_frame_address(0);
@@ -61,5 +76,4 @@ void checkStackSize()
<< ", maximum stack size: " << max_stack_size;
throw Exception(message.str(), ErrorCodes::TOO_DEEP_RECURSION);
}
#endif
}
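// Standalone, Linux/glibc-only sketch (not ClickHouse code) of the same bookkeeping and of how
// such a guard is typically used: obtain the thread's stack bounds once via pthread_getattr_np /
// pthread_attr_getstack, then compare the current frame address against them before recursing
// further, so deep recursion throws instead of crashing. ClickHouse calls checkStackSize() from
// recursive code such as the SQL parser for the same reason. The 50% threshold below is arbitrary.
#include <pthread.h>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <stdexcept>

static void * stack_base = nullptr;
static std::size_t stack_size = 0;

static void checkStackSketch()
{
    if (!stack_base)
    {
        pthread_attr_t attr;
        if (0 != pthread_getattr_np(pthread_self(), &attr))
            throw std::runtime_error("pthread_getattr_np failed");
        pthread_attr_getstack(&attr, &stack_base, &stack_size);
        pthread_attr_destroy(&attr);
    }

    const void * frame_address = __builtin_frame_address(0);
    // The stack grows downwards: bytes used = top of stack - current frame address.
    const std::size_t used = std::uintptr_t(stack_base) + stack_size - std::uintptr_t(frame_address);
    if (used > stack_size / 2)
        throw std::runtime_error("too deep recursion");
}

static std::size_t recurse(std::size_t depth)
{
    checkStackSketch();
    volatile char padding[16384]; // make every frame noticeably large
    padding[0] = 1;
    if (depth > 1000000)          // never reached: the guard throws long before this on an 8 MiB stack
        return depth;
    return recurse(depth + 1) + padding[0];
}

int main()
try
{
    recurse(1);
}
catch (const std::exception & e)
{
    std::cout << e.what() << '\n'; // prints "too deep recursion" instead of overflowing the stack
}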

@@ -3,3 +3,9 @@ target_link_libraries (compressed_buffer PRIVATE dbms)

add_executable (cached_compressed_read_buffer cached_compressed_read_buffer.cpp)
target_link_libraries (cached_compressed_read_buffer PRIVATE dbms)

if (ENABLE_FUZZING)
add_executable (compressed_buffer_fuzz compressed_buffer_fuzz.cpp)
target_link_libraries (compressed_buffer_fuzz PRIVATE dbms)
set_target_properties(compressed_buffer_fuzz PROPERTIES LINK_FLAGS ${LIBFUZZER_CMAKE_CXX_FLAGS})
endif ()

dbms/src/Compression/tests/compressed_buffer_fuzz.cpp (new file, 22 lines)
@@ -0,0 +1,22 @@
#include <iostream>
#include <IO/ReadBufferFromMemory.h>
#include <Compression/CompressedReadBuffer.h>
#include <Common/Exception.h>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t * data, size_t size)
try
{
DB::ReadBufferFromMemory from(data, size);
DB::CompressedReadBuffer in{from};

while (!in.eof())
in.next();

return 0;
}
catch (...)
{
std::cerr << DB::getCurrentExceptionMessage(true) << std::endl;
return 1;
}

@@ -288,7 +288,7 @@ struct Settings : public SettingsCollection<Settings>
M(SettingUInt64, max_bytes_in_join, 0, "Maximum size of the hash table for JOIN (in number of bytes in memory).") \
M(SettingOverflowMode, join_overflow_mode, OverflowMode::THROW, "What to do when the limit is exceeded.") \
M(SettingBool, join_any_take_last_row, false, "When disabled (default) ANY JOIN will take the first found row for a key. When enabled, it will take the last row seen if there are multiple rows for the same key.") \
M(SettingBool, partial_merge_join, false, "Use partial merge join instead of hash join if possible.") \
M(SettingBool, partial_merge_join, false, "Use partial merge join instead of hash join for LEFT and INNER JOINs.") \
\
M(SettingUInt64, max_rows_to_transfer, 0, "Maximum size (in rows) of the transmitted external table obtained when the GLOBAL IN/JOIN section is executed.") \
M(SettingUInt64, max_bytes_to_transfer, 0, "Maximum size (in uncompressed bytes) of the transmitted external table obtained when the GLOBAL IN/JOIN section is executed.") \

@@ -16,7 +16,7 @@ SquashingTransform::Result SquashingTransform::add(MutableColumns && columns)
if (columns.empty())
return Result(std::move(accumulated_columns));

/// Just read block is alredy enough.
/// Just read block is already enough.
if (isEnoughSize(columns))
{
/// If no accumulated data, return just read block.

@@ -742,6 +742,74 @@ void DataTypeLowCardinality::deserializeBinary(Field & field, ReadBuffer & istr)
dictionary_type->deserializeBinary(field, istr);
}

void DataTypeLowCardinality::serializeBinary(const IColumn & column, size_t row_num, WriteBuffer & ostr) const
{
serializeImpl(column, row_num, &IDataType::serializeBinary, ostr);
}
void DataTypeLowCardinality::deserializeBinary(IColumn & column, ReadBuffer & istr) const
{
deserializeImpl(column, &IDataType::deserializeBinary, istr);
}

void DataTypeLowCardinality::serializeTextEscaped(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
serializeImpl(column, row_num, &IDataType::serializeAsTextEscaped, ostr, settings);
}

void DataTypeLowCardinality::deserializeTextEscaped(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const
{
deserializeImpl(column, &IDataType::deserializeAsTextEscaped, istr, settings);
}

void DataTypeLowCardinality::serializeTextQuoted(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
serializeImpl(column, row_num, &IDataType::serializeAsTextQuoted, ostr, settings);
}

void DataTypeLowCardinality::deserializeTextQuoted(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const
{
deserializeImpl(column, &IDataType::deserializeAsTextQuoted, istr, settings);
}

void DataTypeLowCardinality::deserializeWholeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const
{
deserializeImpl(column, &IDataType::deserializeAsTextEscaped, istr, settings);
}

void DataTypeLowCardinality::serializeTextCSV(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
serializeImpl(column, row_num, &IDataType::serializeAsTextCSV, ostr, settings);
}

void DataTypeLowCardinality::deserializeTextCSV(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const
{
deserializeImpl(column, &IDataType::deserializeAsTextCSV, istr, settings);
}

void DataTypeLowCardinality::serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
serializeImpl(column, row_num, &IDataType::serializeAsText, ostr, settings);
}

void DataTypeLowCardinality::serializeTextJSON(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
serializeImpl(column, row_num, &IDataType::serializeAsTextJSON, ostr, settings);
}
void DataTypeLowCardinality::deserializeTextJSON(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const
{
deserializeImpl(column, &IDataType::deserializeAsTextJSON, istr, settings);
}

void DataTypeLowCardinality::serializeTextXML(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
serializeImpl(column, row_num, &IDataType::serializeAsTextXML, ostr, settings);
}

void DataTypeLowCardinality::serializeProtobuf(const IColumn & column, size_t row_num, ProtobufWriter & protobuf, size_t & value_index) const
{
serializeImpl(column, row_num, &IDataType::serializeProtobuf, protobuf, value_index);
}

void DataTypeLowCardinality::deserializeProtobuf(IColumn & column, ProtobufReader & protobuf, bool allow_add_row, bool & row_added) const
{
if (allow_add_row)
@ -51,75 +51,20 @@ public:
|
||||
|
||||
void serializeBinary(const Field & field, WriteBuffer & ostr) const override;
|
||||
void deserializeBinary(Field & field, ReadBuffer & istr) const override;
|
||||
|
||||
void serializeBinary(const IColumn & column, size_t row_num, WriteBuffer & ostr) const override
|
||||
{
|
||||
serializeImpl(column, row_num, &IDataType::serializeBinary, ostr);
|
||||
}
|
||||
void deserializeBinary(IColumn & column, ReadBuffer & istr) const override
|
||||
{
|
||||
deserializeImpl(column, &IDataType::deserializeBinary, istr);
|
||||
}
|
||||
|
||||
void serializeTextEscaped(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
|
||||
{
|
||||
serializeImpl(column, row_num, &IDataType::serializeAsTextEscaped, ostr, settings);
|
||||
}
|
||||
|
||||
void deserializeTextEscaped(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
|
||||
{
|
||||
deserializeImpl(column, &IDataType::deserializeAsTextEscaped, istr, settings);
|
||||
}
|
||||
|
||||
void serializeTextQuoted(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
|
||||
{
|
||||
serializeImpl(column, row_num, &IDataType::serializeAsTextQuoted, ostr, settings);
|
||||
}
|
||||
|
||||
void deserializeTextQuoted(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
|
||||
{
|
||||
deserializeImpl(column, &IDataType::deserializeAsTextQuoted, istr, settings);
|
||||
}
|
||||
|
||||
void deserializeWholeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
|
||||
{
|
||||
deserializeImpl(column, &IDataType::deserializeAsTextEscaped, istr, settings);
|
||||
}
|
||||
|
||||
void serializeTextCSV(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
|
||||
{
|
||||
serializeImpl(column, row_num, &IDataType::serializeAsTextCSV, ostr, settings);
|
||||
}
|
||||
|
||||
void deserializeTextCSV(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
|
||||
{
|
||||
deserializeImpl(column, &IDataType::deserializeAsTextCSV, istr, settings);
|
||||
}
|
||||
|
||||
void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
|
||||
{
|
||||
serializeImpl(column, row_num, &IDataType::serializeAsText, ostr, settings);
|
||||
}
|
||||
|
||||
void serializeTextJSON(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
|
||||
{
|
||||
serializeImpl(column, row_num, &IDataType::serializeAsTextJSON, ostr, settings);
|
||||
}
|
||||
void deserializeTextJSON(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
|
||||
{
|
||||
deserializeImpl(column, &IDataType::deserializeAsTextJSON, istr, settings);
|
||||
}
|
||||
|
||||
void serializeTextXML(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
|
||||
{
|
||||
serializeImpl(column, row_num, &IDataType::serializeAsTextXML, ostr, settings);
|
||||
}
|
||||
|
||||
void serializeProtobuf(const IColumn & column, size_t row_num, ProtobufWriter & protobuf, size_t & value_index) const override
|
||||
{
|
||||
serializeImpl(column, row_num, &IDataType::serializeProtobuf, protobuf, value_index);
|
||||
}
|
||||
|
||||
void serializeBinary(const IColumn & column, size_t row_num, WriteBuffer & ostr) const override;
|
||||
void deserializeBinary(IColumn & column, ReadBuffer & istr) const override;
|
||||
void serializeTextEscaped(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override;
|
||||
void deserializeTextEscaped(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override;
|
||||
void serializeTextQuoted(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override;
|
||||
void deserializeTextQuoted(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override;
|
||||
void deserializeWholeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override;
|
||||
void serializeTextCSV(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override;
|
||||
void deserializeTextCSV(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override;
|
||||
void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override;
|
||||
void serializeTextJSON(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override;
|
||||
void deserializeTextJSON(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override;
|
||||
void serializeTextXML(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override;
|
||||
void serializeProtobuf(const IColumn & column, size_t row_num, ProtobufWriter & protobuf, size_t & value_index) const override;
|
||||
void deserializeProtobuf(IColumn & column, ProtobufReader & protobuf, bool allow_add_row, bool & row_added) const override;
|
||||
|
||||
MutableColumnPtr createColumn() const override;
|
||||
|
@ -40,3 +40,5 @@ if(USE_POCO_MONGODB)
|
||||
endif()
|
||||
|
||||
add_subdirectory(Embedded)
|
||||
|
||||
target_include_directories(clickhouse_dictionaries SYSTEM PRIVATE ${SPARSEHASH_INCLUDE_DIR})
|
||||
|
@ -3,6 +3,21 @@
|
||||
#include "DictionaryBlockInputStream.h"
|
||||
#include "DictionaryFactory.h"
|
||||
|
||||
namespace
|
||||
{
|
||||
|
||||
/// NOTE: Trailing return type is explicitly specified for SFINAE.
|
||||
|
||||
/// google::sparse_hash_map
|
||||
template <typename T> auto first(const T & value) -> decltype(value.first) { return value.first; }
|
||||
template <typename T> auto second(const T & value) -> decltype(value.second) { return value.second; }
|
||||
|
||||
/// HashMap
|
||||
template <typename T> auto first(const T & value) -> decltype(value.getFirst()) { return value.getFirst(); }
|
||||
template <typename T> auto second(const T & value) -> decltype(value.getSecond()) { return value.getSecond(); }
|
||||
|
||||
}
|
||||
|
||||
namespace DB
|
||||
{
|
||||
namespace ErrorCodes
|
||||
@ -21,12 +36,14 @@ HashedDictionary::HashedDictionary(
|
||||
DictionarySourcePtr source_ptr_,
|
||||
const DictionaryLifetime dict_lifetime_,
|
||||
bool require_nonempty_,
|
||||
bool sparse_,
|
||||
BlockPtr saved_block_)
|
||||
: name{name_}
|
||||
, dict_struct(dict_struct_)
|
||||
, source_ptr{std::move(source_ptr_)}
|
||||
, dict_lifetime(dict_lifetime_)
|
||||
, require_nonempty(require_nonempty_)
|
||||
, sparse(sparse_)
|
||||
, saved_block{std::move(saved_block_)}
|
||||
{
|
||||
createAttributes();
|
||||
@ -57,11 +74,10 @@ static inline HashedDictionary::Key getAt(const HashedDictionary::Key & value, c
|
||||
return value;
|
||||
}
|
||||
|
||||
template <typename ChildType, typename AncestorType>
|
||||
void HashedDictionary::isInImpl(const ChildType & child_ids, const AncestorType & ancestor_ids, PaddedPODArray<UInt8> & out) const
|
||||
template <typename AttrType, typename ChildType, typename AncestorType>
|
||||
void HashedDictionary::isInAttrImpl(const AttrType & attr, const ChildType & child_ids, const AncestorType & ancestor_ids, PaddedPODArray<UInt8> & out) const
|
||||
{
|
||||
const auto null_value = std::get<UInt64>(hierarchical_attribute->null_values);
|
||||
const auto & attr = *std::get<CollectionPtrType<Key>>(hierarchical_attribute->maps);
|
||||
const auto rows = out.size();
|
||||
|
||||
for (const auto row : ext::range(0, rows))
|
||||
@ -73,7 +89,7 @@ void HashedDictionary::isInImpl(const ChildType & child_ids, const AncestorType
|
||||
{
|
||||
auto it = attr.find(id);
|
||||
if (it != std::end(attr))
|
||||
id = it->getSecond();
|
||||
id = second(*it);
|
||||
else
|
||||
break;
|
||||
}
|
||||
@ -83,6 +99,13 @@ void HashedDictionary::isInImpl(const ChildType & child_ids, const AncestorType
|
||||
|
||||
query_count.fetch_add(rows, std::memory_order_relaxed);
|
||||
}
|
||||
template <typename ChildType, typename AncestorType>
|
||||
void HashedDictionary::isInImpl(const ChildType & child_ids, const AncestorType & ancestor_ids, PaddedPODArray<UInt8> & out) const
|
||||
{
|
||||
if (!sparse)
|
||||
return isInAttrImpl(*std::get<CollectionPtrType<Key>>(hierarchical_attribute->maps), child_ids, ancestor_ids, out);
|
||||
return isInAttrImpl(*std::get<SparseCollectionPtrType<Key>>(hierarchical_attribute->sparse_maps), child_ids, ancestor_ids, out);
|
||||
}
|
||||
|
||||
void HashedDictionary::isInVectorVector(
|
||||
const PaddedPODArray<Key> & child_ids, const PaddedPODArray<Key> & ancestor_ids, PaddedPODArray<UInt8> & out) const
|
||||
@ -407,9 +430,22 @@ void HashedDictionary::loadData()
|
||||
template <typename T>
|
||||
void HashedDictionary::addAttributeSize(const Attribute & attribute)
|
||||
{
|
||||
const auto & map_ref = std::get<CollectionPtrType<T>>(attribute.maps);
|
||||
bytes_allocated += sizeof(CollectionType<T>) + map_ref->getBufferSizeInBytes();
|
||||
bucket_count = map_ref->getBufferSizeInCells();
|
||||
if (!sparse)
|
||||
{
|
||||
const auto & map_ref = std::get<CollectionPtrType<T>>(attribute.maps);
|
||||
bytes_allocated += sizeof(CollectionType<T>) + map_ref->getBufferSizeInBytes();
|
||||
bucket_count = map_ref->getBufferSizeInCells();
|
||||
}
|
||||
else
|
||||
{
|
||||
const auto & map_ref = std::get<SparseCollectionPtrType<T>>(attribute.sparse_maps);
|
||||
bucket_count = map_ref->bucket_count();
|
||||
|
||||
/** TODO: more accurate calculation */
|
||||
bytes_allocated += sizeof(CollectionType<T>);
|
||||
bytes_allocated += bucket_count;
|
||||
bytes_allocated += map_ref->size() * sizeof(Key) * sizeof(T);
|
||||
}
|
||||
}
|
||||
|
||||
void HashedDictionary::calculateBytesAllocated()
|
||||
@ -479,12 +515,15 @@ template <typename T>
|
||||
void HashedDictionary::createAttributeImpl(Attribute & attribute, const Field & null_value)
|
||||
{
|
||||
attribute.null_values = T(null_value.get<NearestFieldType<T>>());
|
||||
attribute.maps = std::make_unique<CollectionType<T>>();
|
||||
if (!sparse)
|
||||
attribute.maps = std::make_unique<CollectionType<T>>();
|
||||
else
|
||||
attribute.sparse_maps = std::make_unique<SparseCollectionType<T>>();
|
||||
}
|
||||
|
||||
HashedDictionary::Attribute HashedDictionary::createAttributeWithType(const AttributeUnderlyingType type, const Field & null_value)
|
||||
{
|
||||
Attribute attr{type, {}, {}, {}};
|
||||
Attribute attr{type, {}, {}, {}, {}};
|
||||
|
||||
switch (type)
|
||||
{
|
||||
@ -535,7 +574,10 @@ HashedDictionary::Attribute HashedDictionary::createAttributeWithType(const Attr
|
||||
case AttributeUnderlyingType::utString:
|
||||
{
|
||||
attr.null_values = null_value.get<String>();
|
||||
attr.maps = std::make_unique<CollectionType<StringRef>>();
|
||||
if (!sparse)
|
||||
attr.maps = std::make_unique<CollectionType<StringRef>>();
|
||||
else
|
||||
attr.sparse_maps = std::make_unique<SparseCollectionType<StringRef>>();
|
||||
attr.string_arena = std::make_unique<Arena>();
|
||||
break;
|
||||
}
|
||||
@ -545,28 +587,43 @@ HashedDictionary::Attribute HashedDictionary::createAttributeWithType(const Attr
|
||||
}
|
||||
|
||||
|
||||
template <typename AttributeType, typename OutputType, typename ValueSetter, typename DefaultGetter>
|
||||
void HashedDictionary::getItemsImpl(
|
||||
const Attribute & attribute, const PaddedPODArray<Key> & ids, ValueSetter && set_value, DefaultGetter && get_default) const
|
||||
template <typename OutputType, typename AttrType, typename ValueSetter, typename DefaultGetter>
|
||||
void HashedDictionary::getItemsAttrImpl(
|
||||
const AttrType & attr, const PaddedPODArray<Key> & ids, ValueSetter && set_value, DefaultGetter && get_default) const
|
||||
{
|
||||
const auto & attr = *std::get<CollectionPtrType<AttributeType>>(attribute.maps);
|
||||
const auto rows = ext::size(ids);
|
||||
|
||||
for (const auto i : ext::range(0, rows))
|
||||
{
|
||||
const auto it = attr.find(ids[i]);
|
||||
set_value(i, it != attr.end() ? static_cast<OutputType>(it->getSecond()) : get_default(i));
|
||||
set_value(i, it != attr.end() ? static_cast<OutputType>(second(*it)) : get_default(i));
|
||||
}
|
||||
|
||||
query_count.fetch_add(rows, std::memory_order_relaxed);
|
||||
}
|
||||
template <typename AttributeType, typename OutputType, typename ValueSetter, typename DefaultGetter>
|
||||
void HashedDictionary::getItemsImpl(
|
||||
const Attribute & attribute, const PaddedPODArray<Key> & ids, ValueSetter && set_value, DefaultGetter && get_default) const
|
||||
{
|
||||
if (!sparse)
|
||||
return getItemsAttrImpl<OutputType>(*std::get<CollectionPtrType<AttributeType>>(attribute.maps), ids, set_value, get_default);
|
||||
return getItemsAttrImpl<OutputType>(*std::get<SparseCollectionPtrType<AttributeType>>(attribute.sparse_maps), ids, set_value, get_default);
|
||||
}
|
||||
|
||||
|
||||
template <typename T>
|
||||
bool HashedDictionary::setAttributeValueImpl(Attribute & attribute, const Key id, const T value)
|
||||
{
|
||||
auto & map = *std::get<CollectionPtrType<T>>(attribute.maps);
|
||||
return map.insert({id, value}).second;
|
||||
if (!sparse)
|
||||
{
|
||||
auto & map = *std::get<CollectionPtrType<T>>(attribute.maps);
|
||||
return map.insert({id, value}).second;
|
||||
}
|
||||
else
|
||||
{
|
||||
auto & map = *std::get<SparseCollectionPtrType<T>>(attribute.sparse_maps);
|
||||
return map.insert({id, value}).second;
|
||||
}
|
||||
}
|
||||
|
||||
bool HashedDictionary::setAttributeValue(Attribute & attribute, const Key id, const Field & value)
|
||||
@ -605,10 +662,18 @@ bool HashedDictionary::setAttributeValue(Attribute & attribute, const Key id, co
|
||||
|
||||
case AttributeUnderlyingType::utString:
|
||||
{
|
||||
auto & map = *std::get<CollectionPtrType<StringRef>>(attribute.maps);
|
||||
const auto & string = value.get<String>();
|
||||
const auto string_in_arena = attribute.string_arena->insert(string.data(), string.size());
|
||||
return map.insert({id, StringRef{string_in_arena, string.size()}}).second;
|
||||
if (!sparse)
|
||||
{
|
||||
auto & map = *std::get<CollectionPtrType<StringRef>>(attribute.maps);
|
||||
return map.insert({id, StringRef{string_in_arena, string.size()}}).second;
|
||||
}
|
||||
else
|
||||
{
|
||||
auto & map = *std::get<SparseCollectionPtrType<StringRef>>(attribute.sparse_maps);
|
||||
return map.insert({id, StringRef{string_in_arena, string.size()}}).second;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -636,18 +701,23 @@ void HashedDictionary::has(const Attribute & attribute, const PaddedPODArray<Key
|
||||
query_count.fetch_add(rows, std::memory_order_relaxed);
|
||||
}
|
||||
|
||||
template <typename T>
|
||||
PaddedPODArray<HashedDictionary::Key> HashedDictionary::getIds(const Attribute & attribute) const
|
||||
template <typename T, typename AttrType>
|
||||
PaddedPODArray<HashedDictionary::Key> HashedDictionary::getIdsAttrImpl(const AttrType & attr) const
|
||||
{
|
||||
const HashMap<UInt64, T> & attr = *std::get<CollectionPtrType<T>>(attribute.maps);
|
||||
|
||||
PaddedPODArray<Key> ids;
|
||||
ids.reserve(attr.size());
|
||||
for (const auto & value : attr)
|
||||
ids.push_back(value.getFirst());
|
||||
ids.push_back(first(value));
|
||||
|
||||
return ids;
|
||||
}
|
||||
template <typename T>
|
||||
PaddedPODArray<HashedDictionary::Key> HashedDictionary::getIds(const Attribute & attribute) const
|
||||
{
|
||||
if (!sparse)
|
||||
return getIdsAttrImpl<T>(*std::get<CollectionPtrType<Key>>(attribute.maps));
|
||||
return getIdsAttrImpl<T>(*std::get<SparseCollectionPtrType<Key>>(attribute.sparse_maps));
|
||||
}
|
||||
|
||||
PaddedPODArray<HashedDictionary::Key> HashedDictionary::getIds() const
|
||||
{
|
||||
@ -714,9 +784,11 @@ void registerDictionaryHashed(DictionaryFactory & factory)
|
||||
ErrorCodes::BAD_ARGUMENTS};
|
||||
const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
|
||||
const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);
|
||||
return std::make_unique<HashedDictionary>(name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
|
||||
const bool sparse = name == "sparse_hashed";
|
||||
return std::make_unique<HashedDictionary>(name, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty, sparse);
|
||||
};
|
||||
factory.registerLayout("hashed", create_layout);
|
||||
factory.registerLayout("sparse_hashed", create_layout);
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -7,11 +7,16 @@
|
||||
#include <Columns/ColumnString.h>
|
||||
#include <Core/Block.h>
|
||||
#include <Common/HashTable/HashMap.h>
|
||||
#include <sparsehash/sparse_hash_map>
|
||||
#include <ext/range.h>
|
||||
#include "DictionaryStructure.h"
|
||||
#include "IDictionary.h"
|
||||
#include "IDictionarySource.h"
|
||||
|
||||
/** This dictionary stores all content in a hash table in memory
|
||||
* (a separate Key -> Value map for each attribute)
|
||||
* Two variants of hash table are supported: a fast HashMap and memory efficient sparse_hash_map.
|
||||
*/
|
||||
|
||||
namespace DB
|
||||
{
|
||||
@ -26,6 +31,7 @@ public:
|
||||
DictionarySourcePtr source_ptr_,
|
||||
const DictionaryLifetime dict_lifetime_,
|
||||
bool require_nonempty_,
|
||||
bool sparse_,
|
||||
BlockPtr saved_block_ = nullptr);
|
||||
|
||||
std::string getName() const override { return name; }
|
||||
@ -46,7 +52,7 @@ public:
|
||||
|
||||
std::shared_ptr<const IExternalLoadable> clone() const override
|
||||
{
|
||||
return std::make_shared<HashedDictionary>(name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty, saved_block);
|
||||
return std::make_shared<HashedDictionary>(name, dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty, sparse, saved_block);
|
||||
}
|
||||
|
||||
const IDictionarySource * getSource() const override { return source_ptr.get(); }
|
||||
@ -149,6 +155,11 @@ private:
|
||||
template <typename Value>
|
||||
using CollectionPtrType = std::unique_ptr<CollectionType<Value>>;
|
||||
|
||||
template <typename Value>
|
||||
using SparseCollectionType = google::sparse_hash_map<UInt64, Value, DefaultHash<UInt64>>;
|
||||
template <typename Value>
|
||||
using SparseCollectionPtrType = std::unique_ptr<SparseCollectionType<Value>>;
|
||||
|
||||
struct Attribute final
|
||||
{
|
||||
AttributeUnderlyingType type;
|
||||
@ -186,6 +197,23 @@ private:
|
||||
CollectionPtrType<Float64>,
|
||||
CollectionPtrType<StringRef>>
|
||||
maps;
|
||||
std::variant<
|
||||
SparseCollectionPtrType<UInt8>,
|
||||
SparseCollectionPtrType<UInt16>,
|
||||
SparseCollectionPtrType<UInt32>,
|
||||
SparseCollectionPtrType<UInt64>,
|
||||
SparseCollectionPtrType<UInt128>,
|
||||
SparseCollectionPtrType<Int8>,
|
||||
SparseCollectionPtrType<Int16>,
|
||||
SparseCollectionPtrType<Int32>,
|
||||
SparseCollectionPtrType<Int64>,
|
||||
SparseCollectionPtrType<Decimal32>,
|
||||
SparseCollectionPtrType<Decimal64>,
|
||||
SparseCollectionPtrType<Decimal128>,
|
||||
SparseCollectionPtrType<Float32>,
|
||||
SparseCollectionPtrType<Float64>,
|
||||
SparseCollectionPtrType<StringRef>>
|
||||
sparse_maps;
|
||||
std::unique_ptr<Arena> string_arena;
|
||||
};
|
||||
|
||||
@ -207,6 +235,9 @@ private:
|
||||
|
||||
Attribute createAttributeWithType(const AttributeUnderlyingType type, const Field & null_value);
|
||||
|
||||
template <typename OutputType, typename AttrType, typename ValueSetter, typename DefaultGetter>
|
||||
void getItemsAttrImpl(
|
||||
const AttrType & attr, const PaddedPODArray<Key> & ids, ValueSetter && set_value, DefaultGetter && get_default) const;
|
||||
template <typename AttributeType, typename OutputType, typename ValueSetter, typename DefaultGetter>
|
||||
void getItemsImpl(
|
||||
const Attribute & attribute, const PaddedPODArray<Key> & ids, ValueSetter && set_value, DefaultGetter && get_default) const;
|
||||
@ -221,11 +252,15 @@ private:
|
||||
template <typename T>
|
||||
void has(const Attribute & attribute, const PaddedPODArray<Key> & ids, PaddedPODArray<UInt8> & out) const;
|
||||
|
||||
template <typename T, typename AttrType>
|
||||
PaddedPODArray<Key> getIdsAttrImpl(const AttrType & attr) const;
|
||||
template <typename T>
|
||||
PaddedPODArray<Key> getIds(const Attribute & attribute) const;
|
||||
|
||||
PaddedPODArray<Key> getIds() const;
|
||||
|
||||
template <typename AttrType, typename ChildType, typename AncestorType>
|
||||
void isInAttrImpl(const AttrType & attr, const ChildType & child_ids, const AncestorType & ancestor_ids, PaddedPODArray<UInt8> & out) const;
|
||||
template <typename ChildType, typename AncestorType>
|
||||
void isInImpl(const ChildType & child_ids, const AncestorType & ancestor_ids, PaddedPODArray<UInt8> & out) const;
|
||||
|
||||
@ -234,6 +269,7 @@ private:
|
||||
const DictionarySourcePtr source_ptr;
|
||||
const DictionaryLifetime dict_lifetime;
|
||||
const bool require_nonempty;
|
||||
const bool sparse;
|
||||
|
||||
std::map<std::string, size_t> attribute_index_by_name;
|
||||
std::vector<Attribute> attributes;
|
||||
|
@ -83,7 +83,6 @@ BlockInputStreamPtr FormatFactory::getInput(
|
||||
const Block & sample,
|
||||
const Context & context,
|
||||
UInt64 max_block_size,
|
||||
UInt64 rows_portion_size,
|
||||
ReadCallback callback) const
|
||||
{
|
||||
if (name == "Native")
|
||||
@ -98,11 +97,10 @@ BlockInputStreamPtr FormatFactory::getInput(
|
||||
const Settings & settings = context.getSettingsRef();
|
||||
FormatSettings format_settings = getInputFormatSetting(settings);
|
||||
|
||||
return input_getter(
|
||||
buf, sample, context, max_block_size, rows_portion_size, callback ? callback : ReadCallback(), format_settings);
|
||||
return input_getter(buf, sample, context, max_block_size, callback ? callback : ReadCallback(), format_settings);
|
||||
}
|
||||
|
||||
auto format = getInputFormat(name, buf, sample, context, max_block_size, rows_portion_size, std::move(callback));
|
||||
auto format = getInputFormat(name, buf, sample, context, max_block_size, std::move(callback));
|
||||
return std::make_shared<InputStreamFromInputFormat>(std::move(format));
|
||||
}
|
||||
|
||||
@ -150,7 +148,6 @@ InputFormatPtr FormatFactory::getInputFormat(
|
||||
const Block & sample,
|
||||
const Context & context,
|
||||
UInt64 max_block_size,
|
||||
UInt64 rows_portion_size,
|
||||
ReadCallback callback) const
|
||||
{
|
||||
const auto & input_getter = getCreators(name).input_processor_creator;
|
||||
@ -164,7 +161,6 @@ InputFormatPtr FormatFactory::getInputFormat(
|
||||
params.max_block_size = max_block_size;
|
||||
params.allow_errors_num = format_settings.input_allow_errors_num;
|
||||
params.allow_errors_ratio = format_settings.input_allow_errors_ratio;
|
||||
params.rows_portion_size = rows_portion_size;
|
||||
params.callback = std::move(callback);
|
||||
params.max_execution_time = settings.max_execution_time;
|
||||
params.timeout_overflow_mode = settings.timeout_overflow_mode;
|
||||
|
@ -51,7 +51,6 @@ private:
|
||||
const Block & sample,
|
||||
const Context & context,
|
||||
UInt64 max_block_size,
|
||||
UInt64 rows_portion_size,
|
||||
ReadCallback callback,
|
||||
const FormatSettings & settings)>;
|
||||
|
||||
@ -96,7 +95,6 @@ public:
|
||||
const Block & sample,
|
||||
const Context & context,
|
||||
UInt64 max_block_size,
|
||||
UInt64 rows_portion_size = 0,
|
||||
ReadCallback callback = {}) const;
|
||||
|
||||
BlockOutputStreamPtr getOutput(const String & name, WriteBuffer & buf,
|
||||
@ -108,7 +106,6 @@ public:
|
||||
const Block & sample,
|
||||
const Context & context,
|
||||
UInt64 max_block_size,
|
||||
UInt64 rows_portion_size = 0,
|
||||
ReadCallback callback = {}) const;
|
||||
|
||||
OutputFormatPtr getOutputFormat(
|
||||
|
@ -13,7 +13,6 @@ void registerInputFormatNative(FormatFactory & factory)
|
||||
const Block & sample,
|
||||
const Context &,
|
||||
UInt64 /* max_block_size */,
|
||||
UInt64 /* min_read_rows */,
|
||||
FormatFactory::ReadCallback /* callback */,
|
||||
const FormatSettings &)
|
||||
{
|
||||
|
@ -39,7 +39,7 @@ try
|
||||
|
||||
FormatSettings format_settings;
|
||||
|
||||
RowInputFormatParams params{DEFAULT_INSERT_BLOCK_SIZE, 0, 0, 0, []{}};
|
||||
RowInputFormatParams params{DEFAULT_INSERT_BLOCK_SIZE, 0, 0, []{}};
|
||||
|
||||
InputFormatPtr input_format = std::make_shared<TabSeparatedRowInputFormat>(sample, in_buf, params, false, false, format_settings);
|
||||
BlockInputStreamPtr block_input = std::make_shared<InputStreamFromInputFormat>(std::move(input_format));
|
||||
|
@ -33,7 +33,7 @@ if (OPENSSL_CRYPTO_LIBRARY)
|
||||
endif()
|
||||
|
||||
target_include_directories(clickhouse_functions PRIVATE ${CMAKE_CURRENT_BINARY_DIR}/include)
|
||||
target_include_directories(clickhouse_functions SYSTEM PRIVATE ${DIVIDE_INCLUDE_DIR} ${METROHASH_INCLUDE_DIR})
|
||||
target_include_directories(clickhouse_functions SYSTEM PRIVATE ${DIVIDE_INCLUDE_DIR} ${METROHASH_INCLUDE_DIR} ${SPARSEHASH_INCLUDE_DIR})
|
||||
|
||||
if (CONSISTENT_HASHING_INCLUDE_DIR)
|
||||
target_include_directories (clickhouse_functions PRIVATE ${CONSISTENT_HASHING_INCLUDE_DIR})
|
||||
|
@ -10,6 +10,7 @@ void registerFunctionsBitmap(FunctionFactory & factory)
|
||||
factory.registerFunction<FunctionBitmapBuild>();
|
||||
factory.registerFunction<FunctionBitmapToArray>();
|
||||
factory.registerFunction<FunctionBitmapSubsetInRange>();
|
||||
factory.registerFunction<FunctionBitmapSubsetLimit>();
|
||||
|
||||
factory.registerFunction<FunctionBitmapSelfCardinality>();
|
||||
factory.registerFunction<FunctionBitmapMin>();
|
||||
|
@ -34,6 +34,9 @@ namespace ErrorCodes
|
||||
* Return subset in specified range (not include the range_end):
|
||||
* bitmapSubsetInRange: bitmap,integer,integer -> bitmap
|
||||
*
|
||||
* Return subset of the smallest `limit` values in set which is no smaller than `range_start`.
|
||||
* bitmapSubsetInRange: bitmap,integer,integer -> bitmap
|
||||
*
|
||||
* Two bitmap and calculation:
|
||||
* bitmapAnd: bitmap,bitmap -> bitmap
|
||||
*
|
||||
@ -49,7 +52,7 @@ namespace ErrorCodes
|
||||
* Retrun bitmap cardinality:
|
||||
* bitmapCardinality: bitmap -> integer
|
||||
*
|
||||
* Retrun smallest value in the set:
|
||||
* Retrun the smallest value in the set:
|
||||
* bitmapMin: bitmap -> integer
|
||||
*
|
||||
* Retrun the greatest value in the set:
|
||||
@ -250,12 +253,13 @@ private:
|
||||
}
|
||||
};
|
||||
|
||||
class FunctionBitmapSubsetInRange : public IFunction
|
||||
template <typename Impl>
|
||||
class FunctionBitmapSubset : public IFunction
|
||||
{
|
||||
public:
|
||||
static constexpr auto name = "bitmapSubsetInRange";
|
||||
static constexpr auto name = Impl::name;
|
||||
|
||||
static FunctionPtr create(const Context &) { return std::make_shared<FunctionBitmapSubsetInRange>(); }
|
||||
static FunctionPtr create(const Context &) { return std::make_shared<FunctionBitmapSubset<Impl>>(); }
|
||||
|
||||
String getName() const override { return name; }
|
||||
|
||||
@ -357,12 +361,37 @@ private:
|
||||
col_to->insertDefault();
|
||||
AggregateFunctionGroupBitmapData<T> & bd2
|
||||
= *reinterpret_cast<AggregateFunctionGroupBitmapData<T> *>(col_to->getData()[i]);
|
||||
bd0.rbs.rb_range(range_start, range_end, bd2.rbs);
|
||||
Impl::apply(bd0, range_start, range_end, bd2);
|
||||
}
|
||||
block.getByPosition(result).column = std::move(col_to);
|
||||
}
|
||||
};
|
||||
|
||||
struct BitmapSubsetInRangeImpl
|
||||
{
|
||||
public:
|
||||
static constexpr auto name = "bitmapSubsetInRange";
|
||||
template <typename T>
|
||||
static void apply(const AggregateFunctionGroupBitmapData<T> & bd0, UInt32 range_start, UInt32 range_end, AggregateFunctionGroupBitmapData<T> & bd2)
|
||||
{
|
||||
bd0.rbs.rb_range(range_start, range_end, bd2.rbs);
|
||||
}
|
||||
};
|
||||
|
||||
struct BitmapSubsetLimitImpl
|
||||
{
|
||||
public:
|
||||
static constexpr auto name = "bitmapSubsetLimit";
|
||||
template <typename T>
|
||||
static void apply(const AggregateFunctionGroupBitmapData<T> & bd0, UInt32 range_start, UInt32 range_end, AggregateFunctionGroupBitmapData<T> & bd2)
|
||||
{
|
||||
bd0.rbs.rb_limit(range_start, range_end, bd2.rbs);
|
||||
}
|
||||
};
|
||||
|
||||
using FunctionBitmapSubsetInRange = FunctionBitmapSubset<BitmapSubsetInRangeImpl>;
|
||||
using FunctionBitmapSubsetLimit = FunctionBitmapSubset<BitmapSubsetLimitImpl>;
|
||||
|
||||
template <typename Impl>
|
||||
class FunctionBitmapSelfCardinalityImpl : public IFunction
|
||||
{
|
||||
|
@ -5,21 +5,21 @@ namespace DB
|
||||
|
||||
class FunctionFactory;
|
||||
|
||||
#ifdef defined(OS_LINUX)
|
||||
void registerFunctionAddressToSymbol(FunctionFactory & factory);
|
||||
void registerFunctionDemangle(FunctionFactory & factory);
|
||||
void registerFunctionAddressToLine(FunctionFactory & factory);
|
||||
#endif
|
||||
void registerFunctionDemangle(FunctionFactory & factory);
|
||||
void registerFunctionTrap(FunctionFactory & factory);
|
||||
|
||||
void registerFunctionsIntrospection(FunctionFactory & factory)
|
||||
{
|
||||
#if defined (OS_LINUX)
|
||||
#if defined(OS_LINUX)
|
||||
registerFunctionAddressToSymbol(factory);
|
||||
registerFunctionDemangle(factory);
|
||||
registerFunctionAddressToLine(factory);
|
||||
registerFunctionTrap(factory);
|
||||
#else
|
||||
UNUSED(factory);
|
||||
#endif
|
||||
registerFunctionDemangle(factory);
|
||||
registerFunctionTrap(factory);
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -45,7 +45,7 @@ namespace ErrorCodes
|
||||
|
||||
namespace
|
||||
{
|
||||
void setTimeouts(Poco::Net::HTTPClientSession & session, const ConnectionTimeouts & timeouts)
|
||||
void setTimeouts(Poco::Net::HTTPClientSession & session, const ConnectionTimeouts & timeouts)
|
||||
{
|
||||
#if defined(POCO_CLICKHOUSE_PATCH) || POCO_VERSION >= 0x02000000
|
||||
session.setTimeout(timeouts.connection_timeout, timeouts.send_timeout, timeouts.receive_timeout);
|
||||
@ -220,20 +220,25 @@ PooledHTTPSessionPtr makePooledHTTPSession(const Poco::URI & uri, const Connecti
|
||||
std::istream * receiveResponse(
|
||||
Poco::Net::HTTPClientSession & session, const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response)
|
||||
{
|
||||
auto istr = &session.receiveResponse(response);
|
||||
auto & istr = session.receiveResponse(response);
|
||||
assertResponseIsOk(request, response, istr);
|
||||
return &istr;
|
||||
}
|
||||
|
||||
void assertResponseIsOk(const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response, std::istream & istr)
|
||||
{
|
||||
auto status = response.getStatus();
|
||||
|
||||
if (status != Poco::Net::HTTPResponse::HTTP_OK)
|
||||
{
|
||||
std::stringstream error_message;
|
||||
error_message << "Received error from remote server " << request.getURI() << ". HTTP status code: " << status << " "
|
||||
<< response.getReason() << ", body: " << istr->rdbuf();
|
||||
<< response.getReason() << ", body: " << istr.rdbuf();
|
||||
|
||||
throw Exception(error_message.str(),
|
||||
status == HTTP_TOO_MANY_REQUESTS ? ErrorCodes::RECEIVED_ERROR_TOO_MANY_REQUESTS
|
||||
: ErrorCodes::RECEIVED_ERROR_FROM_REMOTE_IO_SERVER);
|
||||
}
|
||||
return istr;
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -57,4 +57,6 @@ PooledHTTPSessionPtr makePooledHTTPSession(const Poco::URI & uri, const Connecti
|
||||
*/
|
||||
std::istream * receiveResponse(
|
||||
Poco::Net::HTTPClientSession & session, const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response);
|
||||
void assertResponseIsOk(const Poco::Net::HTTPRequest & request, Poco::Net::HTTPResponse & response, std::istream & istr);
|
||||
|
||||
}
|
||||
|
70
dbms/src/IO/ReadBufferFromS3.cpp
Normal file
70
dbms/src/IO/ReadBufferFromS3.cpp
Normal file
@ -0,0 +1,70 @@
|
||||
#include <IO/ReadBufferFromS3.h>
|
||||
|
||||
#include <IO/ReadBufferFromIStream.h>
|
||||
|
||||
#include <common/logger_useful.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
const int DEFAULT_S3_MAX_FOLLOW_GET_REDIRECT = 2;
|
||||
|
||||
ReadBufferFromS3::ReadBufferFromS3(Poco::URI uri_,
|
||||
const ConnectionTimeouts & timeouts,
|
||||
const Poco::Net::HTTPBasicCredentials & credentials,
|
||||
size_t buffer_size_)
|
||||
: ReadBuffer(nullptr, 0)
|
||||
, uri {uri_}
|
||||
, method {Poco::Net::HTTPRequest::HTTP_GET}
|
||||
, session {makeHTTPSession(uri_, timeouts)}
|
||||
{
|
||||
Poco::Net::HTTPResponse response;
|
||||
std::unique_ptr<Poco::Net::HTTPRequest> request;
|
||||
|
||||
for (int i = 0; i < DEFAULT_S3_MAX_FOLLOW_GET_REDIRECT; ++i)
|
||||
{
|
||||
// With empty path poco will send "POST HTTP/1.1" its bug.
|
||||
if (uri.getPath().empty())
|
||||
uri.setPath("/");
|
||||
|
||||
request = std::make_unique<Poco::Net::HTTPRequest>(method, uri.getPathAndQuery(), Poco::Net::HTTPRequest::HTTP_1_1);
|
||||
request->setHost(uri.getHost()); // use original, not resolved host name in header
|
||||
|
||||
if (!credentials.getUsername().empty())
|
||||
credentials.authenticate(*request);
|
||||
|
||||
LOG_TRACE((&Logger::get("ReadBufferFromS3")), "Sending request to " << uri.toString());
|
||||
|
||||
session->sendRequest(*request);
|
||||
|
||||
istr = &session->receiveResponse(response);
|
||||
|
||||
// Handle 307 Temporary Redirect in order to allow request redirection
|
||||
// See https://docs.aws.amazon.com/AmazonS3/latest/dev/Redirects.html
|
||||
if (response.getStatus() != Poco::Net::HTTPResponse::HTTP_TEMPORARY_REDIRECT)
|
||||
break;
|
||||
|
||||
auto location_iterator = response.find("Location");
|
||||
if (location_iterator == response.end())
|
||||
break;
|
||||
|
||||
uri = location_iterator->second;
|
||||
session = makeHTTPSession(uri, timeouts);
|
||||
}
|
||||
|
||||
assertResponseIsOk(*request, response, *istr);
|
||||
impl = std::make_unique<ReadBufferFromIStream>(*istr, buffer_size_);
|
||||
}
|
||||
|
||||
|
||||
bool ReadBufferFromS3::nextImpl()
|
||||
{
|
||||
if (!impl->next())
|
||||
return false;
|
||||
internal_buffer = impl->buffer();
|
||||
working_buffer = internal_buffer;
|
||||
return true;
|
||||
}
|
||||
|
||||
}
|
35
dbms/src/IO/ReadBufferFromS3.h
Normal file
35
dbms/src/IO/ReadBufferFromS3.h
Normal file
@ -0,0 +1,35 @@
|
||||
#pragma once
|
||||
|
||||
#include <memory>
|
||||
|
||||
#include <IO/ConnectionTimeouts.h>
|
||||
#include <IO/HTTPCommon.h>
|
||||
#include <IO/ReadBuffer.h>
|
||||
#include <Poco/Net/HTTPBasicCredentials.h>
|
||||
#include <Poco/URI.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
/** Perform S3 HTTP GET request and provide response to read.
|
||||
*/
|
||||
class ReadBufferFromS3 : public ReadBuffer
|
||||
{
|
||||
protected:
|
||||
Poco::URI uri;
|
||||
std::string method;
|
||||
|
||||
HTTPSessionPtr session;
|
||||
std::istream * istr; /// owned by session
|
||||
std::unique_ptr<ReadBuffer> impl;
|
||||
|
||||
public:
|
||||
explicit ReadBufferFromS3(Poco::URI uri_,
|
||||
const ConnectionTimeouts & timeouts = {},
|
||||
const Poco::Net::HTTPBasicCredentials & credentials = {},
|
||||
size_t buffer_size_ = DBMS_DEFAULT_BUFFER_SIZE);
|
||||
|
||||
bool nextImpl() override;
|
||||
};
|
||||
|
||||
}
|
286
dbms/src/IO/WriteBufferFromS3.cpp
Normal file
286
dbms/src/IO/WriteBufferFromS3.cpp
Normal file
@ -0,0 +1,286 @@
|
||||
#include <IO/WriteBufferFromS3.h>
|
||||
|
||||
#include <IO/WriteHelpers.h>
|
||||
|
||||
#include <Poco/DOM/AutoPtr.h>
|
||||
#include <Poco/DOM/DOMParser.h>
|
||||
#include <Poco/DOM/Document.h>
|
||||
#include <Poco/DOM/NodeList.h>
|
||||
#include <Poco/SAX/InputSource.h>
|
||||
|
||||
#include <common/logger_useful.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
const int DEFAULT_S3_MAX_FOLLOW_PUT_REDIRECT = 2;
|
||||
const int S3_WARN_MAX_PARTS = 10000;
|
||||
|
||||
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int INCORRECT_DATA;
|
||||
}
|
||||
|
||||
|
||||
WriteBufferFromS3::WriteBufferFromS3(
|
||||
const Poco::URI & uri_,
|
||||
size_t minimum_upload_part_size_,
|
||||
const ConnectionTimeouts & timeouts_,
|
||||
const Poco::Net::HTTPBasicCredentials & credentials, size_t buffer_size_
|
||||
)
|
||||
: BufferWithOwnMemory<WriteBuffer>(buffer_size_, nullptr, 0)
|
||||
, uri {uri_}
|
||||
, minimum_upload_part_size {minimum_upload_part_size_}
|
||||
, timeouts {timeouts_}
|
||||
, auth_request {Poco::Net::HTTPRequest::HTTP_PUT, uri.getPathAndQuery(), Poco::Net::HTTPRequest::HTTP_1_1}
|
||||
, temporary_buffer {std::make_unique<WriteBufferFromString>(buffer_string)}
|
||||
, last_part_size {0}
|
||||
{
|
||||
if (!credentials.getUsername().empty())
|
||||
credentials.authenticate(auth_request);
|
||||
|
||||
initiate();
|
||||
}
|
||||
|
||||
|
||||
void WriteBufferFromS3::nextImpl()
|
||||
{
|
||||
if (!offset())
|
||||
return;
|
||||
|
||||
temporary_buffer->write(working_buffer.begin(), offset());
|
||||
|
||||
last_part_size += offset();
|
||||
|
||||
if (last_part_size > minimum_upload_part_size)
|
||||
{
|
||||
temporary_buffer->finish();
|
||||
writePart(buffer_string);
|
||||
last_part_size = 0;
|
||||
temporary_buffer = std::make_unique<WriteBufferFromString>(buffer_string);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
void WriteBufferFromS3::finalize()
|
||||
{
|
||||
temporary_buffer->finish();
|
||||
if (!buffer_string.empty())
|
||||
{
|
||||
writePart(buffer_string);
|
||||
}
|
||||
|
||||
complete();
|
||||
}
|
||||
|
||||
|
||||
WriteBufferFromS3::~WriteBufferFromS3()
|
||||
{
|
||||
try
|
||||
{
|
||||
next();
|
||||
}
|
||||
catch (...)
|
||||
{
|
||||
tryLogCurrentException(__PRETTY_FUNCTION__);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
void WriteBufferFromS3::initiate()
|
||||
{
|
||||
// See https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadInitiate.html
|
||||
Poco::Net::HTTPResponse response;
|
||||
std::unique_ptr<Poco::Net::HTTPRequest> request_ptr;
|
||||
HTTPSessionPtr session;
|
||||
std::istream * istr = nullptr; /// owned by session
|
||||
Poco::URI initiate_uri = uri;
|
||||
initiate_uri.setRawQuery("uploads");
|
||||
for (auto & param: uri.getQueryParameters())
|
||||
{
|
||||
initiate_uri.addQueryParameter(param.first, param.second);
|
||||
}
|
||||
|
||||
for (int i = 0; i < DEFAULT_S3_MAX_FOLLOW_PUT_REDIRECT; ++i)
|
||||
{
|
||||
session = makeHTTPSession(initiate_uri, timeouts);
|
||||
request_ptr = std::make_unique<Poco::Net::HTTPRequest>(Poco::Net::HTTPRequest::HTTP_POST, initiate_uri.getPathAndQuery(), Poco::Net::HTTPRequest::HTTP_1_1);
|
||||
request_ptr->setHost(initiate_uri.getHost()); // use original, not resolved host name in header
|
||||
|
||||
if (auth_request.hasCredentials())
|
||||
{
|
||||
Poco::Net::HTTPBasicCredentials credentials(auth_request);
|
||||
credentials.authenticate(*request_ptr);
|
||||
}
|
||||
|
||||
request_ptr->setContentLength(0);
|
||||
|
||||
LOG_TRACE((&Logger::get("WriteBufferFromS3")), "Sending request to " << initiate_uri.toString());
|
||||
|
||||
session->sendRequest(*request_ptr);
|
||||
|
||||
istr = &session->receiveResponse(response);
|
||||
|
||||
// Handle 307 Temporary Redirect in order to allow request redirection
|
||||
// See https://docs.aws.amazon.com/AmazonS3/latest/dev/Redirects.html
|
||||
if (response.getStatus() != Poco::Net::HTTPResponse::HTTP_TEMPORARY_REDIRECT)
|
||||
break;
|
||||
|
||||
auto location_iterator = response.find("Location");
|
||||
if (location_iterator == response.end())
|
||||
break;
|
||||
|
||||
initiate_uri = location_iterator->second;
|
||||
}
|
||||
assertResponseIsOk(*request_ptr, response, *istr);
|
||||
|
||||
Poco::XML::InputSource src(*istr);
|
||||
Poco::XML::DOMParser parser;
|
||||
Poco::AutoPtr<Poco::XML::Document> document = parser.parse(&src);
|
||||
Poco::AutoPtr<Poco::XML::NodeList> nodes = document->getElementsByTagName("UploadId");
|
||||
if (nodes->length() != 1)
|
||||
{
|
||||
throw Exception("Incorrect XML in response, no upload id", ErrorCodes::INCORRECT_DATA);
|
||||
}
|
||||
upload_id = nodes->item(0)->innerText();
|
||||
if (upload_id.empty())
|
||||
{
|
||||
throw Exception("Incorrect XML in response, empty upload id", ErrorCodes::INCORRECT_DATA);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
void WriteBufferFromS3::writePart(const String & data)
|
||||
{
|
||||
// See https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html
|
||||
Poco::Net::HTTPResponse response;
|
||||
std::unique_ptr<Poco::Net::HTTPRequest> request_ptr;
|
||||
HTTPSessionPtr session;
|
||||
std::istream * istr = nullptr; /// owned by session
|
||||
Poco::URI part_uri = uri;
|
||||
part_uri.addQueryParameter("partNumber", std::to_string(part_tags.size() + 1));
|
||||
part_uri.addQueryParameter("uploadId", upload_id);
|
||||
|
||||
if (part_tags.size() == S3_WARN_MAX_PARTS)
|
||||
{
|
||||
// Don't throw exception here by ourselves but leave the decision to take by S3 server.
|
||||
LOG_WARNING(&Logger::get("WriteBufferFromS3"), "Maximum part number in S3 protocol has reached (too much parts). Server may not accept this whole upload.");
|
||||
}
|
||||
|
||||
for (int i = 0; i < DEFAULT_S3_MAX_FOLLOW_PUT_REDIRECT; ++i)
|
||||
{
|
||||
session = makeHTTPSession(part_uri, timeouts);
|
||||
request_ptr = std::make_unique<Poco::Net::HTTPRequest>(Poco::Net::HTTPRequest::HTTP_PUT, part_uri.getPathAndQuery(), Poco::Net::HTTPRequest::HTTP_1_1);
|
||||
request_ptr->setHost(part_uri.getHost()); // use original, not resolved host name in header
|
||||
|
||||
if (auth_request.hasCredentials())
|
||||
{
|
||||
Poco::Net::HTTPBasicCredentials credentials(auth_request);
|
||||
credentials.authenticate(*request_ptr);
|
||||
}
|
||||
|
||||
request_ptr->setExpectContinue(true);
|
||||
|
||||
request_ptr->setContentLength(data.size());
|
||||
|
||||
LOG_TRACE((&Logger::get("WriteBufferFromS3")), "Sending request to " << part_uri.toString());
|
||||
|
||||
std::ostream & ostr = session->sendRequest(*request_ptr);
|
||||
if (session->peekResponse(response))
|
||||
{
|
||||
// Received 100-continue.
|
||||
ostr << data;
|
||||
}
|
||||
|
||||
istr = &session->receiveResponse(response);
|
||||
|
||||
// Handle 307 Temporary Redirect in order to allow request redirection
|
||||
// See https://docs.aws.amazon.com/AmazonS3/latest/dev/Redirects.html
|
||||
if (response.getStatus() != Poco::Net::HTTPResponse::HTTP_TEMPORARY_REDIRECT)
|
||||
break;
|
||||
|
||||
auto location_iterator = response.find("Location");
|
||||
if (location_iterator == response.end())
|
||||
break;
|
||||
|
||||
part_uri = location_iterator->second;
|
||||
}
|
||||
assertResponseIsOk(*request_ptr, response, *istr);
|
||||
|
||||
auto etag_iterator = response.find("ETag");
|
||||
if (etag_iterator == response.end())
|
||||
{
|
||||
throw Exception("Incorrect response, no ETag", ErrorCodes::INCORRECT_DATA);
|
||||
}
|
||||
part_tags.push_back(etag_iterator->second);
|
||||
}
|
||||
|
||||
|
||||
void WriteBufferFromS3::complete()
|
||||
{
|
||||
// See https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html
|
||||
Poco::Net::HTTPResponse response;
|
||||
std::unique_ptr<Poco::Net::HTTPRequest> request_ptr;
|
||||
HTTPSessionPtr session;
|
||||
std::istream * istr = nullptr; /// owned by session
|
||||
Poco::URI complete_uri = uri;
|
||||
complete_uri.addQueryParameter("uploadId", upload_id);
|
||||
|
||||
String data;
|
||||
WriteBufferFromString buffer(data);
|
||||
writeString("<CompleteMultipartUpload>", buffer);
|
||||
for (size_t i = 0; i < part_tags.size(); ++i)
|
||||
{
|
||||
writeString("<Part><PartNumber>", buffer);
|
||||
writeIntText(i + 1, buffer);
|
||||
writeString("</PartNumber><ETag>", buffer);
|
||||
writeString(part_tags[i], buffer);
|
||||
writeString("</ETag></Part>", buffer);
|
||||
}
|
||||
writeString("</CompleteMultipartUpload>", buffer);
|
||||
buffer.finish();
|
||||
|
||||
for (int i = 0; i < DEFAULT_S3_MAX_FOLLOW_PUT_REDIRECT; ++i)
|
||||
{
|
||||
session = makeHTTPSession(complete_uri, timeouts);
|
||||
request_ptr = std::make_unique<Poco::Net::HTTPRequest>(Poco::Net::HTTPRequest::HTTP_POST, complete_uri.getPathAndQuery(), Poco::Net::HTTPRequest::HTTP_1_1);
|
||||
request_ptr->setHost(complete_uri.getHost()); // use original, not resolved host name in header
|
||||
|
||||
if (auth_request.hasCredentials())
|
||||
{
|
||||
Poco::Net::HTTPBasicCredentials credentials(auth_request);
|
||||
credentials.authenticate(*request_ptr);
|
||||
}
|
||||
|
||||
request_ptr->setExpectContinue(true);
|
||||
|
||||
request_ptr->setContentLength(data.size());
|
||||
|
||||
LOG_TRACE((&Logger::get("WriteBufferFromS3")), "Sending request to " << complete_uri.toString());
|
||||
|
||||
std::ostream & ostr = session->sendRequest(*request_ptr);
|
||||
if (session->peekResponse(response))
|
||||
{
|
||||
// Received 100-continue.
|
||||
ostr << data;
|
||||
}
|
||||
|
||||
istr = &session->receiveResponse(response);
|
||||
|
||||
// Handle 307 Temporary Redirect in order to allow request redirection
|
||||
// See https://docs.aws.amazon.com/AmazonS3/latest/dev/Redirects.html
|
||||
if (response.getStatus() != Poco::Net::HTTPResponse::HTTP_TEMPORARY_REDIRECT)
|
||||
break;
|
||||
|
||||
auto location_iterator = response.find("Location");
|
||||
if (location_iterator == response.end())
|
||||
break;
|
||||
|
||||
complete_uri = location_iterator->second;
|
||||
}
|
||||
assertResponseIsOk(*request_ptr, response, *istr);
|
||||
}
|
||||
|
||||
}
|
62
dbms/src/IO/WriteBufferFromS3.h
Normal file
62
dbms/src/IO/WriteBufferFromS3.h
Normal file
@ -0,0 +1,62 @@
|
||||
#pragma once
|
||||
|
||||
#include <functional>
|
||||
#include <memory>
|
||||
#include <vector>
|
||||
#include <Core/Types.h>
|
||||
#include <IO/ConnectionTimeouts.h>
|
||||
#include <IO/HTTPCommon.h>
|
||||
#include <IO/BufferWithOwnMemory.h>
|
||||
#include <IO/ReadBuffer.h>
|
||||
#include <IO/ReadBufferFromIStream.h>
|
||||
#include <IO/WriteBuffer.h>
|
||||
#include <IO/WriteBufferFromString.h>
|
||||
#include <Poco/Net/HTTPBasicCredentials.h>
|
||||
#include <Poco/Net/HTTPClientSession.h>
|
||||
#include <Poco/Net/HTTPRequest.h>
|
||||
#include <Poco/Net/HTTPResponse.h>
|
||||
#include <Poco/URI.h>
|
||||
#include <Poco/Version.h>
|
||||
#include <Common/DNSResolver.h>
|
||||
#include <Common/config.h>
|
||||
#include <common/logger_useful.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
/* Perform S3 HTTP PUT request.
|
||||
*/
|
||||
class WriteBufferFromS3 : public BufferWithOwnMemory<WriteBuffer>
|
||||
{
|
||||
private:
|
||||
Poco::URI uri;
|
||||
size_t minimum_upload_part_size;
|
||||
ConnectionTimeouts timeouts;
|
||||
Poco::Net::HTTPRequest auth_request;
|
||||
String buffer_string;
|
||||
std::unique_ptr<WriteBufferFromString> temporary_buffer;
|
||||
size_t last_part_size;
|
||||
String upload_id;
|
||||
std::vector<String> part_tags;
|
||||
|
||||
public:
|
||||
explicit WriteBufferFromS3(const Poco::URI & uri,
|
||||
size_t minimum_upload_part_size_,
|
||||
const ConnectionTimeouts & timeouts = {},
|
||||
const Poco::Net::HTTPBasicCredentials & credentials = {},
|
||||
size_t buffer_size_ = DBMS_DEFAULT_BUFFER_SIZE);
|
||||
|
||||
void nextImpl() override;
|
||||
|
||||
/// Receives response from the server after sending all data.
|
||||
void finalize();
|
||||
|
||||
~WriteBufferFromS3() override;
|
||||
|
||||
private:
|
||||
void initiate();
|
||||
void writePart(const String & data);
|
||||
void complete();
|
||||
};
|
||||
|
||||
}
|
@ -262,7 +262,10 @@ NamesAndTypesList getNamesAndTypeListFromTableExpression(const ASTTableExpressio
|
||||
|
||||
JoinPtr makeJoin(std::shared_ptr<AnalyzedJoin> table_join, const Block & right_sample_block)
|
||||
{
|
||||
if (table_join->partial_merge_join)
|
||||
bool is_left_or_inner = isLeft(table_join->kind()) || isInner(table_join->kind());
|
||||
bool is_asof = (table_join->strictness() == ASTTableJoin::Strictness::Asof);
|
||||
|
||||
if (table_join->partial_merge_join && !is_asof && is_left_or_inner)
|
||||
return std::make_shared<MergeJoin>(table_join, right_sample_block);
|
||||
return std::make_shared<Join>(table_join, right_sample_block);
|
||||
}
|
||||
|
@ -2,6 +2,7 @@
|
||||
|
||||
#include <Poco/File.h>
|
||||
|
||||
#include <Common/StringUtils/StringUtils.h>
|
||||
#include <Common/escapeForFileName.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
|
||||
@ -416,7 +417,12 @@ ColumnsDescription InterpreterCreateQuery::setProperties(
|
||||
else if (!create.as_table.empty())
|
||||
{
|
||||
columns = as_storage->getColumns();
|
||||
indices = as_storage->getIndices();
|
||||
|
||||
/// Secondary indices make sense only for MergeTree family of storage engines.
|
||||
/// We should not copy them for other storages.
|
||||
if (create.storage && endsWith(create.storage->engine->name, "MergeTree"))
|
||||
indices = as_storage->getIndices();
|
||||
|
||||
constraints = as_storage->getConstraints();
|
||||
}
|
||||
else if (create.select)
|
||||
|
@ -1209,33 +1209,11 @@ void InterpreterSelectQuery::executeImpl(TPipeline & pipeline, const BlockInputS
|
||||
executeExpression(pipeline, expressions.before_order_and_select);
|
||||
executeDistinct(pipeline, true, expressions.selected_columns);
|
||||
|
||||
need_second_distinct_pass = query.distinct && pipeline.hasMixedStreams();
|
||||
}
|
||||
else
|
||||
{
|
||||
need_second_distinct_pass = query.distinct && pipeline.hasMixedStreams();
|
||||
else if (query.group_by_with_totals || query.group_by_with_rollup || query.group_by_with_cube)
|
||||
throw Exception("WITH TOTALS, ROLLUP or CUBE are not supported without aggregation", ErrorCodes::LOGICAL_ERROR);
|
||||
|
||||
if (query.group_by_with_totals && !aggregate_final)
|
||||
{
|
||||
bool final = !query.group_by_with_rollup && !query.group_by_with_cube;
|
||||
executeTotalsAndHaving(pipeline, expressions.has_having, expressions.before_having, aggregate_overflow_row, final);
|
||||
}
|
||||
|
||||
if ((query.group_by_with_rollup || query.group_by_with_cube) && !aggregate_final)
|
||||
{
|
||||
if (query.group_by_with_rollup)
|
||||
executeRollupOrCube(pipeline, Modificator::ROLLUP);
|
||||
else if (query.group_by_with_cube)
|
||||
executeRollupOrCube(pipeline, Modificator::CUBE);
|
||||
|
||||
if (expressions.has_having)
|
||||
{
|
||||
if (query.group_by_with_totals)
|
||||
throw Exception("WITH TOTALS and WITH ROLLUP or CUBE are not supported together in presence of HAVING", ErrorCodes::NOT_IMPLEMENTED);
|
||||
executeHaving(pipeline, expressions.before_having);
|
||||
}
|
||||
}
|
||||
}
|
||||
need_second_distinct_pass = query.distinct && pipeline.hasMixedStreams();
|
||||
|
||||
if (expressions.has_order_by)
|
||||
{
|
||||
|
@ -38,6 +38,7 @@ namespace ErrorCodes
|
||||
extern const int BAD_ARGUMENTS;
|
||||
extern const int CANNOT_KILL;
|
||||
extern const int NOT_IMPLEMENTED;
|
||||
extern const int TIMEOUT_EXCEEDED;
|
||||
}
|
||||
|
||||
|
||||
@ -338,7 +339,17 @@ void InterpreterSystemQuery::syncReplica(ASTSystemQuery & query)
|
||||
StoragePtr table = context.getTable(database_name, table_name);
|
||||
|
||||
if (auto storage_replicated = dynamic_cast<StorageReplicatedMergeTree *>(table.get()))
|
||||
storage_replicated->waitForShrinkingQueueSize(0, context.getSettingsRef().receive_timeout.value.milliseconds());
|
||||
{
|
||||
LOG_TRACE(log, "Synchronizing entries in replica's queue with table's log and waiting for it to become empty");
|
||||
if (!storage_replicated->waitForShrinkingQueueSize(0, context.getSettingsRef().receive_timeout.totalMilliseconds()))
|
||||
{
|
||||
LOG_ERROR(log, "SYNC REPLICA " + database_name + "." + table_name + ": Timed out!");
|
||||
throw Exception(
|
||||
"SYNC REPLICA " + database_name + "." + table_name + ": command timed out! "
|
||||
"See the 'receive_timeout' setting", ErrorCodes::TIMEOUT_EXCEEDED);
|
||||
}
|
||||
LOG_TRACE(log, "SYNC REPLICA " + database_name + "." + table_name + ": OK");
|
||||
}
|
||||
else
|
||||
throw Exception("Table " + database_name + "." + table_name + " is not replicated", ErrorCodes::BAD_ARGUMENTS);
|
||||
}
|
||||
|
@ -959,29 +959,7 @@ void Join::joinBlock(Block & block)
|
||||
|
||||
void Join::joinTotals(Block & block) const
|
||||
{
|
||||
Block totals_without_keys = totals;
|
||||
|
||||
if (totals_without_keys)
|
||||
{
|
||||
for (const auto & name : key_names_right)
|
||||
totals_without_keys.erase(totals_without_keys.getPositionByName(name));
|
||||
|
||||
for (size_t i = 0; i < totals_without_keys.columns(); ++i)
|
||||
block.insert(totals_without_keys.safeGetByPosition(i));
|
||||
}
|
||||
else
|
||||
{
|
||||
/// We will join empty `totals` - from one row with the default values.
|
||||
|
||||
for (size_t i = 0; i < sample_block_with_columns_to_add.columns(); ++i)
|
||||
{
|
||||
const auto & col = sample_block_with_columns_to_add.getByPosition(i);
|
||||
block.insert({
|
||||
col.type->createColumnConstWithDefaultValue(1)->convertToFullColumnIfConst(),
|
||||
col.type,
|
||||
col.name});
|
||||
}
|
||||
}
|
||||
JoinCommon::joinTotals(totals, sample_block_with_columns_to_add, key_names_right, block);
|
||||
}
|
||||
|
||||
|
||||
|
@ -194,14 +194,14 @@ struct ColumnAliasesMatcher
|
||||
}
|
||||
};
|
||||
|
||||
static bool needChildVisit(ASTPtr & node, const ASTPtr &)
|
||||
static bool needChildVisit(const ASTPtr & node, const ASTPtr &)
|
||||
{
|
||||
if (node->as<ASTQualifiedAsterisk>())
|
||||
return false;
|
||||
return true;
|
||||
}
|
||||
|
||||
static void visit(ASTPtr & ast, Data & data)
|
||||
static void visit(const ASTPtr & ast, Data & data)
|
||||
{
|
||||
if (auto * t = ast->as<ASTIdentifier>())
|
||||
visit(*t, ast, data);
|
||||
@ -210,8 +210,9 @@ struct ColumnAliasesMatcher
|
||||
throw Exception("Multiple JOIN do not support asterisks for complex queries yet", ErrorCodes::NOT_IMPLEMENTED);
|
||||
}
|
||||
|
||||
static void visit(ASTIdentifier & node, ASTPtr &, Data & data)
|
||||
static void visit(const ASTIdentifier & const_node, const ASTPtr &, Data & data)
|
||||
{
|
||||
ASTIdentifier & node = const_cast<ASTIdentifier &>(const_node); /// we know it's not const
|
||||
if (node.isShort())
|
||||
return;
|
||||
|
||||
@ -375,7 +376,7 @@ using RewriteVisitor = InDepthNodeVisitor<RewriteMatcher, true>;
|
||||
using SetSubqueryAliasMatcher = OneTypeMatcher<SetSubqueryAliasVisitorData>;
|
||||
using SetSubqueryAliasVisitor = InDepthNodeVisitor<SetSubqueryAliasMatcher, true>;
|
||||
using ExtractAsterisksVisitor = ExtractAsterisksMatcher::Visitor;
|
||||
using ColumnAliasesVisitor = InDepthNodeVisitor<ColumnAliasesMatcher, true>;
|
||||
using ColumnAliasesVisitor = ConstInDepthNodeVisitor<ColumnAliasesMatcher, true>;
|
||||
using AppendSemanticMatcher = OneTypeMatcher<AppendSemanticVisitorData>;
|
||||
using AppendSemanticVisitor = InDepthNodeVisitor<AppendSemanticMatcher, true>;
|
||||
|
||||
@ -403,15 +404,19 @@ void JoinToSubqueryTransformMatcher::visit(ASTSelectQuery & select, ASTPtr & ast
|
||||
if (select.select())
|
||||
{
|
||||
aliases_data.public_names = true;
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.refSelect());
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.select());
|
||||
aliases_data.public_names = false;
|
||||
}
|
||||
if (select.where())
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.refWhere());
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.where());
|
||||
if (select.prewhere())
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.refPrewhere());
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.prewhere());
|
||||
if (select.orderBy())
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.orderBy());
|
||||
if (select.groupBy())
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.groupBy());
|
||||
if (select.having())
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.refHaving());
|
||||
ColumnAliasesVisitor(aliases_data).visit(select.having());
|
||||
|
||||
/// JOIN sections
|
||||
for (auto & child : select.tables()->children)
|
||||
|
@ -17,6 +17,51 @@ namespace ErrorCodes
|
||||
extern const int NOT_IMPLEMENTED;
|
||||
}
|
||||
|
||||
namespace
|
||||
{
|
||||
|
||||
template <bool has_nulls>
|
||||
int nullableCompareAt(const IColumn & left_column, const IColumn & right_column, size_t lhs_pos, size_t rhs_pos)
|
||||
{
|
||||
static constexpr int null_direction_hint = 1;
|
||||
|
||||
if constexpr (has_nulls)
|
||||
{
|
||||
auto * left_nullable = checkAndGetColumn<ColumnNullable>(left_column);
|
||||
auto * right_nullable = checkAndGetColumn<ColumnNullable>(right_column);
|
||||
|
||||
if (left_nullable && right_nullable)
|
||||
{
|
||||
int res = left_column.compareAt(lhs_pos, rhs_pos, right_column, null_direction_hint);
|
||||
if (res)
|
||||
return res;
|
||||
|
||||
/// NULL != NULL case
|
||||
if (left_column.isNullAt(lhs_pos))
|
||||
return null_direction_hint;
|
||||
}
|
||||
|
||||
if (left_nullable && !right_nullable)
|
||||
{
|
||||
if (left_column.isNullAt(lhs_pos))
|
||||
return null_direction_hint;
|
||||
return left_nullable->getNestedColumn().compareAt(lhs_pos, rhs_pos, right_column, null_direction_hint);
|
||||
}
|
||||
|
||||
if (!left_nullable && right_nullable)
|
||||
{
|
||||
if (right_column.isNullAt(rhs_pos))
|
||||
return -null_direction_hint;
|
||||
return left_column.compareAt(lhs_pos, rhs_pos, right_nullable->getNestedColumn(), null_direction_hint);
|
||||
}
|
||||
}
|
||||
|
||||
/// !left_nullable && !right_nullable
|
||||
return left_column.compareAt(lhs_pos, rhs_pos, right_column, null_direction_hint);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
struct MergeJoinEqualRange
|
||||
{
|
||||
size_t left_start = 0;
|
||||
@ -42,45 +87,40 @@ public:
|
||||
bool atEnd() const { return impl.pos >= impl.rows; }
|
||||
void nextN(size_t num) { impl.pos += num; }
|
||||
|
||||
int compareAt(const MergeJoinCursor & rhs, size_t lhs_pos, size_t rhs_pos) const
|
||||
void setCompareNullability(const MergeJoinCursor & rhs)
|
||||
{
|
||||
int res = 0;
|
||||
has_nullable_columns = false;
|
||||
|
||||
for (size_t i = 0; i < impl.sort_columns_size; ++i)
|
||||
{
|
||||
res = impl.sort_columns[i]->compareAt(lhs_pos, rhs_pos, *(rhs.impl.sort_columns[i]), 1);
|
||||
if (res)
|
||||
bool is_left_nullable = isColumnNullable(*impl.sort_columns[i]);
|
||||
bool is_right_nullable = isColumnNullable(*rhs.impl.sort_columns[i]);
|
||||
|
||||
if (is_left_nullable || is_right_nullable)
|
||||
{
|
||||
has_nullable_columns = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
return res;
|
||||
}
|
||||
|
||||
bool sameNext(size_t lhs_pos) const
|
||||
{
|
||||
if (lhs_pos + 1 >= impl.rows)
|
||||
return false;
|
||||
|
||||
for (size_t i = 0; i < impl.sort_columns_size; ++i)
|
||||
if (impl.sort_columns[i]->compareAt(lhs_pos, lhs_pos + 1, *(impl.sort_columns[i]), 1) != 0)
|
||||
return false;
|
||||
return true;
|
||||
}
|
||||
|
||||
size_t getEqualLength()
|
||||
{
|
||||
if (atEnd())
|
||||
return 0;
|
||||
|
||||
size_t pos = impl.pos;
|
||||
while (sameNext(pos))
|
||||
++pos;
|
||||
return pos - impl.pos + 1;
|
||||
}
|
||||
|
||||
Range getNextEqualRange(MergeJoinCursor & rhs)
|
||||
{
|
||||
if (has_nullable_columns)
|
||||
return getNextEqualRangeImpl<true>(rhs);
|
||||
return getNextEqualRangeImpl<false>(rhs);
|
||||
}
|
||||
|
||||
private:
|
||||
SortCursorImpl impl;
|
||||
bool has_nullable_columns = false;
|
||||
|
||||
template <bool has_nulls>
|
||||
Range getNextEqualRangeImpl(MergeJoinCursor & rhs)
|
||||
{
|
||||
while (!atEnd() && !rhs.atEnd())
|
||||
{
|
||||
int cmp = compareAt(rhs, impl.pos, rhs.impl.pos);
|
||||
int cmp = compareAt<has_nulls>(rhs, impl.pos, rhs.impl.pos);
|
||||
if (cmp < 0)
|
||||
impl.next();
|
||||
if (cmp > 0)
|
||||
@ -97,8 +137,43 @@ public:
|
||||
return Range{impl.pos, rhs.impl.pos, 0, 0};
|
||||
}
|
||||
|
||||
private:
|
||||
SortCursorImpl impl;
|
||||
template <bool has_nulls>
|
||||
int compareAt(const MergeJoinCursor & rhs, size_t lhs_pos, size_t rhs_pos) const
|
||||
{
|
||||
int res = 0;
|
||||
for (size_t i = 0; i < impl.sort_columns_size; ++i)
|
||||
{
|
||||
auto * left_column = impl.sort_columns[i];
|
||||
auto * right_column = rhs.impl.sort_columns[i];
|
||||
|
||||
res = nullableCompareAt<has_nulls>(*left_column, *right_column, lhs_pos, rhs_pos);
|
||||
if (res)
|
||||
break;
|
||||
}
|
||||
return res;
|
||||
}
|
||||
|
||||
size_t getEqualLength()
|
||||
{
|
||||
if (atEnd())
|
||||
return 0;
|
||||
|
||||
size_t pos = impl.pos;
|
||||
while (sameNext(pos))
|
||||
++pos;
|
||||
return pos - impl.pos + 1;
|
||||
}
|
||||
|
||||
bool sameNext(size_t lhs_pos) const
|
||||
{
|
||||
if (lhs_pos + 1 >= impl.rows)
|
||||
return false;
|
||||
|
||||
for (size_t i = 0; i < impl.sort_columns_size; ++i)
|
||||
if (impl.sort_columns[i]->compareAt(lhs_pos, lhs_pos + 1, *(impl.sort_columns[i]), 1) != 0)
|
||||
return false;
|
||||
return true;
|
||||
}
|
||||
};
|
||||
|
||||
namespace
|
||||
@ -151,9 +226,9 @@ void copyRightRange(const Block & right_block, const Block & right_columns_to_ad
|
||||
auto * dst_nullable = typeid_cast<ColumnNullable *>(dst_column.get());
|
||||
|
||||
if (dst_nullable && !isColumnNullable(*src_column))
|
||||
dst_nullable->insertRangeFromNotNullable(*src_column, row_position, rows_to_add);
|
||||
dst_nullable->insertManyFromNotNullable(*src_column, row_position, rows_to_add);
|
||||
else
|
||||
dst_column->insertRangeFrom(*src_column, row_position, rows_to_add);
|
||||
dst_column->insertManyFrom(*src_column, row_position, rows_to_add);
|
||||
}
|
||||
}
|
||||
|
||||
@ -179,8 +254,7 @@ void joinEquals(const Block & left_block, const Block & right_block, const Block
|
||||
void appendNulls(MutableColumns & right_columns, size_t rows_to_add)
|
||||
{
|
||||
for (auto & column : right_columns)
|
||||
for (size_t i = 0; i < rows_to_add; ++i)
|
||||
column->insertDefault();
|
||||
column->insertManyDefaults(rows_to_add);
|
||||
}
|
||||
|
||||
void joinInequalsLeft(const Block & left_block, MutableColumns & left_columns, MutableColumns & right_columns,
|
||||
@ -232,6 +306,11 @@ void MergeJoin::setTotals(const Block & totals_block)
|
||||
mergeRightBlocks();
|
||||
}
|
||||
|
||||
void MergeJoin::joinTotals(Block & block) const
|
||||
{
|
||||
JoinCommon::joinTotals(totals, right_columns_to_add, table_join->keyNamesRight(), block);
|
||||
}
|
||||
|
||||
void MergeJoin::mergeRightBlocks()
|
||||
{
|
||||
const size_t max_merged_block_size = 128 * 1024 * 1024;
|
||||
@ -319,6 +398,7 @@ void MergeJoin::leftJoin(MergeJoinCursor & left_cursor, const Block & left_block
|
||||
MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail)
|
||||
{
|
||||
MergeJoinCursor right_cursor(right_block, right_merge_description);
|
||||
left_cursor.setCompareNullability(right_cursor);
|
||||
|
||||
while (!left_cursor.atEnd() && !right_cursor.atEnd())
|
||||
{
|
||||
@ -351,6 +431,7 @@ void MergeJoin::innerJoin(MergeJoinCursor & left_cursor, const Block & left_bloc
|
||||
MutableColumns & left_columns, MutableColumns & right_columns, size_t & left_key_tail)
|
||||
{
|
||||
MergeJoinCursor right_cursor(right_block, right_merge_description);
|
||||
left_cursor.setCompareNullability(right_cursor);
|
||||
|
||||
while (!left_cursor.atEnd() && !right_cursor.atEnd())
|
||||
{
|
||||
|
@ -22,7 +22,7 @@ public:
|
||||
|
||||
bool addJoinedBlock(const Block & block) override;
|
||||
void joinBlock(Block &) override;
|
||||
void joinTotals(Block &) const override {}
|
||||
void joinTotals(Block &) const override;
|
||||
void setTotals(const Block &) override;
|
||||
size_t getTotalRowCount() const override { return right_blocks_row_count; }
|
||||
|
||||
|
@ -122,5 +122,30 @@ void createMissedColumns(Block & block)
|
||||
}
|
||||
}
|
||||
|
||||
void joinTotals(const Block & totals, const Block & columns_to_add, const Names & key_names_right, Block & block)
|
||||
{
|
||||
if (Block totals_without_keys = totals)
|
||||
{
|
||||
for (const auto & name : key_names_right)
|
||||
totals_without_keys.erase(totals_without_keys.getPositionByName(name));
|
||||
|
||||
for (size_t i = 0; i < totals_without_keys.columns(); ++i)
|
||||
block.insert(totals_without_keys.safeGetByPosition(i));
|
||||
}
|
||||
else
|
||||
{
|
||||
/// We will join empty `totals` - from one row with the default values.
|
||||
|
||||
for (size_t i = 0; i < columns_to_add.columns(); ++i)
|
||||
{
|
||||
const auto & col = columns_to_add.getByPosition(i);
|
||||
block.insert({
|
||||
col.type->createColumnConstWithDefaultValue(1)->convertToFullColumnIfConst(),
|
||||
col.type,
|
||||
col.name});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
@ -26,6 +26,7 @@ ColumnRawPtrs extractKeysForJoin(const Names & key_names_right, const Block & ri
|
||||
void checkTypesOfKeys(const Block & block_left, const Names & key_names_left, const Block & block_right, const Names & key_names_right);
|
||||
|
||||
void createMissedColumns(Block & block);
|
||||
void joinTotals(const Block & totals, const Block & columns_to_add, const Names & key_names_right, Block & block);
|
||||
|
||||
}
|
||||
|
||||
|
@ -20,8 +20,10 @@ namespace ErrorCodes
|
||||
extern const int TIMEOUT_EXCEEDED;
|
||||
}
|
||||
|
||||
namespace
|
||||
{
|
||||
|
||||
static bool isParseError(int code)
|
||||
bool isParseError(int code)
|
||||
{
|
||||
return code == ErrorCodes::CANNOT_PARSE_INPUT_ASSERTION_FAILED
|
||||
|| code == ErrorCodes::CANNOT_PARSE_QUOTED_STRING
|
||||
@ -33,34 +35,8 @@ static bool isParseError(int code)
|
||||
|| code == ErrorCodes::TOO_LARGE_STRING_SIZE;
|
||||
}
|
||||
|
||||
|
||||
static bool handleOverflowMode(OverflowMode mode, const String & message, int code)
|
||||
{
|
||||
switch (mode)
|
||||
{
|
||||
case OverflowMode::THROW:
|
||||
throw Exception(message, code);
|
||||
case OverflowMode::BREAK:
|
||||
return false;
|
||||
default:
|
||||
throw Exception("Logical error: unknown overflow mode", ErrorCodes::LOGICAL_ERROR);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
static bool checkTimeLimit(const IRowInputFormat::Params & params, const Stopwatch & stopwatch)
|
||||
{
|
||||
if (params.max_execution_time != 0
|
||||
&& stopwatch.elapsed() > static_cast<UInt64>(params.max_execution_time.totalMicroseconds()) * 1000)
|
||||
return handleOverflowMode(params.timeout_overflow_mode,
|
||||
"Timeout exceeded: elapsed " + toString(stopwatch.elapsedSeconds())
|
||||
+ " seconds, maximum: " + toString(params.max_execution_time.totalMicroseconds() / 1000000.0),
|
||||
ErrorCodes::TIMEOUT_EXCEEDED);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
|
||||
Chunk IRowInputFormat::generate()
|
||||
{
|
||||
if (total_rows == 0)
|
||||
@ -76,15 +52,8 @@ Chunk IRowInputFormat::generate()
|
||||
|
||||
try
|
||||
{
|
||||
for (size_t rows = 0, batch = 0; rows < params.max_block_size; ++rows, ++batch)
|
||||
for (size_t rows = 0; rows < params.max_block_size; ++rows)
|
||||
{
|
||||
if (params.rows_portion_size && batch == params.rows_portion_size)
|
||||
{
|
||||
batch = 0;
|
||||
if (!checkTimeLimit(params, total_stopwatch) || isCancelled())
|
||||
break;
|
||||
}
|
||||
|
||||
try
|
||||
{
|
||||
++total_rows;
|
||||
|
@ -27,8 +27,6 @@ struct RowInputFormatParams
|
||||
UInt64 allow_errors_num;
|
||||
Float64 allow_errors_ratio;
|
||||
|
||||
UInt64 rows_portion_size;
|
||||
|
||||
using ReadCallback = std::function<void()>;
|
||||
ReadCallback callback;
|
||||
|
||||
@ -85,4 +83,3 @@ private:
|
||||
};
|
||||
|
||||
}
|
||||
|
||||
|
@ -392,7 +392,8 @@ struct StorageDistributedDirectoryMonitor::Batch
|
||||
remote->writePrepared(in);
|
||||
}
|
||||
|
||||
remote->writeSuffix();
|
||||
if (remote)
|
||||
remote->writeSuffix();
|
||||
}
|
||||
catch (const Exception & e)
|
||||
{
|
||||
|
@ -49,10 +49,13 @@ void KafkaBlockInputStream::readPrefixImpl()
|
||||
|
||||
buffer->subscribe(storage.getTopics());
|
||||
|
||||
const auto & limits_ = getLimits();
|
||||
const size_t poll_timeout = buffer->pollTimeout();
|
||||
size_t rows_portion_size = poll_timeout ? std::min<size_t>(max_block_size, limits_.max_execution_time.totalMilliseconds() / poll_timeout) : max_block_size;
|
||||
rows_portion_size = std::max(rows_portion_size, 1ul);
|
||||
broken = true;
|
||||
}
|
||||
|
||||
Block KafkaBlockInputStream::readImpl()
|
||||
{
|
||||
if (!buffer)
|
||||
return Block();
|
||||
|
||||
auto non_virtual_header = storage.getSampleBlockNonMaterialized(); /// FIXME: add materialized columns support
|
||||
auto read_callback = [this]
|
||||
@ -67,33 +70,72 @@ void KafkaBlockInputStream::readPrefixImpl()
|
||||
virtual_columns[4]->insert(std::chrono::duration_cast<std::chrono::seconds>(timestamp->get_timestamp()).count()); // "timestamp"
|
||||
};
|
||||
|
||||
auto child = FormatFactory::instance().getInput(
|
||||
storage.getFormatName(), *buffer, non_virtual_header, context, max_block_size, rows_portion_size, read_callback);
|
||||
child->setLimits(limits_);
|
||||
addChild(child);
|
||||
auto merge_blocks = [] (Block & block1, Block && block2)
|
||||
{
|
||||
if (!block1)
|
||||
{
|
||||
// Need to make sure that resulting block has the same structure
|
||||
block1 = std::move(block2);
|
||||
return;
|
||||
}
|
||||
|
||||
broken = true;
|
||||
}
|
||||
if (!block2)
|
||||
return;
|
||||
|
||||
Block KafkaBlockInputStream::readImpl()
|
||||
{
|
||||
if (!buffer)
|
||||
auto columns1 = block1.mutateColumns();
|
||||
auto columns2 = block2.mutateColumns();
|
||||
for (size_t i = 0, s = columns1.size(); i < s; ++i)
|
||||
columns1[i]->insertRangeFrom(*columns2[i], 0, columns2[i]->size());
|
||||
block1.setColumns(std::move(columns1));
|
||||
};
|
||||
|
||||
auto read_kafka_message = [&, this]
|
||||
{
|
||||
Block result;
|
||||
auto child = FormatFactory::instance().getInput(
|
||||
storage.getFormatName(), *buffer, non_virtual_header, context, max_block_size, read_callback);
|
||||
const auto virtual_header = storage.getSampleBlockForColumns({"_topic", "_key", "_offset", "_partition", "_timestamp"});
|
||||
|
||||
while (auto block = child->read())
|
||||
{
|
||||
auto virtual_block = virtual_header.cloneWithColumns(std::move(virtual_columns));
|
||||
virtual_columns = virtual_header.cloneEmptyColumns();
|
||||
|
||||
for (const auto & column : virtual_block.getColumnsWithTypeAndName())
|
||||
block.insert(column);
|
||||
|
||||
/// FIXME: materialize MATERIALIZED columns here.
|
||||
|
||||
merge_blocks(result, std::move(block));
|
||||
}
|
||||
|
||||
return result;
|
||||
};
|
||||
|
||||
Block single_block;
|
||||
|
||||
UInt64 total_rows = 0;
|
||||
while (total_rows < max_block_size)
|
||||
{
|
||||
auto new_block = read_kafka_message();
|
||||
auto new_rows = new_block.rows();
|
||||
total_rows += new_rows;
|
||||
merge_blocks(single_block, std::move(new_block));
|
||||
|
||||
buffer->allowNext();
|
||||
|
||||
if (!new_rows || !checkTimeLimit())
|
||||
break;
|
||||
}
|
||||
|
||||
if (!single_block)
|
||||
return Block();
|
||||
|
||||
Block block = children.back()->read();
|
||||
if (!block)
|
||||
return block;
|
||||
|
||||
Block virtual_block = storage.getSampleBlockForColumns({"_topic", "_key", "_offset", "_partition", "_timestamp"}).cloneWithColumns(std::move(virtual_columns));
|
||||
virtual_columns = storage.getSampleBlockForColumns({"_topic", "_key", "_offset", "_partition", "_timestamp"}).cloneEmptyColumns();
|
||||
|
||||
for (const auto & column : virtual_block.getColumnsWithTypeAndName())
|
||||
block.insert(column);
|
||||
|
||||
/// FIXME: materialize MATERIALIZED columns here.
|
||||
|
||||
return ConvertingBlockInputStream(
|
||||
context, std::make_shared<OneBlockInputStream>(block), getHeader(), ConvertingBlockInputStream::MatchColumnsMode::Name)
|
||||
context,
|
||||
std::make_shared<OneBlockInputStream>(single_block),
|
||||
getHeader(),
|
||||
ConvertingBlockInputStream::MatchColumnsMode::Name)
|
||||
.read();
|
||||
}
|
||||
|
||||
|
@ -13,7 +13,6 @@ ReadBufferFromKafkaConsumer::ReadBufferFromKafkaConsumer(
|
||||
size_t max_batch_size,
|
||||
size_t poll_timeout_,
|
||||
bool intermediate_commit_,
|
||||
char delimiter_,
|
||||
const std::atomic<bool> & stopped_)
|
||||
: ReadBuffer(nullptr, 0)
|
||||
, consumer(consumer_)
|
||||
@ -21,7 +20,6 @@ ReadBufferFromKafkaConsumer::ReadBufferFromKafkaConsumer(
|
||||
, batch_size(max_batch_size)
|
||||
, poll_timeout(poll_timeout_)
|
||||
, intermediate_commit(intermediate_commit_)
|
||||
, delimiter(delimiter_)
|
||||
, stopped(stopped_)
|
||||
, current(messages.begin())
|
||||
{
|
||||
@ -140,16 +138,9 @@ bool ReadBufferFromKafkaConsumer::nextImpl()
|
||||
/// NOTE: ReadBuffer was implemented with immutable underlying contents in mind.
|
||||
/// If we failed to poll any message once - don't try again.
|
||||
/// Otherwise, the |poll_timeout| expectations get flawed.
|
||||
if (stalled || stopped)
|
||||
if (stalled || stopped || !allowed)
|
||||
return false;
|
||||
|
||||
if (put_delimiter)
|
||||
{
|
||||
BufferBase::set(&delimiter, 1, 0);
|
||||
put_delimiter = false;
|
||||
return true;
|
||||
}
|
||||
|
||||
if (current == messages.end())
|
||||
{
|
||||
if (intermediate_commit)
|
||||
@ -181,7 +172,7 @@ bool ReadBufferFromKafkaConsumer::nextImpl()
|
||||
// XXX: very fishy place with const casting.
|
||||
auto new_position = reinterpret_cast<char *>(const_cast<unsigned char *>(current->get_payload().get_data()));
|
||||
BufferBase::set(new_position, current->get_payload().get_size(), 0);
|
||||
put_delimiter = (delimiter != 0);
|
||||
allowed = false;
|
||||
|
||||
/// Since we can poll more messages than we already processed - commit only processed messages.
|
||||
consumer->store_offset(*current);
|
||||
|
@ -25,10 +25,10 @@ public:
|
||||
size_t max_batch_size,
|
||||
size_t poll_timeout_,
|
||||
bool intermediate_commit_,
|
||||
char delimiter_,
|
||||
const std::atomic<bool> & stopped_);
|
||||
~ReadBufferFromKafkaConsumer() override;
|
||||
|
||||
void allowNext() { allowed = true; } // Allow to read next message.
|
||||
void commit(); // Commit all processed messages.
|
||||
void subscribe(const Names & topics); // Subscribe internal consumer to topics.
|
||||
void unsubscribe(); // Unsubscribe internal consumer in case of failure.
|
||||
@ -51,9 +51,7 @@ private:
|
||||
const size_t poll_timeout = 0;
|
||||
bool stalled = false;
|
||||
bool intermediate_commit = true;
|
||||
|
||||
char delimiter;
|
||||
bool put_delimiter = false;
|
||||
bool allowed = true;
|
||||
|
||||
const std::atomic<bool> & stopped;
|
||||
|
||||
|
@ -278,7 +278,7 @@ ConsumerBufferPtr StorageKafka::createReadBuffer()
|
||||
size_t poll_timeout = settings.stream_poll_timeout_ms.totalMilliseconds();
|
||||
|
||||
/// NOTE: we pass |stream_cancelled| by reference here, so the buffers should not outlive the storage.
|
||||
return std::make_shared<ReadBufferFromKafkaConsumer>(consumer, log, batch_size, poll_timeout, intermediate_commit, row_delimiter, stream_cancelled);
|
||||
return std::make_shared<ReadBufferFromKafkaConsumer>(consumer, log, batch_size, poll_timeout, intermediate_commit, stream_cancelled);
|
||||
}
|
||||
|
||||
|
||||
|
@ -126,7 +126,7 @@ MergeTreeData::MergeTreeData(
|
||||
, log_name(database_name + "." + table_name)
|
||||
, log(&Logger::get(log_name))
|
||||
, storage_settings(std::move(storage_settings_))
|
||||
, storage_policy(context_.getStoragePolicy(getSettings()->storage_policy_name))
|
||||
, storage_policy(context_.getStoragePolicy(getSettings()->storage_policy))
|
||||
, data_parts_by_info(data_parts_indexes.get<TagByInfo>())
|
||||
, data_parts_by_state_and_info(data_parts_indexes.get<TagByStateAndInfo>())
|
||||
, parts_mover(this)
|
||||
|
@ -88,7 +88,7 @@ struct MergeTreeSettings : public SettingsCollection<MergeTreeSettings>
|
||||
M(SettingMaxThreads, max_part_loading_threads, 0, "The number of threads to load data parts at startup.") \
|
||||
M(SettingMaxThreads, max_part_removal_threads, 0, "The number of threads for concurrent removal of inactive data parts. One is usually enough, but in 'Google Compute Environment SSD Persistent Disks' the file removal (unlink) operation is extraordinarily slow and you probably have to increase this number (up to 16 is recommended).") \
|
||||
M(SettingUInt64, concurrent_part_removal_threshold, 100, "Activate concurrent part removal (see 'max_part_removal_threads') only if the number of inactive data parts is at least this.") \
|
||||
M(SettingString, storage_policy_name, "default", "Name of storage disk policy")
|
||||
M(SettingString, storage_policy, "default", "Name of storage disk policy")
|
||||
|
||||
DECLARE_SETTINGS_COLLECTION(LIST_OF_MERGE_TREE_SETTINGS)
|
||||
|
||||
@ -104,7 +104,7 @@ struct MergeTreeSettings : public SettingsCollection<MergeTreeSettings>
|
||||
/// We check settings after storage creation
|
||||
static bool isReadonlySetting(const String & name)
|
||||
{
|
||||
return name == "index_granularity" || name == "index_granularity_bytes" || name == "storage_policy_name";
|
||||
return name == "index_granularity" || name == "index_granularity_bytes" || name == "storage_policy";
|
||||
}
|
||||
};
|
||||
|
||||
|
@ -5110,38 +5110,29 @@ ActionLock StorageReplicatedMergeTree::getActionLock(StorageActionBlockType acti
|
||||
|
||||
bool StorageReplicatedMergeTree::waitForShrinkingQueueSize(size_t queue_size, UInt64 max_wait_milliseconds)
|
||||
{
|
||||
Stopwatch watch;
|
||||
|
||||
/// Let's fetch new log entries firstly
|
||||
queue.pullLogsToQueue(getZooKeeper());
|
||||
|
||||
Stopwatch watch;
|
||||
Poco::Event event;
|
||||
std::atomic<bool> cond_reached{false};
|
||||
|
||||
auto callback = [&event, &cond_reached, queue_size] (size_t new_queue_size)
|
||||
Poco::Event target_size_event;
|
||||
auto callback = [&target_size_event, queue_size] (size_t new_queue_size)
|
||||
{
|
||||
if (new_queue_size <= queue_size)
|
||||
cond_reached.store(true, std::memory_order_relaxed);
|
||||
|
||||
event.set();
|
||||
target_size_event.set();
|
||||
};
|
||||
const auto handler = queue.addSubscriber(std::move(callback));
|
||||
|
||||
auto handler = queue.addSubscriber(std::move(callback));
|
||||
|
||||
while (true)
|
||||
while (!target_size_event.tryWait(50))
|
||||
{
|
||||
event.tryWait(50);
|
||||
|
||||
if (max_wait_milliseconds && watch.elapsedMilliseconds() > max_wait_milliseconds)
|
||||
break;
|
||||
|
||||
if (cond_reached)
|
||||
break;
|
||||
return false;
|
||||
|
||||
if (partial_shutdown_called)
|
||||
throw Exception("Shutdown is called for table", ErrorCodes::ABORTED);
|
||||
}
|
||||
|
||||
return cond_reached.load(std::memory_order_relaxed);
|
||||
return true;
|
||||
}
|
||||
|
||||
|
||||
|
177
dbms/src/Storages/StorageS3.cpp
Normal file
@ -0,0 +1,177 @@
|
||||
#include <Storages/StorageFactory.h>
|
||||
#include <Storages/StorageS3.h>
|
||||
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Interpreters/evaluateConstantExpression.h>
|
||||
#include <Parsers/ASTLiteral.h>
|
||||
|
||||
#include <IO/ReadBufferFromS3.h>
|
||||
#include <IO/WriteBufferFromS3.h>
|
||||
|
||||
#include <Formats/FormatFactory.h>
|
||||
|
||||
#include <DataStreams/IBlockOutputStream.h>
|
||||
#include <DataStreams/IBlockInputStream.h>
|
||||
#include <DataStreams/AddingDefaultsBlockInputStream.h>
|
||||
|
||||
#include <Poco/Net/HTTPRequest.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
|
||||
}
|
||||
|
||||
namespace
|
||||
{
|
||||
class StorageS3BlockInputStream : public IBlockInputStream
|
||||
{
|
||||
public:
|
||||
StorageS3BlockInputStream(const Poco::URI & uri,
|
||||
const String & format,
|
||||
const String & name_,
|
||||
const Block & sample_block,
|
||||
const Context & context,
|
||||
UInt64 max_block_size,
|
||||
const ConnectionTimeouts & timeouts)
|
||||
: name(name_)
|
||||
{
|
||||
read_buf = std::make_unique<ReadBufferFromS3>(uri, timeouts);
|
||||
|
||||
reader = FormatFactory::instance().getInput(format, *read_buf, sample_block, context, max_block_size);
|
||||
}
|
||||
|
||||
String getName() const override
|
||||
{
|
||||
return name;
|
||||
}
|
||||
|
||||
Block readImpl() override
|
||||
{
|
||||
return reader->read();
|
||||
}
|
||||
|
||||
Block getHeader() const override
|
||||
{
|
||||
return reader->getHeader();
|
||||
}
|
||||
|
||||
void readPrefixImpl() override
|
||||
{
|
||||
reader->readPrefix();
|
||||
}
|
||||
|
||||
void readSuffixImpl() override
|
||||
{
|
||||
reader->readSuffix();
|
||||
}
|
||||
|
||||
private:
|
||||
String name;
|
||||
std::unique_ptr<ReadBufferFromS3> read_buf;
|
||||
BlockInputStreamPtr reader;
|
||||
};
|
||||
|
||||
class StorageS3BlockOutputStream : public IBlockOutputStream
|
||||
{
|
||||
public:
|
||||
StorageS3BlockOutputStream(const Poco::URI & uri,
|
||||
const String & format,
|
||||
const Block & sample_block_,
|
||||
const Context & context,
|
||||
const ConnectionTimeouts & timeouts)
|
||||
: sample_block(sample_block_)
|
||||
{
|
||||
auto minimum_upload_part_size = context.getConfigRef().getUInt64("s3_minimum_upload_part_size", 512'000'000);
|
||||
write_buf = std::make_unique<WriteBufferFromS3>(uri, minimum_upload_part_size, timeouts);
|
||||
writer = FormatFactory::instance().getOutput(format, *write_buf, sample_block, context);
|
||||
}
|
||||
|
||||
Block getHeader() const override
|
||||
{
|
||||
return sample_block;
|
||||
}
|
||||
|
||||
void write(const Block & block) override
|
||||
{
|
||||
writer->write(block);
|
||||
}
|
||||
|
||||
void writePrefix() override
|
||||
{
|
||||
writer->writePrefix();
|
||||
}
|
||||
|
||||
void writeSuffix() override
|
||||
{
|
||||
writer->writeSuffix();
|
||||
writer->flush();
|
||||
write_buf->finalize();
|
||||
}
|
||||
|
||||
private:
|
||||
Block sample_block;
|
||||
std::unique_ptr<WriteBufferFromS3> write_buf;
|
||||
BlockOutputStreamPtr writer;
|
||||
};
|
||||
}
|
||||
|
||||
|
||||
BlockInputStreams StorageS3::read(const Names & column_names,
|
||||
const SelectQueryInfo & /*query_info*/,
|
||||
const Context & context,
|
||||
QueryProcessingStage::Enum /*processed_stage*/,
|
||||
size_t max_block_size,
|
||||
unsigned /*num_streams*/)
|
||||
{
|
||||
BlockInputStreamPtr block_input = std::make_shared<StorageS3BlockInputStream>(uri,
|
||||
format_name,
|
||||
getName(),
|
||||
getHeaderBlock(column_names),
|
||||
context,
|
||||
max_block_size,
|
||||
ConnectionTimeouts::getHTTPTimeouts(context));
|
||||
|
||||
auto column_defaults = getColumns().getDefaults();
|
||||
if (column_defaults.empty())
|
||||
return {block_input};
|
||||
return {std::make_shared<AddingDefaultsBlockInputStream>(block_input, column_defaults, context)};
|
||||
}
|
||||
|
||||
void StorageS3::rename(const String & /*new_path_to_db*/, const String & new_database_name, const String & new_table_name, TableStructureWriteLockHolder &)
|
||||
{
|
||||
table_name = new_table_name;
|
||||
database_name = new_database_name;
|
||||
}
|
||||
|
||||
BlockOutputStreamPtr StorageS3::write(const ASTPtr & /*query*/, const Context & /*context*/)
|
||||
{
|
||||
return std::make_shared<StorageS3BlockOutputStream>(
|
||||
uri, format_name, getSampleBlock(), context_global, ConnectionTimeouts::getHTTPTimeouts(context_global));
|
||||
}
|
||||
|
||||
void registerStorageS3(StorageFactory & factory)
|
||||
{
|
||||
factory.registerStorage("S3", [](const StorageFactory::Arguments & args)
|
||||
{
|
||||
ASTs & engine_args = args.engine_args;
|
||||
|
||||
if (!(engine_args.size() == 1 || engine_args.size() == 2))
|
||||
throw Exception(
|
||||
"Storage S3 requires exactly 2 arguments: url and name of used format.", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
|
||||
|
||||
engine_args[0] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[0], args.local_context);
|
||||
|
||||
String url = engine_args[0]->as<ASTLiteral &>().value.safeGet<String>();
|
||||
Poco::URI uri(url);
|
||||
|
||||
engine_args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(engine_args[1], args.local_context);
|
||||
|
||||
String format_name = engine_args[1]->as<ASTLiteral &>().value.safeGet<String>();
|
||||
|
||||
return StorageS3::create(uri, args.database_name, args.table_name, format_name, args.columns, args.context);
|
||||
});
|
||||
}
|
||||
}
|
71
dbms/src/Storages/StorageS3.h
Normal file
@@ -0,0 +1,71 @@
#pragma once

#include <Storages/IStorage.h>
#include <Poco/URI.h>
#include <common/logger_useful.h>
#include <ext/shared_ptr_helper.h>

namespace DB
{
/**
 * This class represents the table engine for external S3 URLs.
 * It sends an HTTP GET to the server when a SELECT is called and
 * an HTTP PUT when an INSERT is called.
 */
class StorageS3 : public ext::shared_ptr_helper<StorageS3>, public IStorage
{
public:
    StorageS3(const Poco::URI & uri_,
        const std::string & database_name_,
        const std::string & table_name_,
        const String & format_name_,
        const ColumnsDescription & columns_,
        Context & context_
    )
        : IStorage(columns_)
        , uri(uri_)
        , context_global(context_)
        , format_name(format_name_)
        , database_name(database_name_)
        , table_name(table_name_)
    {
        setColumns(columns_);
    }

    String getName() const override
    {
        return "S3";
    }

    Block getHeaderBlock(const Names & /*column_names*/) const
    {
        return getSampleBlock();
    }

    String getTableName() const override
    {
        return table_name;
    }

    BlockInputStreams read(const Names & column_names,
        const SelectQueryInfo & query_info,
        const Context & context,
        QueryProcessingStage::Enum processed_stage,
        size_t max_block_size,
        unsigned num_streams) override;

    BlockOutputStreamPtr write(const ASTPtr & query, const Context & context) override;

    void rename(const String & new_path_to_db, const String & new_database_name, const String & new_table_name, TableStructureWriteLockHolder &) override;

protected:
    Poco::URI uri;
    const Context & context_global;

private:
    String format_name;
    String database_name;
    String table_name;
};

}
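
For reference, a minimal sketch of how the new S3 engine could be driven from a client, assuming a server built with this patch, the third-party clickhouse_driver Python package, and a reachable S3-compatible endpoint at the hypothetical URL below (none of these are part of the diff):

from clickhouse_driver import Client  # assumed client library, not part of this patch

client = Client("localhost")

# ENGINE = S3(url, format) mirrors the arguments parsed in registerStorageS3 above.
client.execute(
    "CREATE TABLE s3_demo (column1 UInt32, column2 UInt32, column3 UInt32) "
    "ENGINE = S3('http://storage.example.com/bucket/demo.csv', 'CSV')"  # hypothetical endpoint
)

# Per the class comment, an INSERT turns into an HTTP PUT and a SELECT into an HTTP GET.
client.execute("INSERT INTO s3_demo VALUES", [(1, 2, 3), (3, 2, 1)])
print(client.execute("SELECT column1, column2, column3 FROM s3_demo"))
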
@ -2,11 +2,15 @@
|
||||
|
||||
set -x
|
||||
|
||||
# doesn't actually cd to directory, but return absolute path
|
||||
CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
|
||||
# cd to directory
|
||||
cd $CUR_DIR
|
||||
|
||||
CONTRIBUTORS_FILE=${CONTRIBUTORS_FILE=$CUR_DIR/StorageSystemContributors.generated.cpp}
|
||||
|
||||
git shortlog --summary | perl -lnE 's/^\s+\d+\s+(.+)/ "$1",/; next unless $1; say $_' > $CONTRIBUTORS_FILE.tmp
|
||||
# if you don't specify HEAD here, `git shortlog` without a terminal would expect input from stdin
|
||||
git shortlog HEAD --summary | perl -lnE 's/^\s+\d+\s+(.+)/ "$1",/; next unless $1; say $_' > $CONTRIBUTORS_FILE.tmp
|
||||
|
||||
# If git history is not available - don't make the target file
|
||||
if [ ! -s $CONTRIBUTORS_FILE.tmp ]; then
|
||||
|
@ -19,6 +19,7 @@ void registerStorageDistributed(StorageFactory & factory);
|
||||
void registerStorageMemory(StorageFactory & factory);
|
||||
void registerStorageFile(StorageFactory & factory);
|
||||
void registerStorageURL(StorageFactory & factory);
|
||||
void registerStorageS3(StorageFactory & factory);
|
||||
void registerStorageDictionary(StorageFactory & factory);
|
||||
void registerStorageSet(StorageFactory & factory);
|
||||
void registerStorageJoin(StorageFactory & factory);
|
||||
@ -60,6 +61,7 @@ void registerStorages()
|
||||
registerStorageMemory(factory);
|
||||
registerStorageFile(factory);
|
||||
registerStorageURL(factory);
|
||||
registerStorageS3(factory);
|
||||
registerStorageDictionary(factory);
|
||||
registerStorageSet(factory);
|
||||
registerStorageJoin(factory);
|
||||
|
19
dbms/src/TableFunctions/TableFunctionS3.cpp
Normal file
@@ -0,0 +1,19 @@
#include <Storages/StorageS3.h>
#include <TableFunctions/TableFunctionFactory.h>
#include <TableFunctions/TableFunctionS3.h>
#include <Poco/URI.h>

namespace DB
{
StoragePtr TableFunctionS3::getStorage(
    const String & source, const String & format, const ColumnsDescription & columns, Context & global_context, const std::string & table_name) const
{
    Poco::URI uri(source);
    return StorageS3::create(uri, getDatabaseName(), table_name, format, columns, global_context);
}

void registerTableFunctionS3(TableFunctionFactory & factory)
{
    factory.registerFunction<TableFunctionS3>();
}
}
25
dbms/src/TableFunctions/TableFunctionS3.h
Normal file
@@ -0,0 +1,25 @@
#pragma once

#include <TableFunctions/ITableFunctionFileLike.h>
#include <Interpreters/Context.h>
#include <Core/Block.h>


namespace DB
{
/* s3(source, format, structure) - creates a temporary storage for a file in S3
 */
class TableFunctionS3 : public ITableFunctionFileLike
{
public:
    static constexpr auto name = "s3";
    std::string getName() const override
    {
        return name;
    }

private:
    StoragePtr getStorage(
        const String & source, const String & format, const ColumnsDescription & columns, Context & global_context, const std::string & table_name) const override;
};
}
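
The table function registered here follows the (source, format, structure) signature documented in the header comment. The SQL below mirrors the queries used in the integration tests further down; the endpoint URL and the use of the clickhouse_driver package are assumptions for illustration:

from clickhouse_driver import Client  # assumed client library, not part of this patch

client = Client("localhost")
url = "http://storage.example.com/bucket/demo.csv"  # hypothetical S3-compatible endpoint
structure = "column1 UInt32, column2 UInt32, column3 UInt32"

# Write a few rows through the table function, then read them back with the structure declared inline.
client.execute("INSERT INTO TABLE FUNCTION s3('{}', 'CSV', '{}') VALUES (1, 2, 3), (3, 2, 1)".format(url, structure))
print(client.execute("SELECT *, column1 * column2 * column3 FROM s3('{}', 'CSV', '{}')".format(url, structure)))
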
@ -11,6 +11,7 @@ void registerTableFunctionMerge(TableFunctionFactory & factory);
|
||||
void registerTableFunctionRemote(TableFunctionFactory & factory);
|
||||
void registerTableFunctionNumbers(TableFunctionFactory & factory);
|
||||
void registerTableFunctionFile(TableFunctionFactory & factory);
|
||||
void registerTableFunctionS3(TableFunctionFactory & factory);
|
||||
void registerTableFunctionURL(TableFunctionFactory & factory);
|
||||
void registerTableFunctionValues(TableFunctionFactory & factory);
|
||||
void registerTableFunctionInput(TableFunctionFactory & factory);
|
||||
@ -38,6 +39,7 @@ void registerTableFunctions()
|
||||
registerTableFunctionRemote(factory);
|
||||
registerTableFunctionNumbers(factory);
|
||||
registerTableFunctionFile(factory);
|
||||
registerTableFunctionS3(factory);
|
||||
registerTableFunctionURL(factory);
|
||||
registerTableFunctionValues(factory);
|
||||
registerTableFunctionInput(factory);
|
||||
|
@ -125,6 +125,69 @@
|
||||
</structure>
|
||||
</dictionary>
|
||||
|
||||
<dictionary>
|
||||
<name>hashed_sparse_ints</name>
|
||||
<source>
|
||||
<clickhouse>
|
||||
<host>localhost</host>
|
||||
<port>9000</port>
|
||||
<user>default</user>
|
||||
<password></password>
|
||||
<db>test_00950</db>
|
||||
<table>ints</table>
|
||||
</clickhouse>
|
||||
</source>
|
||||
<lifetime>0</lifetime>
|
||||
<layout>
|
||||
<sparse_hashed/>
|
||||
</layout>
|
||||
<structure>
|
||||
<id>
|
||||
<name>key</name>
|
||||
</id>
|
||||
<attribute>
|
||||
<name>i8</name>
|
||||
<type>Int8</type>
|
||||
<null_value>0</null_value>
|
||||
</attribute>
|
||||
<attribute>
|
||||
<name>i16</name>
|
||||
<type>Int16</type>
|
||||
<null_value>0</null_value>
|
||||
</attribute>
|
||||
<attribute>
|
||||
<name>i32</name>
|
||||
<type>Int32</type>
|
||||
<null_value>0</null_value>
|
||||
</attribute>
|
||||
<attribute>
|
||||
<name>i64</name>
|
||||
<type>Int64</type>
|
||||
<null_value>0</null_value>
|
||||
</attribute>
|
||||
<attribute>
|
||||
<name>u8</name>
|
||||
<type>UInt8</type>
|
||||
<null_value>0</null_value>
|
||||
</attribute>
|
||||
<attribute>
|
||||
<name>u16</name>
|
||||
<type>UInt16</type>
|
||||
<null_value>0</null_value>
|
||||
</attribute>
|
||||
<attribute>
|
||||
<name>u32</name>
|
||||
<type>UInt32</type>
|
||||
<null_value>0</null_value>
|
||||
</attribute>
|
||||
<attribute>
|
||||
<name>u64</name>
|
||||
<type>UInt64</type>
|
||||
<null_value>0</null_value>
|
||||
</attribute>
|
||||
</structure>
|
||||
</dictionary>
|
||||
|
||||
<dictionary>
|
||||
<name>cache_ints</name>
|
||||
<source>
|
||||
|
@ -225,12 +225,12 @@ class ClickHouseCluster:
|
||||
def restart_instance_with_ip_change(self, node, new_ip):
|
||||
if '::' in new_ip:
|
||||
if node.ipv6_address is None:
|
||||
raise Exception("You shoud specity ipv6_address in add_node method")
|
||||
raise Exception("You should specity ipv6_address in add_node method")
|
||||
self._replace(node.docker_compose_path, node.ipv6_address, new_ip)
|
||||
node.ipv6_address = new_ip
|
||||
else:
|
||||
if node.ipv4_address is None:
|
||||
raise Exception("You shoud specity ipv4_address in add_node method")
|
||||
raise Exception("You should specity ipv4_address in add_node method")
|
||||
self._replace(node.docker_compose_path, node.ipv4_address, new_ip)
|
||||
node.ipv4_address = new_ip
|
||||
subprocess.check_call(self.base_cmd + ["stop", node.name])
|
||||
|
@ -107,4 +107,4 @@ if __name__ == "__main__":
|
||||
)
|
||||
|
||||
#print(cmd)
|
||||
subprocess.check_call(cmd, shell=True)
|
||||
subprocess.check_call(cmd, shell=True)
|
||||
|
@ -146,7 +146,7 @@ def test_query_parser(start_cluster):
|
||||
d UInt64
|
||||
) ENGINE = MergeTree()
|
||||
ORDER BY d
|
||||
SETTINGS storage_policy_name='very_exciting_policy'
|
||||
SETTINGS storage_policy='very_exciting_policy'
|
||||
""")
|
||||
|
||||
with pytest.raises(QueryRuntimeException):
|
||||
@ -155,7 +155,7 @@ def test_query_parser(start_cluster):
|
||||
d UInt64
|
||||
) ENGINE = MergeTree()
|
||||
ORDER BY d
|
||||
SETTINGS storage_policy_name='jbod1'
|
||||
SETTINGS storage_policy='jbod1'
|
||||
""")
|
||||
|
||||
|
||||
@ -164,7 +164,7 @@ def test_query_parser(start_cluster):
|
||||
d UInt64
|
||||
) ENGINE = MergeTree()
|
||||
ORDER BY d
|
||||
SETTINGS storage_policy_name='default'
|
||||
SETTINGS storage_policy='default'
|
||||
""")
|
||||
|
||||
node1.query("INSERT INTO table_with_normal_policy VALUES (5)")
|
||||
@ -182,7 +182,7 @@ def test_query_parser(start_cluster):
|
||||
node1.query("ALTER TABLE table_with_normal_policy MOVE PARTITION 'yyyy' TO DISK 'jbod1'")
|
||||
|
||||
with pytest.raises(QueryRuntimeException):
|
||||
node1.query("ALTER TABLE table_with_normal_policy MODIFY SETTING storage_policy_name='moving_jbod_with_external'")
|
||||
node1.query("ALTER TABLE table_with_normal_policy MODIFY SETTING storage_policy='moving_jbod_with_external'")
|
||||
finally:
|
||||
node1.query("DROP TABLE IF EXISTS table_with_normal_policy")
|
||||
|
||||
@ -204,7 +204,7 @@ def test_round_robin(start_cluster, name, engine):
|
||||
d UInt64
|
||||
) ENGINE = {engine}
|
||||
ORDER BY d
|
||||
SETTINGS storage_policy_name='jbods_with_external'
|
||||
SETTINGS storage_policy='jbods_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
|
||||
# first should go to the jbod1
|
||||
@ -239,7 +239,7 @@ def test_max_data_part_size(start_cluster, name, engine):
|
||||
s1 String
|
||||
) ENGINE = {engine}
|
||||
ORDER BY tuple()
|
||||
SETTINGS storage_policy_name='jbods_with_external'
|
||||
SETTINGS storage_policy='jbods_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
data = [] # 10MB in total
|
||||
for i in range(10):
|
||||
@ -263,7 +263,7 @@ def test_jbod_overflow(start_cluster, name, engine):
|
||||
s1 String
|
||||
) ENGINE = {engine}
|
||||
ORDER BY tuple()
|
||||
SETTINGS storage_policy_name='small_jbod_with_external'
|
||||
SETTINGS storage_policy='small_jbod_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
|
||||
node1.query("SYSTEM STOP MERGES")
|
||||
@ -313,7 +313,7 @@ def test_background_move(start_cluster, name, engine):
|
||||
s1 String
|
||||
) ENGINE = {engine}
|
||||
ORDER BY tuple()
|
||||
SETTINGS storage_policy_name='moving_jbod_with_external'
|
||||
SETTINGS storage_policy='moving_jbod_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
|
||||
for i in range(5):
|
||||
@ -357,7 +357,7 @@ def test_start_stop_moves(start_cluster, name, engine):
|
||||
s1 String
|
||||
) ENGINE = {engine}
|
||||
ORDER BY tuple()
|
||||
SETTINGS storage_policy_name='moving_jbod_with_external'
|
||||
SETTINGS storage_policy='moving_jbod_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
|
||||
node1.query("INSERT INTO {} VALUES ('HELLO')".format(name))
|
||||
@ -452,7 +452,7 @@ def test_alter_move(start_cluster, name, engine):
|
||||
) ENGINE = {engine}
|
||||
ORDER BY tuple()
|
||||
PARTITION BY toYYYYMM(EventDate)
|
||||
SETTINGS storage_policy_name='jbods_with_external'
|
||||
SETTINGS storage_policy='jbods_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
|
||||
node1.query("SYSTEM STOP MERGES {}".format(name)) # to avoid conflicts
|
||||
@ -540,7 +540,7 @@ def test_concurrent_alter_move(start_cluster, name, engine):
|
||||
) ENGINE = {engine}
|
||||
ORDER BY tuple()
|
||||
PARTITION BY toYYYYMM(EventDate)
|
||||
SETTINGS storage_policy_name='jbods_with_external'
|
||||
SETTINGS storage_policy='jbods_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
|
||||
def insert(num):
|
||||
@ -591,7 +591,7 @@ def test_concurrent_alter_move_and_drop(start_cluster, name, engine):
|
||||
) ENGINE = {engine}
|
||||
ORDER BY tuple()
|
||||
PARTITION BY toYYYYMM(EventDate)
|
||||
SETTINGS storage_policy_name='jbods_with_external'
|
||||
SETTINGS storage_policy='jbods_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
|
||||
def insert(num):
|
||||
@ -640,7 +640,7 @@ def test_mutate_to_another_disk(start_cluster, name, engine):
|
||||
s1 String
|
||||
) ENGINE = {engine}
|
||||
ORDER BY tuple()
|
||||
SETTINGS storage_policy_name='moving_jbod_with_external'
|
||||
SETTINGS storage_policy='moving_jbod_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
|
||||
for i in range(5):
|
||||
@ -687,7 +687,7 @@ def test_concurrent_alter_modify(start_cluster, name, engine):
|
||||
) ENGINE = {engine}
|
||||
ORDER BY tuple()
|
||||
PARTITION BY toYYYYMM(EventDate)
|
||||
SETTINGS storage_policy_name='jbods_with_external'
|
||||
SETTINGS storage_policy='jbods_with_external'
|
||||
""".format(name=name, engine=engine))
|
||||
|
||||
def insert(num):
|
||||
@ -733,7 +733,7 @@ def test_simple_replication_and_moves(start_cluster):
|
||||
s1 String
|
||||
) ENGINE = ReplicatedMergeTree('/clickhouse/replicated_table_for_moves', '{}')
|
||||
ORDER BY tuple()
|
||||
SETTINGS storage_policy_name='moving_jbod_with_external', old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=2
|
||||
SETTINGS storage_policy='moving_jbod_with_external', old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=2
|
||||
""".format(i + 1))
|
||||
|
||||
def insert(num):
|
||||
@ -796,7 +796,7 @@ def test_download_appropriate_disk(start_cluster):
|
||||
s1 String
|
||||
) ENGINE = ReplicatedMergeTree('/clickhouse/replicated_table_for_download', '{}')
|
||||
ORDER BY tuple()
|
||||
SETTINGS storage_policy_name='moving_jbod_with_external', old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=2
|
||||
SETTINGS storage_policy='moving_jbod_with_external', old_parts_lifetime=1, cleanup_delay_period=1, cleanup_delay_period_random_add=2
|
||||
""".format(i + 1))
|
||||
|
||||
data = []
|
||||
@ -827,7 +827,7 @@ def test_rename(start_cluster):
|
||||
s String
|
||||
) ENGINE = MergeTree
|
||||
ORDER BY tuple()
|
||||
SETTINGS storage_policy_name='small_jbod_with_external'
|
||||
SETTINGS storage_policy='small_jbod_with_external'
|
||||
""")
|
||||
|
||||
for _ in range(5):
|
||||
@ -867,7 +867,7 @@ def test_freeze(start_cluster):
|
||||
) ENGINE = MergeTree
|
||||
ORDER BY tuple()
|
||||
PARTITION BY toYYYYMM(d)
|
||||
SETTINGS storage_policy_name='small_jbod_with_external'
|
||||
SETTINGS storage_policy='small_jbod_with_external'
|
||||
""")
|
||||
|
||||
for _ in range(5):
|
||||
|
0
dbms/tests/integration/test_storage_s3/__init__.py
Normal file
@ -0,0 +1,3 @@
|
||||
<yandex>
|
||||
<s3_minimum_upload_part_size>1000000</s3_minimum_upload_part_size>
|
||||
</yandex>
|
159
dbms/tests/integration/test_storage_s3/test.py
Normal file
@ -0,0 +1,159 @@
|
||||
import httplib
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import time
|
||||
import traceback
|
||||
|
||||
import pytest
|
||||
|
||||
from helpers.cluster import ClickHouseCluster
|
||||
|
||||
|
||||
logging.getLogger().setLevel(logging.INFO)
|
||||
logging.getLogger().addHandler(logging.StreamHandler())
|
||||
|
||||
|
||||
def get_communication_data(started_cluster):
|
||||
conn = httplib.HTTPConnection(started_cluster.instances["dummy"].ip_address, started_cluster.communication_port)
|
||||
conn.request("GET", "/")
|
||||
r = conn.getresponse()
|
||||
raw_data = r.read()
|
||||
conn.close()
|
||||
return json.loads(raw_data)
|
||||
|
||||
|
||||
def put_communication_data(started_cluster, body):
|
||||
conn = httplib.HTTPConnection(started_cluster.instances["dummy"].ip_address, started_cluster.communication_port)
|
||||
conn.request("PUT", "/", body)
|
||||
r = conn.getresponse()
|
||||
conn.close()
|
||||
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def started_cluster():
|
||||
try:
|
||||
cluster = ClickHouseCluster(__file__)
|
||||
instance = cluster.add_instance("dummy", config_dir="configs", main_configs=["configs/min_chunk_size.xml"])
|
||||
cluster.start()
|
||||
|
||||
cluster.communication_port = 10000
|
||||
instance.copy_file_to_container(os.path.join(os.path.dirname(__file__), "test_server.py"), "test_server.py")
|
||||
cluster.bucket = "abc"
|
||||
instance.exec_in_container(["python", "test_server.py", str(cluster.communication_port), cluster.bucket], detach=True)
|
||||
cluster.mock_host = instance.ip_address
|
||||
|
||||
for i in range(10):
|
||||
try:
|
||||
data = get_communication_data(cluster)
|
||||
cluster.redirecting_to_http_port = data["redirecting_to_http_port"]
|
||||
cluster.preserving_data_port = data["preserving_data_port"]
|
||||
cluster.multipart_preserving_data_port = data["multipart_preserving_data_port"]
|
||||
cluster.redirecting_preserving_data_port = data["redirecting_preserving_data_port"]
|
||||
except:
|
||||
logging.error(traceback.format_exc())
|
||||
time.sleep(0.5)
|
||||
else:
|
||||
break
|
||||
else:
|
||||
assert False, "Could not initialize mock server"
|
||||
|
||||
yield cluster
|
||||
|
||||
finally:
|
||||
cluster.shutdown()
|
||||
|
||||
|
||||
def run_query(instance, query, stdin=None):
|
||||
logging.info("Running query '{}'...".format(query))
|
||||
result = instance.query(query, stdin=stdin)
|
||||
logging.info("Query finished")
|
||||
return result
|
||||
|
||||
|
||||
def test_get_with_redirect(started_cluster):
|
||||
instance = started_cluster.instances["dummy"]
|
||||
format = "column1 UInt32, column2 UInt32, column3 UInt32"
|
||||
|
||||
put_communication_data(started_cluster, "=== Get with redirect test ===")
|
||||
query = "select *, column1*column2*column3 from s3('http://{}:{}/', 'CSV', '{}')".format(started_cluster.mock_host, started_cluster.redirecting_to_http_port, format)
|
||||
stdout = run_query(instance, query)
|
||||
data = get_communication_data(started_cluster)
|
||||
expected = [ [str(row[0]), str(row[1]), str(row[2]), str(row[0]*row[1]*row[2])] for row in data["redirect_csv_data"] ]
|
||||
assert list(map(str.split, stdout.splitlines())) == expected
|
||||
|
||||
|
||||
def test_put(started_cluster):
|
||||
instance = started_cluster.instances["dummy"]
|
||||
format = "column1 UInt32, column2 UInt32, column3 UInt32"
|
||||
|
||||
logging.info("Phase 3")
|
||||
put_communication_data(started_cluster, "=== Put test ===")
|
||||
values = "(1, 2, 3), (3, 2, 1), (78, 43, 45)"
|
||||
put_query = "insert into table function s3('http://{}:{}/{}/test.csv', 'CSV', '{}') values {}".format(started_cluster.mock_host, started_cluster.preserving_data_port, started_cluster.bucket, format, values)
|
||||
run_query(instance, put_query)
|
||||
data = get_communication_data(started_cluster)
|
||||
received_data_completed = data["received_data_completed"]
|
||||
received_data = data["received_data"]
|
||||
finalize_data = data["finalize_data"]
|
||||
finalize_data_query = data["finalize_data_query"]
|
||||
assert received_data[-1].decode() == "1,2,3\n3,2,1\n78,43,45\n"
|
||||
assert received_data_completed
|
||||
assert finalize_data == "<CompleteMultipartUpload><Part><PartNumber>1</PartNumber><ETag>hello-etag</ETag></Part></CompleteMultipartUpload>"
|
||||
assert finalize_data_query == "uploadId=TEST"
|
||||
|
||||
|
||||
def test_put_csv(started_cluster):
|
||||
instance = started_cluster.instances["dummy"]
|
||||
format = "column1 UInt32, column2 UInt32, column3 UInt32"
|
||||
|
||||
put_communication_data(started_cluster, "=== Put test CSV ===")
|
||||
put_query = "insert into table function s3('http://{}:{}/{}/test.csv', 'CSV', '{}') format CSV".format(started_cluster.mock_host, started_cluster.preserving_data_port, started_cluster.bucket, format)
|
||||
csv_data = "8,9,16\n11,18,13\n22,14,2\n"
|
||||
run_query(instance, put_query, stdin=csv_data)
|
||||
data = get_communication_data(started_cluster)
|
||||
received_data_completed = data["received_data_completed"]
|
||||
received_data = data["received_data"]
|
||||
finalize_data = data["finalize_data"]
|
||||
finalize_data_query = data["finalize_data_query"]
|
||||
assert received_data[-1].decode() == csv_data
|
||||
assert received_data_completed
|
||||
assert finalize_data == "<CompleteMultipartUpload><Part><PartNumber>1</PartNumber><ETag>hello-etag</ETag></Part></CompleteMultipartUpload>"
|
||||
assert finalize_data_query == "uploadId=TEST"
|
||||
|
||||
|
||||
def test_put_with_redirect(started_cluster):
|
||||
instance = started_cluster.instances["dummy"]
|
||||
format = "column1 UInt32, column2 UInt32, column3 UInt32"
|
||||
|
||||
put_communication_data(started_cluster, "=== Put with redirect test ===")
|
||||
other_values = "(1, 1, 1), (1, 1, 1), (11, 11, 11)"
|
||||
query = "insert into table function s3('http://{}:{}/{}/test.csv', 'CSV', '{}') values {}".format(started_cluster.mock_host, started_cluster.redirecting_preserving_data_port, started_cluster.bucket, format, other_values)
|
||||
run_query(instance, query)
|
||||
|
||||
query = "select *, column1*column2*column3 from s3('http://{}:{}/{}/test.csv', 'CSV', '{}')".format(started_cluster.mock_host, started_cluster.preserving_data_port, started_cluster.bucket, format)
|
||||
stdout = run_query(instance, query)
|
||||
assert list(map(str.split, stdout.splitlines())) == [
|
||||
["1", "1", "1", "1"],
|
||||
["1", "1", "1", "1"],
|
||||
["11", "11", "11", "1331"],
|
||||
]
|
||||
data = get_communication_data(started_cluster)
|
||||
received_data = data["received_data"]
|
||||
assert received_data[-1].decode() == "1,1,1\n1,1,1\n11,11,11\n"
|
||||
|
||||
|
||||
def test_multipart_put(started_cluster):
|
||||
instance = started_cluster.instances["dummy"]
|
||||
format = "column1 UInt32, column2 UInt32, column3 UInt32"
|
||||
|
||||
put_communication_data(started_cluster, "=== Multipart test ===")
|
||||
long_data = [[i, i+1, i+2] for i in range(100000)]
|
||||
long_values = "".join([ "{},{},{}\n".format(x,y,z) for x, y, z in long_data ])
|
||||
put_query = "insert into table function s3('http://{}:{}/{}/test.csv', 'CSV', '{}') format CSV".format(started_cluster.mock_host, started_cluster.multipart_preserving_data_port, started_cluster.bucket, format)
|
||||
run_query(instance, put_query, stdin=long_values)
|
||||
data = get_communication_data(started_cluster)
|
||||
assert "multipart_received_data" in data
|
||||
received_data = data["multipart_received_data"]
|
||||
assert received_data[-1].decode() == "".join([ "{},{},{}\n".format(x, y, z) for x, y, z in long_data ])
|
||||
assert 1 < data["multipart_parts"] < 10000
|
365
dbms/tests/integration/test_storage_s3/test_server.py
Normal file
@ -0,0 +1,365 @@
|
||||
try:
|
||||
from BaseHTTPServer import BaseHTTPRequestHandler
|
||||
except ImportError:
|
||||
from http.server import BaseHTTPRequestHandler
|
||||
|
||||
try:
|
||||
from BaseHTTPServer import HTTPServer
|
||||
except ImportError:
|
||||
from http.server import HTTPServer
|
||||
|
||||
try:
|
||||
import urllib.parse as urlparse
|
||||
except ImportError:
|
||||
import urlparse
|
||||
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import socket
|
||||
import sys
|
||||
import threading
|
||||
import time
|
||||
import uuid
|
||||
import xml.etree.ElementTree
|
||||
|
||||
|
||||
logging.getLogger().setLevel(logging.INFO)
|
||||
file_handler = logging.FileHandler("/var/log/clickhouse-server/test-server.log", "a", encoding="utf-8")
|
||||
file_handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
|
||||
logging.getLogger().addHandler(file_handler)
|
||||
logging.getLogger().addHandler(logging.StreamHandler())
|
||||
|
||||
communication_port = int(sys.argv[1])
|
||||
bucket = sys.argv[2]
|
||||
|
||||
def GetFreeTCPPortsAndIP(n):
|
||||
result = []
|
||||
sockets = []
|
||||
for i in range(n):
|
||||
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
|
||||
tcp.bind((socket.gethostname(), 0))
|
||||
addr, port = tcp.getsockname()
|
||||
result.append(port)
|
||||
sockets.append(tcp)
|
||||
[ s.close() for s in sockets ]
|
||||
return result, addr
|
||||
|
||||
(
|
||||
redirecting_to_http_port,
|
||||
simple_server_port,
|
||||
preserving_data_port,
|
||||
multipart_preserving_data_port,
|
||||
redirecting_preserving_data_port
|
||||
), localhost = GetFreeTCPPortsAndIP(5)
|
||||
|
||||
data = {
|
||||
"redirecting_to_http_port": redirecting_to_http_port,
|
||||
"preserving_data_port": preserving_data_port,
|
||||
"multipart_preserving_data_port": multipart_preserving_data_port,
|
||||
"redirecting_preserving_data_port": redirecting_preserving_data_port,
|
||||
}
|
||||
|
||||
|
||||
class SimpleHTTPServerHandler(BaseHTTPRequestHandler):
|
||||
def do_GET(self):
|
||||
logging.info("GET {}".format(self.path))
|
||||
if self.path == "/milovidov/test.csv":
|
||||
self.send_response(200)
|
||||
self.send_header("Content-type", "text/plain")
|
||||
self.end_headers()
|
||||
data["redirect_csv_data"] = [[42, 87, 44], [55, 33, 81], [1, 0, 9]]
|
||||
self.wfile.write("".join([ "{},{},{}\n".format(*row) for row in data["redirect_csv_data"]]))
|
||||
else:
|
||||
self.send_response(404)
|
||||
self.end_headers()
|
||||
self.finish()
|
||||
|
||||
|
||||
class RedirectingToHTTPHandler(BaseHTTPRequestHandler):
|
||||
def do_GET(self):
|
||||
self.send_response(307)
|
||||
self.send_header("Content-type", "text/xml")
|
||||
self.send_header("Location", "http://{}:{}/milovidov/test.csv".format(localhost, simple_server_port))
|
||||
self.end_headers()
|
||||
self.wfile.write(r"""<?xml version="1.0" encoding="UTF-8"?>
|
||||
<Error>
|
||||
<Code>TemporaryRedirect</Code>
|
||||
<Message>Please re-send this request to the specified temporary endpoint.
|
||||
Continue to use the original request endpoint for future requests.</Message>
|
||||
<Endpoint>storage.yandexcloud.net</Endpoint>
|
||||
</Error>""".encode())
|
||||
self.finish()
|
||||
|
||||
|
||||
class PreservingDataHandler(BaseHTTPRequestHandler):
|
||||
protocol_version = "HTTP/1.1"
|
||||
|
||||
def parse_request(self):
|
||||
result = BaseHTTPRequestHandler.parse_request(self)
|
||||
# Adaptation to Python 3.
|
||||
if sys.version_info.major == 2 and result == True:
|
||||
expect = self.headers.get("Expect", "")
|
||||
if (expect.lower() == "100-continue" and self.protocol_version >= "HTTP/1.1" and self.request_version >= "HTTP/1.1"):
|
||||
if not self.handle_expect_100():
|
||||
return False
|
||||
return result
|
||||
|
||||
def send_response_only(self, code, message=None):
|
||||
if message is None:
|
||||
if code in self.responses:
|
||||
message = self.responses[code][0]
|
||||
else:
|
||||
message = ""
|
||||
if self.request_version != "HTTP/0.9":
|
||||
self.wfile.write("%s %d %s\r\n" % (self.protocol_version, code, message))
|
||||
|
||||
def handle_expect_100(self):
|
||||
logging.info("Received Expect-100")
|
||||
self.send_response_only(100)
|
||||
self.end_headers()
|
||||
return True
|
||||
|
||||
def do_POST(self):
|
||||
self.send_response(200)
|
||||
query = urlparse.urlparse(self.path).query
|
||||
logging.info("PreservingDataHandler POST ?" + query)
|
||||
if query == "uploads":
|
||||
post_data = r"""<?xml version="1.0" encoding="UTF-8"?>
|
||||
<hi><UploadId>TEST</UploadId></hi>""".encode()
|
||||
self.send_header("Content-length", str(len(post_data)))
|
||||
self.send_header("Content-type", "text/plain")
|
||||
self.end_headers()
|
||||
self.wfile.write(post_data)
|
||||
else:
|
||||
post_data = self.rfile.read(int(self.headers.get("Content-Length")))
|
||||
self.send_header("Content-type", "text/plain")
|
||||
self.end_headers()
|
||||
data["received_data_completed"] = True
|
||||
data["finalize_data"] = post_data
|
||||
data["finalize_data_query"] = query
|
||||
self.finish()
|
||||
|
||||
def do_PUT(self):
|
||||
self.send_response(200)
|
||||
self.send_header("Content-type", "text/plain")
|
||||
self.send_header("ETag", "hello-etag")
|
||||
self.end_headers()
|
||||
query = urlparse.urlparse(self.path).query
|
||||
path = urlparse.urlparse(self.path).path
|
||||
logging.info("Content-Length = " + self.headers.get("Content-Length"))
|
||||
logging.info("PUT " + query)
|
||||
assert self.headers.get("Content-Length")
|
||||
assert self.headers["Expect"] == "100-continue"
|
||||
put_data = self.rfile.read()
|
||||
data.setdefault("received_data", []).append(put_data)
|
||||
logging.info("PUT to {}".format(path))
|
||||
self.server.storage[path] = put_data
|
||||
self.finish()
|
||||
|
||||
def do_GET(self):
|
||||
path = urlparse.urlparse(self.path).path
|
||||
if path in self.server.storage:
|
||||
self.send_response(200)
|
||||
self.send_header("Content-type", "text/plain")
|
||||
self.send_header("Content-length", str(len(self.server.storage[path])))
|
||||
self.end_headers()
|
||||
self.wfile.write(self.server.storage[path])
|
||||
else:
|
||||
self.send_response(404)
|
||||
self.end_headers()
|
||||
self.finish()
|
||||
|
||||
|
||||
class MultipartPreservingDataHandler(BaseHTTPRequestHandler):
|
||||
protocol_version = "HTTP/1.1"
|
||||
|
||||
def parse_request(self):
|
||||
result = BaseHTTPRequestHandler.parse_request(self)
|
||||
# Adaptation to Python 3.
|
||||
if sys.version_info.major == 2 and result == True:
|
||||
expect = self.headers.get("Expect", "")
|
||||
if (expect.lower() == "100-continue" and self.protocol_version >= "HTTP/1.1" and self.request_version >= "HTTP/1.1"):
|
||||
if not self.handle_expect_100():
|
||||
return False
|
||||
return result
|
||||
|
||||
def send_response_only(self, code, message=None):
|
||||
if message is None:
|
||||
if code in self.responses:
|
||||
message = self.responses[code][0]
|
||||
else:
|
||||
message = ""
|
||||
if self.request_version != "HTTP/0.9":
|
||||
self.wfile.write("%s %d %s\r\n" % (self.protocol_version, code, message))
|
||||
|
||||
def handle_expect_100(self):
|
||||
logging.info("Received Expect-100")
|
||||
self.send_response_only(100)
|
||||
self.end_headers()
|
||||
return True
|
||||
|
||||
def do_POST(self):
|
||||
query = urlparse.urlparse(self.path).query
|
||||
logging.info("MultipartPreservingDataHandler POST ?" + query)
|
||||
if query == "uploads":
|
||||
self.send_response(200)
|
||||
post_data = r"""<?xml version="1.0" encoding="UTF-8"?>
|
||||
<hi><UploadId>TEST</UploadId></hi>""".encode()
|
||||
self.send_header("Content-length", str(len(post_data)))
|
||||
self.send_header("Content-type", "text/plain")
|
||||
self.end_headers()
|
||||
self.wfile.write(post_data)
|
||||
else:
|
||||
try:
|
||||
assert query == "uploadId=TEST"
|
||||
logging.info("Content-Length = " + self.headers.get("Content-Length"))
|
||||
post_data = self.rfile.read(int(self.headers.get("Content-Length")))
|
||||
root = xml.etree.ElementTree.fromstring(post_data)
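# The body is a CompleteMultipartUpload document: check the part numbering and reassemble the object from the ETag-keyed chunks saved by do_PUT.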
|
||||
assert root.tag == "CompleteMultipartUpload"
|
||||
assert len(root) > 1
|
||||
content = ""
|
||||
for i, part in enumerate(root):
|
||||
assert part.tag == "Part"
|
||||
assert len(part) == 2
|
||||
assert part[0].tag == "PartNumber"
|
||||
assert part[1].tag == "ETag"
|
||||
assert int(part[0].text) == i + 1
|
||||
content += self.server.storage["@"+part[1].text]
|
||||
data.setdefault("multipart_received_data", []).append(content)
|
||||
data["multipart_parts"] = len(root)
|
||||
self.send_response(200)
|
||||
self.send_header("Content-type", "text/plain")
|
||||
self.end_headers()
|
||||
logging.info("Sending 200")
|
||||
except:
|
||||
logging.error("Sending 500")
|
||||
self.send_response(500)
|
||||
self.finish()
|
||||
|
||||
def do_PUT(self):
|
||||
uid = uuid.uuid4()
|
||||
self.send_response(200)
|
||||
self.send_header("Content-type", "text/plain")
|
||||
self.send_header("ETag", str(uid))
|
||||
self.end_headers()
|
||||
query = urlparse.urlparse(self.path).query
|
||||
path = urlparse.urlparse(self.path).path
|
||||
logging.info("Content-Length = " + self.headers.get("Content-Length"))
|
||||
logging.info("PUT " + query)
|
||||
assert self.headers.get("Content-Length")
|
||||
assert self.headers["Expect"] == "100-continue"
|
||||
put_data = self.rfile.read()
|
||||
data.setdefault("received_data", []).append(put_data)
|
||||
logging.info("PUT to {}".format(path))
|
||||
self.server.storage["@"+str(uid)] = put_data
|
||||
self.finish()
|
||||
|
||||
def do_GET(self):
|
||||
path = urlparse.urlparse(self.path).path
|
||||
if path in self.server.storage:
|
||||
self.send_response(200)
|
||||
self.send_header("Content-type", "text/plain")
|
||||
self.send_header("Content-length", str(len(self.server.storage[path])))
|
||||
self.end_headers()
|
||||
self.wfile.write(self.server.storage[path])
|
||||
else:
|
||||
self.send_response(404)
|
||||
self.end_headers()
|
||||
self.finish()
|
||||
|
||||
|
||||
class RedirectingPreservingDataHandler(BaseHTTPRequestHandler):
|
||||
protocol_version = "HTTP/1.1"
|
||||
|
||||
def parse_request(self):
|
||||
result = BaseHTTPRequestHandler.parse_request(self)
|
||||
# Adaptation to Python 3.
|
||||
if sys.version_info.major == 2 and result == True:
|
||||
expect = self.headers.get("Expect", "")
|
||||
if (expect.lower() == "100-continue" and self.protocol_version >= "HTTP/1.1" and self.request_version >= "HTTP/1.1"):
|
||||
if not self.handle_expect_100():
|
||||
return False
|
||||
return result
|
||||
|
||||
def send_response_only(self, code, message=None):
|
||||
if message is None:
|
||||
if code in self.responses:
|
||||
message = self.responses[code][0]
|
||||
else:
|
||||
message = ""
|
||||
if self.request_version != "HTTP/0.9":
|
||||
self.wfile.write("%s %d %s\r\n" % (self.protocol_version, code, message))
|
||||
|
||||
def handle_expect_100(self):
|
||||
logging.info("Received Expect-100")
|
||||
return True
|
||||
|
||||
def do_POST(self):
|
||||
query = urlparse.urlparse(self.path).query
|
||||
if query:
|
||||
query = "?{}".format(query)
|
||||
self.send_response(307)
|
||||
self.send_header("Content-type", "text/xml")
|
||||
self.send_header("Location", "http://{host}:{port}/{bucket}/test.csv{query}".format(host=localhost, port=preserving_data_port, bucket=bucket, query=query))
|
||||
self.end_headers()
|
||||
self.wfile.write(r"""<?xml version="1.0" encoding="UTF-8"?>
|
||||
<Error>
|
||||
<Code>TemporaryRedirect</Code>
|
||||
<Message>Please re-send this request to the specified temporary endpoint.
|
||||
Continue to use the original request endpoint for future requests.</Message>
|
||||
<Endpoint>{host}:{port}</Endpoint>
|
||||
</Error>""".format(host=localhost, port=preserving_data_port).encode())
|
||||
self.finish()
|
||||
|
||||
def do_PUT(self):
|
||||
query = urlparse.urlparse(self.path).query
|
||||
if query:
|
||||
query = "?{}".format(query)
|
||||
self.send_response(307)
|
||||
self.send_header("Content-type", "text/xml")
|
||||
self.send_header("Location", "http://{host}:{port}/{bucket}/test.csv{query}".format(host=localhost, port=preserving_data_port, bucket=bucket, query=query))
|
||||
self.end_headers()
|
||||
self.wfile.write(r"""<?xml version="1.0" encoding="UTF-8"?>
|
||||
<Error>
|
||||
<Code>TemporaryRedirect</Code>
|
||||
<Message>Please re-send this request to the specified temporary endpoint.
|
||||
Continue to use the original request endpoint for future requests.</Message>
|
||||
<Endpoint>{host}:{port}</Endpoint>
|
||||
</Error>""".format(host=localhost, port=preserving_data_port).encode())
|
||||
self.finish()
|
||||
|
||||
|
||||
class CommunicationServerHandler(BaseHTTPRequestHandler):
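# Serves the shared `data` dict as JSON so the test harness can read back what the mock servers observed.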
|
||||
def do_GET(self):
|
||||
self.send_response(200)
|
||||
self.end_headers()
|
||||
self.wfile.write(json.dumps(data).encode())
|
||||
self.finish()
|
||||
|
||||
def do_PUT(self):
|
||||
self.send_response(200)
|
||||
self.end_headers()
|
||||
logging.info(self.rfile.read())
|
||||
self.finish()
|
||||
|
||||
|
||||
servers = []
|
||||
servers.append(HTTPServer((localhost, communication_port), CommunicationServerHandler))
|
||||
servers.append(HTTPServer((localhost, redirecting_to_http_port), RedirectingToHTTPHandler))
|
||||
servers.append(HTTPServer((localhost, preserving_data_port), PreservingDataHandler))
|
||||
servers[-1].storage = {}
|
||||
servers.append(HTTPServer((localhost, multipart_preserving_data_port), MultipartPreservingDataHandler))
|
||||
servers[-1].storage = {}
|
||||
servers.append(HTTPServer((localhost, simple_server_port), SimpleHTTPServerHandler))
|
||||
servers.append(HTTPServer((localhost, redirecting_preserving_data_port), RedirectingPreservingDataHandler))
|
||||
jobs = [ threading.Thread(target=server.serve_forever) for server in servers ]
|
||||
[ job.start() for job in jobs ]
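# Keep the mock servers alive for a fixed window while the integration test runs its queries against them.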
|
||||
|
||||
time.sleep(60) # Timeout
|
||||
|
||||
logging.info("Shutting down")
|
||||
[ server.shutdown() for server in servers ]
|
||||
logging.info("Joining threads")
|
||||
[ job.join() for job in jobs ]
|
||||
logging.info("Done")
|
39
dbms/tests/performance/joins_in_memory_pmj.xml
Normal file
@ -0,0 +1,39 @@
|
||||
<test>
|
||||
<type>loop</type>
|
||||
|
||||
<stop_conditions>
|
||||
<any_of>
|
||||
<iterations>10</iterations>
|
||||
</any_of>
|
||||
</stop_conditions>
|
||||
|
||||
<main_metric>
|
||||
<rows_per_second />
|
||||
</main_metric>
|
||||
|
||||
<create_query>CREATE TABLE ints (i64 Int64, i32 Int32, i16 Int16, i8 Int8) ENGINE = Memory</create_query>
|
||||
<create_query>SET partial_merge_join = 1</create_query>
|
||||
|
||||
<fill_query>INSERT INTO ints SELECT number AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(5000)</fill_query>
|
||||
<fill_query>INSERT INTO ints SELECT 10000 + number % 1000 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(5000)</fill_query>
|
||||
<fill_query>INSERT INTO ints SELECT 20000 + number % 100 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(5000)</fill_query>
|
||||
<fill_query>INSERT INTO ints SELECT 30000 + number % 10 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(5000)</fill_query>
|
||||
<fill_query>INSERT INTO ints SELECT 40000 + number % 1 AS i64, i64 AS i32, i64 AS i16, i64 AS i8 FROM numbers(5000)</fill_query>
|
||||
|
||||
<query tag='ANY LEFT'>SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 = 200042</query>
|
||||
<query tag='ANY LEFT KEY'>SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 200042</query>
|
||||
<query tag='ANY LEFT ON'>SELECT COUNT() FROM ints l ANY LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 200042</query>
|
||||
<query tag='ANY LEFT IN'>SELECT COUNT() FROM ints l ANY LEFT JOIN ints r USING i64 WHERE i32 IN(42, 100042, 200042, 300042, 400042)</query>
|
||||
|
||||
<query tag='INNER'>SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 = 200042</query>
|
||||
<query tag='INNER KEY'>SELECT COUNT() FROM ints l INNER JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 200042</query>
|
||||
<query tag='INNER ON'>SELECT COUNT() FROM ints l INNER JOIN ints r ON l.i64 = r.i64 WHERE i32 = 200042</query>
|
||||
<query tag='INNER IN'>SELECT COUNT() FROM ints l INNER JOIN ints r USING i64 WHERE i32 IN(42, 100042, 200042, 300042, 400042)</query>
|
||||
|
||||
<query tag='LEFT'>SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 = 200042</query>
|
||||
<query tag='LEFT KEY'>SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64,i32,i16,i8 WHERE i32 = 200042</query>
|
||||
<query tag='LEFT ON'>SELECT COUNT() FROM ints l LEFT JOIN ints r ON l.i64 = r.i64 WHERE i32 = 200042</query>
|
||||
<query tag='LEFT IN'>SELECT COUNT() FROM ints l LEFT JOIN ints r USING i64 WHERE i32 IN(42, 100042, 200042, 300042, 400042)</query>
|
||||
|
||||
<drop_query>DROP TABLE IF EXISTS ints</drop_query>
|
||||
</test>
|
206
dbms/tests/performance/merge_tree_huge_pk.xml
Normal file
@ -0,0 +1,206 @@
|
||||
<test>
|
||||
<type>loop</type>
|
||||
|
||||
<stop_conditions>
|
||||
<all_of>
|
||||
<iterations>10</iterations>
|
||||
<min_time_not_changing_for_ms>12000</min_time_not_changing_for_ms>
|
||||
</all_of>
|
||||
<any_of>
|
||||
<iterations>50</iterations>
|
||||
<total_time_ms>60000</total_time_ms>
|
||||
</any_of>
|
||||
</stop_conditions>
|
||||
|
||||
<create_query>
|
||||
CREATE TABLE huge_pk ENGINE = MergeTree ORDER BY (
|
||||
c001, c002, c003, c004, c005, c006, c007, c008, c009, c010, c011, c012, c013, c014, c015, c016, c017, c018, c019, c020,
|
||||
c021, c022, c023, c024, c025, c026, c027, c028, c029, c030, c031, c032, c033, c034, c035, c036, c037, c038, c039, c040,
|
||||
c041, c042, c043, c044, c045, c046, c047, c048, c049, c050, c051, c052, c053, c054, c055, c056, c057, c058, c059, c060,
|
||||
c061, c062, c063, c064, c065, c066, c067, c068, c069, c070, c071, c072, c073, c074, c075, c076, c077, c078, c079, c080,
|
||||
c081, c082, c083, c084, c085, c086, c087, c088, c089, c090, c091, c092, c093, c094, c095, c096, c097, c098, c099, c100,
|
||||
c101, c102, c103, c104, c105, c106, c107, c108, c109, c110, c111, c112, c113, c114, c115, c116, c117, c118, c119, c120,
|
||||
c121, c122, c123, c124, c125, c126, c127, c128, c129, c130, c131, c132, c133, c134, c135, c136, c137, c138, c139, c140,
|
||||
c141, c142, c143, c144, c145, c146, c147, c148, c149, c150, c151, c152, c153, c154, c155, c156, c157, c158, c159, c160,
|
||||
c161, c162, c163, c164, c165, c166, c167, c168, c169, c170, c171, c172, c173, c174, c175, c176, c177, c178, c179, c180,
|
||||
c181, c182, c183, c184, c185, c186, c187, c188, c189, c190, c191, c192, c193, c194, c195, c196, c197, c198, c199, c200,
|
||||
c201, c202, c203, c204, c205, c206, c207, c208, c209, c210, c211, c212, c213, c214, c215, c216, c217, c218, c219, c220,
|
||||
c221, c222, c223, c224, c225, c226, c227, c228, c229, c230, c231, c232, c233, c234, c235, c236, c237, c238, c239, c240,
|
||||
c241, c242, c243, c244, c245, c246, c247, c248, c249, c250, c251, c252, c253, c254, c255, c256, c257, c258, c259, c260,
|
||||
c261, c262, c263, c264, c265, c266, c267, c268, c269, c270, c271, c272, c273, c274, c275, c276, c277, c278, c279, c280,
|
||||
c281, c282, c283, c284, c285, c286, c287, c288, c289, c290, c291, c292, c293, c294, c295, c296, c297, c298, c299, c300,
|
||||
c301, c302, c303, c304, c305, c306, c307, c308, c309, c310, c311, c312, c313, c314, c315, c316, c317, c318, c319, c320,
|
||||
c321, c322, c323, c324, c325, c326, c327, c328, c329, c330, c331, c332, c333, c334, c335, c336, c337, c338, c339, c340,
|
||||
c341, c342, c343, c344, c345, c346, c347, c348, c349, c350, c351, c352, c353, c354, c355, c356, c357, c358, c359, c360,
|
||||
c361, c362, c363, c364, c365, c366, c367, c368, c369, c370, c371, c372, c373, c374, c375, c376, c377, c378, c379, c380,
|
||||
c381, c382, c383, c384, c385, c386, c387, c388, c389, c390, c391, c392, c393, c394, c395, c396, c397, c398, c399, c400,
|
||||
c401, c402, c403, c404, c405, c406, c407, c408, c409, c410, c411, c412, c413, c414, c415, c416, c417, c418, c419, c420,
|
||||
c421, c422, c423, c424, c425, c426, c427, c428, c429, c430, c431, c432, c433, c434, c435, c436, c437, c438, c439, c440,
|
||||
c441, c442, c443, c444, c445, c446, c447, c448, c449, c450, c451, c452, c453, c454, c455, c456, c457, c458, c459, c460,
|
||||
c461, c462, c463, c464, c465, c466, c467, c468, c469, c470, c471, c472, c473, c474, c475, c476, c477, c478, c479, c480,
|
||||
c481, c482, c483, c484, c485, c486, c487, c488, c489, c490, c491, c492, c493, c494, c495, c496, c497, c498, c499, c500,
|
||||
c501, c502, c503, c504, c505, c506, c507, c508, c509, c510, c511, c512, c513, c514, c515, c516, c517, c518, c519, c520,
|
||||
c521, c522, c523, c524, c525, c526, c527, c528, c529, c530, c531, c532, c533, c534, c535, c536, c537, c538, c539, c540,
|
||||
c541, c542, c543, c544, c545, c546, c547, c548, c549, c550, c551, c552, c553, c554, c555, c556, c557, c558, c559, c560,
|
||||
c561, c562, c563, c564, c565, c566, c567, c568, c569, c570, c571, c572, c573, c574, c575, c576, c577, c578, c579, c580,
|
||||
c581, c582, c583, c584, c585, c586, c587, c588, c589, c590, c591, c592, c593, c594, c595, c596, c597, c598, c599, c600,
|
||||
c601, c602, c603, c604, c605, c606, c607, c608, c609, c610, c611, c612, c613, c614, c615, c616, c617, c618, c619, c620,
|
||||
c621, c622, c623, c624, c625, c626, c627, c628, c629, c630, c631, c632, c633, c634, c635, c636, c637, c638, c639, c640,
|
||||
c641, c642, c643, c644, c645, c646, c647, c648, c649, c650, c651, c652, c653, c654, c655, c656, c657, c658, c659, c660,
|
||||
c661, c662, c663, c664, c665, c666, c667, c668, c669, c670, c671, c672, c673, c674, c675, c676, c677, c678, c679, c680,
|
||||
c681, c682, c683, c684, c685, c686, c687, c688, c689, c690, c691, c692, c693, c694, c695, c696, c697, c698, c699, c700)
|
||||
AS SELECT
|
||||
rand64( 1) % 5 as c001, rand64( 2) % 5 as c002, rand64( 3) % 5 as c003, rand64( 4) % 5 as c004, rand64( 5) % 5 as c005,
|
||||
rand64( 6) % 5 as c006, rand64( 7) % 5 as c007, rand64( 8) % 5 as c008, rand64( 9) % 5 as c009, rand64( 10) % 5 as c010,
|
||||
rand64( 11) % 5 as c011, rand64( 12) % 5 as c012, rand64( 13) % 5 as c013, rand64( 14) % 5 as c014, rand64( 15) % 5 as c015,
|
||||
rand64( 16) % 5 as c016, rand64( 17) % 5 as c017, rand64( 18) % 5 as c018, rand64( 19) % 5 as c019, rand64( 20) % 5 as c020,
|
||||
rand64( 21) % 5 as c021, rand64( 22) % 5 as c022, rand64( 23) % 5 as c023, rand64( 24) % 5 as c024, rand64( 25) % 5 as c025,
|
||||
rand64( 26) % 5 as c026, rand64( 27) % 5 as c027, rand64( 28) % 5 as c028, rand64( 29) % 5 as c029, rand64( 30) % 5 as c030,
|
||||
rand64( 31) % 5 as c031, rand64( 32) % 5 as c032, rand64( 33) % 5 as c033, rand64( 34) % 5 as c034, rand64( 35) % 5 as c035,
|
||||
rand64( 36) % 5 as c036, rand64( 37) % 5 as c037, rand64( 38) % 5 as c038, rand64( 39) % 5 as c039, rand64( 40) % 5 as c040,
|
||||
rand64( 41) % 5 as c041, rand64( 42) % 5 as c042, rand64( 43) % 5 as c043, rand64( 44) % 5 as c044, rand64( 45) % 5 as c045,
|
||||
rand64( 46) % 5 as c046, rand64( 47) % 5 as c047, rand64( 48) % 5 as c048, rand64( 49) % 5 as c049, rand64( 50) % 5 as c050,
|
||||
rand64( 51) % 5 as c051, rand64( 52) % 5 as c052, rand64( 53) % 5 as c053, rand64( 54) % 5 as c054, rand64( 55) % 5 as c055,
|
||||
rand64( 56) % 5 as c056, rand64( 57) % 5 as c057, rand64( 58) % 5 as c058, rand64( 59) % 5 as c059, rand64( 60) % 5 as c060,
|
||||
rand64( 61) % 5 as c061, rand64( 62) % 5 as c062, rand64( 63) % 5 as c063, rand64( 64) % 5 as c064, rand64( 65) % 5 as c065,
|
||||
rand64( 66) % 5 as c066, rand64( 67) % 5 as c067, rand64( 68) % 5 as c068, rand64( 69) % 5 as c069, rand64( 70) % 5 as c070,
|
||||
rand64( 71) % 5 as c071, rand64( 72) % 5 as c072, rand64( 73) % 5 as c073, rand64( 74) % 5 as c074, rand64( 75) % 5 as c075,
|
||||
rand64( 76) % 5 as c076, rand64( 77) % 5 as c077, rand64( 78) % 5 as c078, rand64( 79) % 5 as c079, rand64( 80) % 5 as c080,
|
||||
rand64( 81) % 5 as c081, rand64( 82) % 5 as c082, rand64( 83) % 5 as c083, rand64( 84) % 5 as c084, rand64( 85) % 5 as c085,
|
||||
rand64( 86) % 5 as c086, rand64( 87) % 5 as c087, rand64( 88) % 5 as c088, rand64( 89) % 5 as c089, rand64( 90) % 5 as c090,
|
||||
rand64( 91) % 5 as c091, rand64( 92) % 5 as c092, rand64( 93) % 5 as c093, rand64( 94) % 5 as c094, rand64( 95) % 5 as c095,
|
||||
rand64( 96) % 5 as c096, rand64( 97) % 5 as c097, rand64( 98) % 5 as c098, rand64( 99) % 5 as c099, rand64(100) % 5 as c100,
|
||||
rand64(101) % 5 as c101, rand64(102) % 5 as c102, rand64(103) % 5 as c103, rand64(104) % 5 as c104, rand64(105) % 5 as c105,
|
||||
rand64(106) % 5 as c106, rand64(107) % 5 as c107, rand64(108) % 5 as c108, rand64(109) % 5 as c109, rand64(110) % 5 as c110,
|
||||
rand64(111) % 5 as c111, rand64(112) % 5 as c112, rand64(113) % 5 as c113, rand64(114) % 5 as c114, rand64(115) % 5 as c115,
|
||||
rand64(116) % 5 as c116, rand64(117) % 5 as c117, rand64(118) % 5 as c118, rand64(119) % 5 as c119, rand64(120) % 5 as c120,
|
||||
rand64(121) % 5 as c121, rand64(122) % 5 as c122, rand64(123) % 5 as c123, rand64(124) % 5 as c124, rand64(125) % 5 as c125,
|
||||
rand64(126) % 5 as c126, rand64(127) % 5 as c127, rand64(128) % 5 as c128, rand64(129) % 5 as c129, rand64(130) % 5 as c130,
|
||||
rand64(131) % 5 as c131, rand64(132) % 5 as c132, rand64(133) % 5 as c133, rand64(134) % 5 as c134, rand64(135) % 5 as c135,
|
||||
rand64(136) % 5 as c136, rand64(137) % 5 as c137, rand64(138) % 5 as c138, rand64(139) % 5 as c139, rand64(140) % 5 as c140,
|
||||
rand64(141) % 5 as c141, rand64(142) % 5 as c142, rand64(143) % 5 as c143, rand64(144) % 5 as c144, rand64(145) % 5 as c145,
|
||||
rand64(146) % 5 as c146, rand64(147) % 5 as c147, rand64(148) % 5 as c148, rand64(149) % 5 as c149, rand64(150) % 5 as c150,
|
||||
rand64(151) % 5 as c151, rand64(152) % 5 as c152, rand64(153) % 5 as c153, rand64(154) % 5 as c154, rand64(155) % 5 as c155,
|
||||
rand64(156) % 5 as c156, rand64(157) % 5 as c157, rand64(158) % 5 as c158, rand64(159) % 5 as c159, rand64(160) % 5 as c160,
|
||||
rand64(161) % 5 as c161, rand64(162) % 5 as c162, rand64(163) % 5 as c163, rand64(164) % 5 as c164, rand64(165) % 5 as c165,
|
||||
rand64(166) % 5 as c166, rand64(167) % 5 as c167, rand64(168) % 5 as c168, rand64(169) % 5 as c169, rand64(170) % 5 as c170,
|
||||
rand64(171) % 5 as c171, rand64(172) % 5 as c172, rand64(173) % 5 as c173, rand64(174) % 5 as c174, rand64(175) % 5 as c175,
|
||||
rand64(176) % 5 as c176, rand64(177) % 5 as c177, rand64(178) % 5 as c178, rand64(179) % 5 as c179, rand64(180) % 5 as c180,
|
||||
rand64(181) % 5 as c181, rand64(182) % 5 as c182, rand64(183) % 5 as c183, rand64(184) % 5 as c184, rand64(185) % 5 as c185,
|
||||
rand64(186) % 5 as c186, rand64(187) % 5 as c187, rand64(188) % 5 as c188, rand64(189) % 5 as c189, rand64(190) % 5 as c190,
|
||||
rand64(191) % 5 as c191, rand64(192) % 5 as c192, rand64(193) % 5 as c193, rand64(194) % 5 as c194, rand64(195) % 5 as c195,
|
||||
rand64(196) % 5 as c196, rand64(197) % 5 as c197, rand64(198) % 5 as c198, rand64(199) % 5 as c199, rand64(200) % 5 as c200,
|
||||
rand64(201) % 5 as c201, rand64(202) % 5 as c202, rand64(203) % 5 as c203, rand64(204) % 5 as c204, rand64(205) % 5 as c205,
|
||||
rand64(206) % 5 as c206, rand64(207) % 5 as c207, rand64(208) % 5 as c208, rand64(209) % 5 as c209, rand64(210) % 5 as c210,
|
||||
rand64(211) % 5 as c211, rand64(212) % 5 as c212, rand64(213) % 5 as c213, rand64(214) % 5 as c214, rand64(215) % 5 as c215,
|
||||
rand64(216) % 5 as c216, rand64(217) % 5 as c217, rand64(218) % 5 as c218, rand64(219) % 5 as c219, rand64(220) % 5 as c220,
|
||||
rand64(221) % 5 as c221, rand64(222) % 5 as c222, rand64(223) % 5 as c223, rand64(224) % 5 as c224, rand64(225) % 5 as c225,
|
||||
rand64(226) % 5 as c226, rand64(227) % 5 as c227, rand64(228) % 5 as c228, rand64(229) % 5 as c229, rand64(230) % 5 as c230,
|
||||
rand64(231) % 5 as c231, rand64(232) % 5 as c232, rand64(233) % 5 as c233, rand64(234) % 5 as c234, rand64(235) % 5 as c235,
|
||||
rand64(236) % 5 as c236, rand64(237) % 5 as c237, rand64(238) % 5 as c238, rand64(239) % 5 as c239, rand64(240) % 5 as c240,
|
||||
rand64(241) % 5 as c241, rand64(242) % 5 as c242, rand64(243) % 5 as c243, rand64(244) % 5 as c244, rand64(245) % 5 as c245,
|
||||
rand64(246) % 5 as c246, rand64(247) % 5 as c247, rand64(248) % 5 as c248, rand64(249) % 5 as c249, rand64(250) % 5 as c250,
|
||||
rand64(251) % 5 as c251, rand64(252) % 5 as c252, rand64(253) % 5 as c253, rand64(254) % 5 as c254, rand64(255) % 5 as c255,
|
||||
rand64(256) % 5 as c256, rand64(257) % 5 as c257, rand64(258) % 5 as c258, rand64(259) % 5 as c259, rand64(260) % 5 as c260,
|
||||
rand64(261) % 5 as c261, rand64(262) % 5 as c262, rand64(263) % 5 as c263, rand64(264) % 5 as c264, rand64(265) % 5 as c265,
|
||||
rand64(266) % 5 as c266, rand64(267) % 5 as c267, rand64(268) % 5 as c268, rand64(269) % 5 as c269, rand64(270) % 5 as c270,
|
||||
rand64(271) % 5 as c271, rand64(272) % 5 as c272, rand64(273) % 5 as c273, rand64(274) % 5 as c274, rand64(275) % 5 as c275,
|
||||
rand64(276) % 5 as c276, rand64(277) % 5 as c277, rand64(278) % 5 as c278, rand64(279) % 5 as c279, rand64(280) % 5 as c280,
|
||||
rand64(281) % 5 as c281, rand64(282) % 5 as c282, rand64(283) % 5 as c283, rand64(284) % 5 as c284, rand64(285) % 5 as c285,
|
||||
rand64(286) % 5 as c286, rand64(287) % 5 as c287, rand64(288) % 5 as c288, rand64(289) % 5 as c289, rand64(290) % 5 as c290,
|
||||
rand64(291) % 5 as c291, rand64(292) % 5 as c292, rand64(293) % 5 as c293, rand64(294) % 5 as c294, rand64(295) % 5 as c295,
|
||||
rand64(296) % 5 as c296, rand64(297) % 5 as c297, rand64(298) % 5 as c298, rand64(299) % 5 as c299, rand64(300) % 5 as c300,
|
||||
rand64(301) % 5 as c301, rand64(302) % 5 as c302, rand64(303) % 5 as c303, rand64(304) % 5 as c304, rand64(305) % 5 as c305,
|
||||
rand64(306) % 5 as c306, rand64(307) % 5 as c307, rand64(308) % 5 as c308, rand64(309) % 5 as c309, rand64(310) % 5 as c310,
|
||||
rand64(311) % 5 as c311, rand64(312) % 5 as c312, rand64(313) % 5 as c313, rand64(314) % 5 as c314, rand64(315) % 5 as c315,
|
||||
rand64(316) % 5 as c316, rand64(317) % 5 as c317, rand64(318) % 5 as c318, rand64(319) % 5 as c319, rand64(320) % 5 as c320,
|
||||
rand64(321) % 5 as c321, rand64(322) % 5 as c322, rand64(323) % 5 as c323, rand64(324) % 5 as c324, rand64(325) % 5 as c325,
|
||||
rand64(326) % 5 as c326, rand64(327) % 5 as c327, rand64(328) % 5 as c328, rand64(329) % 5 as c329, rand64(330) % 5 as c330,
|
||||
rand64(331) % 5 as c331, rand64(332) % 5 as c332, rand64(333) % 5 as c333, rand64(334) % 5 as c334, rand64(335) % 5 as c335,
|
||||
rand64(336) % 5 as c336, rand64(337) % 5 as c337, rand64(338) % 5 as c338, rand64(339) % 5 as c339, rand64(340) % 5 as c340,
|
||||
rand64(341) % 5 as c341, rand64(342) % 5 as c342, rand64(343) % 5 as c343, rand64(344) % 5 as c344, rand64(345) % 5 as c345,
|
||||
rand64(346) % 5 as c346, rand64(347) % 5 as c347, rand64(348) % 5 as c348, rand64(349) % 5 as c349, rand64(350) % 5 as c350,
|
||||
rand64(351) % 5 as c351, rand64(352) % 5 as c352, rand64(353) % 5 as c353, rand64(354) % 5 as c354, rand64(355) % 5 as c355,
|
||||
rand64(356) % 5 as c356, rand64(357) % 5 as c357, rand64(358) % 5 as c358, rand64(359) % 5 as c359, rand64(360) % 5 as c360,
|
||||
rand64(361) % 5 as c361, rand64(362) % 5 as c362, rand64(363) % 5 as c363, rand64(364) % 5 as c364, rand64(365) % 5 as c365,
|
||||
rand64(366) % 5 as c366, rand64(367) % 5 as c367, rand64(368) % 5 as c368, rand64(369) % 5 as c369, rand64(370) % 5 as c370,
|
||||
rand64(371) % 5 as c371, rand64(372) % 5 as c372, rand64(373) % 5 as c373, rand64(374) % 5 as c374, rand64(375) % 5 as c375,
|
||||
rand64(376) % 5 as c376, rand64(377) % 5 as c377, rand64(378) % 5 as c378, rand64(379) % 5 as c379, rand64(380) % 5 as c380,
|
||||
rand64(381) % 5 as c381, rand64(382) % 5 as c382, rand64(383) % 5 as c383, rand64(384) % 5 as c384, rand64(385) % 5 as c385,
|
||||
rand64(386) % 5 as c386, rand64(387) % 5 as c387, rand64(388) % 5 as c388, rand64(389) % 5 as c389, rand64(390) % 5 as c390,
|
||||
rand64(391) % 5 as c391, rand64(392) % 5 as c392, rand64(393) % 5 as c393, rand64(394) % 5 as c394, rand64(395) % 5 as c395,
|
||||
rand64(396) % 5 as c396, rand64(397) % 5 as c397, rand64(398) % 5 as c398, rand64(399) % 5 as c399, rand64(400) % 5 as c400,
|
||||
rand64(401) % 5 as c401, rand64(402) % 5 as c402, rand64(403) % 5 as c403, rand64(404) % 5 as c404, rand64(405) % 5 as c405,
|
||||
rand64(406) % 5 as c406, rand64(407) % 5 as c407, rand64(408) % 5 as c408, rand64(409) % 5 as c409, rand64(410) % 5 as c410,
|
||||
rand64(411) % 5 as c411, rand64(412) % 5 as c412, rand64(413) % 5 as c413, rand64(414) % 5 as c414, rand64(415) % 5 as c415,
|
||||
rand64(416) % 5 as c416, rand64(417) % 5 as c417, rand64(418) % 5 as c418, rand64(419) % 5 as c419, rand64(420) % 5 as c420,
|
||||
rand64(421) % 5 as c421, rand64(422) % 5 as c422, rand64(423) % 5 as c423, rand64(424) % 5 as c424, rand64(425) % 5 as c425,
|
||||
rand64(426) % 5 as c426, rand64(427) % 5 as c427, rand64(428) % 5 as c428, rand64(429) % 5 as c429, rand64(430) % 5 as c430,
|
||||
rand64(431) % 5 as c431, rand64(432) % 5 as c432, rand64(433) % 5 as c433, rand64(434) % 5 as c434, rand64(435) % 5 as c435,
|
||||
rand64(436) % 5 as c436, rand64(437) % 5 as c437, rand64(438) % 5 as c438, rand64(439) % 5 as c439, rand64(440) % 5 as c440,
|
||||
rand64(441) % 5 as c441, rand64(442) % 5 as c442, rand64(443) % 5 as c443, rand64(444) % 5 as c444, rand64(445) % 5 as c445,
|
||||
rand64(446) % 5 as c446, rand64(447) % 5 as c447, rand64(448) % 5 as c448, rand64(449) % 5 as c449, rand64(450) % 5 as c450,
|
||||
rand64(451) % 5 as c451, rand64(452) % 5 as c452, rand64(453) % 5 as c453, rand64(454) % 5 as c454, rand64(455) % 5 as c455,
|
||||
rand64(456) % 5 as c456, rand64(457) % 5 as c457, rand64(458) % 5 as c458, rand64(459) % 5 as c459, rand64(460) % 5 as c460,
|
||||
rand64(461) % 5 as c461, rand64(462) % 5 as c462, rand64(463) % 5 as c463, rand64(464) % 5 as c464, rand64(465) % 5 as c465,
|
||||
rand64(466) % 5 as c466, rand64(467) % 5 as c467, rand64(468) % 5 as c468, rand64(469) % 5 as c469, rand64(470) % 5 as c470,
|
||||
rand64(471) % 5 as c471, rand64(472) % 5 as c472, rand64(473) % 5 as c473, rand64(474) % 5 as c474, rand64(475) % 5 as c475,
|
||||
rand64(476) % 5 as c476, rand64(477) % 5 as c477, rand64(478) % 5 as c478, rand64(479) % 5 as c479, rand64(480) % 5 as c480,
|
||||
rand64(481) % 5 as c481, rand64(482) % 5 as c482, rand64(483) % 5 as c483, rand64(484) % 5 as c484, rand64(485) % 5 as c485,
|
||||
rand64(486) % 5 as c486, rand64(487) % 5 as c487, rand64(488) % 5 as c488, rand64(489) % 5 as c489, rand64(490) % 5 as c490,
|
||||
rand64(491) % 5 as c491, rand64(492) % 5 as c492, rand64(493) % 5 as c493, rand64(494) % 5 as c494, rand64(495) % 5 as c495,
|
||||
rand64(496) % 5 as c496, rand64(497) % 5 as c497, rand64(498) % 5 as c498, rand64(499) % 5 as c499, rand64(500) % 5 as c500,
|
||||
rand64(501) % 5 as c501, rand64(502) % 5 as c502, rand64(503) % 5 as c503, rand64(504) % 5 as c504, rand64(505) % 5 as c505,
|
||||
rand64(506) % 5 as c506, rand64(507) % 5 as c507, rand64(508) % 5 as c508, rand64(509) % 5 as c509, rand64(510) % 5 as c510,
|
||||
rand64(511) % 5 as c511, rand64(512) % 5 as c512, rand64(513) % 5 as c513, rand64(514) % 5 as c514, rand64(515) % 5 as c515,
|
||||
rand64(516) % 5 as c516, rand64(517) % 5 as c517, rand64(518) % 5 as c518, rand64(519) % 5 as c519, rand64(520) % 5 as c520,
|
||||
rand64(521) % 5 as c521, rand64(522) % 5 as c522, rand64(523) % 5 as c523, rand64(524) % 5 as c524, rand64(525) % 5 as c525,
|
||||
rand64(526) % 5 as c526, rand64(527) % 5 as c527, rand64(528) % 5 as c528, rand64(529) % 5 as c529, rand64(530) % 5 as c530,
|
||||
rand64(531) % 5 as c531, rand64(532) % 5 as c532, rand64(533) % 5 as c533, rand64(534) % 5 as c534, rand64(535) % 5 as c535,
|
||||
rand64(536) % 5 as c536, rand64(537) % 5 as c537, rand64(538) % 5 as c538, rand64(539) % 5 as c539, rand64(540) % 5 as c540,
|
||||
rand64(541) % 5 as c541, rand64(542) % 5 as c542, rand64(543) % 5 as c543, rand64(544) % 5 as c544, rand64(545) % 5 as c545,
|
||||
rand64(546) % 5 as c546, rand64(547) % 5 as c547, rand64(548) % 5 as c548, rand64(549) % 5 as c549, rand64(550) % 5 as c550,
|
||||
rand64(551) % 5 as c551, rand64(552) % 5 as c552, rand64(553) % 5 as c553, rand64(554) % 5 as c554, rand64(555) % 5 as c555,
|
||||
rand64(556) % 5 as c556, rand64(557) % 5 as c557, rand64(558) % 5 as c558, rand64(559) % 5 as c559, rand64(560) % 5 as c560,
|
||||
rand64(561) % 5 as c561, rand64(562) % 5 as c562, rand64(563) % 5 as c563, rand64(564) % 5 as c564, rand64(565) % 5 as c565,
|
||||
rand64(566) % 5 as c566, rand64(567) % 5 as c567, rand64(568) % 5 as c568, rand64(569) % 5 as c569, rand64(570) % 5 as c570,
|
||||
rand64(571) % 5 as c571, rand64(572) % 5 as c572, rand64(573) % 5 as c573, rand64(574) % 5 as c574, rand64(575) % 5 as c575,
|
||||
rand64(576) % 5 as c576, rand64(577) % 5 as c577, rand64(578) % 5 as c578, rand64(579) % 5 as c579, rand64(580) % 5 as c580,
|
||||
rand64(581) % 5 as c581, rand64(582) % 5 as c582, rand64(583) % 5 as c583, rand64(584) % 5 as c584, rand64(585) % 5 as c585,
|
||||
rand64(586) % 5 as c586, rand64(587) % 5 as c587, rand64(588) % 5 as c588, rand64(589) % 5 as c589, rand64(590) % 5 as c590,
|
||||
rand64(591) % 5 as c591, rand64(592) % 5 as c592, rand64(593) % 5 as c593, rand64(594) % 5 as c594, rand64(595) % 5 as c595,
|
||||
rand64(596) % 5 as c596, rand64(597) % 5 as c597, rand64(598) % 5 as c598, rand64(599) % 5 as c599, rand64(600) % 5 as c600,
|
||||
rand64(601) % 5 as c601, rand64(602) % 5 as c602, rand64(603) % 5 as c603, rand64(604) % 5 as c604, rand64(605) % 5 as c605,
|
||||
rand64(606) % 5 as c606, rand64(607) % 5 as c607, rand64(608) % 5 as c608, rand64(609) % 5 as c609, rand64(610) % 5 as c610,
|
||||
rand64(611) % 5 as c611, rand64(612) % 5 as c612, rand64(613) % 5 as c613, rand64(614) % 5 as c614, rand64(615) % 5 as c615,
|
||||
rand64(616) % 5 as c616, rand64(617) % 5 as c617, rand64(618) % 5 as c618, rand64(619) % 5 as c619, rand64(620) % 5 as c620,
|
||||
rand64(621) % 5 as c621, rand64(622) % 5 as c622, rand64(623) % 5 as c623, rand64(624) % 5 as c624, rand64(625) % 5 as c625,
|
||||
rand64(626) % 5 as c626, rand64(627) % 5 as c627, rand64(628) % 5 as c628, rand64(629) % 5 as c629, rand64(630) % 5 as c630,
|
||||
rand64(631) % 5 as c631, rand64(632) % 5 as c632, rand64(633) % 5 as c633, rand64(634) % 5 as c634, rand64(635) % 5 as c635,
|
||||
rand64(636) % 5 as c636, rand64(637) % 5 as c637, rand64(638) % 5 as c638, rand64(639) % 5 as c639, rand64(640) % 5 as c640,
|
||||
rand64(641) % 5 as c641, rand64(642) % 5 as c642, rand64(643) % 5 as c643, rand64(644) % 5 as c644, rand64(645) % 5 as c645,
|
||||
rand64(646) % 5 as c646, rand64(647) % 5 as c647, rand64(648) % 5 as c648, rand64(649) % 5 as c649, rand64(650) % 5 as c650,
|
||||
rand64(651) % 5 as c651, rand64(652) % 5 as c652, rand64(653) % 5 as c653, rand64(654) % 5 as c654, rand64(655) % 5 as c655,
|
||||
rand64(656) % 5 as c656, rand64(657) % 5 as c657, rand64(658) % 5 as c658, rand64(659) % 5 as c659, rand64(660) % 5 as c660,
|
||||
rand64(661) % 5 as c661, rand64(662) % 5 as c662, rand64(663) % 5 as c663, rand64(664) % 5 as c664, rand64(665) % 5 as c665,
|
||||
rand64(666) % 5 as c666, rand64(667) % 5 as c667, rand64(668) % 5 as c668, rand64(669) % 5 as c669, rand64(670) % 5 as c670,
|
||||
rand64(671) % 5 as c671, rand64(672) % 5 as c672, rand64(673) % 5 as c673, rand64(674) % 5 as c674, rand64(675) % 5 as c675,
|
||||
rand64(676) % 5 as c676, rand64(677) % 5 as c677, rand64(678) % 5 as c678, rand64(679) % 5 as c679, rand64(680) % 5 as c680,
|
||||
rand64(681) % 5 as c681, rand64(682) % 5 as c682, rand64(683) % 5 as c683, rand64(684) % 5 as c684, rand64(685) % 5 as c685,
|
||||
rand64(686) % 5 as c686, rand64(687) % 5 as c687, rand64(688) % 5 as c688, rand64(689) % 5 as c689, rand64(690) % 5 as c690,
|
||||
rand64(691) % 5 as c691, rand64(692) % 5 as c692, rand64(693) % 5 as c693, rand64(694) % 5 as c694, rand64(695) % 5 as c695,
|
||||
rand64(696) % 5 as c696, rand64(697) % 5 as c697, rand64(698) % 5 as c698, rand64(699) % 5 as c699, rand64(700) % 5 as c700,
|
||||
rand64(701) % 5 as c701
|
||||
FROM system.numbers
|
||||
LIMIT 1048576
|
||||
</create_query>
|
||||
|
||||
<!-- some queries with PK conditions -->
|
||||
<query><![CDATA[SELECT count() FROM huge_pk WHERE c001 > 10]]></query>
|
||||
<query><![CDATA[SELECT count() FROM huge_pk WHERE c001 in (2,3) and c400 in (10,0) and c100 < 2]]></query>
|
||||
<query><![CDATA[SELECT count() FROM huge_pk WHERE c700 > 10]]></query>
|
||||
<!-- column c701 is not in PK-->
|
||||
<query><![CDATA[SELECT count() FROM huge_pk WHERE c701 > 10]]></query>
|
||||
|
||||
<drop_query>DROP TABLE IF EXISTS huge_pk</drop_query>
|
||||
</test>
|
@ -6,4 +6,5 @@ ALL LEFT JOIN
|
||||
(
|
||||
SELECT number % 2 AS k1, number % 6 AS k2, number AS right FROM system.numbers LIMIT 10
|
||||
) js2
|
||||
USING k1, k2;
|
||||
USING k1, k2
|
||||
ORDER BY left, right;
|
||||
|
@ -6,4 +6,5 @@ ALL LEFT JOIN
|
||||
(
|
||||
SELECT number % 2 AS k1, toString(number % 6) AS k2, number AS right FROM system.numbers LIMIT 10
|
||||
) js2
|
||||
USING k1, k2;
|
||||
USING k1, k2
|
||||
ORDER BY left, right;
|
||||
|
@ -48,10 +48,10 @@
|
||||
{
|
||||
"i0": "0",
|
||||
"u0": "0",
|
||||
"ip": "0",
|
||||
"in": "0",
|
||||
"up": "0",
|
||||
"arr": [],
|
||||
"ip": "9223372036854775807",
|
||||
"in": "-9223372036854775808",
|
||||
"up": "18446744073709551615",
|
||||
"arr": ["0"],
|
||||
"tuple": ["0","0"]
|
||||
},
|
||||
|
||||
@ -119,7 +119,7 @@
|
||||
["0", "0", "9223372036854775807", "-9223372036854775808", "18446744073709551615", ["0"], ["0","0"]]
|
||||
],
|
||||
|
||||
"totals": ["0","0","0","0","0",[],["0","0"]],
|
||||
"totals": ["0","0","9223372036854775807","-9223372036854775808","18446744073709551615",["0"],["0","0"]],
|
||||
|
||||
"extremes":
|
||||
{
|
||||
@ -180,10 +180,10 @@
|
||||
{
|
||||
"i0": 0,
|
||||
"u0": 0,
|
||||
"ip": 0,
|
||||
"in": 0,
|
||||
"up": 0,
|
||||
"arr": [],
|
||||
"ip": 9223372036854775807,
|
||||
"in": -9223372036854775808,
|
||||
"up": 18446744073709551615,
|
||||
"arr": [0],
|
||||
"tuple": [0,0]
|
||||
},
|
||||
|
||||
@ -251,7 +251,7 @@
|
||||
[0, 0, 9223372036854775807, -9223372036854775808, 18446744073709551615, [0], [0,0]]
|
||||
],
|
||||
|
||||
"totals": [0,0,0,0,0,[],[0,0]],
|
||||
"totals": [0,0,9223372036854775807,-9223372036854775808,18446744073709551615,[0],[0,0]],
|
||||
|
||||
"extremes":
|
||||
{
|
||||
|
@ -2,11 +2,11 @@ SET output_format_write_statistics = 0;
|
||||
SET extremes = 1;
|
||||
|
||||
SET output_format_json_quote_64bit_integers = 1;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple WITH TOTALS FORMAT JSON;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple WITH TOTALS FORMAT JSONCompact;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple WITH TOTALS FORMAT JSONEachRow;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple GROUP BY i0, u0, ip, in, up, arr, tuple WITH TOTALS FORMAT JSON;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple GROUP BY i0, u0, ip, in, up, arr, tuple WITH TOTALS FORMAT JSONCompact;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple GROUP BY i0, u0, ip, in, up, arr, tuple WITH TOTALS FORMAT JSONEachRow;
|
||||
|
||||
SET output_format_json_quote_64bit_integers = 0;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple WITH TOTALS FORMAT JSON;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple WITH TOTALS FORMAT JSONCompact;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple WITH TOTALS FORMAT JSONEachRow;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple GROUP BY i0, u0, ip, in, up, arr, tuple WITH TOTALS FORMAT JSON;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple GROUP BY i0, u0, ip, in, up, arr, tuple WITH TOTALS FORMAT JSONCompact;
|
||||
SELECT toInt64(0) as i0, toUInt64(0) as u0, toInt64(9223372036854775807) as ip, toInt64(-9223372036854775808) as in, toUInt64(18446744073709551615) as up, [toInt64(0)] as arr, (toUInt64(0), toUInt64(0)) as tuple GROUP BY i0, u0, ip, in, up, arr, tuple WITH TOTALS FORMAT JSONEachRow;
|
||||
|
@ -67,6 +67,14 @@
|
||||
[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33]
|
||||
[30,31,32,33,100]
|
||||
[100]
|
||||
[]
|
||||
[]
|
||||
[1,5,7,9]
|
||||
[]
|
||||
[5,7,9]
|
||||
[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]
|
||||
[30,31,32,33,100,200,500]
|
||||
[100,200,500]
|
||||
4294967295
|
||||
4294967295
|
||||
4294967295
|
||||
|
@ -212,6 +212,25 @@ select bitmapToArray(bitmapSubsetInRange(bitmapBuild([
|
||||
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,
|
||||
100,200,500]), toUInt32(100), toUInt32(200)));
|
||||
|
||||
-- bitmapSubsetLimit:
|
||||
---- Empty
|
||||
SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild(emptyArrayUInt32()), toUInt32(0), toUInt32(10)));
|
||||
SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild(emptyArrayUInt16()), toUInt32(0), toUInt32(10)));
|
||||
---- Small
|
||||
select bitmapToArray(bitmapSubsetLimit(bitmapBuild([1,5,7,9]), toUInt32(0), toUInt32(4)));
|
||||
select bitmapToArray(bitmapSubsetLimit(bitmapBuild([1,5,7,9]), toUInt32(10), toUInt32(10)));
|
||||
select bitmapToArray(bitmapSubsetLimit(bitmapBuild([1,5,7,9]), toUInt32(3), toUInt32(7)));
|
||||
---- Large
|
||||
select bitmapToArray(bitmapSubsetLimit(bitmapBuild([
|
||||
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,
|
||||
100,200,500]), toUInt32(0), toUInt32(100)));
|
||||
select bitmapToArray(bitmapSubsetLimit(bitmapBuild([
|
||||
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,
|
||||
100,200,500]), toUInt32(30), toUInt32(200)));
|
||||
select bitmapToArray(bitmapSubsetLimit(bitmapBuild([
|
||||
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,
|
||||
100,200,500]), toUInt32(100), toUInt32(200)));
|
||||
|
||||
-- bitmapMin:
|
||||
---- Empty
|
||||
SELECT bitmapMin(bitmapBuild(emptyArrayUInt8()));
|
||||
|
@ -16,30 +16,44 @@ left join y on (y.a = s.a and y.b = s.b) format Vertical;
|
||||
|
||||
select t.a, s.b, s.a, s.b, y.a, y.b from t
|
||||
left join s on (t.a = s.a and s.b = t.b)
|
||||
left join y on (y.a = s.a and y.b = s.b) format PrettyCompactNoEscapes;
|
||||
left join y on (y.a = s.a and y.b = s.b)
|
||||
order by t.a
|
||||
format PrettyCompactNoEscapes;
|
||||
|
||||
select t.a as t_a from t
|
||||
left join s on s.a = t_a format PrettyCompactNoEscapes;
|
||||
left join s on s.a = t_a
|
||||
order by t.a
|
||||
format PrettyCompactNoEscapes;
|
||||
|
||||
select t.a, s.a as s_a from t
|
||||
left join s on s.a = t.a
|
||||
left join y on y.b = s.b format PrettyCompactNoEscapes;
|
||||
left join y on y.b = s.b
|
||||
order by t.a
|
||||
format PrettyCompactNoEscapes;
|
||||
|
||||
select t.a, t.a, t.b as t_b from t
|
||||
left join s on t.a = s.a
|
||||
left join y on y.b = s.b format PrettyCompactNoEscapes;
|
||||
left join y on y.b = s.b
|
||||
order by t.a
|
||||
format PrettyCompactNoEscapes;
|
||||
|
||||
select s.a, s.a, s.b as s_b, s.b from t
|
||||
left join s on s.a = t.a
|
||||
left join y on s.b = y.b format PrettyCompactNoEscapes;
|
||||
left join y on s.b = y.b
|
||||
order by t.a
|
||||
format PrettyCompactNoEscapes;
|
||||
|
||||
select y.a, y.a, y.b as y_b, y.b from t
|
||||
left join s on s.a = t.a
|
||||
left join y on y.b = s.b format PrettyCompactNoEscapes;
|
||||
left join y on y.b = s.b
|
||||
order by t.a
|
||||
format PrettyCompactNoEscapes;
|
||||
|
||||
select t.a, t.a as t_a, s.a, s.a as s_a, y.a, y.a as y_a from t
|
||||
left join s on t.a = s.a
|
||||
left join y on y.b = s.b format PrettyCompactNoEscapes;
|
||||
left join y on y.b = s.b
|
||||
order by t.a
|
||||
format PrettyCompactNoEscapes;
|
||||
|
||||
drop table t;
|
||||
drop table s;
|
||||
|
@ -0,0 +1,8 @@
|
||||
1 1 1 1
|
||||
0 0 0 0
|
||||
0
|
||||
1
|
||||
1 1 1 1 1 1
|
||||
2 2 0 0 0 0
|
||||
2 2 0
|
||||
1 1 1
|
@ -0,0 +1,35 @@
|
||||
drop table if exists t;
|
||||
drop table if exists s;
|
||||
drop table if exists y;
|
||||
|
||||
create table t(a Int64, b Int64) engine = Memory;
|
||||
create table s(a Int64, b Int64) engine = Memory;
|
||||
create table y(a Int64, b Int64) engine = Memory;
|
||||
|
||||
insert into t values (1,1), (2,2);
|
||||
insert into s values (1,1);
|
||||
insert into y values (1,1);
|
||||
|
||||
select s.a, s.a, s.b as s_b, s.b from t
|
||||
left join s on s.a = t.a
|
||||
left join y on s.b = y.b
|
||||
order by t.a;
|
||||
|
||||
select max(s.a) from t
|
||||
left join s on s.a = t.a
|
||||
left join y on s.b = y.b
|
||||
group by t.a;
|
||||
|
||||
select t.a, t.a as t_a, s.a, s.a as s_a, y.a, y.a as y_a from t
|
||||
left join s on t.a = s.a
|
||||
left join y on y.b = s.b
|
||||
order by t.a;
|
||||
|
||||
select t.a, t.a as t_a, max(s.a) from t
|
||||
left join s on t.a = s.a
|
||||
left join y on y.b = s.b
|
||||
group by t.a;
|
||||
|
||||
drop table t;
|
||||
drop table s;
|
||||
drop table y;
|
@ -2,8 +2,8 @@
|
||||
n: "123", s1: qwe,rty, s2: 'as"df\'gh', s3: "", s4: "zx
|
||||
cv bn m", d: 2016-01-01, n: 123 ;
|
||||
n: "456", s1: as"df\'gh, s2: '', s3: "zx\ncv\tbn m", s4: "qwe,rty", d: 2016-01-02, n: 456 ;
|
||||
n: "9876543210", s1: , s2: 'zx\ncv\tbn m', s3: "qwe,rty", s4: "as""df'gh", d: 2016-01-03, n: 9876543210 ;
|
||||
n: "789", s1: zx\ncv\tbn m, s2: 'qwe,rty', s3: "as\"df'gh", s4: "", d: 2016-01-04, n: 789
|
||||
n: "789", s1: zx\ncv\tbn m, s2: 'qwe,rty', s3: "as\"df'gh", s4: "", d: 2016-01-04, n: 789 ;
|
||||
n: "9876543210", s1: , s2: 'zx\ncv\tbn m', s3: "qwe,rty", s4: "as""df'gh", d: 2016-01-03, n: 9876543210
|
||||
------
|
||||
n: "0", s1: , s2: '', s3: "", s4: "", d: 0000-00-00, n: 0
|
||||
------
|
||||
|
@ -3,7 +3,7 @@ CREATE TABLE template (s1 String, s2 String, `s 3` String, "s 4" String, n UInt6
|
||||
INSERT INTO template VALUES
|
||||
('qwe,rty', 'as"df''gh', '', 'zx\ncv\tbn m', 123, '2016-01-01'),('as"df''gh', '', 'zx\ncv\tbn m', 'qwe,rty', 456, '2016-01-02'),('', 'zx\ncv\tbn m', 'qwe,rty', 'as"df''gh', 9876543210, '2016-01-03'),('zx\ncv\tbn m', 'qwe,rty', 'as"df''gh', '', 789, '2016-01-04');
|
||||
|
||||
SELECT * FROM template WITH TOTALS LIMIT 4 FORMAT Template SETTINGS
|
||||
SELECT * FROM template GROUP BY s1, s2, `s 3`, "s 4", n, d WITH TOTALS ORDER BY n LIMIT 4 FORMAT Template SETTINGS
|
||||
extremes = 1,
|
||||
format_schema = '{prefix} \n${data:None}\n------\n${totals:}\n------\n${min}\n------\n${max}\n${rows:Escaped} rows\nbefore limit ${rows_before_limit:XML}\nread ${rows_read:Escaped} $$ suffix $$',
|
||||
format_schema_rows = 'n:\t${n:JSON}, s1:\t${s1:Escaped}, s2:\t${s2:Quoted}, s3:\t${`s 3`:JSON}, s4:\t${"s 4":CSV}, d:\t${d:Escaped}, n:\t${n:Raw}\t',
|
||||
|
@ -4,6 +4,9 @@ dictGetOrDefault flat_ints 0 42 42 42 42 42 42 42 42
|
||||
dictGet hashed_ints 1 1 1 1 1 1 1 1 1
|
||||
dictGetOrDefault hashed_ints 1 1 1 1 1 1 1 1 1
|
||||
dictGetOrDefault hashed_ints 0 42 42 42 42 42 42 42 42
|
||||
dictGet hashed_sparse_ints 1 1 1 1 1 1 1 1 1
|
||||
dictGetOrDefault hashed_sparse_ints 1 1 1 1 1 1 1 1 1
|
||||
dictGetOrDefault hashed_sparse_ints 0 42 42 42 42 42 42 42 42
|
||||
dictGet cache_ints 1 1 1 1 1 1 1 1 1
|
||||
dictGetOrDefault cache_ints 1 1 1 1 1 1 1 1 1
|
||||
dictGetOrDefault cache_ints 0 42 42 42 42 42 42 42 42
|
||||
|
@ -69,6 +69,34 @@ select 'dictGetOrDefault', 'hashed_ints' as dict_name, toUInt64(0) as k,
|
||||
dictGetOrDefault(dict_name, 'u32', k, toUInt32(42)),
|
||||
dictGetOrDefault(dict_name, 'u64', k, toUInt64(42));
|
||||
|
||||
select 'dictGet', 'hashed_sparse_ints' as dict_name, toUInt64(1) as k,
|
||||
dictGet(dict_name, 'i8', k),
|
||||
dictGet(dict_name, 'i16', k),
|
||||
dictGet(dict_name, 'i32', k),
|
||||
dictGet(dict_name, 'i64', k),
|
||||
dictGet(dict_name, 'u8', k),
|
||||
dictGet(dict_name, 'u16', k),
|
||||
dictGet(dict_name, 'u32', k),
|
||||
dictGet(dict_name, 'u64', k);
|
||||
select 'dictGetOrDefault', 'hashed_sparse_ints' as dict_name, toUInt64(1) as k,
|
||||
dictGetOrDefault(dict_name, 'i8', k, toInt8(42)),
|
||||
dictGetOrDefault(dict_name, 'i16', k, toInt16(42)),
|
||||
dictGetOrDefault(dict_name, 'i32', k, toInt32(42)),
|
||||
dictGetOrDefault(dict_name, 'i64', k, toInt64(42)),
|
||||
dictGetOrDefault(dict_name, 'u8', k, toUInt8(42)),
|
||||
dictGetOrDefault(dict_name, 'u16', k, toUInt16(42)),
|
||||
dictGetOrDefault(dict_name, 'u32', k, toUInt32(42)),
|
||||
dictGetOrDefault(dict_name, 'u64', k, toUInt64(42));
|
||||
select 'dictGetOrDefault', 'hashed_sparse_ints' as dict_name, toUInt64(0) as k,
|
||||
dictGetOrDefault(dict_name, 'i8', k, toInt8(42)),
|
||||
dictGetOrDefault(dict_name, 'i16', k, toInt16(42)),
|
||||
dictGetOrDefault(dict_name, 'i32', k, toInt32(42)),
|
||||
dictGetOrDefault(dict_name, 'i64', k, toInt64(42)),
|
||||
dictGetOrDefault(dict_name, 'u8', k, toUInt8(42)),
|
||||
dictGetOrDefault(dict_name, 'u16', k, toUInt16(42)),
|
||||
dictGetOrDefault(dict_name, 'u32', k, toUInt32(42)),
|
||||
dictGetOrDefault(dict_name, 'u64', k, toUInt64(42));
|
||||
|
||||
select 'dictGet', 'cache_ints' as dict_name, toUInt64(1) as k,
|
||||
dictGet(dict_name, 'i8', k),
|
||||
dictGet(dict_name, 'i16', k),
|
||||
|
@ -0,0 +1,9 @@
|
||||
1 0 0
|
||||
1 0 1
|
||||
1 1 0
|
||||
1 1 1
|
||||
-
|
||||
1 0 0
|
||||
1 0 1
|
||||
1 1 0
|
||||
1 1 1
|
@ -0,0 +1,12 @@
|
||||
DROP TABLE IF EXISTS ints;
|
||||
CREATE TABLE ints (i64 Int64, i32 Int32) ENGINE = Memory;
|
||||
|
||||
SET partial_merge_join = 1;
|
||||
|
||||
INSERT INTO ints SELECT 1 AS i64, number AS i32 FROM numbers(2);
|
||||
|
||||
SELECT * FROM ints l LEFT JOIN ints r USING i64 ORDER BY l.i32, r.i32;
|
||||
SELECT '-';
|
||||
SELECT * FROM ints l INNER JOIN ints r USING i64 ORDER BY l.i32, r.i32;
|
||||
|
||||
DROP TABLE ints;
|
@ -0,0 +1,5 @@
|
||||
SET allow_experimental_data_skipping_indices=1;
|
||||
CREATE TABLE foo (key int, INDEX i1 key TYPE minmax GRANULARITY 1) Engine=MergeTree() ORDER BY key;
|
||||
CREATE TABLE as_foo AS foo;
|
||||
CREATE TABLE dist (key int, INDEX i1 key TYPE minmax GRANULARITY 1) Engine=Distributed(test_shard_localhost, currentDatabase(), 'foo'); -- { serverError 36 }
|
||||
CREATE TABLE dist_as_foo Engine=Distributed(test_shard_localhost, currentDatabase(), 'foo') AS foo;
|
@ -0,0 +1 @@
|
||||
OK
|
30
dbms/tests/queries/0_stateless/01013_sync_replica_timeout_zookeeper.sh
Executable file
@ -0,0 +1,30 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
|
||||
. $CURDIR/../shell_config.sh
|
||||
|
||||
|
||||
R1=table_1013_1
|
||||
R2=table_1013_2
|
||||
|
||||
${CLICKHOUSE_CLIENT} -n -q "
|
||||
DROP TABLE IF EXISTS $R1;
|
||||
DROP TABLE IF EXISTS $R2;
|
||||
|
||||
CREATE TABLE $R1 (x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/${CLICKHOUSE_DATABASE}.table_1013', 'r1') ORDER BY x;
|
||||
CREATE TABLE $R2 (x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/${CLICKHOUSE_DATABASE}.table_1013', 'r2') ORDER BY x;
|
||||
|
||||
SYSTEM STOP FETCHES $R2;
|
||||
INSERT INTO $R1 VALUES (1)
|
||||
"
|
||||
|
||||
timeout 10s ${CLICKHOUSE_CLIENT} -n -q "
|
||||
SET receive_timeout=1;
|
||||
SYSTEM SYNC REPLICA $R2
|
||||
" 2>&1 | fgrep -q "DB::Exception: SYNC REPLICA ${CLICKHOUSE_DATABASE}.$R2: command timed out!" && echo 'OK' || echo 'Failed!'
|
||||
|
||||
# Dropping the tables also terminates any related SYNC REPLICA queries that are still running
|
||||
${CLICKHOUSE_CLIENT} -n -q "
|
||||
DROP TABLE IF EXISTS $R2;
|
||||
DROP TABLE IF EXISTS $R1;
|
||||
"
|
@ -0,0 +1,7 @@
|
||||
11
|
||||
|
||||
11
|
||||
12
|
||||
12
|
||||
13
|
||||
13
|
6
dbms/tests/queries/0_stateless/01013_totals_without_aggregation.sql
Executable file
@ -0,0 +1,6 @@
|
||||
SELECT 11 AS n GROUP BY n WITH TOTALS;
|
||||
SELECT 12 AS n GROUP BY n WITH ROLLUP;
|
||||
SELECT 13 AS n GROUP BY n WITH CUBE;
|
||||
SELECT 1 AS n WITH TOTALS; -- { serverError 49 }
|
||||
SELECT 1 AS n WITH ROLLUP; -- { serverError 49 }
|
||||
SELECT 1 AS n WITH CUBE; -- { serverError 49 }
|
@ -56,7 +56,8 @@ RUN apt-get update -y \
|
||||
tzdata \
|
||||
gperf \
|
||||
cmake \
|
||||
gdb
|
||||
gdb \
|
||||
rename
|
||||
|
||||
COPY build.sh /
|
||||
CMD ["/bin/bash", "/build.sh"]
|
||||
|
@ -12,3 +12,17 @@ ninja
|
||||
ccache --show-stats ||:
|
||||
mv ./dbms/programs/clickhouse* /output
|
||||
mv ./dbms/unit_tests_dbms /output
|
||||
find . -name '*.so' -print -exec mv '{}' /output \;
|
||||
find . -name '*.so.*' -print -exec mv '{}' /output \;
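# If the split build produced shared libraries, bundle them together with the default server configs into a single tarball.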
|
||||
|
||||
count=`ls -1 /output/*.so 2>/dev/null | wc -l`
|
||||
if [ $count != 0 ]
|
||||
then
|
||||
mkdir -p /output/config
|
||||
cp ../dbms/programs/server/config.xml /output/config
|
||||
cp ../dbms/programs/server/users.xml /output/config
|
||||
cp -r ../dbms/programs/server/config.d /output/config
|
||||
tar -czvf shared_build.tgz /output
|
||||
rm -r /output/*
|
||||
mv shared_build.tgz /output
|
||||
fi
|
||||
|
@ -141,7 +141,8 @@ Settings:
|
||||
- timeout – The timeout for sending data, in seconds.
|
||||
- root_path – Prefix for keys.
|
||||
- metrics – Sending data from a :ref:`system_tables-system.metrics` table.
|
||||
- events – Sending data from a :ref:`system_tables-system.events` table.
|
||||
- events – Sending delta data accumulated for the time period from a :ref:`system_tables-system.events` table.
|
||||
- events_cumulative – Sending cumulative data from a :ref:`system_tables-system.events` table.
|
||||
- asynchronous_metrics – Sending data from a :ref:`system_tables-system.asynchronous_metrics` table.
|
||||
|
||||
You can configure multiple `<graphite>` clauses. For instance, you can use this for sending different data at different intervals.
|
||||
@ -157,6 +158,7 @@ You can configure multiple `<graphite>` clauses. For instance, you can use this
|
||||
<root_path>one_min</root_path>
|
||||
<metrics>true</metrics>
|
||||
<events>true</events>
|
||||
<events_cumulative>false</events_cumulative>
|
||||
<asynchronous_metrics>true</asynchronous_metrics>
|
||||
</graphite>
|
||||
```
|
||||
|
@ -39,6 +39,7 @@ The configuration looks like this:
|
||||
|
||||
- [flat](#flat)
|
||||
- [hashed](#dicts-external_dicts_dict_layout-hashed)
|
||||
- [sparse_hashed](#dicts-external_dicts_dict_layout-sparse_hashed)
|
||||
- [cache](#cache)
|
||||
- [range_hashed](#range-hashed)
|
||||
- [complex_key_hashed](#complex-key-hashed)
|
||||
@ -77,6 +78,18 @@ Configuration example:
|
||||
</layout>
|
||||
```
|
||||
|
||||
### sparse_hashed {#dicts-external_dicts_dict_layout-sparse_hashed}
|
||||
|
||||
Similar to `hashed`, but uses less memory in favor of more CPU usage.
|
||||
|
||||
Configuration example:
|
||||
|
||||
```xml
|
||||
<layout>
|
||||
<sparse_hashed />
|
||||
</layout>
|
||||
```
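Query-side usage is unchanged by the layout; only the memory/CPU trade-off differs. A minimal sketch, assuming a dictionary named `hashed_sparse_ints` with a `u64` attribute (the same names used by the dictGet tests in this changeset):

``` sql
-- Look up attributes from a dictionary declared with <sparse_hashed />;
-- the calls are identical to the <hashed /> case.
SELECT
    dictGet('hashed_sparse_ints', 'u64', toUInt64(1)) AS value,
    dictGetOrDefault('hashed_sparse_ints', 'u64', toUInt64(0), toUInt64(42)) AS value_or_default;
```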
|
||||
|
||||
|
||||
### complex_key_hashed
|
||||
|
||||
|
@ -82,6 +82,32 @@ SELECT bitmapToArray(bitmapSubsetInRange(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,
|
||||
└───────────────────┘
|
||||
```
|
||||
|
||||
## bitmapSubsetLimit {#bitmap_functions-bitmapsubsetlimit}
|
||||
|
||||
Returns a subset containing at most `limit` of the smallest values in the set that are not less than `range_start`.
|
||||
|
||||
```
|
||||
bitmapSubsetLimit(bitmap, range_start, limit)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `bitmap` – [Bitmap object](#bitmap_functions-bitmapbuild).
|
||||
- `range_start` – range start point. Type: [UInt32](../../data_types/int_uint.md).
|
||||
- `limit` – subset cardinality upper limit. Type: [UInt32](../../data_types/int_uint.md).
|
||||
|
||||
**Example**
|
||||
|
||||
``` sql
|
||||
SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res
|
||||
```
|
||||
|
||||
```
|
||||
┌─res───────────────────────┐
|
||||
│ [30,31,32,33,100,200,500] │
|
||||
└───────────────────────────┘
|
||||
```
|
||||
|
||||
## bitmapContains {#bitmap_functions-bitmapcontains}
|
||||
|
||||
Checks whether the bitmap contains an element.
|
||||
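A quick illustration (not part of the diff) of `bitmapContains`, which returns 1 when the element is present and 0 otherwise:

```sql
-- The element to test is passed as UInt32.
SELECT
    bitmapContains(bitmapBuild([1, 5, 7, 9]), toUInt32(9)) AS has_9,
    bitmapContains(bitmapBuild([1, 5, 7, 9]), toUInt32(6)) AS has_6;
```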
@ -294,7 +320,7 @@ SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5])) AS res

## bitmapMin

Retrun smallest value of type UInt64 in the set, UINT32_MAX if the set is empty.
Return the smallest value of type UInt64 in the set, or UINT32_MAX if the set is empty.

```
@ -319,7 +345,7 @@ SELECT bitmapMin(bitmapBuild([1, 2, 3, 4, 5])) AS res

## bitmapMax

Retrun smallest value of type UInt64 in the set, 0 if the set is empty.
Return the greatest value of type UInt64 in the set, or 0 if the set is empty.

```
@ -140,7 +140,8 @@ ClickHouse will check the conditions `min_part_size` and `min_part_size_rat
- timeout – The timeout for sending data, in seconds.
- root_path – Prefix for keys.
- metrics – Sending data from the :ref:`system_tables-system.metrics` table.
- events – Sending data from the :ref:`system_tables-system.events` table.
- events – Sending data deltas accumulated over the time period from the :ref:`system_tables-system.events` table.
- events_cumulative – Sending cumulative data from the :ref:`system_tables-system.events` table.
- asynchronous_metrics – Sending data from the :ref:`system_tables-system.asynchronous_metrics` table.

You can define multiple `<graphite>` sections, for instance, to send different data at different intervals.
@ -156,6 +157,7 @@ ClickHouse will check the conditions `min_part_size` and `min_part_size_rat
    <root_path>one_min</root_path>
    <metrics>true</metrics>
    <events>true</events>
    <events_cumulative>false</events_cumulative>
    <asynchronous_metrics>true</asynchronous_metrics>
</graphite>
```
@ -332,7 +332,7 @@ TTL date_time + INTERVAL 15 HOUR
Creating a table with TTL

```sql
CREATE TABLE example_table
(
    d DateTime,
    a Int TTL d + INTERVAL 1 MONTH,
@ -367,7 +367,7 @@ ALTER TABLE example_table
Examples:

```sql
CREATE TABLE example_table
(
    d DateTime,
    a Int
@ -378,7 +378,7 @@ ORDER BY d
TTL d + INTERVAL 1 MONTH;
```

Changing TTL

```sql
ALTER TABLE example_table
@ -488,10 +488,10 @@ CREATE TABLE table_with_non_default_policy (
    OrderID UInt64,
    BannerID UInt64,
    SearchPhrase String
) ENGINE = MergeTree()
) ENGINE = MergeTree
ORDER BY (OrderID, BannerID)
PARTITION BY toYYYYMM(EventDate)
SETTINGS storage_policy_name='moving_from_ssd_to_hdd'
SETTINGS storage_policy = 'moving_from_ssd_to_hdd'
```

By default, the `default` storage policy is used; it contains a single volume with a single disk, the one specified in `<path>`. At the moment, the storage policy cannot be changed after the table has been created.
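Not part of the diff: a hypothetical way to inspect the configured disks and policies, assuming the `system.disks` and `system.storage_policies` tables are available in your server version:

```sql
-- Availability and columns of these tables depend on the server version.
SELECT * FROM system.disks;
SELECT * FROM system.storage_policies;
```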
@ -502,7 +502,7 @@ SETTINGS storage_policy_name='moving_from_ssd_to_hdd'

* As a result of an insert (`INSERT` query).
* During background merges and [mutations](../../query_language/alter.md#alter-mutations).
* When downloading data from another replica.
* As a result of partition freezing [ALTER TABLE ... FREEZE PARTITION](../../query_language/alter.md#alter_freeze-partition).

In all cases except mutations and partition freezing, when a part is written, a volume and disk are chosen according to the specified storage configuration:
Some files were not shown because too many files have changed in this diff.