diff --git a/CHANGELOG.md b/CHANGELOG.md index d2cc3e51997..e2c777b3bcf 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,148 @@ +## ClickHouse release 21.2 + +### ClickHouse release v21.2.2.8-stable, 2021-02-07 + +#### Backward Incompatible Change + +* Bitwise functions (`bitAnd`, `bitOr`, etc) are forbidden for floating point arguments. Now you have to do an explicit cast to integer. [#19853](https://github.com/ClickHouse/ClickHouse/pull/19853) ([Azat Khuzhin](https://github.com/azat)). +* Forbid `lcm`/`gcd` for floats. [#19532](https://github.com/ClickHouse/ClickHouse/pull/19532) ([Azat Khuzhin](https://github.com/azat)). +* Fix memory tracking for `OPTIMIZE TABLE`/merges; account query memory limits and sampling for `OPTIMIZE TABLE`/merges. [#18772](https://github.com/ClickHouse/ClickHouse/pull/18772) ([Azat Khuzhin](https://github.com/azat)). +* Disallow floating point column as partition key, see [#18421](https://github.com/ClickHouse/ClickHouse/issues/18421#event-4147046255). [#18464](https://github.com/ClickHouse/ClickHouse/pull/18464) ([hexiaoting](https://github.com/hexiaoting)). +* Excessive parentheses in type definitions are no longer supported, for example: `Array((UInt8))`. + +#### New Feature + +* Added `PostgreSQL` table engine (both select/insert, with support for multidimensional arrays), also as table function. Added `PostgreSQL` dictionary source. Added `PostgreSQL` database engine. [#18554](https://github.com/ClickHouse/ClickHouse/pull/18554) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Data type `Nested` now supports arbitrary levels of nesting. Introduced subcolumns of complex types, such as `size0` in `Array`, `null` in `Nullable`, names of `Tuple` elements, which can be read without reading the whole column. [#17310](https://github.com/ClickHouse/ClickHouse/pull/17310) ([Anton Popov](https://github.com/CurtizJ)). +* Added `Nullable` support for `FlatDictionary`, `HashedDictionary`, `ComplexKeyHashedDictionary`, `DirectDictionary`, `ComplexKeyDirectDictionary`, `RangeHashedDictionary`. [#18236](https://github.com/ClickHouse/ClickHouse/pull/18236) ([Maksim Kita](https://github.com/kitaisreal)). +* Added a new table `system.distributed_ddl_queue` that displays the queries in the DDL worker queue. [#17656](https://github.com/ClickHouse/ClickHouse/pull/17656) ([Bharat Nallan](https://github.com/bharatnc)). +* Added support for mapping LDAP group names, and attribute values in general, to local roles for users from LDAP user directories. [#17211](https://github.com/ClickHouse/ClickHouse/pull/17211) ([Denis Glazachev](https://github.com/traceon)). +* Support INSERT into the table function `cluster`, and for both table functions `remote` and `cluster`, support distributing data across nodes by specifying a sharding key. Close [#16752](https://github.com/ClickHouse/ClickHouse/issues/16752). [#18264](https://github.com/ClickHouse/ClickHouse/pull/18264) ([flynn](https://github.com/ucasFL)). +* Add function `decodeXMLComponent` to decode characters that were escaped for XML. Example: `SELECT decodeXMLComponent('Hello,"world"!')` [#17659](https://github.com/ClickHouse/ClickHouse/issues/17659). [#18542](https://github.com/ClickHouse/ClickHouse/pull/18542) ([nauta](https://github.com/nautaa)). +* Added functions `parseDateTimeBestEffortUSOrZero`, `parseDateTimeBestEffortUSOrNull`. [#19712](https://github.com/ClickHouse/ClickHouse/pull/19712) ([Maksim Kita](https://github.com/kitaisreal)). +* Add `sign` math function. 
[#19527](https://github.com/ClickHouse/ClickHouse/pull/19527) ([flynn](https://github.com/ucasFL)). +* Add information about used features (functions, table engines, etc) into system.query_log. [#18495](https://github.com/ClickHouse/ClickHouse/issues/18495). [#19371](https://github.com/ClickHouse/ClickHouse/pull/19371) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Function `formatDateTime` supports the `%Q` modifier to format the date as a quarter. [#19224](https://github.com/ClickHouse/ClickHouse/pull/19224) ([Jianmei Zhang](https://github.com/zhangjmruc)). +* Support MetaKey+Enter hotkey binding in play UI. [#19012](https://github.com/ClickHouse/ClickHouse/pull/19012) ([sundyli](https://github.com/sundy-li)). +* Add three functions for the map data type: 1. `mapContains(map, key)` to check whether the map's keys include the second parameter `key`; 2. `mapKeys(map)` returns all the keys as an Array; 3. `mapValues(map)` returns all the values as an Array. [#18788](https://github.com/ClickHouse/ClickHouse/pull/18788) ([hexiaoting](https://github.com/hexiaoting)). +* Add `log_comment` setting related to [#18494](https://github.com/ClickHouse/ClickHouse/issues/18494). [#18549](https://github.com/ClickHouse/ClickHouse/pull/18549) ([Zijie Lu](https://github.com/TszKitLo40)). +* Add support for a tuple argument to `argMin` and `argMax` functions. [#17359](https://github.com/ClickHouse/ClickHouse/pull/17359) ([Ildus Kurbangaliev](https://github.com/ildus)). +* Support `EXISTS VIEW` syntax. [#18552](https://github.com/ClickHouse/ClickHouse/pull/18552) ([Du Chuan](https://github.com/spongedu)). +* Add `SELECT ALL` syntax. Closes [#18706](https://github.com/ClickHouse/ClickHouse/issues/18706). [#18723](https://github.com/ClickHouse/ClickHouse/pull/18723) ([flynn](https://github.com/ucasFL)). + +#### Performance Improvement + +* Faster parts removal by lowering the number of `stat` syscalls. This returns the optimization that existed a while ago. Safer interface of `IDisk`. This closes [#19065](https://github.com/ClickHouse/ClickHouse/issues/19065). [#19086](https://github.com/ClickHouse/ClickHouse/pull/19086) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Aliases declared in a `WITH` statement are properly used in index analysis. Queries like `WITH column AS alias SELECT ... WHERE alias = ...` may use an index now. [#18896](https://github.com/ClickHouse/ClickHouse/pull/18896) ([Amos Bird](https://github.com/amosbird)). +* Add `optimize_alias_column_prediction` (on by default), that will: - Respect aliased columns in WHERE during partition pruning and skipping data using secondary indexes; - Respect aliased columns in WHERE for trivial count queries for optimize_trivial_count; - Respect aliased columns in GROUP BY/ORDER BY for optimize_aggregation_in_order/optimize_read_in_order. [#16995](https://github.com/ClickHouse/ClickHouse/pull/16995) ([sundyli](https://github.com/sundy-li)). +* Speed up the aggregate function `sum`. The improvement is only visible in synthetic benchmarks and is not very practical. [#19216](https://github.com/ClickHouse/ClickHouse/pull/19216) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Update libc++ and use another ABI to provide better performance. [#18914](https://github.com/ClickHouse/ClickHouse/pull/18914) ([Danila Kutenin](https://github.com/danlark1)). +* Rewrite `sumIf()` and `sum(if())` function to `countIf()` function when logically equivalent. [#17041](https://github.com/ClickHouse/ClickHouse/pull/17041) ([flynn](https://github.com/ucasFL)). 
+* Use a connection pool for S3 connections, controlled by the `s3_max_connections` setting. [#13405](https://github.com/ClickHouse/ClickHouse/pull/13405) ([Vladimir Chebotarev](https://github.com/excitoon)). +* Add support for the zstd long option for better compression of string columns to save space. [#17184](https://github.com/ClickHouse/ClickHouse/pull/17184) ([ygrek](https://github.com/ygrek)). +* Slightly improve server latency by removing access to the configuration on every connection. [#19863](https://github.com/ClickHouse/ClickHouse/pull/19863) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Reduce lock contention for multiple layers of the `Buffer` engine. [#19379](https://github.com/ClickHouse/ClickHouse/pull/19379) ([Azat Khuzhin](https://github.com/azat)). +* Support splitting the `Filter` step of a query plan into an `Expression + Filter` pair. Together with the `Expression + Expression` merging optimization ([#17458](https://github.com/ClickHouse/ClickHouse/issues/17458)) it may delay execution of some expressions until after the `Filter` step. [#19253](https://github.com/ClickHouse/ClickHouse/pull/19253) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). + +#### Improvement + +* `SELECT count() FROM table` can now be executed if at least one column can be selected from the `table`. This PR fixes [#10639](https://github.com/ClickHouse/ClickHouse/issues/10639). [#18233](https://github.com/ClickHouse/ClickHouse/pull/18233) ([Vitaly Baranov](https://github.com/vitlibar)). +* Set charset to `utf8mb4` when interacting with remote MySQL servers. Fixes [#19795](https://github.com/ClickHouse/ClickHouse/issues/19795). [#19800](https://github.com/ClickHouse/ClickHouse/pull/19800) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* `S3` table function now supports `auto` compression mode (autodetect). This closes [#18754](https://github.com/ClickHouse/ClickHouse/issues/18754). [#19793](https://github.com/ClickHouse/ClickHouse/pull/19793) ([Vladimir Chebotarev](https://github.com/excitoon)). +* Correctly output infinite arguments for the `formatReadableTimeDelta` function. In previous versions, there was an implicit conversion to an implementation-specific integer value. [#19791](https://github.com/ClickHouse/ClickHouse/pull/19791) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Table function `S3` will use the global region if the region can't be determined exactly. This closes [#10998](https://github.com/ClickHouse/ClickHouse/issues/10998). [#19750](https://github.com/ClickHouse/ClickHouse/pull/19750) ([Vladimir Chebotarev](https://github.com/excitoon)). +* In distributed queries, if the setting `async_socket_for_remote` is enabled, it was possible to get a stack overflow (at least in debug build configuration) if a very deeply nested data type is used in a table (e.g. `Array(Array(Array(...more...)))`). This fixes [#19108](https://github.com/ClickHouse/ClickHouse/issues/19108). This change introduces minor backward incompatibility: excessive parentheses in type definitions are no longer supported, for example: `Array((UInt8))`. [#19736](https://github.com/ClickHouse/ClickHouse/pull/19736) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Add a separate pool for message brokers (RabbitMQ and Kafka). [#19722](https://github.com/ClickHouse/ClickHouse/pull/19722) ([Azat Khuzhin](https://github.com/azat)). +* Fix rare `max_number_of_merges_with_ttl_in_pool` limit overrun (more merges with TTL can be assigned) for non-replicated MergeTree. 
[#19708](https://github.com/ClickHouse/ClickHouse/pull/19708) ([alesapin](https://github.com/alesapin)). +* Dictionary: better error message during attribute parsing. [#19678](https://github.com/ClickHouse/ClickHouse/pull/19678) ([Maksim Kita](https://github.com/kitaisreal)). +* Add an option to disable validation of checksums on reading. Should never be used in production. Please do not expect any benefits in disabling it. It may only be used for experiments and benchmarks. The setting is only applicable for tables of the MergeTree family. Checksums are always validated for other table engines and when receiving data over the network. In my observations there is no performance difference or it is less than 0.5%. [#19588](https://github.com/ClickHouse/ClickHouse/pull/19588) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Support constant result in function `multiIf`. [#19533](https://github.com/ClickHouse/ClickHouse/pull/19533) ([Maksim Kita](https://github.com/kitaisreal)). +* Enable functions `length`/`empty`/`notEmpty` for the `Map` data type; they return the number of keys in the Map. [#19530](https://github.com/ClickHouse/ClickHouse/pull/19530) ([taiyang-li](https://github.com/taiyang-li)). +* Add `--reconnect` option to `clickhouse-benchmark`. When this option is specified, it will reconnect before every request. This is needed for testing. [#19872](https://github.com/ClickHouse/ClickHouse/pull/19872) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Support using the new location of the `.debug` file. This fixes [#19348](https://github.com/ClickHouse/ClickHouse/issues/19348). [#19520](https://github.com/ClickHouse/ClickHouse/pull/19520) ([Amos Bird](https://github.com/amosbird)). +* `toIPv6` function parses `IPv4` addresses. [#19518](https://github.com/ClickHouse/ClickHouse/pull/19518) ([Bharat Nallan](https://github.com/bharatnc)). +* Add `http_referer` field to `system.query_log`, `system.processes`, etc. This closes [#19389](https://github.com/ClickHouse/ClickHouse/issues/19389). [#19390](https://github.com/ClickHouse/ClickHouse/pull/19390) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Improve MySQL compatibility by making more functions case insensitive and adding aliases. [#19387](https://github.com/ClickHouse/ClickHouse/pull/19387) ([Daniil Kondratyev](https://github.com/dankondr)). +* Add metrics for MergeTree parts (Wide/Compact/InMemory) types. [#19381](https://github.com/ClickHouse/ClickHouse/pull/19381) ([Azat Khuzhin](https://github.com/azat)). +* Allow Docker to be executed with an arbitrary uid. [#19374](https://github.com/ClickHouse/ClickHouse/pull/19374) ([filimonov](https://github.com/filimonov)). +* Fix wrong alignment of values of `IPv4` data type in Pretty formats. They were aligned to the right, not to the left. This closes [#19184](https://github.com/ClickHouse/ClickHouse/issues/19184). [#19339](https://github.com/ClickHouse/ClickHouse/pull/19339) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Allow changing `max_server_memory_usage` without restart. This closes [#18154](https://github.com/ClickHouse/ClickHouse/issues/18154). [#19186](https://github.com/ClickHouse/ClickHouse/pull/19186) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* The exception when function `bar` is called with a certain NaN argument could be slightly misleading in previous versions. This fixes [#19088](https://github.com/ClickHouse/ClickHouse/issues/19088). 
[#19107](https://github.com/ClickHouse/ClickHouse/pull/19107) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Explicitly set uid / gid of clickhouse user & group to the fixed values (101) in clickhouse-server images. [#19096](https://github.com/ClickHouse/ClickHouse/pull/19096) ([filimonov](https://github.com/filimonov)). +* Fixed `PeekableReadBuffer: Memory limit exceed` error when inserting data with huge strings. Fixes [#18690](https://github.com/ClickHouse/ClickHouse/issues/18690). [#18979](https://github.com/ClickHouse/ClickHouse/pull/18979) ([tavplubix](https://github.com/tavplubix)). +* Docker image: several improvements for clickhouse-server entrypoint. [#18954](https://github.com/ClickHouse/ClickHouse/pull/18954) ([filimonov](https://github.com/filimonov)). +* Add `normalizeQueryKeepNames` and `normalizedQueryHashKeepNames` to normalize queries without masking long names with `?`. This helps better analyze complex query logs. [#18910](https://github.com/ClickHouse/ClickHouse/pull/18910) ([Amos Bird](https://github.com/amosbird)). +* Check per-block checksum of the distributed batch on the sender before sending (without reading the file twice, the checksums will be verified while reading), this will avoid stuck of the INSERT on the receiver (on truncated .bin file on the sender). Avoid reading .bin files twice for batched INSERT (it was required to calculate rows/bytes to take squashing into account, now this information included into the header, backward compatible is preserved). [#18853](https://github.com/ClickHouse/ClickHouse/pull/18853) ([Azat Khuzhin](https://github.com/azat)). +* Fix issues with RIGHT and FULL JOIN of tables with aggregate function states. In previous versions exception about `cloneResized` method was thrown. [#18818](https://github.com/ClickHouse/ClickHouse/pull/18818) ([templarzq](https://github.com/templarzq)). +* Added prefix-based S3 endpoint settings. [#18812](https://github.com/ClickHouse/ClickHouse/pull/18812) ([Vladimir Chebotarev](https://github.com/excitoon)). +* Add [UInt8, UInt16, UInt32, UInt64] arguments types support for bitmapTransform, bitmapSubsetInRange, bitmapSubsetLimit, bitmapContains functions. This closes [#18713](https://github.com/ClickHouse/ClickHouse/issues/18713). [#18791](https://github.com/ClickHouse/ClickHouse/pull/18791) ([sundyli](https://github.com/sundy-li)). +* Allow CTE (Common Table Expressions) to be further aliased. Propagate CSE (Common Subexpressions Elimination) to subqueries in the same level when `enable_global_with_statement = 1`. This fixes [#17378](https://github.com/ClickHouse/ClickHouse/issues/17378) . This fixes https://github.com/ClickHouse/ClickHouse/pull/16575#issuecomment-753416235 . [#18684](https://github.com/ClickHouse/ClickHouse/pull/18684) ([Amos Bird](https://github.com/amosbird)). +* Update librdkafka to v1.6.0-RC2. Fixes [#18668](https://github.com/ClickHouse/ClickHouse/issues/18668). [#18671](https://github.com/ClickHouse/ClickHouse/pull/18671) ([filimonov](https://github.com/filimonov)). +* In case of unexpected exceptions automatically restart background thread which is responsible for execution of distributed DDL queries. Fixes [#17991](https://github.com/ClickHouse/ClickHouse/issues/17991). [#18285](https://github.com/ClickHouse/ClickHouse/pull/18285) ([徐炘](https://github.com/weeds085490)). +* Updated AWS C++ SDK in order to utilize global regions in S3. [#17870](https://github.com/ClickHouse/ClickHouse/pull/17870) ([Vladimir Chebotarev](https://github.com/excitoon)). 
+* Added support for `WITH ... [AND] [PERIODIC] REFRESH [interval_in_sec]` clause when creating `LIVE VIEW` tables. [#14822](https://github.com/ClickHouse/ClickHouse/pull/14822) ([vzakaznikov](https://github.com/vzakaznikov)). +* Restrict `MODIFY TTL` queries for `MergeTree` tables created in old syntax. Previously the query succeeded, but actually it had no effect. [#19064](https://github.com/ClickHouse/ClickHouse/pull/19064) ([Anton Popov](https://github.com/CurtizJ)). + +#### Bug Fix + +* Fix index analysis of binary functions with constant argument which leads to wrong query results. This fixes [#18364](https://github.com/ClickHouse/ClickHouse/issues/18364). [#18373](https://github.com/ClickHouse/ClickHouse/pull/18373) ([Amos Bird](https://github.com/amosbird)). +* Fix starting the server with tables having default expressions containing dictGet(). Allow getting return type of dictGet() without loading dictionary. [#19805](https://github.com/ClickHouse/ClickHouse/pull/19805) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix server crash after query with `if` function with `Tuple` type of then/else branches result. `Tuple` type must contain `Array` or another complex type. Fixes [#18356](https://github.com/ClickHouse/ClickHouse/issues/18356). [#20133](https://github.com/ClickHouse/ClickHouse/pull/20133) ([alesapin](https://github.com/alesapin)). +* `MaterializeMySQL` (experimental feature): Fix replication for statements that update several tables. [#20066](https://github.com/ClickHouse/ClickHouse/pull/20066) ([Håvard Kvålen](https://github.com/havardk)). +* Prevent "Connection refused" in docker during initialization script execution. [#20012](https://github.com/ClickHouse/ClickHouse/pull/20012) ([filimonov](https://github.com/filimonov)). +* `EmbeddedRocksDB` is an experimental storage. Fix the issue with lack of proper type checking. Simplified code. This closes [#19967](https://github.com/ClickHouse/ClickHouse/issues/19967). [#19972](https://github.com/ClickHouse/ClickHouse/pull/19972) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fix a segfault in function `fromModifiedJulianDay` when the argument type is `Nullable(T)` for any integral types other than Int32. [#19959](https://github.com/ClickHouse/ClickHouse/pull/19959) ([PHO](https://github.com/depressed-pho)). +* The function `greatCircleAngle` returned inaccurate results in previous versions. This closes [#19769](https://github.com/ClickHouse/ClickHouse/issues/19769). [#19789](https://github.com/ClickHouse/ClickHouse/pull/19789) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fix rare bug when some replicated operations (like mutation) cannot process some parts after data corruption. Fixes [#19593](https://github.com/ClickHouse/ClickHouse/issues/19593). [#19702](https://github.com/ClickHouse/ClickHouse/pull/19702) ([alesapin](https://github.com/alesapin)). +* Background thread which executes `ON CLUSTER` queries might hang waiting for dropped replicated table to do something. It's fixed. [#19684](https://github.com/ClickHouse/ClickHouse/pull/19684) ([yiguolei](https://github.com/yiguolei)). +* Fix wrong deserialization of columns description. It makes INSERT into a table with a column named `\` impossible. [#19479](https://github.com/ClickHouse/ClickHouse/pull/19479) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Mark distributed batch as broken in case of empty data block in one of files. 
[#19449](https://github.com/ClickHouse/ClickHouse/pull/19449) ([Azat Khuzhin](https://github.com/azat)). +* Fixed very rare bug that might cause mutation to hang after `DROP/DETACH/REPLACE/MOVE PARTITION`. It was partially fixed by [#15537](https://github.com/ClickHouse/ClickHouse/issues/15537) for the most cases. [#19443](https://github.com/ClickHouse/ClickHouse/pull/19443) ([tavplubix](https://github.com/tavplubix)). +* Fix possible error `Extremes transform was already added to pipeline`. Fixes [#14100](https://github.com/ClickHouse/ClickHouse/issues/14100). [#19430](https://github.com/ClickHouse/ClickHouse/pull/19430) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix default value in join types with non-zero default (e.g. some Enums). Closes [#18197](https://github.com/ClickHouse/ClickHouse/issues/18197). [#19360](https://github.com/ClickHouse/ClickHouse/pull/19360) ([vdimir](https://github.com/vdimir)). +* Do not mark file for distributed send as broken on EOF. [#19290](https://github.com/ClickHouse/ClickHouse/pull/19290) ([Azat Khuzhin](https://github.com/azat)). +* Fix leaking of pipe fd for `async_socket_for_remote`. [#19153](https://github.com/ClickHouse/ClickHouse/pull/19153) ([Azat Khuzhin](https://github.com/azat)). +* Fix infinite reading from file in `ORC` format (was introduced in [#10580](https://github.com/ClickHouse/ClickHouse/issues/10580)). Fixes [#19095](https://github.com/ClickHouse/ClickHouse/issues/19095). [#19134](https://github.com/ClickHouse/ClickHouse/pull/19134) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix issue in merge tree data writer which can lead to marks with bigger size than fixed granularity size. Fixes [#18913](https://github.com/ClickHouse/ClickHouse/issues/18913). [#19123](https://github.com/ClickHouse/ClickHouse/pull/19123) ([alesapin](https://github.com/alesapin)). +* Fix startup bug when clickhouse was not able to read compression codec from `LowCardinality(Nullable(...))` and throws exception `Attempt to read after EOF`. Fixes [#18340](https://github.com/ClickHouse/ClickHouse/issues/18340). [#19101](https://github.com/ClickHouse/ClickHouse/pull/19101) ([alesapin](https://github.com/alesapin)). +* Simplify the implementation of `tupleHammingDistance`. Support for tuples of any equal length. Fixes [#19029](https://github.com/ClickHouse/ClickHouse/issues/19029). [#19084](https://github.com/ClickHouse/ClickHouse/pull/19084) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Make sure `groupUniqArray` returns correct type for argument of Enum type. This closes [#17875](https://github.com/ClickHouse/ClickHouse/issues/17875). [#19019](https://github.com/ClickHouse/ClickHouse/pull/19019) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fix possible error `Expected single dictionary argument for function` if use function `ignore` with `LowCardinality` argument. Fixes [#14275](https://github.com/ClickHouse/ClickHouse/issues/14275). [#19016](https://github.com/ClickHouse/ClickHouse/pull/19016) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix inserting of `LowCardinality` column to table with `TinyLog` engine. Fixes [#18629](https://github.com/ClickHouse/ClickHouse/issues/18629). [#19010](https://github.com/ClickHouse/ClickHouse/pull/19010) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix minor issue in JOIN: Join tries to materialize const columns, but our code waits for them in other places. 
[#18982](https://github.com/ClickHouse/ClickHouse/pull/18982) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Disable `optimize_move_functions_out_of_any` because optimization is not always correct. This closes [#18051](https://github.com/ClickHouse/ClickHouse/issues/18051). This closes [#18973](https://github.com/ClickHouse/ClickHouse/issues/18973). [#18981](https://github.com/ClickHouse/ClickHouse/pull/18981) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Fix possible exception `QueryPipeline stream: different number of columns` caused by merging of query plan's `Expression` steps. Fixes [#18190](https://github.com/ClickHouse/ClickHouse/issues/18190). [#18980](https://github.com/ClickHouse/ClickHouse/pull/18980) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fixed very rare deadlock at shutdown. [#18977](https://github.com/ClickHouse/ClickHouse/pull/18977) ([tavplubix](https://github.com/tavplubix)). +* Fixed rare crashes when server run out of memory. [#18976](https://github.com/ClickHouse/ClickHouse/pull/18976) ([tavplubix](https://github.com/tavplubix)). +* Fix incorrect behavior when `ALTER TABLE ... DROP PART 'part_name'` query removes all deduplication blocks for the whole partition. Fixes [#18874](https://github.com/ClickHouse/ClickHouse/issues/18874). [#18969](https://github.com/ClickHouse/ClickHouse/pull/18969) ([alesapin](https://github.com/alesapin)). +* Fixed issue [#18894](https://github.com/ClickHouse/ClickHouse/issues/18894) Add a check to avoid exception when long column alias('table.column' style, usually auto-generated by BI tools like Looker) equals to long table name. [#18968](https://github.com/ClickHouse/ClickHouse/pull/18968) ([Daniel Qin](https://github.com/mathfool)). +* Fix error `Task was not found in task queue` (possible only for remote queries, with `async_socket_for_remote = 1`). [#18964](https://github.com/ClickHouse/ClickHouse/pull/18964) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix bug when mutation with some escaped text (like `ALTER ... UPDATE e = CAST('foo', 'Enum8(\'foo\' = 1')` serialized incorrectly. Fixes [#18878](https://github.com/ClickHouse/ClickHouse/issues/18878). [#18944](https://github.com/ClickHouse/ClickHouse/pull/18944) ([alesapin](https://github.com/alesapin)). +* ATTACH PARTITION will reset mutations. [#18804](https://github.com/ClickHouse/ClickHouse/issues/18804). [#18935](https://github.com/ClickHouse/ClickHouse/pull/18935) ([fastio](https://github.com/fastio)). +* Fix issue with `bitmapOrCardinality` that may lead to nullptr dereference. This closes [#18911](https://github.com/ClickHouse/ClickHouse/issues/18911). [#18912](https://github.com/ClickHouse/ClickHouse/pull/18912) ([sundyli](https://github.com/sundy-li)). +* Fixed `Attempt to read after eof` error when trying to `CAST` `NULL` from `Nullable(String)` to `Nullable(Decimal(P, S))`. Now function `CAST` returns `NULL` when it cannot parse decimal from nullable string. Fixes [#7690](https://github.com/ClickHouse/ClickHouse/issues/7690). [#18718](https://github.com/ClickHouse/ClickHouse/pull/18718) ([Winter Zhang](https://github.com/zhang2014)). +* Fix data type convert issue for MySQL engine. [#18124](https://github.com/ClickHouse/ClickHouse/pull/18124) ([bo zeng](https://github.com/mis98zb)). +* Fix clickhouse-client abort exception while executing only `select`. [#19790](https://github.com/ClickHouse/ClickHouse/pull/19790) ([taiyang-li](https://github.com/taiyang-li)). 
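A minimal illustration of the `CAST` fix described a few entries above (a sketch of the expected 21.2 behaviour; output formatting may differ):

``` sql
-- After the fix, casting NULL or an unparsable Nullable(String) value to
-- Nullable(Decimal) is expected to return NULL instead of throwing an exception.
SELECT CAST(NULL AS Nullable(Decimal(10, 2)));
SELECT CAST(toNullable('not a number') AS Nullable(Decimal(10, 2)));
```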
+ + +#### Build/Testing/Packaging Improvement + +* Run [SQLancer](https://twitter.com/RiggerManuel/status/1352345625480884228) (logical SQL fuzzer) in CI. [#19006](https://github.com/ClickHouse/ClickHouse/pull/19006) ([Ilya Yatsishin](https://github.com/qoega)). +* Query Fuzzer will fuzz newly added tests more extensively. This closes [#18916](https://github.com/ClickHouse/ClickHouse/issues/18916). [#19185](https://github.com/ClickHouse/ClickHouse/pull/19185) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Integrate with [Big List of Naughty Strings](https://github.com/minimaxir/big-list-of-naughty-strings/) for better fuzzing. [#19480](https://github.com/ClickHouse/ClickHouse/pull/19480) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Add integration tests run with MSan. [#18974](https://github.com/ClickHouse/ClickHouse/pull/18974) ([alesapin](https://github.com/alesapin)). +* Fixed MemorySanitizer errors in cyrus-sasl and musl. [#19821](https://github.com/ClickHouse/ClickHouse/pull/19821) ([Ilya Yatsishin](https://github.com/qoega)). +* An insufficient arguments check in the `positionCaseInsensitiveUTF8` function triggered the address sanitizer. [#19720](https://github.com/ClickHouse/ClickHouse/pull/19720) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Remove --project-directory for docker-compose in integration tests. Fix logs formatting from docker container. [#19706](https://github.com/ClickHouse/ClickHouse/pull/19706) ([Ilya Yatsishin](https://github.com/qoega)). +* Made generation of macros.xml easier for integration tests. No more excessive logging from dicttoxml. The dicttoxml project has not been active for 5+ years. [#19697](https://github.com/ClickHouse/ClickHouse/pull/19697) ([Ilya Yatsishin](https://github.com/qoega)). +* Allow explicitly enabling or disabling the watchdog via the environment variable `CLICKHOUSE_WATCHDOG_ENABLE`. By default it is enabled if the server is not attached to a terminal. [#19522](https://github.com/ClickHouse/ClickHouse/pull/19522) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Allow building ClickHouse with Kafka support on arm64. [#19369](https://github.com/ClickHouse/ClickHouse/pull/19369) ([filimonov](https://github.com/filimonov)). +* Allow building librdkafka without SSL. [#19337](https://github.com/ClickHouse/ClickHouse/pull/19337) ([filimonov](https://github.com/filimonov)). +* Restore Kafka input in FreeBSD builds. [#18924](https://github.com/ClickHouse/ClickHouse/pull/18924) ([Alexandre Snarskii](https://github.com/snar)). +* Fix potential nullptr dereference in table function `VALUES`. [#19357](https://github.com/ClickHouse/ClickHouse/pull/19357) ([alexey-milovidov](https://github.com/alexey-milovidov)). +* Avoid UBSan reports in `arrayElement` function, `substring` and `arraySum`. Fixes [#19305](https://github.com/ClickHouse/ClickHouse/issues/19305). Fixes [#19287](https://github.com/ClickHouse/ClickHouse/issues/19287). This closes [#19336](https://github.com/ClickHouse/ClickHouse/issues/19336). [#19347](https://github.com/ClickHouse/ClickHouse/pull/19347) ([alexey-milovidov](https://github.com/alexey-milovidov)). 
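A short, hypothetical sketch of a few of the functions added in this release (the `Map` examples may require `allow_experimental_map_type = 1` in 21.2; results shown in comments are illustrative):

``` sql
-- `sign`, the new Map helpers and the `%Q` modifier of formatDateTime, as described above.
SELECT sign(-5), sign(0), sign(3.3);                  -- -1, 0, 1
SELECT mapContains(map('a', 1, 'b', 2), 'a'),         -- 1
       mapKeys(map('a', 1, 'b', 2)),                  -- ['a','b']
       mapValues(map('a', 1, 'b', 2));                -- [1,2]
SELECT formatDateTime(toDate('2021-02-07'), '%Q');    -- '1'
```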
+ + ## ClickHouse release 21.1 ### ClickHouse release v21.1.3.32-stable, 2021-02-03 diff --git a/README.md b/README.md index 8e114d5abe9..53778c79bef 100644 --- a/README.md +++ b/README.md @@ -8,7 +8,7 @@ ClickHouse® is an open-source column-oriented database management system that a * [Tutorial](https://clickhouse.tech/docs/en/getting_started/tutorial/) shows how to set up and query small ClickHouse cluster. * [Documentation](https://clickhouse.tech/docs/en/) provides more in-depth information. * [YouTube channel](https://www.youtube.com/c/ClickHouseDB) has a lot of content about ClickHouse in video format. -* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-d2zxkf9e-XyxDa_ucfPxzuH4SJIm~Ng) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time. +* [Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-ly9m4w1x-6j7x5Ts_pQZqrctAbRZ3cg) and [Telegram](https://telegram.me/clickhouse_en) allow to chat with ClickHouse users in real-time. * [Blog](https://clickhouse.yandex/blog/en/) contains various ClickHouse-related articles, as well as announcements and reports about events. * [Code Browser](https://clickhouse.tech/codebrowser/html_report/ClickHouse/index.html) with syntax highlight and navigation. * [Yandex.Messenger channel](https://yandex.ru/chat/#/join/20e380d9-c7be-4123-ab06-e95fb946975e) shares announcements and useful links in Russian. diff --git a/contrib/base64-cmake/CMakeLists.txt b/contrib/base64-cmake/CMakeLists.txt index 63b4e324d29..a295ee45b84 100644 --- a/contrib/base64-cmake/CMakeLists.txt +++ b/contrib/base64-cmake/CMakeLists.txt @@ -11,7 +11,7 @@ endif () target_compile_options(base64_scalar PRIVATE -falign-loops) if (ARCH_AMD64) - target_compile_options(base64_ssse3 PRIVATE -mssse3 -falign-loops) + target_compile_options(base64_ssse3 PRIVATE -mno-avx -mno-avx2 -mssse3 -falign-loops) target_compile_options(base64_avx PRIVATE -falign-loops -mavx) target_compile_options(base64_avx2 PRIVATE -falign-loops -mavx2) else () diff --git a/contrib/hyperscan-cmake/CMakeLists.txt b/contrib/hyperscan-cmake/CMakeLists.txt index c44214cded8..75c45ff7bf5 100644 --- a/contrib/hyperscan-cmake/CMakeLists.txt +++ b/contrib/hyperscan-cmake/CMakeLists.txt @@ -252,6 +252,7 @@ if (NOT EXTERNAL_HYPERSCAN_LIBRARY_FOUND) target_compile_definitions (hyperscan PUBLIC USE_HYPERSCAN=1) target_compile_options (hyperscan PRIVATE -g0 # Library has too much debug information + -mno-avx -mno-avx2 # The library is using dynamic dispatch and is confused if AVX is enabled globally -march=corei7 -O2 -fno-strict-aliasing -fno-omit-frame-pointer -fvisibility=hidden # The options from original build system -fno-sanitize=undefined # Assume the library takes care of itself ) diff --git a/contrib/poco b/contrib/poco index e11f3c97157..fbaaba4a02e 160000 --- a/contrib/poco +++ b/contrib/poco @@ -1 +1 @@ -Subproject commit e11f3c971570cf6a31006cd21cadf41a259c360a +Subproject commit fbaaba4a02e29987b8c584747a496c79528f125f diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index 17cec7ae286..bb29959acd2 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -319,6 +319,7 @@ function run_tests # In fasttest, ENABLE_LIBRARIES=0, so rocksdb engine is not enabled by default 01504_rocksdb + 01686_rocksdb # Look at DistributedFilesToInsert, so cannot run in parallel. 
01460_DistributedFilesToInsert diff --git a/docker/test/integration/runner/Dockerfile b/docker/test/integration/runner/Dockerfile index f353931f0a0..502dc3736b2 100644 --- a/docker/test/integration/runner/Dockerfile +++ b/docker/test/integration/runner/Dockerfile @@ -61,7 +61,7 @@ RUN python3 -m pip install \ aerospike \ avro \ cassandra-driver \ - confluent-kafka \ + confluent-kafka==1.5.0 \ dict2xml \ dicttoxml \ docker \ diff --git a/docs/en/introduction/adopters.md b/docs/en/introduction/adopters.md index 707a05b63e5..c7230f2f080 100644 --- a/docs/en/introduction/adopters.md +++ b/docs/en/introduction/adopters.md @@ -46,7 +46,7 @@ toc_title: Adopters | Exness | Trading | Metrics, Logging | — | — | [Talk in Russian, May 2019](https://youtu.be/_rpU-TvSfZ8?t=3215) | | FastNetMon | DDoS Protection | Main Product | | — | [Official website](https://fastnetmon.com/docs-fnm-advanced/fastnetmon-advanced-traffic-persistency/) | | Flipkart | e-Commerce | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=239) | -| FunCorp | Games | | — | — | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) | +| FunCorp | Games | | — | 14 bn records/day as of Jan 2021 | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) | | Geniee | Ad network | Main product | — | — | [Blog post in Japanese, July 2017](https://tech.geniee.co.jp/entry/2017/07/20/160100) | | Genotek | Bioinformatics | Main product | — | — | [Video, August 2020](https://youtu.be/v3KyZbz9lEE) | | HUYA | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) | @@ -74,6 +74,7 @@ toc_title: Adopters | NOC Project | Network Monitoring | Analytics | Main Product | — | [Official Website](https://getnoc.com/features/big-data/) | | Nuna Inc. | Health Data Analytics | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=170) | | OneAPM | Monitorings and Data Analysis | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/8.%20clickhouse在OneAPM的应用%20杜龙.pdf) | +| Panelbear | Analytics | Monitoring and Analytics | — | — | [Tech Stack, November 2020](https://panelbear.com/blog/tech-stack/) | | Percent 百分点 | Analytics | Main Product | — | — | [Slides in Chinese, June 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/4.%20ClickHouse万亿数据双中心的设计与实践%20.pdf) | | Percona | Performance analysis | Percona Monitoring and Management | — | — | [Official website, Mar 2020](https://www.percona.com/blog/2020/03/30/advanced-query-analysis-in-percona-monitoring-and-management-with-direct-clickhouse-access/) | | Plausible | Analytics | Main Product | — | — | [Blog post, June 2020](https://twitter.com/PlausibleHQ/status/1273889629087969280) | diff --git a/docs/en/operations/quotas.md b/docs/en/operations/quotas.md index c637ef03f71..56c3eaf6455 100644 --- a/docs/en/operations/quotas.md +++ b/docs/en/operations/quotas.md @@ -29,6 +29,8 @@ Let’s look at the section of the ‘users.xml’ file that defines quotas. 
0 + 0 + 0 0 0 0 @@ -48,6 +50,8 @@ The resource consumption calculated for each interval is output to the server lo 3600 1000 + 100 + 100 100 1000000000 100000000000 @@ -58,6 +62,8 @@ The resource consumption calculated for each interval is output to the server lo 86400 10000 + 10000 + 10000 1000 5000000000 500000000000 @@ -74,6 +80,10 @@ Here are the amounts that can be restricted: `queries` – The total number of requests. +`query_selects` – The total number of select requests. + +`query_inserts` – The total number of insert requests. + `errors` – The number of queries that threw an exception. `result_rows` – The total number of rows given as a result. diff --git a/docs/en/operations/system-tables/part_log.md b/docs/en/operations/system-tables/part_log.md index 9aa95b1a493..08269a2dc48 100644 --- a/docs/en/operations/system-tables/part_log.md +++ b/docs/en/operations/system-tables/part_log.md @@ -6,29 +6,62 @@ This table contains information about events that occurred with [data parts](../ The `system.part_log` table contains the following columns: -- `event_type` (Enum) — Type of the event that occurred with the data part. Can have one of the following values: +- `query_id` ([String](../../sql-reference/data-types/string.md)) — Identifier of the `INSERT` query that created this data part. +- `event_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Type of the event that occurred with the data part. Can have one of the following values: - `NEW_PART` — Inserting of a new data part. - `MERGE_PARTS` — Merging of data parts. - `DOWNLOAD_PART` — Downloading a data part. - `REMOVE_PART` — Removing or detaching a data part using [DETACH PARTITION](../../sql-reference/statements/alter/partition.md#alter_detach-partition). - `MUTATE_PART` — Mutating of a data part. - `MOVE_PART` — Moving the data part from the one disk to another one. -- `event_date` (Date) — Event date. -- `event_time` (DateTime) — Event time. -- `duration_ms` (UInt64) — Duration. -- `database` (String) — Name of the database the data part is in. -- `table` (String) — Name of the table the data part is in. -- `part_name` (String) — Name of the data part. -- `partition_id` (String) — ID of the partition that the data part was inserted to. The column takes the ‘all’ value if the partitioning is by `tuple()`. -- `rows` (UInt64) — The number of rows in the data part. -- `size_in_bytes` (UInt64) — Size of the data part in bytes. -- `merged_from` (Array(String)) — An array of names of the parts which the current part was made up from (after the merge). -- `bytes_uncompressed` (UInt64) — Size of uncompressed bytes. -- `read_rows` (UInt64) — The number of rows was read during the merge. -- `read_bytes` (UInt64) — The number of bytes was read during the merge. -- `error` (UInt16) — The code number of the occurred error. -- `exception` (String) — Text message of the occurred error. +- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Event date. +- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Event time. +- `duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Duration. +- `database` ([String](../../sql-reference/data-types/string.md)) — Name of the database the data part is in. +- `table` ([String](../../sql-reference/data-types/string.md)) — Name of the table the data part is in. +- `part_name` ([String](../../sql-reference/data-types/string.md)) — Name of the data part. 
+- `partition_id` ([String](../../sql-reference/data-types/string.md)) — ID of the partition that the data part was inserted to. The column takes the `all` value if the partitioning is by `tuple()`. +- `path_on_disk` ([String](../../sql-reference/data-types/string.md)) — Absolute path to the folder with data part files. +- `rows` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of rows in the data part. +- `size_in_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Size of the data part in bytes. +- `merged_from` ([Array(String)](../../sql-reference/data-types/array.md)) — An array of names of the parts which the current part was made up from (after the merge). +- `bytes_uncompressed` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Size of uncompressed bytes. +- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of rows was read during the merge. +- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The number of bytes was read during the merge. +- `peak_memory_usage` ([Int64](../../sql-reference/data-types/int-uint.md)) — The maximum difference between the amount of allocated and freed memory in context of this thread. +- `error` ([UInt16](../../sql-reference/data-types/int-uint.md)) — The code number of the occurred error. +- `exception` ([String](../../sql-reference/data-types/string.md)) — Text message of the occurred error. The `system.part_log` table is created after the first inserting data to the `MergeTree` table. +**Example** + +``` sql +SELECT * FROM system.part_log LIMIT 1 FORMAT Vertical; +``` + +``` text +Row 1: +────── +query_id: 983ad9c7-28d5-4ae1-844e-603116b7de31 +event_type: NewPart +event_date: 2021-02-02 +event_time: 2021-02-02 11:14:28 +duration_ms: 35 +database: default +table: log_mt_2 +part_name: all_1_1_0 +partition_id: all +path_on_disk: db/data/default/log_mt_2/all_1_1_0/ +rows: 115418 +size_in_bytes: 1074311 +merged_from: [] +bytes_uncompressed: 0 +read_rows: 0 +read_bytes: 0 +peak_memory_usage: 0 +error: 0 +exception: +``` + [Original article](https://clickhouse.tech/docs/en/operations/system_tables/part_log) diff --git a/docs/en/operations/system-tables/quota_limits.md b/docs/en/operations/system-tables/quota_limits.md index 065296f5df3..c2dcb4db34d 100644 --- a/docs/en/operations/system-tables/quota_limits.md +++ b/docs/en/operations/system-tables/quota_limits.md @@ -9,6 +9,8 @@ Columns: - `0` — Interval is not randomized. - `1` — Interval is randomized. - `max_queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of queries. +- `max_query_selects` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of select queries. +- `max_query_inserts` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of insert queries. - `max_errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of errors. - `max_result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of result rows. - `max_result_bytes` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of RAM volume in bytes used to store a queries result. 
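For instance, the new per-statement limits can be inspected together with the existing ones (a sketch; quota names and values depend on your configuration):

``` sql
-- List configured quota intervals with the new select/insert limits.
SELECT quota_name, duration, max_queries, max_query_selects, max_query_inserts, max_errors
FROM system.quota_limits;
```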
diff --git a/docs/en/operations/system-tables/quota_usage.md b/docs/en/operations/system-tables/quota_usage.md index 0eb59fd6453..17af9ad9a30 100644 --- a/docs/en/operations/system-tables/quota_usage.md +++ b/docs/en/operations/system-tables/quota_usage.md @@ -9,6 +9,8 @@ Columns: - `end_time`([Nullable](../../sql-reference/data-types/nullable.md)([DateTime](../../sql-reference/data-types/datetime.md))) — End time for calculating resource consumption. - `duration` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Length of the time interval for calculating resource consumption, in seconds. - `queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The total number of requests on this interval. +- `query_selects` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The total number of select requests on this interval. +- `query_inserts` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The total number of insert requests on this interval. - `max_queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of requests. - `errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The number of queries that threw an exception. - `max_errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of errors. diff --git a/docs/en/operations/system-tables/quotas_usage.md b/docs/en/operations/system-tables/quotas_usage.md index ed6be820b26..31aafd3e697 100644 --- a/docs/en/operations/system-tables/quotas_usage.md +++ b/docs/en/operations/system-tables/quotas_usage.md @@ -11,6 +11,10 @@ Columns: - `duration` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt32](../../sql-reference/data-types/int-uint.md))) — Length of the time interval for calculating resource consumption, in seconds. - `queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The total number of requests in this interval. - `max_queries` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of requests. +- `query_selects` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The total number of select requests in this interval. +- `max_query_selects` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of select requests. +- `query_inserts` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The total number of insert requests in this interval. +- `max_query_inserts` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of insert requests. - `errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The number of queries that threw an exception. - `max_errors` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — Maximum number of errors. 
- `result_rows` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt64](../../sql-reference/data-types/int-uint.md))) — The total number of rows given as a result. diff --git a/docs/en/operations/system-tables/zookeeper.md b/docs/en/operations/system-tables/zookeeper.md index ddb4d305964..82ace5e81dc 100644 --- a/docs/en/operations/system-tables/zookeeper.md +++ b/docs/en/operations/system-tables/zookeeper.md @@ -1,12 +1,16 @@ # system.zookeeper {#system-zookeeper} The table does not exist if ZooKeeper is not configured. Allows reading data from the ZooKeeper cluster defined in the config. -The query must have a ‘path’ equality condition in the WHERE clause. This is the path in ZooKeeper for the children that you want to get data for. +The query must either have a ‘path =’ condition or a `path IN` condition set with the `WHERE` clause as shown below. This corresponds to the path of the children in ZooKeeper that you want to get data for. The query `SELECT * FROM system.zookeeper WHERE path = '/clickhouse'` outputs data for all children on the `/clickhouse` node. To output data for all root nodes, write path = ‘/’. If the path specified in ‘path’ doesn’t exist, an exception will be thrown. +The query `SELECT * FROM system.zookeeper WHERE path IN ('/', '/clickhouse')` outputs data for all children on the `/` and `/clickhouse` nodes. +If a path specified in the ‘path’ collection does not exist, an exception will be thrown. +This can be used to run a batch of ZooKeeper path queries. + Columns: - `name` (String) — The name of the node. diff --git a/docs/en/sql-reference/aggregate-functions/reference/argmax.md b/docs/en/sql-reference/aggregate-functions/reference/argmax.md index 35e87d49e60..9899c731ce9 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/argmax.md +++ b/docs/en/sql-reference/aggregate-functions/reference/argmax.md @@ -4,13 +4,42 @@ toc_priority: 106 # argMax {#agg-function-argmax} -Syntax: `argMax(arg, val)` or `argMax(tuple(arg, val))` +Calculates the `arg` value for a maximum `val` value. If there are several different values of `arg` for maximum values of `val`, returns the first of these values encountered. -Calculates the `arg` value for a maximum `val` value. If there are several different values of `arg` for maximum values of `val`, the first of these values encountered is output. +Tuple version of this function will return the tuple with the maximum `val` value. It is convenient for use with [SimpleAggregateFunction](../../../sql-reference/data-types/simpleaggregatefunction.md). -Tuple version of this function will return the tuple with the maximum `val` value. It is convinient for use with `SimpleAggregateFunction`. +**Syntax** -**Example:** +``` sql +argMax(arg, val) +``` + +or + +``` sql +argMax(tuple(arg, val)) +``` + +**Parameters** + +- `arg` — Argument. +- `val` — Value. + +**Returned value** + +- `arg` value that corresponds to the maximum `val` value. + +Type: matches `arg` type. + +For a tuple in the input: + +- Tuple `(arg, val)`, where `val` is the maximum value and `arg` is a corresponding value. + +Type: [Tuple](../../../sql-reference/data-types/tuple.md). 
+ +**Example** + +Input table: ``` text ┌─user─────┬─salary─┐ @@ -20,12 +49,18 @@ Tuple version of this function will return the tuple with the maximum `val` valu └──────────┴────────┘ ``` +Query: + ``` sql -SELECT argMax(user, salary), argMax(tuple(user, salary)) FROM salary +SELECT argMax(user, salary), argMax(tuple(user, salary)) FROM salary; ``` +Result: + ``` text ┌─argMax(user, salary)─┬─argMax(tuple(user, salary))─┐ │ director │ ('director',5000) │ └──────────────────────┴─────────────────────────────┘ ``` + +[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmax/) diff --git a/docs/en/sql-reference/aggregate-functions/reference/argmin.md b/docs/en/sql-reference/aggregate-functions/reference/argmin.md index 72c9bce6817..2fe9a313260 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/argmin.md +++ b/docs/en/sql-reference/aggregate-functions/reference/argmin.md @@ -4,13 +4,42 @@ toc_priority: 105 # argMin {#agg-function-argmin} -Syntax: `argMin(arg, val)` or `argMin(tuple(arg, val))` +Calculates the `arg` value for a minimum `val` value. If there are several different values of `arg` for minimum values of `val`, returns the first of these values encountered. -Calculates the `arg` value for a minimal `val` value. If there are several different values of `arg` for minimal values of `val`, the first of these values encountered is output. +Tuple version of this function will return the tuple with the minimum `val` value. It is convenient for use with [SimpleAggregateFunction](../../../sql-reference/data-types/simpleaggregatefunction.md). -Tuple version of this function will return the tuple with the minimal `val` value. It is convinient for use with `SimpleAggregateFunction`. +**Syntax** -**Example:** +``` sql +argMin(arg, val) +``` + +or + +``` sql +argMin(tuple(arg, val)) +``` + +**Parameters** + +- `arg` — Argument. +- `val` — Value. + +**Returned value** + +- `arg` value that corresponds to minimum `val` value. + +Type: matches `arg` type. + +For tuple in the input: + +- Tuple `(arg, val)`, where `val` is the minimum value and `arg` is a corresponding value. + +Type: [Tuple](../../../sql-reference/data-types/tuple.md). + +**Example** + +Input table: ``` text ┌─user─────┬─salary─┐ @@ -20,12 +49,18 @@ Tuple version of this function will return the tuple with the minimal `val` valu └──────────┴────────┘ ``` +Query: + ``` sql -SELECT argMin(user, salary), argMin(tuple(user, salary)) FROM salary +SELECT argMin(user, salary), argMin(tuple(user, salary)) FROM salary; ``` +Result: + ``` text ┌─argMin(user, salary)─┬─argMin(tuple(user, salary))─┐ │ worker │ ('worker',1000) │ └──────────────────────┴─────────────────────────────┘ ``` + +[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmin/) diff --git a/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest.md b/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest.md new file mode 100644 index 00000000000..012df7052aa --- /dev/null +++ b/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest.md @@ -0,0 +1,71 @@ +--- +toc_priority: 310 +toc_title: mannWhitneyUTest +--- + +# mannWhitneyUTest {#mannwhitneyutest} + +Applies the Mann-Whitney rank test to samples from two populations. + +**Syntax** + +``` sql +mannWhitneyUTest[(alternative[, continuity_correction])](sample_data, sample_index) +``` + +Values of both samples are in the `sample_data` column. 
If `sample_index` equals 0 then the value in that row belongs to the sample from the first population. Otherwise it belongs to the sample from the second population. +The null hypothesis is that the two populations are stochastically equal. One-sided hypotheses can also be tested. This test does not assume that the data have a normal distribution. + +**Parameters** + +- `alternative` — alternative hypothesis. (Optional, default: `'two-sided'`.) [String](../../../sql-reference/data-types/string.md). + - `'two-sided'`; + - `'greater'`; + - `'less'`. +- `continuity_correction` - if not 0 then continuity correction in the normal approximation for the p-value is applied. (Optional, default: 1.) [UInt64](../../../sql-reference/data-types/int-uint.md). +- `sample_data` — sample data. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). +- `sample_index` — sample index. [Integer](../../../sql-reference/data-types/int-uint.md). + + +**Returned values** + +[Tuple](../../../sql-reference/data-types/tuple.md) with two elements: +- calculated U-statistic. [Float64](../../../sql-reference/data-types/float.md). +- calculated p-value. [Float64](../../../sql-reference/data-types/float.md). + + +**Example** + +Input table: + +``` text +┌─sample_data─┬─sample_index─┐ +│          10 │            0 │ +│          11 │            0 │ +│          12 │            0 │ +│           1 │            1 │ +│           2 │            1 │ +│           3 │            1 │ +└─────────────┴──────────────┘ +``` + +Query: + +``` sql +SELECT mannWhitneyUTest('greater')(sample_data, sample_index) FROM mww_ttest; +``` + +Result: + +``` text +┌─mannWhitneyUTest('greater')(sample_data, sample_index)─┐ +│ (9,0.04042779918503192)                                 │ +└─────────────────────────────────────────────────────────┘ +``` + +**See Also** + +- [Mann–Whitney U test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test) +- [Stochastic ordering](https://en.wikipedia.org/wiki/Stochastic_ordering) + +[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/mannwhitneyutest/) diff --git a/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted.md b/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted.md index 0f8606986c8..817cd831d85 100644 --- a/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted.md +++ b/docs/en/sql-reference/aggregate-functions/reference/quantiletimingweighted.md @@ -79,6 +79,40 @@ Result: └───────────────────────────────────────────────┘ ``` +# quantilesTimingWeighted {#quantilestimingweighted} + +Same as `quantileTimingWeighted`, but accepts multiple parameters with quantile levels and returns an Array filled with the values of those quantiles. 
+ + +**Example** + +Input table: + +``` text +┌─response_time─┬─weight─┐ +│ 68 │ 1 │ +│ 104 │ 2 │ +│ 112 │ 3 │ +│ 126 │ 2 │ +│ 138 │ 1 │ +│ 162 │ 1 │ +└───────────────┴────────┘ +``` + +Query: + +``` sql +SELECT quantilesTimingWeighted(0,5, 0.99)(response_time, weight) FROM t +``` + +Result: + +``` text +┌─quantilesTimingWeighted(0.5, 0.99)(response_time, weight)─┐ +│ [112,162] │ +└───────────────────────────────────────────────────────────┘ +``` + **See Also** - [median](../../../sql-reference/aggregate-functions/reference/median.md#median) diff --git a/docs/en/sql-reference/aggregate-functions/reference/studentttest.md b/docs/en/sql-reference/aggregate-functions/reference/studentttest.md new file mode 100644 index 00000000000..f868e976039 --- /dev/null +++ b/docs/en/sql-reference/aggregate-functions/reference/studentttest.md @@ -0,0 +1,65 @@ +--- +toc_priority: 300 +toc_title: studentTTest +--- + +# studentTTest {#studentttest} + +Applies Student's t-test to samples from two populations. + +**Syntax** + +``` sql +studentTTest(sample_data, sample_index) +``` + +Values of both samples are in the `sample_data` column. If `sample_index` equals to 0 then the value in that row belongs to the sample from the first population. Otherwise it belongs to the sample from the second population. +The null hypothesis is that means of populations are equal. Normal distribution with equal variances is assumed. + +**Parameters** + +- `sample_data` — sample data. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). +- `sample_index` — sample index. [Integer](../../../sql-reference/data-types/int-uint.md). + +**Returned values** + +[Tuple](../../../sql-reference/data-types/tuple.md) with two elements: +- calculated t-statistic. [Float64](../../../sql-reference/data-types/float.md). +- calculated p-value. [Float64](../../../sql-reference/data-types/float.md). + + +**Example** + +Input table: + +``` text +┌─sample_data─┬─sample_index─┐ +│ 20.3 │ 0 │ +│ 21.1 │ 0 │ +│ 21.9 │ 1 │ +│ 21.7 │ 0 │ +│ 19.9 │ 1 │ +│ 21.8 │ 1 │ +└─────────────┴──────────────┘ +``` + +Query: + +``` sql +SELECT studentTTest(sample_data, sample_index) FROM student_ttest; +``` + +Result: + +``` text +┌─studentTTest(sample_data, sample_index)───┐ +│ (-0.21739130434783777,0.8385421208415731) │ +└───────────────────────────────────────────┘ +``` + +**See Also** + +- [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test) +- [welchTTest function](welchttest.md#welchttest) + +[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/studentttest/) diff --git a/docs/en/sql-reference/aggregate-functions/reference/welchttest.md b/docs/en/sql-reference/aggregate-functions/reference/welchttest.md new file mode 100644 index 00000000000..3fe1c9d58b9 --- /dev/null +++ b/docs/en/sql-reference/aggregate-functions/reference/welchttest.md @@ -0,0 +1,65 @@ +--- +toc_priority: 301 +toc_title: welchTTest +--- + +# welchTTest {#welchttest} + +Applies Welch's t-test to samples from two populations. + +**Syntax** + +``` sql +welchTTest(sample_data, sample_index) +``` + +Values of both samples are in the `sample_data` column. If `sample_index` equals to 0 then the value in that row belongs to the sample from the first population. Otherwise it belongs to the sample from the second population. +The null hypothesis is that means of populations are equal. Normal distribution is assumed. 
Populations may have unequal variance.
+
+**Parameters**
+
+- `sample_data` — sample data. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md).
+- `sample_index` — sample index. [Integer](../../../sql-reference/data-types/int-uint.md).
+
+**Returned values**
+
+[Tuple](../../../sql-reference/data-types/tuple.md) with two elements:
+- calculated t-statistic. [Float64](../../../sql-reference/data-types/float.md).
+- calculated p-value. [Float64](../../../sql-reference/data-types/float.md).
+
+
+**Example**
+
+Input table:
+
+``` text
+┌─sample_data─┬─sample_index─┐
+│        20.3 │            0 │
+│        22.1 │            0 │
+│        21.9 │            0 │
+│        18.9 │            1 │
+│        20.3 │            1 │
+│          19 │            1 │
+└─────────────┴──────────────┘
+```
+
+Query:
+
+``` sql
+SELECT welchTTest(sample_data, sample_index) FROM welch_ttest;
+```
+
+Result:
+
+``` text
+┌─welchTTest(sample_data, sample_index)─────┐
+│ (2.7988719532211235,0.051807360348581945) │
+└───────────────────────────────────────────┘
+```
+
+**See Also**
+
+- [Welch's t-test](https://en.wikipedia.org/wiki/Welch%27s_t-test)
+- [studentTTest function](studentttest.md#studentttest)
+
+[Original article](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/welchTTest/)
diff --git a/docs/en/sql-reference/data-types/array.md b/docs/en/sql-reference/data-types/array.md
index 48957498d63..41e35aaa96f 100644
--- a/docs/en/sql-reference/data-types/array.md
+++ b/docs/en/sql-reference/data-types/array.md
@@ -45,6 +45,8 @@ SELECT [1, 2] AS x, toTypeName(x)
 
 ## Working with Data Types {#working-with-data-types}
 
+The maximum size of an array is limited to one million elements.
+
 When creating an array on the fly, ClickHouse automatically defines the argument type as the narrowest data type that can store all the listed arguments. If there are any [Nullable](../../sql-reference/data-types/nullable.md#data_type-nullable) or literal [NULL](../../sql-reference/syntax.md#null-literal) values, the type of an array element also becomes [Nullable](../../sql-reference/data-types/nullable.md).
 
 If ClickHouse couldn’t determine the data type, it generates an exception. For instance, this happens when trying to create an array with strings and numbers simultaneously (`SELECT array(1, 'a')`).
diff --git a/docs/en/sql-reference/functions/array-functions.md b/docs/en/sql-reference/functions/array-functions.md
index dc7727bdfd8..d5b357795d7 100644
--- a/docs/en/sql-reference/functions/array-functions.md
+++ b/docs/en/sql-reference/functions/array-functions.md
@@ -1288,73 +1288,226 @@ Returns the index of the first element in the `arr1` array for which `func` retu
 
 Note that the `arrayFirstIndex` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
 
-## arrayMin(\[func,\] arr1, …) {#array-min}
+## arrayMin {#array-min}
 
-Returns the min of the `func` values. If the function is omitted, it just returns the min of the array elements.
+Returns the minimum of elements in the source array.
+
+If the `func` function is specified, returns the minimum of elements converted by this function.
 
 Note that the `arrayMin` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
-Examples: +**Syntax** + ```sql -SELECT arrayMin([1, 2, 4]) AS res +arrayMin([func,] arr) +``` + +**Parameters** + +- `func` — Function. [Expression](../../sql-reference/data-types/special-data-types/expression.md). +- `arr` — Array. [Array](../../sql-reference/data-types/array.md). + +**Returned value** + +- The minimum of function values (or the array minimum). + +Type: if `func` is specified, matches `func` return value type, else matches the array elements type. + +**Examples** + +Query: + +```sql +SELECT arrayMin([1, 2, 4]) AS res; +``` + +Result: + +```text ┌─res─┐ │ 1 │ └─────┘ +``` +Query: -SELECT arrayMin(x -> (-x), [1, 2, 4]) AS res +```sql +SELECT arrayMin(x -> (-x), [1, 2, 4]) AS res; +``` + +Result: + +```text ┌─res─┐ │ -4 │ └─────┘ ``` -## arrayMax(\[func,\] arr1, …) {#array-max} +## arrayMax {#array-max} -Returns the max of the `func` values. If the function is omitted, it just returns the max of the array elements. +Returns the maximum of elements in the source array. + +If the `func` function is specified, returns the maximum of elements converted by this function. Note that the `arrayMax` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. -Examples: +**Syntax** + ```sql -SELECT arrayMax([1, 2, 4]) AS res +arrayMax([func,] arr) +``` + +**Parameters** + +- `func` — Function. [Expression](../../sql-reference/data-types/special-data-types/expression.md). +- `arr` — Array. [Array](../../sql-reference/data-types/array.md). + +**Returned value** + +- The maximum of function values (or the array maximum). + +Type: if `func` is specified, matches `func` return value type, else matches the array elements type. + +**Examples** + +Query: + +```sql +SELECT arrayMax([1, 2, 4]) AS res; +``` + +Result: + +```text ┌─res─┐ │ 4 │ └─────┘ +``` +Query: -SELECT arrayMax(x -> (-x), [1, 2, 4]) AS res +```sql +SELECT arrayMax(x -> (-x), [1, 2, 4]) AS res; +``` + +Result: + +```text ┌─res─┐ │ -1 │ └─────┘ ``` -## arraySum(\[func,\] arr1, …) {#array-sum} +## arraySum {#array-sum} -Returns the sum of the `func` values. If the function is omitted, it just returns the sum of the array elements. +Returns the sum of elements in the source array. + +If the `func` function is specified, returns the sum of elements converted by this function. Note that the `arraySum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. -Examples: +**Syntax** + ```sql -SELECT arraySum([2,3]) AS res +arraySum([func,] arr) +``` + +**Parameters** + +- `func` — Function. [Expression](../../sql-reference/data-types/special-data-types/expression.md). +- `arr` — Array. [Array](../../sql-reference/data-types/array.md). + +**Returned value** + +- The sum of the function values (or the array sum). + +Type: for decimal numbers in source array (or for converted values, if `func` is specified) — [Decimal128](../../sql-reference/data-types/decimal.md), for floating point numbers — [Float64](../../sql-reference/data-types/float.md), for numeric unsigned — [UInt64](../../sql-reference/data-types/int-uint.md), and for numeric signed — [Int64](../../sql-reference/data-types/int-uint.md). 
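+
+For instance, the widening of the result type can be checked with `toTypeName` (a minimal sketch; it assumes that the literal `[1, 2, 3]` is inferred as `Array(UInt8)`):
+
+```sql
+-- the unsigned integer elements are summed into a UInt64 result
+SELECT arraySum([1, 2, 3]) AS res, toTypeName(res);
+```
+
+Under that assumption the query returns `6` with type `UInt64`.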
+ +**Examples** + +Query: + +```sql +SELECT arraySum([2, 3]) AS res; +``` + +Result: + +```text ┌─res─┐ │ 5 │ └─────┘ +``` +Query: -SELECT arraySum(x -> x*x, [2, 3]) AS res +```sql +SELECT arraySum(x -> x*x, [2, 3]) AS res; +``` + +Result: + +```text ┌─res─┐ │ 13 │ └─────┘ ``` +## arrayAvg {#array-avg} -## arrayAvg(\[func,\] arr1, …) {#array-avg} +Returns the average of elements in the source array. -Returns the average of the `func` values. If the function is omitted, it just returns the average of the array elements. +If the `func` function is specified, returns the average of elements converted by this function. Note that the `arrayAvg` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. +**Syntax** + +```sql +arrayAvg([func,] arr) +``` + +**Parameters** + +- `func` — Function. [Expression](../../sql-reference/data-types/special-data-types/expression.md). +- `arr` — Array. [Array](../../sql-reference/data-types/array.md). + +**Returned value** + +- The average of function values (or the array average). + +Type: [Float64](../../sql-reference/data-types/float.md). + +**Examples** + +Query: + +```sql +SELECT arrayAvg([1, 2, 4]) AS res; +``` + +Result: + +```text +┌────────────────res─┐ +│ 2.3333333333333335 │ +└────────────────────┘ +``` + +Query: + +```sql +SELECT arrayAvg(x -> (x * x), [2, 4]) AS res; +``` + +Result: + +```text +┌─res─┐ +│ 10 │ +└─────┘ +``` + ## arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1} Returns an array of partial sums of elements in the source array (a running sum). If the `func` function is specified, then the values of the array elements are converted by this function before summing. diff --git a/docs/en/sql-reference/functions/date-time-functions.md b/docs/en/sql-reference/functions/date-time-functions.md index 9de780fb596..4a73bdb2546 100644 --- a/docs/en/sql-reference/functions/date-time-functions.md +++ b/docs/en/sql-reference/functions/date-time-functions.md @@ -380,7 +380,7 @@ Alias: `dateTrunc`. **Parameters** -- `unit` — Part of date. [String](../syntax.md#syntax-string-literal). +- `unit` — The type of interval to truncate the result. [String Literal](../syntax.md#syntax-string-literal). Possible values: - `second` @@ -435,6 +435,201 @@ Result: - [toStartOfInterval](#tostartofintervaltime-or-data-interval-x-unit-time-zone) +## date\_add {#date_add} + +Adds specified date/time interval to the provided date. + +**Syntax** + +``` sql +date_add(unit, value, date) +``` + +Aliases: `dateAdd`, `DATE_ADD`. + +**Parameters** + +- `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md). + + Supported values: second, minute, hour, day, week, month, quarter, year. +- `value` - Value in specified unit - [Int](../../sql-reference/data-types/int-uint.md) +- `date` — [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). + + +**Returned value** + +Returns Date or DateTime with `value` expressed in `unit` added to `date`. + +**Example** + +```sql +select date_add(YEAR, 3, toDate('2018-01-01')); +``` + +```text +┌─plus(toDate('2018-01-01'), toIntervalYear(3))─┐ +│ 2021-01-01 │ +└───────────────────────────────────────────────┘ +``` + +## date\_diff {#date_diff} + +Returns the difference between two Date or DateTime values. + +**Syntax** + +``` sql +date_diff('unit', startdate, enddate, [timezone]) +``` + +Aliases: `dateDiff`, `DATE_DIFF`. 
+ +**Parameters** + +- `unit` — The type of interval for result [String](../../sql-reference/data-types/string.md). + + Supported values: second, minute, hour, day, week, month, quarter, year. + +- `startdate` — The first time value to subtract (the subtrahend). [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). + +- `enddate` — The second time value to subtract from (the minuend). [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). + +- `timezone` — Optional parameter. If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. + +**Returned value** + +Difference between `enddate` and `startdate` expressed in `unit`. + +Type: `int`. + +**Example** + +Query: + +``` sql +SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00')); +``` + +Result: + +``` text +┌─dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'))─┐ +│ 25 │ +└────────────────────────────────────────────────────────────────────────────────────────┘ +``` + +## date\_sub {#date_sub} + +Subtracts a time/date interval from the provided date. + +**Syntax** + +``` sql +date_sub(unit, value, date) +``` + +Aliases: `dateSub`, `DATE_SUB`. + +**Parameters** + +- `unit` — The type of interval to subtract. [String](../../sql-reference/data-types/string.md). + + Supported values: second, minute, hour, day, week, month, quarter, year. +- `value` - Value in specified unit - [Int](../../sql-reference/data-types/int-uint.md) +- `date` — [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md) to subtract value from. + +**Returned value** + +Returns Date or DateTime with `value` expressed in `unit` subtracted from `date`. + +**Example** + +Query: + +``` sql +SELECT date_sub(YEAR, 3, toDate('2018-01-01')); +``` + +Result: + +``` text +┌─minus(toDate('2018-01-01'), toIntervalYear(3))─┐ +│ 2015-01-01 │ +└────────────────────────────────────────────────┘ +``` + +## timestamp\_add {#timestamp_add} + +Adds the specified time value with the provided date or date time value. + +**Syntax** + +``` sql +timestamp_add(date, INTERVAL value unit) +``` + +Aliases: `timeStampAdd`, `TIMESTAMP_ADD`. + +**Parameters** + +- `date` — Date or Date with time - [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). +- `value` - Value in specified unit - [Int](../../sql-reference/data-types/int-uint.md) +- `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md). + + Supported values: second, minute, hour, day, week, month, quarter, year. + +**Returned value** + +Returns Date or DateTime with the specified `value` expressed in `unit` added to `date`. + +**Example** + +```sql +select timestamp_add(toDate('2018-01-01'), INTERVAL 3 MONTH); +``` + +```text +┌─plus(toDate('2018-01-01'), toIntervalMonth(3))─┐ +│ 2018-04-01 │ +└────────────────────────────────────────────────┘ +``` + +## timestamp\_sub {#timestamp_sub} + +Returns the difference between two dates in the specified unit. + +**Syntax** + +``` sql +timestamp_sub(unit, value, date) +``` + +Aliases: `timeStampSub`, `TIMESTAMP_SUB`. + +**Parameters** + +- `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md). 
+ + Supported values: second, minute, hour, day, week, month, quarter, year. +- `value` - Value in specified unit - [Int](../../sql-reference/data-types/int-uint.md). +- `date`- [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). + +**Returned value** + +Difference between `date` and the specified `value` expressed in `unit`. + +**Example** + +```sql +select timestamp_sub(MONTH, 5, toDateTime('2018-12-18 01:02:03')); +``` + +```text +┌─minus(toDateTime('2018-12-18 01:02:03'), toIntervalMonth(5))─┐ +│ 2018-07-18 01:02:03 │ +└──────────────────────────────────────────────────────────────┘ +``` + ## now {#now} Returns the current date and time. @@ -550,50 +745,6 @@ SELECT └──────────────────────────┴───────────────────────────────┘ ``` -## dateDiff {#datediff} - -Returns the difference between two Date or DateTime values. - -**Syntax** - -``` sql -dateDiff('unit', startdate, enddate, [timezone]) -``` - -**Parameters** - -- `unit` — Time unit, in which the returned value is expressed. [String](../../sql-reference/syntax.md#syntax-string-literal). - - Supported values: second, minute, hour, day, week, month, quarter, year. - -- `startdate` — The first time value to compare. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). - -- `enddate` — The second time value to compare. [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md). - -- `timezone` — Optional parameter. If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. - -**Returned value** - -Difference between `startdate` and `enddate` expressed in `unit`. - -Type: `int`. - -**Example** - -Query: - -``` sql -SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00')); -``` - -Result: - -``` text -┌─dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'))─┐ -│ 25 │ -└────────────────────────────────────────────────────────────────────────────────────────┘ -``` - ## timeSlots(StartTime, Duration,\[, Size\]) {#timeslotsstarttime-duration-size} For a time interval starting at ‘StartTime’ and continuing for ‘Duration’ seconds, it returns an array of moments in time, consisting of points from this interval rounded down to the ‘Size’ in seconds. ‘Size’ is an optional parameter: a constant UInt32, set to 1800 by default. diff --git a/docs/en/sql-reference/statements/alter/quota.md b/docs/en/sql-reference/statements/alter/quota.md index 905c57503fc..a43b5255598 100644 --- a/docs/en/sql-reference/statements/alter/quota.md +++ b/docs/en/sql-reference/statements/alter/quota.md @@ -5,7 +5,7 @@ toc_title: QUOTA # ALTER QUOTA {#alter-quota-statement} -Changes [quotas](../../../operations/access-rights.md#quotas-management). +Changes quotas. Syntax: @@ -14,13 +14,13 @@ ALTER QUOTA [IF EXISTS] name [ON CLUSTER cluster_name] [RENAME TO new_name] [KEYED BY {user_name | ip_address | client_key | client_key,user_name | client_key,ip_address} | NOT KEYED] [FOR [RANDOMIZED] INTERVAL number {second | minute | hour | day | week | month | quarter | year} - {MAX { {queries | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] | + {MAX { {queries | query_selects | query_inserts | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] 
|
         NO LIMITS | TRACKING ONLY} [,...]]
     [TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
 ```
 
 Keys `user_name`, `ip_address`, `client_key`, `client_key, user_name` and `client_key, ip_address` correspond to the fields in the [system.quotas](../../../operations/system-tables/quotas.md) table.
 
-Parameters `queries`, `errors`, `result_rows`, `result_bytes`, `read_rows`, `read_bytes`, `execution_time` correspond to the fields in the [system.quotas_usage](../../../operations/system-tables/quotas_usage.md) table.
+Parameters `queries`, `query_selects`, `query_inserts`, `errors`, `result_rows`, `result_bytes`, `read_rows`, `read_bytes`, `execution_time` correspond to the fields in the [system.quotas_usage](../../../operations/system-tables/quotas_usage.md) table.
 
 `ON CLUSTER` clause allows creating quotas on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
diff --git a/docs/en/sql-reference/statements/create/quota.md b/docs/en/sql-reference/statements/create/quota.md
index ec980af921f..71416abf588 100644
--- a/docs/en/sql-reference/statements/create/quota.md
+++ b/docs/en/sql-reference/statements/create/quota.md
@@ -13,14 +13,14 @@ Syntax:
 CREATE QUOTA [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name]
     [KEYED BY {user_name | ip_address | client_key | client_key,user_name | client_key,ip_address} | NOT KEYED]
     [FOR [RANDOMIZED] INTERVAL number {second | minute | hour | day | week | month | quarter | year}
-        {MAX { {queries | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] |
+        {MAX { {queries | query_selects | query_inserts | errors | result_rows | result_bytes | read_rows | read_bytes | execution_time} = number } [,...] |
         NO LIMITS | TRACKING ONLY} [,...]]
     [TO {role [,...] | ALL | ALL EXCEPT role [,...]}]
 ```
 
-Keys `user_name`, `ip_address`, `client_key`, `client_key, user_name` and `client_key, ip_address` correspond to the fields in the [system.quotas](../../../operations/system-tables/quotas.md) table. 
+Keys `user_name`, `ip_address`, `client_key`, `client_key, user_name` and `client_key, ip_address` correspond to the fields in the [system.quotas](../../../operations/system-tables/quotas.md) table.
 
-Parameters `queries`, `errors`, `result_rows`, `result_bytes`, `read_rows`, `read_bytes`, `execution_time` correspond to the fields in the [system.quotas_usage](../../../operations/system-tables/quotas_usage.md) table.
+Parameters `queries`, `query_selects`, `query_inserts`, `errors`, `result_rows`, `result_bytes`, `read_rows`, `read_bytes`, `execution_time` correspond to the fields in the [system.quotas_usage](../../../operations/system-tables/quotas_usage.md) table.
 
 `ON CLUSTER` clause allows creating quotas on a cluster, see [Distributed DDL](../../../sql-reference/distributed-ddl.md).
diff --git a/docs/en/sql-reference/window-functions/index.md b/docs/en/sql-reference/window-functions/index.md
index a79328ade32..5a6f13226a5 100644
--- a/docs/en/sql-reference/window-functions/index.md
+++ b/docs/en/sql-reference/window-functions/index.md
@@ -1,9 +1,14 @@
-# [development] Window Functions
+---
+toc_priority: 62
+toc_title: Window Functions
+---
+
+# [experimental] Window Functions
 
 !!! warning "Warning"
 This is an experimental feature that is currently in development and is not ready
 for general use. It will change in unpredictable backwards-incompatible ways in
-the future releases.
+the future releases. Set `allow_experimental_window_functions = 1` to enable it.
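+
+For example, a minimal sketch of an aggregate function computed over a window with an explicit `ROWS` frame; the `numbers` table function is used here only to have some input, and the exact output is illustrative:
+
+```sql
+-- enable the experimental feature for the current session
+SET allow_experimental_window_functions = 1;
+
+SELECT
+    number,
+    -- running sum of the rows up to and including the current one
+    sum(number) OVER (ORDER BY number ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_sum
+FROM numbers(5);
+```
+
+On this sketch the running sum over `numbers(5)` is `0, 1, 3, 6, 10`.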
ClickHouse currently supports calculation of aggregate functions over a window. Pure window functions such as `rank`, `lag`, `lead` and so on are not yet supported. @@ -11,9 +16,7 @@ Pure window functions such as `rank`, `lag`, `lead` and so on are not yet suppor The window can be specified either with an `OVER` clause or with a separate `WINDOW` clause. -Only two variants of frame are supported, `ROWS` and `RANGE`. The only supported -frame boundaries are `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`. - +Only two variants of frame are supported, `ROWS` and `RANGE`. Offsets for the `RANGE` frame are not yet supported. ## References @@ -28,6 +31,7 @@ https://github.com/ClickHouse/ClickHouse/blob/master/tests/performance/window_fu https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/01591_window_functions.sql ### Postgres Docs +https://www.postgresql.org/docs/current/sql-select.html#SQL-WINDOW https://www.postgresql.org/docs/devel/sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS https://www.postgresql.org/docs/devel/functions-window.html https://www.postgresql.org/docs/devel/tutorial-window.html diff --git a/docs/ru/development/style.md b/docs/ru/development/style.md index 4d71dca46a7..1b211259bbb 100644 --- a/docs/ru/development/style.md +++ b/docs/ru/development/style.md @@ -714,6 +714,7 @@ auto s = std::string{"Hello"}; ### Пользовательская ошибка {#error-messages-user-error} Такая ошибка вызвана действиями пользователя (неверный синтаксис запроса) или конфигурацией внешних систем (кончилось место на диске). Предполагается, что пользователь может устранить её самостоятельно. Для этого в сообщении об ошибке должна содержаться следующая информация: + * что произошло. Это должно объясняться в пользовательских терминах (`Function pow() is not supported for data type UInt128`), а не загадочными конструкциями из кода (`runtime overload resolution failed in DB::BinaryOperationBuilder::Impl, UInt128, Int8>::kaboongleFastPath()`). * почему/где/когда -- любой контекст, который помогает отладить проблему. Представьте, как бы её отлаживали вы (программировать и пользоваться отладчиком нельзя). * что можно предпринять для устранения ошибки. Здесь можно перечислить типичные причины проблемы, настройки, влияющие на это поведение, и так далее. diff --git a/docs/ru/operations/system-tables/part_log.md b/docs/ru/operations/system-tables/part_log.md index 255ece76ee2..bba4fda6135 100644 --- a/docs/ru/operations/system-tables/part_log.md +++ b/docs/ru/operations/system-tables/part_log.md @@ -6,29 +6,62 @@ Столбцы: -- `event_type` (Enum) — тип события. Столбец может содержать одно из следующих значений: +- `query_id` ([String](../../sql-reference/data-types/string.md)) — идентификатор запроса `INSERT`, создавшего этот кусок. +- `event_type` ([Enum8](../../sql-reference/data-types/enum.md)) — тип события. Столбец может содержать одно из следующих значений: - `NEW_PART` — вставка нового куска. - `MERGE_PARTS` — слияние кусков. - `DOWNLOAD_PART` — загрузка с реплики. - `REMOVE_PART` — удаление или отсоединение из таблицы с помощью [DETACH PARTITION](../../sql-reference/statements/alter/partition.md#alter_detach-partition). - `MUTATE_PART` — изменение куска. - `MOVE_PART` — перемещение куска между дисками. -- `event_date` (Date) — дата события. -- `event_time` (DateTime) — время события. -- `duration_ms` (UInt64) — длительность. -- `database` (String) — имя базы данных, в которой находится кусок. -- `table` (String) — имя таблицы, в которой находится кусок. 
-- `part_name` (String) — имя куска. -- `partition_id` (String) — идентификатор партиции, в которую был добавлен кусок. В столбце будет значение ‘all’, если таблица партициируется по выражению `tuple()`. -- `rows` (UInt64) — число строк в куске. -- `size_in_bytes` (UInt64) — размер куска данных в байтах. -- `merged_from` (Array(String)) — массив имён кусков, из которых образован текущий кусок в результате слияния (также столбец заполняется в случае скачивания уже смерженного куска). -- `bytes_uncompressed` (UInt64) — количество прочитанных разжатых байт. -- `read_rows` (UInt64) — сколько было прочитано строк при слиянии кусков. -- `read_bytes` (UInt64) — сколько было прочитано байт при слиянии кусков. -- `error` (UInt16) — код ошибки, возникшей при текущем событии. -- `exception` (String) — текст ошибки. +- `event_date` ([Date](../../sql-reference/data-types/date.md)) — дата события. +- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — время события. +- `duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md)) — длительность. +- `database` ([String](../../sql-reference/data-types/string.md)) — имя базы данных, в которой находится кусок. +- `table` ([String](../../sql-reference/data-types/string.md)) — имя таблицы, в которой находится кусок. +- `part_name` ([String](../../sql-reference/data-types/string.md)) — имя куска. +- `partition_id` ([String](../../sql-reference/data-types/string.md)) — идентификатор партиции, в которую был добавлен кусок. В столбце будет значение `all`, если таблица партициируется по выражению `tuple()`. +- `path_on_disk` ([String](../../sql-reference/data-types/string.md)) — абсолютный путь к папке с файлами кусков данных. +- `rows` ([UInt64](../../sql-reference/data-types/int-uint.md)) — число строк в куске. +- `size_in_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — размер куска данных в байтах. +- `merged_from` ([Array(String)](../../sql-reference/data-types/array.md)) — массив имён кусков, из которых образован текущий кусок в результате слияния (также столбец заполняется в случае скачивания уже смерженного куска). +- `bytes_uncompressed` ([UInt64](../../sql-reference/data-types/int-uint.md)) — количество прочитанных не сжатых байт. +- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md)) — сколько было прочитано строк при слиянии кусков. +- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — сколько было прочитано байт при слиянии кусков. +- `peak_memory_usage` ([Int64](../../sql-reference/data-types/int-uint.md)) — максимальная разница между выделенной и освобождённой памятью в контексте потока. +- `error` ([UInt16](../../sql-reference/data-types/int-uint.md)) — код ошибки, возникшей при текущем событии. +- `exception` ([String](../../sql-reference/data-types/string.md)) — текст ошибки. Системная таблица `system.part_log` будет создана после первой вставки данных в таблицу `MergeTree`. 
+**Пример** + +``` sql +SELECT * FROM system.part_log LIMIT 1 FORMAT Vertical; +``` + +``` text +Row 1: +────── +query_id: 983ad9c7-28d5-4ae1-844e-603116b7de31 +event_type: NewPart +event_date: 2021-02-02 +event_time: 2021-02-02 11:14:28 +duration_ms: 35 +database: default +table: log_mt_2 +part_name: all_1_1_0 +partition_id: all +path_on_disk: db/data/default/log_mt_2/all_1_1_0/ +rows: 115418 +size_in_bytes: 1074311 +merged_from: [] +bytes_uncompressed: 0 +read_rows: 0 +read_bytes: 0 +peak_memory_usage: 0 +error: 0 +exception: +``` + [Оригинальная статья](https://clickhouse.tech/docs/ru/operations/system_tables/part_log) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/argmax.md b/docs/ru/sql-reference/aggregate-functions/reference/argmax.md index 97edd5773c8..f44e65831a9 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/argmax.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/argmax.md @@ -4,8 +4,63 @@ toc_priority: 106 # argMax {#agg-function-argmax} -Синтаксис: `argMax(arg, val)` +Вычисляет значение `arg` при максимальном значении `val`. Если есть несколько разных значений `arg` для максимальных значений `val`, возвращает первое попавшееся из таких значений. -Вычисляет значение arg при максимальном значении val. Если есть несколько разных значений arg для максимальных значений val, то выдаётся первое попавшееся из таких значений. +Если функции передан кортеж, то будет выведен кортеж с максимальным значением `val`. Удобно использовать для работы с [SimpleAggregateFunction](../../../sql-reference/data-types/simpleaggregatefunction.md). -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmax/) +**Синтаксис** + +``` sql +argMax(arg, val) +``` + +или + +``` sql +argMax(tuple(arg, val)) +``` + +**Параметры** + +- `arg` — аргумент. +- `val` — значение. + +**Возвращаемое значение** + +- Значение `arg`, соответствующее максимальному значению `val`. + +Тип: соответствует типу `arg`. + +Если передан кортеж: + +- Кортеж `(arg, val)` c максимальным значением `val` и соответствующим ему `arg`. + +Тип: [Tuple](../../../sql-reference/data-types/tuple.md). + +**Пример** + +Исходная таблица: + +``` text +┌─user─────┬─salary─┐ +│ director │ 5000 │ +│ manager │ 3000 │ +│ worker │ 1000 │ +└──────────┴────────┘ +``` + +Запрос: + +``` sql +SELECT argMax(user, salary), argMax(tuple(user, salary)) FROM salary; +``` + +Результат: + +``` text +┌─argMax(user, salary)─┬─argMax(tuple(user, salary))─┐ +│ director │ ('director',5000) │ +└──────────────────────┴─────────────────────────────┘ +``` + +[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/argmax/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/argmin.md b/docs/ru/sql-reference/aggregate-functions/reference/argmin.md index 58161cd226a..8c25b79f92a 100644 --- a/docs/ru/sql-reference/aggregate-functions/reference/argmin.md +++ b/docs/ru/sql-reference/aggregate-functions/reference/argmin.md @@ -4,11 +4,42 @@ toc_priority: 105 # argMin {#agg-function-argmin} -Синтаксис: `argMin(arg, val)` +Вычисляет значение `arg` при минимальном значении `val`. Если есть несколько разных значений `arg` для минимальных значений `val`, возвращает первое попавшееся из таких значений. -Вычисляет значение arg при минимальном значении val. Если есть несколько разных значений arg для минимальных значений val, то выдаётся первое попавшееся из таких значений. 
+Если функции передан кортеж, то будет выведен кортеж с минимальным значением `val`. Удобно использовать для работы с [SimpleAggregateFunction](../../../sql-reference/data-types/simpleaggregatefunction.md). -**Пример:** +**Синтаксис** + +``` sql +argMin(arg, val) +``` + +или + +``` sql +argMin(tuple(arg, val)) +``` + +**Параметры** + +- `arg` — аргумент. +- `val` — значение. + +**Возвращаемое значение** + +- Значение `arg`, соответствующее минимальному значению `val`. + +Тип: соответствует типу `arg`. + +Если передан кортеж: + +- Кортеж `(arg, val)` c минимальным значением `val` и соответствующим ему `arg`. + +Тип: [Tuple](../../../sql-reference/data-types/tuple.md). + +**Пример** + +Исходная таблица: ``` text ┌─user─────┬─salary─┐ @@ -18,14 +49,18 @@ toc_priority: 105 └──────────┴────────┘ ``` +Запрос: + ``` sql -SELECT argMin(user, salary) FROM salary +SELECT argMin(user, salary), argMin(tuple(user, salary)) FROM salary; ``` +Результат: + ``` text -┌─argMin(user, salary)─┐ -│ worker │ -└──────────────────────┘ +┌─argMin(user, salary)─┬─argMin(tuple(user, salary))─┐ +│ worker │ ('worker',1000) │ +└──────────────────────┴─────────────────────────────┘ ``` -[Оригинальная статья](https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/argmin/) +[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/argmin/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/mannwhitneyutest.md b/docs/ru/sql-reference/aggregate-functions/reference/mannwhitneyutest.md new file mode 100644 index 00000000000..fb73fff5f00 --- /dev/null +++ b/docs/ru/sql-reference/aggregate-functions/reference/mannwhitneyutest.md @@ -0,0 +1,71 @@ +--- +toc_priority: 310 +toc_title: mannWhitneyUTest +--- + +# mannWhitneyUTest {#mannwhitneyutest} + +Вычисляет U-критерий Манна — Уитни для выборок из двух генеральных совокупностей. + +**Синтаксис** + +``` sql +mannWhitneyUTest[(alternative[, continuity_correction])](sample_data, sample_index) +``` + +Значения выборок берутся из столбца `sample_data`. Если `sample_index` равно 0, то значение из этой строки принадлежит первой выборке. Во всех остальных случаях значение принадлежит второй выборке. +Проверяется нулевая гипотеза, что генеральные совокупности стохастически равны. Наряду с двусторонней гипотезой могут быть проверены и односторонние. +Для применения U-критерия Манна — Уитни закон распределения генеральных совокупностей не обязан быть нормальным. + +**Параметры** + +- `alternative` — альтернативная гипотеза. (Необязательный параметр, по умолчанию: `'two-sided'`.) [String](../../../sql-reference/data-types/string.md). + - `'two-sided'`; + - `'greater'`; + - `'less'`. +- `continuity_correction` - если не 0, то при вычислении p-значения применяется коррекция непрерывности. (Необязательный параметр, по умолчанию: 1.) [UInt64](../../../sql-reference/data-types/int-uint.md). +- `sample_data` — данные выборок. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). +- `sample_index` — индексы выборок. [Integer](../../../sql-reference/data-types/int-uint.md). + + +**Возвращаемые значения** + +[Кортеж](../../../sql-reference/data-types/tuple.md) с двумя элементами: +- вычисленное значение критерия Манна — Уитни. [Float64](../../../sql-reference/data-types/float.md). +- вычисленное p-значение. [Float64](../../../sql-reference/data-types/float.md). 
+ + +**Пример** + +Таблица: + +``` text +┌─sample_data─┬─sample_index─┐ +│ 10 │ 0 │ +│ 11 │ 0 │ +│ 12 │ 0 │ +│ 1 │ 1 │ +│ 2 │ 1 │ +│ 3 │ 1 │ +└─────────────┴──────────────┘ +``` + +Запрос: + +``` sql +SELECT mannWhitneyUTest('greater')(sample_data, sample_index) FROM mww_ttest; +``` + +Результат: + +``` text +┌─mannWhitneyUTest('greater')(sample_data, sample_index)─┐ +│ (9,0.04042779918503192) │ +└────────────────────────────────────────────────────────┘ +``` + +**Смотрите также** + +- [U-критерий Манна — Уитни](https://ru.wikipedia.org/wiki/U-%D0%BA%D1%80%D0%B8%D1%82%D0%B5%D1%80%D0%B8%D0%B9_%D0%9C%D0%B0%D0%BD%D0%BD%D0%B0_%E2%80%94_%D0%A3%D0%B8%D1%82%D0%BD%D0%B8) + +[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/mannwhitneyutest/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/studentttest.md b/docs/ru/sql-reference/aggregate-functions/reference/studentttest.md new file mode 100644 index 00000000000..5361e06c5e2 --- /dev/null +++ b/docs/ru/sql-reference/aggregate-functions/reference/studentttest.md @@ -0,0 +1,65 @@ +--- +toc_priority: 300 +toc_title: studentTTest +--- + +# studentTTest {#studentttest} + +Вычисляет t-критерий Стьюдента для выборок из двух генеральных совокупностей. + +**Синтаксис** + +``` sql +studentTTest(sample_data, sample_index) +``` + +Значения выборок берутся из столбца `sample_data`. Если `sample_index` равно 0, то значение из этой строки принадлежит первой выборке. Во всех остальных случаях значение принадлежит второй выборке. +Проверяется нулевая гипотеза, что средние значения генеральных совокупностей совпадают. Для применения t-критерия Стьюдента распределение в генеральных совокупностях должно быть нормальным и дисперсии должны совпадать. + +**Параметры** + +- `sample_data` — данные выборок. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). +- `sample_index` — индексы выборок. [Integer](../../../sql-reference/data-types/int-uint.md). + +**Возвращаемые значения** + +[Кортеж](../../../sql-reference/data-types/tuple.md) с двумя элементами: +- вычисленное значение критерия Стьюдента. [Float64](../../../sql-reference/data-types/float.md). +- вычисленное p-значение. [Float64](../../../sql-reference/data-types/float.md). 
+ + +**Пример** + +Таблица: + +``` text +┌─sample_data─┬─sample_index─┐ +│ 20.3 │ 0 │ +│ 21.1 │ 0 │ +│ 21.9 │ 1 │ +│ 21.7 │ 0 │ +│ 19.9 │ 1 │ +│ 21.8 │ 1 │ +└─────────────┴──────────────┘ +``` + +Запрос: + +``` sql +SELECT studentTTest(sample_data, sample_index) FROM student_ttest; +``` + +Результат: + +``` text +┌─studentTTest(sample_data, sample_index)───┐ +│ (-0.21739130434783777,0.8385421208415731) │ +└───────────────────────────────────────────┘ +``` + +**Смотрите также** + +- [t-критерий Стьюдента](https://ru.wikipedia.org/wiki/T-%D0%BA%D1%80%D0%B8%D1%82%D0%B5%D1%80%D0%B8%D0%B9_%D0%A1%D1%82%D1%8C%D1%8E%D0%B4%D0%B5%D0%BD%D1%82%D0%B0) +- [welchTTest](welchttest.md#welchttest) + +[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/studentttest/) diff --git a/docs/ru/sql-reference/aggregate-functions/reference/welchttest.md b/docs/ru/sql-reference/aggregate-functions/reference/welchttest.md new file mode 100644 index 00000000000..1f36b2d04ee --- /dev/null +++ b/docs/ru/sql-reference/aggregate-functions/reference/welchttest.md @@ -0,0 +1,65 @@ +--- +toc_priority: 301 +toc_title: welchTTest +--- + +# welchTTest {#welchttest} + +Вычисляет t-критерий Уэлча для выборок из двух генеральных совокупностей. + +**Синтаксис** + +``` sql +welchTTest(sample_data, sample_index) +``` + +Значения выборок берутся из столбца `sample_data`. Если `sample_index` равно 0, то значение из этой строки принадлежит первой выборке. Во всех остальных случаях значение принадлежит второй выборке. +Проверяется нулевая гипотеза, что средние значения генеральных совокупностей совпадают. Для применения t-критерия Уэлча распределение в генеральных совокупностях должно быть нормальным. Дисперсии могут не совпадать. + +**Параметры** + +- `sample_data` — данные выборок. [Integer](../../../sql-reference/data-types/int-uint.md), [Float](../../../sql-reference/data-types/float.md) or [Decimal](../../../sql-reference/data-types/decimal.md). +- `sample_index` — индексы выборок. [Integer](../../../sql-reference/data-types/int-uint.md). + +**Возвращаемые значения** + +[Кортеж](../../../sql-reference/data-types/tuple.md) с двумя элементами: +- вычисленное значение критерия Уэлча. [Float64](../../../sql-reference/data-types/float.md). +- вычисленное p-значение. [Float64](../../../sql-reference/data-types/float.md). 
+ + +**Пример** + +Таблица: + +``` text +┌─sample_data─┬─sample_index─┐ +│ 20.3 │ 0 │ +│ 22.1 │ 0 │ +│ 21.9 │ 0 │ +│ 18.9 │ 1 │ +│ 20.3 │ 1 │ +│ 19 │ 1 │ +└─────────────┴──────────────┘ +``` + +Запрос: + +``` sql +SELECT welchTTest(sample_data, sample_index) FROM welch_ttest; +``` + +Результат: + +``` text +┌─welchTTest(sample_data, sample_index)─────┐ +│ (2.7988719532211235,0.051807360348581945) │ +└───────────────────────────────────────────┘ +``` + +**Смотрите также** + +- [t-критерий Уэлча](https://ru.wikipedia.org/wiki/T-%D0%BA%D1%80%D0%B8%D1%82%D0%B5%D1%80%D0%B8%D0%B9_%D0%A3%D1%8D%D0%BB%D1%87%D0%B0) +- [studentTTest](studentttest.md#studentttest) + +[Оригинальная статья](https://clickhouse.tech/docs/ru/sql-reference/aggregate-functions/reference/welchTTest/) diff --git a/docs/ru/sql-reference/data-types/array.md b/docs/ru/sql-reference/data-types/array.md index 906246b66ee..86a23ed041b 100644 --- a/docs/ru/sql-reference/data-types/array.md +++ b/docs/ru/sql-reference/data-types/array.md @@ -47,6 +47,8 @@ SELECT [1, 2] AS x, toTypeName(x) ## Особенности работы с типами данных {#osobennosti-raboty-s-tipami-dannykh} +Максимальный размер массива ограничен одним миллионом элементов. + При создании массива «на лету» ClickHouse автоматически определяет тип аргументов как наиболее узкий тип данных, в котором можно хранить все перечисленные аргументы. Если среди аргументов есть [NULL](../../sql-reference/data-types/array.md#null-literal) или аргумент типа [Nullable](nullable.md#data_type-nullable), то тип элементов массива — [Nullable](nullable.md). Если ClickHouse не смог подобрать тип данных, то он сгенерирует исключение. Это произойдёт, например, при попытке создать массив одновременно со строками и числами `SELECT array(1, 'a')`. diff --git a/docs/ru/sql-reference/functions/array-functions.md b/docs/ru/sql-reference/functions/array-functions.md index 015d14b9de5..80057e6f0e0 100644 --- a/docs/ru/sql-reference/functions/array-functions.md +++ b/docs/ru/sql-reference/functions/array-functions.md @@ -1135,11 +1135,225 @@ SELECT Функция `arrayFirstIndex` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей нужно передать лямбда-функцию, и этот аргумент не может быть опущен. -## arraySum(\[func,\] arr1, …) {#array-sum} +## arrayMin {#array-min} -Возвращает сумму значений функции `func`. Если функция не указана - просто возвращает сумму элементов массива. +Возвращает значение минимального элемента в исходном массиве. -Функция `arraySum` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) - в качестве первого аргумента ей можно передать лямбда-функцию. +Если передана функция `func`, возвращается минимум из элементов массива, преобразованных этой функцией. + +Функция `arrayMin` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей можно передать лямбда-функцию. + +**Синтаксис** + +```sql +arrayMin([func,] arr) +``` + +**Параметры** + +- `func` — функция. [Expression](../../sql-reference/data-types/special-data-types/expression.md). +- `arr` — массив. [Array](../../sql-reference/data-types/array.md). + +**Возвращаемое значение** + +- Минимальное значение функции (или минимальный элемент массива). + +Тип: если передана `func`, соответствует типу ее возвращаемого значения, иначе соответствует типу элементов массива. 
+ +**Примеры** + +Запрос: + +```sql +SELECT arrayMin([1, 2, 4]) AS res; +``` + +Результат: + +```text +┌─res─┐ +│ 1 │ +└─────┘ +``` + +Запрос: + +```sql +SELECT arrayMin(x -> (-x), [1, 2, 4]) AS res; +``` + +Результат: + +```text +┌─res─┐ +│ -4 │ +└─────┘ +``` + +## arrayMax {#array-max} + +Возвращает значение максимального элемента в исходном массиве. + +Если передана функция `func`, возвращается максимум из элементов массива, преобразованных этой функцией. + +Функция `arrayMax` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей можно передать лямбда-функцию. + +**Синтаксис** + +```sql +arrayMax([func,] arr) +``` + +**Параметры** + +- `func` — функция. [Expression](../../sql-reference/data-types/special-data-types/expression.md). +- `arr` — массив. [Array](../../sql-reference/data-types/array.md). + +**Возвращаемое значение** + +- Максимальное значение функции (или максимальный элемент массива). + +Тип: если передана `func`, соответствует типу ее возвращаемого значения, иначе соответствует типу элементов массива. + +**Примеры** + +Запрос: + +```sql +SELECT arrayMax([1, 2, 4]) AS res; +``` + +Результат: + +```text +┌─res─┐ +│ 4 │ +└─────┘ +``` + +Запрос: + +```sql +SELECT arrayMax(x -> (-x), [1, 2, 4]) AS res; +``` + +Результат: + +```text +┌─res─┐ +│ -1 │ +└─────┘ +``` + +## arraySum {#array-sum} + +Возвращает сумму элементов в исходном массиве. + +Если передана функция `func`, возвращается сумма элементов массива, преобразованных этой функцией. + +Функция `arraySum` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей можно передать лямбда-функцию. + +**Синтаксис** + +```sql +arraySum([func,] arr) +``` + +**Параметры** + +- `func` — функция. [Expression](../../sql-reference/data-types/special-data-types/expression.md). +- `arr` — массив. [Array](../../sql-reference/data-types/array.md). + +**Возвращаемое значение** + +- Сумма значений функции (или сумма элементов массива). + +Тип: для Decimal чисел в исходном массиве (если функция `func` была передана, то для чисел, преобразованных ею) — [Decimal128](../../sql-reference/data-types/decimal.md), для чисел с плавающей точкой — [Float64](../../sql-reference/data-types/float.md), для беззнаковых целых чисел — [UInt64](../../sql-reference/data-types/int-uint.md), для целых чисел со знаком — [Int64](../../sql-reference/data-types/int-uint.md). + +**Примеры** + +Запрос: + +```sql +SELECT arraySum([2, 3]) AS res; +``` + +Результат: + +```text +┌─res─┐ +│ 5 │ +└─────┘ +``` + +Запрос: + +```sql +SELECT arraySum(x -> x*x, [2, 3]) AS res; +``` + +Результат: + +```text +┌─res─┐ +│ 13 │ +└─────┘ +``` + +## arrayAvg {#array-avg} + +Возвращает среднее значение элементов в исходном массиве. + +Если передана функция `func`, возвращается среднее значение элементов массива, преобразованных этой функцией. + +Функция `arrayAvg` является [функцией высшего порядка](../../sql-reference/functions/index.md#higher-order-functions) — в качестве первого аргумента ей можно передать лямбда-функцию. + +**Синтаксис** + +```sql +arrayAvg([func,] arr) +``` + +**Параметры** + +- `func` — функция. [Expression](../../sql-reference/data-types/special-data-types/expression.md). +- `arr` — массив. [Array](../../sql-reference/data-types/array.md). + +**Возвращаемое значение** + +- Среднее значение функции (или среднее значение элементов массива). + +Тип: [Float64](../../sql-reference/data-types/float.md). 
+ +**Примеры** + +Запрос: + +```sql +SELECT arrayAvg([1, 2, 4]) AS res; +``` + +Результат: + +```text +┌────────────────res─┐ +│ 2.3333333333333335 │ +└────────────────────┘ +``` + +Запрос: + +```sql +SELECT arrayAvg(x -> (x * x), [2, 4]) AS res; +``` + +Результат: + +```text +┌─res─┐ +│ 10 │ +└─────┘ +``` ## arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1} diff --git a/docs/zh/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md b/docs/zh/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md index 7a0a42fa47c..3b89da9f595 100644 --- a/docs/zh/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md +++ b/docs/zh/engines/table-engines/mergetree-family/versionedcollapsingmergetree.md @@ -37,7 +37,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster] VersionedCollapsingMergeTree(sign, version) ``` -- `sign` — 指定行类型的列名: `1` 是一个 “state” 行, `-1` 是一个 “cancel” 划 +- `sign` — 指定行类型的列名: `1` 是一个 “state” 行, `-1` 是一个 “cancel” 行 列数据类型应为 `Int8`. diff --git a/docs/zh/operations/system-tables/zookeeper.md b/docs/zh/operations/system-tables/zookeeper.md index b66e5262df3..f7e816ccee6 100644 --- a/docs/zh/operations/system-tables/zookeeper.md +++ b/docs/zh/operations/system-tables/zookeeper.md @@ -6,12 +6,16 @@ machine_translated_rev: 5decc73b5dc60054f19087d3690c4eb99446a6c3 # 系统。动物园管理员 {#system-zookeeper} 如果未配置ZooKeeper,则表不存在。 允许从配置中定义的ZooKeeper集群读取数据。 -查询必须具有 ‘path’ WHERE子句中的平等条件。 这是ZooKeeper中您想要获取数据的孩子的路径。 +查询必须具有 ‘path’ WHERE子句中的相等条件或者在某个集合中的条件。 这是ZooKeeper中您想要获取数据的孩子的路径。 查询 `SELECT * FROM system.zookeeper WHERE path = '/clickhouse'` 输出对所有孩子的数据 `/clickhouse` 节点。 要输出所有根节点的数据,write path= ‘/’. 如果在指定的路径 ‘path’ 不存在,将引发异常。 +查询`SELECT * FROM system.zookeeper WHERE path IN ('/', '/clickhouse')` 输出`/` 和 `/clickhouse`节点上所有子节点的数据。 +如果在指定的 ‘path’ 集合中有不存在的路径,将引发异常。 +它可以用来做一批ZooKeeper路径查询。 + 列: - `name` (String) — The name of the node. diff --git a/programs/client/Client.cpp b/programs/client/Client.cpp index 9a8b580407a..e41f780e99a 100644 --- a/programs/client/Client.cpp +++ b/programs/client/Client.cpp @@ -1719,7 +1719,7 @@ private: } // Remember where the data ended. We use this info later to determine // where the next query begins. - parsed_insert_query->end = data_in.buffer().begin() + data_in.count(); + parsed_insert_query->end = parsed_insert_query->data + data_in.count(); } else if (!is_interactive) { @@ -1900,6 +1900,9 @@ private: switch (packet.type) { + case Protocol::Server::PartUUIDs: + return true; + case Protocol::Server::Data: if (!cancelled) onData(packet.block); diff --git a/programs/client/QueryFuzzer.cpp b/programs/client/QueryFuzzer.cpp index ae0de450a10..05c20434820 100644 --- a/programs/client/QueryFuzzer.cpp +++ b/programs/client/QueryFuzzer.cpp @@ -325,6 +325,51 @@ void QueryFuzzer::fuzzColumnLikeExpressionList(IAST * ast) // the generic recursion into IAST.children. } +void QueryFuzzer::fuzzWindowFrame(WindowFrame & frame) +{ + switch (fuzz_rand() % 40) + { + case 0: + { + const auto r = fuzz_rand() % 3; + frame.type = r == 0 ? WindowFrame::FrameType::Rows + : r == 1 ? WindowFrame::FrameType::Range + : WindowFrame::FrameType::Groups; + break; + } + case 1: + { + const auto r = fuzz_rand() % 3; + frame.begin_type = r == 0 ? WindowFrame::BoundaryType::Unbounded + : r == 1 ? WindowFrame::BoundaryType::Current + : WindowFrame::BoundaryType::Offset; + break; + } + case 2: + { + const auto r = fuzz_rand() % 3; + frame.end_type = r == 0 ? WindowFrame::BoundaryType::Unbounded + : r == 1 ? 
WindowFrame::BoundaryType::Current + : WindowFrame::BoundaryType::Offset; + break; + } + case 3: + { + frame.begin_offset = getRandomField(0).get(); + break; + } + case 4: + { + frame.end_offset = getRandomField(0).get(); + break; + } + default: + break; + } + + frame.is_default = (frame == WindowFrame{}); +} + void QueryFuzzer::fuzz(ASTs & asts) { for (auto & ast : asts) @@ -409,6 +454,7 @@ void QueryFuzzer::fuzz(ASTPtr & ast) auto & def = fn->window_definition->as(); fuzzColumnLikeExpressionList(def.partition_by.get()); fuzzOrderByList(def.order_by.get()); + fuzzWindowFrame(def.frame); } fuzz(fn->children); @@ -421,6 +467,23 @@ void QueryFuzzer::fuzz(ASTPtr & ast) fuzz(select->children); } + /* + * The time to fuzz the settings has not yet come. + * Apparently we don't have any infractructure to validate the values of + * the settings, and the first query with max_block_size = -1 breaks + * because of overflows here and there. + *//* + * else if (auto * set = typeid_cast(ast.get())) + * { + * for (auto & c : set->changes) + * { + * if (fuzz_rand() % 50 == 0) + * { + * c.value = fuzzField(c.value); + * } + * } + * } + */ else if (auto * literal = typeid_cast(ast.get())) { // There is a caveat with fuzzing the children: many ASTs also keep the diff --git a/programs/client/QueryFuzzer.h b/programs/client/QueryFuzzer.h index e9d3f150283..38714205967 100644 --- a/programs/client/QueryFuzzer.h +++ b/programs/client/QueryFuzzer.h @@ -14,6 +14,7 @@ namespace DB class ASTExpressionList; class ASTOrderByElement; +struct WindowFrame; /* * This is an AST-based query fuzzer that makes random modifications to query @@ -65,6 +66,7 @@ struct QueryFuzzer void fuzzOrderByElement(ASTOrderByElement * elem); void fuzzOrderByList(IAST * ast); void fuzzColumnLikeExpressionList(IAST * ast); + void fuzzWindowFrame(WindowFrame & frame); void fuzz(ASTs & asts); void fuzz(ASTPtr & ast); void collectFuzzInfoMain(const ASTPtr ast); diff --git a/src/Access/Quota.h b/src/Access/Quota.h index b636e83ec40..430bdca29b0 100644 --- a/src/Access/Quota.h +++ b/src/Access/Quota.h @@ -31,6 +31,8 @@ struct Quota : public IAccessEntity enum ResourceType { QUERIES, /// Number of queries. + QUERY_SELECTS, /// Number of select queries. + QUERY_INSERTS, /// Number of inserts queries. ERRORS, /// Number of queries with exceptions. RESULT_ROWS, /// Number of rows returned as result. RESULT_BYTES, /// Number of bytes returned as result. 
@@ -152,6 +154,16 @@ inline const Quota::ResourceTypeInfo & Quota::ResourceTypeInfo::get(ResourceType static const auto info = make_info("QUERIES", 1); return info; } + case Quota::QUERY_SELECTS: + { + static const auto info = make_info("QUERY_SELECTS", 1); + return info; + } + case Quota::QUERY_INSERTS: + { + static const auto info = make_info("QUERY_INSERTS", 1); + return info; + } case Quota::ERRORS: { static const auto info = make_info("ERRORS", 1); diff --git a/src/AggregateFunctions/AggregateFunctionMannWhitney.h b/src/AggregateFunctions/AggregateFunctionMannWhitney.h index 403f628a9ff..1451536d519 100644 --- a/src/AggregateFunctions/AggregateFunctionMannWhitney.h +++ b/src/AggregateFunctions/AggregateFunctionMannWhitney.h @@ -147,7 +147,7 @@ public: } if (params[0].getType() != Field::Types::String) - throw Exception("Aggregate function " + getName() + " require require first parameter to be a String", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); + throw Exception("Aggregate function " + getName() + " require first parameter to be a String", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); auto param = params[0].get(); if (param == "two-sided") @@ -158,13 +158,13 @@ public: alternative = Alternative::Greater; else throw Exception("Unknown parameter in aggregate function " + getName() + - ". It must be one of: 'two sided', 'less', 'greater'", ErrorCodes::BAD_ARGUMENTS); + ". It must be one of: 'two-sided', 'less', 'greater'", ErrorCodes::BAD_ARGUMENTS); if (params.size() != 2) return; if (params[1].getType() != Field::Types::UInt64) - throw Exception("Aggregate function " + getName() + " require require second parameter to be a UInt64", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); + throw Exception("Aggregate function " + getName() + " require second parameter to be a UInt64", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT); continuity_correction = static_cast(params[1].get()); } diff --git a/src/AggregateFunctions/AggregateFunctionWindowFunnel.h b/src/AggregateFunctions/AggregateFunctionWindowFunnel.h index de8f0f1e2e9..c765024507e 100644 --- a/src/AggregateFunctions/AggregateFunctionWindowFunnel.h +++ b/src/AggregateFunctions/AggregateFunctionWindowFunnel.h @@ -149,7 +149,6 @@ private: UInt8 strict_order; // When the 'strict_order' is set, it doesn't allow interventions of other events. // In the case of 'A->B->D->C', it stops finding 'A->B->C' at the 'D' and the max event level is 2. - // Loop through the entire events_list, update the event timestamp value // The level path must be 1---2---3---...---check_events_size, find the max event level that satisfied the path in the sliding window. // If found, returns the max event level, else return 0. diff --git a/src/AggregateFunctions/QuantileTiming.h b/src/AggregateFunctions/QuantileTiming.h index 6070f264ad6..dd6d923a5a0 100644 --- a/src/AggregateFunctions/QuantileTiming.h +++ b/src/AggregateFunctions/QuantileTiming.h @@ -32,6 +32,8 @@ namespace ErrorCodes * - a histogram (that is, value -> number), consisting of two parts * -- for values from 0 to 1023 - in increments of 1; * -- for values from 1024 to 30,000 - in increments of 16; + * + * NOTE: 64-bit integer weight can overflow, see also QantileExactWeighted.h::get() */ #define TINY_MAX_ELEMS 31 @@ -396,9 +398,9 @@ namespace detail /// Get the value of the `level` quantile. The level must be between 0 and 1. 
UInt16 get(double level) const { - UInt64 pos = std::ceil(count * level); + double pos = std::ceil(count * level); - UInt64 accumulated = 0; + double accumulated = 0; Iterator it(*this); while (it.isValid()) @@ -422,9 +424,9 @@ namespace detail const auto * indices_end = indices + size; const auto * index = indices; - UInt64 pos = std::ceil(count * levels[*index]); + double pos = std::ceil(count * levels[*index]); - UInt64 accumulated = 0; + double accumulated = 0; Iterator it(*this); while (it.isValid()) diff --git a/src/Client/Connection.cpp b/src/Client/Connection.cpp index 65b15a46955..e38a6b240a6 100644 --- a/src/Client/Connection.cpp +++ b/src/Client/Connection.cpp @@ -542,6 +542,12 @@ void Connection::sendData(const Block & block, const String & name, bool scalar) throttler->add(out->count() - prev_bytes); } +void Connection::sendIgnoredPartUUIDs(const std::vector & uuids) +{ + writeVarUInt(Protocol::Client::IgnoredPartUUIDs, *out); + writeVectorBinary(uuids, *out); + out->next(); +} void Connection::sendPreparedData(ReadBuffer & input, size_t size, const String & name) { @@ -798,6 +804,10 @@ Packet Connection::receivePacket(std::function async_ case Protocol::Server::EndOfStream: return res; + case Protocol::Server::PartUUIDs: + readVectorBinary(res.part_uuids, *in); + return res; + default: /// In unknown state, disconnect - to not leave unsynchronised connection. disconnect(); diff --git a/src/Client/Connection.h b/src/Client/Connection.h index 83e8f3ba206..2d24b143d7a 100644 --- a/src/Client/Connection.h +++ b/src/Client/Connection.h @@ -66,6 +66,7 @@ struct Packet std::vector multistring_message; Progress progress; BlockStreamProfileInfo profile_info; + std::vector part_uuids; Packet() : type(Protocol::Server::Hello) {} }; @@ -157,6 +158,8 @@ public: void sendScalarsData(Scalars & data); /// Send all contents of external (temporary) tables. void sendExternalTablesData(ExternalTablesData & data); + /// Send parts' uuids to excluded them from query processing + void sendIgnoredPartUUIDs(const std::vector & uuids); /// Send prepared block of data (serialized and, if need, compressed), that will be read from 'input'. /// You could pass size of serialized/compressed block. diff --git a/src/Client/MultiplexedConnections.cpp b/src/Client/MultiplexedConnections.cpp index ed7aad0a515..c50dd7b6454 100644 --- a/src/Client/MultiplexedConnections.cpp +++ b/src/Client/MultiplexedConnections.cpp @@ -140,6 +140,21 @@ void MultiplexedConnections::sendQuery( sent_query = true; } +void MultiplexedConnections::sendIgnoredPartUUIDs(const std::vector & uuids) +{ + std::lock_guard lock(cancel_mutex); + + if (sent_query) + throw Exception("Cannot send uuids after query is sent.", ErrorCodes::LOGICAL_ERROR); + + for (ReplicaState & state : replica_states) + { + Connection * connection = state.connection; + if (connection != nullptr) + connection->sendIgnoredPartUUIDs(uuids); + } +} + Packet MultiplexedConnections::receivePacket() { std::lock_guard lock(cancel_mutex); @@ -195,6 +210,7 @@ Packet MultiplexedConnections::drain() switch (packet.type) { + case Protocol::Server::PartUUIDs: case Protocol::Server::Data: case Protocol::Server::Progress: case Protocol::Server::ProfileInfo: @@ -253,6 +269,7 @@ Packet MultiplexedConnections::receivePacketUnlocked(std::function & uuids); + /** On each replica, read and skip all packets to EndOfStream or Exception. * Returns EndOfStream if no exception has been received. Otherwise * returns the last received packet of type Exception. 
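/* A self-contained sketch (hypothetical FakeReplica type, not the real Connection or
 * MultiplexedConnections API) of the ordering rule enforced above for query deduplication:
 * the ignored part UUIDs must be sent as their own packet before the query itself, and
 * sending them afterwards is treated as a logical error.
 */
#include <stdexcept>
#include <string>
#include <vector>

struct FakeReplica
{
    bool sent_query = false;
    std::vector<std::string> ignored_part_uuids;

    void sendIgnoredPartUUIDs(const std::vector<std::string> & uuids)
    {
        if (sent_query)
            throw std::logic_error("Cannot send uuids after query is sent.");
        ignored_part_uuids = uuids;   /// would be written as an IgnoredPartUUIDs packet
    }

    void sendQuery(const std::string & /*query*/) { sent_query = true; }
};

int main()
{
    FakeReplica replica;
    replica.sendIgnoredPartUUIDs({"placeholder-uuid"});  /// placeholder value, not a real part id
    replica.sendQuery("SELECT 1");
}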
diff --git a/src/Columns/ColumnAggregateFunction.cpp b/src/Columns/ColumnAggregateFunction.cpp index d0a5e120a07..9562dc647c9 100644 --- a/src/Columns/ColumnAggregateFunction.cpp +++ b/src/Columns/ColumnAggregateFunction.cpp @@ -75,8 +75,28 @@ void ColumnAggregateFunction::set(const AggregateFunctionPtr & func_) ColumnAggregateFunction::~ColumnAggregateFunction() { if (!func->hasTrivialDestructor() && !src) - for (auto * val : data) - func->destroy(val); + { + if (copiedDataInfo.empty()) + { + for (auto * val : data) + { + func->destroy(val); + } + } + else + { + size_t pos; + for (Map::iterator it = copiedDataInfo.begin(), it_end = copiedDataInfo.end(); it != it_end; ++it) + { + pos = it->getValue().second; + if (data[pos] != nullptr) + { + func->destroy(data[pos]); + data[pos] = nullptr; + } + } + } + } } void ColumnAggregateFunction::addArena(ConstArenaPtr arena_) @@ -455,14 +475,37 @@ void ColumnAggregateFunction::insertFrom(const IColumn & from, size_t n) /// (only as a whole, see comment above). ensureOwnership(); insertDefault(); - insertMergeFrom(from, n); + insertCopyFrom(assert_cast(from).data[n]); } void ColumnAggregateFunction::insertFrom(ConstAggregateDataPtr place) { ensureOwnership(); insertDefault(); - insertMergeFrom(place); + insertCopyFrom(place); +} + +void ColumnAggregateFunction::insertCopyFrom(ConstAggregateDataPtr place) +{ + Map::LookupResult result; + result = copiedDataInfo.find(place); + if (result == nullptr) + { + copiedDataInfo[place] = data.size()-1; + func->merge(data.back(), place, &createOrGetArena()); + } + else + { + size_t pos = result->getValue().second; + if (pos != data.size() - 1) + { + data[data.size() - 1] = data[pos]; + } + else /// insert same data to same pos, merge them. + { + func->merge(data.back(), place, &createOrGetArena()); + } + } } void ColumnAggregateFunction::insertMergeFrom(ConstAggregateDataPtr place) @@ -697,5 +740,4 @@ MutableColumnPtr ColumnAggregateFunction::cloneResized(size_t size) const return cloned_col; } } - } diff --git a/src/Columns/ColumnAggregateFunction.h b/src/Columns/ColumnAggregateFunction.h index cd45cf583a0..a1aa9e29a39 100644 --- a/src/Columns/ColumnAggregateFunction.h +++ b/src/Columns/ColumnAggregateFunction.h @@ -13,6 +13,8 @@ #include +#include + namespace DB { @@ -82,6 +84,17 @@ private: /// Name of the type to distinguish different aggregation states. String type_string; + /// MergedData records, used to avoid duplicated data copy. + ///key: src pointer, val: pos in current column. + using Map = HashMap< + ConstAggregateDataPtr, + size_t, + DefaultHash, + HashTableGrower<3>, + HashTableAllocatorWithStackMemory) * (1 << 3)>>; + + Map copiedDataInfo; + ColumnAggregateFunction() {} /// Create a new column that has another column as a source. @@ -140,6 +153,8 @@ public: void insertFrom(ConstAggregateDataPtr place); + void insertCopyFrom(ConstAggregateDataPtr place); + /// Merge state at last row with specified state in another column. 
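/* A simplified sketch (hypothetical State type, not the real IColumn interface) of the
 * copiedDataInfo idea above: remember which source aggregate state was already copied into
 * which row, alias the pointer on repeated inserts instead of merging again, and destroy
 * each distinct copied state exactly once in the destructor.
 */
#include <cstddef>
#include <unordered_map>
#include <vector>
#include <iostream>

struct State { long long sum = 0; };

struct FakeAggregateColumn
{
    std::vector<State *> data;
    std::unordered_map<const State *, std::size_t> copied;   /// src state -> position in `data`

    void insertCopyFrom(const State * src)
    {
        data.push_back(nullptr);
        auto it = copied.find(src);
        if (it == copied.end())
        {
            data.back() = new State(*src);     /// "merge into a fresh default state" in the real code
            copied[src] = data.size() - 1;
        }
        else
            data.back() = data[it->second];    /// alias the already-copied state
    }

    ~FakeAggregateColumn()
    {
        /// Destroy each distinct copied state once, mirroring the destructor change above.
        for (auto & entry : copied)
        {
            delete data[entry.second];
            data[entry.second] = nullptr;
        }
    }
};

int main()
{
    State src{42};
    FakeAggregateColumn col;
    col.insertCopyFrom(&src);
    col.insertCopyFrom(&src);                            /// second insert shares the copy
    std::cout << (col.data[0] == col.data[1]) << '\n';   /// prints 1
}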
void insertMergeFrom(ConstAggregateDataPtr place); diff --git a/src/Columns/ColumnsNumber.h b/src/Columns/ColumnsNumber.h index 96ce2bd6d6f..17a28e617c3 100644 --- a/src/Columns/ColumnsNumber.h +++ b/src/Columns/ColumnsNumber.h @@ -26,4 +26,6 @@ using ColumnInt256 = ColumnVector; using ColumnFloat32 = ColumnVector; using ColumnFloat64 = ColumnVector; +using ColumnUUID = ColumnVector; + } diff --git a/src/Common/CurrentThread.h b/src/Common/CurrentThread.h index 876cbd8a66b..7ab57ea7fab 100644 --- a/src/Common/CurrentThread.h +++ b/src/Common/CurrentThread.h @@ -63,9 +63,6 @@ public: /// Call from master thread as soon as possible (e.g. when thread accepted connection) static void initializeQuery(); - /// Sets query_context for current thread group - static void attachQueryContext(Context & query_context); - /// You must call one of these methods when create a query child thread: /// Add current thread to a group associated with the thread group static void attachTo(const ThreadGroupStatusPtr & thread_group); @@ -99,6 +96,10 @@ public: private: static void defaultThreadDeleter(); + + /// Sets query_context for current thread group + /// Can by used only through QueryScope + static void attachQueryContext(Context & query_context); }; } diff --git a/src/Common/ErrorCodes.cpp b/src/Common/ErrorCodes.cpp index a2cd65137c0..cf758691cec 100644 --- a/src/Common/ErrorCodes.cpp +++ b/src/Common/ErrorCodes.cpp @@ -533,11 +533,13 @@ M(564, INTERSERVER_SCHEME_DOESNT_MATCH) \ M(565, TOO_MANY_PARTITIONS) \ M(566, CANNOT_RMDIR) \ + M(567, DUPLICATED_PART_UUIDS) \ \ M(999, KEEPER_EXCEPTION) \ M(1000, POCO_EXCEPTION) \ M(1001, STD_EXCEPTION) \ - M(1002, UNKNOWN_EXCEPTION) + M(1002, UNKNOWN_EXCEPTION) \ + M(1003, INVALID_SHARD_ID) /* See END */ diff --git a/src/Common/HashTable/HashMap.h b/src/Common/HashTable/HashMap.h index e09f60c4294..99dc5414107 100644 --- a/src/Common/HashTable/HashMap.h +++ b/src/Common/HashTable/HashMap.h @@ -109,6 +109,11 @@ struct HashMapCell DB::assertChar(',', rb); DB::readDoubleQuoted(value.second, rb); } + + static bool constexpr need_to_notify_cell_during_move = false; + + static void move(HashMapCell * /* old_location */, HashMapCell * /* new_location */) {} + }; template diff --git a/src/Common/HashTable/HashTable.h b/src/Common/HashTable/HashTable.h index 15fa09490e6..9b6bb0a1be4 100644 --- a/src/Common/HashTable/HashTable.h +++ b/src/Common/HashTable/HashTable.h @@ -69,11 +69,16 @@ namespace ZeroTraits { template -bool check(const T x) { return x == 0; } +inline bool check(const T x) { return x == 0; } template -void set(T & x) { x = 0; } +inline void set(T & x) { x = 0; } +template <> +inline bool check(const char * x) { return x == nullptr; } + +template <> +inline void set(const char *& x){ x = nullptr; } } @@ -204,6 +209,13 @@ struct HashTableCell /// Deserialization, in binary and text form. 
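/* A minimal sketch of the cell "move notification" contract introduced above: a cell type
 * opts in with a constexpr flag, and the table then calls Cell::move(old, new) whenever it
 * relocates a cell during resize/reinsert/erase, so cells that hold pointers into the same
 * buffer (like the LRU list cells further below) can be fixed up. Types here are illustrative.
 */
#include <iostream>

struct TrackedCell
{
    int key = 0;
    TrackedCell * buddy = nullptr;   /// a pointer into the same buffer that must be patched

    static constexpr bool need_to_notify_cell_during_move = true;

    static void move(TrackedCell * old_location, TrackedCell * new_location)
    {
        /// The table has already copied the bytes; only cross-cell pointers need fixing.
        if (new_location->buddy == old_location)
            new_location->buddy = new_location;
    }
};

template <typename Cell>
void relocate(Cell * old_location, Cell * new_location)
{
    *new_location = *old_location;                     /// what memcpy does in the real table
    if constexpr (Cell::need_to_notify_cell_during_move)
        Cell::move(old_location, new_location);        /// compiled out for plain cells
}

int main()
{
    TrackedCell a{1, nullptr};
    a.buddy = &a;
    TrackedCell b;
    relocate(&a, &b);
    std::cout << (b.buddy == &b) << '\n';   /// prints 1
}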
void read(DB::ReadBuffer & rb) { DB::readBinary(key, rb); } void readText(DB::ReadBuffer & rb) { DB::readDoubleQuoted(key, rb); } + + /// When cell pointer is moved during erase, reinsert or resize operations + + static constexpr bool need_to_notify_cell_during_move = false; + + static void move(HashTableCell * /* old_location */, HashTableCell * /* new_location */) {} + }; /** @@ -334,6 +346,32 @@ struct ZeroValueStorage }; +template +struct AllocatorBufferDeleter; + +template +struct AllocatorBufferDeleter +{ + AllocatorBufferDeleter(Allocator &, size_t) {} + + void operator()(Cell *) const {} + +}; + +template +struct AllocatorBufferDeleter +{ + AllocatorBufferDeleter(Allocator & allocator_, size_t size_) + : allocator(allocator_) + , size(size_) {} + + void operator()(Cell * buffer) const { allocator.free(buffer, size); } + + Allocator & allocator; + size_t size; +}; + + // The HashTable template < @@ -427,7 +465,6 @@ protected: } } - /// Increase the size of the buffer. void resize(size_t for_num_elems = 0, size_t for_buf_size = 0) { @@ -460,7 +497,24 @@ protected: new_grower.increaseSize(); /// Expand the space. - buf = reinterpret_cast(Allocator::realloc(buf, getBufferSizeInBytes(), new_grower.bufSize() * sizeof(Cell))); + + size_t old_buffer_size = getBufferSizeInBytes(); + + /** If cell required to be notified during move we need to temporary keep old buffer + * because realloc does not quarantee for reallocated buffer to have same base address + */ + using Deleter = AllocatorBufferDeleter; + Deleter buffer_deleter(*this, old_buffer_size); + std::unique_ptr old_buffer(buf, buffer_deleter); + + if constexpr (Cell::need_to_notify_cell_during_move) + { + buf = reinterpret_cast(Allocator::alloc(new_grower.bufSize() * sizeof(Cell))); + memcpy(reinterpret_cast(buf), reinterpret_cast(old_buffer.get()), old_buffer_size); + } + else + buf = reinterpret_cast(Allocator::realloc(buf, old_buffer_size, new_grower.bufSize() * sizeof(Cell))); + grower = new_grower; /** Now some items may need to be moved to a new location. @@ -470,7 +524,12 @@ protected: size_t i = 0; for (; i < old_size; ++i) if (!buf[i].isZero(*this)) - reinsert(buf[i], buf[i].getHash(*this)); + { + size_t updated_place_value = reinsert(buf[i], buf[i].getHash(*this)); + + if constexpr (Cell::need_to_notify_cell_during_move) + Cell::move(&(old_buffer.get())[i], &buf[updated_place_value]); + } /** There is also a special case: * if the element was to be at the end of the old buffer, [ x] @@ -481,7 +540,13 @@ protected: * process tail from the collision resolution chain immediately after it [ o x ] */ for (; !buf[i].isZero(*this); ++i) - reinsert(buf[i], buf[i].getHash(*this)); + { + size_t updated_place_value = reinsert(buf[i], buf[i].getHash(*this)); + + if constexpr (Cell::need_to_notify_cell_during_move) + if (&buf[i] != &buf[updated_place_value]) + Cell::move(&buf[i], &buf[updated_place_value]); + } #ifdef DBMS_HASH_MAP_DEBUG_RESIZES watch.stop(); @@ -495,20 +560,20 @@ protected: /** Paste into the new buffer the value that was in the old buffer. * Used when increasing the buffer size. */ - void reinsert(Cell & x, size_t hash_value) + size_t reinsert(Cell & x, size_t hash_value) { size_t place_value = grower.place(hash_value); /// If the element is in its place. if (&x == &buf[place_value]) - return; + return place_value; /// Compute a new location, taking into account the collision resolution chain. 
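/* A standalone sketch (plain malloc/free instead of the real Allocator) of the resize
 * strategy above: when cells must be notified about moves, the old buffer cannot simply be
 * realloc'ed, because Cell::move() needs both the old and the new addresses to be valid at
 * the same time. The old buffer is therefore parked in a unique_ptr with a custom deleter
 * and freed only after all fix-ups are done.
 */
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <memory>
#include <iostream>

struct FreeDeleter
{
    void operator()(int * buffer) const { std::free(buffer); }
};

int main()
{
    const std::size_t old_size = 4, new_size = 8;

    int * buf = static_cast<int *>(std::malloc(old_size * sizeof(int)));
    for (std::size_t i = 0; i < old_size; ++i)
        buf[i] = static_cast<int>(i);

    /// Park the old buffer; it stays valid while we rehash into the new one.
    std::unique_ptr<int, FreeDeleter> old_buffer(buf);

    buf = static_cast<int *>(std::malloc(new_size * sizeof(int)));
    std::memcpy(buf, old_buffer.get(), old_size * sizeof(int));

    /// Here the real table reinserts the cells and calls
    /// Cell::move(&old_buffer[i], &buf[new_place]) with both pointers still valid.
    std::cout << "old buffer still readable: " << old_buffer.get()[old_size - 1] << '\n';

    std::free(buf);
    /// old_buffer is released automatically when it goes out of scope.
}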
place_value = findCell(Cell::getKey(x.getValue()), hash_value, place_value); /// If the item remains in its place in the old collision resolution chain. if (!buf[place_value].isZero(*this)) - return; + return place_value; /// Copy to a new location and zero the old one. x.setHash(hash_value); @@ -516,6 +581,7 @@ protected: x.setZero(); /// Then the elements that previously were in collision with this can move to the old place. + return place_value; } @@ -881,7 +947,11 @@ public: /// Reinsert node pointed to by iterator void ALWAYS_INLINE reinsert(iterator & it, size_t hash_value) { - reinsert(*it.getPtr(), hash_value); + size_t place_value = reinsert(*it.getPtr(), hash_value); + + if constexpr (Cell::need_to_notify_cell_during_move) + if (it.getPtr() != &buf[place_value]) + Cell::move(it.getPtr(), &buf[place_value]); } @@ -958,8 +1028,14 @@ public: return const_cast *>(this)->find(x, hash_value); } - std::enable_if_t + std::enable_if_t ALWAYS_INLINE erase(const Key & x) + { + return erase(x, hash(x)); + } + + std::enable_if_t + ALWAYS_INLINE erase(const Key & x, size_t hash_value) { /** Deletion from open addressing hash table without tombstones * @@ -977,21 +1053,19 @@ public: { --m_size; this->clearHasZero(); + return true; } else { - return; + return false; } } - size_t hash_value = hash(x); size_t erased_key_position = findCell(x, hash_value, grower.place(hash_value)); /// Key is not found if (buf[erased_key_position].isZero(*this)) - { - return; - } + return false; /// We need to guarantee loop termination because there will be empty position assert(m_size < grower.bufSize()); @@ -1056,12 +1130,18 @@ public: /// Move the element to the freed place memcpy(static_cast(&buf[erased_key_position]), static_cast(&buf[next_position]), sizeof(Cell)); + + if constexpr (Cell::need_to_notify_cell_during_move) + Cell::move(&buf[next_position], &buf[erased_key_position]); + /// Now we have another freed place erased_key_position = next_position; } buf[erased_key_position].setZero(); --m_size; + + return true; } bool ALWAYS_INLINE has(const Key & x) const diff --git a/src/Common/HashTable/LRUHashMap.h b/src/Common/HashTable/LRUHashMap.h new file mode 100644 index 00000000000..292006f2438 --- /dev/null +++ b/src/Common/HashTable/LRUHashMap.h @@ -0,0 +1,244 @@ +#pragma once + +#include + +#include +#include +#include + +#include +#include +#include +#include + + +template +struct LRUHashMapCell : + public std::conditional_t, + HashMapCell> +{ +public: + using Key = TKey; + + using Base = std::conditional_t, + HashMapCell>; + + using Mapped = typename Base::Mapped; + using State = typename Base::State; + + using mapped_type = Mapped; + using key_type = Key; + + using Base::Base; + + static bool constexpr need_to_notify_cell_during_move = true; + + static void move(LRUHashMapCell * __restrict old_location, LRUHashMapCell * __restrict new_location) + { + /** We update new location prev and next pointers because during hash table resize + * they can be updated during move of another cell. 
+ */ + + new_location->prev = old_location->prev; + new_location->next = old_location->next; + + LRUHashMapCell * prev = new_location->prev; + LRUHashMapCell * next = new_location->next; + + /// Updated previous next and next previous nodes of list to point to new location + + if (prev) + prev->next = new_location; + + if (next) + next->prev = new_location; + } + +private: + template + friend class LRUHashMapCellNodeTraits; + + LRUHashMapCell * next = nullptr; + LRUHashMapCell * prev = nullptr; +}; + +template +struct LRUHashMapCellNodeTraits +{ + using node = LRUHashMapCell; + using node_ptr = LRUHashMapCell *; + using const_node_ptr = const LRUHashMapCell *; + + static node * get_next(const node * ptr) { return ptr->next; } + static void set_next(node * __restrict ptr, node * __restrict next) { ptr->next = next; } + static node * get_previous(const node * ptr) { return ptr->prev; } + static void set_previous(node * __restrict ptr, node * __restrict prev) { ptr->prev = prev; } +}; + +template +class LRUHashMapImpl : + private HashMapTable< + TKey, + LRUHashMapCell, + Hash, + HashTableGrower<>, + HashTableAllocator> +{ + using Base = HashMapTable< + TKey, + LRUHashMapCell, + Hash, + HashTableGrower<>, + HashTableAllocator>; +public: + using Key = TKey; + using Value = TValue; + + using Cell = LRUHashMapCell; + + using LRUHashMapCellIntrusiveValueTraits = + boost::intrusive::trivial_value_traits< + LRUHashMapCellNodeTraits, + boost::intrusive::link_mode_type::normal_link>; + + using LRUList = boost::intrusive::list< + Cell, + boost::intrusive::value_traits, + boost::intrusive::constant_time_size>; + + using iterator = typename LRUList::iterator; + using const_iterator = typename LRUList::const_iterator; + using reverse_iterator = typename LRUList::reverse_iterator; + using const_reverse_iterator = typename LRUList::const_reverse_iterator; + + LRUHashMapImpl(size_t max_size_, bool preallocate_max_size_in_hash_map = false) + : Base(preallocate_max_size_in_hash_map ? max_size_ : 32) + , max_size(max_size_) + { + assert(max_size > 0); + } + + std::pair insert(const Key & key, const Value & value) + { + return emplace(key, value); + } + + std::pair insert(const Key & key, Value && value) + { + return emplace(key, std::move(value)); + } + + template + std::pair emplace(const Key & key, Args&&... 
args) + { + size_t hash_value = Base::hash(key); + + Cell * it = Base::find(key, hash_value); + + if (it) + { + /// Cell contains element return it and put to the end of lru list + lru_list.splice(lru_list.end(), lru_list, lru_list.iterator_to(*it)); + return std::make_pair(it, false); + } + + if (size() == max_size) + { + /// Erase least recently used element from front of the list + Cell & node = lru_list.front(); + + const Key & element_to_remove_key = node.getKey(); + size_t key_hash = node.getHash(*this); + + lru_list.pop_front(); + + [[maybe_unused]] bool erased = Base::erase(element_to_remove_key, key_hash); + assert(erased); + } + + [[maybe_unused]] bool inserted; + + /// Insert value first try to insert in zero storage if not then insert in buffer + if (!Base::emplaceIfZero(key, it, inserted, hash_value)) + Base::emplaceNonZero(key, it, inserted, hash_value); + + assert(inserted); + + new (&it->getMapped()) Value(std::forward(args)...); + + /// Put cell to the end of lru list + lru_list.insert(lru_list.end(), *it); + + return std::make_pair(it, true); + } + + using Base::find; + + Value & get(const Key & key) + { + auto it = Base::find(key); + assert(it); + + Value & value = it->getMapped(); + + /// Put cell to the end of lru list + lru_list.splice(lru_list.end(), lru_list, lru_list.iterator_to(*it)); + + return value; + } + + const Value & get(const Key & key) const + { + return const_cast *>(this)->get(key); + } + + bool contains(const Key & key) const + { + return Base::has(key); + } + + bool erase(const Key & key) + { + auto hash = Base::hash(key); + auto it = Base::find(key, hash); + + if (!it) + return false; + + lru_list.erase(lru_list.iterator_to(*it)); + + return Base::erase(key, hash); + } + + void clear() + { + lru_list.clear(); + Base::clear(); + } + + using Base::size; + + size_t getMaxSize() const { return max_size; } + + iterator begin() { return lru_list.begin(); } + const_iterator begin() const { return lru_list.cbegin(); } + iterator end() { return lru_list.end(); } + const_iterator end() const { return lru_list.cend(); } + + reverse_iterator rbegin() { return lru_list.rbegin(); } + const_reverse_iterator rbegin() const { return lru_list.crbegin(); } + reverse_iterator rend() { return lru_list.rend(); } + const_reverse_iterator rend() const { return lru_list.crend(); } + +private: + size_t max_size; + LRUList lru_list; +}; + +template > +using LRUHashMap = LRUHashMapImpl; + +template > +using LRUHashMapWithSavedHash = LRUHashMapImpl; diff --git a/src/Common/StackTrace.h b/src/Common/StackTrace.h index 3ae4b964838..b2e14a01f03 100644 --- a/src/Common/StackTrace.h +++ b/src/Common/StackTrace.h @@ -34,7 +34,15 @@ public: std::optional file; std::optional line; }; - static constexpr size_t capacity = 32; + + static constexpr size_t capacity = +#ifndef NDEBUG + /* The stacks are normally larger in debug version due to less inlining. 
*/ + 64 +#else + 32 +#endif + ; using FramePointers = std::array; using Frames = std::array; diff --git a/src/Common/ThreadProfileEvents.cpp b/src/Common/ThreadProfileEvents.cpp index e6336baecda..327178c92ff 100644 --- a/src/Common/ThreadProfileEvents.cpp +++ b/src/Common/ThreadProfileEvents.cpp @@ -68,7 +68,7 @@ TasksStatsCounters::TasksStatsCounters(const UInt64 tid, const MetricsProvider p case MetricsProvider::Netlink: stats_getter = [metrics_provider = std::make_shared(), tid]() { - ::taskstats result; + ::taskstats result{}; metrics_provider->getStat(result, tid); return result; }; @@ -76,7 +76,7 @@ TasksStatsCounters::TasksStatsCounters(const UInt64 tid, const MetricsProvider p case MetricsProvider::Procfs: stats_getter = [metrics_provider = std::make_shared(tid)]() { - ::taskstats result; + ::taskstats result{}; metrics_provider->getTaskStats(result); return result; }; diff --git a/src/Common/ThreadStatus.cpp b/src/Common/ThreadStatus.cpp index 5105fff03b2..8c01ed2d46f 100644 --- a/src/Common/ThreadStatus.cpp +++ b/src/Common/ThreadStatus.cpp @@ -99,6 +99,11 @@ ThreadStatus::~ThreadStatus() /// We've already allocated a little bit more than the limit and cannot track it in the thread memory tracker or its parent. } +#if !defined(ARCADIA_BUILD) + /// It may cause segfault if query_context was destroyed, but was not detached + assert((!query_context && query_id.empty()) || (query_context && query_id == query_context->getCurrentQueryId())); +#endif + if (deleter) deleter(); current_thread = nullptr; diff --git a/src/Common/ThreadStatus.h b/src/Common/ThreadStatus.h index 1be1f2cd4df..dc5f09c5f3d 100644 --- a/src/Common/ThreadStatus.h +++ b/src/Common/ThreadStatus.h @@ -201,7 +201,7 @@ public: void setFatalErrorCallback(std::function callback); void onFatalError(); - /// Sets query context for current thread and its thread group + /// Sets query context for current master thread and its thread group /// NOTE: query_context have to be alive until detachQuery() is called void attachQueryContext(Context & query_context); diff --git a/src/Common/tests/CMakeLists.txt b/src/Common/tests/CMakeLists.txt index cb36e2b97d2..2dd56e862f0 100644 --- a/src/Common/tests/CMakeLists.txt +++ b/src/Common/tests/CMakeLists.txt @@ -38,6 +38,9 @@ target_link_libraries (arena_with_free_lists PRIVATE dbms) add_executable (pod_array pod_array.cpp) target_link_libraries (pod_array PRIVATE clickhouse_common_io) +add_executable (lru_hash_map_perf lru_hash_map_perf.cpp) +target_link_libraries (lru_hash_map_perf PRIVATE clickhouse_common_io) + add_executable (thread_creation_latency thread_creation_latency.cpp) target_link_libraries (thread_creation_latency PRIVATE clickhouse_common_io) diff --git a/src/Common/tests/gtest_lru_hash_map.cpp b/src/Common/tests/gtest_lru_hash_map.cpp new file mode 100644 index 00000000000..562ee667b7b --- /dev/null +++ b/src/Common/tests/gtest_lru_hash_map.cpp @@ -0,0 +1,161 @@ +#include +#include + +#include + +#include + +template +std::vector convertToVector(const LRUHashMap & map) +{ + std::vector result; + result.reserve(map.size()); + + for (auto & node: map) + result.emplace_back(node.getKey()); + + return result; +} + +void testInsert(size_t elements_to_insert_size, size_t map_size) +{ + using LRUHashMap = LRUHashMap; + + LRUHashMap map(map_size); + + std::vector expected; + + for (size_t i = 0; i < elements_to_insert_size; ++i) + map.insert(i, i); + + for (size_t i = elements_to_insert_size - map_size; i < elements_to_insert_size; ++i) + expected.emplace_back(i); + + 
std::vector actual = convertToVector(map); + ASSERT_EQ(map.size(), actual.size()); + ASSERT_EQ(actual, expected); +} + +TEST(LRUHashMap, Insert) +{ + { + using LRUHashMap = LRUHashMap; + + LRUHashMap map(3); + + map.emplace(1, 1); + map.insert(2, 2); + int v = 3; + map.insert(3, v); + map.emplace(4, 4); + + std::vector expected = { 2, 3, 4 }; + std::vector actual = convertToVector(map); + + ASSERT_EQ(actual, expected); + } + + testInsert(1200000, 1200000); + testInsert(10, 5); + testInsert(1200000, 2); + testInsert(1200000, 1); +} + +TEST(LRUHashMap, GetModify) +{ + using LRUHashMap = LRUHashMap; + + LRUHashMap map(3); + + map.emplace(1, 1); + map.emplace(2, 2); + map.emplace(3, 3); + + map.get(3) = 4; + + std::vector expected = { 1, 2, 4 }; + std::vector actual; + actual.reserve(map.size()); + + for (auto & node : map) + actual.emplace_back(node.getMapped()); + + ASSERT_EQ(actual, expected); +} + +TEST(LRUHashMap, SetRecentKeyToTop) +{ + using LRUHashMap = LRUHashMap; + + LRUHashMap map(3); + + map.emplace(1, 1); + map.emplace(2, 2); + map.emplace(3, 3); + map.emplace(1, 4); + + std::vector expected = { 2, 3, 1 }; + std::vector actual = convertToVector(map); + + ASSERT_EQ(actual, expected); +} + +TEST(LRUHashMap, GetRecentKeyToTop) +{ + using LRUHashMap = LRUHashMap; + + LRUHashMap map(3); + + map.emplace(1, 1); + map.emplace(2, 2); + map.emplace(3, 3); + map.get(1); + + std::vector expected = { 2, 3, 1 }; + std::vector actual = convertToVector(map); + + ASSERT_EQ(actual, expected); +} + +TEST(LRUHashMap, Contains) +{ + using LRUHashMap = LRUHashMap; + + LRUHashMap map(3); + + map.emplace(1, 1); + map.emplace(2, 2); + map.emplace(3, 3); + + ASSERT_TRUE(map.contains(1)); + ASSERT_TRUE(map.contains(2)); + ASSERT_TRUE(map.contains(3)); + ASSERT_EQ(map.size(), 3); + + map.erase(1); + map.erase(2); + map.erase(3); + + ASSERT_EQ(map.size(), 0); + ASSERT_FALSE(map.contains(1)); + ASSERT_FALSE(map.contains(2)); + ASSERT_FALSE(map.contains(3)); +} + +TEST(LRUHashMap, Clear) +{ + using LRUHashMap = LRUHashMap; + + LRUHashMap map(3); + + map.emplace(1, 1); + map.emplace(2, 2); + map.emplace(3, 3); + map.clear(); + + std::vector expected = {}; + std::vector actual = convertToVector(map); + + ASSERT_EQ(actual, expected); + ASSERT_EQ(map.size(), 0); +} diff --git a/src/Common/tests/lru_hash_map_perf.cpp b/src/Common/tests/lru_hash_map_perf.cpp new file mode 100644 index 00000000000..14beff3f7da --- /dev/null +++ b/src/Common/tests/lru_hash_map_perf.cpp @@ -0,0 +1,244 @@ +#include +#include +#include +#include +#include + +#include +#include + +template +class LRUHashMapBasic +{ +public: + using key_type = Key; + using value_type = Value; + using list_type = std::list; + using node = std::pair; + using map_type = std::unordered_map>; + + LRUHashMapBasic(size_t max_size_, bool preallocated) + : hash_map(preallocated ? 
max_size_ : 32) + , max_size(max_size_) + { + } + + void insert(const Key &key, const Value &value) + { + auto it = hash_map.find(key); + + if (it == hash_map.end()) + { + if (size() >= max_size) + { + auto iterator_to_remove = list.begin(); + + hash_map.erase(*iterator_to_remove); + list.erase(iterator_to_remove); + } + + list.push_back(key); + hash_map[key] = std::make_pair(value, --list.end()); + } + else + { + auto & [value_to_update, iterator_in_list_to_update] = it->second; + + list.splice(list.end(), list, iterator_in_list_to_update); + + iterator_in_list_to_update = list.end(); + value_to_update = value; + } + } + + value_type & get(const key_type &key) + { + auto iterator_in_map = hash_map.find(key); + assert(iterator_in_map != hash_map.end()); + + auto & [value_to_return, iterator_in_list_to_update] = iterator_in_map->second; + + list.splice(list.end(), list, iterator_in_list_to_update); + iterator_in_list_to_update = list.end(); + + return value_to_return; + } + + const value_type & get(const key_type & key) const + { + return const_cast *>(this)->get(key); + } + + size_t getMaxSize() const + { + return max_size; + } + + size_t size() const + { + return hash_map.size(); + } + + bool empty() const + { + return hash_map.empty(); + } + + bool contains(const Key & key) + { + return hash_map.find(key) != hash_map.end(); + } + + void clear() + { + hash_map.clear(); + list.clear(); + } + +private: + map_type hash_map; + list_type list; + size_t max_size; +}; + +std::vector generateNumbersToInsert(size_t numbers_to_insert_size) +{ + std::vector numbers; + numbers.reserve(numbers_to_insert_size); + + std::random_device rd; + pcg64 gen(rd()); + + UInt64 min = std::numeric_limits::min(); + UInt64 max = std::numeric_limits::max(); + + auto distribution = std::uniform_int_distribution<>(min, max); + + for (size_t i = 0; i < numbers_to_insert_size; ++i) + { + UInt64 number = distribution(gen); + numbers.emplace_back(number); + } + + return numbers; +} + +void testInsertElementsIntoHashMap(size_t map_size, const std::vector & numbers_to_insert, bool preallocated) +{ + size_t numbers_to_insert_size = numbers_to_insert.size(); + std::cout << "TestInsertElementsIntoHashMap preallocated map size: " << map_size << " numbers to insert size: " << numbers_to_insert_size; + std::cout << std::endl; + + HashMap hash_map(preallocated ? map_size : 32); + + Stopwatch watch; + + for (size_t i = 0; i < numbers_to_insert_size; ++i) + hash_map.insert({ numbers_to_insert[i], numbers_to_insert[i] }); + + std::cout << "Inserted in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl; + + UInt64 summ = 0; + + for (size_t i = 0; i < numbers_to_insert_size; ++i) + { + auto * it = hash_map.find(numbers_to_insert[i]); + + if (it) + summ += it->getMapped(); + } + + std::cout << "Calculated summ: " << summ << " in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl; +} + +void testInsertElementsIntoStandardMap(size_t map_size, const std::vector & numbers_to_insert, bool preallocated) +{ + size_t numbers_to_insert_size = numbers_to_insert.size(); + std::cout << "TestInsertElementsIntoStandardMap map size: " << map_size << " numbers to insert size: " << numbers_to_insert_size; + std::cout << std::endl; + + std::unordered_map hash_map(preallocated ? 
map_size : 32); + + Stopwatch watch; + + for (size_t i = 0; i < numbers_to_insert_size; ++i) + hash_map.insert({ numbers_to_insert[i], numbers_to_insert[i] }); + + std::cout << "Inserted in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl; + + UInt64 summ = 0; + + for (size_t i = 0; i < numbers_to_insert_size; ++i) + { + auto it = hash_map.find(numbers_to_insert[i]); + + if (it != hash_map.end()) + summ += it->second; + } + + std::cout << "Calculated summ: " << summ << " in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl; +} + +template +UInt64 testInsertIntoEmptyCache(size_t map_size, const std::vector & numbers_to_insert, bool preallocated) +{ + size_t numbers_to_insert_size = numbers_to_insert.size(); + std::cout << "Test testInsertPreallocated preallocated map size: " << map_size << " numbers to insert size: " << numbers_to_insert_size; + std::cout << std::endl; + + LRUCache cache(map_size, preallocated); + Stopwatch watch; + + for (size_t i = 0; i < numbers_to_insert_size; ++i) + { + cache.insert(numbers_to_insert[i], numbers_to_insert[i]); + } + + std::cout << "Inserted in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl; + + UInt64 summ = 0; + + for (size_t i = 0; i < numbers_to_insert_size; ++i) + if (cache.contains(numbers_to_insert[i])) + summ += cache.get(numbers_to_insert[i]); + + std::cout << "Calculated summ: " << summ << " in " << watch.elapsedMilliseconds() << " milliseconds" << std::endl; + + return summ; +} + +int main(int argc, char ** argv) +{ + (void)(argc); + (void)(argv); + + size_t hash_map_size = 1200000; + size_t numbers_to_insert_size = 12000000; + std::vector numbers = generateNumbersToInsert(numbers_to_insert_size); + + std::cout << "Test insert into HashMap preallocated=0" << std::endl; + testInsertElementsIntoHashMap(hash_map_size, numbers, true); + std::cout << std::endl; + + std::cout << "Test insert into HashMap preallocated=1" << std::endl; + testInsertElementsIntoHashMap(hash_map_size, numbers, true); + std::cout << std::endl; + + std::cout << "Test LRUHashMap preallocated=0" << std::endl; + testInsertIntoEmptyCache>(hash_map_size, numbers, false); + std::cout << std::endl; + + std::cout << "Test LRUHashMap preallocated=1" << std::endl; + testInsertIntoEmptyCache>(hash_map_size, numbers, true); + std::cout << std::endl; + + std::cout << "Test LRUHashMapBasic preallocated=0" << std::endl; + testInsertIntoEmptyCache>(hash_map_size, numbers, false); + std::cout << std::endl; + + std::cout << "Test LRUHashMapBasic preallocated=1" << std::endl; + testInsertIntoEmptyCache>(hash_map_size, numbers, true); + std::cout << std::endl; + + return 0; +} diff --git a/src/Core/MySQL/MySQLReplication.cpp b/src/Core/MySQL/MySQLReplication.cpp index b86d6447e26..8fdf337c849 100644 --- a/src/Core/MySQL/MySQLReplication.cpp +++ b/src/Core/MySQL/MySQLReplication.cpp @@ -136,6 +136,7 @@ namespace MySQLReplication out << "XID: " << this->xid << '\n'; } + /// https://dev.mysql.com/doc/internals/en/table-map-event.html void TableMapEvent::parseImpl(ReadBuffer & payload) { payload.readStrict(reinterpret_cast(&table_id), 6); @@ -257,15 +258,19 @@ namespace MySQLReplication out << "Null Bitmap: " << bitmap_str << '\n'; } - void RowsEvent::parseImpl(ReadBuffer & payload) + void RowsEventHeader::parse(ReadBuffer & payload) { payload.readStrict(reinterpret_cast(&table_id), 6); payload.readStrict(reinterpret_cast(&flags), 2); + UInt16 extra_data_len; /// This extra_data_len contains the 2 bytes length. 
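/* A standalone sketch (hand-rolled byte buffer instead of ReadBuffer) of the header layout
 * the readStrict calls below consume: a 6-byte little-endian table id, 2 bytes of flags, then
 * a 2-byte extra-data length that counts itself, which is why `extra_data_len - 2` bytes are
 * skipped afterwards. Field values are illustrative.
 */
#include <cstdint>
#include <cstring>
#include <iostream>

struct ParsedRowsHeader { uint64_t table_id = 0; uint16_t flags = 0; };

ParsedRowsHeader parseRowsHeader(const unsigned char * p)
{
    ParsedRowsHeader h;
    std::memcpy(&h.table_id, p, 6);          /// only the low 6 bytes are populated
    std::memcpy(&h.flags, p + 6, 2);
    uint16_t extra_data_len = 0;
    std::memcpy(&extra_data_len, p + 8, 2);
    /// p + 10 + (extra_data_len - 2) is where the row payload would start.
    return h;
}

int main()
{
    /// table_id = 0x2A, flags = 0x0001, extra_data_len = 2 (i.e. no extra data).
    const unsigned char payload[] = {0x2A, 0, 0, 0, 0, 0,  0x01, 0x00,  0x02, 0x00};
    auto h = parseRowsHeader(payload);
    std::cout << h.table_id << ' ' << h.flags << '\n';   /// prints "42 1" on little-endian hosts
}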
payload.readStrict(reinterpret_cast(&extra_data_len), 2); payload.ignore(extra_data_len - 2); + } + void RowsEvent::parseImpl(ReadBuffer & payload) + { number_columns = readLengthEncodedNumber(payload); size_t columns_bitmap_size = (number_columns + 7) / 8; switch (header.type) @@ -795,37 +800,50 @@ namespace MySQLReplication { event = std::make_shared(std::move(event_header)); event->parseEvent(event_payload); - table_map = std::static_pointer_cast(event); + auto table_map = std::static_pointer_cast(event); + table_maps[table_map->table_id] = table_map; break; } case WRITE_ROWS_EVENT_V1: case WRITE_ROWS_EVENT_V2: { - if (doReplicate()) - event = std::make_shared(table_map, std::move(event_header)); + RowsEventHeader rows_header(event_header.type); + rows_header.parse(event_payload); + if (doReplicate(rows_header.table_id)) + event = std::make_shared(table_maps.at(rows_header.table_id), std::move(event_header), rows_header); else event = std::make_shared(std::move(event_header)); event->parseEvent(event_payload); + if (rows_header.flags & ROWS_END_OF_STATEMENT) + table_maps.clear(); break; } case DELETE_ROWS_EVENT_V1: case DELETE_ROWS_EVENT_V2: { - if (doReplicate()) - event = std::make_shared(table_map, std::move(event_header)); + RowsEventHeader rows_header(event_header.type); + rows_header.parse(event_payload); + if (doReplicate(rows_header.table_id)) + event = std::make_shared(table_maps.at(rows_header.table_id), std::move(event_header), rows_header); else event = std::make_shared(std::move(event_header)); event->parseEvent(event_payload); + if (rows_header.flags & ROWS_END_OF_STATEMENT) + table_maps.clear(); break; } case UPDATE_ROWS_EVENT_V1: case UPDATE_ROWS_EVENT_V2: { - if (doReplicate()) - event = std::make_shared(table_map, std::move(event_header)); + RowsEventHeader rows_header(event_header.type); + rows_header.parse(event_payload); + if (doReplicate(rows_header.table_id)) + event = std::make_shared(table_maps.at(rows_header.table_id), std::move(event_header), rows_header); else event = std::make_shared(std::move(event_header)); event->parseEvent(event_payload); + if (rows_header.flags & ROWS_END_OF_STATEMENT) + table_maps.clear(); break; } case GTID_EVENT: @@ -843,6 +861,19 @@ namespace MySQLReplication } } } + + bool MySQLFlavor::doReplicate(UInt64 table_id) + { + if (replicate_do_db.empty()) + return false; + if (table_id == 0x00ffffff) + { + // Special "dummy event" + return false; + } + auto table_map = table_maps.at(table_id); + return table_map->schema == replicate_do_db; + } } } diff --git a/src/Core/MySQL/MySQLReplication.h b/src/Core/MySQL/MySQLReplication.h index 7c7604cad58..d415bdda70d 100644 --- a/src/Core/MySQL/MySQLReplication.h +++ b/src/Core/MySQL/MySQLReplication.h @@ -430,6 +430,22 @@ namespace MySQLReplication void parseMeta(String meta); }; + enum RowsEventFlags + { + ROWS_END_OF_STATEMENT = 1 + }; + + class RowsEventHeader + { + public: + EventType type; + UInt64 table_id; + UInt16 flags; + + RowsEventHeader(EventType type_) : type(type_), table_id(0), flags(0) {} + void parse(ReadBuffer & payload); + }; + class RowsEvent : public EventBase { public: @@ -438,9 +454,11 @@ namespace MySQLReplication String table; std::vector rows; - RowsEvent(std::shared_ptr table_map_, EventHeader && header_) - : EventBase(std::move(header_)), number_columns(0), table_id(0), flags(0), extra_data_len(0), table_map(table_map_) + RowsEvent(std::shared_ptr table_map_, EventHeader && header_, const RowsEventHeader & rows_header) + : EventBase(std::move(header_)), 
number_columns(0), table_map(table_map_) { + table_id = rows_header.table_id; + flags = rows_header.flags; schema = table_map->schema; table = table_map->table; } @@ -450,7 +468,6 @@ namespace MySQLReplication protected: UInt64 table_id; UInt16 flags; - UInt16 extra_data_len; Bitmap columns_present_bitmap1; Bitmap columns_present_bitmap2; @@ -464,21 +481,24 @@ namespace MySQLReplication class WriteRowsEvent : public RowsEvent { public: - WriteRowsEvent(std::shared_ptr table_map_, EventHeader && header_) : RowsEvent(table_map_, std::move(header_)) {} + WriteRowsEvent(std::shared_ptr table_map_, EventHeader && header_, const RowsEventHeader & rows_header) + : RowsEvent(table_map_, std::move(header_), rows_header) {} MySQLEventType type() const override { return MYSQL_WRITE_ROWS_EVENT; } }; class DeleteRowsEvent : public RowsEvent { public: - DeleteRowsEvent(std::shared_ptr table_map_, EventHeader && header_) : RowsEvent(table_map_, std::move(header_)) {} + DeleteRowsEvent(std::shared_ptr table_map_, EventHeader && header_, const RowsEventHeader & rows_header) + : RowsEvent(table_map_, std::move(header_), rows_header) {} MySQLEventType type() const override { return MYSQL_DELETE_ROWS_EVENT; } }; class UpdateRowsEvent : public RowsEvent { public: - UpdateRowsEvent(std::shared_ptr table_map_, EventHeader && header_) : RowsEvent(table_map_, std::move(header_)) {} + UpdateRowsEvent(std::shared_ptr table_map_, EventHeader && header_, const RowsEventHeader & rows_header) + : RowsEvent(table_map_, std::move(header_), rows_header) {} MySQLEventType type() const override { return MYSQL_UPDATE_ROWS_EVENT; } }; @@ -546,10 +566,10 @@ namespace MySQLReplication Position position; BinlogEventPtr event; String replicate_do_db; - std::shared_ptr table_map; + std::map > table_maps; size_t checksum_signature_length = 4; - inline bool doReplicate() { return (replicate_do_db.empty() || table_map->schema == replicate_do_db); } + bool doReplicate(UInt64 table_id); }; } diff --git a/src/Core/Protocol.h b/src/Core/Protocol.h index f383e509751..df51a0cb61a 100644 --- a/src/Core/Protocol.h +++ b/src/Core/Protocol.h @@ -75,8 +75,9 @@ namespace Protocol TablesStatusResponse = 9, /// A response to TablesStatus request. Log = 10, /// System logs of the query execution TableColumns = 11, /// Columns' description for default values calculation + PartUUIDs = 12, /// List of unique parts ids. - MAX = TableColumns, + MAX = PartUUIDs, }; /// NOTE: If the type of packet argument would be Enum, the comparison packet >= 0 && packet < 10 @@ -98,6 +99,7 @@ namespace Protocol "TablesStatusResponse", "Log", "TableColumns", + "PartUUIDs", }; return packet <= MAX ? data[packet] @@ -132,8 +134,9 @@ namespace Protocol TablesStatusRequest = 5, /// Check status of tables on the server. KeepAlive = 6, /// Keep the connection alive Scalar = 7, /// A block of data (compressed or not). + IgnoredPartUUIDs = 8, /// List of unique parts ids to exclude from query processing - MAX = Scalar, + MAX = IgnoredPartUUIDs, }; inline const char * toString(UInt64 packet) @@ -147,6 +150,7 @@ namespace Protocol "TablesStatusRequest", "KeepAlive", "Scalar", + "IgnoredPartUUIDs", }; return packet <= MAX ? 
data[packet] diff --git a/src/Core/Settings.h b/src/Core/Settings.h index c4cf3803913..96571cedd3f 100644 --- a/src/Core/Settings.h +++ b/src/Core/Settings.h @@ -86,8 +86,6 @@ class IColumn; \ M(Bool, optimize_move_to_prewhere, true, "Allows disabling WHERE to PREWHERE optimization in SELECT queries from MergeTree.", 0) \ \ - M(Milliseconds, insert_in_memory_parts_timeout, 600000, "", 0) \ - \ M(UInt64, replication_alter_partitions_sync, 1, "Wait for actions to manipulate the partitions. 0 - do not wait, 1 - wait for execution only of itself, 2 - wait for everyone.", 0) \ M(UInt64, replication_alter_columns_timeout, 60, "Wait for actions to change the table structure within the specified number of seconds. 0 - wait unlimited time.", 0) \ \ @@ -420,6 +418,9 @@ class IColumn; M(Bool, async_socket_for_remote, true, "Asynchronously read from socket executing remote query", 0) \ \ M(Bool, optimize_rewrite_sum_if_to_count_if, true, "Rewrite sumIf() and sum(if()) function countIf() function when logically equivalent", 0) \ + M(UInt64, insert_shard_id, 0, "If non zero, when insert into a distributed table, the data will be inserted into the shard `insert_shard_id` synchronously. Possible values range from 1 to `shards_number` of corresponding distributed table", 0) \ + M(Bool, allow_experimental_query_deduplication, false, "Allow sending parts' UUIDs for a query in order to deduplicate data parts if any", 0) \ + \ /** Obsolete settings that do nothing but left for compatibility reasons. Remove each one after half a year of obsolescence. */ \ \ M(UInt64, max_memory_usage_for_all_queries, 0, "Obsolete. Will be removed after 2020-10-20", 0) \ diff --git a/src/DataStreams/PushingToViewsBlockOutputStream.cpp b/src/DataStreams/PushingToViewsBlockOutputStream.cpp index a6e0dcd7356..4d1990ffe18 100644 --- a/src/DataStreams/PushingToViewsBlockOutputStream.cpp +++ b/src/DataStreams/PushingToViewsBlockOutputStream.cpp @@ -121,7 +121,7 @@ PushingToViewsBlockOutputStream::PushingToViewsBlockOutputStream( out = std::make_shared( dependent_table, dependent_metadata_snapshot, *insert_context, ASTPtr()); - views.emplace_back(ViewInfo{std::move(query), database_table, std::move(out), nullptr}); + views.emplace_back(ViewInfo{std::move(query), database_table, std::move(out), nullptr, 0 /* elapsed_ms */}); } /// Do not push to destination table if the flag is set @@ -146,8 +146,6 @@ Block PushingToViewsBlockOutputStream::getHeader() const void PushingToViewsBlockOutputStream::write(const Block & block) { - Stopwatch watch; - /** Throw an exception if the sizes of arrays - elements of nested data structures doesn't match. * We have to make this assertion before writing to table, because storage engine may assume that they have equal sizes. 
* NOTE It'd better to do this check in serialization of nested structures (in place when this assumption is required), @@ -177,15 +175,15 @@ void PushingToViewsBlockOutputStream::write(const Block & block) { // Push to views concurrently if enabled and more than one view is attached ThreadPool pool(std::min(size_t(settings.max_threads), views.size())); - for (size_t view_num = 0; view_num < views.size(); ++view_num) + for (auto & view : views) { auto thread_group = CurrentThread::getGroup(); - pool.scheduleOrThrowOnError([=, this] + pool.scheduleOrThrowOnError([=, &view, this] { setThreadName("PushingToViews"); if (thread_group) CurrentThread::attachToIfDetached(thread_group); - process(block, view_num); + process(block, view); }); } // Wait for concurrent view processing @@ -194,22 +192,14 @@ void PushingToViewsBlockOutputStream::write(const Block & block) else { // Process sequentially - for (size_t view_num = 0; view_num < views.size(); ++view_num) + for (auto & view : views) { - process(block, view_num); + process(block, view); - if (views[view_num].exception) - std::rethrow_exception(views[view_num].exception); + if (view.exception) + std::rethrow_exception(view.exception); } } - - UInt64 milliseconds = watch.elapsedMilliseconds(); - if (views.size() > 1) - { - LOG_TRACE(log, "Pushing from {} to {} views took {} ms.", - storage->getStorageID().getNameForLogs(), views.size(), - milliseconds); - } } void PushingToViewsBlockOutputStream::writePrefix() @@ -257,12 +247,13 @@ void PushingToViewsBlockOutputStream::writeSuffix() if (view.exception) continue; - pool.scheduleOrThrowOnError([thread_group, &view] + pool.scheduleOrThrowOnError([thread_group, &view, this] { setThreadName("PushingToViews"); if (thread_group) CurrentThread::attachToIfDetached(thread_group); + Stopwatch watch; try { view.out->writeSuffix(); @@ -271,6 +262,12 @@ void PushingToViewsBlockOutputStream::writeSuffix() { view.exception = std::current_exception(); } + view.elapsed_ms += watch.elapsedMilliseconds(); + + LOG_TRACE(log, "Pushing from {} to {} took {} ms.", + storage->getStorageID().getNameForLogs(), + view.table_id.getNameForLogs(), + view.elapsed_ms); }); } // Wait for concurrent view processing @@ -290,6 +287,7 @@ void PushingToViewsBlockOutputStream::writeSuffix() if (parallel_processing) continue; + Stopwatch watch; try { view.out->writeSuffix(); @@ -299,10 +297,24 @@ void PushingToViewsBlockOutputStream::writeSuffix() ex.addMessage("while write prefix to view " + view.table_id.getNameForLogs()); throw; } + view.elapsed_ms += watch.elapsedMilliseconds(); + + LOG_TRACE(log, "Pushing from {} to {} took {} ms.", + storage->getStorageID().getNameForLogs(), + view.table_id.getNameForLogs(), + view.elapsed_ms); } if (first_exception) std::rethrow_exception(first_exception); + + UInt64 milliseconds = main_watch.elapsedMilliseconds(); + if (views.size() > 1) + { + LOG_TRACE(log, "Pushing from {} to {} views took {} ms.", + storage->getStorageID().getNameForLogs(), views.size(), + milliseconds); + } } void PushingToViewsBlockOutputStream::flush() @@ -314,10 +326,9 @@ void PushingToViewsBlockOutputStream::flush() view.out->flush(); } -void PushingToViewsBlockOutputStream::process(const Block & block, size_t view_num) +void PushingToViewsBlockOutputStream::process(const Block & block, ViewInfo & view) { Stopwatch watch; - auto & view = views[view_num]; try { @@ -379,11 +390,7 @@ void PushingToViewsBlockOutputStream::process(const Block & block, size_t view_n view.exception = std::current_exception(); } - UInt64 
milliseconds = watch.elapsedMilliseconds(); - LOG_TRACE(log, "Pushing from {} to {} took {} ms.", - storage->getStorageID().getNameForLogs(), - view.table_id.getNameForLogs(), - milliseconds); + view.elapsed_ms += watch.elapsedMilliseconds(); } } diff --git a/src/DataStreams/PushingToViewsBlockOutputStream.h b/src/DataStreams/PushingToViewsBlockOutputStream.h index 30a223d26a3..6b32607b294 100644 --- a/src/DataStreams/PushingToViewsBlockOutputStream.h +++ b/src/DataStreams/PushingToViewsBlockOutputStream.h @@ -1,6 +1,7 @@ #pragma once #include +#include #include #include @@ -44,6 +45,7 @@ private: const Context & context; ASTPtr query_ptr; + Stopwatch main_watch; struct ViewInfo { @@ -51,13 +53,14 @@ private: StorageID table_id; BlockOutputStreamPtr out; std::exception_ptr exception; + UInt64 elapsed_ms = 0; }; std::vector views; std::unique_ptr select_context; std::unique_ptr insert_context; - void process(const Block & block, size_t view_num); + void process(const Block & block, ViewInfo & view); }; diff --git a/src/DataStreams/RemoteQueryExecutor.cpp b/src/DataStreams/RemoteQueryExecutor.cpp index 14e51ffefdf..fc3870b3f22 100644 --- a/src/DataStreams/RemoteQueryExecutor.cpp +++ b/src/DataStreams/RemoteQueryExecutor.cpp @@ -13,6 +13,7 @@ #include #include #include +#include namespace DB { @@ -20,6 +21,7 @@ namespace DB namespace ErrorCodes { extern const int UNKNOWN_PACKET_FROM_SERVER; + extern const int DUPLICATED_PART_UUIDS; } RemoteQueryExecutor::RemoteQueryExecutor( @@ -158,6 +160,7 @@ void RemoteQueryExecutor::sendQuery() std::lock_guard guard(was_cancelled_mutex); established = true; + was_cancelled = false; auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(settings); ClientInfo modified_client_info = context.getClientInfo(); @@ -167,6 +170,12 @@ void RemoteQueryExecutor::sendQuery() modified_client_info.client_trace_context = CurrentThread::get().thread_trace_context; } + { + std::lock_guard lock(duplicated_part_uuids_mutex); + if (!duplicated_part_uuids.empty()) + multiplexed_connections->sendIgnoredPartUUIDs(duplicated_part_uuids); + } + multiplexed_connections->sendQuery(timeouts, query, query_id, stage, modified_client_info, true); established = false; @@ -196,6 +205,8 @@ Block RemoteQueryExecutor::read() if (auto block = processPacket(std::move(packet))) return *block; + else if (got_duplicated_part_uuids) + return std::get(restartQueryWithoutDuplicatedUUIDs()); } } @@ -211,7 +222,7 @@ std::variant RemoteQueryExecutor::read(std::unique_ptr return Block(); } - if (!read_context) + if (!read_context || resent_query) { std::lock_guard lock(was_cancelled_mutex); if (was_cancelled) @@ -234,6 +245,8 @@ std::variant RemoteQueryExecutor::read(std::unique_ptr { if (auto data = processPacket(std::move(read_context->packet))) return std::move(*data); + else if (got_duplicated_part_uuids) + return restartQueryWithoutDuplicatedUUIDs(&read_context); } } while (true); @@ -242,10 +255,39 @@ std::variant RemoteQueryExecutor::read(std::unique_ptr #endif } + +std::variant RemoteQueryExecutor::restartQueryWithoutDuplicatedUUIDs(std::unique_ptr * read_context) +{ + /// Cancel previous query and disconnect before retry. 
+ cancel(read_context); + multiplexed_connections->disconnect(); + + /// Only resend once, otherwise throw an exception + if (!resent_query) + { + if (log) + LOG_DEBUG(log, "Found duplicate UUIDs, will retry query without those parts"); + + resent_query = true; + sent_query = false; + got_duplicated_part_uuids = false; + /// Consecutive read will implicitly send query first. + if (!read_context) + return read(); + else + return read(*read_context); + } + throw Exception("Found duplicate uuids while processing query.", ErrorCodes::DUPLICATED_PART_UUIDS); +} + std::optional RemoteQueryExecutor::processPacket(Packet packet) { switch (packet.type) { + case Protocol::Server::PartUUIDs: + if (!setPartUUIDs(packet.part_uuids)) + got_duplicated_part_uuids = true; + break; case Protocol::Server::Data: /// If the block is not empty and is not a header block if (packet.block && (packet.block.rows() > 0)) @@ -306,6 +348,20 @@ std::optional RemoteQueryExecutor::processPacket(Packet packet) return {}; } +bool RemoteQueryExecutor::setPartUUIDs(const std::vector & uuids) +{ + Context & query_context = const_cast(context).getQueryContext(); + auto duplicates = query_context.getPartUUIDs()->add(uuids); + + if (!duplicates.empty()) + { + std::lock_guard lock(duplicated_part_uuids_mutex); + duplicated_part_uuids.insert(duplicated_part_uuids.begin(), duplicates.begin(), duplicates.end()); + return false; + } + return true; +} + void RemoteQueryExecutor::finish(std::unique_ptr * read_context) { /** If one of: @@ -383,6 +439,7 @@ void RemoteQueryExecutor::sendExternalTables() { std::lock_guard lock(external_tables_mutex); + external_tables_data.clear(); external_tables_data.reserve(count); for (size_t i = 0; i < count; ++i) diff --git a/src/DataStreams/RemoteQueryExecutor.h b/src/DataStreams/RemoteQueryExecutor.h index 46d9d067563..6a10627b948 100644 --- a/src/DataStreams/RemoteQueryExecutor.h +++ b/src/DataStreams/RemoteQueryExecutor.h @@ -57,6 +57,9 @@ public: /// Create connection and send query, external tables and scalars. void sendQuery(); + /// Query is resent to a replica, the query itself can be modified. + std::atomic resent_query { false }; + /// Read next block of data. Returns empty block if query is finished. Block read(); @@ -152,6 +155,14 @@ private: */ std::atomic got_unknown_packet_from_replica { false }; + /** Got duplicated uuids from replica + */ + std::atomic got_duplicated_part_uuids{ false }; + + /// Parts uuids, collected from remote replicas + std::mutex duplicated_part_uuids_mutex; + std::vector duplicated_part_uuids; + PoolMode pool_mode = PoolMode::GET_MANY; StorageID main_table = StorageID::createEmpty(); @@ -163,6 +174,14 @@ private: /// Send all temporary tables to remote servers void sendExternalTables(); + /// Set part uuids to a query context, collected from remote replicas. + /// Return true if duplicates found. + bool setPartUUIDs(const std::vector & uuids); + + /// Cancell query and restart it with info about duplicated UUIDs + /// only for `allow_experimental_query_deduplication`. + std::variant restartQueryWithoutDuplicatedUUIDs(std::unique_ptr * read_context = nullptr); + /// If wasn't sent yet, send request to cancel all connections to replicas void tryCancel(const char * reason, std::unique_ptr * read_context); @@ -174,6 +193,10 @@ private: /// Process packet for read and return data block if possible. 
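/* A condensed sketch (hypothetical FakeExecutor type, not the real RemoteQueryExecutor) of
 * the retry policy implemented above for allow_experimental_query_deduplication: when a
 * replica reports part UUIDs the initiator has already seen, the query is resent exactly
 * once with those UUIDs in the "ignored" set; a second collision is treated as an error.
 */
#include <set>
#include <stdexcept>
#include <string>
#include <vector>
#include <iostream>

struct FakeExecutor
{
    std::set<std::string> seen_uuids;        /// per-query set, kept in the query context
    std::vector<std::string> ignored_uuids;  /// sent to replicas before the next attempt
    bool resent_query = false;

    /// Returns true if all uuids are new; otherwise records the duplicates for a retry.
    bool addPartUUIDs(const std::vector<std::string> & uuids)
    {
        bool all_new = true;
        for (const auto & uuid : uuids)
            if (!seen_uuids.insert(uuid).second)
            {
                ignored_uuids.push_back(uuid);
                all_new = false;
            }
        return all_new;
    }

    void onDuplicates()
    {
        if (resent_query)
            throw std::runtime_error("Found duplicate uuids while processing query.");
        resent_query = true;
        /// ...cancel, disconnect, send ignored_uuids, resend the query...
    }
};

int main()
{
    FakeExecutor executor;
    if (!executor.addPartUUIDs({"part-1", "part-2"}))
        executor.onDuplicates();
    if (!executor.addPartUUIDs({"part-2"}))   /// a replica reports an already-seen part
        executor.onDuplicates();              /// first collision: retry once
    std::cout << executor.ignored_uuids.size() << '\n';   /// prints 1
}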
std::optional processPacket(Packet packet); + + /// Reads packet by packet + Block readPackets(); + }; } diff --git a/src/DataTypes/NumberTraits.h b/src/DataTypes/NumberTraits.h index 3aa00c68274..c3b0d956ef5 100644 --- a/src/DataTypes/NumberTraits.h +++ b/src/DataTypes/NumberTraits.h @@ -104,11 +104,16 @@ template struct ResultOfIntegerDivision sizeof(A)>::Type; }; -/** Division with remainder you get a number with the same number of bits as in divisor. - */ +/** Division with remainder you get a number with the same number of bits as in divisor, + * or larger in case of signed type. + */ template struct ResultOfModulo { - using Type0 = typename Construct || is_signed_v, false, sizeof(B)>::Type; + static constexpr bool result_is_signed = is_signed_v; + /// If modulo of division can yield negative number, we need larger type to accommodate it. + /// Example: toInt32(-199) % toUInt8(200) will return -199 that does not fit in Int8, only in Int16. + static constexpr size_t size_of_result = result_is_signed ? nextSize(sizeof(B)) : sizeof(B); + using Type0 = typename Construct::Type; using Type = std::conditional_t || std::is_floating_point_v, Float64, Type0>; }; diff --git a/src/Dictionaries/CacheDictionary.cpp b/src/Dictionaries/CacheDictionary.cpp index ad98d69fdf9..67bcab109ea 100644 --- a/src/Dictionaries/CacheDictionary.cpp +++ b/src/Dictionaries/CacheDictionary.cpp @@ -1291,7 +1291,6 @@ void CacheDictionary::update(UpdateUnitPtr & update_unit_ptr) BlockInputStreamPtr stream = current_source_ptr->loadIds(update_unit_ptr->requested_ids); stream->readPrefix(); - while (true) { Block block = stream->read(); diff --git a/src/Dictionaries/ExecutableDictionarySource.cpp b/src/Dictionaries/ExecutableDictionarySource.cpp index 42dac540f09..37dde600adf 100644 --- a/src/Dictionaries/ExecutableDictionarySource.cpp +++ b/src/Dictionaries/ExecutableDictionarySource.cpp @@ -186,6 +186,9 @@ namespace if (!err.empty()) LOG_ERROR(log, "Having stderr: {}", err); + if (thread.joinable()) + thread.join(); + command->wait(); } diff --git a/src/Disks/DiskCacheWrapper.cpp b/src/Disks/DiskCacheWrapper.cpp index 26e6b7609f3..6d991c17c67 100644 --- a/src/Disks/DiskCacheWrapper.cpp +++ b/src/Disks/DiskCacheWrapper.cpp @@ -108,7 +108,7 @@ DiskCacheWrapper::readFile(const String & path, size_t buf_size, size_t estimate if (!cache_file_predicate(path)) return DiskDecorator::readFile(path, buf_size, estimated_size, aio_threshold, mmap_threshold); - LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Read file {} from cache", backQuote(path)); + LOG_DEBUG(&Poco::Logger::get("DiskCache"), "Read file {} from cache", backQuote(path)); if (cache_disk->exists(path)) return cache_disk->readFile(path, buf_size, estimated_size, aio_threshold, mmap_threshold); @@ -122,11 +122,11 @@ DiskCacheWrapper::readFile(const String & path, size_t buf_size, size_t estimate { /// This thread will responsible for file downloading to cache. metadata->status = DOWNLOADING; - LOG_DEBUG(&Poco::Logger::get("DiskS3"), "File {} doesn't exist in cache. Will download it", backQuote(path)); + LOG_DEBUG(&Poco::Logger::get("DiskCache"), "File {} doesn't exist in cache. 
Will download it", backQuote(path)); } else if (metadata->status == DOWNLOADING) { - LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Waiting for file {} download to cache", backQuote(path)); + LOG_DEBUG(&Poco::Logger::get("DiskCache"), "Waiting for file {} download to cache", backQuote(path)); metadata->condition.wait(lock, [metadata] { return metadata->status == DOWNLOADED || metadata->status == ERROR; }); } } @@ -139,7 +139,7 @@ DiskCacheWrapper::readFile(const String & path, size_t buf_size, size_t estimate { try { - auto dir_path = getDirectoryPath(path); + auto dir_path = directoryPath(path); if (!cache_disk->exists(dir_path)) cache_disk->createDirectories(dir_path); @@ -151,11 +151,11 @@ DiskCacheWrapper::readFile(const String & path, size_t buf_size, size_t estimate } cache_disk->moveFile(tmp_path, path); - LOG_DEBUG(&Poco::Logger::get("DiskS3"), "File {} downloaded to cache", backQuote(path)); + LOG_DEBUG(&Poco::Logger::get("DiskCache"), "File {} downloaded to cache", backQuote(path)); } catch (...) { - tryLogCurrentException("DiskS3", "Failed to download file + " + backQuote(path) + " to cache"); + tryLogCurrentException("DiskCache", "Failed to download file + " + backQuote(path) + " to cache"); result_status = ERROR; } } @@ -180,9 +180,9 @@ DiskCacheWrapper::writeFile(const String & path, size_t buf_size, WriteMode mode if (!cache_file_predicate(path)) return DiskDecorator::writeFile(path, buf_size, mode); - LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Write file {} to cache", backQuote(path)); + LOG_DEBUG(&Poco::Logger::get("DiskCache"), "Write file {} to cache", backQuote(path)); - auto dir_path = getDirectoryPath(path); + auto dir_path = directoryPath(path); if (!cache_disk->exists(dir_path)) cache_disk->createDirectories(dir_path); @@ -217,7 +217,7 @@ void DiskCacheWrapper::moveFile(const String & from_path, const String & to_path { if (cache_disk->exists(from_path)) { - auto dir_path = getDirectoryPath(to_path); + auto dir_path = directoryPath(to_path); if (!cache_disk->exists(dir_path)) cache_disk->createDirectories(dir_path); @@ -230,7 +230,7 @@ void DiskCacheWrapper::replaceFile(const String & from_path, const String & to_p { if (cache_disk->exists(from_path)) { - auto dir_path = getDirectoryPath(to_path); + auto dir_path = directoryPath(to_path); if (!cache_disk->exists(dir_path)) cache_disk->createDirectories(dir_path); @@ -239,19 +239,6 @@ void DiskCacheWrapper::replaceFile(const String & from_path, const String & to_p DiskDecorator::replaceFile(from_path, to_path); } -void DiskCacheWrapper::copyFile(const String & from_path, const String & to_path) -{ - if (cache_disk->exists(from_path)) - { - auto dir_path = getDirectoryPath(to_path); - if (!cache_disk->exists(dir_path)) - cache_disk->createDirectories(dir_path); - - cache_disk->copyFile(from_path, to_path); - } - DiskDecorator::copyFile(from_path, to_path); -} - void DiskCacheWrapper::removeFile(const String & path) { cache_disk->removeFileIfExists(path); @@ -280,9 +267,10 @@ void DiskCacheWrapper::removeRecursive(const String & path) void DiskCacheWrapper::createHardLink(const String & src_path, const String & dst_path) { - if (cache_disk->exists(src_path)) + /// Don't create hardlinks for cache files to shadow directory as it just waste cache disk space. 
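
The ResultOfModulo change in the NumberTraits hunk above widens the result type when the dividend is signed. A small self-contained check of the example given in its comment (plain fixed-width integers stand in for ClickHouse's Int32/UInt8/Int8/Int16 types):

#include <cstdint>
#include <iostream>

int main()
{
    int32_t dividend = -199;   /// toInt32(-199)
    uint8_t divisor = 200;     /// toUInt8(200)

    /// Usual arithmetic conversions promote both operands to int, and C++
    /// truncates toward zero, so the remainder keeps the sign of the dividend.
    int remainder = dividend % divisor;

    std::cout << remainder << '\n';                                          /// -199
    std::cout << (remainder >= INT8_MIN && remainder <= INT8_MAX) << '\n';   /// 0: does not fit in Int8
    std::cout << (remainder >= INT16_MIN && remainder <= INT16_MAX) << '\n'; /// 1: fits in Int16, i.e. nextSize(sizeof(divisor))
}
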
+ if (cache_disk->exists(src_path) && !dst_path.starts_with("shadow/")) { - auto dir_path = getDirectoryPath(dst_path); + auto dir_path = directoryPath(dst_path); if (!cache_disk->exists(dir_path)) cache_disk->createDirectories(dir_path); @@ -303,11 +291,6 @@ void DiskCacheWrapper::createDirectories(const String & path) DiskDecorator::createDirectories(path); } -inline String DiskCacheWrapper::getDirectoryPath(const String & path) -{ - return Poco::Path{path}.setFileName("").toString(); -} - /// TODO: Current reservation mechanism leaks IDisk abstraction details. /// This hack is needed to return proper disk pointer (wrapper instead of implementation) from reservation object. class ReservationDelegate : public IReservation diff --git a/src/Disks/DiskCacheWrapper.h b/src/Disks/DiskCacheWrapper.h index 10d24bf92e8..bf1a5df693a 100644 --- a/src/Disks/DiskCacheWrapper.h +++ b/src/Disks/DiskCacheWrapper.h @@ -32,7 +32,6 @@ public: void moveDirectory(const String & from_path, const String & to_path) override; void moveFile(const String & from_path, const String & to_path) override; void replaceFile(const String & from_path, const String & to_path) override; - void copyFile(const String & from_path, const String & to_path) override; std::unique_ptr readFile(const String & path, size_t buf_size, size_t estimated_size, size_t aio_threshold, size_t mmap_threshold) const override; std::unique_ptr @@ -46,7 +45,6 @@ public: private: std::shared_ptr acquireDownloadMetadata(const String & path) const; - static String getDirectoryPath(const String & path); /// Disk to cache files. std::shared_ptr cache_disk; diff --git a/src/Disks/DiskDecorator.cpp b/src/Disks/DiskDecorator.cpp index 0c052ce8e91..3ebd1b6cf3b 100644 --- a/src/Disks/DiskDecorator.cpp +++ b/src/Disks/DiskDecorator.cpp @@ -103,11 +103,6 @@ void DiskDecorator::replaceFile(const String & from_path, const String & to_path delegate->replaceFile(from_path, to_path); } -void DiskDecorator::copyFile(const String & from_path, const String & to_path) -{ - delegate->copyFile(from_path, to_path); -} - void DiskDecorator::copy(const String & from_path, const std::shared_ptr & to_disk, const String & to_path) { delegate->copy(from_path, to_disk, to_path); @@ -185,4 +180,9 @@ SyncGuardPtr DiskDecorator::getDirectorySyncGuard(const String & path) const return delegate->getDirectorySyncGuard(path); } +void DiskDecorator::onFreeze(const String & path) +{ + delegate->onFreeze(path); +} + } diff --git a/src/Disks/DiskDecorator.h b/src/Disks/DiskDecorator.h index b50252c2c97..c204d10bed9 100644 --- a/src/Disks/DiskDecorator.h +++ b/src/Disks/DiskDecorator.h @@ -32,7 +32,6 @@ public: void createFile(const String & path) override; void moveFile(const String & from_path, const String & to_path) override; void replaceFile(const String & from_path, const String & to_path) override; - void copyFile(const String & from_path, const String & to_path) override; void copy(const String & from_path, const std::shared_ptr & to_disk, const String & to_path) override; void listFiles(const String & path, std::vector & file_names) override; std::unique_ptr @@ -48,8 +47,9 @@ public: void setReadOnly(const String & path) override; void createHardLink(const String & src_path, const String & dst_path) override; void truncateFile(const String & path, size_t size) override; - const String getType() const override { return delegate->getType(); } + DiskType::Type getType() const override { return delegate->getType(); } Executor & getExecutor() override; + void onFreeze(const String & 
path) override; SyncGuardPtr getDirectorySyncGuard(const String & path) const override; protected: diff --git a/src/Disks/DiskLocal.cpp b/src/Disks/DiskLocal.cpp index 8787f613bf7..5035a865191 100644 --- a/src/Disks/DiskLocal.cpp +++ b/src/Disks/DiskLocal.cpp @@ -218,11 +218,6 @@ void DiskLocal::replaceFile(const String & from_path, const String & to_path) from_file.renameTo(to_file.path()); } -void DiskLocal::copyFile(const String & from_path, const String & to_path) -{ - Poco::File(disk_path + from_path).copyTo(disk_path + to_path); -} - std::unique_ptr DiskLocal::readFile(const String & path, size_t buf_size, size_t estimated_size, size_t aio_threshold, size_t mmap_threshold) const { diff --git a/src/Disks/DiskLocal.h b/src/Disks/DiskLocal.h index d8d45290986..7dbfbe445f8 100644 --- a/src/Disks/DiskLocal.h +++ b/src/Disks/DiskLocal.h @@ -67,8 +67,6 @@ public: void replaceFile(const String & from_path, const String & to_path) override; - void copyFile(const String & from_path, const String & to_path) override; - void copy(const String & from_path, const std::shared_ptr & to_disk, const String & to_path) override; void listFiles(const String & path, std::vector & file_names) override; @@ -100,7 +98,7 @@ public: void truncateFile(const String & path, size_t size) override; - const String getType() const override { return "local"; } + DiskType::Type getType() const override { return DiskType::Type::Local; } SyncGuardPtr getDirectorySyncGuard(const String & path) const override; diff --git a/src/Disks/DiskMemory.cpp b/src/Disks/DiskMemory.cpp index d2ed91a8263..a0905e67427 100644 --- a/src/Disks/DiskMemory.cpp +++ b/src/Disks/DiskMemory.cpp @@ -314,11 +314,6 @@ void DiskMemory::replaceFileImpl(const String & from_path, const String & to_pat files.insert(std::move(node)); } -void DiskMemory::copyFile(const String & /*from_path*/, const String & /*to_path*/) -{ - throw Exception("Method copyFile is not implemented for memory disks", ErrorCodes::NOT_IMPLEMENTED); -} - std::unique_ptr DiskMemory::readFile(const String & path, size_t /*buf_size*/, size_t, size_t, size_t) const { std::lock_guard lock(mutex); diff --git a/src/Disks/DiskMemory.h b/src/Disks/DiskMemory.h index 3ebc76661d4..29ac4919833 100644 --- a/src/Disks/DiskMemory.h +++ b/src/Disks/DiskMemory.h @@ -60,8 +60,6 @@ public: void replaceFile(const String & from_path, const String & to_path) override; - void copyFile(const String & from_path, const String & to_path) override; - void listFiles(const String & path, std::vector & file_names) override; std::unique_ptr readFile( @@ -91,7 +89,7 @@ public: void truncateFile(const String & path, size_t size) override; - const String getType() const override { return "memory"; } + DiskType::Type getType() const override { return DiskType::Type::RAM; } private: void createDirectoriesImpl(const String & path); diff --git a/src/Disks/IDisk.h b/src/Disks/IDisk.h index f41490a0807..6f021346174 100644 --- a/src/Disks/IDisk.h +++ b/src/Disks/IDisk.h @@ -57,6 +57,29 @@ public: using SpacePtr = std::shared_ptr; +struct DiskType +{ + enum class Type + { + Local, + RAM, + S3 + }; + static String toString(Type disk_type) + { + switch (disk_type) + { + case Type::Local: + return "local"; + case Type::RAM: + return "memory"; + case Type::S3: + return "s3"; + } + __builtin_unreachable(); + } +}; + /** * A guard, that should synchronize file's or directory's state * with storage device (e.g. fsync in POSIX) in its destructor. 
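
IDisk::getType() now returns the DiskType::Type enum instead of a free-form string, with DiskType::toString as the single place that maps it back to "local"/"memory"/"s3". A minimal sketch of switching on the enum rather than comparing strings (checkS3Capable is a hypothetical caller, and the portable "unknown" fallback replaces the real code's __builtin_unreachable()):

#include <iostream>
#include <string>

/// Same shape as the DiskType struct added to IDisk.h.
struct DiskType
{
    enum class Type { Local, RAM, S3 };

    static std::string toString(Type disk_type)
    {
        switch (disk_type)
        {
            case Type::Local: return "local";
            case Type::RAM:   return "memory";
            case Type::S3:    return "s3";
        }
        return "unknown";   /// the real code marks this branch unreachable
    }
};

/// Hypothetical caller: dispatch on the enum instead of parsing a string.
bool checkS3Capable(DiskType::Type type)
{
    return type == DiskType::Type::S3;
}

int main()
{
    auto type = DiskType::Type::S3;
    std::cout << DiskType::toString(type) << ' ' << checkS3Capable(type) << '\n';   /// s3 1
}
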
@@ -140,9 +163,6 @@ public: /// If a file with `to_path` path already exists, it will be replaced. virtual void replaceFile(const String & from_path, const String & to_path) = 0; - /// Copy the file from `from_path` to `to_path`. - virtual void copyFile(const String & from_path, const String & to_path) = 0; - /// Recursively copy data containing at `from_path` to `to_path` located at `to_disk`. virtual void copy(const String & from_path, const std::shared_ptr & to_disk, const String & to_path); @@ -191,7 +211,7 @@ public: virtual void truncateFile(const String & path, size_t size); /// Return disk type - "local", "s3", etc. - virtual const String getType() const = 0; + virtual DiskType::Type getType() const = 0; /// Invoked when Global Context is shutdown. virtual void shutdown() { } @@ -199,6 +219,9 @@ public: /// Returns executor to perform asynchronous operations. virtual Executor & getExecutor() { return *executor; } + /// Invoked on partitions freeze query. + virtual void onFreeze(const String &) { } + /// Returns guard, that insures synchronization of directory metadata with storage device. virtual SyncGuardPtr getDirectorySyncGuard(const String & path) const; @@ -269,4 +292,11 @@ inline String fileName(const String & path) { return Poco::Path(path).getFileName(); } + +/// Return directory path for the specified path. +inline String directoryPath(const String & path) +{ + return Poco::Path(path).setFileName("").toString(); +} + } diff --git a/src/Disks/S3/DiskS3.cpp b/src/Disks/S3/DiskS3.cpp index 238db98c9cc..3d91d5fbb78 100644 --- a/src/Disks/S3/DiskS3.cpp +++ b/src/Disks/S3/DiskS3.cpp @@ -23,6 +23,8 @@ #include #include #include +#include +#include #include @@ -32,12 +34,15 @@ namespace DB namespace ErrorCodes { + extern const int S3_ERROR; extern const int FILE_ALREADY_EXISTS; extern const int CANNOT_SEEK_THROUGH_FILE; extern const int UNKNOWN_FORMAT; extern const int INCORRECT_DISK_INDEX; + extern const int BAD_ARGUMENTS; extern const int PATH_ACCESS_DENIED; extern const int CANNOT_DELETE_DIRECTORY; + extern const int LOGICAL_ERROR; } @@ -76,12 +81,12 @@ String getRandomName() } template -void throwIfError(Aws::Utils::Outcome && response) +void throwIfError(Aws::Utils::Outcome & response) { if (!response.IsSuccess()) { const auto & err = response.GetError(); - throw Exception(err.GetMessage(), static_cast(err.GetErrorType())); + throw Exception(std::to_string(static_cast(err.GetErrorType())) + ": " + err.GetMessage(), ErrorCodes::S3_ERROR); } } @@ -244,7 +249,7 @@ public: if (whence == SEEK_CUR) { /// If position within current working buffer - shift pos. - if (working_buffer.size() && size_t(getPosition() + offset_) < absolute_position) + if (!working_buffer.empty() && size_t(getPosition() + offset_) < absolute_position) { pos += offset_; return getPosition(); @@ -257,7 +262,7 @@ public: else if (whence == SEEK_SET) { /// If position within current working buffer - shift pos. - if (working_buffer.size() && size_t(offset_) >= absolute_position - working_buffer.size() + if (!working_buffer.empty() && size_t(offset_) >= absolute_position - working_buffer.size() && size_t(offset_) < absolute_position) { pos = working_buffer.end() - (absolute_position - offset_); @@ -500,17 +505,17 @@ private: CurrentMetrics::Increment metric_increment; }; -/// Runs tasks asynchronously using global thread pool. +/// Runs tasks asynchronously using thread pool. 
class AsyncExecutor : public Executor { public: - explicit AsyncExecutor() = default; + explicit AsyncExecutor(int thread_pool_size) : pool(ThreadPool(thread_pool_size)) { } std::future execute(std::function task) override { auto promise = std::make_shared>(); - GlobalThreadPool::instance().scheduleOrThrowOnError( + pool.scheduleOrThrowOnError( [promise, task]() { try @@ -531,6 +536,9 @@ public: return promise->get_future(); } + +private: + ThreadPool pool; }; @@ -544,8 +552,10 @@ DiskS3::DiskS3( size_t min_upload_part_size_, size_t max_single_part_upload_size_, size_t min_bytes_for_seek_, - bool send_metadata_) - : IDisk(std::make_unique()) + bool send_metadata_, + int thread_pool_size_, + int list_object_keys_size_) + : IDisk(std::make_unique(thread_pool_size_)) , name(std::move(name_)) , client(std::move(client_)) , proxy_configuration(std::move(proxy_configuration_)) @@ -556,6 +566,8 @@ DiskS3::DiskS3( , max_single_part_upload_size(max_single_part_upload_size_) , min_bytes_for_seek(min_bytes_for_seek_) , send_metadata(send_metadata_) + , revision_counter(0) + , list_object_keys_size(list_object_keys_size_) { } @@ -613,45 +625,31 @@ void DiskS3::moveFile(const String & from_path, const String & to_path) { if (exists(to_path)) throw Exception("File already exists: " + to_path, ErrorCodes::FILE_ALREADY_EXISTS); + + if (send_metadata) + { + auto revision = ++revision_counter; + const DiskS3::ObjectMetadata object_metadata { + {"from_path", from_path}, + {"to_path", to_path} + }; + createFileOperationObject("rename", revision, object_metadata); + } + Poco::File(metadata_path + from_path).renameTo(metadata_path + to_path); } void DiskS3::replaceFile(const String & from_path, const String & to_path) { - Poco::File from_file(metadata_path + from_path); - Poco::File to_file(metadata_path + to_path); - if (to_file.exists()) + if (exists(to_path)) { - Poco::File tmp_file(metadata_path + to_path + ".old"); - to_file.renameTo(tmp_file.path()); - from_file.renameTo(metadata_path + to_path); - removeFile(to_path + ".old"); + const String tmp_path = to_path + ".old"; + moveFile(to_path, tmp_path); + moveFile(from_path, to_path); + removeFile(tmp_path); } else - from_file.renameTo(to_file.path()); -} - -void DiskS3::copyFile(const String & from_path, const String & to_path) -{ - if (exists(to_path)) - removeFile(to_path); - - auto from = readMeta(from_path); - auto to = createMeta(to_path); - - for (const auto & [path, size] : from.s3_objects) - { - auto new_path = getRandomName(); - Aws::S3::Model::CopyObjectRequest req; - req.SetCopySource(bucket + "/" + s3_root_path + path); - req.SetBucket(bucket); - req.SetKey(s3_root_path + new_path); - throwIfError(client->CopyObject(req)); - - to.addObject(new_path, size); - } - - to.save(); + moveFile(from_path, to_path); } std::unique_ptr DiskS3::readFile(const String & path, size_t buf_size, size_t, size_t, size_t) const @@ -673,7 +671,17 @@ std::unique_ptr DiskS3::writeFile(const String & path, /// Path to store new S3 object. auto s3_path = getRandomName(); - auto object_metadata = createObjectMetadata(path); + + std::optional object_metadata; + if (send_metadata) + { + auto revision = ++revision_counter; + object_metadata = { + {"path", path} + }; + s3_path = "r" + revisionToString(revision) + "-file-" + s3_path; + } + if (!exist || mode == WriteMode::Rewrite) { /// If metadata file exists - remove and create new. 
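
AsyncExecutor above now owns a fixed-size ThreadPool and wraps every scheduled task in a promise/future pair so callers can wait for completion and receive exceptions. A reduced sketch of the same pattern with a single worker thread and only the standard library (the real class schedules through ClickHouse's ThreadPool::scheduleOrThrowOnError):

#include <condition_variable>
#include <functional>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

/// One-worker "pool": tasks run in order; execute() returns a future that is
/// fulfilled when the task finishes, or carries the task's exception.
class TinyExecutor
{
public:
    TinyExecutor() : worker([this] { run(); }) {}

    ~TinyExecutor()
    {
        {
            std::lock_guard lock(mutex);
            done = true;
        }
        condition.notify_one();
        worker.join();
    }

    std::future<void> execute(std::function<void()> task)
    {
        auto promise = std::make_shared<std::promise<void>>();
        {
            std::lock_guard lock(mutex);
            tasks.push([promise, task]
            {
                try
                {
                    task();
                    promise->set_value();
                }
                catch (...)
                {
                    promise->set_exception(std::current_exception());
                }
            });
        }
        condition.notify_one();
        return promise->get_future();
    }

private:
    void run()
    {
        while (true)
        {
            std::function<void()> task;
            {
                std::unique_lock lock(mutex);
                condition.wait(lock, [this] { return done || !tasks.empty(); });
                if (tasks.empty())
                    return;
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();
        }
    }

    std::mutex mutex;
    std::condition_variable condition;
    std::queue<std::function<void()>> tasks;
    bool done = false;
    std::thread worker;
};

int main()
{
    TinyExecutor executor;
    auto future = executor.execute([] { std::cout << "restoring a batch of keys\n"; });
    future.wait();
}
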
@@ -777,7 +785,8 @@ void DiskS3::removeAws(const AwsS3KeyKeeper & keys) Aws::S3::Model::DeleteObjectsRequest request; request.SetBucket(bucket); request.SetDelete(delkeys); - throwIfError(client->DeleteObjects(request)); + auto outcome = client->DeleteObjects(request); + throwIfError(outcome); } } } @@ -852,6 +861,17 @@ Poco::Timestamp DiskS3::getLastModified(const String & path) void DiskS3::createHardLink(const String & src_path, const String & dst_path) { + /// We don't need to record hardlinks created to shadow folder. + if (send_metadata && !dst_path.starts_with("shadow/")) + { + auto revision = ++revision_counter; + const ObjectMetadata object_metadata { + {"src_path", src_path}, + {"dst_path", dst_path} + }; + createFileOperationObject("hardlink", revision, object_metadata); + } + /// Increment number of references. auto src = readMeta(src_path); ++src.ref_count; @@ -886,12 +906,368 @@ void DiskS3::shutdown() client->DisableRequestProcessing(); } -std::optional DiskS3::createObjectMetadata(const String & path) const +void DiskS3::createFileOperationObject(const String & operation_name, UInt64 revision, const DiskS3::ObjectMetadata & metadata) { - if (send_metadata) - return (DiskS3::ObjectMetadata){{"path", path}}; + const String key = "operations/r" + revisionToString(revision) + "-" + operation_name; + WriteBufferFromS3 buffer(client, bucket, s3_root_path + key, min_upload_part_size, max_single_part_upload_size, metadata); + buffer.write('0'); + buffer.finalize(); +} - return {}; +void DiskS3::startup() +{ + if (!send_metadata) + return; + + LOG_INFO(&Poco::Logger::get("DiskS3"), "Starting up disk {}", name); + + /// Find last revision. + UInt64 l = 0, r = LATEST_REVISION; + while (l < r) + { + LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Check revision in bounds {}-{}", l, r); + + auto revision = l + (r - l + 1) / 2; + auto revision_str = revisionToString(revision); + + LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Check object with revision {}", revision); + + /// Check file or operation with such revision exists. 
+ if (checkObjectExists(s3_root_path + "r" + revision_str) + || checkObjectExists(s3_root_path + "operations/r" + revision_str)) + l = revision; + else + r = revision - 1; + } + revision_counter = l; + LOG_INFO(&Poco::Logger::get("DiskS3"), "Found last revision number {} for disk {}", revision_counter, name); +} + +bool DiskS3::checkObjectExists(const String & prefix) +{ + Aws::S3::Model::ListObjectsV2Request request; + request.SetBucket(bucket); + request.SetPrefix(prefix); + request.SetMaxKeys(1); + + auto outcome = client->ListObjectsV2(request); + throwIfError(outcome); + + return !outcome.GetResult().GetContents().empty(); +} + +Aws::S3::Model::HeadObjectResult DiskS3::headObject(const String & source_bucket, const String & key) +{ + Aws::S3::Model::HeadObjectRequest request; + request.SetBucket(source_bucket); + request.SetKey(key); + + auto outcome = client->HeadObject(request); + throwIfError(outcome); + + return outcome.GetResultWithOwnership(); +} + +void DiskS3::listObjects(const String & source_bucket, const String & source_path, std::function callback) +{ + Aws::S3::Model::ListObjectsV2Request request; + request.SetBucket(source_bucket); + request.SetPrefix(source_path); + request.SetMaxKeys(list_object_keys_size); + + Aws::S3::Model::ListObjectsV2Outcome outcome; + do + { + outcome = client->ListObjectsV2(request); + throwIfError(outcome); + + bool should_continue = callback(outcome.GetResult()); + + if (!should_continue) + break; + + request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken()); + } while (outcome.GetResult().GetIsTruncated()); +} + +void DiskS3::copyObject(const String & src_bucket, const String & src_key, const String & dst_bucket, const String & dst_key) +{ + Aws::S3::Model::CopyObjectRequest request; + request.SetCopySource(src_bucket + "/" + src_key); + request.SetBucket(dst_bucket); + request.SetKey(dst_key); + + auto outcome = client->CopyObject(request); + throwIfError(outcome); +} + +struct DiskS3::RestoreInformation +{ + UInt64 revision = LATEST_REVISION; + String source_bucket; + String source_path; +}; + +void DiskS3::readRestoreInformation(DiskS3::RestoreInformation & restore_information) +{ + ReadBufferFromFile buffer(metadata_path + restore_file_name, 512); + buffer.next(); + + /// Empty file - just restore all metadata. + if (!buffer.hasPendingData()) + return; + + try + { + readIntText(restore_information.revision, buffer); + assertChar('\n', buffer); + + if (!buffer.hasPendingData()) + return; + + readText(restore_information.source_bucket, buffer); + assertChar('\n', buffer); + + if (!buffer.hasPendingData()) + return; + + readText(restore_information.source_path, buffer); + assertChar('\n', buffer); + + if (buffer.hasPendingData()) + throw Exception("Extra information at the end of restore file", ErrorCodes::UNKNOWN_FORMAT); + } + catch (const Exception & e) + { + throw Exception("Failed to read restore information", e, ErrorCodes::UNKNOWN_FORMAT); + } +} + +void DiskS3::restore() +{ + if (!exists(restore_file_name)) + return; + + try + { + RestoreInformation information; + information.source_bucket = bucket; + information.source_path = s3_root_path; + + readRestoreInformation(information); + if (information.revision == 0) + information.revision = LATEST_REVISION; + if (!information.source_path.ends_with('/')) + information.source_path += '/'; + + if (information.source_bucket == bucket) + { + /// In this case we need to additionally cleanup S3 from objects with later revision. 
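
startup() above finds the last used revision with a binary search over the revision space, probing whether any object exists for the candidate revision. The same search reduced to a standalone function with a mocked existence check (objectWithRevisionExists is a simplified stand-in for the two checkObjectExists probes):

#include <cstdint>
#include <iostream>

/// Mocked probe: pretend objects exist for every revision up to 12345.
bool objectWithRevisionExists(uint64_t revision)
{
    return revision <= 12345;
}

/// Binary search for the largest revision that still has an object,
/// mirroring the l/r loop in DiskS3::startup().
uint64_t findLastRevision(uint64_t latest_revision)
{
    uint64_t l = 0, r = latest_revision;
    while (l < r)
    {
        uint64_t revision = l + (r - l + 1) / 2;   /// bias upward so the loop terminates
        if (objectWithRevisionExists(revision))
            l = revision;       /// something exists at this revision, move the lower bound up
        else
            r = revision - 1;   /// nothing here, the answer must be lower
    }
    return l;
}

int main()
{
    const uint64_t LATEST_REVISION = uint64_t(1) << 63;
    std::cout << findLastRevision(LATEST_REVISION) << '\n';   /// 12345
}
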
+ /// Will be simply just restore to different path. + if (information.source_path == s3_root_path && information.revision != LATEST_REVISION) + throw Exception("Restoring to the same bucket and path is allowed if revision is latest (0)", ErrorCodes::BAD_ARGUMENTS); + + /// This case complicates S3 cleanup in case of unsuccessful restore. + if (information.source_path != s3_root_path && s3_root_path.starts_with(information.source_path)) + throw Exception("Restoring to the same bucket is allowed only if source path is not a sub-path of configured path in S3 disk", ErrorCodes::BAD_ARGUMENTS); + } + + ///TODO: Cleanup FS and bucket if previous restore was failed. + + LOG_INFO(&Poco::Logger::get("DiskS3"), "Starting to restore disk {}. Revision: {}, Source bucket: {}, Source path: {}", + name, information.revision, information.source_bucket, information.source_path); + + restoreFiles(information.source_bucket, information.source_path, information.revision); + restoreFileOperations(information.source_bucket, information.source_path, information.revision); + + Poco::File restore_file(metadata_path + restore_file_name); + restore_file.remove(); + + LOG_INFO(&Poco::Logger::get("DiskS3"), "Restore disk {} finished", name); + } + catch (const Exception & e) + { + LOG_ERROR(&Poco::Logger::get("DiskS3"), "Failed to restore disk. Code: {}, e.displayText() = {}, Stack trace:\n\n{}", e.code(), e.displayText(), e.getStackTraceString()); + + throw; + } +} + +void DiskS3::restoreFiles(const String & source_bucket, const String & source_path, UInt64 target_revision) +{ + LOG_INFO(&Poco::Logger::get("DiskS3"), "Starting restore files for disk {}", name); + + std::vector> results; + listObjects(source_bucket, source_path, [this, &source_bucket, &source_path, &target_revision, &results](auto list_result) + { + std::vector keys; + for (const auto & row : list_result.GetContents()) + { + const String & key = row.GetKey(); + + /// Skip file operations objects. They will be processed separately. + if (key.find("/operations/") != String::npos) + continue; + + const auto [revision, _] = extractRevisionAndOperationFromKey(key); + /// Filter early if it's possible to get revision from key. + if (revision > target_revision) + continue; + + keys.push_back(key); + } + + if (!keys.empty()) + { + auto result = getExecutor().execute([this, &source_bucket, &source_path, keys]() + { + processRestoreFiles(source_bucket, source_path, keys); + }); + + results.push_back(std::move(result)); + } + + return true; + }); + + for (auto & result : results) + result.wait(); + for (auto & result : results) + result.get(); + + LOG_INFO(&Poco::Logger::get("DiskS3"), "Files are restored for disk {}", name); +} + +void DiskS3::processRestoreFiles(const String & source_bucket, const String & source_path, Strings keys) +{ + for (const auto & key : keys) + { + auto head_result = headObject(source_bucket, key); + auto object_metadata = head_result.GetMetadata(); + + /// Restore file if object has 'path' in metadata. + auto path_entry = object_metadata.find("path"); + if (path_entry == object_metadata.end()) + throw Exception("Failed to restore key " + key + " because it doesn't have 'path' in metadata", ErrorCodes::S3_ERROR); + + const auto & path = path_entry->second; + + createDirectories(directoryPath(path)); + auto metadata = createMeta(path); + auto relative_key = shrinkKey(source_path, key); + + /// Copy object if we restore to different bucket / path. 
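
listObjects() and the restore callbacks above follow one pattern: fetch a page of at most list_object_keys_size keys, hand it to a callback, and continue with the returned continuation token until the callback asks to stop or the listing is no longer truncated. A standalone sketch of that loop over a mocked, pre-paginated result set (MockPage and fetchPage stand in for ListObjectsV2):

#include <functional>
#include <iostream>
#include <string>
#include <vector>

/// Mocked page of an object listing, roughly what ListObjectsV2 returns.
struct MockPage
{
    std::vector<std::string> keys;
    size_t next_token = 0;     /// continuation token
    bool is_truncated = false; /// more pages available
};

/// Mocked storage: three keys, paginated two at a time.
MockPage fetchPage(size_t token, size_t page_size)
{
    static const std::vector<std::string> all_keys = {"r01-file-a", "r02-file-b", "operations/r03-rename"};
    MockPage page;
    for (size_t i = token; i < all_keys.size() && page.keys.size() < page_size; ++i)
        page.keys.push_back(all_keys[i]);
    page.next_token = token + page.keys.size();
    page.is_truncated = page.next_token < all_keys.size();
    return page;
}

/// The listing loop: the callback returns false to stop early, the way
/// restoreFileOperations stops once it sees a revision newer than the target.
void listObjects(size_t page_size, std::function<bool(const MockPage &)> callback)
{
    size_t token = 0;
    MockPage page;
    do
    {
        page = fetchPage(token, page_size);
        if (!callback(page))
            break;
        token = page.next_token;
    } while (page.is_truncated);
}

int main()
{
    listObjects(2, [](const MockPage & page)
    {
        for (const auto & key : page.keys)
            std::cout << key << '\n';
        return true;   /// keep listing
    });
}
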
+ if (bucket != source_bucket || s3_root_path != source_path) + copyObject(source_bucket, key, bucket, s3_root_path + relative_key); + + metadata.addObject(relative_key, head_result.GetContentLength()); + metadata.save(); + + LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Restored file {}", path); + } +} + +void DiskS3::restoreFileOperations(const String & source_bucket, const String & source_path, UInt64 target_revision) +{ + LOG_INFO(&Poco::Logger::get("DiskS3"), "Starting restore file operations for disk {}", name); + + /// Enable recording file operations if we restore to different bucket / path. + send_metadata = bucket != source_bucket || s3_root_path != source_path; + + listObjects(source_bucket, source_path + "operations/", [this, &source_bucket, &target_revision](auto list_result) + { + const String rename = "rename"; + const String hardlink = "hardlink"; + + for (const auto & row : list_result.GetContents()) + { + const String & key = row.GetKey(); + + const auto [revision, operation] = extractRevisionAndOperationFromKey(key); + if (revision == UNKNOWN_REVISION) + { + LOG_WARNING(&Poco::Logger::get("DiskS3"), "Skip key {} with unknown revision", key); + continue; + } + + /// S3 ensures that keys will be listed in ascending UTF-8 bytes order (revision order). + /// We can stop processing if revision of the object is already more than required. + if (revision > target_revision) + return false; + + /// Keep original revision if restore to different bucket / path. + if (send_metadata) + revision_counter = revision - 1; + + auto object_metadata = headObject(source_bucket, key).GetMetadata(); + if (operation == rename) + { + auto from_path = object_metadata["from_path"]; + auto to_path = object_metadata["to_path"]; + if (exists(from_path)) + { + moveFile(from_path, to_path); + LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Revision {}. Restored rename {} -> {}", revision, from_path, to_path); + } + } + else if (operation == hardlink) + { + auto src_path = object_metadata["src_path"]; + auto dst_path = object_metadata["dst_path"]; + if (exists(src_path)) + { + createDirectories(directoryPath(dst_path)); + createHardLink(src_path, dst_path); + LOG_DEBUG(&Poco::Logger::get("DiskS3"), "Revision {}. Restored hardlink {} -> {}", revision, src_path, dst_path); + } + } + } + + return true; + }); + + send_metadata = true; + + LOG_INFO(&Poco::Logger::get("DiskS3"), "File operations restored for disk {}", name); +} + +std::tuple DiskS3::extractRevisionAndOperationFromKey(const String & key) +{ + UInt64 revision = UNKNOWN_REVISION; + String operation; + + re2::RE2::FullMatch(key, key_regexp, &revision, &operation); + + return {revision, operation}; +} + +String DiskS3::shrinkKey(const String & path, const String & key) +{ + if (!key.starts_with(path)) + throw Exception("The key " + key + " prefix mismatch with given " + path, ErrorCodes::LOGICAL_ERROR); + + return key.substr(path.length()); +} + +String DiskS3::revisionToString(UInt64 revision) +{ + static constexpr size_t max_digits = 19; /// UInt64 max digits in decimal representation. + + /// Align revision number with leading zeroes to have strict lexicographical order of them. 
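
extractRevisionAndOperationFromKey above pulls the revision number and operation name out of keys shaped like .../operations/r{revision}-{operation} using key_regexp. An equivalent standalone sketch with std::regex (the real code uses re2::RE2::FullMatch and returns UNKNOWN_REVISION = 0 when nothing matches):

#include <cstdint>
#include <iostream>
#include <regex>
#include <string>
#include <tuple>

constexpr uint64_t UNKNOWN_REVISION = 0;

/// Same pattern as DiskS3::key_regexp: ".*/r(\d+)-(\w+).*"
std::tuple<uint64_t, std::string> extractRevisionAndOperation(const std::string & key)
{
    static const std::regex key_regexp(R"(.*/r(\d+)-(\w+).*)");
    std::smatch match;
    if (!std::regex_match(key, match, key_regexp))
        return {UNKNOWN_REVISION, ""};
    return {std::stoull(match[1].str()), match[2].str()};
}

int main()
{
    auto [revision, operation] = extractRevisionAndOperation("disks/s3/operations/r0000000000000000042-rename");
    std::cout << revision << ' ' << operation << '\n';   /// 42 rename
}
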
+ auto revision_str = std::to_string(revision); + auto digits_to_align = max_digits - revision_str.length(); + for (size_t i = 0; i < digits_to_align; ++i) + revision_str = "0" + revision_str; + + return revision_str; +} + +void DiskS3::onFreeze(const String & path) +{ + createDirectories(path); + WriteBufferFromFile revision_file_buf(metadata_path + path + "revision.txt", 32); + writeIntText(revision_counter.load(), revision_file_buf); + revision_file_buf.finalize(); } } diff --git a/src/Disks/S3/DiskS3.h b/src/Disks/S3/DiskS3.h index 3dbd9029fb2..5182ae4801b 100644 --- a/src/Disks/S3/DiskS3.h +++ b/src/Disks/S3/DiskS3.h @@ -1,11 +1,16 @@ #pragma once +#include #include "Disks/DiskFactory.h" #include "Disks/Executor.h" #include "ProxyConfiguration.h" #include +#include +#include + #include +#include namespace DB @@ -25,6 +30,7 @@ public: class AwsS3KeyKeeper; struct Metadata; + struct RestoreInformation; DiskS3( String name_, @@ -36,7 +42,9 @@ public: size_t min_upload_part_size_, size_t max_single_part_upload_size_, size_t min_bytes_for_seek_, - bool send_metadata_); + bool send_metadata_, + int thread_pool_size_, + int list_object_keys_size_); const String & getName() const override { return name; } @@ -74,8 +82,6 @@ public: void replaceFile(const String & from_path, const String & to_path) override; - void copyFile(const String & from_path, const String & to_path) override; - void listFiles(const String & path, std::vector & file_names) override; std::unique_ptr readFile( @@ -105,22 +111,47 @@ public: void setReadOnly(const String & path) override; - const String getType() const override { return "s3"; } + DiskType::Type getType() const override { return DiskType::Type::S3; } void shutdown() override; + /// Actions performed after disk creation. + void startup(); + + /// Restore S3 metadata files on file system. + void restore(); + + /// Dumps current revision counter into file 'revision.txt' at given path. + void onFreeze(const String & path) override; + private: bool tryReserve(UInt64 bytes); void removeMeta(const String & path, AwsS3KeyKeeper & keys); void removeMetaRecursive(const String & path, AwsS3KeyKeeper & keys); void removeAws(const AwsS3KeyKeeper & keys); - std::optional createObjectMetadata(const String & path) const; Metadata readMeta(const String & path) const; Metadata createMeta(const String & path) const; -private: + void createFileOperationObject(const String & operation_name, UInt64 revision, const ObjectMetadata & metadata); + static String revisionToString(UInt64 revision); + + bool checkObjectExists(const String & prefix); + Aws::S3::Model::HeadObjectResult headObject(const String & source_bucket, const String & key); + void listObjects(const String & source_bucket, const String & source_path, std::function callback); + void copyObject(const String & src_bucket, const String & src_key, const String & dst_bucket, const String & dst_key); + + void readRestoreInformation(RestoreInformation & restore_information); + void restoreFiles(const String & source_bucket, const String & source_path, UInt64 target_revision); + void processRestoreFiles(const String & source_bucket, const String & source_path, std::vector keys); + void restoreFileOperations(const String & source_bucket, const String & source_path, UInt64 target_revision); + + /// Remove 'path' prefix from 'key' to get relative key. + /// It's needed to store keys to metadata files in RELATIVE_PATHS version. 
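
revisionToString above left-pads every revision to 19 digits so that the lexicographical order of keys equals the numeric order of revisions, which is what makes the listing-based search and the "stop when revision > target" early exit valid. A small sketch of the padding and of the ordering property (std::setw/std::setfill here replace the manual prepend loop):

#include <cstdint>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

/// Pad to 19 characters: the maximum number of decimal digits in a UInt64.
std::string revisionToString(uint64_t revision)
{
    std::ostringstream out;
    out << std::setw(19) << std::setfill('0') << revision;
    return out.str();
}

int main()
{
    std::cout << revisionToString(42) << '\n';     /// 0000000000000000042
    std::cout << revisionToString(1000) << '\n';   /// 0000000000000001000

    /// Without padding "1000" < "42" as strings; with padding string comparison
    /// agrees with numeric comparison, so S3's UTF-8 key ordering lists
    /// operation objects in revision order.
    std::cout << (std::string("1000") < std::string("42")) << '\n';         /// 1
    std::cout << (revisionToString(1000) < revisionToString(42)) << '\n';   /// 0
}
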
+ static String shrinkKey(const String & path, const String & key); + std::tuple extractRevisionAndOperationFromKey(const String & key); + const String name; std::shared_ptr client; std::shared_ptr proxy_configuration; @@ -135,6 +166,18 @@ private: UInt64 reserved_bytes = 0; UInt64 reservation_count = 0; std::mutex reservation_mutex; + + std::atomic revision_counter; + static constexpr UInt64 LATEST_REVISION = (static_cast(1)) << 63; + static constexpr UInt64 UNKNOWN_REVISION = 0; + + /// File at path {metadata_path}/restore contains metadata restore information + const String restore_file_name = "restore"; + /// The number of keys listed in one request (1000 is max value) + int list_object_keys_size; + + /// Key has format: ../../r{revision}-{operation} + const re2::RE2 key_regexp {".*/r(\\d+)-(\\w+).*"}; }; } diff --git a/src/Disks/S3/registerDiskS3.cpp b/src/Disks/S3/registerDiskS3.cpp index f9eddebdf88..d094d228bae 100644 --- a/src/Disks/S3/registerDiskS3.cpp +++ b/src/Disks/S3/registerDiskS3.cpp @@ -152,7 +152,9 @@ void registerDiskS3(DiskFactory & factory) context.getSettingsRef().s3_min_upload_part_size, context.getSettingsRef().s3_max_single_part_upload_size, config.getUInt64(config_prefix + ".min_bytes_for_seek", 1024 * 1024), - config.getBool(config_prefix + ".send_object_metadata", false)); + config.getBool(config_prefix + ".send_metadata", false), + config.getInt(config_prefix + ".thread_pool_size", 16), + config.getInt(config_prefix + ".list_object_keys_size", 1000)); /// This code is used only to check access to the corresponding disk. if (!config.getBool(config_prefix + ".skip_access_check", false)) @@ -162,6 +164,9 @@ void registerDiskS3(DiskFactory & factory) checkRemoveAccess(*s3disk); } + s3disk->restore(); + s3disk->startup(); + bool cache_enabled = config.getBool(config_prefix + ".cache_enabled", true); if (cache_enabled) diff --git a/src/Functions/CMakeLists.txt b/src/Functions/CMakeLists.txt index 321aa5e2196..1c3beb2e47d 100644 --- a/src/Functions/CMakeLists.txt +++ b/src/Functions/CMakeLists.txt @@ -117,3 +117,6 @@ target_link_libraries(clickhouse_functions PRIVATE clickhouse_functions_array) if (USE_STATS) target_link_libraries(clickhouse_functions PRIVATE stats) endif() + +# Signed integer overflow on user-provided data inside boost::geometry - ignore. 
+set_source_files_properties("pointInPolygon.cpp" PROPERTIES COMPILE_FLAGS -fno-sanitize=signed-integer-overflow) diff --git a/src/Functions/DateTimeTransforms.h b/src/Functions/DateTimeTransforms.h index b55f78e71bd..333b397312d 100644 --- a/src/Functions/DateTimeTransforms.h +++ b/src/Functions/DateTimeTransforms.h @@ -704,7 +704,11 @@ struct DateTimeTransformImpl { using Op = Transformer; - const DateLUTImpl & time_zone = extractTimeZoneFromFunctionArguments(arguments, 1, 0); + size_t time_zone_argument_position = 1; + if constexpr (std::is_same_v) + time_zone_argument_position = 2; + + const DateLUTImpl & time_zone = extractTimeZoneFromFunctionArguments(arguments, time_zone_argument_position, 0); const ColumnPtr source_col = arguments[0].column; if (const auto * sources = checkAndGetColumn(source_col.get())) diff --git a/src/Functions/FunctionsConversion.h b/src/Functions/FunctionsConversion.h index 96e49686526..b95d4ea9790 100644 --- a/src/Functions/FunctionsConversion.h +++ b/src/Functions/FunctionsConversion.h @@ -477,6 +477,61 @@ template struct ConvertImpl struct ConvertImpl : DateTimeTransformImpl {}; +/** Conversion of numeric to DateTime64 + */ + +template +struct ToDateTime64TransformUnsigned +{ + static constexpr auto name = "toDateTime64"; + + const DateTime64::NativeType scale_multiplier = 1; + + ToDateTime64TransformUnsigned(UInt32 scale = 0) + : scale_multiplier(DecimalUtils::scaleMultiplier(scale)) + {} + + inline NO_SANITIZE_UNDEFINED DateTime64::NativeType execute(FromType from, const DateLUTImpl &) const + { + from = std::min(time_t(from), time_t(0xFFFFFFFF)); + return DecimalUtils::decimalFromComponentsWithMultiplier(from, 0, scale_multiplier); + } +}; +template +struct ToDateTime64TransformSigned +{ + static constexpr auto name = "toDateTime64"; + + const DateTime64::NativeType scale_multiplier = 1; + + ToDateTime64TransformSigned(UInt32 scale = 0) + : scale_multiplier(DecimalUtils::scaleMultiplier(scale)) + {} + + inline NO_SANITIZE_UNDEFINED DateTime64::NativeType execute(FromType from, const DateLUTImpl &) const + { + if (from < 0) + return 0; + from = std::min(time_t(from), time_t(0xFFFFFFFF)); + return DecimalUtils::decimalFromComponentsWithMultiplier(from, 0, scale_multiplier); + } +}; + +template struct ConvertImpl + : DateTimeTransformImpl> {}; +template struct ConvertImpl + : DateTimeTransformImpl> {}; +template struct ConvertImpl + : DateTimeTransformImpl> {}; +template struct ConvertImpl + : DateTimeTransformImpl> {}; +template struct ConvertImpl + : DateTimeTransformImpl> {}; +template struct ConvertImpl + : DateTimeTransformImpl> {}; +template struct ConvertImpl + : DateTimeTransformImpl> {}; + /** Conversion of DateTime64 to Date or DateTime: discards fractional part. 
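
The ToDateTime64Transform helpers above clamp numeric input into the representable DateTime range (negative values become 0, values above 0xFFFFFFFF saturate) and then scale by 10^scale to build the DateTime64 fixed-point value. A plain-integer sketch of that conversion, with scaleMultiplier and the DateTime64 internals simplified to int64 arithmetic:

#include <algorithm>
#include <cstdint>
#include <iostream>

/// 10^scale: the factor between whole seconds and DateTime64 fixed-point ticks.
int64_t scaleMultiplier(uint32_t scale)
{
    int64_t multiplier = 1;
    for (uint32_t i = 0; i < scale; ++i)
        multiplier *= 10;
    return multiplier;
}

/// Simplified version of ToDateTime64TransformSigned::execute for integer input.
int64_t toDateTime64Ticks(int64_t from, uint32_t scale)
{
    if (from < 0)
        return 0;                               /// negative input clamps to the epoch
    from = std::min<int64_t>(from, 0xFFFFFFFF); /// saturate at the DateTime upper bound
    return from * scaleMultiplier(scale);       /// whole seconds, zero fractional part
}

int main()
{
    std::cout << toDateTime64Ticks(-5, 3) << '\n';               /// 0
    std::cout << toDateTime64Ticks(1234567890, 3) << '\n';       /// 1234567890000 (milliseconds at scale 3)
    std::cout << toDateTime64Ticks(int64_t(1) << 40, 3) << '\n'; /// 4294967295000, saturated
}
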
*/ template @@ -1294,7 +1349,12 @@ public: bool useDefaultImplementationForNulls() const override { return checked_return_type; } bool useDefaultImplementationForConstants() const override { return true; } - ColumnNumbers getArgumentsThatAreAlwaysConstant() const override { return {1}; } + ColumnNumbers getArgumentsThatAreAlwaysConstant() const override + { + if constexpr (std::is_same_v) + return {2}; + return {1}; + } bool canBeExecutedOnDefaultArguments() const override { return false; } ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count) const override @@ -2313,7 +2373,7 @@ private: using LeftDataType = typename Types::LeftType; using RightDataType = typename Types::RightType; - if constexpr (IsDataTypeDecimalOrNumber && IsDataTypeDecimalOrNumber) + if constexpr (IsDataTypeDecimalOrNumber && IsDataTypeDecimalOrNumber && !std::is_same_v) { if (wrapper_cast_type == CastType::accurate) { diff --git a/src/Functions/array/arrayCumSum.cpp b/src/Functions/array/arrayCumSum.cpp index 40c0cd4ade2..96001901a6e 100644 --- a/src/Functions/array/arrayCumSum.cpp +++ b/src/Functions/array/arrayCumSum.cpp @@ -45,6 +45,41 @@ struct ArrayCumSumImpl } + template + static void NO_SANITIZE_UNDEFINED implConst( + size_t size, const IColumn::Offset * __restrict offsets, Dst * __restrict res_values, Src src_value) + { + size_t pos = 0; + for (const auto * end = offsets + size; offsets < end; ++offsets) + { + auto offset = *offsets; + Dst accumulated{}; + for (; pos < offset; ++pos) + { + accumulated += src_value; + res_values[pos] = accumulated; + } + } + } + + template + static void NO_SANITIZE_UNDEFINED implVector( + size_t size, const IColumn::Offset * __restrict offsets, Dst * __restrict res_values, const Src * __restrict src_values) + { + size_t pos = 0; + for (const auto * end = offsets + size; offsets < end; ++offsets) + { + auto offset = *offsets; + Dst accumulated{}; + for (; pos < offset; ++pos) + { + accumulated += src_values[pos]; + res_values[pos] = accumulated; + } + } + } + + template static bool executeType(const ColumnPtr & mapped, const ColumnArray & array, ColumnPtr & res_ptr) { @@ -75,19 +110,7 @@ struct ArrayCumSumImpl typename ColVecResult::Container & res_values = res_nested->getData(); res_values.resize(column_const->size()); - - size_t pos = 0; - for (auto offset : offsets) - { - // skip empty arrays - if (pos < offset) - { - res_values[pos++] = x; // NOLINT - for (; pos < offset; ++pos) - res_values[pos] = res_values[pos - 1] + x; - } - } - + implConst(offsets.size(), offsets.data(), res_values.data(), x); res_ptr = ColumnArray::create(std::move(res_nested), array.getOffsetsPtr()); return true; } @@ -103,18 +126,7 @@ struct ArrayCumSumImpl typename ColVecResult::Container & res_values = res_nested->getData(); res_values.resize(data.size()); - - size_t pos = 0; - for (auto offset : offsets) - { - // skip empty arrays - if (pos < offset) - { - res_values[pos] = data[pos]; // NOLINT - for (++pos; pos < offset; ++pos) - res_values[pos] = res_values[pos - 1] + data[pos]; - } - } + implVector(offsets.size(), offsets.data(), res_values.data(), data.data()); res_ptr = ColumnArray::create(std::move(res_nested), array.getOffsetsPtr()); return true; diff --git a/src/Functions/array/arrayCumSumNonNegative.cpp b/src/Functions/array/arrayCumSumNonNegative.cpp index ff0f081d70b..148d4283701 100644 --- a/src/Functions/array/arrayCumSumNonNegative.cpp +++ b/src/Functions/array/arrayCumSumNonNegative.cpp @@ -48,6 +48,26 @@ struct 
ArrayCumSumNonNegativeImpl } + template + static void NO_SANITIZE_UNDEFINED implVector( + size_t size, const IColumn::Offset * __restrict offsets, Dst * __restrict res_values, const Src * __restrict src_values) + { + size_t pos = 0; + for (const auto * end = offsets + size; offsets < end; ++offsets) + { + auto offset = *offsets; + Dst accumulated{}; + for (; pos < offset; ++pos) + { + accumulated += src_values[pos]; + if (accumulated < 0) + accumulated = 0; + res_values[pos] = accumulated; + } + } + } + + template static bool executeType(const ColumnPtr & mapped, const ColumnArray & array, ColumnPtr & res_ptr) { @@ -70,26 +90,7 @@ struct ArrayCumSumNonNegativeImpl typename ColVecResult::Container & res_values = res_nested->getData(); res_values.resize(data.size()); - - size_t pos = 0; - Result accum_sum = 0; - for (auto offset : offsets) - { - // skip empty arrays - if (pos < offset) - { - accum_sum = data[pos] > 0 ? data[pos] : Element(0); // NOLINT - res_values[pos] = accum_sum; - for (++pos; pos < offset; ++pos) - { - accum_sum = accum_sum + data[pos]; - if (accum_sum < 0) - accum_sum = 0; - - res_values[pos] = accum_sum; - } - } - } + implVector(offsets.size(), offsets.data(), res_values.data(), data.data()); res_ptr = ColumnArray::create(std::move(res_nested), array.getOffsetsPtr()); return true; diff --git a/src/Functions/array/mapPopulateSeries.cpp b/src/Functions/array/mapPopulateSeries.cpp index 46c99dba483..2050e0c28ab 100644 --- a/src/Functions/array/mapPopulateSeries.cpp +++ b/src/Functions/array/mapPopulateSeries.cpp @@ -16,6 +16,7 @@ namespace ErrorCodes extern const int ILLEGAL_COLUMN; extern const int ILLEGAL_TYPE_OF_ARGUMENT; extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; + extern const int TOO_LARGE_ARRAY_SIZE; } class FunctionMapPopulateSeries : public IFunction @@ -188,9 +189,13 @@ private: } } + static constexpr size_t MAX_ARRAY_SIZE = 1ULL << 30; + if (static_cast(max_key - min_key) > MAX_ARRAY_SIZE) + throw Exception(ErrorCodes::TOO_LARGE_ARRAY_SIZE, "Too large array size in the result of function {}", getName()); + /* fill the result arrays */ KeyType key; - for (key = min_key; key <= max_key; ++key) + for (key = min_key;; ++key) { to_keys_data.insert(key); @@ -205,6 +210,8 @@ private: } ++offset; + if (key == max_key) + break; } to_keys_offsets.push_back(offset); diff --git a/src/Functions/if.cpp b/src/Functions/if.cpp index 3be4848f1ff..614bfcf700e 100644 --- a/src/Functions/if.cpp +++ b/src/Functions/if.cpp @@ -532,7 +532,7 @@ private: return nullptr; } - ColumnPtr executeTuple(const ColumnsWithTypeAndName & arguments, size_t input_rows_count) const + ColumnPtr executeTuple(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count) const { /// Calculate function for each corresponding elements of tuples. 
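
The rewritten arrayCumSum kernels above walk ClickHouse's flattened array layout: one contiguous values buffer plus an offsets array where offsets[i] is one past the last element of the i-th array. A standalone sketch of implVector over that layout, including the non-negative variant's clamp (std::vector stands in for the column containers; empty arrays fall out naturally, which is why the old "skip empty arrays" special case is gone):

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

/// Per-array cumulative sums over the flattened representation.
std::vector<int64_t> arrayCumSum(const std::vector<int64_t> & values, const std::vector<size_t> & offsets, bool non_negative)
{
    std::vector<int64_t> result(values.size());
    size_t pos = 0;
    for (size_t offset : offsets)
    {
        int64_t accumulated = 0;   /// restart the running sum for every array
        for (; pos < offset; ++pos)
        {
            accumulated += values[pos];
            if (non_negative && accumulated < 0)
                accumulated = 0;   /// arrayCumSumNonNegative clamps at zero
            result[pos] = accumulated;
        }
    }
    return result;
}

int main()
{
    /// Two arrays, [1, 2, 3] and [10, -20, 5], flattened with offsets {3, 6}.
    std::vector<int64_t> values = {1, 2, 3, 10, -20, 5};
    std::vector<size_t> offsets = {3, 6};

    for (int64_t x : arrayCumSum(values, offsets, /*non_negative=*/false))
        std::cout << x << ' ';   /// 1 3 6 10 -10 -5
    std::cout << '\n';
    for (int64_t x : arrayCumSum(values, offsets, /*non_negative=*/true))
        std::cout << x << ' ';   /// 1 3 6 10 0 5
    std::cout << '\n';
}
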
@@ -558,6 +558,7 @@ private: const DataTypeTuple & type1 = static_cast(*arg1.type); const DataTypeTuple & type2 = static_cast(*arg2.type); + const DataTypeTuple & tuple_result = static_cast(*result_type); ColumnsWithTypeAndName temporary_columns(3); temporary_columns[0] = arguments[0]; @@ -570,7 +571,7 @@ private: temporary_columns[1] = {col1_contents[i], type1.getElements()[i], {}}; temporary_columns[2] = {col2_contents[i], type2.getElements()[i], {}}; - tuple_columns[i] = executeImpl(temporary_columns, std::make_shared(), input_rows_count); + tuple_columns[i] = executeImpl(temporary_columns, tuple_result.getElements()[i], input_rows_count); } return ColumnTuple::create(tuple_columns); @@ -988,7 +989,7 @@ public: || (res = executeTyped(cond_col, arguments, result_type, input_rows_count)) || (res = executeString(cond_col, arguments, result_type)) || (res = executeGenericArray(cond_col, arguments, result_type)) - || (res = executeTuple(arguments, input_rows_count)))) + || (res = executeTuple(arguments, result_type, input_rows_count)))) { return executeGeneric(cond_col, arguments, input_rows_count); } diff --git a/src/Functions/tests/gtest_number_traits.cpp b/src/Functions/tests/gtest_number_traits.cpp index 7664b4fcbdc..7f25c6cbeb7 100644 --- a/src/Functions/tests/gtest_number_traits.cpp +++ b/src/Functions/tests/gtest_number_traits.cpp @@ -258,7 +258,7 @@ TEST(NumberTraits, Others) ASSERT_EQ(getTypeString(DB::NumberTraits::ResultOfFloatingPointDivision::Type()), "Float64"); ASSERT_EQ(getTypeString(DB::NumberTraits::ResultOfFloatingPointDivision::Type()), "Float64"); ASSERT_EQ(getTypeString(DB::NumberTraits::ResultOfIntegerDivision::Type()), "Int8"); - ASSERT_EQ(getTypeString(DB::NumberTraits::ResultOfModulo::Type()), "Int8"); + ASSERT_EQ(getTypeString(DB::NumberTraits::ResultOfModulo::Type()), "UInt8"); } diff --git a/src/IO/BrotliReadBuffer.cpp b/src/IO/BrotliReadBuffer.cpp index 70d3a76e629..41991ad0516 100644 --- a/src/IO/BrotliReadBuffer.cpp +++ b/src/IO/BrotliReadBuffer.cpp @@ -77,7 +77,7 @@ bool BrotliReadBuffer::nextImpl() if (in->eof()) { eof = true; - return working_buffer.size() != 0; + return !working_buffer.empty(); } else { diff --git a/src/IO/BufferBase.h b/src/IO/BufferBase.h index c22dcbecf7b..198441d8bc1 100644 --- a/src/IO/BufferBase.h +++ b/src/IO/BufferBase.h @@ -40,6 +40,7 @@ public: inline Position end() const { return end_pos; } inline size_t size() const { return size_t(end_pos - begin_pos); } inline void resize(size_t size) { end_pos = begin_pos + size; } + inline bool empty() const { return size() == 0; } inline void swap(Buffer & other) { diff --git a/src/IO/ConcatReadBuffer.h b/src/IO/ConcatReadBuffer.h index 1df99429e93..c416b0fd892 100644 --- a/src/IO/ConcatReadBuffer.h +++ b/src/IO/ConcatReadBuffer.h @@ -25,11 +25,16 @@ protected: return false; /// First reading - if (working_buffer.size() == 0 && (*current)->hasPendingData()) + if (working_buffer.empty()) { - working_buffer = Buffer((*current)->position(), (*current)->buffer().end()); - return true; + if ((*current)->hasPendingData()) + { + working_buffer = Buffer((*current)->position(), (*current)->buffer().end()); + return true; + } } + else + (*current)->position() = position(); if (!(*current)->next()) { @@ -51,14 +56,12 @@ protected: } public: - ConcatReadBuffer(const ReadBuffers & buffers_) : ReadBuffer(nullptr, 0), buffers(buffers_), current(buffers.begin()) {} - - ConcatReadBuffer(ReadBuffer & buf1, ReadBuffer & buf2) : ReadBuffer(nullptr, 0) + explicit ConcatReadBuffer(const ReadBuffers & buffers_) 
: ReadBuffer(nullptr, 0), buffers(buffers_), current(buffers.begin()) { - buffers.push_back(&buf1); - buffers.push_back(&buf2); - current = buffers.begin(); + assert(!buffers.empty()); } + + ConcatReadBuffer(ReadBuffer & buf1, ReadBuffer & buf2) : ConcatReadBuffer({&buf1, &buf2}) {} }; } diff --git a/src/IO/HashingReadBuffer.h b/src/IO/HashingReadBuffer.h index 9fcd6dc6b41..08b6de69dcb 100644 --- a/src/IO/HashingReadBuffer.h +++ b/src/IO/HashingReadBuffer.h @@ -1,10 +1,11 @@ #pragma once -#include #include +#include namespace DB { + /* * Calculates the hash from the read data. When reading, the data is read from the nested ReadBuffer. * Small pieces are copied into its own memory. @@ -12,14 +13,14 @@ namespace DB class HashingReadBuffer : public IHashingBuffer { public: - HashingReadBuffer(ReadBuffer & in_, size_t block_size_ = DBMS_DEFAULT_HASHING_BLOCK_SIZE) : - IHashingBuffer(block_size_), in(in_) + explicit HashingReadBuffer(ReadBuffer & in_, size_t block_size_ = DBMS_DEFAULT_HASHING_BLOCK_SIZE) + : IHashingBuffer(block_size_), in(in_) { working_buffer = in.buffer(); pos = in.position(); /// calculate hash from the data already read - if (working_buffer.size()) + if (!working_buffer.empty()) { calculateHash(pos, working_buffer.end() - pos); } @@ -39,7 +40,7 @@ private: return res; } -private: ReadBuffer & in; }; + } diff --git a/src/IO/LZMAInflatingReadBuffer.cpp b/src/IO/LZMAInflatingReadBuffer.cpp index e30e8df5f9d..6a0a7e5ee31 100644 --- a/src/IO/LZMAInflatingReadBuffer.cpp +++ b/src/IO/LZMAInflatingReadBuffer.cpp @@ -66,7 +66,7 @@ bool LZMAInflatingReadBuffer::nextImpl() if (in->eof()) { eof = true; - return working_buffer.size() != 0; + return !working_buffer.empty(); } else { diff --git a/src/IO/LimitReadBuffer.cpp b/src/IO/LimitReadBuffer.cpp index f36facfdd99..baa9e487688 100644 --- a/src/IO/LimitReadBuffer.cpp +++ b/src/IO/LimitReadBuffer.cpp @@ -1,4 +1,5 @@ #include + #include @@ -13,6 +14,8 @@ namespace ErrorCodes bool LimitReadBuffer::nextImpl() { + assert(position() >= in.position()); + /// Let underlying buffer calculate read bytes in `next()` call. in.position() = position(); @@ -25,7 +28,10 @@ bool LimitReadBuffer::nextImpl() } if (!in.next()) + { + working_buffer = in.buffer(); return false; + } working_buffer = in.buffer(); @@ -50,7 +56,7 @@ LimitReadBuffer::LimitReadBuffer(ReadBuffer & in_, UInt64 limit_, bool throw_exc LimitReadBuffer::~LimitReadBuffer() { /// Update underlying buffer's position in case when limit wasn't reached. 
- if (working_buffer.size() != 0) + if (!working_buffer.empty()) in.position() = position(); } diff --git a/src/IO/MemoryReadWriteBuffer.cpp b/src/IO/MemoryReadWriteBuffer.cpp index 0b0d9704de6..69bcd52a8d2 100644 --- a/src/IO/MemoryReadWriteBuffer.cpp +++ b/src/IO/MemoryReadWriteBuffer.cpp @@ -61,7 +61,7 @@ private: position() = nullptr; } - return buffer().size() != 0; + return !buffer().empty(); } using Container = std::forward_list; diff --git a/src/IO/ReadBuffer.h b/src/IO/ReadBuffer.h index 3d6eb6970ce..5cbe04f8348 100644 --- a/src/IO/ReadBuffer.h +++ b/src/IO/ReadBuffer.h @@ -55,13 +55,19 @@ public: */ bool next() { + assert(!hasPendingData()); + assert(position() <= working_buffer.end()); + bytes += offset(); bool res = nextImpl(); if (!res) - working_buffer.resize(0); - - pos = working_buffer.begin() + nextimpl_working_buffer_offset; + working_buffer = Buffer(pos, pos); + else + pos = working_buffer.begin() + nextimpl_working_buffer_offset; nextimpl_working_buffer_offset = 0; + + assert(position() <= working_buffer.end()); + return res; } @@ -72,7 +78,7 @@ public: next(); } - virtual ~ReadBuffer() {} + virtual ~ReadBuffer() = default; /** Unlike std::istream, it returns true if all data was read @@ -192,7 +198,7 @@ private: */ virtual bool nextImpl() { return false; } - [[noreturn]] void throwReadAfterEOF() + [[noreturn]] static void throwReadAfterEOF() { throw Exception("Attempt to read after eof", ErrorCodes::ATTEMPT_TO_READ_AFTER_EOF); } diff --git a/src/IO/ReadBufferFromFileDescriptor.cpp b/src/IO/ReadBufferFromFileDescriptor.cpp index 0ab07b85027..dd5d9e67cd7 100644 --- a/src/IO/ReadBufferFromFileDescriptor.cpp +++ b/src/IO/ReadBufferFromFileDescriptor.cpp @@ -90,6 +90,7 @@ bool ReadBufferFromFileDescriptor::nextImpl() if (bytes_read) { ProfileEvents::increment(ProfileEvents::ReadBufferFromFileDescriptorReadBytes, bytes_read); + working_buffer = internal_buffer; working_buffer.resize(bytes_read); } else diff --git a/src/IO/ReadWriteBufferFromHTTP.h b/src/IO/ReadWriteBufferFromHTTP.h index de10f268dc3..9cd37bd00f8 100644 --- a/src/IO/ReadWriteBufferFromHTTP.h +++ b/src/IO/ReadWriteBufferFromHTTP.h @@ -76,9 +76,7 @@ public: } } - virtual ~UpdatableSessionBase() - { - } + virtual ~UpdatableSessionBase() = default; }; @@ -205,6 +203,8 @@ namespace detail { if (next_callback) next_callback(count()); + if (!working_buffer.empty()) + impl->position() = position(); if (!impl->next()) return false; internal_buffer = impl->buffer(); diff --git a/src/IO/SeekableReadBuffer.h b/src/IO/SeekableReadBuffer.h index f7a468b0490..f8e6d817fb1 100644 --- a/src/IO/SeekableReadBuffer.h +++ b/src/IO/SeekableReadBuffer.h @@ -21,6 +21,12 @@ public: */ virtual off_t seek(off_t off, int whence) = 0; + /** + * Keep in mind that seekable buffer may encounter eof() once and the working buffer + * may get into inconsistent state. Don't forget to reset it on the first nextImpl() + * after seek(). + */ + /** * @return Offset from the begin of the underlying buffer / file corresponds to the buffer current position. 
*/ diff --git a/src/IO/WriteBuffer.h b/src/IO/WriteBuffer.h index c513b22b0a5..d425f813d7b 100644 --- a/src/IO/WriteBuffer.h +++ b/src/IO/WriteBuffer.h @@ -4,6 +4,7 @@ #include #include #include +#include #include #include @@ -37,7 +38,7 @@ public: */ inline void next() { - if (!offset() && available()) + if (!offset()) return; bytes += offset(); @@ -60,7 +61,7 @@ public: /** it is desirable in the derived classes to place the next() call in the destructor, * so that the last data is written */ - virtual ~WriteBuffer() {} + virtual ~WriteBuffer() = default; inline void nextIfAtEnd() { @@ -73,6 +74,9 @@ public: { size_t bytes_copied = 0; + /// Produces endless loop + assert(!working_buffer.empty()); + while (bytes_copied < n) { nextIfAtEnd(); diff --git a/src/IO/WriteHelpers.h b/src/IO/WriteHelpers.h index 9072f306bd9..a37a5b5ddc6 100644 --- a/src/IO/WriteHelpers.h +++ b/src/IO/WriteHelpers.h @@ -910,6 +910,7 @@ inline void writeBinary(const StringRef & x, WriteBuffer & buf) { writeStringBin inline void writeBinary(const std::string_view & x, WriteBuffer & buf) { writeStringBinary(x, buf); } inline void writeBinary(const Int128 & x, WriteBuffer & buf) { writePODBinary(x, buf); } inline void writeBinary(const UInt128 & x, WriteBuffer & buf) { writePODBinary(x, buf); } +inline void writeBinary(const UUID & x, WriteBuffer & buf) { writePODBinary(x, buf); } inline void writeBinary(const DummyUInt256 & x, WriteBuffer & buf) { writePODBinary(x, buf); } inline void writeBinary(const Decimal32 & x, WriteBuffer & buf) { writePODBinary(x, buf); } inline void writeBinary(const Decimal64 & x, WriteBuffer & buf) { writePODBinary(x, buf); } diff --git a/src/IO/ZlibInflatingReadBuffer.cpp b/src/IO/ZlibInflatingReadBuffer.cpp index 0b23bef1b10..bea83c74e21 100644 --- a/src/IO/ZlibInflatingReadBuffer.cpp +++ b/src/IO/ZlibInflatingReadBuffer.cpp @@ -70,7 +70,7 @@ bool ZlibInflatingReadBuffer::nextImpl() if (in->eof()) { eof = true; - return working_buffer.size() != 0; + return !working_buffer.empty(); } else { diff --git a/src/IO/ZstdInflatingReadBuffer.cpp b/src/IO/ZstdInflatingReadBuffer.cpp index 94a0b56fc6d..b441a6a7210 100644 --- a/src/IO/ZstdInflatingReadBuffer.cpp +++ b/src/IO/ZstdInflatingReadBuffer.cpp @@ -54,7 +54,7 @@ bool ZstdInflatingReadBuffer::nextImpl() if (in->eof()) { eof = true; - return working_buffer.size() != 0; + return !working_buffer.empty(); } return true; diff --git a/src/Interpreters/ActionsDAG.cpp b/src/Interpreters/ActionsDAG.cpp index d8c40ffda2f..becd3f4f4a2 100644 --- a/src/Interpreters/ActionsDAG.cpp +++ b/src/Interpreters/ActionsDAG.cpp @@ -454,36 +454,42 @@ bool ActionsDAG::tryRestoreColumn(const std::string & column_name) return false; } -void ActionsDAG::removeUnusedInput(const std::string & column_name) +bool ActionsDAG::removeUnusedResult(const std::string & column_name) { + /// Find column in index and remove. + const Node * col; + { + auto it = index.begin(); + for (; it != index.end(); ++it) + if ((*it)->result_name == column_name) + break; + + if (it == index.end()) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Not found result {} in ActionsDAG\n{}", column_name, dumpDAG()); + + col = *it; + index.remove(it); + } + + /// Check if column is in input. 
auto it = inputs.begin(); for (; it != inputs.end(); ++it) - if ((*it)->result_name == column_name) + if (*it == col) break; if (it == inputs.end()) - throw Exception(ErrorCodes::LOGICAL_ERROR, "Not found input {} in ActionsDAG\n{}", column_name, dumpDAG()); + return false; - auto * input = *it; + /// Check column has no dependent. for (const auto & node : nodes) for (const auto * child : node.children) - if (input == child) - throw Exception(ErrorCodes::LOGICAL_ERROR, - "Cannot remove input {} because it has dependent nodes in ActionsDAG\n{}", - column_name, dumpDAG()); - - for (auto jt = index.begin(); jt != index.end(); ++jt) - { - if (*jt == input) - { - index.remove(jt); - break; - } - } + if (col == child) + return false; + /// Remove from nodes and inputs. for (auto jt = nodes.begin(); jt != nodes.end(); ++jt) { - if (&(*jt) == input) + if (&(*jt) == *it) { nodes.erase(jt); break; @@ -491,6 +497,7 @@ void ActionsDAG::removeUnusedInput(const std::string & column_name) } inputs.erase(it); + return true; } ActionsDAGPtr ActionsDAG::clone() const @@ -844,7 +851,7 @@ ActionsDAGPtr ActionsDAG::merge(ActionsDAG && first, ActionsDAG && second) return std::make_shared(std::move(first)); } -std::pair ActionsDAG::split(std::unordered_set split_nodes) const +ActionsDAG::SplitResult ActionsDAG::split(std::unordered_set split_nodes) const { /// Split DAG into two parts. /// (first_nodes, first_index) is a part which will have split_list in result. @@ -1045,7 +1052,7 @@ std::pair ActionsDAG::split(std::unordered_set ActionsDAG::splitActionsBeforeArrayJoin(const NameSet & array_joined_columns) const +ActionsDAG::SplitResult ActionsDAG::splitActionsBeforeArrayJoin(const NameSet & array_joined_columns) const { struct Frame @@ -1113,7 +1120,7 @@ std::pair ActionsDAG::splitActionsBeforeArrayJoin return res; } -std::pair ActionsDAG::splitActionsForFilter(const std::string & column_name) const +ActionsDAG::SplitResult ActionsDAG::splitActionsForFilter(const std::string & column_name) const { auto it = index.begin(); for (; it != index.end(); ++it) diff --git a/src/Interpreters/ActionsDAG.h b/src/Interpreters/ActionsDAG.h index b12da30e24f..fa5ae2ac83f 100644 --- a/src/Interpreters/ActionsDAG.h +++ b/src/Interpreters/ActionsDAG.h @@ -214,9 +214,10 @@ public: /// If column is not in index, try to find it in nodes and insert back into index. bool tryRestoreColumn(const std::string & column_name); - /// Find column in input. Remove it from input and index. - /// Checks that column in inputs and has not dependent nodes. - void removeUnusedInput(const std::string & column_name); + /// Find column in result. Remove it from index. + /// If columns is in inputs and has no dependent nodes, remove it from inputs too. + /// Return true if column was removed from inputs. + bool removeUnusedResult(const std::string & column_name); void projectInput() { settings.project_input = true; } void removeUnusedActions(const Names & required_names); @@ -255,18 +256,20 @@ public: /// Otherwise, any two actions may be combined. static ActionsDAGPtr merge(ActionsDAG && first, ActionsDAG && second); + using SplitResult = std::pair; + /// Split ActionsDAG into two DAGs, where first part contains all nodes from split_nodes and their children. /// Execution of first then second parts on block is equivalent to execution of initial DAG. /// First DAG and initial DAG have equal inputs, second DAG and initial DAG has equal index (outputs). /// Second DAG inputs may contain less inputs then first DAG (but also include other columns). 
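
removeUnusedResult above replaces the old removeUnusedInput: the column is always dropped from the index (the DAG's outputs), but it is removed from the inputs and node list only when it really is an input with no dependent nodes, and the return value reports whether that happened. A toy model of that check (Node and MiniDAG are illustrative, far simpler than the real ActionsDAG):

#include <algorithm>
#include <iostream>
#include <list>
#include <string>
#include <vector>

struct Node
{
    std::string result_name;
    std::vector<const Node *> children;   /// nodes this one is computed from
};

struct MiniDAG
{
    std::list<Node> nodes;                /// stable addresses, like the real DAG
    std::vector<const Node *> inputs;
    std::vector<const Node *> index;      /// outputs

    /// Drop column_name from the outputs; additionally drop it from the inputs
    /// (and the node list) when it is an unreferenced input. Returns true if
    /// the input was removed, mirroring ActionsDAG::removeUnusedResult.
    bool removeUnusedResult(const std::string & column_name)
    {
        auto it = std::find_if(index.begin(), index.end(),
            [&](const Node * node) { return node->result_name == column_name; });
        if (it == index.end())
            return false;   /// the real code throws LOGICAL_ERROR here
        const Node * col = *it;
        index.erase(it);

        auto input_it = std::find(inputs.begin(), inputs.end(), col);
        if (input_it == inputs.end())
            return false;   /// the result is not an input, nothing more to do

        for (const auto & node : nodes)
            for (const auto * child : node.children)
                if (child == col)
                    return false;   /// still referenced, keep the input

        nodes.remove_if([col](const Node & node) { return &node == col; });
        inputs.erase(input_it);
        return true;
    }
};

int main()
{
    MiniDAG dag;
    dag.nodes.push_back({"x", {}});
    const Node * x = &dag.nodes.back();
    dag.inputs = {x};
    dag.index = {x};
    std::cout << dag.removeUnusedResult("x") << '\n';   /// 1: x was an unused input
}
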
- std::pair split(std::unordered_set split_nodes) const; + SplitResult split(std::unordered_set split_nodes) const; /// Splits actions into two parts. Returned first half may be swapped with ARRAY JOIN. - std::pair splitActionsBeforeArrayJoin(const NameSet & array_joined_columns) const; + SplitResult splitActionsBeforeArrayJoin(const NameSet & array_joined_columns) const; /// Splits actions into two parts. First part has minimal size sufficient for calculation of column_name. /// Index of initial actions must contain column_name. - std::pair splitActionsForFilter(const std::string & column_name) const; + SplitResult splitActionsForFilter(const std::string & column_name) const; private: Node & addNode(Node node, bool can_replace = false); diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp index 8ff317764a7..5c99d39dc2e 100644 --- a/src/Interpreters/Context.cpp +++ b/src/Interpreters/Context.cpp @@ -64,6 +64,7 @@ #include #include #include +#include namespace ProfileEvents @@ -1137,12 +1138,6 @@ String Context::getCurrentDatabase() const } -String Context::getCurrentQueryId() const -{ - return client_info.current_query_id; -} - - String Context::getInitialQueryId() const { return client_info.initial_query_id; @@ -2510,4 +2505,22 @@ StorageID Context::resolveStorageIDImpl(StorageID storage_id, StorageNamespace w return StorageID::createEmpty(); } +PartUUIDsPtr Context::getPartUUIDs() +{ + auto lock = getLock(); + if (!part_uuids) + part_uuids = std::make_shared(); + + return part_uuids; +} + +PartUUIDsPtr Context::getIgnoredPartUUIDs() +{ + auto lock = getLock(); + if (!ignored_part_uuids) + ignored_part_uuids = std::make_shared(); + + return ignored_part_uuids; +} + } diff --git a/src/Interpreters/Context.h b/src/Interpreters/Context.h index 5801cc2b949..98ca3909fea 100644 --- a/src/Interpreters/Context.h +++ b/src/Interpreters/Context.h @@ -107,6 +107,8 @@ using StoragePolicyPtr = std::shared_ptr; using StoragePoliciesMap = std::map; class StoragePolicySelector; using StoragePolicySelectorPtr = std::shared_ptr; +struct PartUUIDs; +using PartUUIDsPtr = std::shared_ptr; class IOutputFormat; using OutputFormatPtr = std::shared_ptr; @@ -264,6 +266,9 @@ private: using SampleBlockCache = std::unordered_map; mutable SampleBlockCache sample_block_cache; + PartUUIDsPtr part_uuids; /// set of parts' uuids, is used for query parts deduplication + PartUUIDsPtr ignored_part_uuids; /// set of parts' uuids are meant to be excluded from query processing + NameToNameMap query_parameters; /// Dictionary with query parameters for prepared statements. /// (key=name, value) @@ -436,7 +441,7 @@ public: StoragePtr getViewSource(); String getCurrentDatabase() const; - String getCurrentQueryId() const; + String getCurrentQueryId() const { return client_info.current_query_id; } /// Id of initiating query for distributed queries; or current query id if it's not a distributed query. 
String getInitialQueryId() const; @@ -734,6 +739,9 @@ public: }; MySQLWireContext mysql; + + PartUUIDsPtr getPartUUIDs(); + PartUUIDsPtr getIgnoredPartUUIDs(); private: std::unique_lock getLock() const; diff --git a/src/Interpreters/DDLWorker.cpp b/src/Interpreters/DDLWorker.cpp index e240e912218..e18b92b4bd5 100644 --- a/src/Interpreters/DDLWorker.cpp +++ b/src/Interpreters/DDLWorker.cpp @@ -610,12 +610,14 @@ bool DDLWorker::tryExecuteQuery(const String & query, const DDLTask & task, Exec ReadBufferFromString istr(query_to_execute); String dummy_string; WriteBufferFromString ostr(dummy_string); + std::optional query_scope; try { auto current_context = std::make_unique(context); current_context->getClientInfo().query_kind = ClientInfo::QueryKind::SECONDARY_QUERY; current_context->setCurrentQueryId(""); // generate random query_id + query_scope.emplace(*current_context); executeQuery(istr, ostr, false, *current_context, {}); } catch (...) @@ -632,20 +634,6 @@ bool DDLWorker::tryExecuteQuery(const String & query, const DDLTask & task, Exec return true; } -void DDLWorker::attachToThreadGroup() -{ - if (thread_group) - { - /// Put all threads to one thread pool - CurrentThread::attachToIfDetached(thread_group); - } - else - { - CurrentThread::initializeQuery(); - thread_group = CurrentThread::getGroup(); - } -} - void DDLWorker::enqueueTask(DDLTaskPtr task_ptr) { @@ -1148,8 +1136,6 @@ void DDLWorker::runMainThread() { try { - attachToThreadGroup(); - cleanup_event->set(); scheduleTasks(); diff --git a/src/Interpreters/DDLWorker.h b/src/Interpreters/DDLWorker.h index 9771954817d..0a8fa6923ae 100644 --- a/src/Interpreters/DDLWorker.h +++ b/src/Interpreters/DDLWorker.h @@ -162,8 +162,6 @@ private: void runMainThread(); void runCleanupThread(); - void attachToThreadGroup(); - private: Context context; Poco::Logger * log; @@ -196,8 +194,6 @@ private: /// How many tasks could be in the queue size_t max_tasks_in_queue = 1000; - ThreadGroupStatusPtr thread_group; - std::atomic max_id = 0; friend class DDLQueryStatusInputStream; diff --git a/src/Interpreters/DNSCacheUpdater.cpp b/src/Interpreters/DNSCacheUpdater.cpp index fb0298f480f..723945165e3 100644 --- a/src/Interpreters/DNSCacheUpdater.cpp +++ b/src/Interpreters/DNSCacheUpdater.cpp @@ -37,7 +37,7 @@ void DNSCacheUpdater::run() * - automatically throttle when DNS requests take longer time; * - add natural randomization on huge clusters - avoid sending all requests at the same moment of time from different servers. */ - task_handle->scheduleAfter(update_period_seconds * 1000); + task_handle->scheduleAfter(size_t(update_period_seconds) * 1000); } void DNSCacheUpdater::start() diff --git a/src/Interpreters/InterpreterSelectQuery.cpp b/src/Interpreters/InterpreterSelectQuery.cpp index 2ee1b3956e4..4b89273cd86 100644 --- a/src/Interpreters/InterpreterSelectQuery.cpp +++ b/src/Interpreters/InterpreterSelectQuery.cpp @@ -69,7 +69,6 @@ #include #include -#include #include #include #include @@ -390,13 +389,18 @@ InterpreterSelectQuery::InterpreterSelectQuery( if (try_move_to_prewhere && storage && !row_policy_filter && query.where() && !query.prewhere() && !query.final()) { /// PREWHERE optimization: transfer some condition from WHERE to PREWHERE if enabled and viable - if (const auto * merge_tree = dynamic_cast(storage.get())) + if (const auto & column_sizes = storage->getColumnSizes(); !column_sizes.empty()) { + /// Extract column compressed sizes. 
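    /// (Any storage that reports non-empty per-column sizes can now feed the
    /// WHERE -> PREWHERE move; the previous dynamic_cast of the storage to a
    /// MergeTree type is no longer needed.)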
+ std::unordered_map column_compressed_sizes; + for (const auto & [name, sizes] : column_sizes) + column_compressed_sizes[name] = sizes.data_compressed; + SelectQueryInfo current_info; current_info.query = query_ptr; current_info.syntax_analyzer_result = syntax_analyzer_result; - MergeTreeWhereOptimizer{current_info, *context, *merge_tree, metadata_snapshot, syntax_analyzer_result->requiredSourceColumns(), log}; + MergeTreeWhereOptimizer{current_info, *context, std::move(column_compressed_sizes), metadata_snapshot, syntax_analyzer_result->requiredSourceColumns(), log}; } } diff --git a/src/Interpreters/MarkTableIdentifiersVisitor.cpp b/src/Interpreters/MarkTableIdentifiersVisitor.cpp index 78563059ed1..6557e1b5292 100644 --- a/src/Interpreters/MarkTableIdentifiersVisitor.cpp +++ b/src/Interpreters/MarkTableIdentifiersVisitor.cpp @@ -47,7 +47,7 @@ void MarkTableIdentifiersMatcher::visit(const ASTFunction & func, ASTPtr &, Data // First argument of dictGet can be a dictionary name, perhaps with a database. if (functionIsJoinGet(func.name) || functionIsDictGet(func.name)) { - if (func.arguments->children.empty()) + if (!func.arguments || func.arguments->children.empty()) { return; } diff --git a/src/Interpreters/RewriteSumIfFunctionVisitor.cpp b/src/Interpreters/RewriteSumIfFunctionVisitor.cpp index 2fb0765db13..2593c220c63 100644 --- a/src/Interpreters/RewriteSumIfFunctionVisitor.cpp +++ b/src/Interpreters/RewriteSumIfFunctionVisitor.cpp @@ -13,18 +13,6 @@ void RewriteSumIfFunctionMatcher::visit(ASTPtr & ast, Data & data) visit(*func, ast, data); } -static ASTPtr createNewFunctionWithOneArgument(const String & func_name, const ASTPtr & argument) -{ - auto new_func = std::make_shared(); - new_func->name = func_name; - - auto new_arguments = std::make_shared(); - new_arguments->children.push_back(argument); - new_func->arguments = new_arguments; - new_func->children.push_back(new_arguments); - return new_func; -} - void RewriteSumIfFunctionMatcher::visit(const ASTFunction & func, ASTPtr & ast, Data &) { if (!func.arguments || func.arguments->children.empty()) @@ -46,7 +34,7 @@ void RewriteSumIfFunctionMatcher::visit(const ASTFunction & func, ASTPtr & ast, if (func_arguments.size() == 2 && literal->value.get() == 1) { - auto new_func = createNewFunctionWithOneArgument("countIf", func_arguments[1]); + auto new_func = makeASTFunction("countIf", func_arguments[1]); new_func->setAlias(func.alias); ast = std::move(new_func); return; @@ -74,7 +62,7 @@ void RewriteSumIfFunctionMatcher::visit(const ASTFunction & func, ASTPtr & ast, /// sum(if(cond, 1, 0)) -> countIf(cond) if (first_value == 1 && second_value == 0) { - auto new_func = createNewFunctionWithOneArgument("countIf", if_arguments[0]); + auto new_func = makeASTFunction("countIf", if_arguments[0]); new_func->setAlias(func.alias); ast = std::move(new_func); return; @@ -82,8 +70,8 @@ void RewriteSumIfFunctionMatcher::visit(const ASTFunction & func, ASTPtr & ast, /// sum(if(cond, 0, 1)) -> countIf(not(cond)) if (first_value == 0 && second_value == 1) { - auto not_func = createNewFunctionWithOneArgument("not", if_arguments[0]); - auto new_func = createNewFunctionWithOneArgument("countIf", not_func); + auto not_func = makeASTFunction("not", if_arguments[0]); + auto new_func = makeASTFunction("countIf", not_func); new_func->setAlias(func.alias); ast = std::move(new_func); return; diff --git a/src/Interpreters/ThreadStatusExt.cpp b/src/Interpreters/ThreadStatusExt.cpp index 61322cabfb3..8a979721290 100644 --- a/src/Interpreters/ThreadStatusExt.cpp +++ 
b/src/Interpreters/ThreadStatusExt.cpp @@ -500,6 +500,8 @@ CurrentThread::QueryScope::QueryScope(Context & query_context) { CurrentThread::initializeQuery(); CurrentThread::attachQueryContext(query_context); + if (!query_context.hasQueryContext()) + query_context.makeQueryContext(); } void CurrentThread::QueryScope::logPeakMemoryUsage() diff --git a/src/Interpreters/WindowDescription.cpp b/src/Interpreters/WindowDescription.cpp index bfb53ebb79f..6e72f056b16 100644 --- a/src/Interpreters/WindowDescription.cpp +++ b/src/Interpreters/WindowDescription.cpp @@ -6,6 +6,11 @@ namespace DB { +namespace ErrorCodes +{ + extern const int BAD_ARGUMENTS; +} + std::string WindowFunctionDescription::dump() const { WriteBufferFromOwnString ss; @@ -33,4 +38,95 @@ std::string WindowDescription::dump() const return ss.str(); } +std::string WindowFrame::toString() const +{ + WriteBufferFromOwnString buf; + toString(buf); + return buf.str(); +} + +void WindowFrame::toString(WriteBuffer & buf) const +{ + buf << toString(type) << " BETWEEN "; + if (begin_type == BoundaryType::Current) + { + buf << "CURRENT ROW"; + } + else if (begin_type == BoundaryType::Unbounded) + { + buf << "UNBOUNDED PRECEDING"; + } + else + { + buf << abs(begin_offset); + buf << " " + << (begin_offset > 0 ? "FOLLOWING" : "PRECEDING"); + } + buf << " AND "; + if (end_type == BoundaryType::Current) + { + buf << "CURRENT ROW"; + } + else if (end_type == BoundaryType::Unbounded) + { + buf << "UNBOUNDED PRECEDING"; + } + else + { + buf << abs(end_offset); + buf << " " + << (end_offset > 0 ? "FOLLOWING" : "PRECEDING"); + } +} + +void WindowFrame::checkValid() const +{ + if (begin_type == BoundaryType::Unbounded + || end_type == BoundaryType::Unbounded) + { + return; + } + + if (begin_type == BoundaryType::Current + && end_type == BoundaryType::Offset + && end_offset > 0) + { + return; + } + + if (end_type == BoundaryType::Current + && begin_type == BoundaryType::Offset + && begin_offset < 0) + { + return; + } + + if (end_type == BoundaryType::Current + && begin_type == BoundaryType::Current) + { + // BETWEEN CURRENT ROW AND CURRENT ROW makes some sense for RANGE or + // GROUP frames, and is technically valid for ROWS frame. + return; + } + + if (end_type == BoundaryType::Offset + && begin_type == BoundaryType::Offset) + { + if (type == FrameType::Rows) + { + if (end_offset >= begin_offset) + { + return; + } + } + + // For RANGE and GROUPS, we must check that end follows begin if sorted + // according to ORDER BY (we don't support them yet). + } + + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Window frame '{}' is invalid", + toString()); +} + } diff --git a/src/Interpreters/WindowDescription.h b/src/Interpreters/WindowDescription.h index d34b7721a5e..447352f7a83 100644 --- a/src/Interpreters/WindowDescription.h +++ b/src/Interpreters/WindowDescription.h @@ -53,6 +53,13 @@ struct WindowFrame int64_t end_offset = 0; + // Throws BAD_ARGUMENTS exception if the frame definition is incorrect, e.g. + // the frame start comes later than the frame end. 
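    // For example (offsets are signed and "N PRECEDING" is stored as -N):
    //   ROWS BETWEEN 2 PRECEDING AND 1 PRECEDING  -> begin = -2, end = -1, end >= begin, valid;
    //   ROWS BETWEEN 1 FOLLOWING AND CURRENT ROW  -> begin = +1 with a CURRENT ROW end, rejected.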
+ void checkValid() const; + + std::string toString() const; + void toString(WriteBuffer & buf) const; + bool operator == (const WindowFrame & other) const { // We don't compare is_default because it's not a real property of the diff --git a/src/Interpreters/executeQuery.cpp b/src/Interpreters/executeQuery.cpp index 50e891a3524..d786e1146be 100644 --- a/src/Interpreters/executeQuery.cpp +++ b/src/Interpreters/executeQuery.cpp @@ -343,13 +343,10 @@ static std::tuple executeQueryImpl( { const auto current_time = std::chrono::system_clock::now(); - /// If we already executing query and it requires to execute internal query, than - /// don't replace thread context with given (it can be temporary). Otherwise, attach context to thread. - if (!internal) - { - context.makeQueryContext(); - CurrentThread::attachQueryContext(context); - } +#if !defined(ARCADIA_BUILD) + assert(internal || CurrentThread::get().getQueryContext()); + assert(internal || CurrentThread::get().getQueryContext()->getCurrentQueryId() == CurrentThread::getQueryId()); +#endif const Settings & settings = context.getSettingsRef(); @@ -524,6 +521,14 @@ static std::tuple executeQueryImpl( quota = context.getQuota(); if (quota) { + if (ast->as() || ast->as()) + { + quota->used(Quota::QUERY_SELECTS, 1); + } + else if (ast->as()) + { + quota->used(Quota::QUERY_INSERTS, 1); + } quota->used(Quota::QUERIES, 1); quota->checkExceeded(Quota::ERRORS); } diff --git a/src/Interpreters/ya.make b/src/Interpreters/ya.make index 1cadc447e59..6a155749ddf 100644 --- a/src/Interpreters/ya.make +++ b/src/Interpreters/ya.make @@ -145,6 +145,7 @@ SRCS( TranslateQualifiedNamesVisitor.cpp TreeOptimizer.cpp TreeRewriter.cpp + WindowDescription.cpp addMissingDefaults.cpp addTypeConversionToAST.cpp castColumn.cpp diff --git a/src/Parsers/ExpressionElementParsers.cpp b/src/Parsers/ExpressionElementParsers.cpp index 358c86e166b..3f4403bc264 100644 --- a/src/Parsers/ExpressionElementParsers.cpp +++ b/src/Parsers/ExpressionElementParsers.cpp @@ -46,6 +46,7 @@ namespace DB namespace ErrorCodes { + extern const int BAD_ARGUMENTS; extern const int SYNTAX_ERROR; extern const int LOGICAL_ERROR; extern const int NOT_IMPLEMENTED; @@ -558,7 +559,24 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p } else if (parser_literal.parse(pos, ast_literal, expected)) { - node->frame.begin_offset = ast_literal->as().value.safeGet(); + const Field & value = ast_literal->as().value; + if (!isInt64FieldType(value.getType())) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Only integer frame offsets are supported, '{}' is not supported.", + Field::Types::toString(value.getType())); + } + node->frame.begin_offset = value.get(); + node->frame.begin_type = WindowFrame::BoundaryType::Offset; + // We can easily get a UINT64_MAX here, which doesn't even fit into + // int64_t. Not sure what checks we are going to need here after we + // support floats and dates. 
+ if (node->frame.begin_offset > INT_MAX || node->frame.begin_offset < INT_MIN) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Frame offset must be between {} and {}, but {} is given", + INT_MAX, INT_MIN, node->frame.begin_offset); + } } else { @@ -567,7 +585,7 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p if (keyword_preceding.ignore(pos, expected)) { - node->frame.begin_offset = - node->frame.begin_offset; + node->frame.begin_offset = -node->frame.begin_offset; } else if (keyword_following.ignore(pos, expected)) { @@ -604,7 +622,22 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p } else if (parser_literal.parse(pos, ast_literal, expected)) { - node->frame.end_offset = ast_literal->as().value.safeGet(); + const Field & value = ast_literal->as().value; + if (!isInt64FieldType(value.getType())) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Only integer frame offsets are supported, '{}' is not supported.", + Field::Types::toString(value.getType())); + } + node->frame.end_offset = value.get(); + node->frame.end_type = WindowFrame::BoundaryType::Offset; + + if (node->frame.end_offset > INT_MAX || node->frame.end_offset < INT_MIN) + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Frame offset must be between {} and {}, but {} is given", + INT_MAX, INT_MIN, node->frame.end_offset); + } } else { @@ -623,6 +656,7 @@ static bool tryParseFrameDefinition(ASTWindowDefinition * node, IParser::Pos & p } else if (keyword_following.ignore(pos, expected)) { + // Positive offset or UNBOUNDED FOLLOWING. } else { diff --git a/src/Parsers/New/CMakeLists.txt b/src/Parsers/New/CMakeLists.txt index 360dd4d7488..468394b7bd8 100644 --- a/src/Parsers/New/CMakeLists.txt +++ b/src/Parsers/New/CMakeLists.txt @@ -65,8 +65,6 @@ target_compile_options (clickhouse_parsers_new -Wno-documentation-deprecated-sync -Wno-shadow-field -Wno-unused-parameter - - PUBLIC -Wno-extra-semi -Wno-inconsistent-missing-destructor-override ) diff --git a/src/Parsers/ParserDataType.cpp b/src/Parsers/ParserDataType.cpp index 3d3f393a300..dd495fe6d53 100644 --- a/src/Parsers/ParserDataType.cpp +++ b/src/Parsers/ParserDataType.cpp @@ -32,8 +32,10 @@ private: const char * operators[] = {"=", "equals", nullptr}; ParserLeftAssociativeBinaryOperatorList enum_parser(operators, std::make_unique()); - return nested_parser.parse(pos, node, expected) - || enum_parser.parse(pos, node, expected) + if (pos->type == TokenType::BareWord && std::string_view(pos->begin, pos->size()) == "Nested") + return nested_parser.parse(pos, node, expected); + + return enum_parser.parse(pos, node, expected) || literal_parser.parse(pos, node, expected) || data_type_parser.parse(pos, node, expected); } diff --git a/src/Parsers/ParserInsertQuery.h b/src/Parsers/ParserInsertQuery.h index b6a199c9d71..f98e433551d 100644 --- a/src/Parsers/ParserInsertQuery.h +++ b/src/Parsers/ParserInsertQuery.h @@ -30,7 +30,7 @@ private: const char * getName() const override { return "INSERT query"; } bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override; public: - ParserInsertQuery(const char * end_) : end(end_) {} + explicit ParserInsertQuery(const char * end_) : end(end_) {} }; /** Insert accepts an identifier and an asterisk with variants. 
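A minimal standalone sketch (the helper name makeFrameOffset and its signature are illustrative, not part of the parser above) of the signed frame-offset convention used by the new ROWS frame parsing: the literal must be an integer, it is range-checked, and PRECEDING is stored as a negative value so that later validity checks can compare offsets directly.

#include <climits>
#include <cstdint>
#include <stdexcept>
#include <string>

// Hypothetical helper: turn a parsed unsigned literal plus the PRECEDING/FOLLOWING
// keyword into the signed offset stored in the frame description.
int64_t makeFrameOffset(uint64_t literal, bool preceding)
{
    if (literal > static_cast<uint64_t>(INT_MAX))
        throw std::invalid_argument("Frame offset must be between " + std::to_string(INT_MIN)
            + " and " + std::to_string(INT_MAX) + ", but " + std::to_string(literal) + " is given");

    const int64_t offset = static_cast<int64_t>(literal);
    return preceding ? -offset : offset;   // "N PRECEDING" becomes -N, "N FOLLOWING" stays +N
}

In the parser above the same bound check is applied to both the frame start and the frame end offsets.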
diff --git a/src/Processors/DelayedPortsProcessor.cpp b/src/Processors/DelayedPortsProcessor.cpp index cb181a4e4ac..d740ef08e5a 100644 --- a/src/Processors/DelayedPortsProcessor.cpp +++ b/src/Processors/DelayedPortsProcessor.cpp @@ -12,7 +12,7 @@ DelayedPortsProcessor::DelayedPortsProcessor( const Block & header, size_t num_ports, const PortNumbers & delayed_ports, bool assert_main_ports_empty) : IProcessor(InputPorts(num_ports, header), OutputPorts((assert_main_ports_empty ? delayed_ports.size() : num_ports), header)) - , num_delayed(delayed_ports.size()) + , num_delayed_ports(delayed_ports.size()) { port_pairs.resize(num_ports); output_to_pair.reserve(outputs.size()); @@ -36,21 +36,24 @@ DelayedPortsProcessor::DelayedPortsProcessor( } } +void DelayedPortsProcessor::finishPair(PortsPair & pair) +{ + if (!pair.is_finished) + { + pair.is_finished = true; + ++num_finished_pairs; + + if (pair.output_port) + ++num_finished_outputs; + } +} + bool DelayedPortsProcessor::processPair(PortsPair & pair) { - auto finish = [&]() - { - if (!pair.is_finished) - { - pair.is_finished = true; - ++num_finished; - } - }; - if (pair.output_port && pair.output_port->isFinished()) { pair.input_port->close(); - finish(); + finishPair(pair); return false; } @@ -58,7 +61,7 @@ bool DelayedPortsProcessor::processPair(PortsPair & pair) { if (pair.output_port) pair.output_port->finish(); - finish(); + finishPair(pair); return false; } @@ -72,7 +75,7 @@ bool DelayedPortsProcessor::processPair(PortsPair & pair) throw Exception(ErrorCodes::LOGICAL_ERROR, "Input port for DelayedPortsProcessor is assumed to have no data, but it has one"); - pair.output_port->pushData(pair.input_port->pullData()); + pair.output_port->pushData(pair.input_port->pullData(true)); } return true; @@ -80,7 +83,7 @@ bool DelayedPortsProcessor::processPair(PortsPair & pair) IProcessor::Status DelayedPortsProcessor::prepare(const PortNumbers & updated_inputs, const PortNumbers & updated_outputs) { - bool skip_delayed = (num_finished + num_delayed) < port_pairs.size(); + bool skip_delayed = (num_finished_pairs + num_delayed_ports) < port_pairs.size(); bool need_data = false; if (!are_inputs_initialized && !updated_outputs.empty()) @@ -95,9 +98,27 @@ IProcessor::Status DelayedPortsProcessor::prepare(const PortNumbers & updated_in for (const auto & output_number : updated_outputs) { - auto pair_num = output_to_pair[output_number]; - if (!skip_delayed || !port_pairs[pair_num].is_delayed) - need_data = processPair(port_pairs[pair_num]) || need_data; + auto & pair = port_pairs[output_to_pair[output_number]]; + + /// Finish pair of ports earlier if possible. + if (!pair.is_finished && pair.output_port && pair.output_port->isFinished()) + finishPair(pair); + else if (!skip_delayed || !pair.is_delayed) + need_data = processPair(pair) || need_data; + } + + /// Do not wait for delayed ports if all output ports are finished. + if (num_finished_outputs == outputs.size()) + { + for (auto & pair : port_pairs) + { + if (pair.output_port) + pair.output_port->finish(); + + pair.input_port->close(); + } + + return Status::Finished; } for (const auto & input_number : updated_inputs) @@ -107,14 +128,14 @@ IProcessor::Status DelayedPortsProcessor::prepare(const PortNumbers & updated_in } /// In case if main streams are finished at current iteration, start processing delayed streams. 
- if (skip_delayed && (num_finished + num_delayed) >= port_pairs.size()) + if (skip_delayed && (num_finished_pairs + num_delayed_ports) >= port_pairs.size()) { for (auto & pair : port_pairs) if (pair.is_delayed) need_data = processPair(pair) || need_data; } - if (num_finished == port_pairs.size()) + if (num_finished_pairs == port_pairs.size()) return Status::Finished; if (need_data) diff --git a/src/Processors/DelayedPortsProcessor.h b/src/Processors/DelayedPortsProcessor.h index 3e44c945bd4..a6a9590e0c8 100644 --- a/src/Processors/DelayedPortsProcessor.h +++ b/src/Processors/DelayedPortsProcessor.h @@ -28,13 +28,15 @@ private: }; std::vector port_pairs; - size_t num_delayed; - size_t num_finished = 0; + const size_t num_delayed_ports; + size_t num_finished_pairs = 0; + size_t num_finished_outputs = 0; std::vector output_to_pair; bool are_inputs_initialized = false; bool processPair(PortsPair & pair); + void finishPair(PortsPair & pair); }; } diff --git a/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp b/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp index eda2665119a..0ebca3661b4 100644 --- a/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp +++ b/src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp @@ -147,12 +147,13 @@ namespace DB /// We want to preallocate memory buffer (increase capacity) /// and put the pointer at the beginning of the buffer unit.segment.resize(DBMS_DEFAULT_BUFFER_SIZE); - /// The second invocation won't release memory, only set size equals to 0. - unit.segment.resize(0); unit.actual_memory_size = 0; BufferWithOutsideMemory out_buffer(unit.segment); + /// The second invocation won't release memory, only set size equals to 0. + unit.segment.resize(0); + auto formatter = internal_formatter_creator(out_buffer); switch (unit.type) diff --git a/src/Processors/QueryPlan/Optimizations/splitFilter.cpp b/src/Processors/QueryPlan/Optimizations/splitFilter.cpp index 38ba8f25b24..8c212936195 100644 --- a/src/Processors/QueryPlan/Optimizations/splitFilter.cpp +++ b/src/Processors/QueryPlan/Optimizations/splitFilter.cpp @@ -24,8 +24,9 @@ size_t trySplitFilter(QueryPlan::Node * node, QueryPlan::Nodes & nodes) if (split.second->trivial()) return 0; + bool remove_filter = false; if (filter_step->removesFilterColumn()) - split.second->removeUnusedInput(filter_step->getFilterColumnName()); + remove_filter = split.second->removeUnusedResult(filter_step->getFilterColumnName()); auto description = filter_step->getStepDescription(); @@ -37,7 +38,7 @@ size_t trySplitFilter(QueryPlan::Node * node, QueryPlan::Nodes & nodes) filter_node.children.at(0)->step->getOutputStream(), std::move(split.first), filter_step->getFilterColumnName(), - filter_step->removesFilterColumn()); + remove_filter); node->step = std::make_unique(filter_node.step->getOutputStream(), std::move(split.second)); diff --git a/src/Processors/QueryPlan/WindowStep.cpp b/src/Processors/QueryPlan/WindowStep.cpp index 82c589b8b20..1a71ca0adc7 100644 --- a/src/Processors/QueryPlan/WindowStep.cpp +++ b/src/Processors/QueryPlan/WindowStep.cpp @@ -57,6 +57,7 @@ WindowStep::WindowStep(const DataStream & input_stream_, { // We don't remove any columns, only add, so probably we don't have to update // the output DataStream::distinct_columns. 
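    // (Validating the frame here, while the query plan is being built, turns an
    // unsupported or contradictory frame into a BAD_ARGUMENTS error before any
    // data is read.)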
+ window_description.frame.checkValid(); } void WindowStep::transformPipeline(QueryPipeline & pipeline) diff --git a/src/Processors/Transforms/WindowTransform.cpp b/src/Processors/Transforms/WindowTransform.cpp index fe55fe6bcad..474d1a3c452 100644 --- a/src/Processors/Transforms/WindowTransform.cpp +++ b/src/Processors/Transforms/WindowTransform.cpp @@ -165,16 +165,197 @@ void WindowTransform::advancePartitionEnd() assert(!partition_ended && partition_end == blocksEnd()); } -void WindowTransform::advanceFrameStart() const +auto WindowTransform::moveRowNumberNoCheck(const RowNumber & _x, int offset) const { - // Frame start is always UNBOUNDED PRECEDING for now, so we don't have to - // move it. It is initialized when the new partition starts. - if (window_description.frame.begin_type - != WindowFrame::BoundaryType::Unbounded) + RowNumber x = _x; + + if (offset > 0) { - throw Exception(ErrorCodes::NOT_IMPLEMENTED, - "Frame start type '{}' is not implemented", - WindowFrame::toString(window_description.frame.begin_type)); + for (;;) + { + assertValid(x); + assert(offset >= 0); + + const auto block_rows = blockRowsNumber(x); + x.row += offset; + if (x.row >= block_rows) + { + offset = x.row - block_rows; + x.row = 0; + x.block++; + + if (x == blocksEnd()) + { + break; + } + } + else + { + offset = 0; + break; + } + } + } + else if (offset < 0) + { + for (;;) + { + assertValid(x); + assert(offset <= 0); + + if (x.row >= static_cast(-offset)) + { + x.row -= -offset; + offset = 0; + break; + } + + // Move to the first row in current block. Note that the offset is + // negative. + offset += x.row; + x.row = 0; + + // Move to the last row of the previous block, if we are not at the + // first one. Offset also is incremented by one, because we pass over + // the first row of this block. + if (x.block == first_block_number) + { + break; + } + + --x.block; + offset += 1; + x.row = blockRowsNumber(x) - 1; + } + } + + return std::tuple{x, offset}; +} + +auto WindowTransform::moveRowNumber(const RowNumber & _x, int offset) const +{ + auto [x, o] = moveRowNumberNoCheck(_x, offset); + +#ifndef NDEBUG + // Check that it was reversible. + auto [xx, oo] = moveRowNumberNoCheck(x, -(offset - o)); + +// fmt::print(stderr, "{} -> {}, result {}, {}, new offset {}, twice {}, {}\n", +// _x, offset, x, o, -(offset - o), xx, oo); + assert(xx == _x); + assert(oo == 0); +#endif + + return std::tuple{x, o}; +} + + +void WindowTransform::advanceFrameStartRowsOffset() +{ + // Just recalculate it each time by walking blocks. + const auto [moved_row, offset_left] = moveRowNumber(current_row, + window_description.frame.begin_offset); + + frame_start = moved_row; + + assertValid(frame_start); + +// fmt::print(stderr, "frame start {} left {} partition start {}\n", +// frame_start, offset_left, partition_start); + + if (frame_start <= partition_start) + { + // Got to the beginning of partition and can't go further back. + frame_start = partition_start; + frame_started = true; + return; + } + + if (partition_end <= frame_start) + { + // A FOLLOWING frame start ran into the end of partition. + frame_start = partition_end; + frame_started = partition_ended; + return; + } + + // Handled the equality case above. Now the frame start is inside the + // partition, if we walked all the offset, it's final. + assert(partition_start < frame_start); + frame_started = offset_left == 0; + + // If we ran into the start of data (offset left is negative), we won't be + // able to make progress. Should have handled this case above. 
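    // (moveRowNumber() stops at the first row of the first retained block and
    // reports any unconsumed part of a PRECEDING offset as a negative remainder.)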
+ assert(offset_left >= 0); +} + +void WindowTransform::advanceFrameStartChoose() +{ + switch (window_description.frame.begin_type) + { + case WindowFrame::BoundaryType::Unbounded: + // UNBOUNDED PRECEDING, just mark it valid. It is initialized when + // the new partition starts. + frame_started = true; + return; + case WindowFrame::BoundaryType::Current: + // CURRENT ROW differs between frame types only in how the peer + // groups are accounted. + assert(partition_start <= peer_group_start); + assert(peer_group_start < partition_end); + assert(peer_group_start <= current_row); + frame_start = peer_group_start; + frame_started = true; + return; + case WindowFrame::BoundaryType::Offset: + switch (window_description.frame.type) + { + case WindowFrame::FrameType::Rows: + advanceFrameStartRowsOffset(); + return; + default: + // Fallthrough to the "not implemented" error. + break; + } + break; + } + + throw Exception(ErrorCodes::NOT_IMPLEMENTED, + "Frame start type '{}' for frame '{}' is not implemented", + WindowFrame::toString(window_description.frame.begin_type), + WindowFrame::toString(window_description.frame.type)); +} + +void WindowTransform::advanceFrameStart() +{ + if (frame_started) + { + return; + } + + const auto frame_start_before = frame_start; + advanceFrameStartChoose(); + assert(frame_start_before <= frame_start); + if (frame_start == frame_start_before) + { + // If the frame start didn't move, this means we validated that the frame + // starts at the point we reached earlier but were unable to validate. + // This probably only happens in degenerate cases where the frame start + // is further than the end of partition, and the partition ends at the + // last row of the block, but we can only tell for sure after a new + // block arrives. We still have to update the state of aggregate + // functions when the frame start becomes valid, so we continue. + assert(frame_started); + } + + assert(partition_start <= frame_start); + assert(frame_start <= partition_end); + if (partition_ended && frame_start == partition_end) + { + // Check that if the start of frame (e.g. FOLLOWING) runs into the end + // of partition, it is marked as valid -- we can't advance it any + // further. + assert(frame_started); } } @@ -257,18 +438,15 @@ void WindowTransform::advanceFrameEndCurrentRow() // fmt::print(stderr, "first row {} last {}\n", frame_end.row, rows_end); - // We could retreat the frame_end here, but for some reason I am reluctant - // to do this... It would have better data locality. - auto reference = current_row; + // Advance frame_end while it is still peers with the current row. for (; frame_end.row < rows_end; ++frame_end.row) { - if (!arePeers(reference, frame_end)) + if (!arePeers(current_row, frame_end)) { // fmt::print(stderr, "{} and {} don't match\n", reference, frame_end); frame_ended = true; return; } - reference = frame_end; } // Might have gotten to the end of the current block, have to properly @@ -291,6 +469,39 @@ void WindowTransform::advanceFrameEndUnbounded() frame_ended = partition_ended; } +void WindowTransform::advanceFrameEndRowsOffset() +{ + // Walk the specified offset from the current row. The "+1" is needed + // because the frame_end is a past-the-end pointer. + const auto [moved_row, offset_left] = moveRowNumber(current_row, + window_description.frame.end_offset + 1); + + if (partition_end <= moved_row) + { + // Clamp to the end of partition. It might not have ended yet, in which + // case wait for more data. 
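    // (frame_ended mirrors partition_ended here, so if the partition has not
    // ended yet the caller simply waits for more input and retries.)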
+ frame_end = partition_end; + frame_ended = partition_ended; + return; + } + + if (moved_row <= partition_start) + { + // Clamp to the start of partition. + frame_end = partition_start; + frame_ended = true; + return; + } + + // Frame end inside partition, if we walked all the offset, it's final. + frame_end = moved_row; + frame_ended = offset_left == 0; + + // If we ran into the start of data (offset left is negative), we won't be + // able to make progress. Should have handled this case above. + assert(offset_left >= 0); +} + void WindowTransform::advanceFrameEnd() { // No reason for this function to be called again after it succeeded. @@ -301,16 +512,23 @@ void WindowTransform::advanceFrameEnd() switch (window_description.frame.end_type) { case WindowFrame::BoundaryType::Current: - // The only frame end we have for now is CURRENT ROW. advanceFrameEndCurrentRow(); break; case WindowFrame::BoundaryType::Unbounded: advanceFrameEndUnbounded(); break; case WindowFrame::BoundaryType::Offset: - throw Exception(ErrorCodes::NOT_IMPLEMENTED, - "The frame end type '{}' is not implemented", - WindowFrame::toString(window_description.frame.end_type)); + switch (window_description.frame.type) + { + case WindowFrame::FrameType::Rows: + advanceFrameEndRowsOffset(); + break; + default: + throw Exception(ErrorCodes::NOT_IMPLEMENTED, + "The frame end type '{}' is not implemented", + WindowFrame::toString(window_description.frame.end_type)); + } + break; } // fmt::print(stderr, "frame_end {} -> {}\n", frame_end_before, frame_end); @@ -321,44 +539,81 @@ void WindowTransform::advanceFrameEnd() { return; } +} - // Add the rows over which we advanced the frame to the aggregate function - // states. We could have advanced over at most the entire last block. - uint64_t rows_end = frame_end.row; - if (frame_end.row == 0) +// Update the aggregation states after the frame has changed. +void WindowTransform::updateAggregationState() +{ +// fmt::print(stderr, "update agg states [{}, {}) -> [{}, {})\n", +// prev_frame_start, prev_frame_end, frame_start, frame_end); + + // Assert that the frame boundaries are known, have proper order wrt each + // other, and have not gone back wrt the previous frame. + assert(frame_started); + assert(frame_ended); + assert(frame_start <= frame_end); + assert(prev_frame_start <= prev_frame_end); + assert(prev_frame_start <= frame_start); + assert(prev_frame_end <= frame_end); + + // We might have to reset aggregation state and/or add some rows to it. + // Figure out what to do. + bool reset_aggregation = false; + RowNumber rows_to_add_start; + RowNumber rows_to_add_end; + if (frame_start == prev_frame_start) { - assert(frame_end == blocksEnd()); - rows_end = blockRowsNumber(frame_end_before); + // The frame start didn't change, add the tail rows. + reset_aggregation = false; + rows_to_add_start = prev_frame_end; + rows_to_add_end = frame_end; } else { - assert(frame_end_before.block == frame_end.block); + // The frame start changed, reset the state and aggregate over the + // entire frame. This can be made per-function after we learn to + // subtract rows from some types of aggregation states, but for now we + // always have to reset when the frame start changes. + reset_aggregation = true; + rows_to_add_start = frame_start; + rows_to_add_end = frame_end; } - // Equality would mean "no data to process", for which we checked above. 
- assert(frame_end_before.row < rows_end); for (auto & ws : workspaces) { - if (frame_end_before.block != ws.cached_block_number) - { - const auto & block - = blocks[frame_end_before.block - first_block_number]; - ws.argument_columns.clear(); - for (const auto i : ws.argument_column_indices) - { - ws.argument_columns.push_back(block.input_columns[i].get()); - } - ws.cached_block_number = frame_end_before.block; - } - const auto * a = ws.window_function.aggregate_function.get(); auto * buf = ws.aggregate_function_state.data(); - auto * columns = ws.argument_columns.data(); - for (auto row = frame_end_before.row; row < rows_end; ++row) + + if (reset_aggregation) { - a->add(buf, columns, row, arena.get()); +// fmt::print(stderr, "(2) reset aggregation\n"); + a->destroy(buf); + a->create(buf); + } + + for (auto row = rows_to_add_start; row < rows_to_add_end; + advanceRowNumber(row)) + { + if (row.block != ws.cached_block_number) + { + const auto & block + = blocks[row.block - first_block_number]; + ws.argument_columns.clear(); + for (const auto i : ws.argument_column_indices) + { + ws.argument_columns.push_back(block.input_columns[i].get()); + } + ws.cached_block_number = row.block; + } + +// fmt::print(stderr, "(2) add row {}\n", row); + auto * columns = ws.argument_columns.data(); + a->add(buf, columns, row.row, arena.get()); } } + + prev_frame_start = frame_start; + prev_frame_end = frame_end; } void WindowTransform::writeOutCurrentRow() @@ -414,8 +669,8 @@ void WindowTransform::appendChunk(Chunk & chunk) for (;;) { advancePartitionEnd(); -// fmt::print(stderr, "partition [?, {}), {}\n", -// partition_end, partition_ended); +// fmt::print(stderr, "partition [{}, {}), {}\n", +// partition_start, partition_end, partition_ended); // Either we ran out of data or we found the end of partition (maybe // both, but this only happens at the total end of data). @@ -430,15 +685,38 @@ void WindowTransform::appendChunk(Chunk & chunk) // which is precisely the definition of `partition_end`. while (current_row < partition_end) { - // Advance the frame start, updating the state of the aggregate - // functions. - advanceFrameStart(); - // Advance the frame end, updating the state of the aggregate - // functions. - advanceFrameEnd(); +// fmt::print(stderr, "(1) row {} frame [{}, {}) {}, {}\n", +// current_row, frame_start, frame_end, +// frame_started, frame_ended); -// fmt::print(stderr, "row {} frame [{}, {}) {}\n", -// current_row, frame_start, frame_end, frame_ended); + // We now know that the current row is valid, so we can update the + // peer group start. + if (!arePeers(peer_group_start, current_row)) + { + peer_group_start = current_row; + } + + // Advance the frame start. + advanceFrameStart(); + + if (!frame_started) + { + // Wait for more input data to find the start of frame. + assert(!input_is_finished); + assert(!partition_ended); + return; + } + + // frame_end must be greater or equal than frame_start, so if the + // frame_start is already past the current frame_end, we can start + // from it to save us some work. + if (frame_end < frame_start) + { + frame_end = frame_start; + } + + // Advance the frame end. + advanceFrameEnd(); if (!frame_ended) { @@ -448,16 +726,34 @@ void WindowTransform::appendChunk(Chunk & chunk) return; } - // The frame shouldn't be empty (probably?). - assert(frame_start < frame_end); +// fmt::print(stderr, "(2) row {} frame [{}, {}) {}, {}\n", +// current_row, frame_start, frame_end, +// frame_started, frame_ended); + + // The frame can be empty sometimes, e.g. 
the boundaries coincide + // or the start is after the partition end. But hopefully start is + // not after end. + assert(frame_started); + assert(frame_ended); + assert(frame_start <= frame_end); + + // Now that we know the new frame boundaries, update the aggregation + // states. Theoretically we could do this simultaneously with moving + // the frame boundaries, but it would require some care not to + // perform unnecessary work while we are still looking for the frame + // start, so do it the simple way for now. + updateAggregationState(); // Write out the aggregation results. writeOutCurrentRow(); // Move to the next row. The frame will have to be recalculated. + // The peer group start is updated at the beginning of the loop, + // because current_row might now be past-the-end. advanceRowNumber(current_row); first_not_ready_row = current_row; frame_ended = false; + frame_started = false; } if (input_is_finished) @@ -478,15 +774,18 @@ void WindowTransform::appendChunk(Chunk & chunk) } // Start the next partition. - const auto new_partition_start = partition_end; + partition_start = partition_end; advanceRowNumber(partition_end); partition_ended = false; // We have to reset the frame when the new partition starts. This is not a // generally correct way to do so, but we don't really support moving frame // for now. - frame_start = new_partition_start; - frame_end = new_partition_start; - assert(current_row == new_partition_start); + frame_start = partition_start; + frame_end = partition_start; + prev_frame_start = partition_start; + prev_frame_end = partition_start; + assert(current_row == partition_start); + peer_group_start = partition_start; // fmt::print(stderr, "reinitialize agg data at start of {}\n", // new_partition_start); @@ -534,6 +833,15 @@ IProcessor::Status WindowTransform::prepare() return Status::Finished; } + if (output_data.exception) + { + // An exception occurred during processing. + output.pushData(std::move(output_data)); + output.finish(); + input.close(); + return Status::Finished; + } + assert(first_not_ready_row.block >= first_block_number); // The first_not_ready_row might be past-the-end if we have already // calculated the window functions for all input rows. 
That's why the @@ -665,6 +973,7 @@ void WindowTransform::work() assert(next_output_block_number >= first_block_number); assert(frame_start.block >= first_block_number); assert(current_row.block >= first_block_number); + assert(peer_group_start.block >= first_block_number); } } diff --git a/src/Processors/Transforms/WindowTransform.h b/src/Processors/Transforms/WindowTransform.h index 39ccd4f96f9..bb1a9aefd64 100644 --- a/src/Processors/Transforms/WindowTransform.h +++ b/src/Processors/Transforms/WindowTransform.h @@ -53,6 +53,11 @@ struct RowNumber { return block == other.block && row == other.row; } + + bool operator <= (const RowNumber & other) const + { + return *this < other || *this == other; + } }; /* @@ -101,11 +106,15 @@ public: private: void advancePartitionEnd(); - void advanceFrameStart() const; - void advanceFrameEnd(); + void advanceFrameStart(); + void advanceFrameStartChoose(); + void advanceFrameStartRowsOffset(); void advanceFrameEndCurrentRow(); void advanceFrameEndUnbounded(); + void advanceFrameEndRowsOffset(); + void advanceFrameEnd(); bool arePeers(const RowNumber & x, const RowNumber & y) const; + void updateAggregationState(); void writeOutCurrentRow(); Columns & inputAt(const RowNumber & x) @@ -169,9 +178,28 @@ private: #endif } + auto moveRowNumber(const RowNumber & _x, int offset) const; + auto moveRowNumberNoCheck(const RowNumber & _x, int offset) const; + + void assertValid(const RowNumber & x) const + { + assert(x.block >= first_block_number); + if (x.block == first_block_number + blocks.size()) + { + assert(x.row == 0); + } + else + { + assert(x.row < blockRowsNumber(x)); + } + } + RowNumber blocksEnd() const { return RowNumber{first_block_number + blocks.size(), 0}; } + RowNumber blocksBegin() const + { return RowNumber{first_block_number, 0}; } + public: /* * Data (formerly) inherited from ISimpleTransform, needed for the @@ -217,18 +245,26 @@ public: // Used to determine which resulting blocks we can pass to the consumer. RowNumber first_not_ready_row; - // We don't keep the pointer to start of partition, because we don't really - // need it, and we want to be able to drop the starting blocks to save memory. - // The `partition_end` is past-the-end, as usual. When partition_ended = false, - // it still haven't ended, and partition_end is the next row to check. + // Boundaries of the current partition. + // partition_start doesn't point to a valid block, because we want to drop + // the blocks early to save memory. We still have to track it so that we can + // cut off a PRECEDING frame at the partition start. + // The `partition_end` is past-the-end, as usual. When + // partition_ended = false, it still haven't ended, and partition_end is the + // next row to check. + RowNumber partition_start; RowNumber partition_end; bool partition_ended = false; - // This is the row for which we are computing the window functions now. + // The row for which we are now computing the window functions. RowNumber current_row; + // The start of current peer group, needed for CURRENT ROW frame start. + // For ROWS frame, always equal to the current row, and for RANGE and GROUP + // frames may be earlier. + RowNumber peer_group_start; - // The frame is [frame_start, frame_end) if frame_ended, and unknown - // otherwise. Note that when we move to the next row, both the + // The frame is [frame_start, frame_end) if frame_ended && frame_started, + // and unknown otherwise. 
Note that when we move to the next row, both the // frame_start and the frame_end may jump forward by an unknown amount of // blocks, e.g. if we use a RANGE frame. This means that sometimes we don't // know neither frame_end nor frame_start. @@ -239,6 +275,13 @@ public: RowNumber frame_start; RowNumber frame_end; bool frame_ended = false; + bool frame_started = false; + + // The previous frame boundaries that correspond to the current state of the + // aggregate function. We use them to determine how to update the aggregation + // state after we find the new frame. + RowNumber prev_frame_start; + RowNumber prev_frame_end; }; } diff --git a/src/Server/GRPCServer.cpp b/src/Server/GRPCServer.cpp index c3492e9ea8a..ede9bbff063 100644 --- a/src/Server/GRPCServer.cpp +++ b/src/Server/GRPCServer.cpp @@ -652,7 +652,6 @@ namespace /// Create context. query_context.emplace(iserver.context()); - query_scope.emplace(*query_context); /// Authentication. query_context->setUser(user, password, user_address); @@ -670,6 +669,8 @@ namespace query_context->setSessionContext(session->context); } + query_scope.emplace(*query_context); + /// Set client info. ClientInfo & client_info = query_context->getClientInfo(); client_info.query_kind = ClientInfo::QueryKind::INITIAL_QUERY; diff --git a/src/Server/HTTPHandler.cpp b/src/Server/HTTPHandler.cpp index e161b5752ae..5e0d1f0ac66 100644 --- a/src/Server/HTTPHandler.cpp +++ b/src/Server/HTTPHandler.cpp @@ -219,8 +219,11 @@ void HTTPHandler::pushDelayedResults(Output & used_output) } } - ConcatReadBuffer concat_read_buffer(read_buffers_raw_ptr); - copyData(concat_read_buffer, *used_output.out_maybe_compressed); + if (!read_buffers_raw_ptr.empty()) + { + ConcatReadBuffer concat_read_buffer(read_buffers_raw_ptr); + copyData(concat_read_buffer, *used_output.out_maybe_compressed); + } } diff --git a/src/Server/MySQLHandler.cpp b/src/Server/MySQLHandler.cpp index 63a48fde1a7..3cbe285615e 100644 --- a/src/Server/MySQLHandler.cpp +++ b/src/Server/MySQLHandler.cpp @@ -24,6 +24,7 @@ #include #include #include +#include #if !defined(ARCADIA_BUILD) # include @@ -86,6 +87,8 @@ MySQLHandler::MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & so void MySQLHandler::run() { + setThreadName("MySQLHandler"); + ThreadStatus thread_status; connection_context.makeSessionContext(); connection_context.getClientInfo().interface = ClientInfo::Interface::MYSQL; connection_context.setDefaultFormat("MySQLWire"); @@ -340,7 +343,9 @@ void MySQLHandler::comQuery(ReadBuffer & payload) affected_rows += progress.written_rows; }); - executeQuery(should_replace ? replacement : payload, *out, true, query_context, + CurrentThread::QueryScope query_scope{query_context}; + + executeQuery(should_replace ? 
replacement : payload, *out, false, query_context, [&with_output](const String &, const String &, const String &, const String &) { with_output = true; diff --git a/src/Server/PostgreSQLHandler.cpp b/src/Server/PostgreSQLHandler.cpp index 2bce5abcd11..b3a3bbf2aaa 100644 --- a/src/Server/PostgreSQLHandler.cpp +++ b/src/Server/PostgreSQLHandler.cpp @@ -5,6 +5,7 @@ #include #include "PostgreSQLHandler.h" #include +#include #include #if !defined(ARCADIA_BUILD) @@ -49,6 +50,8 @@ void PostgreSQLHandler::changeIO(Poco::Net::StreamSocket & socket) void PostgreSQLHandler::run() { + setThreadName("PostgresHandler"); + ThreadStatus thread_status; connection_context.makeSessionContext(); connection_context.getClientInfo().interface = ClientInfo::Interface::POSTGRESQL; connection_context.setDefaultFormat("PostgreSQLWire"); @@ -273,8 +276,10 @@ void PostgreSQLHandler::processQuery() for (const auto & spl_query : queries) { + /// FIXME why do we execute all queries in a single connection context? + CurrentThread::QueryScope query_scope{connection_context}; ReadBufferFromString read_buf(spl_query); - executeQuery(read_buf, *out, true, connection_context, {}); + executeQuery(read_buf, *out, false, connection_context, {}); PostgreSQLProtocol::Messaging::CommandComplete::Command command = PostgreSQLProtocol::Messaging::CommandComplete::classifyQuery(spl_query); diff --git a/src/Server/TCPHandler.cpp b/src/Server/TCPHandler.cpp index 12d1a0249b7..fa213dcdc55 100644 --- a/src/Server/TCPHandler.cpp +++ b/src/Server/TCPHandler.cpp @@ -24,6 +24,7 @@ #include #include #include +#include #include #include #include @@ -180,10 +181,16 @@ void TCPHandler::runImpl() /** If Query - process it. If Ping or Cancel - go back to the beginning. * There may come settings for a separate query that modify `query_context`. + * It's possible to receive part uuids packet before the query, so then receivePacket has to be called twice. */ if (!receivePacket()) continue; + /** If part_uuids got received in previous packet, trying to read again. + */ + if (state.empty() && state.part_uuids && !receivePacket()) + continue; + query_scope.emplace(*query_context); send_exception_with_stack_trace = query_context->getSettingsRef().calculate_text_stack_trace; @@ -528,6 +535,10 @@ void TCPHandler::processOrdinaryQuery() /// Pull query execution result, if exists, and send it to network. if (state.io.in) { + + if (query_context->getSettingsRef().allow_experimental_query_deduplication) + sendPartUUIDs(); + /// This allows the client to prepare output format if (Block header = state.io.in->getHeader()) sendData(header); @@ -592,6 +603,9 @@ void TCPHandler::processOrdinaryQueryWithProcessors() { auto & pipeline = state.io.pipeline; + if (query_context->getSettingsRef().allow_experimental_query_deduplication) + sendPartUUIDs(); + /// Send header-block, to allow client to prepare output format for data to send. 
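    /// (Part-UUID based deduplication handshake, as wired up above: the client may
    /// send a Protocol::Client::IgnoredPartUUIDs packet before the Query packet, and
    /// when allow_experimental_query_deduplication is enabled the server answers with
    /// a Protocol::Server::PartUUIDs packet listing the part UUIDs collected in the
    /// query context, before streaming any result data.)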
{ const auto & header = pipeline.getHeader(); @@ -693,6 +707,20 @@ void TCPHandler::receiveUnexpectedTablesStatusRequest() throw NetException("Unexpected packet TablesStatusRequest received from client", ErrorCodes::UNEXPECTED_PACKET_FROM_CLIENT); } +void TCPHandler::sendPartUUIDs() +{ + auto uuids = query_context->getPartUUIDs()->get(); + if (!uuids.empty()) + { + for (const auto & uuid : uuids) + LOG_TRACE(log, "Sending UUID: {}", toString(uuid)); + + writeVarUInt(Protocol::Server::PartUUIDs, *out); + writeVectorBinary(uuids, *out); + out->next(); + } +} + void TCPHandler::sendProfileInfo(const BlockStreamProfileInfo & info) { writeVarUInt(Protocol::Server::ProfileInfo, *out); @@ -905,6 +933,10 @@ bool TCPHandler::receivePacket() switch (packet_type) { + case Protocol::Client::IgnoredPartUUIDs: + /// Part uuids packet if any comes before query. + receiveIgnoredPartUUIDs(); + return true; case Protocol::Client::Query: if (!state.empty()) receiveUnexpectedQuery(); @@ -940,6 +972,16 @@ bool TCPHandler::receivePacket() } } +void TCPHandler::receiveIgnoredPartUUIDs() +{ + state.part_uuids = true; + std::vector uuids; + readVectorBinary(uuids, *in); + + if (!uuids.empty()) + query_context->getIgnoredPartUUIDs()->add(uuids); +} + void TCPHandler::receiveClusterNameAndSalt() { readStringBinary(cluster, *in); diff --git a/src/Server/TCPHandler.h b/src/Server/TCPHandler.h index 0d3109a6591..41539bef1e1 100644 --- a/src/Server/TCPHandler.h +++ b/src/Server/TCPHandler.h @@ -67,6 +67,9 @@ struct QueryState /// Temporary tables read bool temporary_tables_read = false; + /// A state got uuids to exclude from a query + bool part_uuids = false; + /// Request requires data from client for function input() bool need_receive_data_for_input = false; /// temporary place for incoming data block for input() @@ -173,6 +176,7 @@ private: void receiveHello(); bool receivePacket(); void receiveQuery(); + void receiveIgnoredPartUUIDs(); bool receiveData(bool scalar); bool readDataNext(const size_t & poll_interval, const int & receive_timeout); void readData(const Settings & connection_settings); @@ -201,6 +205,7 @@ private: void sendProgress(); void sendLogs(); void sendEndOfStream(); + void sendPartUUIDs(); void sendProfileInfo(const BlockStreamProfileInfo & info); void sendTotals(const Block & totals); void sendExtremes(const Block & extremes); diff --git a/src/Storages/Distributed/DistributedBlockOutputStream.cpp b/src/Storages/Distributed/DistributedBlockOutputStream.cpp index c698c0b18d5..51bd6d83105 100644 --- a/src/Storages/Distributed/DistributedBlockOutputStream.cpp +++ b/src/Storages/Distributed/DistributedBlockOutputStream.cpp @@ -75,7 +75,9 @@ static Block adoptBlock(const Block & header, const Block & block, Poco::Logger ConvertingBlockInputStream::MatchColumnsMode::Name); return convert.read(); } -static void writeBlockConvert(const BlockOutputStreamPtr & out, const Block & block, const size_t repeats, Poco::Logger * log) + + +static void writeBlockConvert(const BlockOutputStreamPtr & out, const Block & block, size_t repeats, Poco::Logger * log) { Block adopted_block = adoptBlock(out->getHeader(), block, log); for (size_t i = 0; i < repeats; ++i) @@ -387,11 +389,18 @@ void DistributedBlockOutputStream::writeSync(const Block & block) bool random_shard_insert = settings.insert_distributed_one_random_shard && !storage.has_sharding_key; size_t start = 0; size_t end = shards_info.size(); - if (random_shard_insert) + + if (settings.insert_shard_id) + { + start = settings.insert_shard_id - 1; + end = 
settings.insert_shard_id; + } + else if (random_shard_insert) { start = storage.getRandomShardIndex(shards_info); end = start + 1; } + size_t num_shards = end - start; if (!pool) @@ -549,7 +558,7 @@ void DistributedBlockOutputStream::writeSplitAsync(const Block & block) } -void DistributedBlockOutputStream::writeAsyncImpl(const Block & block, const size_t shard_id) +void DistributedBlockOutputStream::writeAsyncImpl(const Block & block, size_t shard_id) { const auto & shard_info = cluster->getShardsInfo()[shard_id]; const auto & settings = context.getSettingsRef(); @@ -585,7 +594,7 @@ void DistributedBlockOutputStream::writeAsyncImpl(const Block & block, const siz } -void DistributedBlockOutputStream::writeToLocal(const Block & block, const size_t repeats) +void DistributedBlockOutputStream::writeToLocal(const Block & block, size_t repeats) { /// Async insert does not support settings forwarding yet whereas sync one supports InterpreterInsertQuery interp(query_ast, context); diff --git a/src/Storages/Distributed/DistributedBlockOutputStream.h b/src/Storages/Distributed/DistributedBlockOutputStream.h index ef37776893a..ca57ad46fbb 100644 --- a/src/Storages/Distributed/DistributedBlockOutputStream.h +++ b/src/Storages/Distributed/DistributedBlockOutputStream.h @@ -62,10 +62,10 @@ private: void writeSplitAsync(const Block & block); - void writeAsyncImpl(const Block & block, const size_t shard_id = 0); + void writeAsyncImpl(const Block & block, size_t shard_id = 0); /// Increments finished_writings_count after each repeat. - void writeToLocal(const Block & block, const size_t repeats); + void writeToLocal(const Block & block, size_t repeats); void writeToShard(const Block & block, const std::vector & dir_names); diff --git a/src/Storages/Kafka/KafkaBlockOutputStream.cpp b/src/Storages/Kafka/KafkaBlockOutputStream.cpp index e1742741670..cfbb7ad2523 100644 --- a/src/Storages/Kafka/KafkaBlockOutputStream.cpp +++ b/src/Storages/Kafka/KafkaBlockOutputStream.cpp @@ -6,11 +6,6 @@ namespace DB { -namespace ErrorCodes -{ - extern const int CANNOT_CREATE_IO_BUFFER; -} - KafkaBlockOutputStream::KafkaBlockOutputStream( StorageKafka & storage_, const StorageMetadataPtr & metadata_snapshot_, @@ -29,8 +24,6 @@ Block KafkaBlockOutputStream::getHeader() const void KafkaBlockOutputStream::writePrefix() { buffer = storage.createWriteBuffer(getHeader()); - if (!buffer) - throw Exception("Failed to create Kafka producer!", ErrorCodes::CANNOT_CREATE_IO_BUFFER); auto format_settings = getFormatSettings(*context); format_settings.protobuf.allow_many_rows_no_delimiters = true; diff --git a/src/Storages/Kafka/WriteBufferToKafkaProducer.cpp b/src/Storages/Kafka/WriteBufferToKafkaProducer.cpp index c6d365ce2fe..dbb18b56769 100644 --- a/src/Storages/Kafka/WriteBufferToKafkaProducer.cpp +++ b/src/Storages/Kafka/WriteBufferToKafkaProducer.cpp @@ -42,6 +42,8 @@ WriteBufferToKafkaProducer::WriteBufferToKafkaProducer( timestamp_column_index = column_index; } } + + reinitializeChunks(); } WriteBufferToKafkaProducer::~WriteBufferToKafkaProducer() @@ -108,9 +110,7 @@ void WriteBufferToKafkaProducer::countRow(const Columns & columns, size_t curren break; } - rows = 0; - chunks.clear(); - set(nullptr, 0); + reinitializeChunks(); } } @@ -135,10 +135,25 @@ void WriteBufferToKafkaProducer::flush() } void WriteBufferToKafkaProducer::nextImpl() +{ + addChunk(); +} + +void WriteBufferToKafkaProducer::addChunk() { chunks.push_back(std::string()); chunks.back().resize(chunk_size); set(chunks.back().data(), chunk_size); } +void 
WriteBufferToKafkaProducer::reinitializeChunks() +{ + rows = 0; + chunks.clear(); + /// We cannot leave the buffer in the undefined state (i.e. without any + /// underlying buffer), since in this case the WriteBuffeR::next() will + /// not call our nextImpl() (due to available() == 0) + addChunk(); +} + } diff --git a/src/Storages/Kafka/WriteBufferToKafkaProducer.h b/src/Storages/Kafka/WriteBufferToKafkaProducer.h index 76859c4e33f..15881b7a8e5 100644 --- a/src/Storages/Kafka/WriteBufferToKafkaProducer.h +++ b/src/Storages/Kafka/WriteBufferToKafkaProducer.h @@ -30,6 +30,8 @@ public: private: void nextImpl() override; + void addChunk(); + void reinitializeChunks(); ProducerPtr producer; const std::string topic; diff --git a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp index c852151f27d..ce60856505e 100644 --- a/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp @@ -7,6 +7,7 @@ #include #include #include +#include namespace DB @@ -205,6 +206,7 @@ namespace virtual void insertStringColumn(const ColumnPtr & column, const String & name) = 0; virtual void insertUInt64Column(const ColumnPtr & column, const String & name) = 0; + virtual void insertUUIDColumn(const ColumnPtr & column, const String & name) = 0; }; } @@ -241,6 +243,16 @@ static void injectVirtualColumnsImpl(size_t rows, VirtualColumnsInserter & inser inserter.insertUInt64Column(column, virtual_column_name); } + else if (virtual_column_name == "_part_uuid") + { + ColumnPtr column; + if (rows) + column = DataTypeUUID().createColumnConst(rows, task->data_part->uuid)->convertToFullColumnIfConst(); + else + column = DataTypeUUID().createColumn(); + + inserter.insertUUIDColumn(column, virtual_column_name); + } else if (virtual_column_name == "_partition_id") { ColumnPtr column; @@ -271,6 +283,11 @@ namespace block.insert({column, std::make_shared(), name}); } + void insertUUIDColumn(const ColumnPtr & column, const String & name) final + { + block.insert({column, std::make_shared(), name}); + } + Block & block; }; @@ -288,6 +305,10 @@ namespace columns.push_back(column); } + void insertUUIDColumn(const ColumnPtr & column, const String &) final + { + columns.push_back(column); + } Columns & columns; }; } diff --git a/src/Storages/MergeTree/MergeTreeData.cpp b/src/Storages/MergeTree/MergeTreeData.cpp index 9ed751cbc8e..7cdf4f7b9cd 100644 --- a/src/Storages/MergeTree/MergeTreeData.cpp +++ b/src/Storages/MergeTree/MergeTreeData.cpp @@ -4,6 +4,7 @@ #include #include #include +#include #include #include #include @@ -3620,6 +3621,10 @@ PartitionCommandsResultInfo MergeTreeData::freezePartitionsByMatcher(MatcherFn m const auto data_parts = getDataParts(); String backup_name = (!with_name.empty() ? 
escapeForFileName(with_name) : toString(increment)); + String backup_path = shadow_path + backup_name + "/"; + + for (const auto & disk : getStoragePolicy()->getDisks()) + disk->onFreeze(backup_path); PartitionCommandsResultInfo result; @@ -3629,12 +3634,10 @@ PartitionCommandsResultInfo MergeTreeData::freezePartitionsByMatcher(MatcherFn m if (!matcher(part)) continue; - part->volume->getDisk()->createDirectories(shadow_path); - - String backup_path = shadow_path + backup_name + "/"; - LOG_DEBUG(log, "Freezing part {} snapshot will be placed at {}", part->name, backup_path); + part->volume->getDisk()->createDirectories(backup_path); + String backup_part_path = backup_path + relative_data_path + part->relative_path; if (auto part_in_memory = asInMemoryPart(part)) part_in_memory->flushToDisk(backup_path + relative_data_path, part->relative_path, metadata_snapshot); @@ -3949,6 +3952,7 @@ NamesAndTypesList MergeTreeData::getVirtuals() const return NamesAndTypesList{ NameAndTypePair("_part", std::make_shared()), NameAndTypePair("_part_index", std::make_shared()), + NameAndTypePair("_part_uuid", std::make_shared()), NameAndTypePair("_partition_id", std::make_shared()), NameAndTypePair("_sample_factor", std::make_shared()), }; diff --git a/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp b/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp index c00a2cbfa08..c571a53d4c8 100644 --- a/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp +++ b/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp @@ -1234,7 +1234,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mutatePartToTempor if (files_to_skip.count(it->name())) continue; - String destination = new_part_tmp_path + "/"; + String destination = new_part_tmp_path; String file_name = it->name(); auto rename_it = std::find_if(files_to_rename.begin(), files_to_rename.end(), [&file_name](const auto & rename_pair) { return rename_pair.first == file_name; }); if (rename_it != files_to_rename.end()) diff --git a/src/Storages/MergeTree/MergeTreeDataPartUUID.cpp b/src/Storages/MergeTree/MergeTreeDataPartUUID.cpp new file mode 100644 index 00000000000..17d19855798 --- /dev/null +++ b/src/Storages/MergeTree/MergeTreeDataPartUUID.cpp @@ -0,0 +1,38 @@ +#include + +namespace DB +{ + +std::vector PartUUIDs::add(const std::vector & new_uuids) +{ + std::lock_guard lock(mutex); + std::vector intersection; + + /// First check any presence of uuids in a uuids, return duplicates back if any + for (const auto & uuid : new_uuids) + { + if (uuids.find(uuid) != uuids.end()) + intersection.emplace_back(uuid); + } + + if (intersection.empty()) + { + for (const auto & uuid : new_uuids) + uuids.emplace(uuid); + } + return intersection; +} + +std::vector PartUUIDs::get() const +{ + std::lock_guard lock(mutex); + return std::vector(uuids.begin(), uuids.end()); +} + +bool PartUUIDs::has(const UUID & uuid) const +{ + std::lock_guard lock(mutex); + return uuids.find(uuid) != uuids.end(); +} + +} diff --git a/src/Storages/MergeTree/MergeTreeDataPartUUID.h b/src/Storages/MergeTree/MergeTreeDataPartUUID.h new file mode 100644 index 00000000000..ee3a9ee2791 --- /dev/null +++ b/src/Storages/MergeTree/MergeTreeDataPartUUID.h @@ -0,0 +1,34 @@ +#pragma once + +#include +#include +#include +#include + +namespace DB +{ + +/** PartUUIDs is a uuid set to control query deduplication. 
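+ * The set is filled during part selection (see MergeTreeDataSelectExecutor) and is sent to the
+ * client by TCPHandler::sendPartUUIDs.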
+ * The object is used in query context in both direction: + * Server->Client to send all parts' UUIDs that have been read during the query + * Client->Server to ignored specified parts from being processed. + * + * Current implementation assumes a user setting allow_experimental_query_deduplication=1 is set. + */ +struct PartUUIDs +{ +public: + /// Add new UUIDs if not duplicates found otherwise return duplicated UUIDs + std::vector add(const std::vector & uuids); + /// Get accumulated UUIDs + std::vector get() const; + bool has(const UUID & uuid) const; + +private: + mutable std::mutex mutex; + std::unordered_set uuids; +}; + +using PartUUIDsPtr = std::shared_ptr; + +} diff --git a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp index 924c7dbe185..7b1baf10616 100644 --- a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp +++ b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp @@ -6,7 +6,6 @@ #include #include -#include #include #include #include @@ -15,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -35,8 +35,10 @@ #include #include +#include #include #include +#include #include #include @@ -61,6 +63,7 @@ namespace ErrorCodes extern const int TOO_MANY_ROWS; extern const int CANNOT_PARSE_TEXT; extern const int TOO_MANY_PARTITIONS; + extern const int DUPLICATED_PART_UUIDS; } @@ -71,14 +74,27 @@ MergeTreeDataSelectExecutor::MergeTreeDataSelectExecutor(const MergeTreeData & d /// Construct a block consisting only of possible values of virtual columns -static Block getBlockWithPartColumn(const MergeTreeData::DataPartsVector & parts) +static Block getBlockWithVirtualPartColumns(const MergeTreeData::DataPartsVector & parts, bool with_uuid) { - auto column = ColumnString::create(); + auto part_column = ColumnString::create(); + auto part_uuid_column = ColumnUUID::create(); for (const auto & part : parts) - column->insert(part->name); + { + part_column->insert(part->name); + if (with_uuid) + part_uuid_column->insert(part->uuid); + } - return Block{ColumnWithTypeAndName(std::move(column), std::make_shared(), "_part")}; + if (with_uuid) + { + return Block(std::initializer_list{ + ColumnWithTypeAndName(std::move(part_column), std::make_shared(), "_part"), + ColumnWithTypeAndName(std::move(part_uuid_column), std::make_shared(), "_part_uuid"), + }); + } + + return Block{ColumnWithTypeAndName(std::move(part_column), std::make_shared(), "_part")}; } @@ -162,6 +178,7 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( Names real_column_names; bool part_column_queried = false; + bool part_uuid_column_queried = false; bool sample_factor_column_queried = false; Float64 used_sample_factor = 1; @@ -181,6 +198,11 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( { virt_column_names.push_back(name); } + else if (name == "_part_uuid") + { + part_uuid_column_queried = true; + virt_column_names.push_back(name); + } else if (name == "_sample_factor") { sample_factor_column_queried = true; @@ -198,9 +220,9 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( if (real_column_names.empty()) real_column_names.push_back(ExpressionActions::getSmallestColumn(available_real_columns)); - /// If `_part` virtual column is requested, we try to use it as an index. - Block virtual_columns_block = getBlockWithPartColumn(parts); - if (part_column_queried) + /// If `_part` or `_part_uuid` virtual columns are requested, we try to filter out data by them. 
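+    /// For example, a query such as `SELECT count() FROM t WHERE _part = 'all_1_1_0'` (or an analogous
+    /// condition on `_part_uuid`) only has to touch the matching parts; the table name here is illustrative.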
+ Block virtual_columns_block = getBlockWithVirtualPartColumns(parts, part_uuid_column_queried); + if (part_column_queried || part_uuid_column_queried) VirtualColumnUtils::filterBlockWithQuery(query_info.query, virtual_columns_block, context); auto part_values = VirtualColumnUtils::extractSingleValueFromBlock(virtual_columns_block, "_part"); @@ -244,40 +266,13 @@ QueryPlanPtr MergeTreeDataSelectExecutor::readFromParts( } } - /// Select the parts in which there can be data that satisfy `minmax_idx_condition` and that match the condition on `_part`, - /// as well as `max_block_number_to_read`. - { - auto prev_parts = parts; - parts.clear(); + const Context & query_context = context.hasQueryContext() ? context.getQueryContext() : context; - for (const auto & part : prev_parts) - { - if (part_values.find(part->name) == part_values.end()) - continue; + if (query_context.getSettingsRef().allow_experimental_query_deduplication) + selectPartsToReadWithUUIDFilter(parts, part_values, minmax_idx_condition, partition_pruner, max_block_numbers_to_read, query_context); + else + selectPartsToRead(parts, part_values, minmax_idx_condition, partition_pruner, max_block_numbers_to_read); - if (part->isEmpty()) - continue; - - if (minmax_idx_condition && !minmax_idx_condition->checkInHyperrectangle( - part->minmax_idx.hyperrectangle, data.minmax_idx_column_types).can_be_true) - continue; - - if (partition_pruner) - { - if (partition_pruner->canBePruned(part)) - continue; - } - - if (max_block_numbers_to_read) - { - auto blocks_iterator = max_block_numbers_to_read->find(part->info.partition_id); - if (blocks_iterator == max_block_numbers_to_read->end() || part->info.max_block > blocks_iterator->second) - continue; - } - - parts.push_back(part); - } - } /// Sampling. Names column_names_to_read = real_column_names; @@ -1849,5 +1844,134 @@ MarkRanges MergeTreeDataSelectExecutor::filterMarksUsingIndex( return res; } +void MergeTreeDataSelectExecutor::selectPartsToRead( + MergeTreeData::DataPartsVector & parts, + const std::unordered_set & part_values, + const std::optional & minmax_idx_condition, + std::optional & partition_pruner, + const PartitionIdToMaxBlock * max_block_numbers_to_read) const +{ + auto prev_parts = parts; + parts.clear(); + + for (const auto & part : prev_parts) + { + if (part_values.find(part->name) == part_values.end()) + continue; + + if (part->isEmpty()) + continue; + + if (minmax_idx_condition && !minmax_idx_condition->checkInHyperrectangle( + part->minmax_idx.hyperrectangle, data.minmax_idx_column_types).can_be_true) + continue; + + if (partition_pruner) + { + if (partition_pruner->canBePruned(part)) + continue; + } + + if (max_block_numbers_to_read) + { + auto blocks_iterator = max_block_numbers_to_read->find(part->info.partition_id); + if (blocks_iterator == max_block_numbers_to_read->end() || part->info.max_block > blocks_iterator->second) + continue; + } + + parts.push_back(part); + } +} + +void MergeTreeDataSelectExecutor::selectPartsToReadWithUUIDFilter( + MergeTreeData::DataPartsVector & parts, + const std::unordered_set & part_values, + const std::optional & minmax_idx_condition, + std::optional & partition_pruner, + const PartitionIdToMaxBlock * max_block_numbers_to_read, + const Context & query_context) const +{ + /// const_cast to add UUIDs to context. Bad practice. 
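+    /// The per-query context is mutated here because part selection records the UUIDs of the parts it
+    /// picks via getPartUUIDs() and consults the exclusion set via getIgnoredPartUUIDs(); both sets are
+    /// later exchanged with the client by TCPHandler.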
+ Context & non_const_context = const_cast(query_context); + + /// process_parts prepare parts that have to be read for the query, + /// returns false if duplicated parts' UUID have been met + auto select_parts = [&] (MergeTreeData::DataPartsVector & selected_parts) -> bool + { + auto ignored_part_uuids = non_const_context.getIgnoredPartUUIDs(); + std::unordered_set temp_part_uuids; + + auto prev_parts = selected_parts; + selected_parts.clear(); + + for (const auto & part : prev_parts) + { + if (part_values.find(part->name) == part_values.end()) + continue; + + if (part->isEmpty()) + continue; + + if (minmax_idx_condition + && !minmax_idx_condition->checkInHyperrectangle(part->minmax_idx.hyperrectangle, data.minmax_idx_column_types) + .can_be_true) + continue; + + if (partition_pruner) + { + if (partition_pruner->canBePruned(part)) + continue; + } + + if (max_block_numbers_to_read) + { + auto blocks_iterator = max_block_numbers_to_read->find(part->info.partition_id); + if (blocks_iterator == max_block_numbers_to_read->end() || part->info.max_block > blocks_iterator->second) + continue; + } + + /// populate UUIDs and exclude ignored parts if enabled + if (part->uuid != UUIDHelpers::Nil) + { + /// Skip the part if its uuid is meant to be excluded + if (ignored_part_uuids->has(part->uuid)) + continue; + + auto result = temp_part_uuids.insert(part->uuid); + if (!result.second) + throw Exception("Found a part with the same UUID on the same replica.", ErrorCodes::LOGICAL_ERROR); + } + + selected_parts.push_back(part); + } + + if (!temp_part_uuids.empty()) + { + auto duplicates = non_const_context.getPartUUIDs()->add(std::vector{temp_part_uuids.begin(), temp_part_uuids.end()}); + if (!duplicates.empty()) + { + /// on a local replica with prefer_localhost_replica=1 if any duplicates appeared during the first pass, + /// adding them to the exclusion, so they will be skipped on second pass + non_const_context.getIgnoredPartUUIDs()->add(duplicates); + return false; + } + } + + return true; + }; + + /// Process parts that have to be read for a query. + auto needs_retry = !select_parts(parts); + + /// If any duplicated part UUIDs met during the first step, try to ignore them in second pass + if (needs_retry) + { + LOG_DEBUG(log, "Found duplicate uuids locally, will retry part selection without them"); + + /// Second attempt didn't help, throw an exception + if (!select_parts(parts)) + throw Exception("Found duplicate UUIDs while processing query.", ErrorCodes::DUPLICATED_PART_UUIDS); + } +} } diff --git a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h index c3b3020ebf5..04a3be3d3f0 100644 --- a/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h +++ b/src/Storages/MergeTree/MergeTreeDataSelectExecutor.h @@ -4,6 +4,7 @@ #include #include #include +#include namespace DB @@ -113,6 +114,24 @@ private: const Settings & settings, const MergeTreeReaderSettings & reader_settings, Poco::Logger * log); + + /// Select the parts in which there can be data that satisfy `minmax_idx_condition` and that match the condition on `_part`, + /// as well as `max_block_number_to_read`. + void selectPartsToRead( + MergeTreeData::DataPartsVector & parts, + const std::unordered_set & part_values, + const std::optional & minmax_idx_condition, + std::optional & partition_pruner, + const PartitionIdToMaxBlock * max_block_numbers_to_read) const; + + /// Same as previous but also skip parts uuids if any to the query context, or skip parts which uuids marked as excluded. 
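+    /// If duplicate part UUIDs are detected, the selection is retried once with those UUIDs excluded;
+    /// a collision on the second pass throws DUPLICATED_PART_UUIDS.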
+ void selectPartsToReadWithUUIDFilter( + MergeTreeData::DataPartsVector & parts, + const std::unordered_set & part_values, + const std::optional & minmax_idx_condition, + std::optional & partition_pruner, + const PartitionIdToMaxBlock * max_block_numbers_to_read, + const Context & query_context) const; }; } diff --git a/src/Storages/MergeTree/MergeTreeSettings.h b/src/Storages/MergeTree/MergeTreeSettings.h index 713bfffde05..53388617a07 100644 --- a/src/Storages/MergeTree/MergeTreeSettings.h +++ b/src/Storages/MergeTree/MergeTreeSettings.h @@ -16,6 +16,10 @@ class ASTStorage; struct Settings; +/** These settings represent fine tunes for internal details of MergeTree storages + * and should not be changed by the user without a reason. + */ + #define LIST_OF_MERGE_TREE_SETTINGS(M) \ M(UInt64, min_compress_block_size, 0, "When granule is written, compress the data in buffer if the size of pending uncompressed data is larger or equal than the specified threshold. If this setting is not set, the corresponding global setting is used.", 0) \ M(UInt64, max_compress_block_size, 0, "Compress the pending uncompressed data in buffer if its size is larger or equal than the specified threshold. Block of data will be compressed even if the current granule is not finished. If this setting is not set, the corresponding global setting is used.", 0) \ @@ -40,7 +44,7 @@ struct Settings; M(UInt64, number_of_free_entries_in_pool_to_execute_mutation, 10, "When there is less than specified number of free entries in pool, do not execute part mutations. This is to leave free threads for regular merges and avoid \"Too many parts\"", 0) \ M(UInt64, max_number_of_merges_with_ttl_in_pool, 2, "When there is more than specified number of merges with TTL entries in pool, do not assign new merge with TTL. This is to leave free threads for regular merges and avoid \"Too many parts\"", 0) \ M(Seconds, old_parts_lifetime, 8 * 60, "How many seconds to keep obsolete parts.", 0) \ - M(Seconds, temporary_directories_lifetime, 86400, "How many seconds to keep tmp_-directories.", 0) \ + M(Seconds, temporary_directories_lifetime, 86400, "How many seconds to keep tmp_-directories. You should not lower this value because merges and mutations may not be able to work with low value of this setting.", 0) \ M(Seconds, lock_acquire_timeout_for_background_operations, DBMS_DEFAULT_LOCK_ACQUIRE_TIMEOUT_SEC, "For background operations like merges, mutations etc. 
How many seconds before failing to acquire table locks.", 0) \ M(UInt64, min_rows_to_fsync_after_merge, 0, "Minimal number of rows to do fsync for part after merge (0 - disabled)", 0) \ M(UInt64, min_compressed_bytes_to_fsync_after_merge, 0, "Minimal number of compressed bytes to do fsync for part after merge (0 - disabled)", 0) \ diff --git a/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp b/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp index 5d6b74cabe9..34cac56d74c 100644 --- a/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp +++ b/src/Storages/MergeTree/MergeTreeWhereOptimizer.cpp @@ -30,7 +30,7 @@ static constexpr auto threshold = 2; MergeTreeWhereOptimizer::MergeTreeWhereOptimizer( SelectQueryInfo & query_info, const Context & context, - const MergeTreeData & data, + std::unordered_map column_sizes_, const StorageMetadataPtr & metadata_snapshot, const Names & queried_columns_, Poco::Logger * log_) @@ -39,28 +39,20 @@ MergeTreeWhereOptimizer::MergeTreeWhereOptimizer( , queried_columns{queried_columns_} , block_with_constants{KeyCondition::getBlockWithConstants(query_info.query, query_info.syntax_analyzer_result, context)} , log{log_} + , column_sizes{std::move(column_sizes_)} { const auto & primary_key = metadata_snapshot->getPrimaryKey(); if (!primary_key.column_names.empty()) first_primary_key_column = primary_key.column_names[0]; - calculateColumnSizes(data, queried_columns); + for (const auto & [_, size] : column_sizes) + total_size_of_queried_columns += size; + determineArrayJoinedNames(query_info.query->as()); optimize(query_info.query->as()); } -void MergeTreeWhereOptimizer::calculateColumnSizes(const MergeTreeData & data, const Names & column_names) -{ - for (const auto & column_name : column_names) - { - UInt64 size = data.getColumnCompressedSize(column_name); - column_sizes[column_name] = size; - total_size_of_queried_columns += size; - } -} - - static void collectIdentifiersNoSubqueries(const ASTPtr & ast, NameSet & set) { if (auto opt_name = tryGetIdentifierName(ast)) diff --git a/src/Storages/MergeTree/MergeTreeWhereOptimizer.h b/src/Storages/MergeTree/MergeTreeWhereOptimizer.h index 939c265b3e5..cad77fb9eed 100644 --- a/src/Storages/MergeTree/MergeTreeWhereOptimizer.h +++ b/src/Storages/MergeTree/MergeTreeWhereOptimizer.h @@ -33,7 +33,7 @@ public: MergeTreeWhereOptimizer( SelectQueryInfo & query_info, const Context & context, - const MergeTreeData & data, + std::unordered_map column_sizes_, const StorageMetadataPtr & metadata_snapshot, const Names & queried_columns_, Poco::Logger * log_); @@ -75,8 +75,6 @@ private: /// Transform Conditions list to WHERE or PREWHERE expression. 
static ASTPtr reconstruct(const Conditions & conditions); - void calculateColumnSizes(const MergeTreeData & data, const Names & column_names); - void optimizeConjunction(ASTSelectQuery & select, ASTFunction * const fun) const; void optimizeArbitrary(ASTSelectQuery & select) const; diff --git a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp index 289f0c61b7d..d239586bb65 100644 --- a/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp +++ b/src/Storages/RabbitMQ/RabbitMQBlockOutputStream.cpp @@ -8,12 +8,6 @@ namespace DB { -namespace ErrorCodes -{ - extern const int CANNOT_CREATE_IO_BUFFER; -} - - RabbitMQBlockOutputStream::RabbitMQBlockOutputStream( StorageRabbitMQ & storage_, const StorageMetadataPtr & metadata_snapshot_, @@ -37,9 +31,6 @@ void RabbitMQBlockOutputStream::writePrefix() storage.unbindExchange(); buffer = storage.createWriteBuffer(); - if (!buffer) - throw Exception("Failed to create RabbitMQ producer!", ErrorCodes::CANNOT_CREATE_IO_BUFFER); - buffer->activateWriting(); auto format_settings = getFormatSettings(context); diff --git a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp index ac94659d321..08b95d46115 100644 --- a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp +++ b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.cpp @@ -55,7 +55,6 @@ WriteBufferToRabbitMQProducer::WriteBufferToRabbitMQProducer( , max_rows(rows_per_message) , chunk_size(chunk_size_) { - loop = std::make_unique(); uv_loop_init(loop.get()); event_handler = std::make_unique(loop.get(), log); @@ -85,6 +84,8 @@ WriteBufferToRabbitMQProducer::WriteBufferToRabbitMQProducer( key_arguments[matching[0]] = matching[1]; } } + + reinitializeChunks(); } @@ -122,9 +123,7 @@ void WriteBufferToRabbitMQProducer::countRow() payload.append(last_chunk, 0, last_chunk_size); - rows = 0; - chunks.clear(); - set(nullptr, 0); + reinitializeChunks(); ++payload_counter; payloads.push(std::make_pair(payload_counter, payload)); @@ -321,17 +320,32 @@ void WriteBufferToRabbitMQProducer::writingFunc() setupChannel(); } - LOG_DEBUG(log, "Prodcuer on channel {} completed", channel_id); + LOG_DEBUG(log, "Producer on channel {} completed", channel_id); } void WriteBufferToRabbitMQProducer::nextImpl() +{ + addChunk(); +} + +void WriteBufferToRabbitMQProducer::addChunk() { chunks.push_back(std::string()); chunks.back().resize(chunk_size); set(chunks.back().data(), chunk_size); } +void WriteBufferToRabbitMQProducer::reinitializeChunks() +{ + rows = 0; + chunks.clear(); + /// We cannot leave the buffer in the undefined state (i.e. 
without any + /// underlying buffer), since in this case the WriteBuffeR::next() will + /// not call our nextImpl() (due to available() == 0) + addChunk(); +} + void WriteBufferToRabbitMQProducer::iterateEventLoop() { diff --git a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h index 6fa4ca9587f..2897e20b21d 100644 --- a/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h +++ b/src/Storages/RabbitMQ/WriteBufferToRabbitMQProducer.h @@ -41,6 +41,9 @@ public: private: void nextImpl() override; + void addChunk(); + void reinitializeChunks(); + void iterateEventLoop(); void writingFunc(); bool setupConnection(bool reconnecting); diff --git a/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp b/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp index 249026d1011..d7456966467 100644 --- a/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp +++ b/src/Storages/RocksDB/StorageEmbeddedRocksDB.cpp @@ -24,6 +24,7 @@ #include #include #include +#include #include #include @@ -44,9 +45,12 @@ namespace ErrorCodes extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; } +using FieldVectorPtr = std::shared_ptr; + // returns keys may be filter by condition -static bool traverseASTFilter(const String & primary_key, const DataTypePtr & primary_key_type, const ASTPtr & elem, const PreparedSets & sets, FieldVector & res) +static bool traverseASTFilter( + const String & primary_key, const DataTypePtr & primary_key_type, const ASTPtr & elem, const PreparedSets & sets, FieldVectorPtr & res) { const auto * function = elem->as(); if (!function) @@ -63,13 +67,9 @@ static bool traverseASTFilter(const String & primary_key, const DataTypePtr & pr else if (function->name == "or") { // make sure every child has the key filter condition - FieldVector child_res; for (const auto & child : function->arguments->children) - { - if (!traverseASTFilter(primary_key, primary_key_type, child, sets, child_res)) + if (!traverseASTFilter(primary_key, primary_key_type, child, sets, res)) return false; - } - res.insert(res.end(), child_res.begin(), child_res.end()); return true; } else if (function->name == "equals" || function->name == "in") @@ -108,9 +108,7 @@ static bool traverseASTFilter(const String & primary_key, const DataTypePtr & pr prepared_set->checkColumnsNumber(1); const auto & set_column = *prepared_set->getSetElements()[0]; for (size_t row = 0; row < set_column.size(); ++row) - { - res.push_back(set_column[row]); - } + res->push_back(set_column[row]); return true; } else @@ -125,10 +123,12 @@ static bool traverseASTFilter(const String & primary_key, const DataTypePtr & pr if (ident->name() != primary_key) return false; - //function->name == "equals" + /// function->name == "equals" if (const auto * literal = value->as()) { - res.push_back(literal->value); + auto converted_field = convertFieldToType(literal->value, *primary_key_type); + if (!converted_field.isNull()) + res->push_back(converted_field); return true; } } @@ -140,14 +140,14 @@ static bool traverseASTFilter(const String & primary_key, const DataTypePtr & pr /** Retrieve from the query a condition of the form `key = 'key'`, `key in ('xxx_'), from conjunctions in the WHERE clause. 
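+  * e.g. `SELECT * FROM rocksdb_table WHERE key IN ('k1', 'k2')` (table and key names are illustrative)
+  * yields the key list that is later fetched with MultiGet.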
* TODO support key like search */ -static std::pair getFilterKeys(const String & primary_key, const DataTypePtr & primary_key_type, const SelectQueryInfo & query_info) +static std::pair getFilterKeys( + const String & primary_key, const DataTypePtr & primary_key_type, const SelectQueryInfo & query_info) { const auto & select = query_info.query->as(); if (!select.where()) - { - return std::make_pair(FieldVector{}, true); - } - FieldVector res; + return {{}, true}; + + FieldVectorPtr res = std::make_shared(); auto matched_keys = traverseASTFilter(primary_key, primary_key_type, select.where(), query_info.sets, res); return std::make_pair(res, !matched_keys); } @@ -159,23 +159,19 @@ public: EmbeddedRocksDBSource( const StorageEmbeddedRocksDB & storage_, const StorageMetadataPtr & metadata_snapshot_, - const FieldVector & keys_, - const size_t start_, - const size_t end_, + FieldVectorPtr keys_, + FieldVector::const_iterator begin_, + FieldVector::const_iterator end_, const size_t max_block_size_) : SourceWithProgress(metadata_snapshot_->getSampleBlock()) , storage(storage_) , metadata_snapshot(metadata_snapshot_) - , start(start_) + , keys(std::move(keys_)) + , begin(begin_) , end(end_) + , it(begin) , max_block_size(max_block_size_) { - // slice the keys - if (end > start) - { - keys.resize(end - start); - std::copy(keys_.begin() + start, keys_.begin() + end, keys.begin()); - } } String getName() const override @@ -185,27 +181,34 @@ public: Chunk generate() override { - if (processed_keys >= keys.size() || (start == end)) + if (it >= end) return {}; - std::vector slices_keys; - slices_keys.reserve(keys.size()); - std::vector values; - std::vector wbs(keys.size()); + size_t num_keys = end - begin; + + std::vector serialized_keys(num_keys); + std::vector slices_keys(num_keys); const auto & sample_block = metadata_snapshot->getSampleBlock(); const auto & key_column = sample_block.getByName(storage.primary_key); auto columns = sample_block.cloneEmptyColumns(); size_t primary_key_pos = sample_block.getPositionByName(storage.primary_key); - for (size_t i = processed_keys; i < std::min(keys.size(), processed_keys + max_block_size); ++i) + size_t rows_processed = 0; + while (it < end && rows_processed < max_block_size) { - key_column.type->serializeBinary(keys[i], wbs[i]); - auto str_ref = wbs[i].stringRef(); - slices_keys.emplace_back(str_ref.data, str_ref.size); + WriteBufferFromString wb(serialized_keys[rows_processed]); + key_column.type->serializeBinary(*it, wb); + wb.finalize(); + slices_keys[rows_processed] = std::move(serialized_keys[rows_processed]); + + ++it; + ++rows_processed; } + std::vector values; auto statuses = storage.rocksdb_ptr->MultiGet(rocksdb::ReadOptions(), slices_keys, &values); + for (size_t i = 0; i < statuses.size(); ++i) { if (statuses[i].ok()) @@ -221,7 +224,6 @@ public: } } } - processed_keys += max_block_size; UInt64 num_rows = columns.at(0)->size(); return Chunk(std::move(columns), num_rows); @@ -231,12 +233,11 @@ private: const StorageEmbeddedRocksDB & storage; const StorageMetadataPtr metadata_snapshot; - const size_t start; - const size_t end; + FieldVectorPtr keys; + FieldVector::const_iterator begin; + FieldVector::const_iterator end; + FieldVector::const_iterator it; const size_t max_block_size; - FieldVector keys; - - size_t processed_keys = 0; }; @@ -289,7 +290,8 @@ Pipe StorageEmbeddedRocksDB::read( unsigned num_streams) { metadata_snapshot->check(column_names, getVirtuals(), getStorageID()); - FieldVector keys; + + FieldVectorPtr keys; bool all_scan = false; 
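+    /// all_scan is set by getFilterKeys() when the WHERE clause cannot be reduced to equality/IN
+    /// conditions on the primary key; in that case the storage presumably falls back to reading all keys.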
auto primary_key_data_type = metadata_snapshot->getSampleBlock().getByName(primary_key).type; @@ -302,37 +304,34 @@ Pipe StorageEmbeddedRocksDB::read( } else { - if (keys.empty()) + if (keys->empty()) return {}; - std::sort(keys.begin(), keys.end()); - auto unique_iter = std::unique(keys.begin(), keys.end()); - if (unique_iter != keys.end()) - keys.erase(unique_iter, keys.end()); + std::sort(keys->begin(), keys->end()); + keys->erase(std::unique(keys->begin(), keys->end()), keys->end()); Pipes pipes; - size_t start = 0; - size_t end; - const size_t num_threads = std::min(size_t(num_streams), keys.size()); - const size_t batch_per_size = ceil(keys.size() * 1.0 / num_threads); + size_t num_keys = keys->size(); + size_t num_threads = std::min(size_t(num_streams), keys->size()); - for (size_t t = 0; t < num_threads; ++t) + assert(num_keys <= std::numeric_limits::max()); + assert(num_threads <= std::numeric_limits::max()); + + for (size_t thread_idx = 0; thread_idx < num_threads; ++thread_idx) { - if (start >= keys.size()) - start = end = 0; - else - end = start + batch_per_size > keys.size() ? keys.size() : start + batch_per_size; + size_t begin = num_keys * thread_idx / num_threads; + size_t end = num_keys * (thread_idx + 1) / num_threads; - pipes.emplace_back( - std::make_shared(*this, metadata_snapshot, keys, start, end, max_block_size)); - start += batch_per_size; + pipes.emplace_back(std::make_shared( + *this, metadata_snapshot, keys, keys->begin() + begin, keys->begin() + end, max_block_size)); } return Pipe::unitePipes(std::move(pipes)); } } -BlockOutputStreamPtr StorageEmbeddedRocksDB::write(const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) +BlockOutputStreamPtr StorageEmbeddedRocksDB::write( + const ASTPtr & /*query*/, const StorageMetadataPtr & metadata_snapshot, const Context & /*context*/) { return std::make_shared(*this, metadata_snapshot); } diff --git a/src/Storages/StorageDistributed.cpp b/src/Storages/StorageDistributed.cpp index 5227cd8a33e..02ee70dc8f4 100644 --- a/src/Storages/StorageDistributed.cpp +++ b/src/Storages/StorageDistributed.cpp @@ -4,6 +4,7 @@ #include #include +#include #include #include @@ -82,6 +83,7 @@ namespace ErrorCodes extern const int TYPE_MISMATCH; extern const int TOO_MANY_ROWS; extern const int UNABLE_TO_SKIP_UNUSED_SHARDS; + extern const int INVALID_SHARD_ID; } namespace ActionLocks @@ -345,6 +347,7 @@ NamesAndTypesList StorageDistributed::getVirtuals() const NameAndTypePair("_table", std::make_shared()), NameAndTypePair("_part", std::make_shared()), NameAndTypePair("_part_index", std::make_shared()), + NameAndTypePair("_part_uuid", std::make_shared()), NameAndTypePair("_partition_id", std::make_shared()), NameAndTypePair("_sample_factor", std::make_shared()), NameAndTypePair("_shard_num", std::make_shared()), @@ -540,22 +543,29 @@ BlockOutputStreamPtr StorageDistributed::write(const ASTPtr &, const StorageMeta const auto & settings = context.getSettingsRef(); /// Ban an attempt to make async insert into the table belonging to DatabaseMemory - if (!storage_policy && !owned_cluster && !settings.insert_distributed_sync) + if (!storage_policy && !owned_cluster && !settings.insert_distributed_sync && !settings.insert_shard_id) { throw Exception("Storage " + getName() + " must have own data directory to enable asynchronous inserts", ErrorCodes::BAD_ARGUMENTS); } + auto shard_num = cluster->getLocalShardCount() + cluster->getRemoteShardCount(); + /// If sharding key is not specified, then you can only 
write to a shard containing only one shard - if (!settings.insert_distributed_one_random_shard && !has_sharding_key - && ((cluster->getLocalShardCount() + cluster->getRemoteShardCount()) >= 2)) + if (!settings.insert_shard_id && !settings.insert_distributed_one_random_shard && !has_sharding_key && shard_num >= 2) { - throw Exception("Method write is not supported by storage " + getName() + " with more than one shard and no sharding key provided", - ErrorCodes::STORAGE_REQUIRES_PARAMETER); + throw Exception( + "Method write is not supported by storage " + getName() + " with more than one shard and no sharding key provided", + ErrorCodes::STORAGE_REQUIRES_PARAMETER); + } + + if (settings.insert_shard_id && settings.insert_shard_id > shard_num) + { + throw Exception("Shard id should be range from 1 to shard number", ErrorCodes::INVALID_SHARD_ID); } /// Force sync insertion if it is remote() table function - bool insert_sync = settings.insert_distributed_sync || owned_cluster; + bool insert_sync = settings.insert_distributed_sync || settings.insert_shard_id || owned_cluster; auto timeout = settings.insert_distributed_timeout; /// DistributedBlockOutputStream will not own cluster, but will own ConnectionPools of the cluster diff --git a/src/Storages/StorageMaterializeMySQL.cpp b/src/Storages/StorageMaterializeMySQL.cpp index 721221e3fdc..e59f1e22958 100644 --- a/src/Storages/StorageMaterializeMySQL.cpp +++ b/src/Storages/StorageMaterializeMySQL.cpp @@ -30,9 +30,8 @@ namespace DB StorageMaterializeMySQL::StorageMaterializeMySQL(const StoragePtr & nested_storage_, const IDatabase * database_) : StorageProxy(nested_storage_->getStorageID()), nested_storage(nested_storage_), database(database_) { - auto nested_memory_metadata = nested_storage->getInMemoryMetadata(); StorageInMemoryMetadata in_memory_metadata; - in_memory_metadata.setColumns(nested_memory_metadata.getColumns()); + in_memory_metadata = nested_storage->getInMemoryMetadata(); setInMemoryMetadata(in_memory_metadata); } diff --git a/src/Storages/StorageReplicatedMergeTree.cpp b/src/Storages/StorageReplicatedMergeTree.cpp index 69cbe0d7062..53104efeb43 100644 --- a/src/Storages/StorageReplicatedMergeTree.cpp +++ b/src/Storages/StorageReplicatedMergeTree.cpp @@ -3699,7 +3699,7 @@ void StorageReplicatedMergeTree::shutdown() /// We clear all old parts after stopping all background operations. It's /// important, because background operations can produce temporary parts - /// which will remove themselves in their descrutors. If so, we may have + /// which will remove themselves in their destructors. If so, we may have /// race condition between our remove call and background process. 
clearOldPartsFromFilesystem(true); } diff --git a/src/Storages/System/StorageSystemDisks.cpp b/src/Storages/System/StorageSystemDisks.cpp index fbbee51e34e..b04d24cc705 100644 --- a/src/Storages/System/StorageSystemDisks.cpp +++ b/src/Storages/System/StorageSystemDisks.cpp @@ -51,7 +51,7 @@ Pipe StorageSystemDisks::read( col_free->insert(disk_ptr->getAvailableSpace()); col_total->insert(disk_ptr->getTotalSpace()); col_keep->insert(disk_ptr->getKeepingFreeSpace()); - col_type->insert(disk_ptr->getType()); + col_type->insert(DiskType::toString(disk_ptr->getType())); } Columns res_columns; diff --git a/src/Storages/System/StorageSystemZooKeeper.cpp b/src/Storages/System/StorageSystemZooKeeper.cpp index 287650ef86c..8fa5ccbd630 100644 --- a/src/Storages/System/StorageSystemZooKeeper.cpp +++ b/src/Storages/System/StorageSystemZooKeeper.cpp @@ -12,6 +12,9 @@ #include #include #include +#include +#include +#include namespace DB @@ -43,8 +46,24 @@ NamesAndTypesList StorageSystemZooKeeper::getNamesAndTypes() }; } +using Paths = Strings; -static bool extractPathImpl(const IAST & elem, String & res, const Context & context) +static String pathCorrected(const String & path) +{ + String path_corrected; + /// path should starts with '/', otherwise ZBADARGUMENTS will be thrown in + /// ZooKeeper::sendThread and the session will fail. + if (path[0] != '/') + path_corrected = '/'; + path_corrected += path; + /// In all cases except the root, path must not end with a slash. + if (path_corrected != "/" && path_corrected.back() == '/') + path_corrected.resize(path_corrected.size() - 1); + return path_corrected; +} + + +static bool extractPathImpl(const IAST & elem, Paths & res, const Context & context) { const auto * function = elem.as(); if (!function) @@ -59,15 +78,65 @@ static bool extractPathImpl(const IAST & elem, String & res, const Context & con return false; } - if (function->name == "equals") - { - const auto & args = function->arguments->as(); - ASTPtr value; + const auto & args = function->arguments->as(); + if (args.children.size() != 2) + return false; - if (args.children.size() != 2) + if (function->name == "in") + { + const ASTIdentifier * ident = args.children.at(0)->as(); + if (!ident || ident->name() != "path") return false; + ASTPtr value = args.children.at(1); + + if (value->as()) + { + auto interpreter_subquery = interpretSubquery(value, context, {}, {}); + auto stream = interpreter_subquery->execute().getInputStream(); + SizeLimits limites(context.getSettingsRef().max_rows_in_set, context.getSettingsRef().max_bytes_in_set, OverflowMode::THROW); + Set set(limites, true, context.getSettingsRef().transform_null_in); + set.setHeader(stream->getHeader()); + + stream->readPrefix(); + while (Block block = stream->read()) + { + set.insertFromBlock(block); + } + set.finishInsert(); + stream->readSuffix(); + + set.checkColumnsNumber(1); + const auto & set_column = *set.getSetElements()[0]; + for (size_t row = 0; row < set_column.size(); ++row) + res.emplace_back(set_column[row].safeGet()); + } + else + { + auto evaluated = evaluateConstantExpressionAsLiteral(value, context); + const auto * literal = evaluated->as(); + if (!literal) + return false; + + if (String str; literal->value.tryGet(str)) + { + res.emplace_back(str); + } + else if (Tuple tuple; literal->value.tryGet(tuple)) + { + for (auto element : tuple) + res.emplace_back(element.safeGet()); + } + else + return false; + } + + return true; + } + else if (function->name == "equals") + { const ASTIdentifier * ident; + ASTPtr value; if 
((ident = args.children.at(0)->as())) value = args.children.at(1); else if ((ident = args.children.at(1)->as())) @@ -86,7 +155,7 @@ static bool extractPathImpl(const IAST & elem, String & res, const Context & con if (literal->value.getType() != Field::Types::String) return false; - res = literal->value.safeGet(); + res.emplace_back(literal->value.safeGet()); return true; } @@ -96,69 +165,69 @@ static bool extractPathImpl(const IAST & elem, String & res, const Context & con /** Retrieve from the query a condition of the form `path = 'path'`, from conjunctions in the WHERE clause. */ -static String extractPath(const ASTPtr & query, const Context & context) +static Paths extractPath(const ASTPtr & query, const Context & context) { const auto & select = query->as(); if (!select.where()) - return ""; + return Paths(); - String res; - return extractPathImpl(*select.where(), res, context) ? res : ""; + Paths res; + return extractPathImpl(*select.where(), res, context) ? res : Paths(); } void StorageSystemZooKeeper::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo & query_info) const { - String path = extractPath(query_info.query, context); - if (path.empty()) - throw Exception("SELECT from system.zookeeper table must contain condition like path = 'path' in WHERE clause.", ErrorCodes::BAD_ARGUMENTS); + const Paths & paths = extractPath(query_info.query, context); + if (paths.empty()) + throw Exception("SELECT from system.zookeeper table must contain condition like path = 'path' or path IN ('path1','path2'...) or path IN (subquery) in WHERE clause.", ErrorCodes::BAD_ARGUMENTS); zkutil::ZooKeeperPtr zookeeper = context.getZooKeeper(); - String path_corrected; - /// path should starts with '/', otherwise ZBADARGUMENTS will be thrown in - /// ZooKeeper::sendThread and the session will fail. - if (path[0] != '/') - path_corrected = '/'; - path_corrected += path; - /// In all cases except the root, path must not end with a slash. - if (path_corrected != "/" && path_corrected.back() == '/') - path_corrected.resize(path_corrected.size() - 1); - - zkutil::Strings nodes = zookeeper->getChildren(path_corrected); - - String path_part = path_corrected; - if (path_part == "/") - path_part.clear(); - - std::vector> futures; - futures.reserve(nodes.size()); - for (const String & node : nodes) - futures.push_back(zookeeper->asyncTryGet(path_part + '/' + node)); - - for (size_t i = 0, size = nodes.size(); i < size; ++i) + std::unordered_set paths_corrected; + for (const auto & path : paths) { - auto res = futures[i].get(); - if (res.error == Coordination::Error::ZNONODE) - continue; /// Node was deleted meanwhile. + const String & path_corrected = pathCorrected(path); + auto [it, inserted] = paths_corrected.emplace(path_corrected); + if (!inserted) /// Do not repeat processing. 
+ continue; - const Coordination::Stat & stat = res.stat; + zkutil::Strings nodes = zookeeper->getChildren(path_corrected); - size_t col_num = 0; - res_columns[col_num++]->insert(nodes[i]); - res_columns[col_num++]->insert(res.data); - res_columns[col_num++]->insert(stat.czxid); - res_columns[col_num++]->insert(stat.mzxid); - res_columns[col_num++]->insert(UInt64(stat.ctime / 1000)); - res_columns[col_num++]->insert(UInt64(stat.mtime / 1000)); - res_columns[col_num++]->insert(stat.version); - res_columns[col_num++]->insert(stat.cversion); - res_columns[col_num++]->insert(stat.aversion); - res_columns[col_num++]->insert(stat.ephemeralOwner); - res_columns[col_num++]->insert(stat.dataLength); - res_columns[col_num++]->insert(stat.numChildren); - res_columns[col_num++]->insert(stat.pzxid); - res_columns[col_num++]->insert(path); /// This is the original path. In order to process the request, condition in WHERE should be triggered. + String path_part = path_corrected; + if (path_part == "/") + path_part.clear(); + + std::vector> futures; + futures.reserve(nodes.size()); + for (const String & node : nodes) + futures.push_back(zookeeper->asyncTryGet(path_part + '/' + node)); + + for (size_t i = 0, size = nodes.size(); i < size; ++i) + { + auto res = futures[i].get(); + if (res.error == Coordination::Error::ZNONODE) + continue; /// Node was deleted meanwhile. + + const Coordination::Stat & stat = res.stat; + + size_t col_num = 0; + res_columns[col_num++]->insert(nodes[i]); + res_columns[col_num++]->insert(res.data); + res_columns[col_num++]->insert(stat.czxid); + res_columns[col_num++]->insert(stat.mzxid); + res_columns[col_num++]->insert(UInt64(stat.ctime / 1000)); + res_columns[col_num++]->insert(UInt64(stat.mtime / 1000)); + res_columns[col_num++]->insert(stat.version); + res_columns[col_num++]->insert(stat.cversion); + res_columns[col_num++]->insert(stat.aversion); + res_columns[col_num++]->insert(stat.ephemeralOwner); + res_columns[col_num++]->insert(stat.dataLength); + res_columns[col_num++]->insert(stat.numChildren); + res_columns[col_num++]->insert(stat.pzxid); + res_columns[col_num++]->insert( + path); /// This is the original path. In order to process the request, condition in WHERE should be triggered. + } } } diff --git a/src/Storages/tests/CMakeLists.txt b/src/Storages/tests/CMakeLists.txt index 292f7603838..b58fed9edf5 100644 --- a/src/Storages/tests/CMakeLists.txt +++ b/src/Storages/tests/CMakeLists.txt @@ -29,4 +29,7 @@ target_link_libraries (transform_part_zk_nodes if (ENABLE_FUZZING) add_executable (mergetree_checksum_fuzzer mergetree_checksum_fuzzer.cpp) target_link_libraries (mergetree_checksum_fuzzer PRIVATE dbms ${LIB_FUZZING_ENGINE}) + + add_executable (columns_description_fuzzer columns_description_fuzzer.cpp) + target_link_libraries (columns_description_fuzzer PRIVATE dbms ${LIB_FUZZING_ENGINE}) endif () diff --git a/src/Storages/tests/columns_description_fuzzer.cpp b/src/Storages/tests/columns_description_fuzzer.cpp new file mode 100644 index 00000000000..44fd667ff1c --- /dev/null +++ b/src/Storages/tests/columns_description_fuzzer.cpp @@ -0,0 +1,15 @@ +#include + + +extern "C" int LLVMFuzzerTestOneInput(const uint8_t * data, size_t size) +try +{ + using namespace DB; + ColumnsDescription columns = ColumnsDescription::parse(std::string(reinterpret_cast(data), size)); + std::cerr << columns.toString() << "\n"; + return 0; +} +catch (...) 
+{ + return 1; +} diff --git a/src/Storages/ya.make b/src/Storages/ya.make index 69e319cbad5..dbf37e58695 100644 --- a/src/Storages/ya.make +++ b/src/Storages/ya.make @@ -48,6 +48,7 @@ SRCS( MergeTree/MergeTreeDataPartInMemory.cpp MergeTree/MergeTreeDataPartTTLInfo.cpp MergeTree/MergeTreeDataPartType.cpp + MergeTree/MergeTreeDataPartUUID.cpp MergeTree/MergeTreeDataPartWide.cpp MergeTree/MergeTreeDataPartWriterCompact.cpp MergeTree/MergeTreeDataPartWriterInMemory.cpp diff --git a/tests/clickhouse-test b/tests/clickhouse-test index 0c49a3670a0..74f5f07eb9d 100755 --- a/tests/clickhouse-test +++ b/tests/clickhouse-test @@ -428,15 +428,23 @@ def run_tests_array(all_tests_with_params): status += print_test_time(total_time) status += " - result differs with reference:\n{}\n".format(diff) else: - passed_total += 1 - failures_chain = 0 - status += MSG_OK - status += print_test_time(total_time) - status += "\n" - if os.path.exists(stdout_file): - os.remove(stdout_file) - if os.path.exists(stderr_file): - os.remove(stderr_file) + if args.test_runs > 1 and total_time > 30 and 'long' not in name: + # We're in Flaky Check mode, check the run time as well while we're at it. + failures += 1 + failures_chain += 1 + status += MSG_FAIL + status += print_test_time(total_time) + status += " - Long test not marked as 'long'" + else: + passed_total += 1 + failures_chain = 0 + status += MSG_OK + status += print_test_time(total_time) + status += "\n" + if os.path.exists(stdout_file): + os.remove(stdout_file) + if os.path.exists(stderr_file): + os.remove(stderr_file) if status and not status.endswith('\n'): status += '\n' diff --git a/tests/integration/helpers/cluster.py b/tests/integration/helpers/cluster.py index a04c1b7bf7d..14aa2f252c5 100644 --- a/tests/integration/helpers/cluster.py +++ b/tests/integration/helpers/cluster.py @@ -154,6 +154,7 @@ class ClickHouseCluster: self.minio_certs_dir = None self.minio_host = "minio1" self.minio_bucket = "root" + self.minio_bucket_2 = "root2" self.minio_port = 9001 self.minio_client = None # type: Minio self.minio_redirect_host = "proxy1" @@ -555,17 +556,18 @@ class ClickHouseCluster: print("Connected to Minio.") - if minio_client.bucket_exists(self.minio_bucket): - minio_client.remove_bucket(self.minio_bucket) + buckets = [self.minio_bucket, self.minio_bucket_2] - minio_client.make_bucket(self.minio_bucket) - - print(("S3 bucket '%s' created", self.minio_bucket)) + for bucket in buckets: + if minio_client.bucket_exists(bucket): + minio_client.remove_bucket(bucket) + minio_client.make_bucket(bucket) + print("S3 bucket '%s' created", bucket) self.minio_client = minio_client return except Exception as ex: - print(("Can't connect to Minio: %s", str(ex))) + print("Can't connect to Minio: %s", str(ex)) time.sleep(1) raise Exception("Can't wait Minio to start") @@ -1048,32 +1050,25 @@ class ClickHouseInstance: return self.http_query(sql=sql, data=data, params=params, user=user, password=password, expect_fail_and_get_error=True) - def kill_clickhouse(self, stop_start_wait_sec=5): - pid = self.get_process_pid("clickhouse") - if not pid: - raise Exception("No clickhouse found") - self.exec_in_container(["bash", "-c", "kill -9 {}".format(pid)], user='root') - time.sleep(stop_start_wait_sec) - - def restore_clickhouse(self, retries=100): - pid = self.get_process_pid("clickhouse") - if pid: - raise Exception("ClickHouse has already started") - self.exec_in_container(["bash", "-c", "{} --daemon".format(CLICKHOUSE_START_COMMAND)], user=str(os.getuid())) - from helpers.test_tools 
import assert_eq_with_retry - # wait start - assert_eq_with_retry(self, "select 1", "1", retry_count=retries) - - def restart_clickhouse(self, stop_start_wait_sec=5, kill=False): + def stop_clickhouse(self, start_wait_sec=5, kill=False): if not self.stay_alive: - raise Exception("clickhouse can be restarted only with stay_alive=True instance") + raise Exception("clickhouse can be stopped only with stay_alive=True instance") self.exec_in_container(["bash", "-c", "pkill {} clickhouse".format("-9" if kill else "")], user='root') - time.sleep(stop_start_wait_sec) + time.sleep(start_wait_sec) + + def start_clickhouse(self, stop_wait_sec=5): + if not self.stay_alive: + raise Exception("clickhouse can be started again only with stay_alive=True instance") + self.exec_in_container(["bash", "-c", "{} --daemon".format(CLICKHOUSE_START_COMMAND)], user=str(os.getuid())) # wait start from helpers.test_tools import assert_eq_with_retry - assert_eq_with_retry(self, "select 1", "1", retry_count=int(stop_start_wait_sec / 0.5), sleep_time=0.5) + assert_eq_with_retry(self, "select 1", "1", retry_count=int(stop_wait_sec / 0.5), sleep_time=0.5) + + def restart_clickhouse(self, stop_start_wait_sec=5, kill=False): + self.stop_clickhouse(stop_start_wait_sec, kill) + self.start_clickhouse(stop_start_wait_sec) def exec_in_container(self, cmd, detach=False, nothrow=False, **kwargs): container_id = self.get_docker_handle().id @@ -1411,7 +1406,7 @@ class ClickHouseKiller(object): self.clickhouse_node = clickhouse_node def __enter__(self): - self.clickhouse_node.kill_clickhouse() + self.clickhouse_node.stop_clickhouse(kill=True) def __exit__(self, exc_type, exc_val, exc_tb): - self.clickhouse_node.restore_clickhouse() + self.clickhouse_node.start_clickhouse() diff --git a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py index 38ff8fd752b..c9be2387fc7 100644 --- a/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py +++ b/tests/integration/test_materialize_mysql_database/materialize_with_ddl.py @@ -757,3 +757,26 @@ def system_parts_test(clickhouse_node, mysql_node, service_name): check_active_parts(2) clickhouse_node.query("OPTIMIZE TABLE system_parts_test.test") check_active_parts(1) + +def multi_table_update_test(clickhouse_node, mysql_node, service_name): + mysql_node.query("DROP DATABASE IF EXISTS multi_table_update") + clickhouse_node.query("DROP DATABASE IF EXISTS multi_table_update") + mysql_node.query("CREATE DATABASE multi_table_update") + mysql_node.query("CREATE TABLE multi_table_update.a (id INT(11) NOT NULL PRIMARY KEY, value VARCHAR(255))") + mysql_node.query("CREATE TABLE multi_table_update.b (id INT(11) NOT NULL PRIMARY KEY, othervalue VARCHAR(255))") + mysql_node.query("INSERT INTO multi_table_update.a VALUES(1, 'foo')") + mysql_node.query("INSERT INTO multi_table_update.b VALUES(1, 'bar')") + clickhouse_node.query("CREATE DATABASE multi_table_update ENGINE = MaterializeMySQL('{}:3306', 'multi_table_update', 'root', 'clickhouse')".format(service_name)) + check_query(clickhouse_node, "SHOW TABLES FROM multi_table_update", "a\nb\n") + mysql_node.query("UPDATE multi_table_update.a, multi_table_update.b SET value='baz', othervalue='quux' where a.id=b.id") + + check_query(clickhouse_node, "SELECT * FROM multi_table_update.a", "1\tbaz\n") + check_query(clickhouse_node, "SELECT * FROM multi_table_update.b", "1\tquux\n") + +def system_tables_test(clickhouse_node, mysql_node, service_name): + 
mysql_node.query("DROP DATABASE IF EXISTS system_tables_test") + clickhouse_node.query("DROP DATABASE IF EXISTS system_tables_test") + mysql_node.query("CREATE DATABASE system_tables_test") + mysql_node.query("CREATE TABLE system_tables_test.test (id int NOT NULL PRIMARY KEY) ENGINE=InnoDB") + clickhouse_node.query("CREATE DATABASE system_tables_test ENGINE = MaterializeMySQL('{}:3306', 'system_tables_test', 'root', 'clickhouse')".format(service_name)) + check_query(clickhouse_node, "SELECT partition_key, sorting_key, primary_key FROM system.tables WHERE database = 'system_tables_test' AND name = 'test'", "intDiv(id, 4294967)\tid\tid\n") diff --git a/tests/integration/test_materialize_mysql_database/test.py b/tests/integration/test_materialize_mysql_database/test.py index 8cd2f7def07..e55772d9e1d 100644 --- a/tests/integration/test_materialize_mysql_database/test.py +++ b/tests/integration/test_materialize_mysql_database/test.py @@ -237,3 +237,12 @@ def test_utf8mb4(started_cluster, started_mysql_8_0, started_mysql_5_7, clickhou @pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_ordinary]) def test_system_parts_table(started_cluster, started_mysql_8_0, clickhouse_node): materialize_with_ddl.system_parts_test(clickhouse_node, started_mysql_8_0, "mysql8_0") + +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_ordinary]) +def test_multi_table_update(started_cluster, started_mysql_8_0, started_mysql_5_7, clickhouse_node): + materialize_with_ddl.multi_table_update_test(clickhouse_node, started_mysql_5_7, "mysql1") + materialize_with_ddl.multi_table_update_test(clickhouse_node, started_mysql_8_0, "mysql8_0") + +@pytest.mark.parametrize(('clickhouse_node'), [node_db_ordinary, node_db_ordinary]) +def test_system_tables_table(started_cluster, started_mysql_8_0, clickhouse_node): + materialize_with_ddl.system_tables_test(clickhouse_node, started_mysql_8_0, "mysql8_0") diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.d/bg_processing_pool_conf.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.d/bg_processing_pool_conf.xml new file mode 100644 index 00000000000..a756c4434ea --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.d/bg_processing_pool_conf.xml @@ -0,0 +1,5 @@ + + 0.5 + 0.5 + 0.5 + diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.d/log_conf.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.d/log_conf.xml new file mode 100644 index 00000000000..318a6bca95d --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.d/log_conf.xml @@ -0,0 +1,12 @@ + + 3 + + trace + /var/log/clickhouse-server/log.log + /var/log/clickhouse-server/log.err.log + 1000M + 10 + /var/log/clickhouse-server/stderr.log + /var/log/clickhouse-server/stdout.log + + diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf.xml new file mode 100644 index 00000000000..9361a21efca --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf.xml @@ -0,0 +1,34 @@ + + + + + s3 + http://minio1:9001/root/data/ + minio + minio123 + true + 1 + + + local + / + + + + + +
+ s3 +
+ + hdd + +
+
+
+
+ + + 0 + +
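Note on the restore flow exercised by the new `test_merge_tree_s3_restore/test.py` further down in this patch: a node is stopped, a plain-text `restore` file is appended line by line (revision, then optionally the source bucket and path) under the S3 disk directory, and the node is started again so it rebuilds its local metadata from the bucket. A minimal local sketch of that file format, mirroring what the test's `create_restore_file` helper does through `exec_in_container`; writing the file directly and the chosen example values are assumptions for illustration only:

```python
# Sketch: build the restore request file that the S3 disk reads on startup.
# Field order (revision, optional bucket, optional path) follows the
# create_restore_file() helper in the patch below.
def write_restore_file(disk_path, revision=0, bucket=None, path=None):
    lines = [str(revision)]
    if bucket:
        lines.append(bucket)
    if path:
        lines.append(path)
    with open(f"{disk_path}/restore", "a") as f:  # e.g. /var/lib/clickhouse/disks/s3/restore
        f.write("\n".join(lines) + "\n")

write_restore_file("/var/lib/clickhouse/disks/s3", revision=0, bucket="root2", path="data")
```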
diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf_another_bucket.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf_another_bucket.xml new file mode 100644 index 00000000000..645d1111ab8 --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf_another_bucket.xml @@ -0,0 +1,34 @@ + + + + + s3 + http://minio1:9001/root2/data/ + minio + minio123 + true + 1 + + + local + / + + + + + +
+ s3 +
+ + hdd + +
+
+
+
+ + + 0 + +
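The revision values written into that restore file come from backups: after `ALTER TABLE s3.test FREEZE`, the S3 disk records a `revision.txt` under `shadow/<backup_number>/`, which the test's `get_revision_counter` helper reads back. A hedged sketch of the same lookup done directly on the filesystem instead of through `exec_in_container` (the path is the one used in the patch; the rest is illustrative):

```python
# Sketch: read the revision recorded for a FREEZE backup on the S3 disk,
# as get_revision_counter() in the test does via exec_in_container.
def read_revision_counter(disk_path, backup_number):
    revision_file = f"{disk_path}/shadow/{backup_number}/revision.txt"
    with open(revision_file) as f:
        return int(f.read().strip())

# e.g. read_revision_counter("/var/lib/clickhouse/disks/s3", 1)
```

In `test_restore_different_revisions` this counter is captured after each FREEZE and later written into the restore file to roll another node back to exactly that point.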
diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf_another_bucket_path.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf_another_bucket_path.xml new file mode 100644 index 00000000000..42207674c79 --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.d/storage_conf_another_bucket_path.xml @@ -0,0 +1,34 @@ + + + + + s3 + http://minio1:9001/root2/another_data/ + minio + minio123 + true + 1 + + + local + / + + + + + +
+ s3 +
+ + hdd + +
+
+
+
+ + + 0 + +
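Between test cases the suite wipes both the local S3 metadata/shadow directories and the MinIO buckets themselves; `purge_s3` iterates over every object in a bucket with the MinIO client and removes it. A standalone sketch of that cleanup, where the endpoint and credentials match the configs above but the direct client construction is an assumption (the test itself reuses `cluster.minio_client`):

```python
# Sketch: empty a MinIO bucket between test cases, as purge_s3() does.
from minio import Minio

client = Minio("minio1:9001", access_key="minio", secret_key="minio123", secure=False)

def purge_bucket(client, bucket):
    for obj in client.list_objects(bucket, recursive=True):
        client.remove_object(bucket, obj.object_name)

# e.g. purge_bucket(client, "root2")
```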
diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.d/users.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.d/users.xml new file mode 100644 index 00000000000..797113053f4 --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.d/users.xml @@ -0,0 +1,5 @@ + + + + + diff --git a/tests/integration/test_merge_tree_s3_restore/configs/config.xml b/tests/integration/test_merge_tree_s3_restore/configs/config.xml new file mode 100644 index 00000000000..24b7344df3a --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/configs/config.xml @@ -0,0 +1,20 @@ + + + 9000 + 127.0.0.1 + + + + true + none + + AcceptCertificateHandler + + + + + 500 + 5368709120 + ./clickhouse/ + users.xml + diff --git a/tests/integration/test_merge_tree_s3_restore/test.py b/tests/integration/test_merge_tree_s3_restore/test.py new file mode 100644 index 00000000000..346d9aced3f --- /dev/null +++ b/tests/integration/test_merge_tree_s3_restore/test.py @@ -0,0 +1,313 @@ +import logging +import random +import string +import time + +import pytest +from helpers.cluster import ClickHouseCluster + +logging.getLogger().setLevel(logging.INFO) +logging.getLogger().addHandler(logging.StreamHandler()) + + +@pytest.fixture(scope="module") +def cluster(): + try: + cluster = ClickHouseCluster(__file__) + cluster.add_instance("node", main_configs=[ + "configs/config.d/storage_conf.xml", + "configs/config.d/bg_processing_pool_conf.xml", + "configs/config.d/log_conf.xml"], user_configs=[], with_minio=True, stay_alive=True) + cluster.add_instance("node_another_bucket", main_configs=[ + "configs/config.d/storage_conf_another_bucket.xml", + "configs/config.d/bg_processing_pool_conf.xml", + "configs/config.d/log_conf.xml"], user_configs=[], stay_alive=True) + cluster.add_instance("node_another_bucket_path", main_configs=[ + "configs/config.d/storage_conf_another_bucket_path.xml", + "configs/config.d/bg_processing_pool_conf.xml", + "configs/config.d/log_conf.xml"], user_configs=[], stay_alive=True) + logging.info("Starting cluster...") + cluster.start() + logging.info("Cluster started") + + yield cluster + finally: + cluster.shutdown() + + +def random_string(length): + letters = string.ascii_letters + return ''.join(random.choice(letters) for i in range(length)) + + +def generate_values(date_str, count, sign=1): + data = [[date_str, sign * (i + 1), random_string(10)] for i in range(count)] + data.sort(key=lambda tup: tup[1]) + return ",".join(["('{}',{},'{}',{})".format(x, y, z, 0) for x, y, z in data]) + + +def create_table(node, table_name, additional_settings=None): + node.query("CREATE DATABASE IF NOT EXISTS s3 ENGINE = Ordinary") + + create_table_statement = """ + CREATE TABLE s3.{} ( + dt Date, + id Int64, + data String, + counter Int64, + INDEX min_max (id) TYPE minmax GRANULARITY 3 + ) ENGINE=MergeTree() + PARTITION BY dt + ORDER BY (dt, id) + SETTINGS + storage_policy='s3', + old_parts_lifetime=600, + index_granularity=512 + """.format(table_name) + + if additional_settings: + create_table_statement += "," + create_table_statement += additional_settings + + node.query(create_table_statement) + + +def purge_s3(cluster, bucket): + minio = cluster.minio_client + for obj in list(minio.list_objects(bucket, recursive=True)): + minio.remove_object(bucket, obj.object_name) + + +def drop_s3_metadata(node): + node.exec_in_container(['bash', '-c', 'rm -rf /var/lib/clickhouse/disks/s3/*'], user='root') + + +def drop_shadow_information(node): + node.exec_in_container(['bash', '-c', 
'rm -rf /var/lib/clickhouse/shadow/*'], user='root') + + +def create_restore_file(node, revision=0, bucket=None, path=None): + add_restore_option = 'echo -en "{}\n" >> /var/lib/clickhouse/disks/s3/restore' + node.exec_in_container(['bash', '-c', add_restore_option.format(revision)], user='root') + if bucket: + node.exec_in_container(['bash', '-c', add_restore_option.format(bucket)], user='root') + if path: + node.exec_in_container(['bash', '-c', add_restore_option.format(path)], user='root') + + +def get_revision_counter(node, backup_number): + return int(node.exec_in_container(['bash', '-c', 'cat /var/lib/clickhouse/disks/s3/shadow/{}/revision.txt'.format(backup_number)], user='root')) + + +@pytest.fixture(autouse=True) +def drop_table(cluster): + yield + + node_names = ["node", "node_another_bucket", "node_another_bucket_path"] + + for node_name in node_names: + node = cluster.instances[node_name] + node.query("DROP TABLE IF EXISTS s3.test NO DELAY") + + drop_s3_metadata(node) + drop_shadow_information(node) + + buckets = [cluster.minio_bucket, cluster.minio_bucket_2] + for bucket in buckets: + purge_s3(cluster, bucket) + + +def test_full_restore(cluster): + node = cluster.instances["node"] + + create_table(node, "test") + + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-04', 4096, -1))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096, -1))) + + # To ensure parts have merged + node.query("OPTIMIZE TABLE s3.test") + + assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + + node.stop_clickhouse() + drop_s3_metadata(node) + node.start_clickhouse() + + # All data is removed. 
+ assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(0) + + node.stop_clickhouse() + create_restore_file(node) + node.start_clickhouse(10) + + assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + + +def test_restore_another_bucket_path(cluster): + node = cluster.instances["node"] + + create_table(node, "test") + + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-04', 4096, -1))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096, -1))) + + # To ensure parts have merged + node.query("OPTIMIZE TABLE s3.test") + + assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + + node_another_bucket = cluster.instances["node_another_bucket"] + + create_table(node_another_bucket, "test") + + node_another_bucket.stop_clickhouse() + create_restore_file(node_another_bucket, bucket="root") + node_another_bucket.start_clickhouse(10) + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + + node_another_bucket_path = cluster.instances["node_another_bucket_path"] + + create_table(node_another_bucket_path, "test") + + node_another_bucket_path.stop_clickhouse() + create_restore_file(node_another_bucket_path, bucket="root2", path="data") + node_another_bucket_path.start_clickhouse(10) + + assert node_another_bucket_path.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node_another_bucket_path.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + + +def test_restore_different_revisions(cluster): + node = cluster.instances["node"] + + create_table(node, "test") + + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-04', 4096, -1))) + + node.query("ALTER TABLE s3.test FREEZE") + revision1 = get_revision_counter(node, 1) + + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-05', 4096, -1))) + + node.query("ALTER TABLE s3.test FREEZE") + revision2 = get_revision_counter(node, 2) + + # To ensure parts have merged + node.query("OPTIMIZE TABLE s3.test") + + node.query("ALTER TABLE s3.test FREEZE") + revision3 = get_revision_counter(node, 3) + + assert node.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node.query("SELECT count(*) from system.parts where table = 'test'") == '5\n' + + node_another_bucket = cluster.instances["node_another_bucket"] + + create_table(node_another_bucket, "test") + + # Restore to revision 1 (2 parts). 
+ node_another_bucket.stop_clickhouse() + drop_s3_metadata(node_another_bucket) + purge_s3(cluster, cluster.minio_bucket_2) + create_restore_file(node_another_bucket, revision=revision1, bucket="root") + node_another_bucket.start_clickhouse(10) + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT count(*) from system.parts where table = 'test'") == '2\n' + + # Restore to revision 2 (4 parts). + node_another_bucket.stop_clickhouse() + drop_s3_metadata(node_another_bucket) + purge_s3(cluster, cluster.minio_bucket_2) + create_restore_file(node_another_bucket, revision=revision2, bucket="root") + node_another_bucket.start_clickhouse(10) + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT count(*) from system.parts where table = 'test'") == '4\n' + + # Restore to revision 3 (4 parts + 1 merged). + node_another_bucket.stop_clickhouse() + drop_s3_metadata(node_another_bucket) + purge_s3(cluster, cluster.minio_bucket_2) + create_restore_file(node_another_bucket, revision=revision3, bucket="root") + node_another_bucket.start_clickhouse(10) + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 4) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT count(*) from system.parts where table = 'test'") == '5\n' + + +def test_restore_mutations(cluster): + node = cluster.instances["node"] + + create_table(node, "test") + + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096))) + node.query("INSERT INTO s3.test VALUES {}".format(generate_values('2020-01-03', 4096, -1))) + + node.query("ALTER TABLE s3.test FREEZE") + revision_before_mutation = get_revision_counter(node, 1) + + node.query("ALTER TABLE s3.test UPDATE counter = 1 WHERE 1", settings={"mutations_sync": 2}) + + node.query("ALTER TABLE s3.test FREEZE") + revision_after_mutation = get_revision_counter(node, 2) + + node_another_bucket = cluster.instances["node_another_bucket"] + + create_table(node_another_bucket, "test") + + # Restore to revision before mutation. + node_another_bucket.stop_clickhouse() + drop_s3_metadata(node_another_bucket) + purge_s3(cluster, cluster.minio_bucket_2) + create_restore_file(node_another_bucket, revision=revision_before_mutation, bucket="root") + node_another_bucket.start_clickhouse(10) + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT sum(counter) FROM s3.test FORMAT Values") == "({})".format(0) + + # Restore to revision after mutation. 
+ node_another_bucket.stop_clickhouse() + drop_s3_metadata(node_another_bucket) + purge_s3(cluster, cluster.minio_bucket_2) + create_restore_file(node_another_bucket, revision=revision_after_mutation, bucket="root") + node_another_bucket.start_clickhouse(10) + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT sum(counter) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) + assert node_another_bucket.query("SELECT sum(counter) FROM s3.test WHERE id > 0 FORMAT Values") == "({})".format(4096) + + # Restore to revision in the middle of mutation. + # Unfinished mutation should be completed after table startup. + node_another_bucket.stop_clickhouse() + drop_s3_metadata(node_another_bucket) + purge_s3(cluster, cluster.minio_bucket_2) + revision = (revision_before_mutation + revision_after_mutation) // 2 + create_restore_file(node_another_bucket, revision=revision, bucket="root") + node_another_bucket.start_clickhouse(10) + + # Wait for unfinished mutation completion. + time.sleep(3) + + assert node_another_bucket.query("SELECT count(*) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) + assert node_another_bucket.query("SELECT sum(id) FROM s3.test FORMAT Values") == "({})".format(0) + assert node_another_bucket.query("SELECT sum(counter) FROM s3.test FORMAT Values") == "({})".format(4096 * 2) + assert node_another_bucket.query("SELECT sum(counter) FROM s3.test WHERE id > 0 FORMAT Values") == "({})".format(4096) diff --git a/tests/queries/0_stateless/01443_merge_truncate.reference b/tests/integration/test_query_deduplication/__init__.py similarity index 100% rename from tests/queries/0_stateless/01443_merge_truncate.reference rename to tests/integration/test_query_deduplication/__init__.py diff --git a/tests/integration/test_query_deduplication/configs/deduplication_settings.xml b/tests/integration/test_query_deduplication/configs/deduplication_settings.xml new file mode 100644 index 00000000000..8369c916848 --- /dev/null +++ b/tests/integration/test_query_deduplication/configs/deduplication_settings.xml @@ -0,0 +1,5 @@ + + + 1 + + diff --git a/tests/integration/test_query_deduplication/configs/remote_servers.xml b/tests/integration/test_query_deduplication/configs/remote_servers.xml new file mode 100644 index 00000000000..f12558ca529 --- /dev/null +++ b/tests/integration/test_query_deduplication/configs/remote_servers.xml @@ -0,0 +1,24 @@ + + + + + + node1 + 9000 + + + + + node2 + 9000 + + + + + node3 + 9000 + + + + + diff --git a/tests/integration/test_query_deduplication/test.py b/tests/integration/test_query_deduplication/test.py new file mode 100644 index 00000000000..8d935b98579 --- /dev/null +++ b/tests/integration/test_query_deduplication/test.py @@ -0,0 +1,165 @@ +import uuid + +import pytest + +from helpers.cluster import ClickHouseCluster +from helpers.test_tools import TSV + +DUPLICATED_UUID = uuid.uuid4() + +cluster = ClickHouseCluster(__file__) + +node1 = cluster.add_instance( + 'node1', + main_configs=['configs/remote_servers.xml', 'configs/deduplication_settings.xml']) + +node2 = cluster.add_instance( + 'node2', + main_configs=['configs/remote_servers.xml', 'configs/deduplication_settings.xml']) + +node3 = cluster.add_instance( + 'node3', + main_configs=['configs/remote_servers.xml', 'configs/deduplication_settings.xml']) + + +@pytest.fixture(scope="module") +def 
started_cluster(): + try: + cluster.start() + yield cluster + finally: + cluster.shutdown() + + +def prepare_node(node, parts_uuid=None): + node.query(""" + CREATE TABLE t(_prefix UInt8 DEFAULT 0, key UInt64, value UInt64) + ENGINE MergeTree() + ORDER BY tuple() + PARTITION BY _prefix + SETTINGS index_granularity = 1 + """) + + node.query(""" + CREATE TABLE d AS t ENGINE=Distributed(test_cluster, default, t) + """) + + # Stop merges while populating test data + node.query("SYSTEM STOP MERGES") + + # Create 5 parts + for i in range(1, 6): + node.query("INSERT INTO t VALUES ({}, {}, {})".format(i, i, i)) + + node.query("DETACH TABLE t") + + if parts_uuid: + for part, part_uuid in parts_uuid: + script = """ + echo -n '{}' > /var/lib/clickhouse/data/default/t/{}/uuid.txt + """.format(part_uuid, part) + node.exec_in_container(["bash", "-c", script]) + + # Attach table back + node.query("ATTACH TABLE t") + + # NOTE: + # due to absence of the ability to lock part, need to operate on parts with preventin merges + # node.query("SYSTEM START MERGES") + # node.query("OPTIMIZE TABLE t FINAL") + + print(node.name) + print(node.query("SELECT name, uuid, partition FROM system.parts WHERE table = 't' AND active ORDER BY name")) + + assert '5' == node.query("SELECT count() FROM system.parts WHERE table = 't' AND active").strip() + if parts_uuid: + for part, part_uuid in parts_uuid: + assert '1' == node.query( + "SELECT count() FROM system.parts WHERE table = 't' AND uuid = '{}' AND active".format( + part_uuid)).strip() + + +@pytest.fixture(scope="module") +def prepared_cluster(started_cluster): + print("duplicated UUID: {}".format(DUPLICATED_UUID)) + prepare_node(node1, parts_uuid=[("3_3_3_0", DUPLICATED_UUID)]) + prepare_node(node2, parts_uuid=[("3_3_3_0", DUPLICATED_UUID)]) + prepare_node(node3) + + +def test_virtual_column(prepared_cluster): + # Part containing `key=3` has the same fingerprint on both nodes, + # we expect it to be included only once in the end result.; + # select query is using virtucal column _part_fingerprint to filter out part in one shard + expected = """ + 1 2 + 2 2 + 3 1 + 4 2 + 5 2 + """ + assert TSV(expected) == TSV(node1.query(""" + SELECT + key, + count() AS c + FROM d + WHERE ((_shard_num = 1) AND (_part_uuid != '{}')) OR (_shard_num = 2) + GROUP BY key + ORDER BY + key ASC + """.format(DUPLICATED_UUID))) + + +def test_with_deduplication(prepared_cluster): + # Part containing `key=3` has the same fingerprint on both nodes, + # we expect it to be included only once in the end result + expected = """ +1 3 +2 3 +3 2 +4 3 +5 3 +""" + assert TSV(expected) == TSV(node1.query( + "SET allow_experimental_query_deduplication=1; SELECT key, count() c FROM d GROUP BY key ORDER BY key")) + + +def test_no_merge_with_deduplication(prepared_cluster): + # Part containing `key=3` has the same fingerprint on both nodes, + # we expect it to be included only once in the end result. 
+ # even with distributed_group_by_no_merge=1 the duplicated part should be excluded from the final result + expected = """ +1 1 +2 1 +3 1 +4 1 +5 1 +1 1 +2 1 +3 1 +4 1 +5 1 +1 1 +2 1 +4 1 +5 1 +""" + assert TSV(expected) == TSV(node1.query("SELECT key, count() c FROM d GROUP BY key ORDER BY key", settings={ + "allow_experimental_query_deduplication": 1, + "distributed_group_by_no_merge": 1, + })) + + +def test_without_deduplication(prepared_cluster): + # Part containing `key=3` has the same fingerprint on both nodes, + # but allow_experimental_query_deduplication is disabled, + # so it will not be excluded + expected = """ +1 3 +2 3 +3 3 +4 3 +5 3 +""" + assert TSV(expected) == TSV(node1.query( + "SET allow_experimental_query_deduplication=0; SELECT key, count() c FROM d GROUP BY key ORDER BY key")) diff --git a/tests/integration/test_quota/normal_limits.xml b/tests/integration/test_quota/normal_limits.xml index b7c3a67b5cc..e32043ef5ec 100644 --- a/tests/integration/test_quota/normal_limits.xml +++ b/tests/integration/test_quota/normal_limits.xml @@ -8,6 +8,8 @@ 1000 + 500 + 500 0 1000 0 diff --git a/tests/integration/test_quota/test.py b/tests/integration/test_quota/test.py index 0614150ee07..84454159a58 100644 --- a/tests/integration/test_quota/test.py +++ b/tests/integration/test_quota/test.py @@ -28,7 +28,7 @@ def system_quota_limits(canonical): def system_quota_usage(canonical): canonical_tsv = TSV(canonical) - query = "SELECT quota_name, quota_key, duration, queries, max_queries, errors, max_errors, result_rows, max_result_rows," \ + query = "SELECT quota_name, quota_key, duration, queries, max_queries, query_selects, max_query_selects, query_inserts, max_query_inserts, errors, max_errors, result_rows, max_result_rows," \ "result_bytes, max_result_bytes, read_rows, max_read_rows, read_bytes, max_read_bytes, max_execution_time " \ "FROM system.quota_usage ORDER BY duration" r = TSV(instance.query(query)) @@ -38,7 +38,7 @@ def system_quota_usage(canonical): def system_quotas_usage(canonical): canonical_tsv = TSV(canonical) - query = "SELECT quota_name, quota_key, is_current, duration, queries, max_queries, errors, max_errors, result_rows, max_result_rows, " \ + query = "SELECT quota_name, quota_key, is_current, duration, queries, max_queries, query_selects, max_query_selects, query_inserts, max_query_inserts, errors, max_errors, result_rows, max_result_rows, " \ "result_bytes, max_result_bytes, read_rows, max_read_rows, read_bytes, max_read_bytes, max_execution_time " \ "FROM system.quotas_usage ORDER BY quota_name, quota_key, duration" r = TSV(instance.query(query)) @@ -73,6 +73,7 @@ def reset_quotas_and_usage_info(): try: yield finally: + copy_quota_xml('simpliest.xml') # To reset usage info. instance.query("DROP QUOTA IF EXISTS qA, qB") copy_quota_xml('simpliest.xml') # To reset usage info. 
copy_quota_xml('normal_limits.xml') @@ -81,18 +82,18 @@ def reset_quotas_and_usage_info(): def test_quota_from_users_xml(): check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", [31556952], 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) - system_quota_usage([["myQuota", "default", 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_usage([["myQuota", "default", 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) system_quotas_usage( - [["myQuota", "default", 1, 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + [["myQuota", "default", 1, 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) instance.query("SELECT * from test_table") system_quota_usage( - [["myQuota", "default", 31556952, 1, 1000, 0, "\\N", 50, "\\N", 200, "\\N", 50, 1000, 200, "\\N", "\\N"]]) + [["myQuota", "default", 31556952, 1, 1000, 1, 500, 0, 500, 0, "\\N", 50, "\\N", 200, "\\N", 50, 1000, 200, "\\N", "\\N"]]) instance.query("SELECT COUNT() from test_table") system_quota_usage( - [["myQuota", "default", 31556952, 2, 1000, 0, "\\N", 51, "\\N", 208, "\\N", 50, 1000, 200, "\\N", "\\N"]]) + [["myQuota", "default", 31556952, 2, 1000, 2, 500, 0, 500, 0, "\\N", 51, "\\N", 208, "\\N", 50, 1000, 200, "\\N", "\\N"]]) def test_simpliest_quota(): @@ -102,11 +103,11 @@ def test_simpliest_quota(): "['default']", "[]"]]) system_quota_limits("") system_quota_usage( - [["myQuota", "default", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) + [["myQuota", "default", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) instance.query("SELECT * from test_table") system_quota_usage( - [["myQuota", "default", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) + [["myQuota", "default", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) def test_tracking_quota(): @@ -114,16 +115,16 @@ def test_tracking_quota(): copy_quota_xml('tracking.xml') check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", "[31556952]", 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) - system_quota_usage([["myQuota", "default", 31556952, 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) + system_quota_usage([["myQuota", "default", 31556952, 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", "\\N"]]) instance.query("SELECT * from test_table") system_quota_usage( - [["myQuota", "default", 31556952, 1, "\\N", 0, "\\N", 50, "\\N", 200, "\\N", 50, "\\N", 200, "\\N", "\\N"]]) + [["myQuota", "default", 31556952, 1, "\\N", 1, "\\N", 0, "\\N", 0, "\\N", 50, "\\N", 200, "\\N", 50, "\\N", 200, "\\N", "\\N"]]) instance.query("SELECT COUNT() from test_table") system_quota_usage( - [["myQuota", "default", 31556952, 2, "\\N", 0, "\\N", 51, "\\N", 208, "\\N", 50, "\\N", 200, "\\N", 
"\\N"]]) + [["myQuota", "default", 31556952, 2, "\\N", 2, "\\N", 0, "\\N", 0, "\\N", 51, "\\N", 208, "\\N", 50, "\\N", 200, "\\N", "\\N"]]) def test_exceed_quota(): @@ -131,55 +132,55 @@ def test_exceed_quota(): copy_quota_xml('tiny_limits.xml') check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", "[31556952]", 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1, 1, 1, "\\N", 1, "\\N", "\\N"]]) - system_quota_usage([["myQuota", "default", 31556952, 0, 1, 0, 1, 0, 1, 0, "\\N", 0, 1, 0, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1, 1, 1, 1, 1, "\\N", 1, "\\N", "\\N"]]) + system_quota_usage([["myQuota", "default", 31556952, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, "\\N", 0, 1, 0, "\\N", "\\N"]]) assert re.search("Quota.*has\ been\ exceeded", instance.query_and_get_error("SELECT * from test_table")) - system_quota_usage([["myQuota", "default", 31556952, 1, 1, 1, 1, 0, 1, 0, "\\N", 50, 1, 0, "\\N", "\\N"]]) + system_quota_usage([["myQuota", "default", 31556952, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, "\\N", 50, 1, 0, "\\N", "\\N"]]) # Change quota, now the limits are enough to execute queries. copy_quota_xml('normal_limits.xml') check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", "[31556952]", 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) - system_quota_usage([["myQuota", "default", 31556952, 1, 1000, 1, "\\N", 0, "\\N", 0, "\\N", 50, 1000, 0, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_usage([["myQuota", "default", 31556952, 1, 1000, 1, 500, 0, 500, 1, "\\N", 0, "\\N", 0, "\\N", 50, 1000, 0, "\\N", "\\N"]]) instance.query("SELECT * from test_table") system_quota_usage( - [["myQuota", "default", 31556952, 2, 1000, 1, "\\N", 50, "\\N", 200, "\\N", 100, 1000, 200, "\\N", "\\N"]]) + [["myQuota", "default", 31556952, 2, 1000, 2, 500, 0, 500, 1, "\\N", 50, "\\N", 200, "\\N", 100, 1000, 200, "\\N", "\\N"]]) def test_add_remove_interval(): check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", [31556952], 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) - system_quota_usage([["myQuota", "default", 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_usage([["myQuota", "default", 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) # Add interval. 
copy_quota_xml('two_intervals.xml') check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", "[31556952,63113904]", 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"], - ["myQuota", 63113904, 1, "\\N", "\\N", "\\N", 30000, "\\N", 20000, 120]]) - system_quota_usage([["myQuota", "default", 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"], - ["myQuota", "default", 63113904, 0, "\\N", 0, "\\N", 0, "\\N", 0, 30000, 0, "\\N", 0, 20000, 120]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", "\\N", "\\N", 1000, "\\N", "\\N"], + ["myQuota", 63113904, 1, "\\N", "\\N", "\\N", "\\N", "\\N", 30000, "\\N", 20000, 120]]) + system_quota_usage([["myQuota", "default", 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"], + ["myQuota", "default", 63113904, 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, 30000, 0, "\\N", 0, 20000, 120]]) instance.query("SELECT * from test_table") system_quota_usage( - [["myQuota", "default", 31556952, 1, 1000, 0, "\\N", 50, "\\N", 200, "\\N", 50, 1000, 200, "\\N", "\\N"], - ["myQuota", "default", 63113904, 1, "\\N", 0, "\\N", 50, "\\N", 200, 30000, 50, "\\N", 200, 20000, 120]]) + [["myQuota", "default", 31556952, 1, 1000, 1, "\\N", 0, "\\N", 0, "\\N", 50, "\\N", 200, "\\N", 50, 1000, 200, "\\N", "\\N"], + ["myQuota", "default", 63113904, 1, "\\N", 1, "\\N", 0, "\\N", 0, "\\N", 50, "\\N", 200, 30000, 50, "\\N", 200, 20000, 120]]) # Remove interval. copy_quota_xml('normal_limits.xml') check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", [31556952], 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) system_quota_usage( - [["myQuota", "default", 31556952, 1, 1000, 0, "\\N", 50, "\\N", 200, "\\N", 50, 1000, 200, "\\N", "\\N"]]) + [["myQuota", "default", 31556952, 1, 1000, 1, 500, 0, 500, 0, "\\N", 50, "\\N", 200, "\\N", 50, 1000, 200, "\\N", "\\N"]]) instance.query("SELECT * from test_table") system_quota_usage( - [["myQuota", "default", 31556952, 2, 1000, 0, "\\N", 100, "\\N", 400, "\\N", 100, 1000, 400, "\\N", "\\N"]]) + [["myQuota", "default", 31556952, 2, 1000, 2, 500, 0, 500, 0, "\\N", 100, "\\N", 400, "\\N", 100, 1000, 400, "\\N", "\\N"]]) # Remove all intervals. copy_quota_xml('simpliest.xml') @@ -187,26 +188,26 @@ def test_add_remove_interval(): "['default']", "[]"]]) system_quota_limits("") system_quota_usage( - [["myQuota", "default", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) + [["myQuota", "default", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) instance.query("SELECT * from test_table") system_quota_usage( - [["myQuota", "default", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) + [["myQuota", "default", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N"]]) # Add one interval back. 
copy_quota_xml('normal_limits.xml') check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", [31556952], 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) - system_quota_usage([["myQuota", "default", 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_usage([["myQuota", "default", 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) def test_add_remove_quota(): check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", [31556952], 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) system_quotas_usage( - [["myQuota", "default", 1, 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + [["myQuota", "default", 1, 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) # Add quota. copy_quota_xml('two_quotas.xml') @@ -214,19 +215,19 @@ def test_add_remove_quota(): 0, "['default']", "[]"], ["myQuota2", "4590510c-4d13-bf21-ec8a-c2187b092e73", "users.xml", "['client_key','user_name']", "[3600,2629746]", 0, "[]", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"], - ["myQuota2", 3600, 1, "\\N", "\\N", 4000, 400000, 4000, 400000, 60], - ["myQuota2", 2629746, 0, "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", 1800]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", "\\N", "\\N", 1000, "\\N", "\\N"], + ["myQuota2", 3600, 1, "\\N", "\\N", "\\N", "\\N", 4000, 400000, 4000, 400000, 60], + ["myQuota2", 2629746, 0, "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", "\\N", 1800]]) system_quotas_usage( - [["myQuota", "default", 1, 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + [["myQuota", "default", 1, 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) # Drop quota. copy_quota_xml('normal_limits.xml') check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", "[31556952]", 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) system_quotas_usage( - [["myQuota", "default", 1, 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + [["myQuota", "default", 1, 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) # Drop all quotas. 
copy_quota_xml('no_quotas.xml') @@ -238,15 +239,15 @@ def test_add_remove_quota(): copy_quota_xml('normal_limits.xml') check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", "[31556952]", 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) system_quotas_usage( - [["myQuota", "default", 1, 31556952, 0, 1000, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + [["myQuota", "default", 1, 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) def test_reload_users_xml_by_timer(): check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", "[31556952]", 0, "['default']", "[]"]]) - system_quota_limits([["myQuota", 31556952, 0, 1000, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) time.sleep(1) # The modification time of the 'quota.xml' file should be different, # because config files are reload by timer only when the modification time is changed. @@ -255,25 +256,25 @@ def test_reload_users_xml_by_timer(): ["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", ['user_name'], "[31556952]", 0, "['default']", "[]"]]) assert_eq_with_retry(instance, "SELECT * FROM system.quota_limits", - [["myQuota", 31556952, 0, 1, 1, 1, "\\N", 1, "\\N", "\\N"]]) + [["myQuota", 31556952, 0, 1, 1, 1, 1, 1, "\\N", 1, "\\N", "\\N"]]) def test_dcl_introspection(): assert instance.query("SHOW QUOTAS") == "myQuota\n" assert instance.query( - "SHOW CREATE QUOTA") == "CREATE QUOTA myQuota KEYED BY user_name FOR INTERVAL 1 year MAX queries = 1000, read_rows = 1000 TO default\n" + "SHOW CREATE QUOTA") == "CREATE QUOTA myQuota KEYED BY user_name FOR INTERVAL 1 year MAX queries = 1000, query_selects = 500, query_inserts = 500, read_rows = 1000 TO default\n" assert instance.query( - "SHOW CREATE QUOTAS") == "CREATE QUOTA myQuota KEYED BY user_name FOR INTERVAL 1 year MAX queries = 1000, read_rows = 1000 TO default\n" + "SHOW CREATE QUOTAS") == "CREATE QUOTA myQuota KEYED BY user_name FOR INTERVAL 1 year MAX queries = 1000, query_selects = 500, query_inserts = 500, read_rows = 1000 TO default\n" assert re.match( - "myQuota\\tdefault\\t.*\\t31556952\\t0\\t1000\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t1000\\t0\\t\\\\N\\t.*\\t\\\\N\n", + "myQuota\\tdefault\\t.*\\t31556952\\t0\\t1000\\t0\\t500\\t0\\t500\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t1000\\t0\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) instance.query("SELECT * from test_table") assert re.match( - "myQuota\\tdefault\\t.*\\t31556952\\t1\\t1000\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t1000\\t200\\t\\\\N\\t.*\\t\\\\N\n", + "myQuota\\tdefault\\t.*\\t31556952\\t1\\t1000\\t1\\t500\\t0\\t500\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t1000\\t200\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) - expected_access = "CREATE QUOTA myQuota KEYED BY user_name FOR INTERVAL 1 year MAX queries = 1000, read_rows = 1000 TO default\n" + expected_access = "CREATE QUOTA myQuota KEYED BY user_name FOR INTERVAL 1 year MAX queries = 1000, query_selects = 500, query_inserts = 500, read_rows = 1000 TO default\n" assert expected_access in instance.query("SHOW ACCESS") # Add interval. 
@@ -282,8 +283,8 @@ def test_dcl_introspection(): assert instance.query( "SHOW CREATE QUOTA") == "CREATE QUOTA myQuota KEYED BY user_name FOR INTERVAL 1 year MAX queries = 1000, read_rows = 1000, FOR RANDOMIZED INTERVAL 2 year MAX result_bytes = 30000, read_bytes = 20000, execution_time = 120 TO default\n" assert re.match( - "myQuota\\tdefault\\t.*\\t31556952\\t1\\t1000\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t1000\\t200\\t\\\\N\\t.*\\t\\\\N\n" - "myQuota\\tdefault\\t.*\\t63113904\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t30000\\t0\\t\\\\N\\t0\\t20000\\t.*\\t120", + "myQuota\\tdefault\\t.*\\t31556952\\t1\\t1000\\t1\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t1000\\t200\\t\\\\N\\t.*\\t\\\\N\n" + "myQuota\\tdefault\\t.*\\t63113904\\t0\\t\\\\N\t0\\t\\\\N\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t30000\\t0\\t\\\\N\\t0\\t20000\\t.*\\t120", instance.query("SHOW QUOTA")) # Drop interval, add quota. @@ -297,7 +298,7 @@ def test_dcl_introspection(): "SHOW CREATE QUOTAS") == "CREATE QUOTA myQuota KEYED BY user_name FOR INTERVAL 1 year MAX queries = 1000, read_rows = 1000 TO default\n" \ "CREATE QUOTA myQuota2 KEYED BY client_key, user_name FOR RANDOMIZED INTERVAL 1 hour MAX result_rows = 4000, result_bytes = 400000, read_rows = 4000, read_bytes = 400000, execution_time = 60, FOR INTERVAL 1 month MAX execution_time = 1800\n" assert re.match( - "myQuota\\tdefault\\t.*\\t31556952\\t1\\t1000\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t1000\\t200\\t\\\\N\\t.*\\t\\\\N\n", + "myQuota\\tdefault\\t.*\\t31556952\\t1\\t1000\\t1\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t1000\\t200\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) # Drop all quotas. @@ -315,12 +316,12 @@ def test_dcl_management(): assert instance.query( "SHOW CREATE QUOTA qA") == "CREATE QUOTA qA FOR INTERVAL 5 quarter MAX queries = 123 TO default\n" assert re.match( - "qA\\t\\t.*\\t39446190\\t0\\t123\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t.*\\t\\\\N\n", + "qA\\t\\t.*\\t39446190\\t0\\t123\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) instance.query("SELECT * from test_table") assert re.match( - "qA\\t\\t.*\\t39446190\\t1\\t123\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t\\\\N\n", + "qA\\t\\t.*\\t39446190\\t1\\t123\\t1\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) instance.query( @@ -328,37 +329,37 @@ def test_dcl_management(): assert instance.query( "SHOW CREATE QUOTA qA") == "CREATE QUOTA qA FOR INTERVAL 30 minute MAX execution_time = 0.5, FOR INTERVAL 5 quarter MAX queries = 321, errors = 10 TO default\n" assert re.match( - "qA\\t\\t.*\\t1800\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t.*\\t0.5\n" - "qA\\t\\t.*\\t39446190\\t1\\t321\\t0\\t10\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t\\\\N\n", + "qA\\t\\t.*\\t1800\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t.*\\t0.5\n" + "qA\\t\\t.*\\t39446190\\t1\\t321\\t1\\t\\\\N\\t0\\t\\\\N\\t0\\t10\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) instance.query("SELECT * from test_table") assert re.match( - "qA\\t\\t.*\\t1800\\t1\\t\\\\N\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t0.5\n" - 
"qA\\t\\t.*\\t39446190\\t2\\t321\\t0\\t10\\t100\\t\\\\N\\t400\\t\\\\N\\t100\\t\\\\N\\t400\\t\\\\N\\t.*\\t\\\\N\n", + "qA\\t\\t.*\\t1800\\t1\\t\\\\N\\t1\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t0.5\n" + "qA\\t\\t.*\\t39446190\\t2\\t321\\t2\\t\\\\N\\t0\\t\\\\N\\t0\\t10\\t100\\t\\\\N\\t400\\t\\\\N\\t100\\t\\\\N\\t400\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) instance.query( "ALTER QUOTA qA FOR INTERVAL 15 MONTH NO LIMITS, FOR RANDOMIZED INTERVAL 16 MONTH TRACKING ONLY, FOR INTERVAL 1800 SECOND NO LIMITS") assert re.match( - "qA\\t\\t.*\\t42075936\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t.*\\t\\\\N\n", + "qA\\t\\t.*\\t42075936\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) instance.query("SELECT * from test_table") assert re.match( - "qA\\t\\t.*\\t42075936\\t1\\t\\\\N\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t\\\\N\n", + "qA\\t\\t.*\\t42075936\\t1\\t\\\\N\\t1\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) instance.query("ALTER QUOTA qA RENAME TO qB") assert instance.query( "SHOW CREATE QUOTA qB") == "CREATE QUOTA qB FOR RANDOMIZED INTERVAL 16 month TRACKING ONLY TO default\n" assert re.match( - "qB\\t\\t.*\\t42075936\\t1\\t\\\\N\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t\\\\N\n", + "qB\\t\\t.*\\t42075936\\t1\\t\\\\N\\t1\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t50\\t\\\\N\\t200\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) instance.query("SELECT * from test_table") assert re.match( - "qB\\t\\t.*\\t42075936\\t2\\t\\\\N\\t0\\t\\\\N\\t100\\t\\\\N\\t400\\t\\\\N\\t100\\t\\\\N\\t400\\t\\\\N\\t.*\\t\\\\N\n", + "qB\\t\\t.*\\t42075936\\t2\\t\\\\N\\t2\\t\\\\N\\t0\\t\\\\N\\t0\\t\\\\N\\t100\\t\\\\N\\t400\\t\\\\N\\t100\\t\\\\N\\t400\\t\\\\N\\t.*\\t\\\\N\n", instance.query("SHOW QUOTA")) instance.query("DROP QUOTA qB") @@ -367,3 +368,15 @@ def test_dcl_management(): def test_users_xml_is_readonly(): assert re.search("storage is readonly", instance.query_and_get_error("DROP QUOTA myQuota")) + +def test_query_inserts(): + check_system_quotas([["myQuota", "e651da9c-a748-8703-061a-7e5e5096dae7", "users.xml", "['user_name']", [31556952], + 0, "['default']", "[]"]]) + system_quota_limits([["myQuota", 31556952, 0, 1000, 500, 500, "\\N", "\\N", "\\N", 1000, "\\N", "\\N"]]) + system_quota_usage([["myQuota", "default", 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + system_quotas_usage( + [["myQuota", "default", 1, 31556952, 0, 1000, 0, 500, 0, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) + + instance.query("INSERT INTO test_table values(1)") + system_quota_usage( + [["myQuota", "default", 31556952, 1, 1000, 0, 500, 1, 500, 0, "\\N", 0, "\\N", 0, "\\N", 0, 1000, 0, "\\N", "\\N"]]) \ No newline at end of file diff --git a/tests/integration/test_quota/tiny_limits.xml b/tests/integration/test_quota/tiny_limits.xml index 3ab8858738a..4797c360ddd 100644 --- a/tests/integration/test_quota/tiny_limits.xml +++ b/tests/integration/test_quota/tiny_limits.xml @@ -8,6 +8,8 @@ 1 + 1 + 1 1 1 1 diff --git a/tests/integration/test_quota/tracking.xml b/tests/integration/test_quota/tracking.xml index 47e12bf8005..c5e7c993edc 100644 --- a/tests/integration/test_quota/tracking.xml +++ 
b/tests/integration/test_quota/tracking.xml @@ -8,6 +8,8 @@ 0 + 0 + 0 0 0 0 diff --git a/tests/performance/aggfunc_col_data_copy.xml b/tests/performance/aggfunc_col_data_copy.xml new file mode 100644 index 00000000000..111f7959d58 --- /dev/null +++ b/tests/performance/aggfunc_col_data_copy.xml @@ -0,0 +1,24 @@ + + drop table if EXISTS test_bm2; + drop table if EXISTS test_bm_join2; + create table test_bm2( + dim UInt64, + id UInt64) + ENGINE = MergeTree() + ORDER BY( dim ) + SETTINGS index_granularity = 8192; + + + create table test_bm_join2( + dim UInt64, + ids AggregateFunction(groupBitmap, UInt64) ) + ENGINE = MergeTree() + ORDER BY(dim) + SETTINGS index_granularity = 8192; + + insert into test_bm2 SELECT 1,number FROM numbers(0, 1000) + insert into test_bm_join2 SELECT 1, bitmapBuild(range(toUInt64(0),toUInt64(11000000))) + select a.dim,bitmapCardinality(b.ids) from test_bm2 a left join test_bm_join2 b using(dim) + drop table if exists test_bm2 + drop table if exists test_bm_join2 + diff --git a/tests/performance/reinterpret_as.xml b/tests/performance/reinterpret_as.xml index 50cf0cb2278..6ef152bc552 100644 --- a/tests/performance/reinterpret_as.xml +++ b/tests/performance/reinterpret_as.xml @@ -19,7 +19,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -38,7 +38,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -76,7 +76,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -95,7 +95,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -115,7 +115,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -134,7 +134,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -153,7 +153,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -172,7 +172,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -191,7 +191,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -210,7 +210,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null @@ -230,11 +230,11 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(2000000) SETTINGS max_threads = 8 FORMAT Null - + SELECT reinterpretAsFixedString(a), @@ -249,7 +249,7 @@ toInt256(number) as d, toString(number) as f, toFixedString(f, 20) as g - FROM numbers_mt(200000000) + FROM numbers_mt(20000000) SETTINGS max_threads = 8 FORMAT Null diff --git a/tests/performance/window_functions.xml 
b/tests/performance/window_functions.xml index f42345d0696..93983e9b1bf 100644 --- a/tests/performance/window_functions.xml +++ b/tests/performance/window_functions.xml @@ -25,7 +25,31 @@ select * from ( select CounterID, UserID, count(*) user_hits, - count() over (partition by CounterID order by user_hits desc) + count() + over (partition by CounterID order by user_hits desc + rows unbounded preceding) + user_rank + from hits_100m_single + where CounterID < 10000 + group by CounterID, UserID + ) + where user_rank <= 10 + format Null + ]]> + + + toString(i), range(13)), arrayMap(i -> (number + i), range(13)))) FROM numbers(10); + +SELECT * FROM agg_table; + +SELECT if(xxx = 'x', ([2], 3), ([3], 4)) FROM agg_table; + +SELECT if(xxx = 'x', ([2], 3), ([3], 4, 'q', 'w', 7)) FROM agg_table; --{ serverError 386 } + +ALTER TABLE agg_table UPDATE two_values = (two_values.1, two_values.2) WHERE time BETWEEN toDateTime('2020-08-01 00:00:00') AND toDateTime('2020-12-01 00:00:00') SETTINGS mutations_sync = 2; + +ALTER TABLE agg_table UPDATE agg_simple = 5 WHERE time BETWEEN toDateTime('2020-08-01 00:00:00') AND toDateTime('2020-12-01 00:00:00') SETTINGS mutations_sync = 2; + +ALTER TABLE agg_table UPDATE agg = (agg.1, agg.2) WHERE time BETWEEN toDateTime('2020-08-01 00:00:00') AND toDateTime('2020-12-01 00:00:00') SETTINGS mutations_sync = 2; + +ALTER TABLE agg_table UPDATE agg = (agg.1, arrayMap(x -> toUInt64(x / 2), agg.2)) WHERE time BETWEEN toDateTime('2020-08-01 00:00:00') AND toDateTime('2020-12-01 00:00:00') SETTINGS mutations_sync = 2; + +SELECT * FROM agg_table; + +DROP TABLE IF EXISTS agg_table; diff --git a/tests/queries/0_stateless/arcadia_skip_list.txt b/tests/queries/0_stateless/arcadia_skip_list.txt index 7f34b5a9a84..160f10eb413 100644 --- a/tests/queries/0_stateless/arcadia_skip_list.txt +++ b/tests/queries/0_stateless/arcadia_skip_list.txt @@ -200,4 +200,5 @@ 01676_clickhouse_client_autocomplete 01671_aggregate_function_group_bitmap_data 01674_executable_dictionary_implicit_key +01686_rocksdb 01683_dist_INSERT_block_structure_mismatch diff --git a/tests/queries/skip_list.json b/tests/queries/skip_list.json index 3311eb3882d..d76603bf633 100644 --- a/tests/queries/skip_list.json +++ b/tests/queries/skip_list.json @@ -508,6 +508,7 @@ "01294_lazy_database_concurrent_recreate_reattach_and_show_tables", "01294_system_distributed_on_cluster", "01296_create_row_policy_in_current_database", + "01297_create_quota", "01305_replica_create_drop_zookeeper", "01307_multiple_leaders_zookeeper", "01318_long_unsuccessful_mutation_zookeeper", diff --git a/utils/check-mysql-binlog/main.cpp b/utils/check-mysql-binlog/main.cpp index ccdc4cd168c..04dfb56ff08 100644 --- a/utils/check-mysql-binlog/main.cpp +++ b/utils/check-mysql-binlog/main.cpp @@ -69,21 +69,27 @@ static DB::MySQLReplication::BinlogEventPtr parseSingleEventBody( case DB::MySQLReplication::WRITE_ROWS_EVENT_V1: case DB::MySQLReplication::WRITE_ROWS_EVENT_V2: { - event = std::make_shared(last_table_map_event, std::move(header)); + DB::MySQLReplication::RowsEventHeader rows_header(header.type); + rows_header.parse(*event_payload); + event = std::make_shared(last_table_map_event, std::move(header), rows_header); event->parseEvent(*event_payload); break; } case DB::MySQLReplication::DELETE_ROWS_EVENT_V1: case DB::MySQLReplication::DELETE_ROWS_EVENT_V2: { - event = std::make_shared(last_table_map_event, std::move(header)); + DB::MySQLReplication::RowsEventHeader rows_header(header.type); + rows_header.parse(*event_payload); + event = 
std::make_shared(last_table_map_event, std::move(header), rows_header); event->parseEvent(*event_payload); break; } case DB::MySQLReplication::UPDATE_ROWS_EVENT_V1: case DB::MySQLReplication::UPDATE_ROWS_EVENT_V2: { - event = std::make_shared(last_table_map_event, std::move(header)); + DB::MySQLReplication::RowsEventHeader rows_header(header.type); + rows_header.parse(*event_payload); + event = std::make_shared(last_table_map_event, std::move(header), rows_header); event->parseEvent(*event_payload); break; } diff --git a/utils/check-style/check-style-all b/utils/check-style/check-style-all new file mode 100755 index 00000000000..c34224e5469 --- /dev/null +++ b/utils/check-style/check-style-all @@ -0,0 +1,8 @@ +#!/usr/bin/env bash + +dir=$(dirname $0) +$dir/check-style -n +$dir/check-typos +$dir/check-whitespaces -n +$dir/check-duplicate-includes.sh +$dir/shellcheck-run.sh diff --git a/utils/list-versions/version_date.tsv b/utils/list-versions/version_date.tsv index c4b27f3199d..8d05f5fff46 100644 --- a/utils/list-versions/version_date.tsv +++ b/utils/list-versions/version_date.tsv @@ -1,3 +1,4 @@ +v21.2.2.8-stable 2021-02-07 v21.1.3.32-stable 2021-02-03 v21.1.2.15-stable 2021-01-18 v20.12.5.18-stable 2021-02-03 diff --git a/website/benchmark/hardware/results/yandex_cloud_broadwell_8_vcpu.json b/website/benchmark/hardware/results/yandex_cloud_broadwell_8_vcpu.json new file mode 100644 index 00000000000..1217adbbff5 --- /dev/null +++ b/website/benchmark/hardware/results/yandex_cloud_broadwell_8_vcpu.json @@ -0,0 +1,55 @@ +[ + { + "system": "Yandex Cloud 8vCPU", + "system_full": "Yandex Cloud Broadwell, 8 vCPU (4 threads), 64 GB RAM, 500 GB SSD", + "cpu_vendor": "Intel", + "time": "2021-02-05 00:00:00", + "kind": "cloud", + "result": + [ + [0.004, 0.003, 0.003], + [0.047, 0.030, 0.021], + [0.129, 0.066, 0.067], + [0.873, 0.098, 0.095], + [0.869, 0.247, 0.257], + [1.429, 0.818, 0.768], + [0.055, 0.042, 0.043], + [0.034, 0.025, 0.024], + [1.372, 1.003, 1.051], + [1.605, 1.281, 1.209], + [0.942, 0.503, 0.483], + [0.980, 0.537, 0.558], + [2.076, 1.664, 1.635], + [3.136, 2.235, 2.171], + [2.351, 1.973, 1.974], + [2.369, 2.170, 2.133], + [6.281, 5.576, 5.498], + [3.739, 3.481, 3.354], + [10.947, 10.225, 10.271], + [0.875, 0.111, 0.108], + [10.832, 1.844, 1.877], + [12.344, 2.330, 2.227], + [22.999, 5.000, 4.903], + [20.086, 2.390, 2.278], + [3.036, 0.722, 0.673], + [1.420, 0.602, 0.578], + [3.040, 0.728, 0.714], + [10.842, 1.874, 1.783], + [9.207, 2.809, 2.705], + [2.751, 2.703, 2.714], + [2.810, 1.675, 1.568], + [6.507, 2.449, 2.505], + [15.968, 15.014, 15.318], + [13.479, 7.951, 7.702], + [13.227, 7.791, 7.699], + [2.811, 2.723, 2.549], + [0.358, 0.249, 0.273], + [0.157, 0.099, 0.101], + [0.189, 0.088, 0.080], + [0.758, 0.544, 0.525], + [0.115, 0.033, 0.027], + [0.063, 0.048, 0.023], + [0.014, 0.011, 0.008] + ] + } +] diff --git a/website/benchmark/hardware/results/yandex_cloud_broadwell_8_vcpu_s3.json b/website/benchmark/hardware/results/yandex_cloud_broadwell_8_vcpu_s3.json new file mode 100644 index 00000000000..ace2442c86e --- /dev/null +++ b/website/benchmark/hardware/results/yandex_cloud_broadwell_8_vcpu_s3.json @@ -0,0 +1,55 @@ +[ + { + "system": "Yandex Cloud 8vCPU Object Storage", + "system_full": "Yandex Cloud Broadwell, 8 vCPU (4 threads), 64 GB RAM, Object Storage", + "cpu_vendor": "Intel", + "time": "2021-02-05 00:00:00", + "kind": "cloud", + "result": + [ + [0.007, 0.003, 0.003], + [0.214, 0.111, 0.096], + [1.239, 1.359, 0.718], + [3.056, 3.366, 1.869], + [1.946, 1.552, 2.450], + 
[4.804, 2.307, 2.398], + [0.198, 0.108, 0.114], + [0.141, 0.104, 0.100], + [2.755, 2.749, 3.608], + [3.140, 3.905, 3.830], + [2.353, 4.996, 1.637], + [3.796, 1.536, 1.724], + [3.565, 3.016, 3.381], + [4.962, 4.263, 4.352], + [4.210, 3.974, 4.318], + [3.884, 3.434, 3.124], + [10.451, 9.147, 7.526], + [6.288, 5.882, 7.714], + [15.239, 33.243, 17.968], + [1.645, 1.870, 3.230], + [10.980, 8.984, 7.589], + [14.345, 11.503, 12.449], + [17.687, 17.764, 18.984], + [76.606, 65.179, 94.215], + [5.833, 3.347, 3.127], + [3.815, 2.574, 2.402], + [4.916, 6.169, 5.731], + [7.961, 9.930, 8.555], + [5.995, 7.382, 6.054], + [3.113, 4.176, 3.172], + [5.077, 5.221, 5.709], + [8.990, 9.598, 6.272], + [17.832, 17.668, 17.276], + [11.846, 14.692, 13.225], + [12.544, 12.502, 12.725], + [3.604, 4.811, 3.267], + [0.738, 0.751, 0.862], + [0.718, 0.611, 0.561], + [2.125, 0.688, 0.522], + [1.469, 1.546, 1.373], + [1.382, 1.069, 0.976], + [1.353, 1.212, 1.119], + [0.045, 0.031, 0.041] + ] + } +] diff --git a/website/templates/index/community.html b/website/templates/index/community.html index e65f9ff0f86..20b09e7318b 100644 --- a/website/templates/index/community.html +++ b/website/templates/index/community.html @@ -66,7 +66,7 @@