diff --git a/CHANGELOG.md b/CHANGELOG.md
index cc1ec835a7b..2eaecaa4c9b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,142 @@
+## ClickHouse release 21.5, 2021-05-20
+
+#### Backward Incompatible Change
+
+* Change comparison of integers and floating point numbers when the integer is not exactly representable in the floating point data type. In the new version the comparison returns false, as a rounding error occurs. Example: `9223372036854775808.0 != 9223372036854775808`, because the number `9223372036854775808` is not exactly representable as a floating point number (`9223372036854775808.0` is rounded to `9223372036854776000.0`). In previous versions the comparison returned true, because when the floating point number `9223372036854776000.0` is converted back to UInt64, it yields `9223372036854775808`. For reference, the Python programming language also treats these numbers as equal. But this behaviour was dependent on the CPU model (different results on AMD64 and AArch64 for some out-of-range numbers), so we made the comparison more precise: int and float numbers are treated as equal only if the int is exactly representable in the floating point type. A SQL sketch of the new semantics is given after the New Feature section below. [#22595](https://github.com/ClickHouse/ClickHouse/pull/22595) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Remove support for `argMin` and `argMax` with a single `Tuple` argument. The code was not memory-safe, and the feature was added by mistake and is confusing for people. These functions can be reintroduced under different names later. This fixes [#22384](https://github.com/ClickHouse/ClickHouse/issues/22384) and reverts [#17359](https://github.com/ClickHouse/ClickHouse/issues/17359). [#23393](https://github.com/ClickHouse/ClickHouse/pull/23393) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+
+#### New Feature
+
+* Added functions `dictGetChildren(dictionary, key)` and `dictGetDescendants(dictionary, key, level)`. Function `dictGetChildren` returns all children as an array of indexes; it is an inverse transformation for `dictGetHierarchy`. Function `dictGetDescendants` returns all descendants as if `dictGetChildren` were applied `level` times recursively; a zero `level` value is equivalent to infinity. Improved performance of the `dictGetHierarchy` and `dictIsIn` functions. Closes [#14656](https://github.com/ClickHouse/ClickHouse/issues/14656). [#22096](https://github.com/ClickHouse/ClickHouse/pull/22096) ([Maksim Kita](https://github.com/kitaisreal)).
+* Added function `dictGetOrNull`. It works like `dictGet`, but returns `Null` if the key was not found in the dictionary. Closes [#22375](https://github.com/ClickHouse/ClickHouse/issues/22375). [#22413](https://github.com/ClickHouse/ClickHouse/pull/22413) ([Maksim Kita](https://github.com/kitaisreal)).
+* Added a table function `s3Cluster`, which allows processing files from `s3` in parallel on every node of a specified cluster. [#22012](https://github.com/ClickHouse/ClickHouse/pull/22012) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Added support for replicas and shards in the MySQL/PostgreSQL table engine / table function. You can write `SELECT * FROM mysql('host{1,2}-{1|2}', ...)`. Closes [#20969](https://github.com/ClickHouse/ClickHouse/issues/20969). [#22217](https://github.com/ClickHouse/ClickHouse/pull/22217) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Added `ALTER TABLE ... FETCH PART ...` query. It's similar to `FETCH PARTITION`, but fetches only one part (see the sketch after this section). [#22706](https://github.com/ClickHouse/ClickHouse/pull/22706) ([turbo jason](https://github.com/songenjie)).
+* Added a setting `max_distributed_depth` that limits the depth of recursive queries to `Distributed` tables. Closes [#20229](https://github.com/ClickHouse/ClickHouse/issues/20229). [#21942](https://github.com/ClickHouse/ClickHouse/pull/21942) ([flynn](https://github.com/ucasFL)).
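The following SQL sketch illustrates two of the items above: the stricter integer/float comparison from the Backward Incompatible Change section, and the new `FETCH PART` query. The table name, part name, and ZooKeeper path are hypothetical placeholders, not values taken from this release.

```sql
-- Stricter comparison: 21.5 returns 0 because the integer literal is not
-- exactly representable as a floating point number (see the entry above);
-- previous versions returned 1 after converting the float back to UInt64.
SELECT 9223372036854775808 = 9223372036854775808.0;

-- FETCH PART sketch: assumes a ReplicatedMergeTree table `t` with a part
-- named 'all_0_0_0' available at the given ZooKeeper replica path.
ALTER TABLE t FETCH PART 'all_0_0_0' FROM '/clickhouse/tables/01/t';
```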
+
+#### Performance Improvement
+
+* Improved performance of `intDiv` by dynamic dispatch for AVX2. This closes [#22314](https://github.com/ClickHouse/ClickHouse/issues/22314). [#23000](https://github.com/ClickHouse/ClickHouse/pull/23000) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Improved performance of reading from the `ArrowStream` input format for sources other than a local file (e.g. URL). [#22673](https://github.com/ClickHouse/ClickHouse/pull/22673) ([nvartolomei](https://github.com/nvartolomei)).
+* Disabled compression by default when interacting with localhost (with clickhouse-client or server-to-server with distributed queries) via the native protocol. It may improve performance of some import/export operations. This closes [#22234](https://github.com/ClickHouse/ClickHouse/issues/22234). [#22237](https://github.com/ClickHouse/ClickHouse/pull/22237) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Exclude values that do not belong to the shard from the right part of the IN section for distributed queries (under `optimize_skip_unused_shards_rewrite_in`, enabled by default, since it still requires `optimize_skip_unused_shards`). [#21511](https://github.com/ClickHouse/ClickHouse/pull/21511) ([Azat Khuzhin](https://github.com/azat)).
+* Improved performance of reading a subset of columns with a File-like table engine and a column-oriented format like Parquet, Arrow or ORC. This closes [#20129](https://github.com/ClickHouse/ClickHouse/issues/20129). [#21302](https://github.com/ClickHouse/ClickHouse/pull/21302) ([keenwolf](https://github.com/keen-wolf)).
+* Allow to move more conditions to `PREWHERE` as it was before version 21.1 (adjustment of internal heuristics). An insufficient number of moved conditions could lead to worse performance. [#23397](https://github.com/ClickHouse/ClickHouse/pull/23397) ([Anton Popov](https://github.com/CurtizJ)).
+* Improved performance of ODBC connections and fixed all the outstanding issues from the backlog, using the `nanodbc` library instead of `Poco::ODBC`. Closes [#9678](https://github.com/ClickHouse/ClickHouse/issues/9678). Added support for DateTime64 and Decimal* for the ODBC table engine. Closes [#21961](https://github.com/ClickHouse/ClickHouse/issues/21961). Fixed an issue with Cyrillic text being truncated. Closes [#16246](https://github.com/ClickHouse/ClickHouse/issues/16246). Added connection pools for the ODBC bridge. [#21972](https://github.com/ClickHouse/ClickHouse/pull/21972) ([Kseniia Sumarokova](https://github.com/kssenii)).
+
+#### Improvement
+
+* Increased `max_uri_size` (the maximum size of a URL in the HTTP interface) to 1 MiB by default. This closes [#21197](https://github.com/ClickHouse/ClickHouse/issues/21197). [#22997](https://github.com/ClickHouse/ClickHouse/pull/22997) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Set `background_fetches_pool_size` to `8`, which is better for production usage with frequent small insertions or a slow ZooKeeper cluster. [#22945](https://github.com/ClickHouse/ClickHouse/pull/22945) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Added `initial_array_size` and `max_array_size` options for `FlatDictionary`. [#22521](https://github.com/ClickHouse/ClickHouse/pull/22521) ([Maksim Kita](https://github.com/kitaisreal)).
+* Added a new setting `non_replicated_deduplication_window` for deduplication of inserts into non-replicated MergeTree tables. [#22514](https://github.com/ClickHouse/ClickHouse/pull/22514) ([alesapin](https://github.com/alesapin)).
+* Update paths to the `CatBoost` model configs in config reloading. [#22434](https://github.com/ClickHouse/ClickHouse/pull/22434) ([Kruglov Pavel](https://github.com/Avogar)).
+* Added `Decimal256` type support in dictionaries. `Decimal256` is an experimental feature. Closes [#20979](https://github.com/ClickHouse/ClickHouse/issues/20979). [#22960](https://github.com/ClickHouse/ClickHouse/pull/22960) ([Maksim Kita](https://github.com/kitaisreal)).
+* Enabled `async_socket_for_remote` by default (using fewer OS threads for distributed queries). [#23683](https://github.com/ClickHouse/ClickHouse/pull/23683) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fixed `quantile(s)TDigest`. Added special handling of singleton centroids according to tdunning/t-digest 3.2+. Also fixed a bug with over-compression of centroids in the implementation of an earlier version of the algorithm. [#23314](https://github.com/ClickHouse/ClickHouse/pull/23314) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Made the function name `unhex` case insensitive for compatibility with MySQL. [#23229](https://github.com/ClickHouse/ClickHouse/pull/23229) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Implement functions `arrayHasAny`, `arrayHasAll`, `has`, `indexOf`, `countEqual` for the generic case when the types of array elements are different. In previous versions the functions `arrayHasAny` and `arrayHasAll` returned false, and `has`, `indexOf`, `countEqual` threw an exception. Also added support for `Decimal` and big integer types in `has` and similar functions. This closes [#20272](https://github.com/ClickHouse/ClickHouse/issues/20272). [#23044](https://github.com/ClickHouse/ClickHouse/pull/23044) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Raised the threshold on the maximum number of matches in the result of the function `extractAllGroupsHorizontal`. [#23036](https://github.com/ClickHouse/ClickHouse/pull/23036) ([Vasily Nemkov](https://github.com/Enmk)).
+* Do not perform `optimize_skip_unused_shards` for a cluster with one node. [#22999](https://github.com/ClickHouse/ClickHouse/pull/22999) ([Azat Khuzhin](https://github.com/azat)).
+* Added the ability to run clickhouse-keeper (an experimental drop-in replacement for ZooKeeper) with SSL. The config setting `keeper_server.tcp_port_secure` can be used for secure interaction between client and keeper-server. `keeper_server.raft_configuration.secure` can be used to enable internal secure communication between nodes. [#22992](https://github.com/ClickHouse/ClickHouse/pull/22992) ([alesapin](https://github.com/alesapin)).
+* Added the ability to flush the buffer only in background for `Buffer` tables. [#22986](https://github.com/ClickHouse/ClickHouse/pull/22986) ([Azat Khuzhin](https://github.com/azat)).
+* When selecting from a MergeTree table with NULL in the WHERE condition, in rare cases an exception was thrown. This closes [#20019](https://github.com/ClickHouse/ClickHouse/issues/20019). [#22978](https://github.com/ClickHouse/ClickHouse/pull/22978) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fix error handling in the Poco HTTP Client for AWS. [#22973](https://github.com/ClickHouse/ClickHouse/pull/22973) ([kreuzerkrieg](https://github.com/kreuzerkrieg)).
+* Respect `max_part_removal_threads` for `ReplicatedMergeTree`. [#22971](https://github.com/ClickHouse/ClickHouse/pull/22971) ([Azat Khuzhin](https://github.com/azat)).
+* Fixed an obscure corner case of MergeTree settings `inactive_parts_to_throw_insert = 0` with `inactive_parts_to_delay_insert > 0`. [#22947](https://github.com/ClickHouse/ClickHouse/pull/22947) ([Azat Khuzhin](https://github.com/azat)).
+* `dateDiff` now works with `DateTime64` arguments (even for values outside of the `DateTime` range). [#22931](https://github.com/ClickHouse/ClickHouse/pull/22931) ([Vasily Nemkov](https://github.com/Enmk)).
+* MaterializeMySQL (experimental feature): added the ability to replicate MySQL databases containing views without failing. This is accomplished by ignoring the views. [#22760](https://github.com/ClickHouse/ClickHouse/pull/22760) ([Christian](https://github.com/cfroystad)).
+* Allow RBAC row policies via the PostgreSQL protocol. Closes [#22658](https://github.com/ClickHouse/ClickHouse/issues/22658). The PostgreSQL protocol is enabled in the configuration by default. [#22755](https://github.com/ClickHouse/ClickHouse/pull/22755) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Added a metric to track how much time is spent waiting for the Buffer layer lock. [#22725](https://github.com/ClickHouse/ClickHouse/pull/22725) ([Azat Khuzhin](https://github.com/azat)).
+* Allow using a CTE in a VIEW definition. This closes [#22491](https://github.com/ClickHouse/ClickHouse/issues/22491). [#22657](https://github.com/ClickHouse/ClickHouse/pull/22657) ([Amos Bird](https://github.com/amosbird)).
+* Clear the rest of the screen and show the cursor in `clickhouse-client` if a previous program has left garbage in the terminal. This closes [#16518](https://github.com/ClickHouse/ClickHouse/issues/16518). [#22634](https://github.com/ClickHouse/ClickHouse/pull/22634) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Made the `round` function behave consistently on non-x86_64 platforms. Rounding half to nearest even (banker's rounding) is used. [#22582](https://github.com/ClickHouse/ClickHouse/pull/22582) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Correctly check the structure of blocks of data that are sent by Distributed tables. [#22325](https://github.com/ClickHouse/ClickHouse/pull/22325) ([Azat Khuzhin](https://github.com/azat)).
+* Allow publishing Kafka errors to a virtual column of the Kafka engine, controlled by the `kafka_handle_error_mode` setting. [#21850](https://github.com/ClickHouse/ClickHouse/pull/21850) ([fastio](https://github.com/fastio)).
+* Added aliases `simpleJSONExtract/simpleJSONHas` for `visitParam/visitParamExtract{UInt, Int, Bool, Float, Raw, String}` (see the sketch at the end of this section). Fixes [#21383](https://github.com/ClickHouse/ClickHouse/issues/21383). [#21519](https://github.com/ClickHouse/ClickHouse/pull/21519) ([fastio](https://github.com/fastio)).
+* Added `clickhouse-library-bridge` for the library dictionary source. Closes [#9502](https://github.com/ClickHouse/ClickHouse/issues/9502). [#21509](https://github.com/ClickHouse/ClickHouse/pull/21509) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Forbid dropping a column if it is referenced by a materialized view. Closes [#21164](https://github.com/ClickHouse/ClickHouse/issues/21164). [#21303](https://github.com/ClickHouse/ClickHouse/pull/21303) ([flynn](https://github.com/ucasFL)).
+* Support dynamic interserver credentials (rotating credentials without downtime). [#14113](https://github.com/ClickHouse/ClickHouse/pull/14113) ([johnskopis](https://github.com/johnskopis)).
+* Added support for Kafka storage with `Arrow` and `ArrowStream` format messages. [#23415](https://github.com/ClickHouse/ClickHouse/pull/23415) ([Chao Ma](https://github.com/godliness)).
+* Fixed a missing semicolon in an exception message. The user may find this exception message unpleasant to read. [#23208](https://github.com/ClickHouse/ClickHouse/pull/23208) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fixed missing whitespace in some exception messages about the `LowCardinality` type. [#23207](https://github.com/ClickHouse/ClickHouse/pull/23207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Some values were formatted with center alignment in table cells in the `Markdown` format. Not anymore. [#23096](https://github.com/ClickHouse/ClickHouse/pull/23096) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Removed non-essential details from suggestions in clickhouse-client. This closes [#22158](https://github.com/ClickHouse/ClickHouse/issues/22158). [#23040](https://github.com/ClickHouse/ClickHouse/pull/23040) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Correct calculation of the `bytes_allocated` field in system.dictionaries for sparse_hashed dictionaries. [#22867](https://github.com/ClickHouse/ClickHouse/pull/22867) ([Azat Khuzhin](https://github.com/azat)).
+* Fixed approximate total rows accounting for reverse reading from MergeTree. [#22726](https://github.com/ClickHouse/ClickHouse/pull/22726) ([Azat Khuzhin](https://github.com/azat)).
+* Fixed the case when it was possible to configure a dictionary with a ClickHouse source that pointed to itself, which led to an infinite loop. Closes [#14314](https://github.com/ClickHouse/ClickHouse/issues/14314). [#22479](https://github.com/ClickHouse/ClickHouse/pull/22479) ([Maksim Kita](https://github.com/kitaisreal)).
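A minimal sketch of the `simpleJSON*` aliases mentioned in the Improvement list above; the JSON literal is an arbitrary example, not taken from the release notes.

```sql
-- The new MySQL-style aliases are equivalent to visitParamHas and
-- visitParamExtractString; this query should return 1 and 'hello'.
SELECT simpleJSONHas('{"a":"hello"}', 'a') AS has_a,
       simpleJSONExtractString('{"a":"hello"}', 'a') AS a;
```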
+
+#### Bug Fix
+
+* Multiple fixes for hedged requests. Fixed an error `Can't initialize pipeline with empty pipe` for queries with `GLOBAL IN/JOIN` when the setting `use_hedged_requests` is enabled. Fixes [#23431](https://github.com/ClickHouse/ClickHouse/issues/23431). [#23805](https://github.com/ClickHouse/ClickHouse/pull/23805) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). Fixed a race condition in hedged connections which leads to a crash. This fixes [#22161](https://github.com/ClickHouse/ClickHouse/issues/22161). [#22443](https://github.com/ClickHouse/ClickHouse/pull/22443) ([Kruglov Pavel](https://github.com/Avogar)). Fixed a possible crash in case an `unknown packet` was received from a remote query (with `async_socket_for_remote` enabled). Fixes [#21167](https://github.com/ClickHouse/ClickHouse/issues/21167). [#23309](https://github.com/ClickHouse/ClickHouse/pull/23309) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fixed the behavior when disabling the `input_format_with_names_use_header` setting discards all the input with CSVWithNames format (see the sketch at the end of this section). This fixes [#22406](https://github.com/ClickHouse/ClickHouse/issues/22406). [#23202](https://github.com/ClickHouse/ClickHouse/pull/23202) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Fixed a remote JDBC bridge timeout connection issue. Closes [#9609](https://github.com/ClickHouse/ClickHouse/issues/9609). [#23771](https://github.com/ClickHouse/ClickHouse/pull/23771) ([Maksim Kita](https://github.com/kitaisreal), [alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fixed the logic of the initial load of `complex_key_hashed` if `update_field` is specified. Closes [#23800](https://github.com/ClickHouse/ClickHouse/issues/23800). [#23824](https://github.com/ClickHouse/ClickHouse/pull/23824) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fixed a crash when `PREWHERE` and a row policy filter are both in effect with an empty result. [#23763](https://github.com/ClickHouse/ClickHouse/pull/23763) ([Amos Bird](https://github.com/amosbird)).
+* Avoid a possible "Cannot schedule a task" error (in case some exception has occurred) on INSERT into Distributed. [#23744](https://github.com/ClickHouse/ClickHouse/pull/23744) ([Azat Khuzhin](https://github.com/azat)).
+* Added an exception in case of completely identical values in both samples in the aggregate function `mannWhitneyUTest`. This fixes [#23646](https://github.com/ClickHouse/ClickHouse/issues/23646). [#23654](https://github.com/ClickHouse/ClickHouse/pull/23654) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Fixed a server fault when inserting data through HTTP caused an exception. This fixes [#23512](https://github.com/ClickHouse/ClickHouse/issues/23512). [#23643](https://github.com/ClickHouse/ClickHouse/pull/23643) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Fixed misinterpretation of some `LIKE` expressions with escape sequences. [#23610](https://github.com/ClickHouse/ClickHouse/pull/23610) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fixed the restart / stop command hanging. Closes [#20214](https://github.com/ClickHouse/ClickHouse/issues/20214). [#23552](https://github.com/ClickHouse/ClickHouse/pull/23552) ([filimonov](https://github.com/filimonov)).
+* Fixed the `COLUMNS` matcher in case of multiple JOINs in a select query. Closes [#22736](https://github.com/ClickHouse/ClickHouse/issues/22736). [#23501](https://github.com/ClickHouse/ClickHouse/pull/23501) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fixed a crash when modifying a column's default value when the column itself is used as a `ReplacingMergeTree` parameter. [#23483](https://github.com/ClickHouse/ClickHouse/pull/23483) ([hexiaoting](https://github.com/hexiaoting)).
+* Fixed corner cases in vertical merges with `ReplacingMergeTree`. In rare cases they could lead to failures of merges with exceptions like `Incomplete granules are not allowed while blocks are granules size`. [#23459](https://github.com/ClickHouse/ClickHouse/pull/23459) ([Anton Popov](https://github.com/CurtizJ)).
+* Fixed a bug that did not allow casting an empty array literal to an array with dimensions greater than 1, e.g. `CAST([] AS Array(Array(String)))` (see the sketch at the end of this section). Closes [#14476](https://github.com/ClickHouse/ClickHouse/issues/14476). [#23456](https://github.com/ClickHouse/ClickHouse/pull/23456) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fixed a bug when the `deltaSum` aggregate function produced an incorrect result after resetting the counter. [#23437](https://github.com/ClickHouse/ClickHouse/pull/23437) ([Russ Frank](https://github.com/rf)).
+* Fixed a `Cannot unlink file` error on unsuccessful creation of a ReplicatedMergeTree table with a multidisk configuration. This closes [#21755](https://github.com/ClickHouse/ClickHouse/issues/21755). [#23433](https://github.com/ClickHouse/ClickHouse/pull/23433) ([tavplubix](https://github.com/tavplubix)).
+* Fixed incompatible constant expression generation during partition pruning based on virtual columns. This fixes https://github.com/ClickHouse/ClickHouse/pull/21401#discussion_r611888913. [#23366](https://github.com/ClickHouse/ClickHouse/pull/23366) ([Amos Bird](https://github.com/amosbird)).
+* Fixed a crash when the setting `join_algorithm` is set to `auto` and a Join is performed with a Dictionary. Closes [#23002](https://github.com/ClickHouse/ClickHouse/issues/23002). [#23312](https://github.com/ClickHouse/ClickHouse/pull/23312) ([Vladimir](https://github.com/vdimir)).
+* Don't relax NOT conditions during partition pruning. This fixes [#23305](https://github.com/ClickHouse/ClickHouse/issues/23305) and [#21539](https://github.com/ClickHouse/ClickHouse/issues/21539). [#23310](https://github.com/ClickHouse/ClickHouse/pull/23310) ([Amos Bird](https://github.com/amosbird)).
+* Fixed a very rare race condition in background cleanup of old blocks. It might cause a block not to be deduplicated if it's too close to the end of the deduplication window. [#23301](https://github.com/ClickHouse/ClickHouse/pull/23301) ([tavplubix](https://github.com/tavplubix)).
+* Fixed a very rare (distributed) race condition between creation and removal of ReplicatedMergeTree tables. It might cause exceptions like `node doesn't exist` on an attempt to create a replicated table. Fixes [#21419](https://github.com/ClickHouse/ClickHouse/issues/21419). [#23294](https://github.com/ClickHouse/ClickHouse/pull/23294) ([tavplubix](https://github.com/tavplubix)).
+* Fixed creation of a simple-key dictionary from DDL if the primary key is not the first attribute. Fixes [#23236](https://github.com/ClickHouse/ClickHouse/issues/23236). [#23262](https://github.com/ClickHouse/ClickHouse/pull/23262) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fixed reading from ODBC when there are many long column names in a table. Closes [#8853](https://github.com/ClickHouse/ClickHouse/issues/8853). [#23215](https://github.com/ClickHouse/ClickHouse/pull/23215) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* MaterializeMySQL (experimental feature): fixed a `Not found column` error when selecting from `MaterializeMySQL` with a condition on a key column. Fixes [#22432](https://github.com/ClickHouse/ClickHouse/issues/22432). [#23200](https://github.com/ClickHouse/ClickHouse/pull/23200) ([tavplubix](https://github.com/tavplubix)).
+* Correct alias handling if a subquery was optimized to a constant. Fixes [#22924](https://github.com/ClickHouse/ClickHouse/issues/22924). Fixes [#10401](https://github.com/ClickHouse/ClickHouse/issues/10401). [#23191](https://github.com/ClickHouse/ClickHouse/pull/23191) ([Maksim Kita](https://github.com/kitaisreal)).
+* The server might fail to start if the `data_type_default_nullable` setting is enabled in the default profile; it's fixed. Fixes [#22573](https://github.com/ClickHouse/ClickHouse/issues/22573). [#23185](https://github.com/ClickHouse/ClickHouse/pull/23185) ([tavplubix](https://github.com/tavplubix)).
+* Fixed a crash on shutdown which happened because of wrong accounting of current connections. [#23154](https://github.com/ClickHouse/ClickHouse/pull/23154) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fixed a `Table .inner_id... doesn't exist` error when selecting from a Materialized View after detaching it from an Atomic database and attaching it back. [#23047](https://github.com/ClickHouse/ClickHouse/pull/23047) ([tavplubix](https://github.com/tavplubix)).
+* Fixed the error `Cannot find column in ActionsDAG result`, which may happen if a subquery uses `untuple`. Fixes [#22290](https://github.com/ClickHouse/ClickHouse/issues/22290). [#22991](https://github.com/ClickHouse/ClickHouse/pull/22991) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fixed usage of constant columns of type `Map` with nullable values. [#22939](https://github.com/ClickHouse/ClickHouse/pull/22939) ([Anton Popov](https://github.com/CurtizJ)).
+* Fixed `formatDateTime()` on `DateTime64` and the `%C` format specifier; fixed `toDateTime64()` for large values and non-zero scale. [#22937](https://github.com/ClickHouse/ClickHouse/pull/22937) ([Vasily Nemkov](https://github.com/Enmk)).
+* Fixed a crash when using `mannWhitneyUTest` and `rankCorr` with window functions. This fixes [#22728](https://github.com/ClickHouse/ClickHouse/issues/22728). [#22876](https://github.com/ClickHouse/ClickHouse/pull/22876) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* LIVE VIEW (experimental feature): fixed possible hanging in concurrent DROP/CREATE of TEMPORARY LIVE VIEW in `TemporaryLiveViewCleaner`, [see](https://gist.github.com/vzakaznikov/0c03195960fc86b56bfe2bc73a90019e). [#22858](https://github.com/ClickHouse/ClickHouse/pull/22858) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fixed pushdown of `HAVING` in the case when the filter column is used in aggregation. [#22763](https://github.com/ClickHouse/ClickHouse/pull/22763) ([Anton Popov](https://github.com/CurtizJ)).
+* Fixed possible hangs in ZooKeeper requests in case of an OOM exception. Fixes [#22438](https://github.com/ClickHouse/ClickHouse/issues/22438). [#22684](https://github.com/ClickHouse/ClickHouse/pull/22684) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fixed waiting for mutations on several replicas for ReplicatedMergeTree table engines. Previously, a mutation/alter query could finish before the mutation was actually executed on other replicas. [#22669](https://github.com/ClickHouse/ClickHouse/pull/22669) ([alesapin](https://github.com/alesapin)).
+* Fixed an exception for Log with nested types without columns in the SELECT clause. [#22654](https://github.com/ClickHouse/ClickHouse/pull/22654) ([Azat Khuzhin](https://github.com/azat)).
+* Fixed an unlimited wait for auxiliary AWS requests. [#22594](https://github.com/ClickHouse/ClickHouse/pull/22594) ([Vladimir Chebotarev](https://github.com/excitoon)).
+* Fixed a crash when the client closes the connection very early [#22579](https://github.com/ClickHouse/ClickHouse/issues/22579). [#22591](https://github.com/ClickHouse/ClickHouse/pull/22591) ([nvartolomei](https://github.com/nvartolomei)).
+* `Map` data type (experimental feature): fixed incorrect formatting of the function `map` in distributed queries. [#22588](https://github.com/ClickHouse/ClickHouse/pull/22588) ([foolchi](https://github.com/foolchi)).
+* Fixed deserialization of an empty string without a newline at the end of TSV format. This closes [#20244](https://github.com/ClickHouse/ClickHouse/issues/20244). Possible workaround without a version update: set `input_format_null_as_default` to zero. It was zero in old versions. [#22527](https://github.com/ClickHouse/ClickHouse/pull/22527) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Fixed a wrong cast of a column of `LowCardinality` type in the Merge Join algorithm. Closes [#22386](https://github.com/ClickHouse/ClickHouse/issues/22386), closes [#22388](https://github.com/ClickHouse/ClickHouse/issues/22388). [#22510](https://github.com/ClickHouse/ClickHouse/pull/22510) ([Vladimir](https://github.com/vdimir)).
+* A buffer overflow (on read) was possible in the `tokenbf_v1` full text index. The excessive bytes are not used, but the read operation may lead to a crash in rare cases. This closes [#19233](https://github.com/ClickHouse/ClickHouse/issues/19233). [#22421](https://github.com/ClickHouse/ClickHouse/pull/22421) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Do not limit HTTP chunk size. Fixes [#21907](https://github.com/ClickHouse/ClickHouse/issues/21907). [#22322](https://github.com/ClickHouse/ClickHouse/pull/22322) ([Ivan](https://github.com/abyss7)).
+* Fixed a bug which leads to underaggregation of data in case of enabled `optimize_aggregation_in_order` and many parts in a table. Slightly improved performance of aggregation with enabled `optimize_aggregation_in_order`. [#21889](https://github.com/ClickHouse/ClickHouse/pull/21889) ([Anton Popov](https://github.com/CurtizJ)).
+* Check if the table function `view` is used as a column. This complements #20350. [#21465](https://github.com/ClickHouse/ClickHouse/pull/21465) ([Amos Bird](https://github.com/amosbird)).
+* Fixed an "unknown column" error for tables with `Merge` engine in queries with `JOIN` and aggregation. Closes [#18368](https://github.com/ClickHouse/ClickHouse/issues/18368), closes [#22226](https://github.com/ClickHouse/ClickHouse/issues/22226). [#21370](https://github.com/ClickHouse/ClickHouse/pull/21370) ([Vladimir](https://github.com/vdimir)).
+* Fixed name clashes in pushdown optimization. They caused incorrect `WHERE` filtration after FULL JOIN. Closes [#20497](https://github.com/ClickHouse/ClickHouse/issues/20497). [#20622](https://github.com/ClickHouse/ClickHouse/pull/20622) ([Vladimir](https://github.com/vdimir)).
+* Fixed a very rare bug when a quorum insert with `quorum_parallel=1` is not really "quorum" because of deduplication. [#18215](https://github.com/ClickHouse/ClickHouse/pull/18215) ([filimonov](https://github.com/filimonov) - reported, [alesapin](https://github.com/alesapin) - fixed).
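Two of the fixes above are easy to illustrate. A minimal sketch, assuming a pre-existing table `t` whose columns match the CSV header below; the table and data are placeholders, not taken from the release notes.

```sql
-- CSVWithNames fix: with the setting disabled, the header line is skipped
-- instead of the whole input being discarded.
SET input_format_with_names_use_header = 0;
INSERT INTO t FORMAT CSVWithNames
"id","name"
1,"a"

-- Empty-array-literal fix: casting [] to a multidimensional array now works.
SELECT CAST([] AS Array(Array(String)));
```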
+
+#### Build/Testing/Packaging Improvement
+
+* Run stateless tests in parallel in CI. [#22300](https://github.com/ClickHouse/ClickHouse/pull/22300) ([alesapin](https://github.com/alesapin)).
+* Simplify Debian packages. This fixes [#21698](https://github.com/ClickHouse/ClickHouse/issues/21698). [#22976](https://github.com/ClickHouse/ClickHouse/pull/22976) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Added support for the ClickHouse build on Apple M1. [#21639](https://github.com/ClickHouse/ClickHouse/pull/21639) ([changvvb](https://github.com/changvvb)).
+* Fixed the ClickHouse Keeper build for macOS. [#22860](https://github.com/ClickHouse/ClickHouse/pull/22860) ([alesapin](https://github.com/alesapin)).
+* Fixed some tests on the AArch64 platform. [#22596](https://github.com/ClickHouse/ClickHouse/pull/22596) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Added function alignment for possibly better performance. [#21431](https://github.com/ClickHouse/ClickHouse/pull/21431) ([Danila Kutenin](https://github.com/danlark1)).
+* Adjusted some tests to output identical results on amd64 and aarch64 (qemu). The result depended on implementation-specific CPU behaviour. [#22590](https://github.com/ClickHouse/ClickHouse/pull/22590) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Allow query profiling only on x86_64. See [#15174](https://github.com/ClickHouse/ClickHouse/issues/15174#issuecomment-812954965) and [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638#issuecomment-703805337). This closes [#15638](https://github.com/ClickHouse/ClickHouse/issues/15638). [#22580](https://github.com/ClickHouse/ClickHouse/pull/22580) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* Allow building with unbundled xz (lzma) using the `USE_INTERNAL_XZ_LIBRARY=OFF` CMake option. [#22571](https://github.com/ClickHouse/ClickHouse/pull/22571) ([Kfir Itzhak](https://github.com/mastertheknife)).
+* Enable bundled `openldap` on `ppc64le`. [#22487](https://github.com/ClickHouse/ClickHouse/pull/22487) ([Kfir Itzhak](https://github.com/mastertheknife)).
+* Disable incompatible (typically platform-specific) libraries on `ppc64le`. [#22475](https://github.com/ClickHouse/ClickHouse/pull/22475) ([Kfir Itzhak](https://github.com/mastertheknife)).
+* Added a Jepsen test in CI for ClickHouse Keeper. [#22373](https://github.com/ClickHouse/ClickHouse/pull/22373) ([alesapin](https://github.com/alesapin)).
+* Build `jemalloc` with support for [heap profiling](https://github.com/jemalloc/jemalloc/wiki/Use-Case%3A-Heap-Profiling). [#22834](https://github.com/ClickHouse/ClickHouse/pull/22834) ([nvartolomei](https://github.com/nvartolomei)).
+* Avoid UB in `*Log` engines for rwlock unlock due to unlock from another thread. [#22583](https://github.com/ClickHouse/ClickHouse/pull/22583) ([Azat Khuzhin](https://github.com/azat)).
+* Fixed UB by unlocking the rwlock of the TinyLog from the same thread. [#22560](https://github.com/ClickHouse/ClickHouse/pull/22560) ([Azat Khuzhin](https://github.com/azat)).
+
+
 ## ClickHouse release 21.4
 
 ### ClickHouse release 21.4.1 2021-04-12
diff --git a/base/daemon/BaseDaemon.cpp b/base/daemon/BaseDaemon.cpp
index 83384038b7c..01e700ebba3 100644
--- a/base/daemon/BaseDaemon.cpp
+++ b/base/daemon/BaseDaemon.cpp
@@ -468,7 +468,7 @@ void BaseDaemon::reloadConfiguration()
      * instead of using files specified in config.xml.
      * (It's convenient to log in console when you start server without any command line parameters.)
      */
-    config_path = config().getString("config-file", "config.xml");
+    config_path = config().getString("config-file", getDefaultConfigFileName());
     DB::ConfigProcessor config_processor(config_path, false, true);
     config_processor.setConfigPath(Poco::Path(config_path).makeParent().toString());
     loaded_config = config_processor.loadConfig(/* allow_zk_includes = */ true);
@@ -516,6 +516,11 @@ std::string BaseDaemon::getDefaultCorePath() const
     return "/opt/cores/";
 }
 
+std::string BaseDaemon::getDefaultConfigFileName() const
+{
+    return "config.xml";
+}
+
 void BaseDaemon::closeFDs()
 {
 #if defined(OS_FREEBSD) || defined(OS_DARWIN)
diff --git a/base/daemon/BaseDaemon.h b/base/daemon/BaseDaemon.h
index 8b9d765cf2e..3d47d85a9f5 100644
--- a/base/daemon/BaseDaemon.h
+++ b/base/daemon/BaseDaemon.h
@@ -149,6 +149,8 @@ protected:
 
     virtual std::string getDefaultCorePath() const;
 
+    virtual std::string getDefaultConfigFileName() const;
+
     std::optional<StatusFile> pid_file;
 
     std::atomic_bool is_cancelled{false};
diff --git a/base/mysqlxx/PoolWithFailover.cpp b/base/mysqlxx/PoolWithFailover.cpp
index ea2d060e596..e317ab7f228 100644
--- a/base/mysqlxx/PoolWithFailover.cpp
+++ b/base/mysqlxx/PoolWithFailover.cpp
@@ -78,6 +78,8 @@ PoolWithFailover::PoolWithFailover(
     const RemoteDescription & addresses,
     const std::string & user,
     const std::string & password,
+    unsigned default_connections_,
+    unsigned max_connections_,
     size_t max_tries_)
     : max_tries(max_tries_)
     , shareable(false)
 {
     /// Replicas have the same priority, but traversed replicas are moved to the end of the queue.
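    /// The two new pool-size parameters (default_connections_, max_connections_)
    /// are forwarded to each per-replica Pool constructed below, so every
    /// replica shares the same connection limits.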
    for (const auto & [host, port] : addresses)
    {
-        replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database, host, user, password, port));
+        replicas_by_priority[0].emplace_back(std::make_shared<Pool>(database,
+            host, user, password, port,
+            /* socket_ = */ "",
+            MYSQLXX_DEFAULT_TIMEOUT,
+            MYSQLXX_DEFAULT_RW_TIMEOUT,
+            default_connections_,
+            max_connections_));
     }
 }
diff --git a/base/mysqlxx/PoolWithFailover.h b/base/mysqlxx/PoolWithFailover.h
index 5154fc3e253..1c7a63e76c0 100644
--- a/base/mysqlxx/PoolWithFailover.h
+++ b/base/mysqlxx/PoolWithFailover.h
@@ -115,6 +115,8 @@ namespace mysqlxx
             const RemoteDescription & addresses,
             const std::string & user,
             const std::string & password,
+            unsigned default_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS,
+            unsigned max_connections_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_CONNECTIONS,
             size_t max_tries_ = MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES);
 
         PoolWithFailover(const PoolWithFailover & other);
diff --git a/cmake/autogenerated_versions.txt b/cmake/autogenerated_versions.txt
index 51f4b974161..34de50e9f8a 100644
--- a/cmake/autogenerated_versions.txt
+++ b/cmake/autogenerated_versions.txt
@@ -1,9 +1,9 @@
 # This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54451)
+SET(VERSION_REVISION 54452)
 SET(VERSION_MAJOR 21)
-SET(VERSION_MINOR 6)
+SET(VERSION_MINOR 7)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 96fced4c3cf432fb0b401d2ab01f0c56e5f74a96)
-SET(VERSION_DESCRIBE v21.6.1.1-prestable)
-SET(VERSION_STRING 21.6.1.1)
+SET(VERSION_GITHASH 976ccc2e908ac3bc28f763bfea8134ea0a121b40)
+SET(VERSION_DESCRIBE v21.7.1.1-prestable)
+SET(VERSION_STRING 21.7.1.1)
 # end of autochange
diff --git a/contrib/re2 b/contrib/re2
index 7cf8b88e8f7..13ebb377c6a 160000
--- a/contrib/re2
+++ b/contrib/re2
@@ -1 +1 @@
-Subproject commit 7cf8b88e8f70f97fd4926b56aa87e7f53b2717e0
+Subproject commit 13ebb377c6ad763ca61d12dd6f88b1126bd0b911
diff --git a/contrib/re2_st/re2_transform.cmake b/contrib/re2_st/re2_transform.cmake
index 2d50d9e8c2a..56a96f45630 100644
--- a/contrib/re2_st/re2_transform.cmake
+++ b/contrib/re2_st/re2_transform.cmake
@@ -1,7 +1,7 @@
 file (READ ${SOURCE_FILENAME} CONTENT)
 string (REGEX REPLACE "using re2::RE2;" "" CONTENT "${CONTENT}")
 string (REGEX REPLACE "using re2::LazyRE2;" "" CONTENT "${CONTENT}")
-string (REGEX REPLACE "namespace re2" "namespace re2_st" CONTENT "${CONTENT}")
+string (REGEX REPLACE "namespace re2 {" "namespace re2_st {" CONTENT "${CONTENT}")
 string (REGEX REPLACE "re2::" "re2_st::" CONTENT "${CONTENT}")
 string (REGEX REPLACE "\"re2/" "\"re2_st/" CONTENT "${CONTENT}")
 string (REGEX REPLACE "(.\\*?_H)" "\\1_ST" CONTENT "${CONTENT}")
diff --git a/debian/changelog b/debian/changelog
index 8b6626416a9..e1c46dae3a8 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,5 +1,5 @@
-clickhouse (21.6.1.1) unstable; urgency=low
+clickhouse (21.7.1.1) unstable; urgency=low
 
   * Modified source code
 
- -- clickhouse-release <clickhouse-release@yandex-team.ru>  Tue, 20 Apr 2021 01:48:16 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru>  Thu, 20 May 2021 22:23:29 +0300
diff --git a/docker/client/Dockerfile b/docker/client/Dockerfile
index 569025dec1c..79ac92f2277 100644
--- a/docker/client/Dockerfile
+++ b/docker/client/Dockerfile
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=21.6.1.*
+ARG version=21.7.1.*
 
 RUN apt-get update \
     && apt-get install --yes --no-install-recommends \
diff --git a/docker/server/Dockerfile b/docker/server/Dockerfile
index d302fec7417..52dcb6caae5 100644
--- 
a/docker/server/Dockerfile +++ b/docker/server/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:20.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.6.1.* +ARG version=21.7.1.* ARG gosu_ver=1.10 # set non-empty deb_location_url url to create a docker image diff --git a/docker/test/Dockerfile b/docker/test/Dockerfile index 0e4646386ce..9809a36395d 100644 --- a/docker/test/Dockerfile +++ b/docker/test/Dockerfile @@ -1,7 +1,7 @@ FROM ubuntu:18.04 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/" -ARG version=21.6.1.* +ARG version=21.7.1.* RUN apt-get update && \ apt-get install -y apt-transport-https dirmngr && \ diff --git a/docker/test/fasttest/run.sh b/docker/test/fasttest/run.sh index 42c720a7e63..3a19a249f8e 100755 --- a/docker/test/fasttest/run.sh +++ b/docker/test/fasttest/run.sh @@ -73,7 +73,7 @@ function start_server --path "$FASTTEST_DATA" --user_files_path "$FASTTEST_DATA/user_files" --top_level_domains_path "$FASTTEST_DATA/top_level_domains" - --keeper_server.log_storage_path "$FASTTEST_DATA/coordination" + --keeper_server.storage_path "$FASTTEST_DATA/coordination" ) clickhouse-server "${opts[@]}" &>> "$FASTTEST_OUTPUT/server.log" & server_pid=$! @@ -376,35 +376,14 @@ function run_tests # Depends on LLVM JIT 01852_jit_if 01865_jit_comparison_constant_result + 01871_merge_tree_compile_expressions ) - (time clickhouse-test --hung-check -j 8 --order=random --use-skip-list --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 ||:) | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt" - - # substr is to remove semicolon after test name - readarray -t FAILED_TESTS < <(awk '/\[ FAIL|TIMEOUT|ERROR \]/ { print substr($3, 1, length($3)-1) }' "$FASTTEST_OUTPUT/test_log.txt" | tee "$FASTTEST_OUTPUT/failed-parallel-tests.txt") - - # We will rerun sequentially any tests that have failed during parallel run. - # They might have failed because there was some interference from other tests - # running concurrently. If they fail even in seqential mode, we will report them. - # FIXME All tests that require exclusive access to the server must be - # explicitly marked as `sequential`, and `clickhouse-test` must detect them and - # run them in a separate group after all other tests. This is faster and also - # explicit instead of guessing. - if [[ -n "${FAILED_TESTS[*]}" ]] - then - stop_server ||: - - # Clean the data so that there is no interference from the previous test run. 
- rm -rf "$FASTTEST_DATA"/{{meta,}data,user_files,coordination} ||: - - start_server - - echo "Going to run again: ${FAILED_TESTS[*]}" - - clickhouse-test --hung-check --order=random --no-long --testname --shard --zookeeper "${FAILED_TESTS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a "$FASTTEST_OUTPUT/test_log.txt" - else - echo "No failed tests" - fi + time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \ + --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" \ + -- "$FASTTEST_FOCUS" 2>&1 \ + | ts '%Y-%m-%d %H:%M:%S' \ + | tee "$FASTTEST_OUTPUT/test_log.txt" } case "$stage" in diff --git a/docker/test/integration/runner/compose/docker_compose_keeper.yml b/docker/test/integration/runner/compose/docker_compose_keeper.yml new file mode 100644 index 00000000000..e11a13e6eab --- /dev/null +++ b/docker/test/integration/runner/compose/docker_compose_keeper.yml @@ -0,0 +1,92 @@ +version: '2.3' +services: + zoo1: + image: ${image:-yandex/clickhouse-integration-test} + restart: always + user: ${user:-} + volumes: + - type: bind + source: ${keeper_binary:-} + target: /usr/bin/clickhouse + - type: bind + source: ${keeper_config_dir1:-} + target: /etc/clickhouse-keeper + - type: bind + source: ${keeper_logs_dir1:-} + target: /var/log/clickhouse-keeper + - type: ${keeper_fs:-tmpfs} + source: ${keeper_db_dir1:-} + target: /var/lib/clickhouse-keeper + entrypoint: "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config1.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log" + cap_add: + - SYS_PTRACE + - NET_ADMIN + - IPC_LOCK + - SYS_NICE + security_opt: + - label:disable + dns_opt: + - attempts:2 + - timeout:1 + - inet6 + - rotate + zoo2: + image: ${image:-yandex/clickhouse-integration-test} + restart: always + user: ${user:-} + volumes: + - type: bind + source: ${keeper_binary:-} + target: /usr/bin/clickhouse + - type: bind + source: ${keeper_config_dir2:-} + target: /etc/clickhouse-keeper + - type: bind + source: ${keeper_logs_dir2:-} + target: /var/log/clickhouse-keeper + - type: ${keeper_fs:-tmpfs} + source: ${keeper_db_dir2:-} + target: /var/lib/clickhouse-keeper + entrypoint: "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config2.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log" + cap_add: + - SYS_PTRACE + - NET_ADMIN + - IPC_LOCK + - SYS_NICE + security_opt: + - label:disable + dns_opt: + - attempts:2 + - timeout:1 + - inet6 + - rotate + zoo3: + image: ${image:-yandex/clickhouse-integration-test} + restart: always + user: ${user:-} + volumes: + - type: bind + source: ${keeper_binary:-} + target: /usr/bin/clickhouse + - type: bind + source: ${keeper_config_dir3:-} + target: /etc/clickhouse-keeper + - type: bind + source: ${keeper_logs_dir3:-} + target: /var/log/clickhouse-keeper + - type: ${keeper_fs:-tmpfs} + source: ${keeper_db_dir3:-} + target: /var/lib/clickhouse-keeper + entrypoint: "clickhouse keeper --config=/etc/clickhouse-keeper/keeper_config3.xml --log-file=/var/log/clickhouse-keeper/clickhouse-keeper.log --errorlog-file=/var/log/clickhouse-keeper/clickhouse-keeper.err.log" + cap_add: + - SYS_PTRACE + - NET_ADMIN + - IPC_LOCK + - SYS_NICE + security_opt: + - label:disable + dns_opt: + - attempts:2 + - timeout:1 + - inet6 + - rotate diff --git a/docs/en/engines/table-engines/integrations/mysql.md b/docs/en/engines/table-engines/integrations/mysql.md index 
3847e7a9e0e..9bd12e97dd8 100644
--- a/docs/en/engines/table-engines/integrations/mysql.md
+++ b/docs/en/engines/table-engines/integrations/mysql.md
@@ -15,7 +15,12 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
     name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
     name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
     ...
-) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
+) ENGINE = MySQL('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause'])
+SETTINGS
+    [connection_pool_size=16, ]
+    [connection_max_tries=3, ]
+    [connection_auto_close=true ]
+;
 ```
 
 See a detailed description of the [CREATE TABLE](../../../sql-reference/statements/create/table.md#create-table-query) query.
diff --git a/docs/en/operations/external-authenticators/ldap.md b/docs/en/operations/external-authenticators/ldap.md
index 1b65ecc968b..805d45e1b38 100644
--- a/docs/en/operations/external-authenticators/ldap.md
+++ b/docs/en/operations/external-authenticators/ldap.md
@@ -17,6 +17,7 @@ To define LDAP server you must add `ldap_servers` section to the `config.xml`.
+
         <my_ldap_server>
             <host>localhost</host>
             <port>636</port>
@@ -31,6 +32,18 @@ To define LDAP server you must add `ldap_servers` section to the `config.xml`.
             <tls_ca_cert_dir>/path/to/tls_ca_cert_dir</tls_ca_cert_dir>
             <tls_cipher_suite>ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384</tls_cipher_suite>
         </my_ldap_server>
+
+
+        <my_ad_server>
+            <host>localhost</host>
+            <port>389</port>
+            <bind_dn>EXAMPLE\{user_name}</bind_dn>
+            <user_dn_detection>
+                <base_dn>CN=Users,DC=example,DC=com</base_dn>
+                <search_filter>(&amp;(objectClass=user)(sAMAccountName={user_name}))</search_filter>
+            </user_dn_detection>
+            <enable_tls>no</enable_tls>
+        </my_ad_server>
 ```
@@ -43,6 +56,15 @@ Note, that you can define multiple LDAP servers inside the `ldap_servers` sectio
 - `port` — LDAP server port, default is `636` if `enable_tls` is set to `true`, `389` otherwise.
 - `bind_dn` — Template used to construct the DN to bind to.
     - The resulting DN will be constructed by replacing all `{user_name}` substrings of the template with the actual user name during each authentication attempt.
+- `user_dn_detection` - Section with LDAP search parameters for detecting the actual user DN of the bound user.
+    - This is mainly used in search filters for further role mapping when the server is Active Directory. The resulting user DN will be used when replacing `{user_dn}` substrings wherever they are allowed. By default, the user DN is set equal to the bind DN, but once the search is performed, it will be updated to the actual detected user DN value.
+    - `base_dn` - Template used to construct the base DN for the LDAP search.
+        - The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during the LDAP search.
+    - `scope` - Scope of the LDAP search.
+        - Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
+    - `search_filter` - Template used to construct the search filter for the LDAP search.
+        - The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, and base DN during the LDAP search.
+        - Note, that the special characters must be escaped properly in XML.
 - `verification_cooldown` — A period of time, in seconds, after a successful bind attempt, during which the user will be assumed to be successfully authenticated for all consecutive requests without contacting the LDAP server.
     - Specify `0` (the default) to disable caching and force contacting the LDAP server for each authentication request.
 - `enable_tls` — A flag to trigger the use of the secure connection to the LDAP server.
@@ -107,7 +129,7 @@ Goes into `config.xml`.
-
+
         <server>my_ldap_server</server>
@@ -122,6 +144,18 @@
             <prefix>clickhouse_</prefix>
+
+
+    <ldap>
+        <server>my_ad_server</server>
+        <role_mapping>
+            <base_dn>CN=Users,DC=example,DC=com</base_dn>
+            <attribute>CN</attribute>
+            <scope>subtree</scope>
+            <search_filter>(&amp;(objectClass=group)(member={user_dn}))</search_filter>
+            <prefix>clickhouse_</prefix>
+        </role_mapping>
+    </ldap>
 ```
@@ -137,13 +171,13 @@ Note that `my_ldap_server` referred in the `ldap` section inside the `user_direc
 - When a user authenticates, while still bound to LDAP, an LDAP search is performed using `search_filter` and the name of the logged-in user. For each entry found during that search, the value of the specified attribute is extracted. For each attribute value that has the specified prefix, the prefix is removed, and the rest of the value becomes the name of a local role defined in ClickHouse, which is expected to be created beforehand by the [CREATE ROLE](../../sql-reference/statements/create/role.md#create-role-statement) statement.
 - There can be multiple `role_mapping` sections defined inside the same `ldap` section. All of them will be applied.
 - `base_dn` — Template used to construct the base DN for the LDAP search.
-    - The resulting DN will be constructed by replacing all `{user_name}` and `{bind_dn}` substrings of the template with the actual user name and bind DN during each LDAP search.
+    - The resulting DN will be constructed by replacing all `{user_name}`, `{bind_dn}`, and `{user_dn}` substrings of the template with the actual user name, bind DN, and user DN during each LDAP search.
 - `scope` — Scope of the LDAP search.
     - Accepted values are: `base`, `one_level`, `children`, `subtree` (the default).
 - `search_filter` — Template used to construct the search filter for the LDAP search.
-    - The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}` and `{base_dn}` substrings of the template with the actual user name, bind DN and base DN during each LDAP search.
+    - The resulting filter will be constructed by replacing all `{user_name}`, `{bind_dn}`, `{user_dn}`, and `{base_dn}` substrings of the template with the actual user name, bind DN, user DN, and base DN during each LDAP search.
     - Note, that the special characters must be escaped properly in XML.
-- `attribute` — Attribute name whose values will be returned by the LDAP search.
+- `attribute` — Attribute name whose values will be returned by the LDAP search. `cn`, by default.
 - `prefix` — Prefix, that will be expected to be in front of each string in the original list of strings returned by the LDAP search. The prefix will be removed from the original strings and the resulting strings will be treated as local role names. Empty by default.
 
 [Original article](https://clickhouse.tech/docs/en/operations/external-authenticators/ldap/)
diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md
index a5dc66cf0d6..3c5efd79863 100644
--- a/docs/en/operations/settings/settings.md
+++ b/docs/en/operations/settings/settings.md
@@ -1520,8 +1520,8 @@ Do not merge aggregation states from different servers for distributed query pro
 Possible values:
 
 - 0 — Disabled (final query processing is done on the initiator node).
-- 1 - Do not merge aggregation states from different servers for distributed query processing (query completelly processed on the shard, initiator only proxy the data).
-- 2 - Same as 1 but apply `ORDER BY` and `LIMIT` on the initiator (can be used for queries with `ORDER BY` and/or `LIMIT`).
+- 1 - Do not merge aggregation states from different servers for distributed query processing (the query is completely processed on the shard, the initiator only proxies the data); can be used when it is certain that there are different keys on different shards.
+- 2 - Same as `1`, but applies `ORDER BY` and `LIMIT` on the initiator (this is not possible when the query is processed completely on the remote node, as with `distributed_group_by_no_merge=1`); can be used for queries with `ORDER BY` and/or `LIMIT`.
 
 **Example**
diff --git a/docs/ru/engines/table-engines/integrations/hdfs.md b/docs/ru/engines/table-engines/integrations/hdfs.md
index b56bbfc0788..c96ac12cd2a 100644
--- a/docs/ru/engines/table-engines/integrations/hdfs.md
+++ b/docs/ru/engines/table-engines/integrations/hdfs.md
@@ -183,7 +183,7 @@ CREATE TABLE big_table (name String, value UInt32) ENGINE = HDFS('hdfs://hdfs1:9
 #### Ограничения {#limitations}
   * hadoop\_security\_kerberos\_ticket\_cache\_path могут быть определены только на глобальном уровне
 
-## Поддержика Kerberos {#kerberos-support}
+## Поддержка Kerberos {#kerberos-support}
 
 Если hadoop\_security\_authentication параметр имеет значение 'kerberos', ClickHouse аутентифицируется с помощью Kerberos.
 [Расширенные параметры](#clickhouse-extras) и hadoop\_security\_kerberos\_ticket\_cache\_path помогают сделать это.
diff --git a/docs/tools/blog.py b/docs/tools/blog.py
index c3261f61d4d..d0f2496f914 100644
--- a/docs/tools/blog.py
+++ b/docs/tools/blog.py
@@ -40,7 +40,7 @@ def build_for_lang(lang, args):
         site_names = {
             'en': 'ClickHouse Blog',
-            'ru': 'Блог ClickHouse '
+            'ru': 'Блог ClickHouse'
         }
 
         assert len(site_names) == len(languages)
@@ -62,7 +62,7 @@ def build_for_lang(lang, args):
             strict=True,
             theme=theme_cfg,
             nav=blog_nav,
-            copyright='©2016–2020 Yandex LLC',
+            copyright='©2016–2021 Yandex LLC',
             use_directory_urls=True,
             repo_name='ClickHouse/ClickHouse',
             repo_url='https://github.com/ClickHouse/ClickHouse/',
diff --git a/docs/tools/build.py b/docs/tools/build.py
index 5a1f10268ab..61112d5a4f5 100755
--- a/docs/tools/build.py
+++ b/docs/tools/build.py
@@ -94,7 +94,7 @@ def build_for_lang(lang, args):
         site_dir=site_dir,
         strict=True,
         theme=theme_cfg,
-        copyright='©2016–2020 Yandex LLC',
+        copyright='©2016–2021 Yandex LLC',
         use_directory_urls=True,
         repo_name='ClickHouse/ClickHouse',
         repo_url='https://github.com/ClickHouse/ClickHouse/',
diff --git a/docs/tools/nav.py b/docs/tools/nav.py
index 291797a1633..db64d1ba404 100644
--- a/docs/tools/nav.py
+++ b/docs/tools/nav.py
@@ -31,7 +31,16 @@ def build_nav_entry(root, args):
             result_items.append((prio, title, payload))
         elif filename.endswith('.md'):
             path = os.path.join(root, filename)
-            meta, content = util.read_md_file(path)
+
+            meta = ''
+            content = ''
+
+            try:
+                meta, content = util.read_md_file(path)
+            except:
+                print('Error in file: {}'.format(path))
+                raise
+
             path = path.split('/', 2)[-1]
             title = meta.get('toc_title', find_first_header(content))
             if title:
diff --git a/docs/tools/website.py b/docs/tools/website.py
index 6927fbd87bb..f0346de5c94 100644
--- a/docs/tools/website.py
+++ b/docs/tools/website.py
@@ -155,10 +155,6 @@ def build_website(args):
         os.path.join(args.src_dir, 'utils', 'list-versions', 'version_date.tsv'),
         os.path.join(args.output_dir, 'data', 'version_date.tsv'))
 
-    shutil.copy2(
-        os.path.join(args.website_dir, 'js', 'embedd.min.js'),
-        os.path.join(args.output_dir, 'js', 'embedd.min.js'))
-
     for root, _, filenames in os.walk(args.output_dir):
         for filename in filenames:
             if filename == 'main.html':
diff --git a/docs/zh/operations/backup.md b/docs/zh/operations/backup.md
a/docs/zh/operations/backup.md b/docs/zh/operations/backup.md index 1b1993e3ae6..6d517e6ccb3 100644 --- a/docs/zh/operations/backup.md +++ b/docs/zh/operations/backup.md @@ -7,37 +7,37 @@ toc_title: "\u6570\u636E\u5907\u4EFD" # 数据备份 {#data-backup} -尽管[副本](../engines/table-engines/mergetree-family/replication.md) 可以预防硬件错误带来的数据丢失, 但是它不能防止人为操作的错误: 意外删除数据, 删除错误的 table 或者删除错误 cluster 上的 table, 可以导致错误数据处理错误或者数据损坏的 bugs. 这类意外可能会影响所有的副本. ClickHouse 有内建的保障措施可以预防一些错误 — 例如, 默认情况下[您不能使用类似MergeTree的引擎删除包含超过50Gb数据的表](server-configuration-parameters/settings.md#max-table-size-to-drop). 但是,这些保障措施不能涵盖所有可能的情况,并且可以规避。 +尽管 [副本] (../engines/table-engines/mergetree-family/replication.md) 可以提供针对硬件的错误防护, 但是它不能预防人为操作失误: 数据的意外删除, 错误表的删除或者错误集群上表的删除, 以及导致错误数据处理或者数据损坏的软件bug. 在很多案例中,这类意外可能会影响所有的副本. ClickHouse 有内置的保护措施可以预防一些错误 — 例如, 默认情况下 [不能人工删除使用带有MergeTree引擎且包含超过50Gb数据的表] (server-configuration-parameters/settings.md#max-table-size-to-drop). 但是,这些保护措施不能覆盖所有可能情况,并且这些措施可以被绕过。 -为了有效地减少可能的人为错误,您应该 **提前**准备备份和还原数据的策略. +为了有效地减少可能的人为错误,您应该 **提前** 仔细的准备备份和数据还原的策略. -不同公司有不同的可用资源和业务需求,因此没有适合各种情况的ClickHouse备份和恢复通用解决方案。 适用于 1GB 的数据的方案可能并不适用于几十 PB 数据的情况。 有多种可能的并有自己优缺点的方法,这将在下面讨论。 好的主意是同时结合使用多种方法而不是仅使用一种,这样可以弥补不同方法各自的缺点。 +不同公司有不同的可用资源和业务需求,因此不存在一个通用的解决方案可以应对各种情况下的ClickHouse备份和恢复。 适用于 1GB 数据的方案可能并不适用于几十 PB 数据的情况。 有多种具备各自优缺点的可能方法,将在下面对其进行讨论。最好使用几种方法而不是仅仅使用一种方法来弥补它们的各种缺点。。 !!! note "注" - 请记住,如果您备份了某些内容并且从未尝试过还原它,那么当您实际需要它时(或者至少需要比业务能够容忍的时间更长),恢复可能无法正常工作。 因此,无论您选择哪种备份方法,请确保自动还原过程,并定期在备用ClickHouse群集上练习。 + 需要注意的是,如果您备份了某些内容并且从未尝试过还原它,那么当您实际需要它时可能无法正常恢复(或者至少需要的时间比业务能够容忍的时间更长)。 因此,无论您选择哪种备份方法,请确保自动还原过程,并定期在备用ClickHouse群集上演练。 -## 将源数据复制到其他地方 {#duplicating-source-data-somewhere-else} +## 将源数据复制到其它地方 {#duplicating-source-data-somewhere-else} -通常被聚集到ClickHouse的数据是通过某种持久队列传递的,例如 [Apache Kafka](https://kafka.apache.org). 在这种情况下,可以配置一组额外的订阅服务器,这些订阅服务器将在写入ClickHouse时读取相同的数据流,并将其存储在冷存储中。 大多数公司已经有一些默认的推荐冷存储,可能是对象存储或分布式文件系统,如 [HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html). +通常摄入到ClickHouse的数据是通过某种持久队列传递的,例如 [Apache Kafka] (https://kafka.apache.org). 在这种情况下,可以配置一组额外的订阅服务器,这些订阅服务器将在写入ClickHouse时读取相同的数据流,并将其存储在冷存储中。 大多数公司已经有一些默认推荐的冷存储,可能是对象存储或分布式文件系统,如 [HDFS] (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html). ## 文件系统快照 {#filesystem-snapshots} -某些本地文件系统提供快照功能(例如, [ZFS](https://en.wikipedia.org/wiki/ZFS)),但它们可能不是提供实时查询的最佳选择。 一个可能的解决方案是使用这种文件系统创建额外的副本,并将它们从 [分布](../engines/table-engines/special/distributed.md) 用于以下目的的表 `SELECT` 查询。 任何修改数据的查询都无法访问此类副本上的快照。 作为奖励,这些副本可能具有特殊的硬件配置,每个服务器附加更多的磁盘,这将是经济高效的。 +某些本地文件系统提供快照功能(例如, [ZFS] (https://en.wikipedia.org/wiki/ZFS)),但它们可能不是提供实时查询的最佳选择。 一个可能的解决方案是使用这种文件系统创建额外的副本,并将它们与用于`SELECT` 查询的 [分布式] (../engines/table-engines/special/distributed.md) 表分离。 任何修改数据的查询都无法访问此类副本上的快照。 作为回报,这些副本可能具有特殊的硬件配置,每个服务器附加更多的磁盘,这将是经济高效的。 ## clickhouse-copier {#clickhouse-copier} -[clickhouse-copier](utilities/clickhouse-copier.md) 是一个多功能工具,最初创建用于重新分片pb大小的表。 因为它可以在ClickHouse表和集群之间可靠地复制数据,所以它还可用于备份和还原数据。 +[clickhouse-copier] (utilities/clickhouse-copier.md) 是一个多功能工具,最初创建它是为了用于重新切分pb大小的表。 因为它能够在ClickHouse表和集群之间可靠地复制数据,所以它也可用于备份和还原数据。 对于较小的数据量,一个简单的 `INSERT INTO ... SELECT ...` 到远程表也可以工作。 -## 部件操作 {#manipulations-with-parts} +## part操作 {#manipulations-with-parts} -ClickHouse允许使用 `ALTER TABLE ... 
diff --git a/docs/zh/sql-reference/data-types/special-data-types/interval.md b/docs/zh/sql-reference/data-types/special-data-types/interval.md index df2ce097df0..9df25e3f555 100644 --- a/docs/zh/sql-reference/data-types/special-data-types/interval.md +++ b/docs/zh/sql-reference/data-types/special-data-types/interval.md @@ -5,9 +5,9 @@ toc_priority: 61 toc_title: "\u95F4\u9694" --- -# 间隔 {#data-type-interval} +# Interval类型 {#data-type-interval} -表示时间和日期间隔的数据类型族。 由此产生的类型 [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) 接线员 +表示时间和日期间隔的数据类型家族。 [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) 运算的结果类型。 !!! warning "警告" `Interval` 数据类型值不能存储在表中。 @@ -15,7 +15,7 @@ toc_title: "\u95F4\u9694" 结构: - 时间间隔作为无符号整数值。 -- 间隔的类型。 +- 时间间隔的类型。 支持的时间间隔类型: @@ -28,7 +28,7 @@ toc_title: "\u95F4\u9694" - `QUARTER` - `YEAR` -对于每个间隔类型,都有一个单独的数据类型。 例如, `DAY` 间隔对应于 `IntervalDay` 数据类型: +对于每个时间间隔类型,都有一个单独的数据类型。 例如, `DAY` 间隔对应于 `IntervalDay` 数据类型: ``` sql SELECT toTypeName(INTERVAL 4 DAY) @@ -42,7 +42,7 @@ SELECT toTypeName(INTERVAL 4 DAY) ## 使用说明 {#data-type-interval-usage-remarks} -您可以使用 `Interval`-在算术运算类型值 [日期](../../../sql-reference/data-types/date.md) 和 [日期时间](../../../sql-reference/data-types/datetime.md)-类型值。 例如,您可以将4天添加到当前时间: +您可以在与 [日期](../../../sql-reference/data-types/date.md) 和 [日期时间](../../../sql-reference/data-types/datetime.md) 类型值的算术运算中使用 `Interval` 类型值。 例如,您可以将4天添加到当前时间: ``` sql SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY @@ -54,10 +54,10 @@ SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY └─────────────────────┴───────────────────────────────┘ ``` -不同类型的间隔不能合并。 你不能使用间隔,如 `4 DAY 1 HOUR`. 以小于或等于间隔的最小单位的单位指定间隔,例如,间隔 `1 day and an hour` 间隔可以表示为 `25 HOUR` 或 `90000 SECOND`. - -你不能执行算术运算 `Interval`-类型值,但你可以添加不同类型的时间间隔,因此值 `Date` 或 `DateTime` 数据类型。 例如: +不同类型的间隔不能合并。 你不能使用诸如 `4 DAY 1 HOUR` 的时间间隔. 以小于或等于时间间隔最小单位的单位来指定间隔,例如,时间间隔 `1 day and an hour` 可以表示为 `25 HOUR` 或 `90000 SECOND`. +你不能对 `Interval` 类型的值执行算术运算,但你可以向 `Date` 或 `DateTime` 数据类型的值添加不同类型的时间间隔,例如: + ``` sql SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR ``` @@ -81,5 +81,5 @@ Code: 43. DB::Exception: Received from localhost:9000.
DB::Exception: Wrong argu ## 另请参阅 {#see-also} -- [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) 接线员 +- [INTERVAL](../../../sql-reference/operators/index.md#operator-interval) 操作 - [toInterval](../../../sql-reference/functions/type-conversion-functions.md#function-tointerval) 类型转换函数 diff --git a/docs/zh/sql-reference/syntax.md b/docs/zh/sql-reference/syntax.md index 8c331db1139..c05c5a1a7bf 100644 --- a/docs/zh/sql-reference/syntax.md +++ b/docs/zh/sql-reference/syntax.md @@ -14,7 +14,7 @@ INSERT INTO t VALUES (1, 'Hello, world'), (2, 'abc'), (3, 'def') 含`INSERT INTO t VALUES` 的部分由完整SQL解析器处理,包含数据的部分 `(1, 'Hello, world'), (2, 'abc'), (3, 'def')` 交给快速流式解析器解析。通过设置参数 [input_format_values_interpret_expressions](../operations/settings/settings.md#settings-input_format_values_interpret_expressions),你也可以对数据部分开启完整SQL解析器。当 `input_format_values_interpret_expressions = 1` 时,CH优先采用快速流式解析器来解析数据。如果失败,CH再尝试用完整SQL解析器来处理,就像处理SQL [expression](#syntax-expressions) 一样。 -数据可以采用任何格式。当CH接受到请求时,服务端先在内存中计算不超过 [max_query_size](../operations/settings/settings.md#settings-max_query_size) 字节的请求数据(默认1 mb),然后剩下部分交给快速流式解析器。 +数据可以采用任何格式。当CH接收到请求时,服务端先在内存中计算不超过 [max_query_size](../operations/settings/settings.md#settings-max_query_size) 字节的请求数据(默认1 mb),然后剩下部分交给快速流式解析器。 这将避免在处理大型的 `INSERT`语句时出现问题。 diff --git a/programs/CMakeLists.txt b/programs/CMakeLists.txt index 09199e83026..6fd4c2050b4 100644 --- a/programs/CMakeLists.txt +++ b/programs/CMakeLists.txt @@ -47,6 +47,9 @@ option (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE "HTTP-server working like a proxy to Li option (ENABLE_CLICKHOUSE_GIT_IMPORT "A tool to analyze Git repositories" ${ENABLE_CLICKHOUSE_ALL}) + +option (ENABLE_CLICKHOUSE_KEEPER "ClickHouse alternative to ZooKeeper" ${ENABLE_CLICKHOUSE_ALL}) + if (CLICKHOUSE_SPLIT_BINARY) option(ENABLE_CLICKHOUSE_INSTALL "Install ClickHouse without .deb/.rpm/.tgz packages (having the binary only)" OFF) else () @@ -134,6 +137,12 @@ else() message(STATUS "ClickHouse git-import: OFF") endif() +if (ENABLE_CLICKHOUSE_KEEPER) + message(STATUS "ClickHouse keeper mode: ON") +else() + message(STATUS "ClickHouse keeper mode: OFF") +endif() + if(NOT (MAKE_STATIC_LIBRARIES OR SPLIT_SHARED_LIBRARIES)) set(CLICKHOUSE_ONE_SHARED ON) endif() @@ -189,6 +198,54 @@ macro(clickhouse_program_add name) clickhouse_program_add_executable(${name}) endmacro() +# Embed default config files as a resource into the binary. +# This is needed for two purposes: +# 1. Allow to run the binary without download of any other files. +# 2. Allow to implement "sudo clickhouse install" tool. +# +# Arguments: target (server, client, keeper, etc.) and list of files +# +# Also dependency on TARGET_FILE is required, look at examples in programs/server and programs/keeper +macro(clickhouse_embed_binaries) + # TODO We actually need this on Mac, FreeBSD. 
+ if (OS_LINUX) + + set(arguments_list "${ARGN}") + list(GET arguments_list 0 target) + + # for some reason cmake iterates loop including + math(EXPR arguments_count "${ARGC}-1") + + foreach(RESOURCE_POS RANGE 1 "${arguments_count}") + list(GET arguments_list "${RESOURCE_POS}" RESOURCE_FILE) + set(RESOURCE_OBJ ${RESOURCE_FILE}.o) + set(RESOURCE_OBJS ${RESOURCE_OBJS} ${RESOURCE_OBJ}) + + # https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake + # PPC64LE fails to do this with objcopy, use ld or lld instead + if (ARCH_PPC64LE) + add_custom_command(OUTPUT ${RESOURCE_OBJ} + COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${CMAKE_LINKER} -m elf64lppc -r -b binary -o "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" ${RESOURCE_FILE}) + else() + add_custom_command(OUTPUT ${RESOURCE_OBJ} + COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} ${RESOURCE_FILE} "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" + COMMAND ${OBJCOPY_PATH} --rename-section .data=.rodata,alloc,load,readonly,data,contents + "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}") + endif() + set_source_files_properties(${RESOURCE_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true) + endforeach() + + add_library(clickhouse_${target}_configs STATIC ${RESOURCE_OBJS}) + set_target_properties(clickhouse_${target}_configs PROPERTIES LINKER_LANGUAGE C) + + # whole-archive prevents symbols from being discarded for unknown reason + # CMake can shuffle each of target_link_libraries arguments with other + # libraries in linker command. To avoid this we hardcode whole-archive + # library into single string. + add_dependencies(clickhouse-${target}-lib clickhouse_${target}_configs) + endif () +endmacro() + add_subdirectory (server) add_subdirectory (client) @@ -202,6 +259,7 @@ add_subdirectory (obfuscator) add_subdirectory (install) add_subdirectory (git-import) add_subdirectory (bash-completion) +add_subdirectory (keeper) if (ENABLE_CLICKHOUSE_ODBC_BRIDGE) add_subdirectory (odbc-bridge) @@ -212,15 +270,15 @@ if (ENABLE_CLICKHOUSE_LIBRARY_BRIDGE) endif () if (CLICKHOUSE_ONE_SHARED) - add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES}) - target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK}) - target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_GIT_IMPORT_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE}) + add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} 
${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES} ${CLICKHOUSE_KEEPER_SOURCES}) + target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK} ${CLICKHOUSE_KEEPER_LINK}) + target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_GIT_IMPORT_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE} ${CLICKHOUSE_KEEPER_INCLUDE}) set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse DEBUG_POSTFIX "") install (TARGETS clickhouse-lib LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} COMPONENT clickhouse) endif() if (CLICKHOUSE_SPLIT_BINARY) - set (CLICKHOUSE_ALL_TARGETS clickhouse-server clickhouse-client clickhouse-local clickhouse-benchmark clickhouse-extract-from-config clickhouse-compressor clickhouse-format clickhouse-obfuscator clickhouse-git-import clickhouse-copier) + set (CLICKHOUSE_ALL_TARGETS clickhouse-server clickhouse-client clickhouse-local clickhouse-benchmark clickhouse-extract-from-config clickhouse-compressor clickhouse-format clickhouse-obfuscator clickhouse-git-import clickhouse-copier clickhouse-keeper) if (ENABLE_CLICKHOUSE_ODBC_BRIDGE) list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-odbc-bridge) @@ -277,6 +335,9 @@ else () if (ENABLE_CLICKHOUSE_GIT_IMPORT) clickhouse_target_link_split_lib(clickhouse git-import) endif () + if (ENABLE_CLICKHOUSE_KEEPER) + clickhouse_target_link_split_lib(clickhouse keeper) + endif() if (ENABLE_CLICKHOUSE_INSTALL) clickhouse_target_link_split_lib(clickhouse install) endif () @@ -332,6 +393,11 @@ else () install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-git-import" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) list(APPEND CLICKHOUSE_BUNDLE clickhouse-git-import) endif () + if (ENABLE_CLICKHOUSE_KEEPER) + add_custom_target (clickhouse-keeper ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-keeper DEPENDS clickhouse) + install (FILES "${CMAKE_CURRENT_BINARY_DIR}/clickhouse-keeper" DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) + list(APPEND CLICKHOUSE_BUNDLE clickhouse-keeper) + endif () install (TARGETS clickhouse RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse) diff --git a/programs/client/Client.cpp b/programs/client/Client.cpp index ccf92ebc419..098f7e689c5 100644 --- a/programs/client/Client.cpp +++ b/programs/client/Client.cpp @@ -1366,6 +1366,27 @@ private: { const auto * exception = server_exception ? server_exception.get() : client_exception.get(); fmt::print(stderr, "Error on processing query '{}': {}\n", ast_to_process->formatForErrorMessage(), exception->message()); + + // Try to reconnect after errors, for two reasons: + // 1. We might not have realized that the server died, e.g. if + // it sent us a trace and closed connection properly. + // 2. 
The connection might have gotten into a wrong state and + // the next query will get false positive about + // "Unknown packet from server". + try + { + connection->forceConnected(connection_parameters.timeouts); + } + catch (...) + { + // Just report it, we'll terminate below. + fmt::print(stderr, + "Error while reconnecting to the server: Code: {}: {}\n", + getCurrentExceptionCode(), + getCurrentExceptionMessage(true)); + + assert(!connection->isConnected()); + } } if (!connection->isConnected()) @@ -1469,11 +1490,6 @@ private: server_exception.reset(); client_exception.reset(); have_error = false; - - // We have to reinitialize connection after errors, because it - // might have gotten into a wrong state and we'll get false - // positives about "Unknown packet from server". - connection->forceConnected(connection_parameters.timeouts); } else if (ast_to_process->formatForErrorMessage().size() > 500) { diff --git a/programs/config_tools.h.in b/programs/config_tools.h.in index abe9ef8c562..50ba0c16a83 100644 --- a/programs/config_tools.h.in +++ b/programs/config_tools.h.in @@ -16,3 +16,4 @@ #cmakedefine01 ENABLE_CLICKHOUSE_INSTALL #cmakedefine01 ENABLE_CLICKHOUSE_ODBC_BRIDGE #cmakedefine01 ENABLE_CLICKHOUSE_LIBRARY_BRIDGE +#cmakedefine01 ENABLE_CLICKHOUSE_KEEPER diff --git a/programs/keeper/CMakeLists.txt b/programs/keeper/CMakeLists.txt new file mode 100644 index 00000000000..e604d0e304e --- /dev/null +++ b/programs/keeper/CMakeLists.txt @@ -0,0 +1,24 @@ +set(CLICKHOUSE_KEEPER_SOURCES + Keeper.cpp +) + +if (OS_LINUX) + set (LINK_RESOURCE_LIB INTERFACE "-Wl,${WHOLE_ARCHIVE} $ -Wl,${NO_WHOLE_ARCHIVE}") +endif () + +set (CLICKHOUSE_KEEPER_LINK + PRIVATE + clickhouse_common_config + clickhouse_common_io + clickhouse_common_zookeeper + daemon + dbms + + ${LINK_RESOURCE_LIB} +) + +clickhouse_program_add(keeper) + +install (FILES keeper_config.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse-keeper" COMPONENT clickhouse-keeper) + +clickhouse_embed_binaries(keeper keeper_config.xml keeper_embedded.xml) diff --git a/programs/keeper/Keeper.cpp b/programs/keeper/Keeper.cpp new file mode 100644 index 00000000000..b9d87ba7fdb --- /dev/null +++ b/programs/keeper/Keeper.cpp @@ -0,0 +1,474 @@ +#include "Keeper.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#if !defined(ARCADIA_BUILD) +# include "config_core.h" +# include "Common/config_version.h" +#endif + +#if USE_SSL +# include +# include +#endif + +#if USE_NURAFT +# include +#endif + +#if defined(OS_LINUX) +# include +# include +#endif + + +int mainEntryClickHouseKeeper(int argc, char ** argv) +{ + DB::Keeper app; + + try + { + return app.run(argc, argv); + } + catch (...) + { + std::cerr << DB::getCurrentExceptionMessage(true) << "\n"; + auto code = DB::getCurrentExceptionCode(); + return code ? 
code : 1; + } +} + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NO_ELEMENTS_IN_CONFIG; + extern const int SUPPORT_IS_DISABLED; + extern const int NETWORK_ERROR; + extern const int MISMATCHING_USERS_FOR_PROCESS_AND_DATA; + extern const int FAILED_TO_GETPWUID; +} + +namespace +{ + +int waitServersToFinish(std::vector & servers, size_t seconds_to_wait) +{ + const int sleep_max_ms = 1000 * seconds_to_wait; + const int sleep_one_ms = 100; + int sleep_current_ms = 0; + int current_connections = 0; + for (;;) + { + current_connections = 0; + + for (auto & server : servers) + { + server.stop(); + current_connections += server.currentConnections(); + } + + if (!current_connections) + break; + + sleep_current_ms += sleep_one_ms; + if (sleep_current_ms < sleep_max_ms) + std::this_thread::sleep_for(std::chrono::milliseconds(sleep_one_ms)); + else + break; + } + return current_connections; +} + +Poco::Net::SocketAddress makeSocketAddress(const std::string & host, UInt16 port, Poco::Logger * log) +{ + Poco::Net::SocketAddress socket_address; + try + { + socket_address = Poco::Net::SocketAddress(host, port); + } + catch (const Poco::Net::DNSException & e) + { + const auto code = e.code(); + if (code == EAI_FAMILY +#if defined(EAI_ADDRFAMILY) + || code == EAI_ADDRFAMILY +#endif + ) + { + LOG_ERROR(log, "Cannot resolve listen_host ({}), error {}: {}. " + "If it is an IPv6 address and your host has disabled IPv6, then consider to " + "specify IPv4 address to listen in element of configuration " + "file. Example: 0.0.0.0", + host, e.code(), e.message()); + } + + throw; + } + return socket_address; +} + +[[noreturn]] void forceShutdown() +{ +#if defined(THREAD_SANITIZER) && defined(OS_LINUX) + /// Thread sanitizer tries to do something on exit that we don't need if we want to exit immediately, + /// while connection handling threads are still run. + (void)syscall(SYS_exit_group, 0); + __builtin_unreachable(); +#else + _exit(0); +#endif +} + +std::string getUserName(uid_t user_id) +{ + /// Try to convert user id into user name. 
+ auto buffer_size = sysconf(_SC_GETPW_R_SIZE_MAX); + if (buffer_size <= 0) + buffer_size = 1024; + std::string buffer; + buffer.reserve(buffer_size); + + struct passwd passwd_entry; + struct passwd * result = nullptr; + const auto error = getpwuid_r(user_id, &passwd_entry, buffer.data(), buffer_size, &result); + + if (error) + throwFromErrno("Failed to find user name for " + toString(user_id), ErrorCodes::FAILED_TO_GETPWUID, error); + else if (result) + return result->pw_name; + return toString(user_id); +} + +} + +Poco::Net::SocketAddress Keeper::socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure) const +{ + auto address = makeSocketAddress(host, port, &logger()); +#if !defined(POCO_CLICKHOUSE_PATCH) || POCO_VERSION < 0x01090100 + if (secure) + /// Bug in old (<1.9.1) poco, listen() after bind() with reusePort param will fail because have no implementation in SecureServerSocketImpl + /// https://github.com/pocoproject/poco/pull/2257 + socket.bind(address, /* reuseAddress = */ true); + else +#endif +#if POCO_VERSION < 0x01080000 + socket.bind(address, /* reuseAddress = */ true); +#else + socket.bind(address, /* reuseAddress = */ true, /* reusePort = */ config().getBool("listen_reuse_port", false)); +#endif + + socket.listen(/* backlog = */ config().getUInt("listen_backlog", 64)); + + return address; +} + +void Keeper::createServer(const std::string & listen_host, const char * port_name, bool listen_try, CreateServerFunc && func) const +{ + /// For testing purposes, user may omit tcp_port or http_port or https_port in configuration file. + if (!config().has(port_name)) + return; + + auto port = config().getInt(port_name); + try + { + func(port); + } + catch (const Poco::Exception &) + { + std::string message = "Listen [" + listen_host + "]:" + std::to_string(port) + " failed: " + getCurrentExceptionMessage(false); + + if (listen_try) + { + LOG_WARNING(&logger(), "{}. If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to " + "specify not disabled IPv4 or IPv6 address to listen in element of configuration " + "file. Example for disabled IPv6: 0.0.0.0 ." + " Example for disabled IPv4: ::", + message); + } + else + { + throw Exception{message, ErrorCodes::NETWORK_ERROR}; + } + } +} + +void Keeper::uninitialize() +{ + logger().information("shutting down"); + BaseDaemon::uninitialize(); +} + +int Keeper::run() +{ + if (config().hasOption("help")) + { + Poco::Util::HelpFormatter help_formatter(Keeper::options()); + auto header_str = fmt::format("{} [OPTION] [-- [ARG]...]\n" + "positional arguments can be used to rewrite config.xml properties, for example, --http_port=8010", + commandName()); + help_formatter.setHeader(header_str); + help_formatter.format(std::cout); + return 0; + } + if (config().hasOption("version")) + { + std::cout << DBMS_NAME << " keeper version " << VERSION_STRING << VERSION_OFFICIAL << "." 
<< std::endl; + return 0; + } + + return Application::run(); // NOLINT +} + +void Keeper::initialize(Poco::Util::Application & self) +{ + BaseDaemon::initialize(self); + logger().information("starting up"); + + LOG_INFO(&logger(), "OS Name = {}, OS Version = {}, OS Architecture = {}", + Poco::Environment::osName(), + Poco::Environment::osVersion(), + Poco::Environment::osArchitecture()); +} + +std::string Keeper::getDefaultConfigFileName() const +{ + return "keeper_config.xml"; +} + +void Keeper::defineOptions(Poco::Util::OptionSet & options) +{ + options.addOption( + Poco::Util::Option("help", "h", "show help and exit") + .required(false) + .repeatable(false) + .binding("help")); + options.addOption( + Poco::Util::Option("version", "V", "show version and exit") + .required(false) + .repeatable(false) + .binding("version")); + BaseDaemon::defineOptions(options); +} + +int Keeper::main(const std::vector & /*args*/) +{ + Poco::Logger * log = &logger(); + + UseSSL use_ssl; + + MainThreadStatus::getInstance(); + +#if !defined(NDEBUG) || !defined(__OPTIMIZE__) + LOG_WARNING(log, "Keeper was built in debug mode. It will work slowly."); +#endif + +#if defined(SANITIZER) + LOG_WARNING(log, "Keeper was built with sanitizer. It will work slowly."); +#endif + + auto shared_context = Context::createShared(); + global_context = Context::createGlobal(shared_context.get()); + + global_context->makeGlobalContext(); + global_context->setApplicationType(Context::ApplicationType::KEEPER); + + if (!config().has("keeper_server")) + throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "Keeper configuration ( section) not found in config"); + + + std::string path; + + if (config().has("keeper_server.storage_path")) + path = config().getString("keeper_server.storage_path"); + else if (config().has("keeper_server.log_storage_path")) + path = config().getString("keeper_server.log_storage_path"); + else if (config().has("keeper_server.snapshot_storage_path")) + path = config().getString("keeper_server.snapshot_storage_path"); + else + path = std::filesystem::path{KEEPER_DEFAULT_PATH}; + + + /// Check that the process user id matches the owner of the data. + const auto effective_user_id = geteuid(); + struct stat statbuf; + if (stat(path.c_str(), &statbuf) == 0 && effective_user_id != statbuf.st_uid) + { + const auto effective_user = getUserName(effective_user_id); + const auto data_owner = getUserName(statbuf.st_uid); + std::string message = "Effective user of the process (" + effective_user + + ") does not match the owner of the data (" + data_owner + ")."; + if (effective_user_id == 0) + { + message += " Run under 'sudo -u " + data_owner + "'."; + throw Exception(message, ErrorCodes::MISMATCHING_USERS_FOR_PROCESS_AND_DATA); + } + else + { + LOG_WARNING(log, message); + } + } + + const Settings & settings = global_context->getSettingsRef(); + + GlobalThreadPool::initialize(config().getUInt("max_thread_pool_size", 100)); + + static ServerErrorHandler error_handler; + Poco::ErrorHandler::set(&error_handler); + + /// Initialize DateLUT early, to not interfere with running time of first query. 
+ LOG_DEBUG(log, "Initializing DateLUT."); + DateLUT::instance(); + LOG_TRACE(log, "Initialized DateLUT with time zone '{}'.", DateLUT::instance().getTimeZone()); + + /// Don't want to use DNS cache + DNSResolver::instance().setDisableCacheFlag(); + + Poco::ThreadPool server_pool(3, config().getUInt("max_connections", 1024)); + + std::vector listen_hosts = DB::getMultipleValuesFromConfig(config(), "", "listen_host"); + + bool listen_try = config().getBool("listen_try", false); + if (listen_hosts.empty()) + { + listen_hosts.emplace_back("::1"); + listen_hosts.emplace_back("127.0.0.1"); + listen_try = true; + } + + auto servers = std::make_shared>(); + +#if USE_NURAFT + /// Initialize test keeper RAFT. Do nothing if no nu_keeper_server in config. + global_context->initializeKeeperStorageDispatcher(); + for (const auto & listen_host : listen_hosts) + { + /// TCP Keeper + const char * port_name = "keeper_server.tcp_port"; + createServer(listen_host, port_name, listen_try, [&](UInt16 port) + { + Poco::Net::ServerSocket socket; + auto address = socketBindListen(socket, listen_host, port); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + servers->emplace_back( + port_name, + std::make_unique( + new KeeperTCPHandlerFactory(*this, false), server_pool, socket, new Poco::Net::TCPServerParams)); + + LOG_INFO(log, "Listening for connections to Keeper (tcp): {}", address.toString()); + }); + + const char * secure_port_name = "keeper_server.tcp_port_secure"; + createServer(listen_host, secure_port_name, listen_try, [&](UInt16 port) + { +#if USE_SSL + Poco::Net::SecureServerSocket socket; + auto address = socketBindListen(socket, listen_host, port, /* secure = */ true); + socket.setReceiveTimeout(settings.receive_timeout); + socket.setSendTimeout(settings.send_timeout); + servers->emplace_back( + secure_port_name, + std::make_unique( + new KeeperTCPHandlerFactory(*this, true), server_pool, socket, new Poco::Net::TCPServerParams)); + LOG_INFO(log, "Listening for connections to Keeper with secure protocol (tcp_secure): {}", address.toString()); +#else + UNUSED(port); + throw Exception{"SSL support for TCP protocol is disabled because Poco library was built without NetSSL support.", + ErrorCodes::SUPPORT_IS_DISABLED}; +#endif + }); + } +#else + throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "ClickHouse keeper built without NuRaft library. Cannot use coordination."); +#endif + + for (auto & server : *servers) + server.start(); + + SCOPE_EXIT({ + LOG_INFO(log, "Shutting down."); + + global_context->shutdown(); + + LOG_DEBUG(log, "Waiting for current connections to Keeper to finish."); + int current_connections = 0; + for (auto & server : *servers) + { + server.stop(); + current_connections += server.currentConnections(); + } + + if (current_connections) + LOG_INFO(log, "Closed all listening sockets. Waiting for {} outstanding connections.", current_connections); + else + LOG_INFO(log, "Closed all listening sockets."); + + if (current_connections > 0) + current_connections = waitServersToFinish(*servers, config().getInt("shutdown_wait_unfinished", 5)); + + if (current_connections) + LOG_INFO(log, "Closed connections to Keeper. But {} remain. 
Probably some users cannot finish their connections after context shutdown.", current_connections); + else + LOG_INFO(log, "Closed connections to Keeper."); + + global_context->shutdownKeeperStorageDispatcher(); + + /// Wait server pool to avoid use-after-free of destroyed context in the handlers + server_pool.joinAll(); + + /** Explicitly destroy Context. It is more convenient than in destructor of Server, because logger is still available. + * At this moment, no one could own shared part of Context. + */ + global_context.reset(); + shared_context.reset(); + + LOG_DEBUG(log, "Destroyed global context."); + + if (current_connections) + { + LOG_INFO(log, "Will shutdown forcefully."); + forceShutdown(); + } + }); + + + buildLoggers(config(), logger()); + + LOG_INFO(log, "Ready for connections."); + + waitForTerminationRequest(); + + return Application::EXIT_OK; +} + + +void Keeper::logRevision() const +{ + Poco::Logger::root().information("Starting ClickHouse Keeper " + std::string{VERSION_STRING} + + " with revision " + std::to_string(ClickHouseRevision::getVersionRevision()) + + ", " + build_id_info + + ", PID " + std::to_string(getpid())); +} + + +} diff --git a/programs/keeper/Keeper.h b/programs/keeper/Keeper.h new file mode 100644 index 00000000000..e80fe10b61c --- /dev/null +++ b/programs/keeper/Keeper.h @@ -0,0 +1,69 @@ +#pragma once + +#include +#include + +namespace Poco +{ + namespace Net + { + class ServerSocket; + } +} + +namespace DB +{ + +/// standalone clickhouse-keeper server (replacement for ZooKeeper). Uses the same +/// config as clickhouse-server. Serves requests on TCP ports with or without +/// SSL using ZooKeeper protocol. +class Keeper : public BaseDaemon, public IServer +{ +public: + using ServerApplication::run; + + Poco::Util::LayeredConfiguration & config() const override + { + return BaseDaemon::config(); + } + + Poco::Logger & logger() const override + { + return BaseDaemon::logger(); + } + + ContextPtr context() const override + { + return global_context; + } + + bool isCancelled() const override + { + return BaseDaemon::isCancelled(); + } + + void defineOptions(Poco::Util::OptionSet & _options) override; + +protected: + void logRevision() const override; + + int run() override; + + void initialize(Application & self) override; + + void uninitialize() override; + + int main(const std::vector & args) override; + + std::string getDefaultConfigFileName() const override; + +private: + ContextPtr global_context; + + Poco::Net::SocketAddress socketBindListen(Poco::Net::ServerSocket & socket, const std::string & host, UInt16 port, [[maybe_unused]] bool secure = false) const; + + using CreateServerFunc = std::function; + void createServer(const std::string & listen_host, const char * port_name, bool listen_try, CreateServerFunc && func) const; +}; + +} diff --git a/programs/keeper/clickhouse-keeper.cpp b/programs/keeper/clickhouse-keeper.cpp new file mode 100644 index 00000000000..baa673f79ee --- /dev/null +++ b/programs/keeper/clickhouse-keeper.cpp @@ -0,0 +1,6 @@ +int mainEntryClickHouseKeeper(int argc, char ** argv); + +int main(int argc_, char ** argv_) +{ + return mainEntryClickHouseKeeper(argc_, argv_); +} diff --git a/programs/keeper/keeper_config.xml b/programs/keeper/keeper_config.xml new file mode 100644 index 00000000000..ef218c9f2d7 --- /dev/null +++ b/programs/keeper/keeper_config.xml @@ -0,0 +1,81 @@ + + + + trace + /var/log/clickhouse-keeper/clickhouse-keeper.log + /var/log/clickhouse-keeper/clickhouse-keeper.err.log + + 1000M + 10 + + + + 4096 + + + 
9181 + + + 1 + + /var/lib/clickhouse/coordination/logs + /var/lib/clickhouse/coordination/snapshots + + + 10000 + 30000 + information + + + + + + 1 + + + localhost + 44444 + + + + + + + + + + + + + /etc/clickhouse-keeper/server.crt + /etc/clickhouse-keeper/server.key + + /etc/clickhouse-keeper/dhparam.pem + none + true + true + sslv2,sslv3 + true + + + + diff --git a/programs/keeper/keeper_embedded.xml b/programs/keeper/keeper_embedded.xml new file mode 100644 index 00000000000..37edaedba80 --- /dev/null +++ b/programs/keeper/keeper_embedded.xml @@ -0,0 +1,21 @@ + + + trace + true + + + + 9181 + 1 + ./keeper_log + ./keeper_snapshot + + + + 1 + localhost + 44444 + + + + diff --git a/programs/main.cpp b/programs/main.cpp index cbb22b7a87b..ccdf4d50fb4 100644 --- a/programs/main.cpp +++ b/programs/main.cpp @@ -55,6 +55,9 @@ int mainEntryClickHouseObfuscator(int argc, char ** argv); #if ENABLE_CLICKHOUSE_GIT_IMPORT int mainEntryClickHouseGitImport(int argc, char ** argv); #endif +#if ENABLE_CLICKHOUSE_KEEPER +int mainEntryClickHouseKeeper(int argc, char ** argv); +#endif #if ENABLE_CLICKHOUSE_INSTALL int mainEntryClickHouseInstall(int argc, char ** argv); int mainEntryClickHouseStart(int argc, char ** argv); @@ -112,6 +115,9 @@ std::pair clickhouse_applications[] = #if ENABLE_CLICKHOUSE_GIT_IMPORT {"git-import", mainEntryClickHouseGitImport}, #endif +#if ENABLE_CLICKHOUSE_KEEPER + {"keeper", mainEntryClickHouseKeeper}, +#endif #if ENABLE_CLICKHOUSE_INSTALL {"install", mainEntryClickHouseInstall}, {"start", mainEntryClickHouseStart}, diff --git a/programs/server/CMakeLists.txt b/programs/server/CMakeLists.txt index 0dcfbce1c30..f7f76fdb450 100644 --- a/programs/server/CMakeLists.txt +++ b/programs/server/CMakeLists.txt @@ -31,37 +31,4 @@ clickhouse_program_add(server) install(FILES config.xml users.xml DESTINATION "${CLICKHOUSE_ETC_DIR}/clickhouse-server" COMPONENT clickhouse) -# TODO We actually need this on Mac, FreeBSD. -if (OS_LINUX) - # Embed default config files as a resource into the binary. - # This is needed for two purposes: - # 1. Allow to run the binary without download of any other files. - # 2. Allow to implement "sudo clickhouse install" tool. 
- - foreach(RESOURCE_FILE config.xml users.xml embedded.xml play.html) - set(RESOURCE_OBJ ${RESOURCE_FILE}.o) - set(RESOURCE_OBJS ${RESOURCE_OBJS} ${RESOURCE_OBJ}) - - # https://stackoverflow.com/questions/14776463/compile-and-add-an-object-file-from-a-binary-with-cmake - # PPC64LE fails to do this with objcopy, use ld or lld instead - if (ARCH_PPC64LE) - add_custom_command(OUTPUT ${RESOURCE_OBJ} - COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${CMAKE_LINKER} -m elf64lppc -r -b binary -o "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" ${RESOURCE_FILE}) - else() - add_custom_command(OUTPUT ${RESOURCE_OBJ} - COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ${OBJCOPY_PATH} -I binary ${OBJCOPY_ARCH_OPTIONS} ${RESOURCE_FILE} "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" - COMMAND ${OBJCOPY_PATH} --rename-section .data=.rodata,alloc,load,readonly,data,contents - "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}" "${CMAKE_CURRENT_BINARY_DIR}/${RESOURCE_OBJ}") - endif() - set_source_files_properties(${RESOURCE_OBJ} PROPERTIES EXTERNAL_OBJECT true GENERATED true) - endforeach(RESOURCE_FILE) - - add_library(clickhouse_server_configs STATIC ${RESOURCE_OBJS}) - set_target_properties(clickhouse_server_configs PROPERTIES LINKER_LANGUAGE C) - - # whole-archive prevents symbols from being discarded for unknown reason - # CMake can shuffle each of target_link_libraries arguments with other - # libraries in linker command. To avoid this we hardcode whole-archive - # library into single string. - add_dependencies(clickhouse-server-lib clickhouse_server_configs) -endif () +clickhouse_embed_binaries(server config.xml users.xml embedded.xml play.html) diff --git a/programs/server/config.xml b/programs/server/config.xml index df8a5266c39..75647b10416 100644 --- a/programs/server/config.xml +++ b/programs/server/config.xml @@ -362,6 +362,20 @@ bind_dn - template used to construct the DN to bind to. The resulting DN will be constructed by replacing all '{user_name}' substrings of the template with the actual user name during each authentication attempt. + user_dn_detection - section with LDAP search parameters for detecting the actual user DN of the bound user. + This is mainly used in search filters for further role mapping when the server is Active Directory. The + resulting user DN will be used when replacing '{user_dn}' substrings wherever they are allowed. By default, + user DN is set equal to bind DN, but once search is performed, it will be updated to the actual detected + user DN value. + base_dn - template used to construct the base DN for the LDAP search. + The resulting DN will be constructed by replacing all '{user_name}' and '{bind_dn}' substrings + of the template with the actual user name and bind DN during the LDAP search. + scope - scope of the LDAP search. + Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default). + search_filter - template used to construct the search filter for the LDAP search. + The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', and '{base_dn}' + substrings of the template with the actual user name, bind DN, and base DN during the LDAP search. + Note that the special characters must be escaped properly in XML. verification_cooldown - a period of time, in seconds, after a successful bind attempt, during which a user will be assumed to be successfully authenticated for all consecutive requests without contacting the LDAP server. 
Specify 0 (the default) to disable caching and force contacting the LDAP server for each authentication request. @@ -393,6 +407,17 @@ /path/to/tls_ca_cert_dir ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:AES256-GCM-SHA384 + Example (typical Active Directory with configured user DN detection for further role mapping): + + localhost + 389 + EXAMPLE\{user_name} + + CN=Users,DC=example,DC=com + (&(objectClass=user)(sAMAccountName={user_name})) + + no + --> @@ -444,15 +469,16 @@ There can be multiple 'role_mapping' sections defined inside the same 'ldap' section. All of them will be applied. base_dn - template used to construct the base DN for the LDAP search. - The resulting DN will be constructed by replacing all '{user_name}' and '{bind_dn}' substrings - of the template with the actual user name and bind DN during each LDAP search. + The resulting DN will be constructed by replacing all '{user_name}', '{bind_dn}', and '{user_dn}' + substrings of the template with the actual user name, bind DN, and user DN during each LDAP search. scope - scope of the LDAP search. Accepted values are: 'base', 'one_level', 'children', 'subtree' (the default). search_filter - template used to construct the search filter for the LDAP search. - The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', and '{base_dn}' - substrings of the template with the actual user name, bind DN, and base DN during each LDAP search. + The resulting filter will be constructed by replacing all '{user_name}', '{bind_dn}', '{user_dn}', and + '{base_dn}' substrings of the template with the actual user name, bind DN, user DN, and base DN during + each LDAP search. Note that the special characters must be escaped properly in XML. - attribute - attribute name whose values will be returned by the LDAP search. + attribute - attribute name whose values will be returned by the LDAP search. 'cn', by default. prefix - prefix that will be expected to be in front of each string in the original list of strings returned by the LDAP search. Prefix will be removed from the original strings and resulting strings will be treated as local role names. Empty, by default. 
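To make the role mapping above concrete, here is a hypothetical sketch (role, database, and group names are illustrative, not from this patch). With prefix 'clickhouse_', an LDAP group named 'clickhouse_qa' resolves to the local role name 'qa'; assuming mapped roles have to exist locally, they would be created up front:

``` sql
-- Create the local role that the stripped LDAP group name resolves to,
-- and give it the privileges the mapped users should receive:
CREATE ROLE qa;
GRANT SELECT ON mydb.* TO qa;
```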
@@ -471,6 +497,17 @@ clickhouse_ + Example (typical Active Directory with role mapping that relies on the detected user DN): + + my_ad_server + + CN=Users,DC=example,DC=com + CN + subtree + (&(objectClass=group)(member={user_dn})) + clickhouse_ + + --> diff --git a/src/Access/ExternalAuthenticators.cpp b/src/Access/ExternalAuthenticators.cpp index 0c4d2f417c9..d4100c4e520 100644 --- a/src/Access/ExternalAuthenticators.cpp +++ b/src/Access/ExternalAuthenticators.cpp @@ -20,13 +20,42 @@ namespace ErrorCodes namespace { -auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const String & name) +void parseLDAPSearchParams(LDAPClient::SearchParams & params, const Poco::Util::AbstractConfiguration & config, const String & prefix) +{ + const bool has_base_dn = config.has(prefix + ".base_dn"); + const bool has_search_filter = config.has(prefix + ".search_filter"); + const bool has_attribute = config.has(prefix + ".attribute"); + const bool has_scope = config.has(prefix + ".scope"); + + if (has_base_dn) + params.base_dn = config.getString(prefix + ".base_dn"); + + if (has_search_filter) + params.search_filter = config.getString(prefix + ".search_filter"); + + if (has_attribute) + params.attribute = config.getString(prefix + ".attribute"); + + if (has_scope) + { + auto scope = config.getString(prefix + ".scope"); + boost::algorithm::to_lower(scope); + + if (scope == "base") params.scope = LDAPClient::SearchParams::Scope::BASE; + else if (scope == "one_level") params.scope = LDAPClient::SearchParams::Scope::ONE_LEVEL; + else if (scope == "subtree") params.scope = LDAPClient::SearchParams::Scope::SUBTREE; + else if (scope == "children") params.scope = LDAPClient::SearchParams::Scope::CHILDREN; + else + throw Exception("Invalid value for 'scope' field of LDAP search parameters in '" + prefix + + "' section, must be one of 'base', 'one_level', 'subtree', or 'children'", ErrorCodes::BAD_ARGUMENTS); + } +} + +void parseLDAPServer(LDAPClient::Params & params, const Poco::Util::AbstractConfiguration & config, const String & name) { if (name.empty()) throw Exception("LDAP server name cannot be empty", ErrorCodes::BAD_ARGUMENTS); - LDAPClient::Params params; - const String ldap_server_config = "ldap_servers." 
+ name; const bool has_host = config.has(ldap_server_config + ".host"); @@ -34,6 +63,7 @@ auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const Str const bool has_bind_dn = config.has(ldap_server_config + ".bind_dn"); const bool has_auth_dn_prefix = config.has(ldap_server_config + ".auth_dn_prefix"); const bool has_auth_dn_suffix = config.has(ldap_server_config + ".auth_dn_suffix"); + const bool has_user_dn_detection = config.has(ldap_server_config + ".user_dn_detection"); const bool has_verification_cooldown = config.has(ldap_server_config + ".verification_cooldown"); const bool has_enable_tls = config.has(ldap_server_config + ".enable_tls"); const bool has_tls_minimum_protocol_version = config.has(ldap_server_config + ".tls_minimum_protocol_version"); @@ -66,6 +96,17 @@ auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const Str params.bind_dn = auth_dn_prefix + "{user_name}" + auth_dn_suffix; } + if (has_user_dn_detection) + { + if (!params.user_dn_detection) + { + params.user_dn_detection.emplace(); + params.user_dn_detection->attribute = "dn"; + } + + parseLDAPSearchParams(*params.user_dn_detection, config, ldap_server_config + ".user_dn_detection"); + } + if (has_verification_cooldown) params.verification_cooldown = std::chrono::seconds{config.getUInt64(ldap_server_config + ".verification_cooldown")}; @@ -143,14 +184,10 @@ auto parseLDAPServer(const Poco::Util::AbstractConfiguration & config, const Str } else params.port = (params.enable_tls == LDAPClient::Params::TLSEnable::YES ? 636 : 389); - - return params; } -auto parseKerberosParams(const Poco::Util::AbstractConfiguration & config) +void parseKerberosParams(GSSAcceptorContext::Params & params, const Poco::Util::AbstractConfiguration & config) { - GSSAcceptorContext::Params params; - Poco::Util::AbstractConfiguration::Keys keys; config.keys("kerberos", keys); @@ -180,12 +217,20 @@ auto parseKerberosParams(const Poco::Util::AbstractConfiguration & config) params.realm = config.getString("kerberos.realm", ""); params.principal = config.getString("kerberos.principal", ""); - - return params; } } +void parseLDAPRoleSearchParams(LDAPClient::RoleSearchParams & params, const Poco::Util::AbstractConfiguration & config, const String & prefix) +{ + parseLDAPSearchParams(params, config, prefix); + + const bool has_prefix = config.has(prefix + ".prefix"); + + if (has_prefix) + params.prefix = config.getString(prefix + ".prefix"); +} + void ExternalAuthenticators::reset() { std::scoped_lock lock(mutex); @@ -229,7 +274,8 @@ void ExternalAuthenticators::setConfiguration(const Poco::Util::AbstractConfigur { try { - ldap_client_params_blueprint.insert_or_assign(ldap_server_name, parseLDAPServer(config, ldap_server_name)); + ldap_client_params_blueprint.erase(ldap_server_name); + parseLDAPServer(ldap_client_params_blueprint.emplace(ldap_server_name, LDAPClient::Params{}).first->second, config, ldap_server_name); } catch (...) { @@ -240,7 +286,7 @@ void ExternalAuthenticators::setConfiguration(const Poco::Util::AbstractConfigur try { if (kerberos_keys_count > 0) - kerberos_params = parseKerberosParams(config); + parseKerberosParams(kerberos_params.emplace(), config); } catch (...) 
{ @@ -249,7 +295,7 @@ void ExternalAuthenticators::setConfiguration(const Poco::Util::AbstractConfigur } bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const BasicCredentials & credentials, - const LDAPClient::SearchParamsList * search_params, LDAPClient::SearchResultsList * search_results) const + const LDAPClient::RoleSearchParamsList * role_search_params, LDAPClient::SearchResultsList * role_search_results) const { std::optional params; std::size_t params_hash = 0; @@ -267,9 +313,9 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B params->password = credentials.getPassword(); params->combineCoreHash(params_hash); - if (search_params) + if (role_search_params) { - for (const auto & params_instance : *search_params) + for (const auto & params_instance : *role_search_params) { params_instance.combineHash(params_hash); } @@ -301,14 +347,14 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B // Ensure that search_params are compatible. ( - search_params == nullptr ? - entry.last_successful_search_results.empty() : - search_params->size() == entry.last_successful_search_results.size() + role_search_params == nullptr ? + entry.last_successful_role_search_results.empty() : + role_search_params->size() == entry.last_successful_role_search_results.size() ) ) { - if (search_results) - *search_results = entry.last_successful_search_results; + if (role_search_results) + *role_search_results = entry.last_successful_role_search_results; return true; } @@ -326,7 +372,7 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B } LDAPSimpleAuthClient client(params.value()); - const auto result = client.authenticate(search_params, search_results); + const auto result = client.authenticate(role_search_params, role_search_results); const auto current_check_timestamp = std::chrono::steady_clock::now(); // Update the cache, but only if this is the latest check and the server is still configured in a compatible way. @@ -345,9 +391,9 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B std::size_t new_params_hash = 0; new_params.combineCoreHash(new_params_hash); - if (search_params) + if (role_search_params) { - for (const auto & params_instance : *search_params) + for (const auto & params_instance : *role_search_params) { params_instance.combineHash(new_params_hash); } @@ -363,17 +409,17 @@ bool ExternalAuthenticators::checkLDAPCredentials(const String & server, const B entry.last_successful_params_hash = params_hash; entry.last_successful_authentication_timestamp = current_check_timestamp; - if (search_results) - entry.last_successful_search_results = *search_results; + if (role_search_results) + entry.last_successful_role_search_results = *role_search_results; else - entry.last_successful_search_results.clear(); + entry.last_successful_role_search_results.clear(); } else if ( entry.last_successful_params_hash != params_hash || ( - search_params == nullptr ? - !entry.last_successful_search_results.empty() : - search_params->size() != entry.last_successful_search_results.size() + role_search_params == nullptr ? 
+ !entry.last_successful_role_search_results.empty() : + role_search_params->size() != entry.last_successful_role_search_results.size() ) ) { diff --git a/src/Access/ExternalAuthenticators.h b/src/Access/ExternalAuthenticators.h index c8feea7eada..24f1f7b6528 100644 --- a/src/Access/ExternalAuthenticators.h +++ b/src/Access/ExternalAuthenticators.h @@ -34,7 +34,7 @@ public: // The name and readiness of the credentials must be verified before calling these. bool checkLDAPCredentials(const String & server, const BasicCredentials & credentials, - const LDAPClient::SearchParamsList * search_params = nullptr, LDAPClient::SearchResultsList * search_results = nullptr) const; + const LDAPClient::RoleSearchParamsList * role_search_params = nullptr, LDAPClient::SearchResultsList * role_search_results = nullptr) const; bool checkKerberosCredentials(const String & realm, const GSSAcceptorContext & credentials) const; GSSAcceptorContext::Params getKerberosParams() const; @@ -44,7 +44,7 @@ private: { std::size_t last_successful_params_hash = 0; std::chrono::steady_clock::time_point last_successful_authentication_timestamp; - LDAPClient::SearchResultsList last_successful_search_results; + LDAPClient::SearchResultsList last_successful_role_search_results; }; using LDAPCache = std::unordered_map; // user name -> cache entry @@ -58,4 +58,6 @@ private: std::optional kerberos_params; }; +void parseLDAPRoleSearchParams(LDAPClient::RoleSearchParams & params, const Poco::Util::AbstractConfiguration & config, const String & prefix); + } diff --git a/src/Access/LDAPAccessStorage.cpp b/src/Access/LDAPAccessStorage.cpp index b47a9b3e041..c1d54e8c9aa 100644 --- a/src/Access/LDAPAccessStorage.cpp +++ b/src/Access/LDAPAccessStorage.cpp @@ -68,34 +68,15 @@ void LDAPAccessStorage::setConfiguration(AccessControlManager * access_control_m common_roles_cfg.insert(role_names.begin(), role_names.end()); } - LDAPClient::SearchParamsList role_search_params_cfg; + LDAPClient::RoleSearchParamsList role_search_params_cfg; if (has_role_mapping) { Poco::Util::AbstractConfiguration::Keys all_keys; config.keys(prefix, all_keys); for (const auto & key : all_keys) { - if (key != "role_mapping" && key.find("role_mapping[") != 0) - continue; - - const String rm_prefix = prefix_str + key; - const String rm_prefix_str = rm_prefix + '.'; - role_search_params_cfg.emplace_back(); - auto & rm_params = role_search_params_cfg.back(); - - rm_params.base_dn = config.getString(rm_prefix_str + "base_dn", ""); - rm_params.search_filter = config.getString(rm_prefix_str + "search_filter", ""); - rm_params.attribute = config.getString(rm_prefix_str + "attribute", "cn"); - rm_params.prefix = config.getString(rm_prefix_str + "prefix", ""); - - auto scope = config.getString(rm_prefix_str + "scope", "subtree"); - boost::algorithm::to_lower(scope); - if (scope == "base") rm_params.scope = LDAPClient::SearchParams::Scope::BASE; - else if (scope == "one_level") rm_params.scope = LDAPClient::SearchParams::Scope::ONE_LEVEL; - else if (scope == "subtree") rm_params.scope = LDAPClient::SearchParams::Scope::SUBTREE; - else if (scope == "children") rm_params.scope = LDAPClient::SearchParams::Scope::CHILDREN; - else - throw Exception("Invalid value of 'scope' field in '" + key + "' section of LDAP user directory, must be one of 'base', 'one_level', 'subtree', or 'children'", ErrorCodes::BAD_ARGUMENTS); + if (key == "role_mapping" || key.find("role_mapping[") == 0) + parseLDAPRoleSearchParams(role_search_params_cfg.emplace_back(), config, prefix_str + key); } } @@ 
-364,7 +345,7 @@ std::set LDAPAccessStorage::mapExternalRolesNoLock(const LDAPClient::Sea bool LDAPAccessStorage::areLDAPCredentialsValidNoLock(const User & user, const Credentials & credentials, - const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & search_results) const + const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & role_search_results) const { if (!credentials.isReady()) return false; @@ -373,7 +354,7 @@ bool LDAPAccessStorage::areLDAPCredentialsValidNoLock(const User & user, const C return false; if (const auto * basic_credentials = dynamic_cast(&credentials)) - return external_authenticators.checkLDAPCredentials(ldap_server_name, *basic_credentials, &role_search_params, &search_results); + return external_authenticators.checkLDAPCredentials(ldap_server_name, *basic_credentials, &role_search_params, &role_search_results); return false; } diff --git a/src/Access/LDAPAccessStorage.h b/src/Access/LDAPAccessStorage.h index ea0ab47c225..33ac9f0a914 100644 --- a/src/Access/LDAPAccessStorage.h +++ b/src/Access/LDAPAccessStorage.h @@ -68,12 +68,12 @@ private: void updateAssignedRolesNoLock(const UUID & id, const String & user_name, const LDAPClient::SearchResultsList & external_roles) const; std::set mapExternalRolesNoLock(const LDAPClient::SearchResultsList & external_roles) const; bool areLDAPCredentialsValidNoLock(const User & user, const Credentials & credentials, - const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & search_results) const; + const ExternalAuthenticators & external_authenticators, LDAPClient::SearchResultsList & role_search_results) const; mutable std::recursive_mutex mutex; AccessControlManager * access_control_manager = nullptr; String ldap_server_name; - LDAPClient::SearchParamsList role_search_params; + LDAPClient::RoleSearchParamsList role_search_params; std::set common_role_names; // role name that should be granted to all users at all times mutable std::map external_role_hashes; // user name -> LDAPClient::SearchResultsList hash (most recently retrieved and processed) mutable std::map> users_per_roles; // role name -> user names (...it should be granted to; may but don't have to exist for common roles) diff --git a/src/Access/LDAPClient.cpp b/src/Access/LDAPClient.cpp index 5c4b7dd8d99..a8f9675774b 100644 --- a/src/Access/LDAPClient.cpp +++ b/src/Access/LDAPClient.cpp @@ -32,6 +32,11 @@ void LDAPClient::SearchParams::combineHash(std::size_t & seed) const boost::hash_combine(seed, static_cast(scope)); boost::hash_combine(seed, search_filter); boost::hash_combine(seed, attribute); +} + +void LDAPClient::RoleSearchParams::combineHash(std::size_t & seed) const +{ + SearchParams::combineHash(seed); boost::hash_combine(seed, prefix); } @@ -42,6 +47,9 @@ void LDAPClient::Params::combineCoreHash(std::size_t & seed) const boost::hash_combine(seed, bind_dn); boost::hash_combine(seed, user); boost::hash_combine(seed, password); + + if (user_dn_detection) + user_dn_detection->combineHash(seed); } LDAPClient::LDAPClient(const Params & params_) @@ -286,18 +294,33 @@ void LDAPClient::openConnection() if (params.enable_tls == LDAPClient::Params::TLSEnable::YES_STARTTLS) diag(ldap_start_tls_s(handle, nullptr, nullptr)); + final_user_name = escapeForLDAP(params.user); + final_bind_dn = replacePlaceholders(params.bind_dn, { {"{user_name}", final_user_name} }); + final_user_dn = final_bind_dn; // The default value... may be updated right after a successful bind. 
+ switch (params.sasl_mechanism) { case LDAPClient::Params::SASLMechanism::SIMPLE: { - const auto escaped_user_name = escapeForLDAP(params.user); - const auto bind_dn = replacePlaceholders(params.bind_dn, { {"{user_name}", escaped_user_name} }); - ::berval cred; cred.bv_val = const_cast(params.password.c_str()); cred.bv_len = params.password.size(); - diag(ldap_sasl_bind_s(handle, bind_dn.c_str(), LDAP_SASL_SIMPLE, &cred, nullptr, nullptr, nullptr)); + diag(ldap_sasl_bind_s(handle, final_bind_dn.c_str(), LDAP_SASL_SIMPLE, &cred, nullptr, nullptr, nullptr)); + + // Once bound, run the user DN search query and update the default value, if asked. + if (params.user_dn_detection) + { + const auto user_dn_search_results = search(*params.user_dn_detection); + + if (user_dn_search_results.empty()) + throw Exception("Failed to detect user DN: empty search results", ErrorCodes::LDAP_ERROR); + + if (user_dn_search_results.size() > 1) + throw Exception("Failed to detect user DN: more than one entry in the search results", ErrorCodes::LDAP_ERROR); + + final_user_dn = *user_dn_search_results.begin(); + } break; } @@ -316,6 +339,9 @@ void LDAPClient::closeConnection() noexcept ldap_unbind_ext_s(handle, nullptr, nullptr); handle = nullptr; + final_user_name.clear(); + final_bind_dn.clear(); + final_user_dn.clear(); } LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) @@ -333,10 +359,19 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) case SearchParams::Scope::CHILDREN: scope = LDAP_SCOPE_CHILDREN; break; } - const auto escaped_user_name = escapeForLDAP(params.user); - const auto bind_dn = replacePlaceholders(params.bind_dn, { {"{user_name}", escaped_user_name} }); - const auto base_dn = replacePlaceholders(search_params.base_dn, { {"{user_name}", escaped_user_name}, {"{bind_dn}", bind_dn} }); - const auto search_filter = replacePlaceholders(search_params.search_filter, { {"{user_name}", escaped_user_name}, {"{bind_dn}", bind_dn}, {"{base_dn}", base_dn} }); + const auto final_base_dn = replacePlaceholders(search_params.base_dn, { + {"{user_name}", final_user_name}, + {"{bind_dn}", final_bind_dn}, + {"{user_dn}", final_user_dn} + }); + + const auto final_search_filter = replacePlaceholders(search_params.search_filter, { + {"{user_name}", final_user_name}, + {"{bind_dn}", final_bind_dn}, + {"{user_dn}", final_user_dn}, + {"{base_dn}", final_base_dn} + }); + char * attrs[] = { const_cast(search_params.attribute.c_str()), nullptr }; ::timeval timeout = { params.search_timeout.count(), 0 }; LDAPMessage* msgs = nullptr; @@ -349,7 +384,7 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) } }); - diag(ldap_search_ext_s(handle, base_dn.c_str(), scope, search_filter.c_str(), attrs, 0, nullptr, nullptr, &timeout, params.search_limit, &msgs)); + diag(ldap_search_ext_s(handle, final_base_dn.c_str(), scope, final_search_filter.c_str(), attrs, 0, nullptr, nullptr, &timeout, params.search_limit, &msgs)); for ( auto * msg = ldap_first_message(handle, msgs); @@ -361,6 +396,27 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) { case LDAP_RES_SEARCH_ENTRY: { + // Extract DN separately, if the requested attribute is DN. 
+ if (boost::iequals("dn", search_params.attribute)) + { + BerElement * ber = nullptr; + + SCOPE_EXIT({ + if (ber) + { + ber_free(ber, 0); + ber = nullptr; + } + }); + + ::berval bv; + + diag(ldap_get_dn_ber(handle, msg, &ber, &bv)); + + if (bv.bv_val && bv.bv_len > 0) + result.emplace(bv.bv_val, bv.bv_len); + } + BerElement * ber = nullptr; SCOPE_EXIT({ @@ -471,12 +527,12 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams & search_params) return result; } -bool LDAPSimpleAuthClient::authenticate(const SearchParamsList * search_params, SearchResultsList * search_results) +bool LDAPSimpleAuthClient::authenticate(const RoleSearchParamsList * role_search_params, SearchResultsList * role_search_results) { if (params.user.empty()) throw Exception("LDAP authentication of a user with empty name is not allowed", ErrorCodes::BAD_ARGUMENTS); - if (!search_params != !search_results) + if (!role_search_params != !role_search_results) throw Exception("Cannot return LDAP search results", ErrorCodes::BAD_ARGUMENTS); // Silently reject authentication attempt if the password is empty as if it didn't match. @@ -489,21 +545,21 @@ bool LDAPSimpleAuthClient::authenticate(const SearchParamsList * search_params, openConnection(); // While connected, run search queries and save the results, if asked. - if (search_params) + if (role_search_params) { - search_results->clear(); - search_results->reserve(search_params->size()); + role_search_results->clear(); + role_search_results->reserve(role_search_params->size()); try { - for (const auto & single_search_params : *search_params) + for (const auto & params_instance : *role_search_params) { - search_results->emplace_back(search(single_search_params)); + role_search_results->emplace_back(search(params_instance)); } } catch (...) 
            {
-                search_results->clear();
+                role_search_results->clear();
                 throw;
             }
         }
@@ -532,7 +588,7 @@ LDAPClient::SearchResults LDAPClient::search(const SearchParams &)
 {
     throw Exception("ClickHouse was built without LDAP support", ErrorCodes::FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME);
 }
 
-bool LDAPSimpleAuthClient::authenticate(const SearchParamsList *, SearchResultsList *)
+bool LDAPSimpleAuthClient::authenticate(const RoleSearchParamsList *, SearchResultsList *)
 {
     throw Exception("ClickHouse was built without LDAP support", ErrorCodes::FEATURE_IS_NOT_ENABLED_AT_BUILD_TIME);
 }
diff --git a/src/Access/LDAPClient.h b/src/Access/LDAPClient.h
index 4fc97bb957b..388e7ad0f0d 100644
--- a/src/Access/LDAPClient.h
+++ b/src/Access/LDAPClient.h
@@ -38,12 +38,20 @@ public:
         Scope scope = Scope::SUBTREE;
         String search_filter;
         String attribute = "cn";
+
+        void combineHash(std::size_t & seed) const;
+    };
+
+    struct RoleSearchParams
+        : public SearchParams
+    {
         String prefix;
 
         void combineHash(std::size_t & seed) const;
     };
 
-    using SearchParamsList = std::vector<SearchParams>;
+    using RoleSearchParamsList = std::vector<RoleSearchParams>;
+
     using SearchResults = std::set<String>;
     using SearchResultsList = std::vector<SearchResults>;
 
@@ -105,6 +113,8 @@ public:
         String user;
         String password;
 
+        std::optional<SearchParams> user_dn_detection;
+
         std::chrono::seconds verification_cooldown{0};
 
         std::chrono::seconds operation_timeout{40};
@@ -134,6 +144,9 @@ protected:
 #if USE_LDAP
     LDAP * handle = nullptr;
 #endif
+    String final_user_name;
+    String final_bind_dn;
+    String final_user_dn;
 };
 
 class LDAPSimpleAuthClient
@@ -141,7 +154,7 @@ class LDAPSimpleAuthClient
 {
 public:
     using LDAPClient::LDAPClient;
-    bool authenticate(const SearchParamsList * search_params, SearchResultsList * search_results);
+    bool authenticate(const RoleSearchParamsList * role_search_params, SearchResultsList * role_search_results);
 };
 
 }
diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index 5cf6439744b..938ce32ff62 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -188,6 +188,7 @@ add_object_library(clickhouse_interpreters_clusterproxy Interpreters/ClusterProx
 add_object_library(clickhouse_interpreters_jit Interpreters/JIT)
 add_object_library(clickhouse_columns Columns)
 add_object_library(clickhouse_storages Storages)
+add_object_library(clickhouse_storages_mysql Storages/MySQL)
 add_object_library(clickhouse_storages_distributed Storages/Distributed)
 add_object_library(clickhouse_storages_mergetree Storages/MergeTree)
 add_object_library(clickhouse_storages_liveview Storages/LiveView)
diff --git a/src/Common/Allocator.h b/src/Common/Allocator.h
index ebfd654d558..f20f11889ce 100644
--- a/src/Common/Allocator.h
+++ b/src/Common/Allocator.h
@@ -99,9 +99,17 @@ public:
     /// Free memory range.
     void free(void * buf, size_t size)
     {
-        checkSize(size);
-        freeNoTrack(buf, size);
-        CurrentMemoryTracker::free(size);
+        try
+        {
+            checkSize(size);
+            freeNoTrack(buf, size);
+            CurrentMemoryTracker::free(size);
+        }
+        catch (...)
+        {
+            DB::tryLogCurrentException("Allocator::free");
+            throw;
+        }
     }
 
     /** Enlarge memory range.
diff --git a/src/Common/Config/ConfigProcessor.cpp b/src/Common/Config/ConfigProcessor.cpp
index 39ab407579d..bc2a8a27943 100644
--- a/src/Common/Config/ConfigProcessor.cpp
+++ b/src/Common/Config/ConfigProcessor.cpp
@@ -462,10 +462,19 @@ XMLDocumentPtr ConfigProcessor::processConfig(
     }
     else
     {
-        /// When we can use config embedded in binary.
+        /// These embedded files are added during build with some cmake magic.
+        /// Look at the end of programs/server/CMakeLists.txt.
+        std::string embedded_name;
         if (path == "config.xml")
+            embedded_name = "embedded.xml";
+
+        if (path == "keeper_config.xml")
+            embedded_name = "keeper_embedded.xml";
+
+        /// When we can use config embedded in binary.
+        if (!embedded_name.empty())
         {
-            auto resource = getResource("embedded.xml");
+            auto resource = getResource(embedded_name);
             if (resource.empty())
                 throw Exception(ErrorCodes::FILE_DOESNT_EXIST, "Configuration file {} doesn't exist and there is no embedded config", path);
             LOG_DEBUG(log, "There is no file '{}', will use embedded config.", path);
diff --git a/src/Common/NamePrompter.h b/src/Common/NamePrompter.h
index 5f7832c4423..bb17f860f29 100644
--- a/src/Common/NamePrompter.h
+++ b/src/Common/NamePrompter.h
@@ -90,17 +90,16 @@ private:
     }
 };
 
-template
+
+template
 class IHints
 {
 public:
-
     virtual std::vector<String> getAllRegisteredNames() const = 0;
 
     std::vector<String> getHints(const String & name) const
     {
-        static const auto registered_names = getAllRegisteredNames();
-        return prompter.getHints(name, registered_names);
+        return prompter.getHints(name, getAllRegisteredNames());
     }
 
     virtual ~IHints() = default;
diff --git a/src/Common/PODArray.h b/src/Common/PODArray.h
index 8f6414b3fa9..d90ac0469c5 100644
--- a/src/Common/PODArray.h
+++ b/src/Common/PODArray.h
@@ -513,7 +513,7 @@ public:
         insertPrepare(from_begin, from_end);
 
         if (unlikely(bytes_to_move))
-            memcpy(this->c_end + bytes_to_copy - bytes_to_move, this->c_end - bytes_to_move, bytes_to_move);
+            memmove(this->c_end + bytes_to_copy - bytes_to_move, this->c_end - bytes_to_move, bytes_to_move);
 
         memcpy(this->c_end - bytes_to_move, reinterpret_cast<const void *>(&*from_begin), bytes_to_copy);
diff --git a/src/Common/StringUtils/StringUtils.h b/src/Common/StringUtils/StringUtils.h
index cb2227f01a8..20c0a5ca380 100644
--- a/src/Common/StringUtils/StringUtils.h
+++ b/src/Common/StringUtils/StringUtils.h
@@ -123,7 +123,7 @@ inline bool isWhitespaceASCII(char c)
 
 /// Since |isWhiteSpaceASCII()| is used inside algorithms it's easier to implement another function than add extra argument.
 inline bool isWhitespaceASCIIOneLine(char c)
 {
-    return c == ' ' || c == '\t' || c == '\r' || c == '\f' || c == '\v';
+    return c == ' ' || c == '\t' || c == '\f' || c == '\v';
 }
 
 inline bool isControlASCII(char c)
diff --git a/src/Common/examples/integer_hash_tables_benchmark.cpp b/src/Common/examples/integer_hash_tables_benchmark.cpp
index cb7467ce909..f81add6d6cb 100644
--- a/src/Common/examples/integer_hash_tables_benchmark.cpp
+++ b/src/Common/examples/integer_hash_tables_benchmark.cpp
@@ -61,6 +61,10 @@ static void NO_INLINE testForType(size_t method, size_t rows_size)
         test<Key, ::absl::flat_hash_map<Key, UInt64>>(data.data(), data.size(), "Abseil HashMap");
     }
     else if (method == 3)
+    {
+        test<Key, ::absl::flat_hash_map<Key, UInt64, DefaultHash<Key>>>(data.data(), data.size(), "Abseil HashMap with CH Hash");
+    }
+    else if (method == 4)
     {
         test<Key, std::unordered_map<Key, UInt64>>(data.data(), data.size(), "std::unordered_map");
     }
@@ -81,50 +85,110 @@ static void NO_INLINE testForType(size_t method, size_t rows_size)
  * ./integer_hash_tables_benchmark 1 $2 100000000 < $1
  * ./integer_hash_tables_benchmark 2 $2 100000000 < $1
  * ./integer_hash_tables_benchmark 3 $2 100000000 < $1
+ * ./integer_hash_tables_benchmark 4 $2 100000000 < $1
  *
- * Results of this benchmark on hits_100m_obfuscated
+ * Results of this benchmark on hits_100m_obfuscated X86-64
  *
  * File hits_100m_obfuscated/201307_1_96_4/WatchID.bin
- * CH HashMap: Elapsed: 7.366 (13575745.933 elem/sec.), map size: 99997493
- * Google DenseMap: Elapsed: 10.089 (9911817.125 elem/sec.), map size: 99997493
- * Abseil HashMap: Elapsed: 9.011 (11097794.073 elem/sec.), map size: 99997493
- * std::unordered_map: Elapsed: 44.758 (2234223.189 elem/sec.), map size: 99997493
+ * CH HashMap: Elapsed: 7.416 (13484217.815 elem/sec.), map size: 99997493
+ * Google DenseMap: Elapsed: 10.303 (9706022.031 elem/sec.), map size: 99997493
+ * Abseil HashMap: Elapsed: 9.106 (10982139.229 elem/sec.), map size: 99997493
+ * Abseil HashMap with CH Hash: Elapsed: 9.221 (10845360.669 elem/sec.), map size: 99997493
+ * std::unordered_map: Elapsed: 45.213 (2211758.706 elem/sec.), map size: 99997493
 *
 * File hits_100m_obfuscated/201307_1_96_4/URLHash.bin
- * CH HashMap: Elapsed: 2.672 (37421588.347 elem/sec.), map size: 20714865
- * Google DenseMap: Elapsed: 3.409 (29333308.209 elem/sec.), map size: 20714865
- * Abseil HashMap: Elapsed: 2.778 (36000540.035 elem/sec.), map size: 20714865
- * std::unordered_map: Elapsed: 8.643 (11570012.207 elem/sec.), map size: 20714865
+ * CH HashMap: Elapsed: 2.620 (38168135.308 elem/sec.), map size: 20714865
+ * Google DenseMap: Elapsed: 3.426 (29189309.058 elem/sec.), map size: 20714865
+ * Abseil HashMap: Elapsed: 2.788 (35870495.097 elem/sec.), map size: 20714865
+ * Abseil HashMap with CH Hash: Elapsed: 2.991 (33428850.155 elem/sec.), map size: 20714865
+ * std::unordered_map: Elapsed: 8.503 (11760331.346 elem/sec.), map size: 20714865
 *
 * File hits_100m_obfuscated/201307_1_96_4/UserID.bin
- * CH HashMap: Elapsed: 2.116 (47267659.076 elem/sec.), map size: 17630976
- * Google DenseMap: Elapsed: 2.722 (36740693.786 elem/sec.), map size: 17630976
- * Abseil HashMap: Elapsed: 2.597 (38509988.663 elem/sec.), map size: 17630976
- * std::unordered_map: Elapsed: 7.327 (13647271.471 elem/sec.), map size: 17630976
+ * CH HashMap: Elapsed: 2.157 (46352039.753 elem/sec.), map size: 17630976
+ * Google DenseMap: Elapsed: 2.725 (36694226.782 elem/sec.), map size: 17630976
+ * Abseil HashMap: Elapsed: 2.590 (38604284.187 elem/sec.), map size: 17630976
+ * Abseil HashMap with CH Hash: Elapsed: 2.785 (35904856.137 elem/sec.), map size: 17630976
+ * std::unordered_map: Elapsed: 7.268 (13759557.609 elem/sec.), map size: 17630976
 *
 * File hits_100m_obfuscated/201307_1_96_4/RegionID.bin
- * CH HashMap: Elapsed: 0.201 (498144193.695 elem/sec.), map size: 9040
- * Google DenseMap: Elapsed: 0.261 (382656387.016 elem/sec.), map size: 9046
- * Abseil HashMap: Elapsed: 0.307 (325874545.117 elem/sec.), map size: 9040
- * std::unordered_map: Elapsed: 0.466 (214379083.420 elem/sec.), map size: 9040
+ * CH HashMap: Elapsed: 0.192 (521583315.810 elem/sec.), map size: 9040
+ * Google DenseMap: Elapsed: 0.297 (337081407.799 elem/sec.), map size: 9046
+ * Abseil HashMap: Elapsed: 0.295 (338805623.511 elem/sec.), map size: 9040
+ * Abseil HashMap with CH Hash: Elapsed: 0.331 (302155391.036 elem/sec.), map size: 9040
+ * std::unordered_map: Elapsed: 0.455 (219971555.390 elem/sec.), map size: 9040
 *
 * File hits_100m_obfuscated/201307_1_96_4/CounterID.bin
- * CH HashMap: Elapsed: 0.220 (455344735.648 elem/sec.), map size: 6506
- * Google DenseMap: Elapsed: 0.297 (336187522.818 elem/sec.), map size: 6506
- * Abseil HashMap: Elapsed: 0.307 (325264214.480 elem/sec.), map size: 6506
- * std::unordered_map: Elapsed: 0.389 (257195996.114 elem/sec.), map size: 6506
+ * CH HashMap: Elapsed: 0.217 (460216823.609 elem/sec.), map size: 6506
+ * Google DenseMap: Elapsed: 0.373 (267838665.098 elem/sec.), map size: 6506
+ * Abseil HashMap: Elapsed: 0.325 (308124728.989 elem/sec.), map size: 6506
+ * Abseil HashMap with CH Hash: Elapsed: 0.354 (282167144.801 elem/sec.), map size: 6506
+ * std::unordered_map: Elapsed: 0.390 (256573354.171 elem/sec.), map size: 6506
 *
 * File hits_100m_obfuscated/201307_1_96_4/TraficSourceID.bin
- * CH HashMap: Elapsed: 0.274 (365196673.729 elem/sec.), map size: 10
- * Google DenseMap: Elapsed: 0.782 (127845746.927 elem/sec.), map size: 1565609 /// Broken because there is 0 key in dataset
- * Abseil HashMap: Elapsed: 0.303 (330461565.053 elem/sec.), map size: 10
- * std::unordered_map: Elapsed: 0.843 (118596530.649 elem/sec.), map size: 10
+ * CH HashMap: Elapsed: 0.246 (406714566.282 elem/sec.), map size: 10
+ * Google DenseMap: Elapsed: 0.760 (131615151.233 elem/sec.), map size: 1565609 /// Broken because there is 0 key in dataset
+ * Abseil HashMap: Elapsed: 0.309 (324068156.680 elem/sec.), map size: 10
+ * Abseil HashMap with CH Hash: Elapsed: 0.339 (295108223.814 elem/sec.), map size: 10
+ * std::unordered_map: Elapsed: 0.811 (123304031.195 elem/sec.), map size: 10
 *
 * File hits_100m_obfuscated/201307_1_96_4/AdvEngineID.bin
- * CH HashMap: Elapsed: 0.160 (623399865.019 elem/sec.), map size: 19
- * Google DenseMap: Elapsed: 1.673 (59757144.027 elem/sec.), map size: 32260732 /// Broken because there is 0 key in dataset
- * Abseil HashMap: Elapsed: 0.297 (336589258.845 elem/sec.), map size: 19
- * std::unordered_map: Elapsed: 0.332 (301114451.384 elem/sec.), map size: 19
+ * CH HashMap: Elapsed: 0.155 (643245257.748 elem/sec.), map size: 19
+ * Google DenseMap: Elapsed: 1.629 (61395025.417 elem/sec.), map size: 32260732 // Broken because there is 0 key in dataset
+ * Abseil HashMap: Elapsed: 0.292 (342765027.204 elem/sec.), map size: 19
+ * Abseil HashMap with CH Hash: Elapsed: 0.330 (302822020.210 elem/sec.), map size: 19
+ * std::unordered_map: Elapsed: 0.308 (325059333.730 elem/sec.), map size: 19
+ *
+ *
+ * Results of this benchmark on hits_100m_obfuscated AARCH64
+ *
+ * File hits_100m_obfuscated/201307_1_96_4/WatchID.bin
+ * CH HashMap: Elapsed: 9.530 (10493528.533 elem/sec.), map size: 99997493
+ * Google DenseMap: Elapsed: 14.436 (6927091.135 elem/sec.), map size: 99997493
+ * Abseil HashMap: Elapsed: 16.671 (5998504.085 elem/sec.), map size: 99997493
+ * Abseil HashMap with CH Hash: Elapsed: 16.803 (5951365.711 elem/sec.), map size: 99997493
+ * std::unordered_map: Elapsed: 50.805 (1968305.658 elem/sec.), map size: 99997493
+ *
+ * File hits_100m_obfuscated/201307_1_96_4/URLHash.bin
+ * CH HashMap: Elapsed: 3.693 (27076878.092 elem/sec.), map size: 20714865
+ * Google DenseMap: Elapsed: 5.051 (19796401.694 elem/sec.), map size: 20714865
+ * Abseil HashMap: Elapsed: 5.617 (17804528.625 elem/sec.), map size: 20714865
+ * Abseil HashMap with CH Hash: Elapsed: 5.702 (17537013.639 elem/sec.), map size: 20714865
+ * std::unordered_map: Elapsed: 10.757 (9296040.953 elem/sec.), map size: 20714865
+ *
+ * File hits_100m_obfuscated/201307_1_96_4/UserID.bin
+ * CH HashMap: Elapsed: 2.982 (33535795.695 elem/sec.), map size: 17630976
+ * Google DenseMap: Elapsed: 3.940 (25381557.959 elem/sec.), map size: 17630976
+ * Abseil HashMap: Elapsed: 4.493 (22259078.458 elem/sec.), map size: 17630976
+ * Abseil HashMap with CH Hash: Elapsed: 4.596 (21759738.710 elem/sec.), map size: 17630976
+ * std::unordered_map: Elapsed: 9.035 (11067903.596 elem/sec.), map size: 17630976
+ *
+ * File hits_100m_obfuscated/201307_1_96_4/RegionID.bin
+ * CH HashMap: Elapsed: 0.302 (331026285.361 elem/sec.), map size: 9040
+ * Google DenseMap: Elapsed: 0.623 (160419421.840 elem/sec.), map size: 9046
+ * Abseil HashMap: Elapsed: 0.981 (101971186.758 elem/sec.), map size: 9040
+ * Abseil HashMap with CH Hash: Elapsed: 0.991 (100932993.199 elem/sec.), map size: 9040
+ * std::unordered_map: Elapsed: 0.809 (123541402.715 elem/sec.), map size: 9040
+ *
+ * File hits_100m_obfuscated/201307_1_96_4/CounterID.bin
+ * CH HashMap: Elapsed: 0.343 (291821742.078 elem/sec.), map size: 6506
+ * Google DenseMap: Elapsed: 0.718 (139191105.450 elem/sec.), map size: 6506
+ * Abseil HashMap: Elapsed: 1.019 (98148285.278 elem/sec.), map size: 6506
+ * Abseil HashMap with CH Hash: Elapsed: 1.048 (95446843.667 elem/sec.), map size: 6506
+ * std::unordered_map: Elapsed: 0.701 (142701070.085 elem/sec.), map size: 6506
+ *
+ * File hits_100m_obfuscated/201307_1_96_4/TraficSourceID.bin
+ * CH HashMap: Elapsed: 0.376 (265905243.103 elem/sec.), map size: 10
+ * Google DenseMap: Elapsed: 1.309 (76420707.298 elem/sec.), map size: 1565609 /// Broken because there is 0 key in dataset
+ * Abseil HashMap: Elapsed: 0.955 (104668109.775 elem/sec.), map size: 10
+ * Abseil HashMap with CH Hash: Elapsed: 0.967 (103456305.391 elem/sec.), map size: 10
+ * std::unordered_map: Elapsed: 1.241 (80591305.890 elem/sec.), map size: 10
+ *
+ * File hits_100m_obfuscated/201307_1_96_4/AdvEngineID.bin
+ * CH HashMap: Elapsed: 0.213 (470208130.105 elem/sec.), map size: 19
+ * Google DenseMap: Elapsed: 2.525 (39607131.523 elem/sec.), map size: 32260732 /// Broken because there is 0 key in dataset
+ * Abseil HashMap: Elapsed: 0.950 (105233678.618 elem/sec.), map size: 19
+ * Abseil HashMap with CH Hash: Elapsed: 0.962 (104001230.717 elem/sec.), map size: 19
+ * std::unordered_map: Elapsed: 0.585 (171059989.837 elem/sec.), map size: 19
 */
 
 int main(int argc, char ** argv)
diff --git a/src/Common/isLocalAddress.cpp b/src/Common/isLocalAddress.cpp
index 8da281e3051..4167a24ab21 100644
--- a/src/Common/isLocalAddress.cpp
+++ b/src/Common/isLocalAddress.cpp
@@ -1,29 +1,88 @@
 #include <Common/isLocalAddress.h>
 
+#include <ifaddrs.h>
 #include <cstring>
+#include <optional>
 #include <common/types.h>
-#include <Poco/Util/Application.h>
-#include <Poco/Net/NetworkInterface.h>
+#include <Common/Exception.h>
+#include <Poco/Net/IPAddress.h>
 #include <Poco/Net/SocketAddress.h>
 
 
 namespace DB
 {
 
+namespace ErrorCodes
+{
+    extern const int SYSTEM_ERROR;
+}
+
+namespace
+{
+
+struct NetworkInterfaces
+{
+    ifaddrs * ifaddr;
+    NetworkInterfaces()
+    {
+        if (getifaddrs(&ifaddr) == -1)
+        {
+            throwFromErrno("Cannot getifaddrs", ErrorCodes::SYSTEM_ERROR);
+        }
+    }
+
+    bool hasAddress(const Poco::Net::IPAddress & address) const
+    {
+        ifaddrs * iface;
+        for (iface = ifaddr; iface != nullptr; iface = iface->ifa_next)
+        {
+            /// Point-to-point (VPN) addresses may have NULL ifa_addr
+            if (!iface->ifa_addr)
+                continue;
+
+            auto family = iface->ifa_addr->sa_family;
+            std::optional<Poco::Net::IPAddress> interface_address;
+            switch (family)
+            {
+                /// We are interested only in IP addresses
+                case AF_INET:
+                {
+                    interface_address.emplace(*(iface->ifa_addr));
+                    break;
+                }
+                case AF_INET6:
+                {
+                    interface_address.emplace(&reinterpret_cast<const struct sockaddr_in6 *>(iface->ifa_addr)->sin6_addr, sizeof(struct in6_addr));
+                    break;
+                }
+                default:
+                    continue;
+            }
+
+            /** Compare the addresses without taking into account `scope`.
+              * Theoretically, this may not be correct - depends on `route` setting
+              *  - through which interface we will actually access the specified address.
+              */
+            if (interface_address->length() == address.length()
+                && 0 == memcmp(interface_address->addr(), address.addr(), address.length()))
+                return true;
+        }
+        return false;
+    }
+
+    ~NetworkInterfaces()
+    {
+        freeifaddrs(ifaddr);
+    }
+};
+
+}
+
+
 bool isLocalAddress(const Poco::Net::IPAddress & address)
 {
-    static auto interfaces = Poco::Net::NetworkInterface::list();
-
-    return interfaces.end() != std::find_if(interfaces.begin(), interfaces.end(),
-        [&] (const Poco::Net::NetworkInterface & interface)
-        {
-            /** Compare the addresses without taking into account `scope`.
-              * Theoretically, this may not be correct - depends on `route` setting
-              *  - through which interface we will actually access the specified address.
-              */
-            return interface.address().length() == address.length()
-                && 0 == memcmp(interface.address().addr(), address.addr(), address.length());
-        });
+    NetworkInterfaces interfaces;
+    return interfaces.hasAddress(address);
 }
 
 bool isLocalAddress(const Poco::Net::SocketAddress & address, UInt16 clickhouse_port)
diff --git a/src/Common/tests/gtest_local_address.cpp b/src/Common/tests/gtest_local_address.cpp
new file mode 100644
index 00000000000..504fba19713
--- /dev/null
+++ b/src/Common/tests/gtest_local_address.cpp
@@ -0,0 +1,19 @@
+#include <gtest/gtest.h>
+#include <Common/isLocalAddress.h>
+#include <Common/ShellCommand.h>
+#include <IO/ReadHelpers.h>
+#include <Poco/Net/IPAddress.h>
+
+
+TEST(LocalAddress, SmokeTest)
+{
+    auto cmd = DB::ShellCommand::executeDirect("/bin/hostname", {"-i"});
+    std::string address_str;
+    DB::readString(address_str, cmd->out);
+    cmd->wait();
+    std::cerr << "Got Address:" << address_str << std::endl;
+
+    Poco::Net::IPAddress address(address_str);
+
+    EXPECT_TRUE(DB::isLocalAddress(address));
+}
diff --git a/src/Common/tests/gtest_pod_array.cpp b/src/Common/tests/gtest_pod_array.cpp
index 74fbf447f29..b5ec45c3e5d 100644
--- a/src/Common/tests/gtest_pod_array.cpp
+++ b/src/Common/tests/gtest_pod_array.cpp
@@ -419,31 +419,56 @@ TEST(Common, PODArrayBasicSwapMoveConstructor)
 
 TEST(Common, PODArrayInsert)
 {
-    std::string str = "test_string_abacaba";
-    PODArray<char> chars;
-    chars.insert(chars.end(), str.begin(), str.end());
-    EXPECT_EQ(str, std::string(chars.data(), chars.size()));
-
-    std::string insert_in_the_middle = "insert_in_the_middle";
-    auto pos = str.size() / 2;
-    str.insert(str.begin() + pos, insert_in_the_middle.begin(), insert_in_the_middle.end());
-    chars.insert(chars.begin() + pos, insert_in_the_middle.begin(), insert_in_the_middle.end());
-    EXPECT_EQ(str, std::string(chars.data(), chars.size()));
-
-    std::string insert_with_resize;
-    insert_with_resize.reserve(chars.capacity() * 2);
-    char cur_char = 'a';
-    while (insert_with_resize.size() < insert_with_resize.capacity()) {
-        insert_with_resize += cur_char;
-        if (cur_char == 'z')
-            cur_char = 'a';
-        else
-            ++cur_char;
+    {
+        std::string str = "test_string_abacaba";
+        PODArray<char> chars;
+        chars.insert(chars.end(), str.begin(), str.end());
+        EXPECT_EQ(str, std::string(chars.data(), chars.size()));
+
+        std::string insert_in_the_middle = "insert_in_the_middle";
+        auto pos = str.size() / 2;
+        str.insert(str.begin() + pos, insert_in_the_middle.begin(), insert_in_the_middle.end());
+        chars.insert(chars.begin() + pos, insert_in_the_middle.begin(), insert_in_the_middle.end());
+        EXPECT_EQ(str, std::string(chars.data(), chars.size()));
+
+        std::string insert_with_resize;
+        insert_with_resize.reserve(chars.capacity() * 2);
+        char cur_char = 'a';
+        while (insert_with_resize.size() < insert_with_resize.capacity())
+        {
+            insert_with_resize += cur_char;
+            if (cur_char == 'z')
+                cur_char = 'a';
+            else
+                ++cur_char;
+        }
+        str.insert(str.begin(), insert_with_resize.begin(), insert_with_resize.end());
+        chars.insert(chars.begin(), insert_with_resize.begin(), insert_with_resize.end());
+        EXPECT_EQ(str, std::string(chars.data(), chars.size()));
+    }
+    {
+        PODArray<UInt64> values;
+        PODArray<UInt64> values_to_insert;
+
+        for (size_t i = 0; i < 120; ++i)
+            values.emplace_back(i);
+
+        values.insert(values.begin() + 1, values_to_insert.begin(), values_to_insert.end());
+        ASSERT_EQ(values.size(), 120);
+
+        values_to_insert.emplace_back(0);
+        values_to_insert.emplace_back(1);
+
+        values.insert(values.begin() + 1, values_to_insert.begin(), values_to_insert.end());
+        ASSERT_EQ(values.size(), 122);
+
+        values_to_insert.clear();
+        for (size_t i = 0; i < 240; ++i)
+            values_to_insert.emplace_back(i);
+
+        values.insert(values.begin() + 1, values_to_insert.begin(), values_to_insert.end());
+        ASSERT_EQ(values.size(), 362);
+    }
-    str.insert(str.begin(), insert_with_resize.begin(), insert_with_resize.end());
-    chars.insert(chars.begin(), insert_with_resize.begin(), insert_with_resize.end());
-    EXPECT_EQ(str, std::string(chars.data(), chars.size()));
 }
 
 TEST(Common, PODArrayInsertFromItself)
diff --git a/src/Common/tests/gtest_sensitive_data_masker.cpp b/src/Common/tests/gtest_sensitive_data_masker.cpp
index b8125c77b9b..7ebf141d961 100644
--- a/src/Common/tests/gtest_sensitive_data_masker.cpp
+++ b/src/Common/tests/gtest_sensitive_data_masker.cpp
@@ -223,7 +223,7 @@ TEST(Common, SensitiveDataMasker)
     {
         EXPECT_EQ(
             std::string(e.message()),
-            "SensitiveDataMasker: cannot compile re2: ())(, error: missing ): ())(. Look at https://github.com/google/re2/wiki/Syntax for reference.: while adding query masking rule 'test'."
+            "SensitiveDataMasker: cannot compile re2: ())(, error: unexpected ): ())(. Look at https://github.com/google/re2/wiki/Syntax for reference.: while adding query masking rule 'test'."
         );
         EXPECT_EQ(e.code(), DB::ErrorCodes::CANNOT_COMPILE_REGEXP);
     }
diff --git a/src/Common/ya.make b/src/Common/ya.make
index f12b17827f7..dde1e6ae013 100644
--- a/src/Common/ya.make
+++ b/src/Common/ya.make
@@ -18,6 +18,7 @@ PEERDIR(
     contrib/libs/openssl
     contrib/libs/poco/NetSSL_OpenSSL
     contrib/libs/re2
+    contrib/libs/cxxsupp/libcxxabi-parts
     contrib/restricted/dragonbox
 )
 
diff --git a/src/Common/ya.make.in b/src/Common/ya.make.in
index fd6a805891e..459266a54e7 100644
--- a/src/Common/ya.make.in
+++ b/src/Common/ya.make.in
@@ -17,6 +17,7 @@ PEERDIR(
     contrib/libs/openssl
     contrib/libs/poco/NetSSL_OpenSSL
     contrib/libs/re2
+    contrib/libs/cxxsupp/libcxxabi-parts
    contrib/restricted/dragonbox
 )
 
diff --git a/src/Compression/tests/gtest_compressionCodec.cpp b/src/Compression/tests/gtest_compressionCodec.cpp
index 20fe5476807..6ba2d3457ea 100644
--- a/src/Compression/tests/gtest_compressionCodec.cpp
+++ b/src/Compression/tests/gtest_compressionCodec.cpp
@@ -345,10 +345,12 @@ CodecTestSequence operator*(CodecTestSequence && left, T times)
 
 std::ostream & operator<<(std::ostream & ostr, const Codec & codec)
 {
-    return ostr << "Codec{"
-                << "name: " << codec.codec_statement
-                << ", expected_compression_ratio: " << *codec.expected_compression_ratio
-                << "}";
+    ostr << "Codec{"
+         << "name: " << codec.codec_statement;
+    if (codec.expected_compression_ratio)
+        return ostr << ", expected_compression_ratio: " << *codec.expected_compression_ratio << "}";
+    else
+        return ostr << "}";
 }
 
 std::ostream & operator<<(std::ostream & ostr, const CodecTestSequence & seq)
diff --git a/src/Coordination/KeeperServer.cpp b/src/Coordination/KeeperServer.cpp
index a3214474e96..ba904a535d0 100644
--- a/src/Coordination/KeeperServer.cpp
+++ b/src/Coordination/KeeperServer.cpp
@@ -14,6 +14,7 @@
 #include
 #include
 #include
 #include
 #include
+#include <filesystem>
 #include
 
 namespace DB
@@ -59,6 +60,21 @@ void setSSLParams(nuraft::asio_service::options & asio_opts)
 }
 #endif
 
+std::string getSnapshotsPathFromConfig(const Poco::Util::AbstractConfiguration & config, bool standalone_keeper)
+{
+    /// the most specialized path
+    if (config.has("keeper_server.snapshot_storage_path"))
+        return config.getString("keeper_server.snapshot_storage_path");
+
+    if (config.has("keeper_server.storage_path"))
+        return std::filesystem::path{config.getString("keeper_server.storage_path")} / "snapshots";
+
+    if (standalone_keeper)
+        return std::filesystem::path{config.getString("path", KEEPER_DEFAULT_PATH)} / "snapshots";
+    else
+        return std::filesystem::path{config.getString("path", DBMS_DEFAULT_PATH)} / "coordination/snapshots";
+}
+
 }
 
 KeeperServer::KeeperServer(
@@ -66,14 +82,15 @@ KeeperServer::KeeperServer(
     const CoordinationSettingsPtr & coordination_settings_,
     const Poco::Util::AbstractConfiguration & config,
     ResponsesQueue & responses_queue_,
-    SnapshotsQueue & snapshots_queue_)
+    SnapshotsQueue & snapshots_queue_,
+    bool standalone_keeper)
     : server_id(server_id_)
     , coordination_settings(coordination_settings_)
     , state_machine(nuraft::cs_new<KeeperStateMachine>(
         responses_queue_,
         snapshots_queue_,
-        config.getString("keeper_server.snapshot_storage_path", config.getString("path", DBMS_DEFAULT_PATH) + "coordination/snapshots"),
+        getSnapshotsPathFromConfig(config, standalone_keeper),
         coordination_settings))
-    , state_manager(nuraft::cs_new<KeeperStateManager>(server_id, "keeper_server", config, coordination_settings))
+    , state_manager(nuraft::cs_new<KeeperStateManager>(server_id, "keeper_server", config, coordination_settings, standalone_keeper))
     , log(&Poco::Logger::get("KeeperServer"))
 {
     if (coordination_settings->quorum_reads)
diff --git a/src/Coordination/KeeperServer.h b/src/Coordination/KeeperServer.h
index 11900ebb213..421be331537 100644
--- a/src/Coordination/KeeperServer.h
+++ b/src/Coordination/KeeperServer.h
@@ -55,7 +55,8 @@ public:
         const CoordinationSettingsPtr & coordination_settings_,
         const Poco::Util::AbstractConfiguration & config,
         ResponsesQueue & responses_queue_,
-        SnapshotsQueue & snapshots_queue_);
+        SnapshotsQueue & snapshots_queue_,
+        bool standalone_keeper);
 
     void startup();
 
diff --git a/src/Coordination/KeeperStateManager.cpp b/src/Coordination/KeeperStateManager.cpp
index e57ae7e7c19..d54752ec168 100644
--- a/src/Coordination/KeeperStateManager.cpp
+++ b/src/Coordination/KeeperStateManager.cpp
@@ -1,5 +1,6 @@
 #include
 #include
+#include <filesystem>
 
 namespace DB
 {
@@ -9,6 +10,26 @@ namespace ErrorCodes
     extern const int RAFT_ERROR;
 }
 
+namespace
+{
+
+std::string getLogsPathFromConfig(
+    const std::string & config_prefix, const Poco::Util::AbstractConfiguration & config, bool standalone_keeper)
+{
+    /// the most specialized path
+    if (config.has(config_prefix + ".log_storage_path"))
+        return config.getString(config_prefix + ".log_storage_path");
+
+    if (config.has(config_prefix + ".storage_path"))
+        return std::filesystem::path{config.getString(config_prefix + ".storage_path")} / "logs";
+
+    if (standalone_keeper)
+        return std::filesystem::path{config.getString("path", KEEPER_DEFAULT_PATH)} / "logs";
+    else
+        return std::filesystem::path{config.getString("path", DBMS_DEFAULT_PATH)} / "coordination/logs";
+}
+
+}
+
 KeeperStateManager::KeeperStateManager(int server_id_, const std::string & host, int port, const std::string & logs_path)
     : my_server_id(server_id_)
     , my_port(port)
@@ -24,11 +45,12 @@ KeeperStateManager::KeeperStateManager(
     int my_server_id_,
     const std::string & config_prefix,
     const Poco::Util::AbstractConfiguration & config,
-    const CoordinationSettingsPtr & coordination_settings)
+    const CoordinationSettingsPtr & coordination_settings,
+    bool standalone_keeper)
     : my_server_id(my_server_id_)
     , secure(config.getBool(config_prefix + ".raft_configuration.secure", false))
     , log_store(nuraft::cs_new<KeeperLogStore>(
-        config.getString(config_prefix + ".log_storage_path", config.getString("path", DBMS_DEFAULT_PATH) + "coordination/logs"),
+        getLogsPathFromConfig(config_prefix, config, standalone_keeper),
         coordination_settings->rotate_log_storage_interval,
         coordination_settings->force_sync))
     , cluster_config(nuraft::cs_new<nuraft::cluster_config>())
 {
diff --git a/src/Coordination/KeeperStateManager.h b/src/Coordination/KeeperStateManager.h
index cb5181760cb..2a93a1dc62b 100644
--- a/src/Coordination/KeeperStateManager.h
+++ b/src/Coordination/KeeperStateManager.h
@@ -17,7 +17,8 @@ public:
         int server_id_,
         const std::string & config_prefix,
         const Poco::Util::AbstractConfiguration & config,
-        const CoordinationSettingsPtr & coordination_settings);
+        const CoordinationSettingsPtr & coordination_settings,
+        bool standalone_keeper);
 
     KeeperStateManager(
         int server_id_,
diff --git a/src/Coordination/KeeperStorage.cpp b/src/Coordination/KeeperStorage.cpp
index 5f2d6141be9..9e8d2a124e9 100644
--- a/src/Coordination/KeeperStorage.cpp
+++ b/src/Coordination/KeeperStorage.cpp
@@ -547,6 +547,17 @@ struct KeeperStorageCloseRequest final : public KeeperStorageRequest
     }
 };
 
+/// Dummy implementation TODO: implement simple ACL
+struct KeeperStorageAuthRequest final : public KeeperStorageRequest
+{
+    using KeeperStorageRequest::KeeperStorageRequest;
+    std::pair<Coordination::ZooKeeperResponsePtr, Undo> process(KeeperStorage::Container &, KeeperStorage::Ephemerals &, int64_t, int64_t) const override
+    {
+        Coordination::ZooKeeperResponsePtr response_ptr = zk_request->makeResponse();
+        return { response_ptr, {} };
+    }
+};
+
 void KeeperStorage::finalize()
 {
     if (finalized)
@@ -611,7 +622,7 @@ KeeperWrapperFactory::KeeperWrapperFactory()
 {
     registerKeeperRequestWrapper(*this);
     registerKeeperRequestWrapper(*this);
-    //registerKeeperRequestWrapper<Coordination::OpNum::Auth, KeeperStorageAuthRequest>(*this);
+    registerKeeperRequestWrapper<Coordination::OpNum::Auth, KeeperStorageAuthRequest>(*this);
     registerKeeperRequestWrapper(*this);
     registerKeeperRequestWrapper(*this);
     registerKeeperRequestWrapper(*this);
diff --git a/src/Coordination/KeeperStorageDispatcher.cpp b/src/Coordination/KeeperStorageDispatcher.cpp
index c27d3b80e3c..14a44ee6f3f 100644
--- a/src/Coordination/KeeperStorageDispatcher.cpp
+++ b/src/Coordination/KeeperStorageDispatcher.cpp
@@ -234,7 +234,7 @@ bool KeeperStorageDispatcher::putRequest(const Coordination::ZooKeeperRequestPtr
     return true;
 }
 
-void KeeperStorageDispatcher::initialize(const Poco::Util::AbstractConfiguration & config)
+void KeeperStorageDispatcher::initialize(const Poco::Util::AbstractConfiguration & config, bool standalone_keeper)
 {
     LOG_DEBUG(log, "Initializing storage dispatcher");
     int myid = config.getInt("keeper_server.server_id");
@@ -246,7 +246,8 @@ void KeeperStorageDispatcher::initialize(const Poco::Util::AbstractConfiguration
     responses_thread = ThreadFromGlobalPool([this] { responseThread(); });
     snapshot_thread = ThreadFromGlobalPool([this] { snapshotThread(); });
 
-    server = std::make_unique<KeeperServer>(myid, coordination_settings, config, responses_queue, snapshots_queue);
+    server = std::make_unique<KeeperServer>(
+        myid, coordination_settings, config, responses_queue, snapshots_queue, standalone_keeper);
     try
     {
         LOG_DEBUG(log, "Waiting server to initialize");
diff --git a/src/Coordination/KeeperStorageDispatcher.h b/src/Coordination/KeeperStorageDispatcher.h
index e4cfa620e6c..cc95de04ce9 100644
--- a/src/Coordination/KeeperStorageDispatcher.h
+++ b/src/Coordination/KeeperStorageDispatcher.h
@@ -86,7 +86,7 @@ private:
 
 public:
     KeeperStorageDispatcher();
 
-    void initialize(const Poco::Util::AbstractConfiguration & config);
+    void initialize(const Poco::Util::AbstractConfiguration & config, bool standalone_keeper);
 
     void shutdown();
 
diff --git a/src/Core/Defines.h b/src/Core/Defines.h
index 668a60f9be8..94df16758bf 100644
--- a/src/Core/Defines.h
+++ b/src/Core/Defines.h
@@ -98,6 +98,8 @@
 #define DBMS_DEFAULT_PATH "/var/lib/clickhouse/"
 
+#define KEEPER_DEFAULT_PATH "/var/lib/clickhouse-keeper/"
+
 // more aliases: https://mailman.videolan.org/pipermail/x264-devel/2014-May/010660.html
 
 /// Marks that extra information is sent to a shard. It could be any magic numbers.
diff --git a/src/Core/Settings.h b/src/Core/Settings.h
index ea2019a4ff1..125879486ab 100644
--- a/src/Core/Settings.h
+++ b/src/Core/Settings.h
@@ -115,7 +115,7 @@ class IColumn;
     M(Bool, skip_unavailable_shards, false, "If 1, ClickHouse silently skips unavailable shards and nodes unresolvable through DNS. Shard is marked as unavailable when none of the replicas can be reached.", 0) \
     \
     M(UInt64, parallel_distributed_insert_select, 0, "Process distributed INSERT SELECT query in the same cluster on local tables on every shard, if 1 SELECT is executed on each shard, if 2 SELECT and INSERT is executed on each shard", 0) \
-    M(UInt64, distributed_group_by_no_merge, 0, "If 1, Do not merge aggregation states from different servers for distributed query processing - in case it is for certain that there are different keys on different shards. If 2 - same as 1 but also apply ORDER BY and LIMIT stages", 0) \
+    M(UInt64, distributed_group_by_no_merge, 0, "If 1, Do not merge aggregation states from different servers for distributed queries (shards will process query up to the Complete stage, initiator just proxies the data from the shards). If 2 the initiator will apply ORDER BY and LIMIT stages (it is not in case when shard process query up to the Complete stage)", 0) \
     M(Bool, optimize_distributed_group_by_sharding_key, false, "Optimize GROUP BY sharding_key queries (by avoiding costly aggregation on the initiator server).", 0) \
     M(UInt64, optimize_skip_unused_shards_limit, 1000, "Limit for number of sharding key values, turns off optimize_skip_unused_shards if the limit is reached", 0) \
     M(Bool, optimize_skip_unused_shards, false, "Assumes that data is distributed by sharding_key. Optimization to skip unused shards if SELECT query filters by sharding_key.", 0) \
diff --git a/src/DataStreams/AddingDefaultBlockOutputStream.cpp b/src/DataStreams/AddingDefaultBlockOutputStream.cpp
index 46799a109d3..6f7975d492d 100644
--- a/src/DataStreams/AddingDefaultBlockOutputStream.cpp
+++ b/src/DataStreams/AddingDefaultBlockOutputStream.cpp
@@ -15,7 +15,7 @@ AddingDefaultBlockOutputStream::AddingDefaultBlockOutputStream(
     : output(output_), header(header_)
 {
     auto dag = addMissingDefaults(header_, output->getHeader().getNamesAndTypesList(), columns_, context_, null_as_default_);
-    adding_defaults_actions = std::make_shared<ExpressionActions>(std::move(dag), ExpressionActionsSettings::fromContext(context_));
+    adding_defaults_actions = std::make_shared<ExpressionActions>(std::move(dag), ExpressionActionsSettings::fromContext(context_, CompileExpressions::yes));
 }
 
 void AddingDefaultBlockOutputStream::write(const Block & block)
diff --git a/src/DataStreams/AddingDefaultsBlockInputStream.cpp b/src/DataStreams/AddingDefaultsBlockInputStream.cpp
index e3f0906cb03..81be24439a5 100644
--- a/src/DataStreams/AddingDefaultsBlockInputStream.cpp
+++ b/src/DataStreams/AddingDefaultsBlockInputStream.cpp
@@ -174,7 +174,7 @@ Block AddingDefaultsBlockInputStream::readImpl()
     auto dag = evaluateMissingDefaults(evaluate_block, header.getNamesAndTypesList(), columns, context, false);
     if (dag)
     {
-        auto actions = std::make_shared<ExpressionActions>(std::move(dag), ExpressionActionsSettings::fromContext(context));
+        auto actions = std::make_shared<ExpressionActions>(std::move(dag), ExpressionActionsSettings::fromContext(context, CompileExpressions::yes));
         actions->execute(evaluate_block);
     }
 
diff --git a/src/DataTypes/EnumValues.cpp b/src/DataTypes/EnumValues.cpp
index 39c24bf1122..6df899ba9a2 100644
--- a/src/DataTypes/EnumValues.cpp
+++ b/src/DataTypes/EnumValues.cpp
@@ -67,7 +67,7 @@ T EnumValues<T>::getValue(StringRef field_name, bool try_treat_as_id) const
             return x;
         }
         auto hints = this->getHints(field_name.toString());
-        auto hints_string = !hints.empty() ? ", may be you meant: " + toString(hints) : "";
+        auto hints_string = !hints.empty() ? ", maybe you meant: " + toString(hints) : "";
         throw Exception{"Unknown element '" + field_name.toString() + "' for enum" + hints_string, ErrorCodes::BAD_ARGUMENTS};
     }
     return it->getMapped();
diff --git a/src/Databases/MySQL/DatabaseConnectionMySQL.cpp b/src/Databases/MySQL/DatabaseConnectionMySQL.cpp
index 5cd59f8a7c8..9b71fe537ec 100644
--- a/src/Databases/MySQL/DatabaseConnectionMySQL.cpp
+++ b/src/Databases/MySQL/DatabaseConnectionMySQL.cpp
@@ -20,6 +20,7 @@
 # include
 # include
 # include
+# include <Storages/MySQL/MySQLSettings.h>
 # include
 # include
 # include
@@ -253,12 +254,13 @@ void DatabaseConnectionMySQL::fetchLatestTablesStructureIntoCache(
             std::move(mysql_pool),
             database_name_in_mysql,
             table_name,
-            false,
-            "",
+            /* replace_query_ */ false,
+            /* on_duplicate_clause = */ "",
             ColumnsDescription{columns_name_and_type},
             ConstraintsDescription{},
             String{},
-            getContext()));
+            getContext(),
+            MySQLSettings{}));
     }
 }
 
diff --git a/src/Functions/FunctionsComparison.h b/src/Functions/FunctionsComparison.h
index fa607c06775..9ffb0cd0fc3 100644
--- a/src/Functions/FunctionsComparison.h
+++ b/src/Functions/FunctionsComparison.h
@@ -1147,17 +1147,24 @@ public:
         /// NOTE: We consider NaN comparison to be implementation specific (and in our implementation NaNs are sometimes equal sometimes not).
         if (left_type->equals(*right_type) && !left_type->isNullable() && !isTuple(left_type) && col_left_untyped == col_right_untyped)
         {
+            ColumnPtr result_column;
+
             /// Always true: =, <=, >=
             if constexpr (IsOperation<Op>::equals
                 || IsOperation<Op>::less_or_equals
                 || IsOperation<Op>::greater_or_equals)
             {
-                return DataTypeUInt8().createColumnConst(input_rows_count, 1u);
+                result_column = DataTypeUInt8().createColumnConst(input_rows_count, 1u);
             }
             else
             {
-                return DataTypeUInt8().createColumnConst(input_rows_count, 0u);
+                result_column = DataTypeUInt8().createColumnConst(input_rows_count, 0u);
             }
+
+            if (!isColumnConst(*col_left_untyped))
+                result_column = result_column->convertToFullColumnIfConst();
+
+            return result_column;
         }
 
         WhichDataType which_left{left_type};
diff --git a/src/Functions/IFunction.cpp b/src/Functions/IFunction.cpp
index 262f5f7f0a8..998d48941ba 100644
--- a/src/Functions/IFunction.cpp
+++ b/src/Functions/IFunction.cpp
@@ -61,13 +61,14 @@ ColumnPtr replaceLowCardinalityColumnsByNestedAndGetDictionaryIndexes(
         {
             /// Single LowCardinality column is supported now.
             if (indexes)
-                throw Exception("Expected single dictionary argument for function.", ErrorCodes::LOGICAL_ERROR);
+                throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected single dictionary argument for function.");
 
             const auto * low_cardinality_type = checkAndGetDataType<DataTypeLowCardinality>(column.type.get());
 
             if (!low_cardinality_type)
-                throw Exception("Incompatible type for low cardinality column: " + column.type->getName(),
-                    ErrorCodes::LOGICAL_ERROR);
+                throw Exception(ErrorCodes::LOGICAL_ERROR,
+                    "Incompatible type for low cardinality column: {}",
+                    column.type->getName());
 
             if (can_be_executed_on_default_arguments)
             {
@@ -121,7 +122,10 @@ ColumnPtr IExecutableFunction::defaultImplementationForConstantArguments(
     /// Check that these arguments are really constant.
     for (auto arg_num : arguments_to_remain_constants)
         if (arg_num < args.size() && !isColumnConst(*args[arg_num].column))
-            throw Exception("Argument at index " + toString(arg_num) + " for function " + getName() + " must be constant", ErrorCodes::ILLEGAL_COLUMN);
+            throw Exception(ErrorCodes::ILLEGAL_COLUMN,
+                "Argument at index {} for function {} must be constant",
+                toString(arg_num),
+                getName());
 
     if (args.empty() || !useDefaultImplementationForConstants() || !allArgumentsAreConstants(args))
         return nullptr;
@@ -150,8 +154,9 @@ ColumnPtr IExecutableFunction::defaultImplementationForConstantArguments(
      * not in "arguments_to_remain_constants" set. Otherwise we get infinite recursion.
      */
     if (!have_converted_columns)
-        throw Exception("Number of arguments for function " + getName() + " doesn't match: the function requires more arguments",
-            ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
+        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
+            "Number of arguments for function {} doesn't match: the function requires more arguments",
+            getName());
 
     ColumnPtr result_column = executeWithoutLowCardinalityColumns(temporary_columns, result_type, 1, dry_run);
 
@@ -266,9 +271,11 @@ void IFunctionOverloadResolver::checkNumberOfArguments(size_t number_of_argument
     size_t expected_number_of_arguments = getNumberOfArguments();
 
     if (number_of_arguments != expected_number_of_arguments)
-        throw Exception("Number of arguments for function " + getName() + " doesn't match: passed "
-            + toString(number_of_arguments) + ", should be " + toString(expected_number_of_arguments),
-            ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
+        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
+            "Number of arguments for function {} doesn't match: passed {}, should be {}",
+            getName(),
+            toString(number_of_arguments),
+            toString(expected_number_of_arguments));
 }
 
 DataTypePtr IFunctionOverloadResolver::getReturnType(const ColumnsWithTypeAndName & arguments) const
@@ -299,11 +306,7 @@ DataTypePtr IFunctionOverloadResolver::getReturnType(const ColumnsWithTypeAndNam
             ++num_full_ordinary_columns;
     }
 
-    for (auto & arg : args_without_low_cardinality)
-    {
-        arg.column = recursiveRemoveLowCardinality(arg.column);
-        arg.type = recursiveRemoveLowCardinality(arg.type);
-    }
+    convertLowCardinalityColumnsToFull(args_without_low_cardinality);
 
     auto type_without_low_cardinality = getReturnTypeWithoutLowCardinality(args_without_low_cardinality);
 
diff --git a/src/Functions/IFunction.h b/src/Functions/IFunction.h
index fe3ec21afa1..7542451a81a 100644
--- a/src/Functions/IFunction.h
+++ b/src/Functions/IFunction.h
@@ -245,6 +245,8 @@ public:
 
     void getLambdaArgumentTypes(DataTypes & arguments) const;
 
+    void checkNumberOfArguments(size_t number_of_arguments) const;
+
     /// Get the main function name.
     virtual String getName() const = 0;
 
@@ -319,8 +321,6 @@ protected:
 
 private:
 
-    void checkNumberOfArguments(size_t number_of_arguments) const;
-
     DataTypePtr getReturnTypeWithoutLowCardinality(const ColumnsWithTypeAndName & arguments) const;
 };
 
diff --git a/src/IO/HashingWriteBuffer.h b/src/IO/HashingWriteBuffer.h
index 539c0390dc2..bd00a2b12da 100644
--- a/src/IO/HashingWriteBuffer.h
+++ b/src/IO/HashingWriteBuffer.h
@@ -17,7 +17,7 @@ class IHashingBuffer : public BufferWithOwnMemory<Buffer>
 public:
     using uint128 = CityHash_v1_0_2::uint128;
 
-    IHashingBuffer(size_t block_size_ = DBMS_DEFAULT_HASHING_BLOCK_SIZE)
+    IHashingBuffer(size_t block_size_ = DBMS_DEFAULT_HASHING_BLOCK_SIZE)
         : BufferWithOwnMemory<Buffer>(block_size_), block_pos(0), block_size(block_size_), state(0, 0)
     {
     }
diff --git a/src/Interpreters/ActionsDAG.cpp b/src/Interpreters/ActionsDAG.cpp
index 65a676489c7..21c24956453 100644
--- a/src/Interpreters/ActionsDAG.cpp
+++ b/src/Interpreters/ActionsDAG.cpp
@@ -293,6 +293,17 @@ NamesAndTypesList ActionsDAG::getRequiredColumns() const
     return result;
 }
 
+Names ActionsDAG::getRequiredColumnsNames() const
+{
+    Names result;
+    result.reserve(inputs.size());
+
+    for (const auto & input : inputs)
+        result.emplace_back(input->result_name);
+
+    return result;
+}
+
 ColumnsWithTypeAndName ActionsDAG::getResultColumns() const
 {
     ColumnsWithTypeAndName result;
@@ -1041,11 +1052,19 @@ ActionsDAGPtr ActionsDAG::makeConvertingActions(
             {
                 auto & input = inputs[res_elem.name];
                 if (input.empty())
-                    throw Exception("Cannot find column " + backQuote(res_elem.name) + " in source stream",
-                        ErrorCodes::THERE_IS_NO_COLUMN);
-
-                src_node = dst_node = actions_dag->inputs[input.front()];
-                input.pop_front();
+                {
+                    const auto * res_const = typeid_cast<const ColumnConst *>(res_elem.column.get());
+                    if (ignore_constant_values && res_const)
+                        src_node = dst_node = &actions_dag->addColumn(res_elem);
+                    else
+                        throw Exception("Cannot find column " + backQuote(res_elem.name) + " in source stream",
+                            ErrorCodes::THERE_IS_NO_COLUMN);
+                }
+                else
+                {
+                    src_node = dst_node = actions_dag->inputs[input.front()];
+                    input.pop_front();
+                }
 
                 break;
             }
         }
diff --git a/src/Interpreters/ActionsDAG.h b/src/Interpreters/ActionsDAG.h
index d8e2505f5b3..cc1d9a0e6ac 100644
--- a/src/Interpreters/ActionsDAG.h
+++ b/src/Interpreters/ActionsDAG.h
@@ -121,6 +121,7 @@ public:
     const NodeRawConstPtrs & getInputs() const { return inputs; }
 
     NamesAndTypesList getRequiredColumns() const;
+    Names getRequiredColumnsNames() const;
     ColumnsWithTypeAndName getResultColumns() const;
     NamesAndTypesList getNamesAndTypesList() const;
 
diff --git a/src/Interpreters/ActionsVisitor.cpp b/src/Interpreters/ActionsVisitor.cpp
index dace0beea11..44eb7c902cd 100644
--- a/src/Interpreters/ActionsVisitor.cpp
+++ b/src/Interpreters/ActionsVisitor.cpp
@@ -1015,7 +1015,7 @@ void ActionsMatcher::visit(const ASTFunction & node, const ASTPtr & ast, Data &
 
                 auto lambda_actions = std::make_shared<ExpressionActions>(
                     lambda_dag,
-                    ExpressionActionsSettings::fromContext(data.getContext()));
+                    ExpressionActionsSettings::fromContext(data.getContext(), CompileExpressions::yes));
 
                 DataTypePtr result_type = lambda_actions->getSampleBlock().getByName(result_name).type;
 
diff --git a/src/Interpreters/ClusterProxy/SelectStreamFactory.cpp b/src/Interpreters/ClusterProxy/SelectStreamFactory.cpp
index 7cb55f32162..0c9d42e1381 100644
--- a/src/Interpreters/ClusterProxy/SelectStreamFactory.cpp
+++ b/src/Interpreters/ClusterProxy/SelectStreamFactory.cpp
@@ -8,11 +8,13 @@
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -73,6 +75,95 @@ SelectStreamFactory::SelectStreamFactory(
 
 namespace
 {
 
+/// Special support for the case when `_shard_num` column is used in GROUP BY key expression.
+/// This column is a constant for shard.
+/// Constant expression with this column may be removed from intermediate header.
+/// However, this column is not constant for initiator, and it expects the intermediate header to contain it.
+///
+/// To fix it, the following trick is applied.
+/// We check all GROUP BY keys which depend only on `_shard_num`.
+/// Calculate such expression for current shard if it is used in header.
+/// Those columns will be added to modified header as already known constants.
+///
+/// For local shard, missing constants will be added by converting actions.
+/// For remote shard, RemoteQueryExecutor will automatically add missing constant.
+Block evaluateConstantGroupByKeysWithShardNumber(
+    const ContextPtr & context, const ASTPtr & query_ast, const Block & header, UInt32 shard_num)
+{
+    Block res;
+
+    ColumnWithTypeAndName shard_num_col;
+    shard_num_col.type = std::make_shared<DataTypeUInt32>();
+    shard_num_col.column = shard_num_col.type->createColumnConst(0, shard_num);
+    shard_num_col.name = "_shard_num";
+
+    if (auto group_by = query_ast->as<ASTSelectQuery &>().groupBy())
+    {
+        for (const auto & elem : group_by->children)
+        {
+            String key_name = elem->getColumnName();
+            if (header.has(key_name))
+            {
+                auto ast = elem->clone();
+
+                RequiredSourceColumnsVisitor::Data columns_context;
+                RequiredSourceColumnsVisitor(columns_context).visit(ast);
+
+                auto required_columns = columns_context.requiredColumns();
+                if (required_columns.size() != 1 || required_columns.count("_shard_num") == 0)
+                    continue;
+
+                Block block({shard_num_col});
+                auto syntax_result = TreeRewriter(context).analyze(ast, {NameAndTypePair{shard_num_col.name, shard_num_col.type}});
+                ExpressionAnalyzer(ast, syntax_result, context).getActions(true, false)->execute(block);
+
+                res.insert(block.getByName(key_name));
+            }
+        }
+    }
+
+    /// We always add _shard_num constant just in case.
+    /// For initial query it is considered as a column from table, and may be required by intermediate block.
+    if (!res.has(shard_num_col.name))
+        res.insert(std::move(shard_num_col));
+
+    return res;
+}
+
+ActionsDAGPtr getConvertingDAG(const Block & block, const Block & header)
+{
+    /// Convert header structure to expected.
+    /// Also we ignore constants from result and replace it with constants from header.
+    /// It is needed for functions like `now64()` or `randConstant()` because their values may be different.
+    return ActionsDAG::makeConvertingActions(
+        block.getColumnsWithTypeAndName(),
+        header.getColumnsWithTypeAndName(),
+        ActionsDAG::MatchColumnsMode::Name,
+        true);
+}
+
+void addConvertingActions(QueryPlan & plan, const Block & header)
+{
+    if (blocksHaveEqualStructure(plan.getCurrentDataStream().header, header))
+        return;
+
+    auto convert_actions_dag = getConvertingDAG(plan.getCurrentDataStream().header, header);
+    auto converting = std::make_unique<ExpressionStep>(plan.getCurrentDataStream(), convert_actions_dag);
+    plan.addStep(std::move(converting));
+}
+
+void addConvertingActions(Pipe & pipe, const Block & header)
+{
+    if (blocksHaveEqualStructure(pipe.getHeader(), header))
+        return;
+
+    auto convert_actions = std::make_shared<ExpressionActions>(getConvertingDAG(pipe.getHeader(), header));
+    pipe.addSimpleTransform([&](const Block & cur_header, Pipe::StreamType) -> ProcessorPtr
+    {
+        return std::make_shared<ExpressionTransform>(cur_header, convert_actions);
+    });
+}
+
 std::unique_ptr<QueryPlan> createLocalPlan(
     const ASTPtr & query_ast,
     const Block & header,
@@ -86,18 +177,7 @@ std::unique_ptr<QueryPlan> createLocalPlan(
     InterpreterSelectQuery interpreter(query_ast, context, SelectQueryOptions(processed_stage));
     interpreter.buildQueryPlan(*query_plan);
 
-    /// Convert header structure to expected.
-    /// Also we ignore constants from result and replace it with constants from header.
-    /// It is needed for functions like `now64()` or `randConstant()` because their values may be different.
-    auto convert_actions_dag = ActionsDAG::makeConvertingActions(
-        query_plan->getCurrentDataStream().header.getColumnsWithTypeAndName(),
-        header.getColumnsWithTypeAndName(),
-        ActionsDAG::MatchColumnsMode::Name,
-        true);
-
-    auto converting = std::make_unique<ExpressionStep>(query_plan->getCurrentDataStream(), convert_actions_dag);
-    converting->setStepDescription("Convert block structure for query from local replica");
-    query_plan->addStep(std::move(converting));
+    addConvertingActions(*query_plan, header);
 
     return query_plan;
 }
@@ -134,12 +214,25 @@ void SelectStreamFactory::createForShard(
     }
 
     auto modified_query_ast = query_ast->clone();
+    auto modified_header = header;
     if (has_virtual_shard_num_column)
+    {
         VirtualColumnUtils::rewriteEntityInAst(modified_query_ast, "_shard_num", shard_info.shard_num, "toUInt32");
 
+        auto shard_num_constants = evaluateConstantGroupByKeysWithShardNumber(context, query_ast, modified_header, shard_info.shard_num);
+
+        for (auto & col : shard_num_constants)
+        {
+            if (modified_header.has(col.name))
+                modified_header.getByName(col.name).column = std::move(col.column);
+            else
+                modified_header.insert(std::move(col));
+        }
+    }
 
     auto emplace_local_stream = [&]()
     {
-        plans.emplace_back(createLocalPlan(modified_query_ast, header, context, processed_stage));
+        plans.emplace_back(createLocalPlan(modified_query_ast, modified_header, context, processed_stage));
+        addConvertingActions(*plans.back(), header);
     };
 
     String modified_query = formattedAST(modified_query_ast);
@@ -147,7 +240,7 @@ void SelectStreamFactory::createForShard(
     auto emplace_remote_stream = [&]()
     {
         auto remote_query_executor = std::make_shared<RemoteQueryExecutor>(
-            shard_info.pool, modified_query, header, context, throttler, scalars, external_tables, processed_stage);
+            shard_info.pool, modified_query, modified_header, context, throttler, scalars, external_tables, processed_stage);
         remote_query_executor->setLogger(log);
 
         remote_query_executor->setPoolMode(PoolMode::GET_MANY);
@@ -156,6 +249,7 @@ void SelectStreamFactory::createForShard(
         remote_pipes.emplace_back(createRemoteSourcePipe(remote_query_executor, add_agg_info, add_totals, add_extremes, async_read));
         remote_pipes.back().addInterpreterContext(context);
+        addConvertingActions(remote_pipes.back(), header);
     };
 
     const auto & settings = context->getSettingsRef();
@@ -247,7 +341,7 @@ void SelectStreamFactory::createForShard(
         /// Do it lazily to avoid connecting in the main thread.
 
         auto lazily_create_stream = [
-                pool = shard_info.pool, shard_num = shard_info.shard_num, modified_query, header = header, modified_query_ast,
+                pool = shard_info.pool, shard_num = shard_info.shard_num, modified_query, header = modified_header, modified_query_ast,
                 context, throttler,
                 main_table = main_table, table_func_ptr = table_func_ptr, scalars = scalars, external_tables = external_tables,
                 stage = processed_stage, local_delay, add_agg_info, add_totals, add_extremes, async_read]()
@@ -302,8 +396,9 @@ void SelectStreamFactory::createForShard(
             }
         };
 
-        delayed_pipes.emplace_back(createDelayedPipe(header, lazily_create_stream, add_totals, add_extremes));
+        delayed_pipes.emplace_back(createDelayedPipe(modified_header, lazily_create_stream, add_totals, add_extremes));
         delayed_pipes.back().addInterpreterContext(context);
+        addConvertingActions(delayed_pipes.back(), header);
     }
     else
         emplace_remote_stream();
diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp
index f34061a4794..a9087ba9d2f 100644
--- a/src/Interpreters/Context.cpp
+++ b/src/Interpreters/Context.cpp
@@ -314,8 +314,8 @@ struct ContextSharedPart
     ConfigurationPtr zookeeper_config;                        /// Stores zookeeper configs
 
 #if USE_NURAFT
-    mutable std::mutex nu_keeper_storage_dispatcher_mutex;
-    mutable std::shared_ptr<KeeperStorageDispatcher> nu_keeper_storage_dispatcher;
+    mutable std::mutex keeper_storage_dispatcher_mutex;
+    mutable std::shared_ptr<KeeperStorageDispatcher> keeper_storage_dispatcher;
 #endif
     mutable std::mutex auxiliary_zookeepers_mutex;
     mutable std::map<String, zkutil::ZooKeeperPtr> auxiliary_zookeepers;    /// Map for auxiliary ZooKeeper clients.
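
The dispatcher rename above pairs with the initialization changes in the next hunk. As a rough, self-contained illustration of the initialize-once-under-mutex pattern used there (the class and member names below are simplified stand-ins, not the real ClickHouse types):

#include <memory>
#include <mutex>
#include <stdexcept>

// Hypothetical stand-in for the dispatcher guarded by
// keeper_storage_dispatcher_mutex in Context (not the real class).
struct Dispatcher { void initialize() {} void shutdown() {} };

class KeeperHolderSketch
{
    mutable std::mutex mutex;
    std::shared_ptr<Dispatcher> dispatcher;

public:
    void initialize()
    {
        std::lock_guard lock(mutex);
        if (dispatcher)
            throw std::logic_error("Trying to initialize Keeper multiple times");
        dispatcher = std::make_shared<Dispatcher>();
        dispatcher->initialize();
    }

    std::shared_ptr<Dispatcher> get() const
    {
        std::lock_guard lock(mutex);
        if (!dispatcher)
            throw std::logic_error("Keeper must be initialized before requests");
        return dispatcher;
    }

    void shutdown()
    {
        std::lock_guard lock(mutex);
        if (dispatcher)
        {
            dispatcher->shutdown();
            dispatcher.reset();
        }
    }
};

Every access takes the same mutex, so initialize/get/shutdown are safe to call from concurrent threads, which mirrors why the real code locks before touching the shared pointer.
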
@@ -1678,16 +1678,16 @@ zkutil::ZooKeeperPtr Context::getZooKeeper() const
 void Context::initializeKeeperStorageDispatcher() const
 {
 #if USE_NURAFT
-    std::lock_guard lock(shared->nu_keeper_storage_dispatcher_mutex);
+    std::lock_guard lock(shared->keeper_storage_dispatcher_mutex);
 
-    if (shared->nu_keeper_storage_dispatcher)
+    if (shared->keeper_storage_dispatcher)
         throw Exception(ErrorCodes::LOGICAL_ERROR, "Trying to initialize Keeper multiple times");
 
     const auto & config = getConfigRef();
     if (config.has("keeper_server"))
     {
-        shared->nu_keeper_storage_dispatcher = std::make_shared<KeeperStorageDispatcher>();
-        shared->nu_keeper_storage_dispatcher->initialize(config);
+        shared->keeper_storage_dispatcher = std::make_shared<KeeperStorageDispatcher>();
+        shared->keeper_storage_dispatcher->initialize(config, getApplicationType() == ApplicationType::KEEPER);
     }
 #endif
 }
@@ -1695,22 +1695,22 @@ void Context::initializeKeeperStorageDispatcher() const
 #if USE_NURAFT
 std::shared_ptr<KeeperStorageDispatcher> & Context::getKeeperStorageDispatcher() const
 {
-    std::lock_guard lock(shared->nu_keeper_storage_dispatcher_mutex);
-    if (!shared->nu_keeper_storage_dispatcher)
+    std::lock_guard lock(shared->keeper_storage_dispatcher_mutex);
+    if (!shared->keeper_storage_dispatcher)
         throw Exception(ErrorCodes::LOGICAL_ERROR, "Keeper must be initialized before requests");
 
-    return shared->nu_keeper_storage_dispatcher;
+    return shared->keeper_storage_dispatcher;
 }
 #endif
 
 void Context::shutdownKeeperStorageDispatcher() const
 {
 #if USE_NURAFT
-    std::lock_guard lock(shared->nu_keeper_storage_dispatcher_mutex);
-    if (shared->nu_keeper_storage_dispatcher)
+    std::lock_guard lock(shared->keeper_storage_dispatcher_mutex);
+    if (shared->keeper_storage_dispatcher)
     {
-        shared->nu_keeper_storage_dispatcher->shutdown();
-        shared->nu_keeper_storage_dispatcher.reset();
+        shared->keeper_storage_dispatcher->shutdown();
+        shared->keeper_storage_dispatcher.reset();
     }
 #endif
 }
diff --git a/src/Interpreters/Context.h b/src/Interpreters/Context.h
index a8fd0cf1700..5089d2c0288 100644
--- a/src/Interpreters/Context.h
+++ b/src/Interpreters/Context.h
@@ -734,7 +734,8 @@ public:
     {
         SERVER,         /// The program is run as clickhouse-server daemon (default behavior)
         CLIENT,         /// clickhouse-client
-        LOCAL           /// clickhouse-local
+        LOCAL,          /// clickhouse-local
+        KEEPER,         /// clickhouse-keeper (also daemon)
     };
 
     ApplicationType getApplicationType() const;
diff --git a/src/Interpreters/ExpressionActions.cpp b/src/Interpreters/ExpressionActions.cpp
index d1f5b7cd896..bd06c753319 100644
--- a/src/Interpreters/ExpressionActions.cpp
+++ b/src/Interpreters/ExpressionActions.cpp
@@ -51,7 +51,7 @@ ExpressionActions::ExpressionActions(ActionsDAGPtr actions_dag_, const Expressio
     actions_dag = actions_dag_->clone();
 
 #if USE_EMBEDDED_COMPILER
-    if (settings.compile_expressions)
+    if (settings.can_compile_expressions && settings.compile_expressions == CompileExpressions::yes)
         actions_dag->compileExpressions(settings.min_count_to_compile_expression);
 #endif
 
diff --git a/src/Interpreters/ExpressionActions.h b/src/Interpreters/ExpressionActions.h
index eb3b60c1ffb..c446b339072 100644
--- a/src/Interpreters/ExpressionActions.h
+++ b/src/Interpreters/ExpressionActions.h
@@ -30,7 +30,6 @@ using ArrayJoinActionPtr = std::shared_ptr<ArrayJoinAction>;
 class ExpressionActions;
 using ExpressionActionsPtr = std::shared_ptr<ExpressionActions>;
 
-
 /// Sequence of actions on the block.
 /// Is used to calculate expressions.
///
diff --git a/src/Interpreters/ExpressionActionsSettings.cpp b/src/Interpreters/ExpressionActionsSettings.cpp
index a9495628d6f..550aa4d339c 100644
--- a/src/Interpreters/ExpressionActionsSettings.cpp
+++ b/src/Interpreters/ExpressionActionsSettings.cpp
@@ -6,20 +6,21 @@
namespace DB
{

-ExpressionActionsSettings ExpressionActionsSettings::fromSettings(const Settings & from)
+ExpressionActionsSettings ExpressionActionsSettings::fromSettings(const Settings & from, CompileExpressions compile_expressions)
{
    ExpressionActionsSettings settings;
-   settings.compile_expressions = from.compile_expressions;
+   settings.can_compile_expressions = from.compile_expressions;
    settings.min_count_to_compile_expression = from.min_count_to_compile_expression;
    settings.max_temporary_columns = from.max_temporary_columns;
    settings.max_temporary_non_const_columns = from.max_temporary_non_const_columns;
+   settings.compile_expressions = compile_expressions;

    return settings;
}

-ExpressionActionsSettings ExpressionActionsSettings::fromContext(ContextPtr from)
+ExpressionActionsSettings ExpressionActionsSettings::fromContext(ContextPtr from, CompileExpressions compile_expressions)
{
-   return fromSettings(from->getSettingsRef());
+   return fromSettings(from->getSettingsRef(), compile_expressions);
}

}

diff --git a/src/Interpreters/ExpressionActionsSettings.h b/src/Interpreters/ExpressionActionsSettings.h
index 06351136a9a..26532128805 100644
--- a/src/Interpreters/ExpressionActionsSettings.h
+++ b/src/Interpreters/ExpressionActionsSettings.h
@@ -9,16 +9,24 @@ namespace DB

struct Settings;

+enum class CompileExpressions: uint8_t
+{
+    no = 0,
+    yes = 1,
+};
+
struct ExpressionActionsSettings
{
-   bool compile_expressions = false;
+   bool can_compile_expressions = false;

    size_t min_count_to_compile_expression = 0;

    size_t max_temporary_columns = 0;
    size_t max_temporary_non_const_columns = 0;

-   static ExpressionActionsSettings fromSettings(const Settings & from);
-   static ExpressionActionsSettings fromContext(ContextPtr from);
+   CompileExpressions compile_expressions = CompileExpressions::no;
+
+   static ExpressionActionsSettings fromSettings(const Settings & from, CompileExpressions compile_expressions = CompileExpressions::no);
+   static ExpressionActionsSettings fromContext(ContextPtr from, CompileExpressions compile_expressions = CompileExpressions::no);
};

}

diff --git a/src/Interpreters/ExpressionAnalyzer.cpp b/src/Interpreters/ExpressionAnalyzer.cpp
index 9866817c1c4..766b0523c81 100644
--- a/src/Interpreters/ExpressionAnalyzer.cpp
+++ b/src/Interpreters/ExpressionAnalyzer.cpp
@@ -909,8 +909,8 @@ ActionsDAGPtr SelectQueryExpressionAnalyzer::appendPrewhere(
    auto tmp_actions_dag = std::make_shared<ActionsDAG>(sourceColumns());
    getRootActions(select_query->prewhere(), only_types, tmp_actions_dag);
    tmp_actions_dag->removeUnusedActions(NameSet{prewhere_column_name});
-   auto tmp_actions = std::make_shared<ExpressionActions>(tmp_actions_dag, ExpressionActionsSettings::fromContext(getContext()));
-   auto required_columns = tmp_actions->getRequiredColumns();
+
+   auto required_columns = tmp_actions_dag->getRequiredColumnsNames();
    NameSet required_source_columns(required_columns.begin(), required_columns.end());
    required_source_columns.insert(first_action_names.begin(), first_action_names.end());
@@ -1028,7 +1028,7 @@ bool SelectQueryExpressionAnalyzer::appendGroupBy(ExpressionActionsChain & chain
            auto actions_dag = std::make_shared<ActionsDAG>(columns_after_join);
            getRootActions(child, only_types, actions_dag);
            group_by_elements_actions.emplace_back(
-               std::make_shared<ExpressionActions>(actions_dag, ExpressionActionsSettings::fromContext(getContext())));
+               std::make_shared<ExpressionActions>(actions_dag, ExpressionActionsSettings::fromContext(getContext(), CompileExpressions::yes)));
        }
    }
@@ -1187,7 +1187,7 @@ ActionsDAGPtr SelectQueryExpressionAnalyzer::appendOrderBy(ExpressionActionsChai
            auto actions_dag = std::make_shared<ActionsDAG>(columns_after_join);
            getRootActions(child, only_types, actions_dag);
            order_by_elements_actions.emplace_back(
-               std::make_shared<ExpressionActions>(actions_dag, ExpressionActionsSettings::fromContext(getContext())));
+               std::make_shared<ExpressionActions>(actions_dag, ExpressionActionsSettings::fromContext(getContext(), CompileExpressions::yes)));
        }
    }
@@ -1345,13 +1345,12 @@ ActionsDAGPtr ExpressionAnalyzer::getActionsDAG(bool add_aliases, bool project_r
    return actions_dag;
}

-ExpressionActionsPtr ExpressionAnalyzer::getActions(bool add_aliases, bool project_result)
+ExpressionActionsPtr ExpressionAnalyzer::getActions(bool add_aliases, bool project_result, CompileExpressions compile_expressions)
{
    return std::make_shared<ExpressionActions>(
-       getActionsDAG(add_aliases, project_result), ExpressionActionsSettings::fromContext(getContext()));
+       getActionsDAG(add_aliases, project_result), ExpressionActionsSettings::fromContext(getContext(), compile_expressions));
}

-
ExpressionActionsPtr ExpressionAnalyzer::getConstActions(const ColumnsWithTypeAndName & constant_inputs)
{
    auto actions = std::make_shared<ActionsDAG>(constant_inputs);

diff --git a/src/Interpreters/ExpressionAnalyzer.h b/src/Interpreters/ExpressionAnalyzer.h
index ef25ee2ece5..e9abf2db033 100644
--- a/src/Interpreters/ExpressionAnalyzer.h
+++ b/src/Interpreters/ExpressionAnalyzer.h
@@ -112,7 +112,7 @@ public:
    /// If also project_result, then only aliases remain in the output block.
    /// Otherwise, only temporary columns will be deleted from the block.
    ActionsDAGPtr getActionsDAG(bool add_aliases, bool project_result = true);
-   ExpressionActionsPtr getActions(bool add_aliases, bool project_result = true);
+   ExpressionActionsPtr getActions(bool add_aliases, bool project_result = true, CompileExpressions compile_expressions = CompileExpressions::no);

    /// Actions that can be performed on an empty block: adding constants and applying functions that depend only on constants.
    /// Does not execute subqueries.
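Note: after this change a call site gets JIT compilation only if it passes CompileExpressions::yes explicitly and the user-level compile_expressions setting (captured above as can_compile_expressions) is also enabled; see the guard in the ExpressionActions.cpp hunk above. A short sketch of the two call patterns, using only signatures introduced by this patch; `context` and `actions_dag` stand for any ContextPtr/ActionsDAGPtr in scope:

    /// Analysis-time actions: CompileExpressions::no is the default, so nothing is compiled.
    auto analysis_actions = std::make_shared<ExpressionActions>(
        actions_dag, ExpressionActionsSettings::fromContext(context));

    /// Execution-time actions: opt in; still a no-op unless the user enabled
    /// the compile_expressions setting.
    auto execution_actions = std::make_shared<ExpressionActions>(
        actions_dag, ExpressionActionsSettings::fromContext(context, CompileExpressions::yes));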
diff --git a/src/Interpreters/ExpressionJIT.cpp b/src/Interpreters/ExpressionJIT.cpp index edd634fe558..be693fdc3b9 100644 --- a/src/Interpreters/ExpressionJIT.cpp +++ b/src/Interpreters/ExpressionJIT.cpp @@ -315,28 +315,6 @@ static bool isCompilableConstant(const ActionsDAG::Node & node) return node.column && isColumnConst(*node.column) && canBeNativeType(*node.result_type) && node.allow_constant_folding; } -static bool checkIfFunctionIsComparisonEdgeCase(const ActionsDAG::Node & node, const IFunctionBase & impl) -{ - static std::unordered_set comparison_functions - { - NameEquals::name, - NameNotEquals::name, - NameLess::name, - NameGreater::name, - NameLessOrEquals::name, - NameGreaterOrEquals::name - }; - - auto it = comparison_functions.find(impl.getName()); - if (it == comparison_functions.end()) - return false; - - const auto * lhs_node = node.children[0]; - const auto * rhs_node = node.children[1]; - - return lhs_node == rhs_node && !isTuple(lhs_node->result_type); -} - static bool isCompilableFunction(const ActionsDAG::Node & node) { if (node.type != ActionsDAG::ActionType::FUNCTION) @@ -353,9 +331,6 @@ static bool isCompilableFunction(const ActionsDAG::Node & node) return false; } - if (checkIfFunctionIsComparisonEdgeCase(node, *node.function_base)) - return false; - return function.isCompilable(); } diff --git a/src/Interpreters/InterpreterInsertQuery.cpp b/src/Interpreters/InterpreterInsertQuery.cpp index 9b3d1bcebd7..225bf9ec651 100644 --- a/src/Interpreters/InterpreterInsertQuery.cpp +++ b/src/Interpreters/InterpreterInsertQuery.cpp @@ -325,7 +325,7 @@ BlockIO InterpreterInsertQuery::execute() res.pipeline.getHeader().getColumnsWithTypeAndName(), header.getColumnsWithTypeAndName(), ActionsDAG::MatchColumnsMode::Position); - auto actions = std::make_shared(actions_dag, ExpressionActionsSettings::fromContext(getContext())); + auto actions = std::make_shared(actions_dag, ExpressionActionsSettings::fromContext(getContext(), CompileExpressions::yes)); res.pipeline.addSimpleTransform([&](const Block & in_header) -> ProcessorPtr { diff --git a/src/Interpreters/InterpreterSelectQuery.cpp b/src/Interpreters/InterpreterSelectQuery.cpp index f422080e597..61740284f1b 100644 --- a/src/Interpreters/InterpreterSelectQuery.cpp +++ b/src/Interpreters/InterpreterSelectQuery.cpp @@ -593,6 +593,7 @@ Block InterpreterSelectQuery::getSampleBlockImpl() OpenTelemetrySpanHolder span(__PRETTY_FUNCTION__); query_info.query = query_ptr; + query_info.has_window = query_analyzer->hasWindow(); if (storage && !options.only_analyze) { @@ -636,9 +637,7 @@ Block InterpreterSelectQuery::getSampleBlockImpl() if (analysis_result.prewhere_info) { - ExpressionActions( - analysis_result.prewhere_info->prewhere_actions, - ExpressionActionsSettings::fromContext(context)).execute(header); + header = analysis_result.prewhere_info->prewhere_actions->updateHeader(header); if (analysis_result.prewhere_info->remove_prewhere_column) header.erase(analysis_result.prewhere_info->prewhere_column_name); } @@ -1257,7 +1256,12 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, const BlockInpu // 4) preliminary distinct. // Some of these were already executed at the shards (first_stage), // see the counterpart code and comments there. 
- if (expressions.need_aggregate) + if (from_aggregation_stage) + { + if (query_analyzer->hasWindow()) + throw Exception("Window functions does not support processing from WithMergeableStateAfterAggregation", ErrorCodes::NOT_IMPLEMENTED); + } + else if (expressions.need_aggregate) { executeExpression(query_plan, expressions.before_window, "Before window functions"); @@ -1895,11 +1899,12 @@ void InterpreterSelectQuery::executeFetchColumns(QueryProcessingStage::Enum proc // TODO figure out how to make set for projections query_info.sets = query_analyzer->getPreparedSets(); - auto actions_settings = ExpressionActionsSettings::fromContext(context); auto & prewhere_info = analysis_result.prewhere_info; if (prewhere_info) { + auto actions_settings = ExpressionActionsSettings::fromContext(context, CompileExpressions::yes); + query_info.prewhere_info = std::make_shared(); query_info.prewhere_info->prewhere_actions = std::make_shared(prewhere_info->prewhere_actions, actions_settings); diff --git a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp index 40508488d05..d50aa323ba3 100644 --- a/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp +++ b/src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp @@ -124,8 +124,8 @@ static NamesAndTypesList getNames(const ASTFunction & expr, ContextPtr context, ASTPtr temp_ast = expr.clone(); auto syntax = TreeRewriter(context).analyze(temp_ast, columns); - auto expression = ExpressionAnalyzer(temp_ast, syntax, context).getActions(false); - return expression->getRequiredColumnsWithTypes(); + auto required_columns = ExpressionAnalyzer(temp_ast, syntax, context).getActionsDAG(false)->getRequiredColumns(); + return required_columns; } static NamesAndTypesList modifyPrimaryKeysToNonNullable(const NamesAndTypesList & primary_keys, NamesAndTypesList & columns) diff --git a/src/Interpreters/convertFieldToType.cpp b/src/Interpreters/convertFieldToType.cpp index fa49b730379..0b124634fec 100644 --- a/src/Interpreters/convertFieldToType.cpp +++ b/src/Interpreters/convertFieldToType.cpp @@ -201,7 +201,13 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID return src; } - /// TODO Conversion from integers to DateTime64 + if (which_type.isDateTime64() + && (which_from_type.isNativeInt() || which_from_type.isNativeUInt() || which_from_type.isDateOrDateTime())) + { + const auto scale = static_cast(type).getScale(); + const auto decimal_value = DecimalUtils::decimalFromComponents(src.reinterpret(), 0, scale); + return Field(DecimalField(decimal_value, scale)); + } } else if (which_type.isUUID() && src.getType() == Field::Types::UUID) { diff --git a/src/Interpreters/examples/jit_example.cpp b/src/Interpreters/examples/jit_example.cpp index a4f19838bbe..92215429bfc 100644 --- a/src/Interpreters/examples/jit_example.cpp +++ b/src/Interpreters/examples/jit_example.cpp @@ -1,57 +1,57 @@ #include -#include +// #include -#include +// #include -void test_function() -{ - std::cerr << "Test function" << std::endl; -} +// void test_function() +// { +// std::cerr << "Test function" << std::endl; +// } int main(int argc, char **argv) { (void)(argc); (void)(argv); - auto jit = DB::CHJIT(); + // auto jit = DB::CHJIT(); - jit.registerExternalSymbol("test_function", reinterpret_cast(&test_function)); + // jit.registerExternalSymbol("test_function", reinterpret_cast(&test_function)); - auto compiled_module_info = jit.compileModule([](llvm::Module & module) - { - auto & context = module.getContext(); - 
llvm::IRBuilder<> b (context); + // auto compiled_module_info = jit.compileModule([](llvm::Module & module) + // { + // auto & context = module.getContext(); + // llvm::IRBuilder<> b (context); - auto * func_declaration_type = llvm::FunctionType::get(b.getVoidTy(), { }, /*isVarArg=*/false); - auto * func_declaration = llvm::Function::Create(func_declaration_type, llvm::Function::ExternalLinkage, "test_function", module); + // auto * func_declaration_type = llvm::FunctionType::get(b.getVoidTy(), { }, /*isVarArg=*/false); + // auto * func_declaration = llvm::Function::Create(func_declaration_type, llvm::Function::ExternalLinkage, "test_function", module); - auto * value_type = b.getInt64Ty(); - auto * pointer_type = value_type->getPointerTo(); + // auto * value_type = b.getInt64Ty(); + // auto * pointer_type = value_type->getPointerTo(); - auto * func_type = llvm::FunctionType::get(b.getVoidTy(), { pointer_type }, /*isVarArg=*/false); - auto * function = llvm::Function::Create(func_type, llvm::Function::ExternalLinkage, "test_name", module); - auto * entry = llvm::BasicBlock::Create(context, "entry", function); + // auto * func_type = llvm::FunctionType::get(b.getVoidTy(), { pointer_type }, /*isVarArg=*/false); + // auto * function = llvm::Function::Create(func_type, llvm::Function::ExternalLinkage, "test_name", module); + // auto * entry = llvm::BasicBlock::Create(context, "entry", function); - auto * argument = function->args().begin(); - b.SetInsertPoint(entry); + // auto * argument = function->args().begin(); + // b.SetInsertPoint(entry); - b.CreateCall(func_declaration); + // b.CreateCall(func_declaration); - auto * load_argument = b.CreateLoad(argument); - auto * value = b.CreateAdd(load_argument, load_argument); - b.CreateRet(value); - }); + // auto * load_argument = b.CreateLoad(argument); + // auto * value = b.CreateAdd(load_argument, load_argument); + // b.CreateRet(value); + // }); - for (const auto & compiled_function_name : compiled_module_info.compiled_functions) - { - std::cerr << compiled_function_name << std::endl; - } + // for (const auto & compiled_function_name : compiled_module_info.compiled_functions) + // { + // std::cerr << compiled_function_name << std::endl; + // } - int64_t value = 5; - auto * test_name_function = reinterpret_cast(jit.findCompiledFunction(compiled_module_info, "test_name")); - auto result = test_name_function(&value); - std::cerr << "Result " << result << std::endl; + // int64_t value = 5; + // auto * test_name_function = reinterpret_cast(jit.findCompiledFunction(compiled_module_info, "test_name")); + // auto result = test_name_function(&value); + // std::cerr << "Result " << result << std::endl; return 0; } diff --git a/src/Parsers/ASTSelectQuery.cpp b/src/Parsers/ASTSelectQuery.cpp index 4715c7f201b..b5cfa17f035 100644 --- a/src/Parsers/ASTSelectQuery.cpp +++ b/src/Parsers/ASTSelectQuery.cpp @@ -95,7 +95,7 @@ void ASTSelectQuery::formatImpl(const FormatSettings & s, FormatState & state, F if (tables()) { - s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << "FROM " << (s.hilite ? hilite_none : ""); + s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << "FROM" << (s.hilite ? 
hilite_none : ""); tables()->formatImpl(s, state, frame); } diff --git a/src/Parsers/ASTSelectWithUnionQuery.cpp b/src/Parsers/ASTSelectWithUnionQuery.cpp index 6a990a3d154..fa7359574f8 100644 --- a/src/Parsers/ASTSelectWithUnionQuery.cpp +++ b/src/Parsers/ASTSelectWithUnionQuery.cpp @@ -35,24 +35,24 @@ void ASTSelectWithUnionQuery::formatQueryImpl(const FormatSettings & settings, F if (mode == Mode::Unspecified) return ""; else if (mode == Mode::ALL) - return "ALL"; + return " ALL"; else - return "DISTINCT"; + return " DISTINCT"; }; for (ASTs::const_iterator it = list_of_selects->children.begin(); it != list_of_selects->children.end(); ++it) { if (it != list_of_selects->children.begin()) - settings.ostr << settings.nl_or_ws << indent_str << (settings.hilite ? hilite_keyword : "") << "UNION " + settings.ostr << settings.nl_or_ws << indent_str << (settings.hilite ? hilite_keyword : "") << "UNION" << mode_to_str((is_normalized) ? union_mode : list_of_modes[it - list_of_selects->children.begin() - 1]) << (settings.hilite ? hilite_none : ""); if (auto * node = (*it)->as()) { + settings.ostr << settings.nl_or_ws << indent_str; + if (node->list_of_selects->children.size() == 1) { - if (it != list_of_selects->children.begin()) - settings.ostr << settings.nl_or_ws; (node->list_of_selects->children.at(0))->formatImpl(settings, state, frame); } else diff --git a/src/Parsers/ASTSubquery.cpp b/src/Parsers/ASTSubquery.cpp index 142072050a6..58f334376e6 100644 --- a/src/Parsers/ASTSubquery.cpp +++ b/src/Parsers/ASTSubquery.cpp @@ -29,6 +29,11 @@ void ASTSubquery::appendColumnNameImpl(WriteBuffer & ostr) const void ASTSubquery::formatImplWithoutAlias(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const { + /// NOTE: due to trickery of filling cte_name (in interpreters) it is hard + /// to print it w/o newline (for !oneline case), since if nl_or_ws + /// prepended here, then formatting will be incorrect with alias: + /// + /// (select 1 in ((select 1) as sub)) if (!cte_name.empty()) { settings.ostr << (settings.hilite ? hilite_identifier : ""); @@ -40,7 +45,7 @@ void ASTSubquery::formatImplWithoutAlias(const FormatSettings & settings, Format std::string indent_str = settings.one_line ? "" : std::string(4u * frame.indent, ' '); std::string nl_or_nothing = settings.one_line ? 
"" : "\n"; - settings.ostr << nl_or_nothing << indent_str << "(" << nl_or_nothing; + settings.ostr << "(" << nl_or_nothing; FormatStateStacked frame_nested = frame; frame_nested.need_parens = false; ++frame_nested.indent; diff --git a/src/Parsers/ASTTablesInSelectQuery.cpp b/src/Parsers/ASTTablesInSelectQuery.cpp index 8d131a848f7..e063b9f7125 100644 --- a/src/Parsers/ASTTablesInSelectQuery.cpp +++ b/src/Parsers/ASTTablesInSelectQuery.cpp @@ -109,14 +109,17 @@ void ASTTableExpression::formatImpl(const FormatSettings & settings, FormatState if (database_and_table_name) { + settings.ostr << " "; database_and_table_name->formatImpl(settings, state, frame); } else if (table_function) { + settings.ostr << " "; table_function->formatImpl(settings, state, frame); } else if (subquery) { + settings.ostr << settings.nl_or_ws << indent_str; subquery->formatImpl(settings, state, frame); } @@ -142,9 +145,15 @@ void ASTTableExpression::formatImpl(const FormatSettings & settings, FormatState } -void ASTTableJoin::formatImplBeforeTable(const FormatSettings & settings, FormatState &, FormatStateStacked) const +void ASTTableJoin::formatImplBeforeTable(const FormatSettings & settings, FormatState &, FormatStateStacked frame) const { settings.ostr << (settings.hilite ? hilite_keyword : ""); + std::string indent_str = settings.one_line ? "" : std::string(4 * frame.indent, ' '); + + if (kind != Kind::Comma) + { + settings.ostr << settings.nl_or_ws << indent_str; + } switch (locality) { @@ -241,6 +250,7 @@ void ASTArrayJoin::formatImpl(const FormatSettings & settings, FormatState & sta frame.expression_list_prepend_whitespace = true; settings.ostr << (settings.hilite ? hilite_keyword : "") + << settings.nl_or_ws << (kind == Kind::Left ? "LEFT " : "") << "ARRAY JOIN" << (settings.hilite ? hilite_none : ""); settings.one_line @@ -254,10 +264,7 @@ void ASTTablesInSelectQueryElement::formatImpl(const FormatSettings & settings, if (table_expression) { if (table_join) - { table_join->as().formatImplBeforeTable(settings, state, frame); - settings.ostr << " "; - } table_expression->formatImpl(settings, state, frame); @@ -275,13 +282,8 @@ void ASTTablesInSelectQuery::formatImpl(const FormatSettings & settings, FormatS { std::string indent_str = settings.one_line ? "" : std::string(4 * frame.indent, ' '); - for (ASTs::const_iterator it = children.begin(); it != children.end(); ++it) - { - if (it != children.begin()) - settings.ostr << settings.nl_or_ws << indent_str; - - (*it)->formatImpl(settings, state, frame); - } + for (const auto & child : children) + child->formatImpl(settings, state, frame); } } diff --git a/src/Parsers/ASTWithElement.cpp b/src/Parsers/ASTWithElement.cpp index ce39086eb4a..b517509c4bc 100644 --- a/src/Parsers/ASTWithElement.cpp +++ b/src/Parsers/ASTWithElement.cpp @@ -16,8 +16,11 @@ ASTPtr ASTWithElement::clone() const void ASTWithElement::formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const { + std::string indent_str = settings.one_line ? "" : std::string(4 * frame.indent, ' '); + settings.writeIdentifier(name); - settings.ostr << (settings.hilite ? hilite_keyword : "") << " AS " << (settings.hilite ? hilite_none : ""); + settings.ostr << (settings.hilite ? hilite_keyword : "") << " AS" << (settings.hilite ? 
hilite_none : ""); + settings.ostr << settings.nl_or_ws << indent_str; dynamic_cast(*subquery).formatImplWithoutAlias(settings, state, frame); } diff --git a/src/Processors/QueryPlan/ArrayJoinStep.cpp b/src/Processors/QueryPlan/ArrayJoinStep.cpp index 9089bb8e5a2..fa9ea298319 100644 --- a/src/Processors/QueryPlan/ArrayJoinStep.cpp +++ b/src/Processors/QueryPlan/ArrayJoinStep.cpp @@ -60,6 +60,7 @@ void ArrayJoinStep::transformPipeline(QueryPipeline & pipeline, const BuildQuery pipeline.getHeader().getColumnsWithTypeAndName(), res_header.getColumnsWithTypeAndName(), ActionsDAG::MatchColumnsMode::Name); + auto actions = std::make_shared(actions_dag, settings.getActionsSettings()); pipeline.addSimpleTransform([&](const Block & header) diff --git a/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp b/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp index 9691da4a362..2480673d65e 100644 --- a/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp +++ b/src/Processors/QueryPlan/BuildQueryPipelineSettings.cpp @@ -9,7 +9,7 @@ namespace DB BuildQueryPipelineSettings BuildQueryPipelineSettings::fromSettings(const Settings & from) { BuildQueryPipelineSettings settings; - settings.actions_settings = ExpressionActionsSettings::fromSettings(from); + settings.actions_settings = ExpressionActionsSettings::fromSettings(from, CompileExpressions::yes); return settings; } diff --git a/src/Processors/QueryPlan/ExpressionStep.cpp b/src/Processors/QueryPlan/ExpressionStep.cpp index eb0c5abe669..656dcd46fe9 100644 --- a/src/Processors/QueryPlan/ExpressionStep.cpp +++ b/src/Processors/QueryPlan/ExpressionStep.cpp @@ -55,6 +55,7 @@ void ExpressionStep::updateInputStream(DataStream input_stream, bool keep_header void ExpressionStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) { auto expression = std::make_shared(actions_dag, settings.getActionsSettings()); + pipeline.addSimpleTransform([&](const Block & header) { return std::make_shared(header, expression); @@ -80,7 +81,7 @@ void ExpressionStep::describeActions(FormatSettings & settings) const String prefix(settings.offset, ' '); bool first = true; - auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); + auto expression = std::make_shared(actions_dag); for (const auto & action : expression->getActions()) { settings.out << prefix << (first ? 
"Actions: " @@ -97,7 +98,7 @@ void ExpressionStep::describeActions(FormatSettings & settings) const void ExpressionStep::describeActions(JSONBuilder::JSONMap & map) const { - auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); + auto expression = std::make_shared(actions_dag); map.add("Expression", expression->toTree()); } diff --git a/src/Processors/QueryPlan/FilterStep.cpp b/src/Processors/QueryPlan/FilterStep.cpp index 49c9326087b..15fd5c7b673 100644 --- a/src/Processors/QueryPlan/FilterStep.cpp +++ b/src/Processors/QueryPlan/FilterStep.cpp @@ -68,6 +68,7 @@ void FilterStep::updateInputStream(DataStream input_stream, bool keep_header) void FilterStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) { auto expression = std::make_shared(actions_dag, settings.getActionsSettings()); + pipeline.addSimpleTransform([&](const Block & header, QueryPipeline::StreamType stream_type) { bool on_totals = stream_type == QueryPipeline::StreamType::Totals; @@ -99,7 +100,7 @@ void FilterStep::describeActions(FormatSettings & settings) const settings.out << '\n'; bool first = true; - auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); + auto expression = std::make_shared(actions_dag); for (const auto & action : expression->getActions()) { settings.out << prefix << (first ? "Actions: " @@ -119,7 +120,7 @@ void FilterStep::describeActions(JSONBuilder::JSONMap & map) const map.add("Filter Column", filter_column_name); map.add("Removes Filter", remove_filter_column); - auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); + auto expression = std::make_shared(actions_dag); map.add("Expression", expression->toTree()); } diff --git a/src/Processors/QueryPlan/TotalsHavingStep.cpp b/src/Processors/QueryPlan/TotalsHavingStep.cpp index ce073db4daa..db82538d5a0 100644 --- a/src/Processors/QueryPlan/TotalsHavingStep.cpp +++ b/src/Processors/QueryPlan/TotalsHavingStep.cpp @@ -51,10 +51,16 @@ TotalsHavingStep::TotalsHavingStep( void TotalsHavingStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings & settings) { + auto expression_actions = actions_dag ? std::make_shared(actions_dag, settings.getActionsSettings()) : nullptr; + auto totals_having = std::make_shared( - pipeline.getHeader(), overflow_row, - (actions_dag ? std::make_shared(actions_dag, settings.getActionsSettings()) : nullptr), - filter_column_name, totals_mode, auto_include_threshold, final); + pipeline.getHeader(), + overflow_row, + expression_actions, + filter_column_name, + totals_mode, + auto_include_threshold, + final); pipeline.addTotalsHavingTransform(std::move(totals_having)); } @@ -85,7 +91,7 @@ void TotalsHavingStep::describeActions(FormatSettings & settings) const if (actions_dag) { bool first = true; - auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); + auto expression = std::make_shared(actions_dag); for (const auto & action : expression->getActions()) { settings.out << prefix << (first ? 
"Actions: " @@ -102,7 +108,7 @@ void TotalsHavingStep::describeActions(JSONBuilder::JSONMap & map) const if (actions_dag) { map.add("Filter column", filter_column_name); - auto expression = std::make_shared(actions_dag, ExpressionActionsSettings{}); + auto expression = std::make_shared(actions_dag); map.add("Expression", expression->toTree()); } } diff --git a/src/Server/HTTP/ReadHeaders.cpp b/src/Server/HTTP/ReadHeaders.cpp index 2fc2de8321a..b7057501064 100644 --- a/src/Server/HTTP/ReadHeaders.cpp +++ b/src/Server/HTTP/ReadHeaders.cpp @@ -68,9 +68,6 @@ void readHeaders( if (in.eof()) throw Poco::Net::MessageException("Field is invalid"); - if (value.empty()) - throw Poco::Net::MessageException("Field value is empty"); - if (ch == '\n') throw Poco::Net::MessageException("No CRLF found"); diff --git a/src/Storages/ConstraintsDescription.cpp b/src/Storages/ConstraintsDescription.cpp index 1e86a17523b..7015c3f8e48 100644 --- a/src/Storages/ConstraintsDescription.cpp +++ b/src/Storages/ConstraintsDescription.cpp @@ -52,7 +52,7 @@ ConstraintsExpressions ConstraintsDescription::getExpressions(const DB::ContextP auto * constraint_ptr = constraint->as(); ASTPtr expr = constraint_ptr->expr->clone(); auto syntax_result = TreeRewriter(context).analyze(expr, source_columns_); - res.push_back(ExpressionAnalyzer(constraint_ptr->expr->clone(), syntax_result, context).getActions(false)); + res.push_back(ExpressionAnalyzer(constraint_ptr->expr->clone(), syntax_result, context).getActions(false, true, CompileExpressions::yes)); } return res; } diff --git a/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp b/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp index 8e67931754b..4e151bfdb91 100644 --- a/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp +++ b/src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp @@ -1268,6 +1268,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mutatePartToTempor need_remove_expired_values = true; /// All columns from part are changed and may be some more that were missing before in part + /// TODO We can materialize compact part without copying data if (!isWidePart(source_part) || (mutation_kind == MutationsInterpreter::MutationKind::MUTATE_OTHER && interpreter && interpreter->isAffectingAllColumns())) { @@ -1386,6 +1387,7 @@ MergeTreeData::MutableDataPartPtr MergeTreeDataMergerMutator::mutatePartToTempor metadata_snapshot, indices_to_recalc, projections_to_recalc, + // If it's an index/projection materialization, we don't write any data columns, thus empty header is used mutation_kind == MutationsInterpreter::MutationKind::MUTATE_INDEX_PROJECTION ? Block{} : updated_header, new_data_part, in, diff --git a/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp b/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp index 5e85f88a487..23a7b205a1b 100644 --- a/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp +++ b/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp @@ -355,7 +355,9 @@ size_t MergeTreeDataPartWriterCompact::ColumnsBuffer::size() const void MergeTreeDataPartWriterCompact::finish(IMergeTreeDataPart::Checksums & checksums, bool sync) { - finishDataSerialization(checksums, sync); + // If we don't have anything to write, skip finalization. 
+   if (!columns_list.empty())
+       finishDataSerialization(checksums, sync);

    if (settings.rewrite_primary_key)
        finishPrimaryIndexSerialization(checksums, sync);

diff --git a/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp b/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp
index 57e8cca46cd..2666ba1518f 100644
--- a/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp
+++ b/src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp
@@ -559,7 +559,10 @@ void MergeTreeDataPartWriterWide::finishDataSerialization(IMergeTreeDataPart::Ch
void MergeTreeDataPartWriterWide::finish(IMergeTreeDataPart::Checksums & checksums, bool sync)
{
-   finishDataSerialization(checksums, sync);
+   // If we don't have anything to write, skip finalization.
+   if (!columns_list.empty())
+       finishDataSerialization(checksums, sync);
+
    if (settings.rewrite_primary_key)
        finishPrimaryIndexSerialization(checksums, sync);

diff --git a/src/Storages/MySQL/MySQLSettings.cpp b/src/Storages/MySQL/MySQLSettings.cpp
new file mode 100644
index 00000000000..1a8f0804777
--- /dev/null
+++ b/src/Storages/MySQL/MySQLSettings.cpp
@@ -0,0 +1,42 @@
+#include
+#include
+#include
+#include
+#include
+
+
+namespace DB
+{
+
+namespace ErrorCodes
+{
+    extern const int UNKNOWN_SETTING;
+}
+
+IMPLEMENT_SETTINGS_TRAITS(MySQLSettingsTraits, LIST_OF_MYSQL_SETTINGS)
+
+void MySQLSettings::loadFromQuery(ASTStorage & storage_def)
+{
+    if (storage_def.settings)
+    {
+        try
+        {
+            applyChanges(storage_def.settings->changes);
+        }
+        catch (Exception & e)
+        {
+            if (e.code() == ErrorCodes::UNKNOWN_SETTING)
+                e.addMessage("for storage " + storage_def.engine->name);
+            throw;
+        }
+    }
+    else
+    {
+        auto settings_ast = std::make_shared<ASTSetQuery>();
+        settings_ast->is_standalone = false;
+        storage_def.set(storage_def.settings, settings_ast);
+    }
+}
+
+}
+
diff --git a/src/Storages/MySQL/MySQLSettings.h b/src/Storages/MySQL/MySQLSettings.h
new file mode 100644
index 00000000000..da8723c2ea6
--- /dev/null
+++ b/src/Storages/MySQL/MySQLSettings.h
@@ -0,0 +1,32 @@
+#pragma once
+
+#include
+#include
+
+
+namespace Poco::Util
+{
+    class AbstractConfiguration;
+}
+
+
+namespace DB
+{
+class ASTStorage;
+
+#define LIST_OF_MYSQL_SETTINGS(M) \
+    M(UInt64, connection_pool_size, 16, "Size of connection pool (if all connections are in use, the query will wait until some connection will be freed).", 0) \
+    M(UInt64, connection_max_tries, 3, "Number of retries for pool with failover", 0) \
+    M(Bool, connection_auto_close, true, "Auto-close connection after query execution, i.e. disable connection reuse.", 0) \
+
+DECLARE_SETTINGS_TRAITS(MySQLSettingsTraits, LIST_OF_MYSQL_SETTINGS)
+
+
+/** Settings for the MySQL family of engines.
+  */
+struct MySQLSettings : public BaseSettings<MySQLSettingsTraits>
+{
+    void loadFromQuery(ASTStorage & storage_def);
+};
+
+}

diff --git a/src/Storages/SelectQueryInfo.h b/src/Storages/SelectQueryInfo.h
index 33335773842..e971e126972 100644
--- a/src/Storages/SelectQueryInfo.h
+++ b/src/Storages/SelectQueryInfo.h
@@ -170,6 +170,9 @@ struct SelectQueryInfo
    /// Example: x IN (1, 2, 3)
    PreparedSets sets;

+   /// Cached value of ExpressionAnalysisResult::has_window
+   bool has_window = false;
+
    ClusterPtr getCluster() const { return !optimized_cluster ? cluster : optimized_cluster; }

    /// If not null, it means we choose a projection to execute current query.
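Note: MySQLSettings uses the standard BaseSettings machinery, so the engine consumes `SETTINGS connection_pool_size = ..., connection_auto_close = ...` from CREATE TABLE in the usual way. A sketch of the consuming side, mirroring the registerStorageMySQL hunk further below (`args` stands for the factory creator's arguments and is an assumption here):

    /// Defaults come from the LIST_OF_MYSQL_SETTINGS macro above:
    /// connection_pool_size = 16, connection_max_tries = 3, connection_auto_close = true.
    MySQLSettings mysql_settings;
    if (args.storage_def->settings)
        mysql_settings.loadFromQuery(*args.storage_def);

    if (!mysql_settings.connection_pool_size)
        throw Exception("connection_pool_size cannot be zero.", ErrorCodes::BAD_ARGUMENTS);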
diff --git a/src/Storages/StorageDistributed.cpp b/src/Storages/StorageDistributed.cpp
index de243c2d2e1..3a3291c6c48 100644
--- a/src/Storages/StorageDistributed.cpp
+++ b/src/Storages/StorageDistributed.cpp
@@ -281,13 +281,15 @@ void replaceConstantExpressions(
    visitor.visit(node);
}

-/// Returns one of the following:
+/// This is the implementation of optimize_distributed_group_by_sharding_key.
+/// It returns up to which stage the query can be processed on a shard, which
+/// is one of the following:
/// - QueryProcessingStage::Complete
/// - QueryProcessingStage::WithMergeableStateAfterAggregation
/// - none (in this case regular WithMergeableState should be used)
-std::optional<QueryProcessingStage::Enum> getOptimizedQueryProcessingStage(const ASTPtr & query_ptr, bool extremes, const Block & sharding_key_block)
+std::optional<QueryProcessingStage::Enum> getOptimizedQueryProcessingStage(const SelectQueryInfo & query_info, bool extremes, const Block & sharding_key_block)
{
-   const auto & select = query_ptr->as<ASTSelectQuery &>();
+   const auto & select = query_info.query->as<ASTSelectQuery &>();

    auto sharding_block_has = [&](const auto & exprs, size_t limit = SIZE_MAX) -> bool
    {
@@ -314,6 +316,10 @@ std::optional<QueryProcessingStage::Enum> getOptimizedQueryProcessingStage(const
    if (select.group_by_with_totals || select.group_by_with_rollup || select.group_by_with_cube)
        return {};

+   // Window functions are not supported.
+   if (query_info.has_window)
+       return {};
+
    // TODO: extremes support can be implemented
    if (extremes)
        return {};
@@ -527,7 +533,7 @@ QueryProcessingStage::Enum StorageDistributed::getQueryProcessingStage(
        (settings.allow_nondeterministic_optimize_skip_unused_shards || sharding_key_is_deterministic))
    {
        Block sharding_key_block = sharding_key_expr->getSampleBlock();
-       auto stage = getOptimizedQueryProcessingStage(query_info.query, settings.extremes, sharding_key_block);
+       auto stage = getOptimizedQueryProcessingStage(query_info, settings.extremes, sharding_key_block);
        if (stage)
        {
            LOG_DEBUG(log, "Force processing stage to {}", QueryProcessingStage::toString(*stage));

diff --git a/src/Storages/StorageExternalDistributed.cpp b/src/Storages/StorageExternalDistributed.cpp
index 5a153f16a0a..32b9c7e9245 100644
--- a/src/Storages/StorageExternalDistributed.cpp
+++ b/src/Storages/StorageExternalDistributed.cpp
@@ -12,6 +12,7 @@
#include
#include
#include
+#include <Storages/MySQL/MySQLSettings.h>
#include
#include
#include
@@ -79,7 +80,8 @@ StorageExternalDistributed::StorageExternalDistributed(
                columns_,
                constraints_,
                String{},
-               context);
+               context,
+               MySQLSettings{});
            break;
        }
#endif

diff --git a/src/Storages/StorageMaterializedView.cpp b/src/Storages/StorageMaterializedView.cpp
index 27cd649aae4..67bd6b21c3f 100644
--- a/src/Storages/StorageMaterializedView.cpp
+++ b/src/Storages/StorageMaterializedView.cpp
@@ -366,17 +366,18 @@ void StorageMaterializedView::renameInMemory(const StorageID & new_table_id)
    auto metadata_snapshot = getInMemoryMetadataPtr();
    bool from_atomic_to_atomic_database = old_table_id.hasUUID() && new_table_id.hasUUID();

-   if (has_inner_table && tryGetTargetTable() && !from_atomic_to_atomic_database)
+   if (!from_atomic_to_atomic_database && has_inner_table && tryGetTargetTable())
    {
        auto new_target_table_name = generateInnerTableName(new_table_id);
        auto rename = std::make_shared<ASTRenameQuery>();

        ASTRenameQuery::Table from;
+       assert(target_table_id.database_name == old_table_id.database_name);
        from.database = target_table_id.database_name;
        from.table = target_table_id.table_name;

        ASTRenameQuery::Table to;
-       to.database = target_table_id.database_name;
+       to.database = new_table_id.database_name;
        to.table = new_target_table_name;

        ASTRenameQuery::Element elem;
@@ -385,10 +386,16 @@ void StorageMaterializedView::renameInMemory(const StorageID & new_table_id)
        rename->elements.emplace_back(elem);

        InterpreterRenameQuery(rename, getContext()).execute();
+       target_table_id.database_name = new_table_id.database_name;
        target_table_id.table_name = new_target_table_name;
    }

    IStorage::renameInMemory(new_table_id);
+   if (from_atomic_to_atomic_database && has_inner_table)
+   {
+       assert(target_table_id.database_name == old_table_id.database_name);
+       target_table_id.database_name = new_table_id.database_name;
+   }
    const auto & select_query = metadata_snapshot->getSelectQuery();
    // TODO Actually we don't need to update dependency if MV has UUID, but then db and table name will be outdated
    DatabaseCatalog::instance().updateDependency(select_query.select_table_id, old_table_id, select_query.select_table_id, getStorageID());

diff --git a/src/Storages/StorageMemory.cpp b/src/Storages/StorageMemory.cpp
index 13d5df866bc..7f7f68335bb 100644
--- a/src/Storages/StorageMemory.cpp
+++ b/src/Storages/StorageMemory.cpp
@@ -262,7 +262,14 @@ void StorageMemory::mutate(const MutationCommands & commands, ContextPtr context
    auto metadata_snapshot = getInMemoryMetadataPtr();
    auto storage = getStorageID();
    auto storage_ptr = DatabaseCatalog::instance().getTable(storage, context);
-   auto interpreter = std::make_unique<MutationsInterpreter>(storage_ptr, metadata_snapshot, commands, context, true);
+
+   /// When max_threads > 1, the order of returning blocks is uncertain,
+   /// which will lead to inconsistency after updateBlockData.
+   auto new_context = Context::createCopy(context);
+   new_context->setSetting("max_streams_to_max_threads_ratio", 1);
+   new_context->setSetting("max_threads", 1);
+
+   auto interpreter = std::make_unique<MutationsInterpreter>(storage_ptr, metadata_snapshot, commands, new_context, true);
    auto in = interpreter->execute();

    in->readPrefix();

diff --git a/src/Storages/StorageMerge.cpp b/src/Storages/StorageMerge.cpp
index aff62d2a337..5a84e0c3901 100644
--- a/src/Storages/StorageMerge.cpp
+++ b/src/Storages/StorageMerge.cpp
@@ -415,7 +415,7 @@ Pipe StorageMerge::createSources(
            auto adding_column_dag = ActionsDAG::makeAddingColumnActions(std::move(column));
            auto adding_column_actions = std::make_shared<ExpressionActions>(
                std::move(adding_column_dag),
-               ExpressionActionsSettings::fromContext(modified_context));
+               ExpressionActionsSettings::fromContext(modified_context, CompileExpressions::yes));

            pipe.addSimpleTransform([&](const Block & stream_header)
            {
@@ -559,7 +559,7 @@ void StorageMerge::convertingSourceStream(
            pipe.getHeader().getColumnsWithTypeAndName(),
            header.getColumnsWithTypeAndName(),
            ActionsDAG::MatchColumnsMode::Name);
-       auto convert_actions = std::make_shared<ExpressionActions>(convert_actions_dag, ExpressionActionsSettings::fromContext(local_context));
+       auto convert_actions = std::make_shared<ExpressionActions>(convert_actions_dag, ExpressionActionsSettings::fromContext(local_context, CompileExpressions::yes));

        pipe.addSimpleTransform([&](const Block & stream_header)
        {

diff --git a/src/Storages/StorageMySQL.cpp b/src/Storages/StorageMySQL.cpp
index 4cf69d7dd77..1dadcfe986b 100644
--- a/src/Storages/StorageMySQL.cpp
+++ b/src/Storages/StorageMySQL.cpp
@@ -15,6 +15,7 @@
#include
#include
#include
+#include <Storages/MySQL/MySQLSettings.h>
#include
#include
#include
@@ -50,13 +51,15 @@ StorageMySQL::StorageMySQL(
    const ColumnsDescription & columns_,
    const ConstraintsDescription & constraints_,
    const String & comment,
-   ContextPtr context_)
+   ContextPtr context_,
+   const MySQLSettings & mysql_settings_)
    : IStorage(table_id_)
    , WithContext(context_->getGlobalContext())
    , remote_database_name(remote_database_name_)
    , remote_table_name(remote_table_name_)
    , replace_query{replace_query_}
    , on_duplicate_clause{on_duplicate_clause_}
+   , mysql_settings(mysql_settings_)
    , pool(std::make_shared<mysqlxx::PoolWithFailover>(pool_))
{
    StorageInMemoryMetadata storage_metadata;
@@ -98,7 +101,8 @@ Pipe StorageMySQL::read(
    }

-   StreamSettings mysql_input_stream_settings(context_->getSettingsRef(), true, false);
+   StreamSettings mysql_input_stream_settings(context_->getSettingsRef(),
+       mysql_settings.connection_auto_close);
    return Pipe(std::make_shared<SourceFromInputStream>(
        std::make_shared<MySQLWithFailoverBlockInputStream>(pool, query, sample_block, mysql_input_stream_settings)));
}
@@ -250,8 +254,22 @@ void registerStorageMySQL(StorageFactory & factory)
        const String & password = engine_args[4]->as<ASTLiteral &>().value.safeGet<String>();
        size_t max_addresses = args.getContext()->getSettingsRef().glob_expansion_max_elements;

+       /// TODO: move some arguments from the arguments to the SETTINGS.
+       MySQLSettings mysql_settings;
+       if (args.storage_def->settings)
+       {
+           mysql_settings.loadFromQuery(*args.storage_def);
+       }
+
+       if (!mysql_settings.connection_pool_size)
+           throw Exception("connection_pool_size cannot be zero.", ErrorCodes::BAD_ARGUMENTS);
+
        auto addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 3306);
-       mysqlxx::PoolWithFailover pool(remote_database, addresses, username, password);
+       mysqlxx::PoolWithFailover pool(remote_database, addresses,
+           username, password,
+           MYSQLXX_POOL_WITH_FAILOVER_DEFAULT_START_CONNECTIONS,
+           mysql_settings.connection_pool_size,
+           mysql_settings.connection_max_tries);

        bool replace_query = false;
        std::string on_duplicate_clause;
@@ -275,9 +293,11 @@ void registerStorageMySQL(StorageFactory & factory)
            args.columns,
            args.constraints,
            args.comment,
-           args.getContext());
+           args.getContext(),
+           mysql_settings);
    },
    {
+       .supports_settings = true,
        .source_access_type = AccessType::MYSQL,
    });
}

diff --git a/src/Storages/StorageMySQL.h b/src/Storages/StorageMySQL.h
index a7aca48197e..5eb9ed14524 100644
--- a/src/Storages/StorageMySQL.h
+++ b/src/Storages/StorageMySQL.h
@@ -9,6 +9,7 @@
#include
#include
+#include <Storages/MySQL/MySQLSettings.h>
#include
@@ -33,7 +34,8 @@ public:
        const ColumnsDescription & columns_,
        const ConstraintsDescription & constraints_,
        const String & comment,
-       ContextPtr context_);
+       ContextPtr context_,
+       const MySQLSettings & mysql_settings_);

    std::string getName() const override { return "MySQL"; }
@@ -56,6 +58,8 @@ private:
    bool replace_query;
    std::string on_duplicate_clause;

+   MySQLSettings mysql_settings;
+
    mysqlxx::PoolWithFailoverPtr pool;
};

diff --git a/src/Storages/StorageReplicatedMergeTree.cpp b/src/Storages/StorageReplicatedMergeTree.cpp
index f9f1b11d0b8..d04dc46ea83 100644
--- a/src/Storages/StorageReplicatedMergeTree.cpp
+++ b/src/Storages/StorageReplicatedMergeTree.cpp
@@ -4722,6 +4722,10 @@ void StorageReplicatedMergeTree::alter(
        if (new_indices_str != current_metadata->secondary_indices.toString())
            future_metadata_in_zk.skip_indices = new_indices_str;

+       String new_projections_str = future_metadata.projections.toString();
+       if (new_projections_str != current_metadata->projections.toString())
+           future_metadata_in_zk.projections = new_projections_str;
+
        String new_constraints_str = future_metadata.constraints.toString();
        if (new_constraints_str != current_metadata->constraints.toString())
            future_metadata_in_zk.constraints = new_constraints_str;

diff --git a/src/Storages/StorageTableFunction.h b/src/Storages/StorageTableFunction.h
index e6f21c44fc3..859735fec5b 100644
---
a/src/Storages/StorageTableFunction.h +++ b/src/Storages/StorageTableFunction.h @@ -103,7 +103,7 @@ public: ActionsDAG::MatchColumnsMode::Name); auto convert_actions = std::make_shared( convert_actions_dag, - ExpressionActionsSettings::fromSettings(context->getSettingsRef())); + ExpressionActionsSettings::fromSettings(context->getSettingsRef(), CompileExpressions::yes)); pipe.addSimpleTransform([&](const Block & header) { diff --git a/src/Storages/System/StorageSystemContributors.generated.cpp b/src/Storages/System/StorageSystemContributors.generated.cpp index b8741e6951c..a5524aee6ae 100644 --- a/src/Storages/System/StorageSystemContributors.generated.cpp +++ b/src/Storages/System/StorageSystemContributors.generated.cpp @@ -1,34 +1,43 @@ // autogenerated by ./StorageSystemContributors.sh const char * auto_contributors[] { + "박현우", "0xflotus", "20018712", "243f6a88 85a308d3", "243f6a8885a308d313198a2e037", "3ldar-nasyrov", "821008736@qq.com", + "abdrakhmanov", + "abel-wang", + "abyss7", + "achimbab", + "achulkov2", + "adevyatova", + "ageraab", + "akazz", "Akazz", + "akonyaev", + "akuzm", "Alain BERRIER", "Albert Kidrachev", "Alberto", - "Aleksandr Karo", "Aleksandra (Ася)", + "Aleksandr Karo", "Aleksandrov Vladimir", + "alekseik1", "Aleksei Levushkin", "Aleksei Semiglazov", "Aleksey", "Aleksey Akulovich", + "alesapin", "Alex", - "Alex Bocharov", - "Alex Karo", - "Alex Krash", - "Alex Ryndin", - "Alex Zatelepin", "Alexander Avdonkin", "Alexander Burmak", "Alexander Ermolaev", - "Alexander GQ Gerasiov", "Alexander Gololobov", + "Alexander GQ Gerasiov", "Alexander Kazakov", + "alexander kozhikhov", "Alexander Kozhikhov", "Alexander Krasheninnikov", "Alexander Kuranoff", @@ -44,37 +53,53 @@ const char * auto_contributors[] { "Alexander Sapin", "Alexander Tokmakov", "Alexander Tretiakov", + "Alexandra Latysheva", + "Alexandre Snarskii", "Alexandr Kondratev", "Alexandr Krasheninnikov", "Alexandr Orlov", - "Alexandra Latysheva", - "Alexandre Snarskii", + "Alex Bocharov", "Alexei Averchenko", "Alexey", "Alexey Arno", "Alexey Dushechkin", "Alexey Elymanov", "Alexey Ilyukhov", + "alexey-milovidov", "Alexey Milovidov", "Alexey Tronov", "Alexey Vasiliev", "Alexey Zatelepin", + "Alex Karo", + "Alex Krash", + "alex.lvxin", + "Alex Ryndin", "Alexsey Shestakov", - "Ali Demirci", + "alex-zaitsev", + "Alex Zatelepin", + "alfredlu", "Aliaksandr Pliutau", "Aliaksandr Shylau", + "Ali Demirci", + "amesaru", + "Amesaru", "Amos Bird", + "amoschen", + "amudong", "Amy Krishnevsky", - "AnaUvarova", "Anastasiya Rodigina", "Anastasiya Tsarkova", "Anatoly Pugachev", + "ana-uvarova", + "AnaUvarova", "AndreevDm", "Andrei Bodrov", "Andrei Chulkov", + "andrei-karpliuk", "Andrei Nekrashevich", "Andrew Grigorev", "Andrew Onyshchuk", + "andrewsg", "Andrey", "Andrey Chulkov", "Andrey Dudin", @@ -85,11 +110,16 @@ const char * auto_contributors[] { "Andrey Mironov", "Andrey Skobtsov", "Andrey Urusov", + "Andrey Z", "Andy Yang", "Anmol Arora", "Anna", "Anna Shakhova", + "annvsh", + "anrodigina", "Anthony N. 
Simon", + "antikvist", + "anton", "Anton Ivashkin", "Anton Kobzev", "Anton Kvasha", @@ -101,83 +131,136 @@ const char * auto_contributors[] { "Anton Tikhonov", "Anton Yuzhaninov", "Anton Zhabolenko", + "ap11", + "a.palagashvili", + "aprudaev", "Ariel Robaldo", "Arsen Hakobyan", "ArtCorp", "Artem Andreenko", + "Artemeey", "Artem Gavrilov", "Artem Hnilov", + "Artemkin Pavel", "Artem Konovalov", "Artem Streltsov", "Artem Zuikov", - "Artemeey", - "Artemkin Pavel", "Arthur Petukhovsky", "Arthur Tokarchuk", "Arthur Wong", + "artpaul", "Artur", "Artur Beglaryan", "AsiaKorushkina", + "asiana21", "Atri Sharma", + "avasiliev", + "avogar", "Avogar", + "avsharapov", + "awesomeleo", "Azat Khuzhin", - "BSD_Conqueror", "Babacar Diassé", "Bakhtiyor Ruziev", "BanyRule", "Baudouin Giard", "BayoNet", + "benamazing", + "benbiti", + "Benjamin Naecker", "Bertrand Junqua", + "bgranvea", "Bharat Nallan", + "bharatnc", "Big Elephant", "Bill", "BlahGeek", + "blazerer", + "bluebirddm", + "bobrovskij artemij", "Bogdan", "Bogdan Voronin", "BohuTANG", "Bolinov", + "booknouse", "Boris Granveaud", "Bowen Masco", + "bo zeng", "Brett Hoerner", + "BSD_Conqueror", + "bseng", "Bulat Gaifullin", "Carbyn", + "cekc", + "centos7", + "champtar", + "chang.chen", + "changvvb", "Chao Ma", "Chao Wang", + "chasingegg", + "chengy8934", + "chenqi", + "chenxing-xc", + "chenxing.xc", "Chen Yufei", + "chertus", "Chienlung Cheung", + "chou.fan", "Christian", "Ciprian Hacman", "Clement Rodriguez", "Clément Rodriguez", "Colum", + "comunodi", "Constantin S. Pan", + "coraxster", "CurtizJ", - "DIAOZHAFENG", + "damozhaeva", "Daniel Bershatsky", "Daniel Dao", "Daniel Qin", "Danila Kutenin", + "dankondr", "Dao Minh Thuc", + "daoready", "Daria Mozhaeva", "Dario", - "DarkWanderer", "Darío", + "DarkWanderer", + "dasmfm", + "davydovska", + "decaseal", "Denis Burlaka", "Denis Glazachev", "Denis Krivak", "Denis Zhuravlev", "Denny Crane", + "dependabot[bot]", + "dependabot-preview[bot]", "Derek Perkins", + "detailyang", + "dfenelonov", + "dgrr", + "DIAOZHAFENG", + "dimarub2000", "Ding Xiang Fei", + "dinosaur", + "divanorama", + "dkxiaohei", + "dmi-feo", "Dmitriev Mikhail", + "dmitrii", "Dmitrii Kovalkov", "Dmitrii Raev", + "dmitriiut", "Dmitriy", "Dmitry", "Dmitry Belyavtsev", "Dmitry Bilunov", "Dmitry Galuza", "Dmitry Krylov", + "dmitry kuzmin", "Dmitry Luhtionov", "Dmitry Moskowski", "Dmitry Muzyka", @@ -188,69 +271,126 @@ const char * auto_contributors[] { "Dongdong Yang", "DoomzD", "Dr. 
Strange Looker", + "eaxdev", + "eejoin", + "egatov", "Egor O'Sten", + "Egor Savin", "Ekaterina", + "elBroom", "Eldar Zaitov", "Elena Baskakova", + "elenaspb2019", "Elghazal Ahmed", "Elizaveta Mironyuk", + "emakarov", + "emhlbmc", + "emironyuk", "Emmanuel Donin de Rosière", "Eric", "Eric Daniel", "Erixonich", + "ermaotech", "Ernest Poletaev", "Eugene Klimov", "Eugene Konkov", "Evgenia Sudarikova", - "Evgenii Pravda", "Evgeniia Sudarikova", + "Evgenii Pravda", "Evgeniy Gatov", "Evgeniy Udodov", "Evgeny Konkov", "Evgeny Markov", + "evtan", "Ewout", - "Fabian Stäber", + "exprmntr", + "ezhaka", + "f1yegor", "Fabiano Francesconi", + "Fabian Stäber", "Fadi Hadzh", "Fan()", + "fancno", + "FArthur-cmd", + "fastio", + "favstovol", "FawnD2", "FeehanG", + "felixoid", + "felixxdu", + "feng lv", + "fenglv", + "fessmage", "FgoDt", + "filimonov", + "filipe", "Filipe Caixeta", + "flow", "Flowyi", + "flynn", + "foxxmary", "Francisco Barón", + "frank", + "franklee", "Frank Zhao", + "fredchenbj", "Fruit of Eden", "Fullstop000", + "fuqi", + "Fuwang Hu", + "fuwhu", + "Fu Zhe", + "fuzhe1989", "Gagan Arneja", "Gao Qiang", + "g-arslan", "Gary Dotzler", "George", - "George G", "George3d6", + "George G", "Gervasio Varela", + "ggerogery", + "giordyb", "Gleb Kanterov", "Gleb Novikov", "Gleb-Tretyakov", + "glockbender", + "glushkovds", "Gregory", "Grigory", "Grigory Buteyko", "Grigory Pervakov", "Guillaume Tassery", + "guoleiyi", + "gyuton", "Haavard Kvaalen", "Habibullah Oladepo", "Hamoon", + "hao.he", "Hasitha Kanchana", "Hasnat", + "hchen9", + "hcz", + "heng zhao", + "hexiaoting", "Hiroaki Nakamura", + "hotid", "HuFuwang", "Hui Wang", + "hustnn", + "huzhichengdd", + "ice1x", + "idfer", + "igor", "Igor", "Igor Hatarist", + "igor.lapko", "Igor Mineev", "Igor Strykhar", "Igr", "Igr Mineev", + "ikarishinjieva", + "ikopylov", "Ildar Musin", "Ildus Kurbangaliev", "Ilya", @@ -265,42 +405,60 @@ const char * auto_contributors[] { "Ilya Skrypitsa", "Ilya Yatsishin", "ImgBotApp", + "imgbot[bot]", + "ip", "Islam Israfilov", "Islam Israfilov (Islam93)", + "it1804", "Ivan", "Ivan A. 
Torgashov", "Ivan Babrou", "Ivan Blinkov", "Ivan He", + "ivan-kush", "Ivan Kush", "Ivan Kushnarenko", "Ivan Lezhankin", "Ivan Remen", "Ivan Starkov", + "ivanzhukov", "Ivan Zhukov", "JackyWoo", "Jacob Hayes", + "jakalletti", "JaosnHsieh", "Jason", + "javartisan", + "javi", + "javi santana", "Javi Santana", "Javi santana bot", "Jean Baptiste Favre", + "jennyma", + "jetgm", "Jiading Guo", "Jiang Tao", + "jianmei zhang", "Jochen Schalanda", "John", "John Hummel", "John Skopis", "Jonatas Freitas", + "Julian Zhou", + "jyz0309", "Kang Liu", "Karl Pietrzak", + "keenwolf", "Keiji Yoshida", "Ken Chen", "Kevin Chiang", + "kevin wan", "Kiran", "Kirill Danshin", + "kirillikoff", "Kirill Malev", "Kirill Shvakov", + "kmeaw", "Koblikov Mihail", "KochetovNicolai", "Konstantin Grabar", @@ -308,30 +466,60 @@ const char * auto_contributors[] { "Konstantin Malanchev", "Konstantin Podshumok", "Korviakov Andrey", + "koshachy", "Kozlov Ivan", + "kreuzerkrieg", "Kruglov Pavel", + "ks1322", "Kseniia Sumarokova", + "kshvakov", + "kssenii", + "l", + "lalex", "Latysheva Alexandra", + "lehasm", + "Léo Ercolanelli", "Leonardo Cecchi", "Leopold Schabel", + "leozhang", "Lev Borodin", + "levushkin aleksej", + "levysh", "Lewinma", + "liangqian", + "libenwang", + "lichengxiang", + "linceyou", + "litao91", + "liu-bov", "Liu Cong", "LiuCong", + "liuyangkuan", "LiuYangkuan", + "liuyimin", + "liyang", + "lomberts", + "long2ice", "Lopatin Konstantin", "Loud_Scream", + "luc1ph3r", + "Lucid Dreams", "Luis Bosque", + "lulichao", "Lv Feng", - "Léo Ercolanelli", "M0r64n", - "Maks Skorokhod", + "madianjun", + "maiha", "Maksim", "Maksim Fedotov", "Maksim Kita", + "Maks Skorokhod", + "malkfilipp", + "manmitya", + "maqroll", "Marat IDRISOV", - "Marek Vavrusa", "Marek Vavruša", + "Marek Vavrusa", "Marek Vavruša", "Mariano Benítez Mulet", "Mark Andreev", @@ -340,16 +528,19 @@ const char * auto_contributors[] { "Maroun Maroun", "Marquitos", "Marsel Arduanov", - "Marti Raudsepp", "Martijn Bakker", + "Marti Raudsepp", "Marvin Taschenberger", "Masha", + "mastertheknife", "Matthew Peveler", "Matwey V. Kornilov", "Max", "Max Akhmedov", - "Max Vetrov", + "maxim", "Maxim Akhmedov", + "MaximAL", + "maxim-babenko", "Maxim Babenko", "Maxim Fedotov", "Maxim Fridental", @@ -360,55 +551,81 @@ const char * auto_contributors[] { "Maxim Serebryakov", "Maxim Smirnov", "Maxim Ulanovskiy", - "MaximAL", + "maxkuzn", + "maxulan", + "Max Vetrov", "Mc.Spring", + "mehanizm", "MeiK", + "melin", + "memo", + "meo", + "mergify[bot]", "Metehan Çetinkaya", "Metikov Vadim", + "mf5137", + "mfridental", "Michael Furmur", "Michael Kolupaev", "Michael Monashev", "Michael Razuvaev", "Michael Smitasin", "Michal Lisowski", + "michon470", "MicrochipQ", + "miha-g", "Mihail Fandyushin", "Mikahil Nacharov", "Mike", "Mike F", "Mike Kot", + "mikepop7", "Mikhail", "Mikhail Cheshkov", "Mikhail Fandyushin", "Mikhail Filimonov", + "Mikhail f. Shiryaev", "Mikhail Gaidamaka", "Mikhail Korotov", "Mikhail Malafeev", "Mikhail Nacharov", "Mikhail Salosin", "Mikhail Surin", - "Mikhail f. 
Shiryaev", "MikuSugar", "Milad Arabi", + "millb", + "mnkonkova", "Mohammad Hossein Sekhavat", + "morty", + "moscas", "MovElb", "Mr.General", "Murat Kabilov", + "m-ves", + "mwish", "MyroTk", - "NIKITA MIKHAILOV", + "myrrc", + "nagorny", "Narek Galstyan", - "NeZeD [Mac Pro]", + "nauta", + "nautaa", "Neeke Gao", "Neng Liu", + "never lee", + "NeZeD [Mac Pro]", + "nicelulu", "Nickolay Yastrebov", + "Nicolae Vartolomei", "Nico Mandery", "Nico Piderman", - "Nicolae Vartolomei", "Nik", "Nikhil Nadig", "Nikhil Raman", "Nikita Lapkov", "Nikita Mikhailov", + "NIKITA MIKHAILOV", + "Nikita Mikhalev", + "nikitamikhaylov", "Nikita Mikhaylov", "Nikita Orlov", "Nikita Vasilev", @@ -421,18 +638,29 @@ const char * auto_contributors[] { "Nikolay Vasiliev", "Nikolay Volosatov", "Niu Zhaojie", + "nonexistence", + "ns-vasilev", + "nvartolomei", + "oandrew", + "objatie_groba", + "ocadaruma", "Odin Hultgren Van Der Horst", + "ogorbacheva", "Okada Haruki", "Oleg Favstov", "Oleg Komarov", + "olegkv", "Oleg Matrokhin", "Oleg Obleukhov", + "Oleg Strokachuk", "Olga Khvostikova", + "olgarev", "Olga Revyakina", + "orantius", "Orivej Desh", "Oskar Wojciski", "OuO", - "PHO", + "palasonicq", "Paramtamtam", "Patrick Zippenfenig", "Pavel", @@ -449,41 +677,67 @@ const char * auto_contributors[] { "Persiyanov Dmitriy Andreevich", "Pervakov Grigorii", "Pervakov Grigory", + "peshkurov", + "philip.han", "Philippe Ombredanne", + "PHO", + "pingyu", + "potya", "Potya", "Pradeep Chhetri", + "proller", + "pufit", + "pyos", "Pysaoke", + "qianlixiang", + "qianmoQ", + "quid", "Quid37", + "r1j1k", "Rafael David Tinoco", + "rainbowsysu", "Ramazan Polat", + "Raúl Marín", "Ravengg", "RegulusZ", "Reilee", "Reto Kromer", "Ri", + "ritaank", + "robert", "Robert Hodges", + "robot-clickhouse", + "robot-metrika-test", + "rodrigargar", + "roman", "Roman Bug", "Roman Lipovsky", "Roman Nikolaev", "Roman Nozdrin", "Roman Peshkurov", "Roman Tsisyk", + "romanzhukov", + "root", + "roverxu", "Ruslan", "Ruslan Savchenko", "Russ Frank", "Ruzal Ibragimov", - "S.M.A. Djawadi", "Sabyanin Maxim", "SaltTan", "Sami Kerola", "Samuel Chou", + "santaux", + "satanson", "Saulius Valatka", - "Serg Kulakov", - "Serge Rider", + "sdk2", + "Sébastien Launay", + "serebrserg", "Sergei Bocharov", "Sergei Semin", "Sergei Shtykov", "Sergei Tsetlin (rekub)", + "Serge Rider", "Sergey Demurin", "Sergey Elantsev", "Sergey Fedorov", @@ -497,63 +751,102 @@ const char * auto_contributors[] { "Sergey Zaikin", "Sergi Almacellas Abellana", "Sergi Vladykin", + "Serg Kulakov", + "sev7e0", "SevaCode", + "sevirov", + "sfod", + "shangshujie", + "shedx", "Sherry Wang", "Silviu Caragea", "Simeon Emanuilov", "Simon Liu", "Simon Podlipsky", + "Šimon Podlipský", + "simon-says", "Sina", "Sjoerd Mulder", "Slach", + "S.M.A. 
Djawadi", "Snow", "Sofia Antipushina", + "songenjie", + "spff", + "spongedc", + "spyros87", "Stanislav Pavlovichev", "Stas Pavlovichev", + "stavrolia", "Stefan Thies", "Stepan", "Stepan Herold", + "stepenhu", "Steve-金勇", "Stig Bakken", "Stupnikov Andrey", + "su-houzhen", + "sundy", + "sundy-li", + "sundyli", "SuperBot", - "Sébastien Launay", + "svladykin", "TAC", - "TCeason", "Tagir Kuskarov", + "tai", + "taichong", "Tai White", + "taiyang-li", "Taleh Zaliyev", "Tangaev", + "tao jiang", "Tatiana Kirillova", + "tavplubix", + "TCeason", "Tema Novikov", + "templarzq", "The-Alchemist", + "tiger.yan", + "tison", "TiunovNN", "Tobias Adamson", "Tom Bombadil", + "topvisor", "Tsarkova Anastasia", "TszkitLo40", + "turbo jason", + "tyrionhuang", + "ubuntu", "Ubuntu", "Ubus", "UnamedRus", + "unegare", + "unknown", + "urgordeadbeef", "V", - "VDimir", "Vadim", + "VadimPE", "Vadim Plakhtinskiy", "Vadim Skipin", - "VadimPE", "Val", "Valera Ryaboshapko", + "Vasilyev Nikita", "Vasily Kozhukhovskiy", "Vasily Morozov", "Vasily Nemkov", "Vasily Okunev", "Vasily Vasilkov", - "Vasilyev Nikita", + "vdimir", + "VDimir", + "velom", "Veloman Yunkan", "Veniamin Gvozdikov", "Veselkov Konstantin", + "vic", + "vicdashkov", "Victor Tarnavsky", "Viktor Taranenko", + "vinity", "Vitaliy Fedorchenko", "Vitaliy Karnienko", "Vitaliy Kozlovskiy", @@ -562,12 +855,15 @@ const char * auto_contributors[] { "Vitaly", "Vitaly Baranov", "Vitaly Samigullin", + "vitstn", + "vivarum", "Vivien Maisonneuve", "Vlad Arkhipov", "Vladimir", "Vladimir Bunchuk", "Vladimir Ch", "Vladimir Chebotarev", + "vladimir golovchenko", "Vladimir Golovchenko", "Vladimir Goncharov", "Vladimir Klimontovich", @@ -580,22 +876,39 @@ const char * auto_contributors[] { "Vojtech Splichal", "Volodymyr Kuznetsov", "Vsevolod Orlov", + "vxider", "Vxider", "Vyacheslav Alipov", + "vzakaznikov", + "wangchao", "Wang Fenjin", + "weeds085490", "Weiqing Xu", "William Shallum", "Winter Zhang", + "wzl", "Xianda Ke", "Xiang Zhou", - "Y Lu", - "Yangkuan Liu", - "Yatsishin Ilya", + "xPoSx", "Yağızcan Değirmenci", + "Yangkuan Liu", + "yangshuai", + "Yatsishin Ilya", "Yegor Andreenko", - "Yingchun Lai", + "Yegor Levankov", + "ygrek", + "yhgcn", "Yiğit Konur", + "yiguolei", + "Yingchun Lai", + "yingjinghan", + "ylchou", + "Y Lu", "Yohann Jardin", + "yonesko", + "yuefoo", + "yulu86", + "yuluxu", "Yuntao Wu", "Yuri Dyachenko", "Yurii Vlasenko", @@ -605,315 +918,34 @@ const char * auto_contributors[] { "Yuriy Korzhenevskiy", "Yury Karpovich", "Yury Stankevich", - "Zhichang Yu", - "Zhipeng", - "Zoran Pandovski", - "a.palagashvili", - "abdrakhmanov", - "abyss7", - "achimbab", - "achulkov2", - "adevyatova", - "ageraab", - "akazz", - "akonyaev", - "akuzm", - "alekseik1", - "alesapin", - "alex-zaitsev", - "alex.lvxin", - "alexander kozhikhov", - "alexey-milovidov", - "alfredlu", - "amoschen", - "amudong", - "ana-uvarova", - "andrei-karpliuk", - "andrewsg", - "annvsh", - "anrodigina", - "antikvist", - "anton", - "ap11", - "aprudaev", - "artpaul", - "asiana21", - "avasiliev", - "avogar", - "avsharapov", - "awesomeleo", - "benamazing", - "benbiti", - "bgranvea", - "bharatnc", - "blazerer", - "bluebirddm", - "bo zeng", - "bobrovskij artemij", - "booknouse", - "bseng", - "cekc", - "centos7", - "champtar", - "chang.chen", - "changvvb", - "chasingegg", - "chengy8934", - "chenqi", - "chenxing-xc", - "chenxing.xc", - "chertus", - "comunodi", - "coraxster", - "damozhaeva", - "dankondr", - "daoready", - "dasmfm", - "davydovska", - "decaseal", - "dependabot-preview[bot]", - "dependabot[bot]", - 
"detailyang", - "dfenelonov", - "dgrr", - "dimarub2000", - "dinosaur", - "dkxiaohei", - "dmi-feo", - "dmitrii", - "dmitriiut", - "dmitry kuzmin", - "eejoin", - "egatov", - "elBroom", - "elenaspb2019", - "emakarov", - "emhlbmc", - "emironyuk", - "evtan", - "exprmntr", - "ezhaka", - "f1yegor", - "fastio", - "favstovol", - "felixoid", - "felixxdu", - "feng lv", - "fenglv", - "fessmage", - "filimonov", - "filipe", - "flow", - "flynn", - "foxxmary", - "frank", - "franklee", - "fredchenbj", - "fuqi", - "fuwhu", - "g-arslan", - "ggerogery", - "giordyb", - "glockbender", - "glushkovds", - "guoleiyi", - "gyuton", - "hao.he", - "hchen9", - "hcz", - "heng zhao", - "hexiaoting", - "hotid", - "hustnn", - "idfer", - "igor", - "igor.lapko", - "ikarishinjieva", - "ikopylov", - "imgbot[bot]", - "ip", - "it1804", - "ivan-kush", - "ivanzhukov", - "jakalletti", - "javartisan", - "javi", - "javi santana", - "jennyma", - "jetgm", - "jianmei zhang", - "jyz0309", - "keenwolf", - "kevin wan", - "kirillikoff", - "kmeaw", - "koshachy", - "kreuzerkrieg", - "ks1322", - "kshvakov", - "kssenii", - "l", - "lalex", - "lehasm", - "leozhang", - "levushkin aleksej", - "levysh", - "liangqian", - "libenwang", - "lichengxiang", - "linceyou", - "litao91", - "liu-bov", - "liuyangkuan", - "liuyimin", - "liyang", - "lomberts", - "long2ice", - "luc1ph3r", - "madianjun", - "maiha", - "malkfilipp", - "manmitya", - "maqroll", - "mastertheknife", - "maxim", - "maxim-babenko", - "maxkuzn", - "maxulan", - "mehanizm", - "melin", - "memo", - "meo", - "mergify[bot]", - "mf5137", - "mfridental", - "michon470", - "miha-g", - "mikepop7", - "millb", - "mnkonkova", - "morty", - "moscas", - "myrrc", - "nagorny", - "nauta", - "nautaa", - "never lee", - "nicelulu", - "nikitamikhaylov", - "nonexistence", - "ns-vasilev", - "nvartolomei", - "oandrew", - "objatie_groba", - "ocadaruma", - "ogorbacheva", - "olegkv", - "olgarev", - "orantius", - "palasonicq", - "peshkurov", - "philip.han", - "pingyu", - "potya", - "proller", - "pufit", - "pyos", - "qianlixiang", - "qianmoQ", - "quid", - "r1j1k", - "rainbowsysu", - "ritaank", - "robert", - "robot-clickhouse", - "robot-metrika-test", - "rodrigargar", - "roman", - "romanzhukov", - "root", - "roverxu", - "santaux", - "satanson", - "sdk2", - "serebrserg", - "sev7e0", - "sevirov", - "sfod", - "shangshujie", - "shedx", - "simon-says", - "songenjie", - "spff", - "spongedc", - "spyros87", - "stavrolia", - "stepenhu", - "su-houzhen", - "sundy", - "sundy-li", - "sundyli", - "svladykin", - "tai", - "taichong", - "taiyang-li", - "tao jiang", - "tavplubix", - "templarzq", - "tiger.yan", - "tison", - "topvisor", - "turbo jason", - "tyrionhuang", - "ubuntu", - "unegare", - "unknown", - "urgordeadbeef", - "vdimir", - "velom", - "vic", - "vicdashkov", - "vinity", - "vitstn", - "vivarum", - "vladimir golovchenko", - "vxider", - "vzakaznikov", - "wangchao", - "weeds085490", - "xPoSx", - "yangshuai", - "ygrek", - "yhgcn", - "yiguolei", - "yingjinghan", - "ylchou", - "yonesko", - "yuefoo", - "yulu86", - "yuluxu", "zamulla", "zhang2014", "zhangshengyu", "zhangxiao018", "zhangxiao871", "zhen ni", + "Zhichang Yu", + "Zhichun Wu", + "Zhipeng", "zhukai", "zlx19950903", + "Zoran Pandovski", "zvonand", "zvrr", "zvvr", "zzsmdfj", - "Šimon Podlipský", "Артем Стрельцов", "Георгий Кондратьев", "Дмитрий Канатников", "Иванов Евгений", + "Илья Исаев", "Павел Литвиненко", "Смитюх Вячеслав", "Сундуков Алексей", + "万康", "吴健", "小路", + "张中南", "张健", "张风啸", "徐炘", @@ -925,5 +957,4 @@ const char * auto_contributors[] { "靳阳", "黄朝晖", "黄璞", - "박현우", 
nullptr};
diff --git a/src/Storages/VirtualColumnUtils.cpp b/src/Storages/VirtualColumnUtils.cpp
index 0c6cb563525..248b734195a 100644
--- a/src/Storages/VirtualColumnUtils.cpp
+++ b/src/Storages/VirtualColumnUtils.cpp
@@ -199,7 +199,7 @@ void filterBlockWithQuery(const ASTPtr & query, Block & block, ContextPtr contex
     auto syntax_result = TreeRewriter(context).analyze(expression_ast, block.getNamesAndTypesList());
     ExpressionAnalyzer analyzer(expression_ast, syntax_result, context);
     buildSets(expression_ast, analyzer);
-    ExpressionActionsPtr actions = analyzer.getActions(false);
+    ExpressionActionsPtr actions = analyzer.getActions(false /* add aliases */, true /* project result */, CompileExpressions::yes);
     Block block_with_filter = block;
     actions->execute(block_with_filter);
diff --git a/src/Storages/ya.make b/src/Storages/ya.make
index d83ba7f6490..8e0efac8c6e 100644
--- a/src/Storages/ya.make
+++ b/src/Storages/ya.make
@@ -112,6 +112,7 @@ SRCS(
     MergeTree/localBackup.cpp
     MergeTree/registerStorageMergeTree.cpp
     MutationCommands.cpp
+    MySQL/MySQLSettings.cpp
     PartitionCommands.cpp
     ProjectionsDescription.cpp
     ReadInOrderOptimizer.cpp
diff --git a/src/TableFunctions/TableFunctionMySQL.cpp b/src/TableFunctions/TableFunctionMySQL.cpp
index 325b2dc44c6..0b60e11f490 100644
--- a/src/TableFunctions/TableFunctionMySQL.cpp
+++ b/src/TableFunctions/TableFunctionMySQL.cpp
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -107,7 +108,8 @@ StoragePtr TableFunctionMySQL::executeImpl(
         columns,
         ConstraintsDescription{},
         String{},
-        context);
+        context,
+        MySQLSettings{});
 
     pool.reset();
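The tests/clickhouse-test hunk below adds "Coordination::Exception: Operation timeout" to the list of transient errors and routes the output of per-test database creation into each test's stderr file. A minimal, self-contained sketch of the retry pattern the driver uses; run_once is a hypothetical stand-in for a single test invocation, not the real driver API:

```python
import time

# Mirrors the driver's list of messages that mark a failure as transient.
MESSAGES_TO_RETRY = [
    "Coordination::Exception: Session expired",
    "Coordination::Exception: Operation timeout",
]

def need_retry(stderr):
    # Retry only when stderr contains a known transient error message.
    return any(msg in stderr for msg in MESSAGES_TO_RETRY)

def run_with_retries(run_once, max_retries=5):
    # run_once() is a hypothetical callable that runs one test and returns its stderr.
    stderr = run_once()
    counter = 1
    while need_retry(stderr) and counter <= max_retries:
        time.sleep(2 ** counter)  # exponential backoff, as in the driver's retry loop
        stderr = run_once()
        counter += 1
    return stderr
```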
diff --git a/tests/clickhouse-test b/tests/clickhouse-test
index 97744a07db3..ecc2001fe87 100755
--- a/tests/clickhouse-test
+++ b/tests/clickhouse-test
@@ -35,6 +35,7 @@ MESSAGES_TO_RETRY = [
     "DB::Exception: Connection loss",
     "Coordination::Exception: Session expired",
     "Coordination::Exception: Connection loss",
+    "Coordination::Exception: Operation timeout",
     "Operation timed out",
     "ConnectionPoolWithFailover: Connection failed at try",
 ]
@@ -128,7 +129,7 @@ def get_db_engine(args, database_name):
         return " ENGINE=" + args.db_engine
     return "" # Will use default engine
 
-def configure_testcase_args(args, case_file, suite_tmp_dir):
+def configure_testcase_args(args, case_file, suite_tmp_dir, stderr_file):
     testcase_args = copy.deepcopy(args)
 
     testcase_args.testcase_start_time = datetime.now()
@@ -147,12 +148,13 @@ def configure_testcase_args(args, case_file, suite_tmp_dir):
             return ''.join(random.choice(alphabet) for _ in range(length))
 
         database = 'test_{suffix}'.format(suffix=random_str())
-        clickhouse_proc_create = Popen(shlex.split(testcase_args.testcase_client), stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True)
-        try:
-            clickhouse_proc_create.communicate(("CREATE DATABASE " + database + get_db_engine(testcase_args, database)), timeout=testcase_args.timeout)
-        except TimeoutExpired:
-            total_time = (datetime.now() - testcase_args.testcase_start_time).total_seconds()
-            return clickhouse_proc_create, "", "Timeout creating database {} before test".format(database), total_time
+        with open(stderr_file, 'w') as stderr:
+            clickhouse_proc_create = Popen(shlex.split(testcase_args.testcase_client), stdin=PIPE, stdout=PIPE, stderr=stderr, universal_newlines=True)
+            try:
+                clickhouse_proc_create.communicate(("CREATE DATABASE " + database + get_db_engine(testcase_args, database)), timeout=testcase_args.timeout)
+            except TimeoutExpired:
+                total_time = (datetime.now() - testcase_args.testcase_start_time).total_seconds()
+                return clickhouse_proc_create, "", "Timeout creating database {} before test".format(database), total_time
 
         os.environ["CLICKHOUSE_DATABASE"] = database
 
         # Set temporary directory to match the randomly generated database,
@@ -183,15 +185,17 @@ def run_single_test(args, ext, server_logs_level, client_options, case_file, std
         'stderr': stderr_file,
     }
 
-    pattern = '{test} > {stdout} 2> {stderr}'
+    # >> appends to stdout and stderr, because there is also output from the per-test database creation
+    if not args.database:
+        pattern = '{test} >> {stdout} 2>> {stderr}'
+    else:
+        pattern = '{test} > {stdout} 2> {stderr}'
 
     if ext == '.sql':
         pattern = "{client} --send_logs_level={logs_level} --testmode --multiquery {options} < " + pattern
 
     command = pattern.format(**params)
-    # print(command)
-
     proc = Popen(command, shell=True, env=os.environ)
 
     while (datetime.now() - start_time).total_seconds() < args.timeout and proc.poll() is None:
@@ -203,7 +207,8 @@
     need_drop_database = not maybe_passed
 
     if need_drop_database:
-        clickhouse_proc_create = Popen(shlex.split(client), stdin=PIPE, stdout=PIPE, stderr=None, universal_newlines=True)
+        with open(stderr_file, 'a') as stderr:
+            clickhouse_proc_create = Popen(shlex.split(client), stdin=PIPE, stdout=PIPE, stderr=stderr, universal_newlines=True)
         seconds_left = max(args.timeout - (datetime.now() - start_time).total_seconds(), 20)
         try:
             drop_database_query = "DROP DATABASE " + database
@@ -419,8 +424,7 @@ def run_tests_array(all_tests_with_params):
 
         stderr_file = os.path.join(suite_tmp_dir, name) + file_suffix + '.stderr'
 
-        testcase_args = configure_testcase_args(args, case_file, suite_tmp_dir)
-
+        testcase_args = configure_testcase_args(args, case_file, suite_tmp_dir, stderr_file)
         proc, stdout, stderr, total_time = run_single_test(testcase_args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file)
 
         if proc.returncode is None:
@@ -440,7 +444,7 @@
             else:
                 counter = 1
                 while need_retry(stderr):
-                    testcase_args = configure_testcase_args(args, case_file, suite_tmp_dir)
+                    testcase_args = configure_testcase_args(args, case_file, suite_tmp_dir, stderr_file)
                     proc, stdout, stderr, total_time = run_single_test(testcase_args, ext, server_logs_level, client_options, case_file, stdout_file, stderr_file)
                     sleep(2**counter)
                     counter += 1
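The helpers/cluster.py changes that follow replace per-call copies of os.environ with a single self.env dictionary created once in the constructor, so variables exported while bringing up one service (Keeper, kerberized Kafka or HDFS, MinIO) stay visible to every later docker-compose invocation and to shutdown. A condensed sketch of that pattern; the class and method names here are illustrative, not the real helper API, and docker-compose is assumed to be on PATH:

```python
import os
import subprocess

class MiniCluster:
    def __init__(self):
        # One shared environment for the whole cluster lifetime,
        # instead of a fresh os.environ.copy() in every start method.
        self.env = os.environ.copy()

    def up(self, compose_file, **extra_env):
        # Variables set for one service remain visible to all later calls.
        self.env.update({key: str(value) for key, value in extra_env.items()})
        subprocess.check_call(
            ["docker-compose", "-f", compose_file, "up", "-d"],
            env=self.env,
        )

    def down(self, compose_file):
        # Shutdown sees the same environment the services were started with.
        subprocess.check_call(
            ["docker-compose", "-f", compose_file, "down", "--volumes"],
            env=self.env,
        )
```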
diff --git a/tests/integration/helpers/cluster.py b/tests/integration/helpers/cluster.py
index ed28c3a7fc4..5bd608ef758 100644
--- a/tests/integration/helpers/cluster.py
+++ b/tests/integration/helpers/cluster.py
@@ -37,7 +37,6 @@ DEFAULT_ENV_NAME = 'env_file'
 
 SANITIZER_SIGN = "=================="
 
-
 def _create_env_file(path, variables, fname=DEFAULT_ENV_NAME):
     full_path = os.path.join(path, fname)
     with open(full_path, 'w') as f:
@@ -199,9 +198,11 @@ class ClickHouseCluster:
         self.schema_registry_port = 8081
 
         self.zookeeper_use_tmpfs = True
+        self.use_keeper = True
 
         self.docker_client = None
         self.is_up = False
+        self.env = os.environ.copy()
         print("CLUSTER INIT base_config_dir:{}".format(self.base_config_dir))
 
     def get_client_cmd(self):
@@ -218,7 +219,7 @@ class ClickHouseCluster:
                      with_redis=False, with_minio=False, with_cassandra=False,
                      hostname=None, env_variables=None, image="yandex/clickhouse-integration-test", tag=None,
                      stay_alive=False, ipv4_address=None, ipv6_address=None, with_installed_binary=False, tmpfs=None,
-                     zookeeper_docker_compose_path=None, zookeeper_use_tmpfs=True, minio_certs_dir=None):
+                     zookeeper_docker_compose_path=None, zookeeper_use_tmpfs=True, minio_certs_dir=None, use_keeper=True):
        """Add an instance to the cluster.

        name - the name of the instance directory and the value of the 'instance' macro in ClickHouse.
@@ -239,6 +240,8 @@
        if not env_variables:
            env_variables = {}

+        self.use_keeper = use_keeper
+
        # Code coverage files will be placed in database directory
        # (affect only WITH_COVERAGE=1 build)
        env_variables['LLVM_PROFILE_FILE'] = '/var/lib/clickhouse/server_%h_%p_%m.profraw'
@@ -291,7 +294,10 @@
        cmds = []
        if with_zookeeper and not self.with_zookeeper:
            if not zookeeper_docker_compose_path:
-                zookeeper_docker_compose_path = p.join(docker_compose_yml_dir, 'docker_compose_zookeeper.yml')
+                if self.use_keeper:
+                    zookeeper_docker_compose_path = p.join(docker_compose_yml_dir, 'docker_compose_keeper.yml')
+                else:
+                    zookeeper_docker_compose_path = p.join(docker_compose_yml_dir, 'docker_compose_zookeeper.yml')

            self.with_zookeeper = True
            self.zookeeper_use_tmpfs = zookeeper_use_tmpfs
@@ -443,8 +449,7 @@
        run_and_check(self.base_cmd + ["up", "--force-recreate", "--no-deps", "-d", node.name])
        node.ip_address = self.get_instance_ip(node.name)
        node.client = Client(node.ip_address, command=self.client_bin_path)
-        start_deadline = time.time() + 20.0 # seconds
-        node.wait_for_start(start_deadline)
+        node.wait_for_start(start_timeout=20.0, connection_timeout=600.0) # seconds
        return node

    def get_instance_ip(self, instance_name):
@@ -681,20 +686,50 @@
            common_opts = ['up', '-d']

            if self.with_zookeeper and self.base_zookeeper_cmd:
-                print('Setup ZooKeeper')
-                env = os.environ.copy()
-                if not self.zookeeper_use_tmpfs:
-                    env['ZK_FS'] = 'bind'
-                    for i in range(1, 4):
-                        zk_data_path = self.instances_dir + '/zkdata' + str(i)
-                        zk_log_data_path = self.instances_dir + '/zklog' + str(i)
-                        if not os.path.exists(zk_data_path):
-                            os.mkdir(zk_data_path)
-                        if not os.path.exists(zk_log_data_path):
-                            os.mkdir(zk_log_data_path)
-                        env['ZK_DATA' + str(i)] = zk_data_path
-                        env['ZK_DATA_LOG' + str(i)] = zk_log_data_path
-                run_and_check(self.base_zookeeper_cmd + common_opts, env=env)
+                if self.use_keeper:
+                    print('Setup Keeper')
+                    binary_path = self.server_bin_path
+                    if binary_path.endswith('-server'):
+                        binary_path = binary_path[:-len('-server')]
+
+                    self.env['keeper_binary'] = binary_path
+                    self.env['image'] = "yandex/clickhouse-integration-test:" + self.docker_base_tag
+                    self.env['user'] = str(os.getuid())
+                    if not self.zookeeper_use_tmpfs:
+                        self.env['keeper_fs'] = 'bind'
+
+                    for i in range(1, 4):
+                        instance_dir = p.join(self.instances_dir, f"keeper{i}")
+                        logs_dir = p.join(instance_dir, "logs")
+                        configs_dir = p.join(instance_dir, "configs")
+                        coordination_dir = p.join(instance_dir, "coordination")
+                        if not os.path.exists(instance_dir):
+                            os.mkdir(instance_dir)
+                            os.mkdir(configs_dir)
+                            os.mkdir(logs_dir)
+                            if not self.zookeeper_use_tmpfs:
+                                os.mkdir(coordination_dir)
+                        shutil.copy(os.path.join(HELPERS_DIR, f'keeper_config{i}.xml'), configs_dir)
+
+                        self.env[f'keeper_logs_dir{i}'] = p.abspath(logs_dir)
+                        self.env[f'keeper_config_dir{i}'] = p.abspath(configs_dir)
+                        if not self.zookeeper_use_tmpfs:
+                            self.env[f'keeper_db_dir{i}'] = p.abspath(coordination_dir)
+                else:
+                    print('Setup ZooKeeper')
+                    if not self.zookeeper_use_tmpfs:
+                        self.env['ZK_FS'] = 'bind'
+                        for i in range(1, 4):
+                            zk_data_path = self.instances_dir + '/zkdata' + str(i)
+                            zk_log_data_path =
self.instances_dir + '/zklog' + str(i) + if not os.path.exists(zk_data_path): + os.mkdir(zk_data_path) + if not os.path.exists(zk_log_data_path): + os.mkdir(zk_log_data_path) + self.env['ZK_DATA' + str(i)] = zk_data_path + self.env['ZK_DATA_LOG' + str(i)] = zk_log_data_path + + run_and_check(self.base_zookeeper_cmd + common_opts, env=self.env) for command in self.pre_zookeeper_commands: self.run_kazoo_commands_with_retries(command, repeats=5) self.wait_zookeeper_to_start(120) @@ -731,9 +766,8 @@ class ClickHouseCluster: if self.with_kerberized_kafka and self.base_kerberized_kafka_cmd: print('Setup kerberized kafka') - env = os.environ.copy() - env['KERBERIZED_KAFKA_DIR'] = instance.path + '/' - run_and_check(self.base_kerberized_kafka_cmd + common_opts + ['--renew-anon-volumes'], env=env) + self.env['KERBERIZED_KAFKA_DIR'] = instance.path + '/' + run_and_check(self.base_kerberized_kafka_cmd + common_opts + ['--renew-anon-volumes'], env=self.env) self.kerberized_kafka_docker_id = self.get_instance_docker_id('kerberized_kafka1') if self.with_rabbitmq and self.base_rabbitmq_cmd: subprocess_check_call(self.base_rabbitmq_cmd + common_opts + ['--renew-anon-volumes']) @@ -747,9 +781,8 @@ class ClickHouseCluster: if self.with_kerberized_hdfs and self.base_kerberized_hdfs_cmd: print('Setup kerberized HDFS') - env = os.environ.copy() - env['KERBERIZED_HDFS_DIR'] = instance.path + '/' - run_and_check(self.base_kerberized_hdfs_cmd + common_opts, env=env) + self.env['KERBERIZED_HDFS_DIR'] = instance.path + '/' + run_and_check(self.base_kerberized_hdfs_cmd + common_opts, env=self.env) self.make_hdfs_api(kerberized=True) self.wait_hdfs_to_start(timeout=300) @@ -764,23 +797,22 @@ class ClickHouseCluster: time.sleep(10) if self.with_minio and self.base_minio_cmd: - env = os.environ.copy() prev_ca_certs = os.environ.get('SSL_CERT_FILE') if self.minio_certs_dir: minio_certs_dir = p.join(self.base_dir, self.minio_certs_dir) - env['MINIO_CERTS_DIR'] = minio_certs_dir + self.env['MINIO_CERTS_DIR'] = minio_certs_dir # Minio client (urllib3) uses SSL_CERT_FILE for certificate validation. os.environ['SSL_CERT_FILE'] = p.join(minio_certs_dir, 'public.crt') else: # Attach empty certificates directory to ensure non-secure mode. 
                minio_certs_dir = p.join(self.instances_dir, 'empty_minio_certs_dir')
                os.mkdir(minio_certs_dir)
-                env['MINIO_CERTS_DIR'] = minio_certs_dir
+                self.env['MINIO_CERTS_DIR'] = minio_certs_dir

            minio_start_cmd = self.base_minio_cmd + common_opts

            logging.info("Trying to create Minio instance by command %s", ' '.join(map(str, minio_start_cmd)))
-            run_and_check(minio_start_cmd, env=env)
+            run_and_check(minio_start_cmd, env=self.env)

            try:
                logging.info("Trying to connect to Minio...")
@@ -799,16 +831,16 @@
        clickhouse_start_cmd = self.base_cmd + ['up', '-d', '--no-recreate']
        print(("Trying to create ClickHouse instance by command %s", ' '.join(map(str, clickhouse_start_cmd))))
-        subprocess_check_call(clickhouse_start_cmd)
+        run_and_check(clickhouse_start_cmd, env=self.env)
        print("ClickHouse instance created")

-        start_deadline = time.time() + 20.0 # seconds
+        start_timeout = 20.0 # seconds
        for instance in self.instances.values():
            instance.docker_client = self.docker_client
            instance.ip_address = self.get_instance_ip(instance.name)

            print("Waiting for ClickHouse start...")
-            instance.wait_for_start(start_deadline)
+            instance.wait_for_start(start_timeout)
            print("ClickHouse started")

            instance.client = Client(instance.ip_address, command=self.client_bin_path)
@@ -825,7 +857,7 @@
        sanitizer_assert_instance = None
        with open(self.docker_logs_path, "w+") as f:
            try:
-                subprocess.check_call(self.base_cmd + ['logs'], stdout=f)   # STYLE_CHECK_ALLOW_SUBPROCESS_CHECK_CALL
+                subprocess.check_call(self.base_cmd + ['logs'], env=self.env, stdout=f)   # STYLE_CHECK_ALLOW_SUBPROCESS_CHECK_CALL
            except Exception as e:
                print("Unable to get logs from docker.")
            f.seek(0)
@@ -836,14 +868,14 @@
        if kill:
            try:
-                subprocess_check_call(self.base_cmd + ['stop', '--timeout', '20'])
+                run_and_check(self.base_cmd + ['stop', '--timeout', '20'], env=self.env)
            except Exception as e:
                print("Kill command failed during shutdown. {}".format(repr(e)))
                print("Trying to kill forcefully")
                subprocess_check_call(self.base_cmd + ['kill'])

        try:
-            subprocess_check_call(self.base_cmd + ['down', '--volumes', '--remove-orphans'])
+            run_and_check(self.base_cmd + ['down', '--volumes', '--remove-orphans'], env=self.env)
        except Exception as e:
            print("Down + remove orphans failed during shutdown. {}".format(repr(e)))
@@ -1245,32 +1277,54 @@ class ClickHouseInstance:
    def start(self):
        self.get_docker_handle().start()

-    def wait_for_start(self, deadline=None, timeout=None):
-        start_time = time.time()
+    def wait_for_start(self, start_timeout=None, connection_timeout=None):

-        if timeout is not None:
-            deadline = start_time + timeout
+        if start_timeout is None or start_timeout <= 0:
+            raise Exception("Invalid timeout: {}".format(start_timeout))
+
+        if connection_timeout is not None and connection_timeout < start_timeout:
+            raise Exception("Connection timeout {} should be greater than start timeout {}"
+                            .format(connection_timeout, start_timeout))
+
+        start_time = time.time()
+        prev_rows_in_log = 0
+
+        def has_new_rows_in_log():
+            nonlocal prev_rows_in_log
+            try:
+                rows_in_log = int(self.count_in_log(".*").strip())
+                res = rows_in_log > prev_rows_in_log
+                prev_rows_in_log = rows_in_log
+                return res
+            except ValueError:
+                return False

        while True:
            handle = self.get_docker_handle()
            status = handle.status
            if status == 'exited':
-                raise Exception(
-                    "Instance `{}' failed to start. Container status: {}, logs: {}".format(self.name, status,
-                                                                                           handle.logs().decode('utf-8')))
+                raise Exception("Instance `{}' failed to start. Container status: {}, logs: {}"
+                                .format(self.name, status, handle.logs().decode('utf-8')))
+
+            deadline = start_time + start_timeout
+            # It is possible that the server starts slowly.
+            # If the container is running and there is some progress in the log, check connection_timeout.
+            if connection_timeout and status == 'running' and has_new_rows_in_log():
+                deadline = start_time + connection_timeout

            current_time = time.time()
-            time_left = deadline - current_time
-            if deadline is not None and current_time >= deadline:
+            if current_time >= deadline:
                raise Exception("Timed out while waiting for instance `{}' with ip address {} to start. "
                                "Container status: {}, logs: {}".format(self.name, self.ip_address, status,
                                                                        handle.logs().decode('utf-8')))

+            socket_timeout = min(start_timeout, deadline - current_time)
+
            # Repeatedly poll the instance address until there is something that listens there.
            # Usually it means that ClickHouse is ready to accept queries.
            try:
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
-                sock.settimeout(time_left)
+                sock.settimeout(socket_timeout)
                sock.connect((self.ip_address, 9000))
                return
            except socket.timeout:
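The wait_for_start rewrite above keeps a short start_timeout but extends the effective deadline up to connection_timeout as long as the container is running and its log keeps growing. A condensed sketch of that two-phase deadline; check_alive and has_progress are hypothetical callbacks standing in for the real helper's TCP connect and log-row counter:

```python
import time

def wait_until_ready(check_alive, has_progress, start_timeout=20.0, connection_timeout=600.0):
    start_time = time.time()
    while True:
        deadline = start_time + start_timeout
        # While the service shows progress, allow the much longer connection timeout.
        if connection_timeout and has_progress():
            deadline = start_time + connection_timeout
        if time.time() >= deadline:
            raise TimeoutError("service did not start in time")
        if check_alive():  # e.g. a successful connect to the native port
            return
        time.sleep(0.5)
```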
diff --git a/tests/integration/helpers/keeper_config1.xml b/tests/integration/helpers/keeper_config1.xml
new file mode 100644
index 00000000000..d80f30ebd42
--- /dev/null
+++ b/tests/integration/helpers/keeper_config1.xml
@@ -0,0 +1,41 @@
+<yandex>
+    <listen_try>true</listen_try>
+    <listen_host>::</listen_host>
+    <listen_host>0.0.0.0</listen_host>
+
+    <logger>
+        <level>trace</level>
+        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
+        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
+    </logger>
+
+    <keeper_server>
+        <tcp_port>2181</tcp_port>
+        <server_id>1</server_id>
+
+        <coordination_settings>
+            <operation_timeout_ms>10000</operation_timeout_ms>
+            <session_timeout_ms>30000</session_timeout_ms>
+            <raft_logs_level>trace</raft_logs_level>
+            <force_sync>false</force_sync>
+        </coordination_settings>
+
+        <raft_configuration>
+            <server>
+                <id>1</id>
+                <hostname>zoo1</hostname>
+                <port>9444</port>
+            </server>
+            <server>
+                <id>2</id>
+                <hostname>zoo2</hostname>
+                <port>9444</port>
+            </server>
+            <server>
+                <id>3</id>
+                <hostname>zoo3</hostname>
+                <port>9444</port>
+            </server>
+        </raft_configuration>
+    </keeper_server>
+</yandex>
diff --git a/tests/integration/helpers/keeper_config2.xml b/tests/integration/helpers/keeper_config2.xml
new file mode 100644
index 00000000000..8a125cecdb3
--- /dev/null
+++ b/tests/integration/helpers/keeper_config2.xml
@@ -0,0 +1,41 @@
+<yandex>
+    <listen_try>true</listen_try>
+    <listen_host>::</listen_host>
+    <listen_host>0.0.0.0</listen_host>
+
+    <logger>
+        <level>trace</level>
+        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
+        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
+    </logger>
+
+    <keeper_server>
+        <tcp_port>2181</tcp_port>
+        <server_id>2</server_id>
+
+        <coordination_settings>
+            <operation_timeout_ms>10000</operation_timeout_ms>
+            <session_timeout_ms>30000</session_timeout_ms>
+            <raft_logs_level>trace</raft_logs_level>
+            <force_sync>false</force_sync>
+        </coordination_settings>
+
+        <raft_configuration>
+            <server>
+                <id>1</id>
+                <hostname>zoo1</hostname>
+                <port>9444</port>
+            </server>
+            <server>
+                <id>2</id>
+                <hostname>zoo2</hostname>
+                <port>9444</port>
+            </server>
+            <server>
+                <id>3</id>
+                <hostname>zoo3</hostname>
+                <port>9444</port>
+            </server>
+        </raft_configuration>
+    </keeper_server>
+</yandex>
diff --git a/tests/integration/helpers/keeper_config3.xml b/tests/integration/helpers/keeper_config3.xml
new file mode 100644
index 00000000000..04b41677039
--- /dev/null
+++ b/tests/integration/helpers/keeper_config3.xml
@@ -0,0 +1,41 @@
+<yandex>
+    <listen_try>true</listen_try>
+    <listen_host>::</listen_host>
+    <listen_host>0.0.0.0</listen_host>
+
+    <logger>
+        <level>trace</level>
+        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
+        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
+    </logger>
+
+    <keeper_server>
+        <tcp_port>2181</tcp_port>
+        <server_id>3</server_id>
+
+        <coordination_settings>
+            <operation_timeout_ms>10000</operation_timeout_ms>
+            <session_timeout_ms>30000</session_timeout_ms>
+            <raft_logs_level>trace</raft_logs_level>
+            <force_sync>false</force_sync>
+        </coordination_settings>
+
+        <raft_configuration>
+            <server>
+                <id>1</id>
+                <hostname>zoo1</hostname>
+                <port>9444</port>
+            </server>
+            <server>
+                <id>2</id>
+                <hostname>zoo2</hostname>
+                <port>9444</port>
+            </server>
+            <server>
+                <id>3</id>
+                <hostname>zoo3</hostname>
+                <port>9444</port>
+            </server>
+        </raft_configuration>
+    </keeper_server>
+</yandex>
diff --git a/tests/integration/test_jbod_balancer/test.py b/tests/integration/test_jbod_balancer/test.py
index abc6a0bff11..ef0308cc658 100644
--- a/tests/integration/test_jbod_balancer/test.py
+++ b/tests/integration/test_jbod_balancer/test.py
@@ -92,7 +92,10 @@ def test_jbod_balanced_merge(start_cluster):
        node1.query("create table tmp1 as tbl")
        node1.query("create table tmp2 as tbl")

-        for i in range(200):
+        p = Pool(20)
+
+        def task(i):
+            print("Processing insert {}/{}".format(i, 200))
            # around 1k per block
            node1.query(
                "insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50)"
            )
@@ -104,6 +107,8 @@
                "insert into tmp2 select randConstant() % 2, randomPrintableASCII(16) from numbers(50)"
            )

+        p.map(task, range(200))
+
        time.sleep(1)

        check_balance(node1, "tbl")
@@ -151,8 +156,10 @@ def test_replicated_balanced_merge_fetch(start_cluster):
            node.query("create table tmp2 as tmp1")

        node2.query("alter table tbl modify setting always_fetch_merged_part = 1")
+        p = Pool(20)

-        for i in range(200):
+        def task(i):
+            print("Processing insert {}/{}".format(i, 200))
            # around 1k per block
            node1.query(
                "insert into tbl select randConstant() % 2, randomPrintableASCII(16) from numbers(50)"
            )
@@ -170,6 +177,8 @@
                "insert into tmp2 select randConstant() % 2, randomPrintableASCII(16) from numbers(50)"
            )

+        p.map(task, range(200))
+
        node2.query("SYSTEM SYNC REPLICA tbl", timeout=10)

        check_balance(node1, "tbl")
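The test_jbod_balancer hunks above replace a sequential loop of 200 inserts with a pool of 20 worker threads; multiprocessing.dummy provides a thread-backed Pool, so the same client objects can be shared safely. A standalone sketch of the pattern, with task as a placeholder for the per-iteration insert:

```python
from multiprocessing.dummy import Pool  # thread pool, same import as the tests

def task(i):
    # Placeholder for node1.query("insert into tbl ..."); the print keeps progress visible.
    print("Processing insert {}/{}".format(i, 200))

p = Pool(20)             # run up to 20 inserts concurrently instead of one by one
p.map(task, range(200))  # blocks until all 200 tasks have finished
```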
diff --git a/tests/integration/test_rename_column/test.py b/tests/integration/test_rename_column/test.py
index 0e5cc9f5d9d..3a818303f40 100644
--- a/tests/integration/test_rename_column/test.py
+++ b/tests/integration/test_rename_column/test.py
@@ -3,6 +3,7 @@ import random
 import time
 from multiprocessing.dummy import Pool
+import datetime

 import pytest
 from helpers.client import QueryRuntimeException
@@ -111,7 +112,7 @@ def insert(node, table_name, chunk=1000, col_names=None, iterations=1, ignore_ex
    try:
        query = ["SET max_partitions_per_insert_block = 10000000"]
        if with_many_parts:
-            query.append("SET max_insert_block_size = 64")
+            query.append("SET max_insert_block_size = 256")
        if with_time_column:
            query.append(
                "INSERT INTO {table_name} ({col0}, {col1}, time) SELECT number AS {col0}, number + 1 AS {col1}, now() + 10 AS time FROM numbers_mt({chunk})"
@@ -305,35 +306,45 @@ def test_rename_with_parallel_merges(started_cluster):
    table_name = "test_rename_with_parallel_merges"
    drop_table(nodes, table_name)
    try:
+        print("Creating tables", datetime.datetime.now())
        create_table(nodes, table_name)
-        for i in range(20):
+        for i in range(5):
            insert(node1, table_name, 100, ["num", "num2"], 1, False, False, True, offset=i * 100)

-        def merge_parts(node, table_name, iterations=1):
-            for i in range(iterations):
-                node.query("OPTIMIZE TABLE %s FINAL" % table_name)
+        print("Data inserted", datetime.datetime.now())

+        def merge_parts(node, table_name, iterations=1):
+            for _ in range(iterations):
+                try:
+                    node.query("OPTIMIZE TABLE %s FINAL" % table_name)
+                except Exception as ex:
+                    print("Got an exception while optimizing table", ex)
+
+        print("Creating pool")
        p = Pool(15)
        tasks = []
-        for i in range(1):
-            tasks.append(p.apply_async(rename_column, (node1, table_name, "num2", "foo2", 5, True)))
-            tasks.append(p.apply_async(rename_column, (node2, table_name, "foo2", "foo3", 5, True)))
-            tasks.append(p.apply_async(rename_column, (node3, table_name, "foo3", "num2", 5, True)))
-            tasks.append(p.apply_async(merge_parts, (node1, table_name, 5)))
-            tasks.append(p.apply_async(merge_parts, (node2, table_name, 5)))
-            tasks.append(p.apply_async(merge_parts, (node3, table_name, 5)))
+        tasks.append(p.apply_async(rename_column, (node1, table_name, "num2", "foo2", 2, True)))
+        tasks.append(p.apply_async(rename_column, (node2, table_name, "foo2", "foo3", 2, True)))
+        tasks.append(p.apply_async(rename_column, (node3, table_name, "foo3", "num2", 2, True)))
+        tasks.append(p.apply_async(merge_parts, (node1, table_name, 2)))
+        tasks.append(p.apply_async(merge_parts, (node2, table_name, 2)))
+        tasks.append(p.apply_async(merge_parts, (node3, table_name, 2)))

+        print("Waiting for tasks", datetime.datetime.now())
        for task in tasks:
            task.get(timeout=240)
+        print("Finished waiting", datetime.datetime.now())

+        print("Renaming columns", datetime.datetime.now())
        # rename column back to the original name
        rename_column(node1, table_name, "foo3", "num2", 1, True)
        rename_column(node1, table_name, "foo2", "num2", 1, True)
+        print("Finished renaming", datetime.datetime.now())

        # check that select still works
-        select(node1, table_name, "num2", "1998\n")
-        select(node2, table_name, "num2", "1998\n")
-        select(node3, table_name, "num2", "1998\n")
+        select(node1, table_name, "num2", "500\n")
+        select(node2, table_name, "num2", "500\n")
+        select(node3, table_name, "num2", "500\n")
    finally:
        drop_table(nodes, table_name)
diff --git a/tests/integration/test_zookeeper_config/test.py b/tests/integration/test_zookeeper_config/test.py
index 584f76c80f0..ea8341aebde 100644
--- a/tests/integration/test_zookeeper_config/test.py
+++ b/tests/integration/test_zookeeper_config/test.py
@@ -100,11 +100,12 @@ def test_identity():
    cluster_1 = ClickHouseCluster(__file__, zookeeper_config_path='configs/zookeeper_config_with_password.xml')
    cluster_2 = ClickHouseCluster(__file__)

+    # TODO: ACL is not implemented in Keeper.
    node1 = cluster_1.add_instance('node1', main_configs=["configs/remote_servers.xml", "configs/zookeeper_config_with_password.xml"],
-                                   with_zookeeper=True, zookeeper_use_tmpfs=False)
+                                   with_zookeeper=True, zookeeper_use_tmpfs=False, use_keeper=False)
    node2 = cluster_2.add_instance('node2', main_configs=["configs/remote_servers.xml"], with_zookeeper=True,
-                                   zookeeper_use_tmpfs=False)
+                                   zookeeper_use_tmpfs=False, use_keeper=False)

    try:
        cluster_1.start()
@@ -126,6 +127,7 @@
        cluster_2.shutdown()

+# NOTE: this test has to be ported to Keeper
def test_secure_connection():
    # We need absolute path in zookeeper volumes. Generate it dynamically.
    TEMPLATE = '''
diff --git a/tests/jepsen.clickhouse-keeper/resources/config.xml b/tests/jepsen.clickhouse-keeper/resources/config.xml
deleted file mode 120000
index c7596baa075..00000000000
--- a/tests/jepsen.clickhouse-keeper/resources/config.xml
+++ /dev/null
@@ -1 +0,0 @@
-../../../programs/server/config.xml
\ No newline at end of file
diff --git a/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml b/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml
index 528ea5d77be..f06d9683990 100644
--- a/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml
+++ b/tests/jepsen.clickhouse-keeper/resources/keeper_config.xml
@@ -1,4 +1,12 @@
 <yandex>
+    <listen_host>::</listen_host>
+
+    <logger>
+        <level>trace</level>
+        <log>/var/log/clickhouse-keeper/clickhouse-keeper.log</log>
+        <errorlog>/var/log/clickhouse-keeper/clickhouse-keeper.err.log</errorlog>
+    </logger>
+
     <keeper_server>
         <tcp_port>9181</tcp_port>
         <server_id>{id}</server_id>
diff --git a/tests/jepsen.clickhouse-keeper/resources/listen.xml b/tests/jepsen.clickhouse-keeper/resources/listen.xml
deleted file mode 100644
index de8c737ff75..00000000000
--- a/tests/jepsen.clickhouse-keeper/resources/listen.xml
+++ /dev/null
@@ -1,3 +0,0 @@
-<yandex>
-    <listen_host>::</listen_host>
-</yandex>
diff --git a/tests/jepsen.clickhouse-keeper/resources/users.xml b/tests/jepsen.clickhouse-keeper/resources/users.xml
deleted file mode 120000
index 41b137a130f..00000000000
--- a/tests/jepsen.clickhouse-keeper/resources/users.xml
+++ /dev/null
@@ -1 +0,0 @@
-../../../programs/server/users.xml
\ No newline at end of file
diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj
index fdb6b233fec..30c2c0eaf4f 100644
--- a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj
+++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/db.clj
@@ -89,10 +89,7 @@
 (defn install-configs
   [test node]
-  (c/exec :echo (slurp (io/resource "config.xml")) :> (str configs-dir "/config.xml"))
-  (c/exec :echo (slurp (io/resource "users.xml")) :> (str configs-dir "/users.xml"))
-  (c/exec :echo (slurp (io/resource "listen.xml")) :> (str sub-configs-dir "/listen.xml"))
-  (c/exec :echo (cluster-config test node (slurp (io/resource "keeper_config.xml"))) :> (str sub-configs-dir "/keeper_config.xml")))
+  (c/exec :echo (cluster-config test node (slurp (io/resource "keeper_config.xml"))) :> (str configs-dir "/keeper_config.xml")))
 
 (defn collect-traces
   [test node]
@@ -144,7 +141,7 @@
           (info node "Coordination files exists, going to compress")
           (c/cd data-dir
                 (c/exec :tar :czf "coordination.tar.gz" "coordination")))))
-    (let [common-logs [stderr-file (str logs-dir "/clickhouse-server.log") (str data-dir "/coordination.tar.gz")]
+    (let [common-logs [stderr-file (str logs-dir "/clickhouse-keeper.log") (str data-dir "/coordination.tar.gz")]
           gdb-log (str logs-dir "/gdb.log")]
       (if (cu/exists? (str logs-dir "/gdb.log"))
         (conj common-logs gdb-log)
diff --git a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj
index 70813457251..0457ff6eae2 100644
--- a/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj
+++ b/tests/jepsen.clickhouse-keeper/src/jepsen/clickhouse_keeper/utils.clj
@@ -143,7 +143,7 @@
   [node test]
   (info "Checking server alive on" node)
   (try
-    (c/exec binary-path :client :--query "SELECT 1")
+    (zk-connect (name node) 9181 30000)
     (catch Exception _ false)))
 
 (defn wait-clickhouse-alive!
@@ -169,16 +169,13 @@ :logfile stderr-file :chdir data-dir} binary-path - :server - :--config (str configs-dir "/config.xml") + :keeper + :--config (str configs-dir "/keeper_config.xml") :-- - :--path (str data-dir "/") - :--user_files_path (str data-dir "/user_files") - :--top_level_domains_path (str data-dir "/top_level_domains") - :--logger.log (str logs-dir "/clickhouse-server.log") - :--logger.errorlog (str logs-dir "/clickhouse-server.err.log") + :--logger.log (str logs-dir "/clickhouse-keeper.log") + :--logger.errorlog (str logs-dir "/clickhouse-keeper.err.log") :--keeper_server.snapshot_storage_path coordination-snapshots-dir - :--keeper_server.logs_storage_path coordination-logs-dir) + :--keeper_server.log_storage_path coordination-logs-dir) (wait-clickhouse-alive! node test))) (defn md5 [^String s] diff --git a/tests/performance/column_column_comparison.xml b/tests/performance/column_column_comparison.xml deleted file mode 100644 index 2b59a65a54b..00000000000 --- a/tests/performance/column_column_comparison.xml +++ /dev/null @@ -1,31 +0,0 @@ - - - comparison - - - - hits_100m_single - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/tests/queries/0_stateless/00077_set_keys_fit_128_bits_many_blocks.sql b/tests/queries/0_stateless/00077_set_keys_fit_128_bits_many_blocks.sql index 02f0a6648a8..fe6a0cefd31 100644 --- a/tests/queries/0_stateless/00077_set_keys_fit_128_bits_many_blocks.sql +++ b/tests/queries/0_stateless/00077_set_keys_fit_128_bits_many_blocks.sql @@ -1,6 +1,6 @@ SET max_block_size = 1000; -SELECT number FROM +SELECT number FROM ( SELECT * FROM system.numbers LIMIT 10000 ) diff --git a/tests/queries/0_stateless/00184_shard_distributed_group_by_no_merge.reference b/tests/queries/0_stateless/00184_shard_distributed_group_by_no_merge.reference index 453c7fb5af0..b667c57a14c 100644 --- a/tests/queries/0_stateless/00184_shard_distributed_group_by_no_merge.reference +++ b/tests/queries/0_stateless/00184_shard_distributed_group_by_no_merge.reference @@ -39,3 +39,5 @@ GROUP BY w/ ALIAS 1 ORDER BY w/ ALIAS 0 +func(aggregate function) GROUP BY +0 diff --git a/tests/queries/0_stateless/00184_shard_distributed_group_by_no_merge.sql b/tests/queries/0_stateless/00184_shard_distributed_group_by_no_merge.sql index 9912e083777..cce10312e8f 100644 --- a/tests/queries/0_stateless/00184_shard_distributed_group_by_no_merge.sql +++ b/tests/queries/0_stateless/00184_shard_distributed_group_by_no_merge.sql @@ -39,4 +39,7 @@ SELECT n FROM remote('127.0.0.{2,3}', currentDatabase(), data_00184) GROUP BY nu SELECT 'ORDER BY w/ ALIAS'; SELECT n FROM remote('127.0.0.{2,3}', currentDatabase(), data_00184) ORDER BY number AS n LIMIT 1 SETTINGS distributed_group_by_no_merge=2; +SELECT 'func(aggregate function) GROUP BY'; +SELECT assumeNotNull(argMax(dummy, 1)) FROM remote('127.1', system.one) SETTINGS distributed_group_by_no_merge=2; + drop table data_00184; diff --git a/tests/queries/0_stateless/00316_rounding_functions_and_empty_block.sql b/tests/queries/0_stateless/00316_rounding_functions_and_empty_block.sql index 9f91cd713ae..08c30324da4 100644 --- a/tests/queries/0_stateless/00316_rounding_functions_and_empty_block.sql +++ b/tests/queries/0_stateless/00316_rounding_functions_and_empty_block.sql @@ -2,7 +2,7 @@ SET any_join_distinct_right_table_keys = 1; SELECT floor((ReferrerTimestamp - InstallTimestamp) / 86400) AS DaysSinceInstallations -FROM +FROM ( SELECT 6534090703218709881 AS DeviceIDHash, 1458586663 AS InstallTimestamp UNION ALL SELECT 2697418689476658272, 1458561552 diff --git 
a/tests/queries/0_stateless/00534_functions_bad_arguments4.reference b/tests/queries/0_stateless/00534_functions_bad_arguments4_long.reference similarity index 100% rename from tests/queries/0_stateless/00534_functions_bad_arguments4.reference rename to tests/queries/0_stateless/00534_functions_bad_arguments4_long.reference diff --git a/tests/queries/0_stateless/00534_functions_bad_arguments4.sh b/tests/queries/0_stateless/00534_functions_bad_arguments4_long.sh similarity index 100% rename from tests/queries/0_stateless/00534_functions_bad_arguments4.sh rename to tests/queries/0_stateless/00534_functions_bad_arguments4_long.sh diff --git a/tests/queries/0_stateless/00585_union_all_subquery_aggregation_column_removal.sql b/tests/queries/0_stateless/00585_union_all_subquery_aggregation_column_removal.sql index bf5d2251470..49975daaa7e 100644 --- a/tests/queries/0_stateless/00585_union_all_subquery_aggregation_column_removal.sql +++ b/tests/queries/0_stateless/00585_union_all_subquery_aggregation_column_removal.sql @@ -11,7 +11,7 @@ INSERT INTO transactions VALUES ('facebook.com'), ('yandex.ru'), ('baidu.com'); SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -36,7 +36,7 @@ FORMAT JSONEachRow; SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -63,7 +63,7 @@ SELECT DISTINCT * FROM SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -88,7 +88,7 @@ UNION ALL SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -116,7 +116,7 @@ SELECT sum(total_count) AS total, sum(facebookHits) AS facebook, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -142,7 +142,7 @@ SELECT sum(total_count) AS total, max(facebookHits) AS facebook, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -170,7 +170,7 @@ SELECT * FROM SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -195,7 +195,7 @@ ALL FULL OUTER JOIN SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -224,7 +224,7 @@ SELECT total FROM SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -249,7 +249,7 @@ ALL FULL OUTER JOIN SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -278,7 +278,7 @@ SELECT domain FROM SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, @@ -303,7 +303,7 @@ ALL FULL OUTER JOIN SELECT sum(total_count) AS total, domain -FROM +FROM ( SELECT COUNT(*) AS total_count, diff --git a/tests/queries/0_stateless/00597_push_down_predicate.reference b/tests/queries/0_stateless/00597_push_down_predicate_long.reference similarity index 94% rename from tests/queries/0_stateless/00597_push_down_predicate.reference rename to tests/queries/0_stateless/00597_push_down_predicate_long.reference index 59313c35b81..bd7d3cd81d4 100644 --- a/tests/queries/0_stateless/00597_push_down_predicate.reference +++ b/tests/queries/0_stateless/00597_push_down_predicate_long.reference @@ -5,7 +5,7 @@ 2000-01-01 1 test string 1 1 -------Forbid push down------- SELECT count() -FROM +FROM ( SELECT [number] AS a, @@ -21,11 +21,11 @@ WHERE NOT ignore(a + b) SELECT a, b -FROM +FROM ( SELECT 1 AS a ) -ANY LEFT JOIN +ANY LEFT JOIN ( SELECT 1 AS a, @@ -35,13 +35,13 @@ WHERE b = 0 SELECT a, b -FROM +FROM ( SELECT 1 AS a, 1 AS b ) -ANY RIGHT JOIN +ANY RIGHT JOIN ( SELECT 1 AS a ) USING (a) @@ -49,11 +49,11 @@ WHERE b = 0 SELECT a, b 
-FROM +FROM ( SELECT 1 AS a ) -ANY FULL OUTER JOIN +ANY FULL OUTER JOIN ( SELECT 1 AS a, @@ -63,26 +63,26 @@ WHERE b = 0 SELECT a, b -FROM +FROM ( SELECT 1 AS a, 1 AS b ) -ANY FULL OUTER JOIN +ANY FULL OUTER JOIN ( SELECT 1 AS a ) USING (a) WHERE b = 0 -------Need push down------- SELECT toString(value) AS value -FROM +FROM ( SELECT 1 AS value ) 1 SELECT id -FROM +FROM ( SELECT 1 AS id UNION ALL @@ -92,7 +92,7 @@ FROM WHERE id = 1 1 SELECT id -FROM +FROM ( SELECT arrayJoin([1, 2, 3]) AS id WHERE id = 1 @@ -100,7 +100,7 @@ FROM WHERE id = 1 1 SELECT id -FROM +FROM ( SELECT arrayJoin([1, 2, 3]) AS id WHERE id = 1 @@ -110,7 +110,7 @@ WHERE id = 1 SELECT id, subquery -FROM +FROM ( SELECT 1 AS id, @@ -122,7 +122,7 @@ WHERE subquery = 1 SELECT a, b -FROM +FROM ( SELECT toUInt64(sum(id) AS b) AS a, @@ -137,7 +137,7 @@ SELECT id, name, value -FROM +FROM ( SELECT date, @@ -156,7 +156,7 @@ WHERE id = 1 SELECT a, b -FROM +FROM ( SELECT toUInt64(sum(id) AS b) AS a, @@ -171,7 +171,7 @@ SELECT id, name, value -FROM +FROM ( SELECT date, @@ -188,14 +188,14 @@ SELECT id, name, value -FROM +FROM ( SELECT date, id, name, value - FROM + FROM ( SELECT date, @@ -214,14 +214,14 @@ SELECT id, name, value -FROM +FROM ( SELECT date, id, name, value - FROM + FROM ( SELECT date, @@ -240,7 +240,7 @@ SELECT id, name, value -FROM +FROM ( SELECT date, @@ -257,14 +257,14 @@ SELECT id, name, value -FROM +FROM ( SELECT date, id, name, value - FROM + FROM ( SELECT date, @@ -283,7 +283,7 @@ SELECT id, name, value -FROM +FROM ( SELECT date, @@ -300,14 +300,14 @@ SELECT id, name, value -FROM +FROM ( SELECT date, id, name, value - FROM + FROM ( SELECT date, @@ -325,7 +325,7 @@ SELECT id, date, value -FROM +FROM ( SELECT id, @@ -344,7 +344,7 @@ SELECT id, name, value -FROM +FROM ( SELECT date, @@ -373,7 +373,7 @@ SELECT date, name, value -FROM +FROM ( SELECT date, @@ -383,7 +383,7 @@ FROM FROM test_00597 WHERE id = 1 ) -ANY LEFT JOIN +ANY LEFT JOIN ( SELECT id FROM test_00597 @@ -395,11 +395,11 @@ SELECT date, name, value -FROM +FROM ( SELECT toInt8(1) AS id ) -ANY LEFT JOIN +ANY LEFT JOIN ( SELECT date, @@ -411,7 +411,7 @@ ANY LEFT JOIN WHERE value = 1 1 2000-01-01 test string 1 1 SELECT value -FROM +FROM ( SELECT toInt8(1) AS id ) @@ -423,7 +423,7 @@ SELECT id, name, value -FROM +FROM ( SELECT date, @@ -433,7 +433,7 @@ FROM date, name, value - FROM + FROM ( SELECT date, @@ -443,7 +443,7 @@ FROM FROM test_00597 WHERE id = 1 ) - ANY LEFT JOIN + ANY LEFT JOIN ( SELECT id FROM test_00597 @@ -460,7 +460,7 @@ SELECT b.date, b.name, b.value -FROM +FROM ( SELECT date, @@ -469,7 +469,7 @@ FROM value FROM test_00597 ) -ANY LEFT JOIN +ANY LEFT JOIN ( SELECT date, @@ -485,7 +485,7 @@ SELECT date, name, value -FROM +FROM ( SELECT toInt8(1) AS id, @@ -493,7 +493,7 @@ FROM FROM system.numbers LIMIT 1 ) -ANY LEFT JOIN +ANY LEFT JOIN ( SELECT date, @@ -513,7 +513,7 @@ SELECT `b.id`, `b.name`, `b.value` -FROM +FROM ( SELECT date, @@ -524,7 +524,7 @@ FROM b.id, b.name, b.value - FROM + FROM ( SELECT date, @@ -534,7 +534,7 @@ FROM FROM test_00597 WHERE id = 1 ) AS a - ANY LEFT JOIN + ANY LEFT JOIN ( SELECT date, @@ -555,7 +555,7 @@ SELECT r.date, r.name, r.value -FROM +FROM ( SELECT date, @@ -564,14 +564,14 @@ FROM value FROM test_00597 ) -SEMI LEFT JOIN +SEMI LEFT JOIN ( SELECT date, id, name, value - FROM + FROM ( SELECT date, @@ -586,7 +586,7 @@ SEMI LEFT JOIN WHERE r.id = 1 2000-01-01 1 test string 1 1 2000-01-01 test string 1 1 SELECT value + t1.value AS expr -FROM +FROM ( SELECT value, diff --git 
a/tests/queries/0_stateless/00597_push_down_predicate.sql b/tests/queries/0_stateless/00597_push_down_predicate_long.sql similarity index 100% rename from tests/queries/0_stateless/00597_push_down_predicate.sql rename to tests/queries/0_stateless/00597_push_down_predicate_long.sql diff --git a/tests/queries/0_stateless/00599_create_view_with_subquery.reference b/tests/queries/0_stateless/00599_create_view_with_subquery.reference index d83d2837a18..0458f650fd0 100644 --- a/tests/queries/0_stateless/00599_create_view_with_subquery.reference +++ b/tests/queries/0_stateless/00599_create_view_with_subquery.reference @@ -1 +1 @@ -CREATE VIEW default.test_view_00599\n(\n `id` UInt64\n) AS\nSELECT *\nFROM default.test_00599\nWHERE id = \n(\n SELECT 1\n) +CREATE VIEW default.test_view_00599\n(\n `id` UInt64\n) AS\nSELECT *\nFROM default.test_00599\nWHERE id = (\n SELECT 1\n) diff --git a/tests/queries/0_stateless/00618_nullable_in.sql b/tests/queries/0_stateless/00618_nullable_in.sql index 72e166dc0f5..f039f1fb9d5 100644 --- a/tests/queries/0_stateless/00618_nullable_in.sql +++ b/tests/queries/0_stateless/00618_nullable_in.sql @@ -5,7 +5,7 @@ SELECT uniqExact(x) AS u, uniqExactIf(x, name = 'a') AS ue, uniqExactIf(x, name IN ('a', 'b')) AS ui -FROM +FROM ( SELECT toNullable('a') AS name, diff --git a/tests/queries/0_stateless/00743_limit_by_not_found_column.sql b/tests/queries/0_stateless/00743_limit_by_not_found_column.sql index d20b3b0209e..831d67f624b 100644 --- a/tests/queries/0_stateless/00743_limit_by_not_found_column.sql +++ b/tests/queries/0_stateless/00743_limit_by_not_found_column.sql @@ -22,7 +22,7 @@ DROP TABLE installation_stats; CREATE TEMPORARY TABLE Accounts (AccountID UInt64, Currency String); SELECT AccountID -FROM +FROM ( SELECT AccountID, diff --git a/tests/queries/0_stateless/00751_default_databasename_for_view.reference b/tests/queries/0_stateless/00751_default_databasename_for_view.reference index 76d5cee02e2..b3f1875ae91 100644 --- a/tests/queries/0_stateless/00751_default_databasename_for_view.reference +++ b/tests/queries/0_stateless/00751_default_databasename_for_view.reference @@ -12,14 +12,11 @@ SELECT platform, app FROM test_00751.t_00751 -WHERE (app = -( +WHERE (app = ( SELECT min(app) FROM test_00751.u_00751 -)) AND (platform = -( - SELECT - ( +)) AND (platform = ( + SELECT ( SELECT min(platform) FROM test_00751.v_00751 ) diff --git a/tests/queries/0_stateless/00808_not_optimize_predicate.reference b/tests/queries/0_stateless/00808_not_optimize_predicate.reference index d8ab9425aab..647c6d91890 100644 --- a/tests/queries/0_stateless/00808_not_optimize_predicate.reference +++ b/tests/queries/0_stateless/00808_not_optimize_predicate.reference @@ -18,7 +18,7 @@ SELECT n, `finalizeAggregation(s)` -FROM +FROM ( SELECT n, diff --git a/tests/queries/0_stateless/00849_multiple_comma_join_2.reference b/tests/queries/0_stateless/00849_multiple_comma_join_2.reference index fc39ef13935..5709db44eb1 100644 --- a/tests/queries/0_stateless/00849_multiple_comma_join_2.reference +++ b/tests/queries/0_stateless/00849_multiple_comma_join_2.reference @@ -10,7 +10,7 @@ FROM t1 ALL INNER JOIN t2 ON b = t2.b WHERE b = t2.b SELECT `--t1.a` AS `t1.a` -FROM +FROM ( SELECT a AS `--t1.a`, @@ -21,7 +21,7 @@ FROM ALL INNER JOIN t3 ON `--t1.a` = a WHERE (`--t1.a` = `--t2.a`) AND (`--t1.a` = a) SELECT `--t1.a` AS `t1.a` -FROM +FROM ( SELECT b AS `--t1.b`, @@ -33,13 +33,13 @@ FROM ALL INNER JOIN t3 ON `--t1.b` = b WHERE (`--t1.b` = `--t2.b`) AND (`--t1.b` = b) SELECT `--t1.a` AS `t1.a` -FROM +FROM ( 
     SELECT
         `--t1.a`,
         `--t2.a`,
         a AS `--t3.a`
-    FROM 
+    FROM
     (
         SELECT a AS `--t1.a`,
@@ -52,14 +52,14 @@ FROM
 ALL INNER JOIN t4 ON `--t1.a` = a
 WHERE (`--t1.a` = `--t2.a`) AND (`--t1.a` = `--t3.a`) AND (`--t1.a` = a)
 SELECT `--t1.a` AS `t1.a`
-FROM 
+FROM
 (
     SELECT
         `--t1.b`,
         `--t1.a`,
         `--t2.b`,
         b AS `--t3.b`
-    FROM 
+    FROM
     (
         SELECT b AS `--t1.b`,
@@ -73,13 +73,13 @@ FROM
 ALL INNER JOIN t4 ON `--t1.b` = b
 WHERE (`--t1.b` = `--t2.b`) AND (`--t1.b` = `--t3.b`) AND (`--t1.b` = b)
 SELECT `--t1.a` AS `t1.a`
-FROM 
+FROM
 (
     SELECT
         `--t1.a`,
         `--t2.a`,
         a AS `--t3.a`
-    FROM 
+    FROM
     (
         SELECT a AS `--t1.a`,
@@ -92,13 +92,13 @@ FROM
 ALL INNER JOIN t4 ON `--t2.a` = a
 WHERE (`--t2.a` = `--t1.a`) AND (`--t2.a` = `--t3.a`) AND (`--t2.a` = a)
 SELECT `--t1.a` AS `t1.a`
-FROM 
+FROM
 (
     SELECT
         `--t1.a`,
         `--t2.a`,
         a AS `--t3.a`
-    FROM 
+    FROM
     (
         SELECT a AS `--t1.a`,
@@ -111,13 +111,13 @@ FROM
 ALL INNER JOIN t4 ON `--t3.a` = a
 WHERE (`--t3.a` = `--t1.a`) AND (`--t3.a` = `--t2.a`) AND (`--t3.a` = a)
 SELECT `--t1.a` AS `t1.a`
-FROM 
+FROM
 (
     SELECT
         `--t1.a`,
         `--t2.a`,
         a AS `--t3.a`
-    FROM 
+    FROM
     (
         SELECT a AS `--t1.a`,
@@ -130,13 +130,13 @@ FROM
 ALL INNER JOIN t4 ON (a = `--t1.a`) AND (a = `--t2.a`) AND (a = `--t3.a`)
 WHERE (a = `--t1.a`) AND (a = `--t2.a`) AND (a = `--t3.a`)
 SELECT `--t1.a` AS `t1.a`
-FROM 
+FROM
 (
     SELECT
         `--t1.a`,
         `--t2.a`,
         a AS `--t3.a`
-    FROM 
+    FROM
     (
         SELECT a AS `--t1.a`,
@@ -149,10 +149,10 @@ FROM
 ALL INNER JOIN t4 ON `--t3.a` = a
 WHERE (`--t1.a` = `--t2.a`) AND (`--t2.a` = `--t3.a`) AND (`--t3.a` = a)
 SELECT `--t1.a` AS `t1.a`
-FROM 
+FROM
 (
     SELECT `--t1.a`
-    FROM 
+    FROM
     (
         SELECT a AS `--t1.a`
         FROM t1
@@ -162,10 +162,10 @@ FROM
 ) AS `--.s`
 CROSS JOIN t4
 SELECT `--t1.a` AS `t1.a`
-FROM 
+FROM
 (
     SELECT `--t1.a`
-    FROM 
+    FROM
     (
         SELECT a AS `--t1.a`
         FROM t1
@@ -175,7 +175,7 @@ FROM
 ) AS `--.s`
 CROSS JOIN t4
 SELECT `--t1.a` AS `t1.a`
-FROM 
+FROM
 (
     SELECT a AS `--t1.a`
     FROM t1
@@ -183,7 +183,7 @@ FROM
 ) AS `--.s`
 CROSS JOIN t3
 SELECT `--t1.a` AS `t1.a`
-FROM 
+FROM
 (
     SELECT a AS `--t1.a`,
diff --git a/tests/queries/0_stateless/00911_tautological_compare.reference b/tests/queries/0_stateless/00911_tautological_compare.reference
index 405d3348775..e69de29bb2d 100644
--- a/tests/queries/0_stateless/00911_tautological_compare.reference
+++ b/tests/queries/0_stateless/00911_tautological_compare.reference
@@ -1,8 +0,0 @@
-0
-0
-0
-0
-0
-0
-0
-0
diff --git a/tests/queries/0_stateless/00911_tautological_compare.sql b/tests/queries/0_stateless/00911_tautological_compare.sql
index 34c95d73716..bcbbbeb514b 100644
--- a/tests/queries/0_stateless/00911_tautological_compare.sql
+++ b/tests/queries/0_stateless/00911_tautological_compare.sql
@@ -1,10 +1,49 @@
-SELECT count() FROM system.numbers WHERE number != number;
-SELECT count() FROM system.numbers WHERE number < number;
-SELECT count() FROM system.numbers WHERE number > number;
+-- TODO: The tautological optimization breaks JIT expression compilation, because it can return a constant result
+-- for non-constant columns, and then sample blocks from the same ActionsDAGs can be mismatched.
+-- This optimization cannot be performed at the AST rewrite level, because we do not have information about types,
+-- and equals(tuple(NULL), tuple(NULL)) has the same hash code but should not be optimized.
+-- Re-enable this test after the refactoring of InterpreterSelectQuery.
-SELECT count() FROM system.numbers WHERE NOT (number = number); -SELECT count() FROM system.numbers WHERE NOT (number <= number); -SELECT count() FROM system.numbers WHERE NOT (number >= number); +-- SELECT count() FROM system.numbers WHERE number != number; +-- SELECT count() FROM system.numbers WHERE number < number; +-- SELECT count() FROM system.numbers WHERE number > number; -SELECT count() FROM system.numbers WHERE SHA256(toString(number)) != SHA256(toString(number)); -SELECT count() FROM system.numbers WHERE SHA256(toString(number)) != SHA256(toString(number)) AND rand() > 10; +-- SELECT count() FROM system.numbers WHERE NOT (number = number); +-- SELECT count() FROM system.numbers WHERE NOT (number <= number); +-- SELECT count() FROM system.numbers WHERE NOT (number >= number); + +-- SELECT count() FROM system.numbers WHERE SHA256(toString(number)) != SHA256(toString(number)); +-- SELECT count() FROM system.numbers WHERE SHA256(toString(number)) != SHA256(toString(number)) AND rand() > 10; + +-- column_column_comparison.xml +-- +-- +-- comparison +-- + +-- +-- hits_100m_single +-- + + +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- +-- + +-- diff --git a/tests/queries/0_stateless/00957_format_with_clashed_aliases.reference b/tests/queries/0_stateless/00957_format_with_clashed_aliases.reference index d6e53c8b48b..cffcb8482ad 100644 --- a/tests/queries/0_stateless/00957_format_with_clashed_aliases.reference +++ b/tests/queries/0_stateless/00957_format_with_clashed_aliases.reference @@ -1,7 +1,7 @@ SELECT 1 AS x, x.y -FROM +FROM ( SELECT 'Hello, world' AS y ) AS x diff --git a/tests/queries/0_stateless/01029_early_constant_folding.reference b/tests/queries/0_stateless/01029_early_constant_folding.reference index 8a2d7e6c61a..6063e08afe0 100644 --- a/tests/queries/0_stateless/01029_early_constant_folding.reference +++ b/tests/queries/0_stateless/01029_early_constant_folding.reference @@ -4,8 +4,7 @@ SELECT 1 SELECT 1 WHERE (1 IN (0, 2)) AND (2 = (identity(CAST(2, \'UInt8\')) AS subquery)) SELECT 1 -WHERE 1 IN ( -( +WHERE 1 IN (( SELECT arrayJoin([1, 2, 3]) ) AS subquery) SELECT 1 diff --git a/tests/queries/0_stateless/01054_cache_dictionary_overflow_cell.sql b/tests/queries/0_stateless/01054_cache_dictionary_overflow_cell.sql index b040a0e7a50..d8d1d61be63 100644 --- a/tests/queries/0_stateless/01054_cache_dictionary_overflow_cell.sql +++ b/tests/queries/0_stateless/01054_cache_dictionary_overflow_cell.sql @@ -48,7 +48,7 @@ dictGet('one_cell_cache_ints_overflow', 'i8', toUInt64(19)), dictGet('one_cell_cache_ints_overflow', 'i8', toUInt64(20)); SELECT arrayMap(x -> dictGet('one_cell_cache_ints_overflow', 'i8', toUInt64(x)), array) -FROM +FROM ( SELECT [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] AS array ); diff --git a/tests/queries/0_stateless/01056_predicate_optimizer_bugs.reference b/tests/queries/0_stateless/01056_predicate_optimizer_bugs.reference index 4227af86be7..7df4dc7ead5 100644 --- a/tests/queries/0_stateless/01056_predicate_optimizer_bugs.reference +++ b/tests/queries/0_stateless/01056_predicate_optimizer_bugs.reference @@ -3,14 +3,14 @@ SELECT v, d, i -FROM +FROM ( SELECT t.1 AS k, t.2 AS v, runningDifference(v) AS d, runningDifference(cityHash64(t.1)) AS i - FROM + FROM ( SELECT arrayJoin([(\'a\', 1), (\'a\', 2), (\'a\', 3), (\'b\', 11), (\'b\', 13), (\'b\', 15)]) AS t ) @@ -26,14 +26,14 @@ SELECT co2, co3, num -FROM +FROM ( SELECT co, co2, co3, count() AS num - FROM + FROM ( SELECT 1 AS co, @@ -51,13 +51,13 @@ WHERE 
(co != 0) AND (co2 != 2) 1 0 3 1 1 0 0 1 SELECT alias AS name -FROM +FROM ( SELECT name AS alias FROM system.settings WHERE alias = \'enable_optimize_predicate_expression\' ) -ANY INNER JOIN +ANY INNER JOIN ( SELECT name FROM system.settings @@ -66,17 +66,17 @@ WHERE name = \'enable_optimize_predicate_expression\' enable_optimize_predicate_expression 1 val11 val21 val31 SELECT ccc -FROM +FROM ( SELECT 1 AS ccc WHERE 0 UNION ALL SELECT ccc - FROM + FROM ( SELECT 2 AS ccc ) - ANY INNER JOIN + ANY INNER JOIN ( SELECT 2 AS ccc ) USING (ccc) @@ -91,7 +91,7 @@ SELECT b.ts, b.id, id_c -FROM +FROM ( SELECT ts, @@ -109,7 +109,7 @@ SELECT b.ts AS `--b.ts`, b.id AS `--b.id`, id_c AS `--b.id_c` -FROM +FROM ( SELECT ts, @@ -129,7 +129,7 @@ WHERE `--a.ts` <= toDateTime(\'1970-01-01 03:00:00\') 2 3 4 5 SELECT dummy -FROM +FROM ( SELECT dummy FROM system.one @@ -141,13 +141,13 @@ SELECT id, value, value_1 -FROM +FROM ( SELECT 1 AS id, 2 AS value ) -ALL INNER JOIN +ALL INNER JOIN ( SELECT 1 AS id, diff --git a/tests/queries/0_stateless/01067_join_null.sql b/tests/queries/0_stateless/01067_join_null.sql index bc888fe01ba..a00c345128c 100644 --- a/tests/queries/0_stateless/01067_join_null.sql +++ b/tests/queries/0_stateless/01067_join_null.sql @@ -1,5 +1,5 @@ SELECT id -FROM +FROM ( SELECT 1 AS id UNION ALL @@ -20,7 +20,7 @@ ORDER BY id; SELECT '---'; SELECT * -FROM +FROM ( SELECT NULL AS x ) js1 @@ -32,7 +32,7 @@ INNER JOIN SELECT '---'; SELECT * -FROM +FROM ( SELECT NULL AS x ) js1 diff --git a/tests/queries/0_stateless/01076_predicate_optimizer_with_view.reference b/tests/queries/0_stateless/01076_predicate_optimizer_with_view.reference index dfab41b5e4c..620c5c7c8d1 100644 --- a/tests/queries/0_stateless/01076_predicate_optimizer_with_view.reference +++ b/tests/queries/0_stateless/01076_predicate_optimizer_with_view.reference @@ -3,7 +3,7 @@ SELECT id, name, value -FROM +FROM ( SELECT * FROM default.test @@ -15,7 +15,7 @@ SELECT id, name, value -FROM +FROM ( SELECT * FROM default.test @@ -23,7 +23,7 @@ FROM ) AS test_view WHERE id = 2 SELECT id -FROM +FROM ( SELECT * FROM default.test @@ -31,7 +31,7 @@ FROM ) AS test_view WHERE id = 1 SELECT id -FROM +FROM ( SELECT * FROM default.test diff --git a/tests/queries/0_stateless/01083_expressions_in_engine_arguments.reference b/tests/queries/0_stateless/01083_expressions_in_engine_arguments.reference index 2a5d7e6da32..b25cfadd0ec 100644 --- a/tests/queries/0_stateless/01083_expressions_in_engine_arguments.reference +++ b/tests/queries/0_stateless/01083_expressions_in_engine_arguments.reference @@ -6,6 +6,6 @@ CREATE TABLE default.distributed\n(\n `n` Int8\n)\nENGINE = Distributed(\'tes CREATE TABLE default.distributed_tf\n(\n `n` Int8\n) AS cluster(\'test_shard_localhost\', \'default\', \'buffer\') CREATE TABLE default.url\n(\n `n` UInt64,\n `col` String\n)\nENGINE = URL(\'https://localhost:8443/?query=select+n,+_table+from+default.merge+format+CSV\', \'CSV\') CREATE TABLE default.rich_syntax\n(\n `n` Int64\n) AS remote(\'localhos{x|y|t}\', cluster(\'test_shard_localhost\', remote(\'127.0.0.{1..4}\', \'default\', \'view\'))) -CREATE VIEW default.view\n(\n `n` Int64\n) AS\nSELECT toInt64(n) AS n\nFROM \n(\n SELECT toString(n) AS n\n FROM default.merge\n WHERE _table != \'qwerty\'\n ORDER BY _table ASC\n)\nUNION ALL\nSELECT *\nFROM default.file +CREATE VIEW default.view\n(\n `n` Int64\n) AS\nSELECT toInt64(n) AS n\nFROM\n(\n SELECT toString(n) AS n\n FROM default.merge\n WHERE _table != \'qwerty\'\n ORDER BY _table ASC\n)\nUNION ALL\nSELECT *\nFROM default.file 
CREATE DICTIONARY default.dict\n(\n `n` UInt64,\n `col` String DEFAULT \'42\'\n)\nPRIMARY KEY n\nSOURCE(CLICKHOUSE(HOST \'localhost\' PORT 9440 SECURE 1 USER \'default\' TABLE \'url\'))\nLIFETIME(MIN 0 MAX 1)\nLAYOUT(CACHE(SIZE_IN_CELLS 1)) 16 diff --git a/tests/queries/0_stateless/01155_rename_move_materialized_view.reference b/tests/queries/0_stateless/01155_rename_move_materialized_view.reference new file mode 100644 index 00000000000..942cedf8696 --- /dev/null +++ b/tests/queries/0_stateless/01155_rename_move_materialized_view.reference @@ -0,0 +1,58 @@ +1 .inner.mv1 before moving tablesmv1 +1 dst before moving tablesmv2 +1 mv1 before moving tablesmv1 +1 mv2 before moving tablesmv2 +1 src before moving tables +ordinary: +.inner.mv1 +dst +mv1 +mv2 +src +ordinary after rename: +atomic after rename: +.inner_id. +dst +mv1 +mv2 +src +3 .inner_id. after renaming databasemv1 +3 .inner_id. before moving tablesmv1 +3 dst after renaming databasemv2 +3 dst before moving tablesmv2 +3 mv1 after renaming databasemv1 +3 mv1 before moving tablesmv1 +3 mv2 after renaming databasemv2 +3 mv2 before moving tablesmv2 +3 src after moving tables +3 src after renaming database +3 src before moving tables +.inner_id. +dst +mv1 +mv2 +src +CREATE DATABASE test_01155_atomic\nENGINE = Atomic +4 .inner.mv1 after renaming databasemv1 +4 .inner.mv1 after renaming tablesmv1 +4 .inner.mv1 before moving tablesmv1 +4 dst after renaming databasemv2 +4 dst after renaming tablesmv2 +4 dst before moving tablesmv2 +4 mv1 after renaming databasemv1 +4 mv1 after renaming tablesmv1 +4 mv1 before moving tablesmv1 +4 mv2 after renaming databasemv2 +4 mv2 after renaming tablesmv2 +4 mv2 before moving tablesmv2 +4 src after moving tables +4 src after renaming database +4 src after renaming tables +4 src before moving tables +test_01155_ordinary: +.inner.mv1 +dst +mv1 +mv2 +src +test_01155_atomic: diff --git a/tests/queries/0_stateless/01155_rename_move_materialized_view.sql b/tests/queries/0_stateless/01155_rename_move_materialized_view.sql new file mode 100644 index 00000000000..2ede0fbcedf --- /dev/null +++ b/tests/queries/0_stateless/01155_rename_move_materialized_view.sql @@ -0,0 +1,59 @@ +DROP DATABASE IF EXISTS test_01155_ordinary; +DROP DATABASE IF EXISTS test_01155_atomic; + +CREATE DATABASE test_01155_ordinary ENGINE=Ordinary; +CREATE DATABASE test_01155_atomic ENGINE=Atomic; + +USE test_01155_ordinary; +CREATE TABLE src (s String) ENGINE=MergeTree() PARTITION BY tuple() ORDER BY s; +CREATE MATERIALIZED VIEW mv1 (s String) ENGINE=MergeTree() PARTITION BY tuple() ORDER BY s AS SELECT (*,).1 || 'mv1' as s FROM src; +CREATE TABLE dst (s String) ENGINE=MergeTree() PARTITION BY tuple() ORDER BY s; +CREATE MATERIALIZED VIEW mv2 TO dst (s String) AS SELECT (*,).1 || 'mv2' as s FROM src; +INSERT INTO src VALUES ('before moving tables'); +SELECT 1, substr(_table, 1, 10), s FROM merge('test_01155_ordinary', '') ORDER BY _table, s; + +-- Move tables with materialized views from Ordinary to Atomic +SELECT 'ordinary:'; +SHOW TABLES FROM test_01155_ordinary; +RENAME TABLE test_01155_ordinary.mv1 TO test_01155_atomic.mv1; +RENAME TABLE test_01155_ordinary.mv2 TO test_01155_atomic.mv2; +RENAME TABLE test_01155_ordinary.dst TO test_01155_atomic.dst; +RENAME TABLE test_01155_ordinary.src TO test_01155_atomic.src; +SELECT 'ordinary after rename:'; +SELECT substr(name, 1, 10) FROM system.tables WHERE database='test_01155_ordinary'; +SELECT 'atomic after rename:'; +SELECT substr(name, 1, 10) FROM system.tables WHERE 
database='test_01155_atomic'; +DROP DATABASE test_01155_ordinary; +USE default; + +INSERT INTO test_01155_atomic.src VALUES ('after moving tables'); +SELECT 2, substr(_table, 1, 10), s FROM merge('test_01155_atomic', '') ORDER BY _table, s; -- { serverError 81 } + +RENAME DATABASE test_01155_atomic TO test_01155_ordinary; +USE test_01155_ordinary; + +INSERT INTO src VALUES ('after renaming database'); +SELECT 3, substr(_table, 1, 10), s FROM merge('test_01155_ordinary', '') ORDER BY _table, s; + +SELECT substr(name, 1, 10) FROM system.tables WHERE database='test_01155_ordinary'; + +-- Move tables back +RENAME DATABASE test_01155_ordinary TO test_01155_atomic; + +CREATE DATABASE test_01155_ordinary ENGINE=Ordinary; +SHOW CREATE DATABASE test_01155_atomic; + +RENAME TABLE test_01155_atomic.mv1 TO test_01155_ordinary.mv1; +RENAME TABLE test_01155_atomic.mv2 TO test_01155_ordinary.mv2; +RENAME TABLE test_01155_atomic.dst TO test_01155_ordinary.dst; +RENAME TABLE test_01155_atomic.src TO test_01155_ordinary.src; + +INSERT INTO src VALUES ('after renaming tables'); +SELECT 4, substr(_table, 1, 10), s FROM merge('test_01155_ordinary', '') ORDER BY _table, s; +SELECT 'test_01155_ordinary:'; +SHOW TABLES FROM test_01155_ordinary; +SELECT 'test_01155_atomic:'; +SHOW TABLES FROM test_01155_atomic; + +DROP DATABASE IF EXISTS test_01155_atomic; +DROP DATABASE IF EXISTS test_01155_ordinary; diff --git a/tests/queries/0_stateless/01244_optimize_distributed_group_by_sharding_key.reference b/tests/queries/0_stateless/01244_optimize_distributed_group_by_sharding_key.reference index d1697bd2310..acaf6531101 100644 --- a/tests/queries/0_stateless/01244_optimize_distributed_group_by_sharding_key.reference +++ b/tests/queries/0_stateless/01244_optimize_distributed_group_by_sharding_key.reference @@ -123,3 +123,6 @@ GROUP BY sharding_key, ... 
GROUP BY ..., sharding_key 0 0 1 0 +window functions +0 0 +1 0 diff --git a/tests/queries/0_stateless/01244_optimize_distributed_group_by_sharding_key.sql b/tests/queries/0_stateless/01244_optimize_distributed_group_by_sharding_key.sql index 2f77155cc54..6b6300a4871 100644 --- a/tests/queries/0_stateless/01244_optimize_distributed_group_by_sharding_key.sql +++ b/tests/queries/0_stateless/01244_optimize_distributed_group_by_sharding_key.sql @@ -106,5 +106,9 @@ select * from dist_01247 group by key, value; select 'GROUP BY ..., sharding_key'; select * from dist_01247 group by value, key; +-- window functions +select 'window functions'; +select key, sum(sum(value)) over (rows unbounded preceding) from dist_01247 group by key settings allow_experimental_window_functions=1; + drop table dist_01247; drop table data_01247; diff --git a/tests/queries/0_stateless/01245_limit_infinite_sources.sql b/tests/queries/0_stateless/01245_limit_infinite_sources.sql index 803a2d14c39..05680d86a33 100644 --- a/tests/queries/0_stateless/01245_limit_infinite_sources.sql +++ b/tests/queries/0_stateless/01245_limit_infinite_sources.sql @@ -1,5 +1,5 @@ SELECT number -FROM +FROM ( SELECT zero AS number FROM remote('127.0.0.2', system.zeros) diff --git a/tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func.reference b/tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func_long.reference similarity index 96% rename from tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func.reference rename to tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func_long.reference index 669221005f4..b50519b9b3a 100644 --- a/tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func.reference +++ b/tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func_long.reference @@ -3,7 +3,7 @@ SELECT sum(1 + n), sum(n - 1), sum(1 - n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -13,7 +13,7 @@ SELECT 2 * sum(n), sum(n) / 2, sum(1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -23,7 +23,7 @@ SELECT 1 + min(n), min(n) - 1, 1 - min(n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -33,7 +33,7 @@ SELECT 2 * min(n), min(n) / 2, min(1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -43,7 +43,7 @@ SELECT 1 + max(n), max(n) - 1, 1 - max(n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -53,7 +53,7 @@ SELECT 2 * max(n), max(n) / 2, max(1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -63,7 +63,7 @@ SELECT sum(-1 + n), sum(n - -1), sum(-1 - n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -73,7 +73,7 @@ SELECT -2 * sum(n), sum(n) / -2, sum(-1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -83,7 +83,7 @@ SELECT -1 + min(n), min(n) - -1, -1 - min(n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -93,7 +93,7 @@ SELECT -2 * max(n), max(n) / -2, min(-1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -103,7 +103,7 @@ SELECT -1 + max(n), max(n) - -1, -1 - max(n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -113,7 +113,7 @@ SELECT -2 * min(n), min(n) / -2, max(-1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -123,7 +123,7 @@ SELECT sum(abs(2) + n), sum(n - abs(2)), sum(1 - abs(2)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -133,7 +133,7 @@ SELECT sum(abs(2) * n), sum(n / abs(2)), sum(1 / abs(2)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -143,7 +143,7 @@ SELECT min(abs(2) + n), min(n - abs(2)), 1 - min(abs(2)) -FROM 
+FROM ( SELECT number AS n FROM numbers(10) @@ -153,7 +153,7 @@ SELECT min(abs(2) * n), min(n / abs(2)), min(1 / abs(2)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -163,7 +163,7 @@ SELECT max(abs(2) + n), max(n - abs(2)), 1 - max(abs(2)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -173,7 +173,7 @@ SELECT max(abs(2) * n), max(n / abs(2)), max(1 / abs(2)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -183,7 +183,7 @@ SELECT sum(abs(n) + n), sum(n - abs(n)), sum(1 - abs(n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -193,7 +193,7 @@ SELECT sum(abs(n) * n), sum(n / abs(n)), sum(1 / abs(n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -203,7 +203,7 @@ SELECT min(abs(n) + n), min(n - abs(n)), 1 - min(abs(n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -213,7 +213,7 @@ SELECT min(abs(n) * n), min(n / abs(n)), min(1 / abs(n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -223,7 +223,7 @@ SELECT max(abs(n) + n), max(n - abs(n)), 1 - max(abs(n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -233,7 +233,7 @@ SELECT max(abs(n) * n), max(n / abs(n)), max(1 / abs(n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -243,7 +243,7 @@ SELECT sum(1 + (n * n)), sum((n * n) - 1), sum(1 - (n * n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -253,7 +253,7 @@ SELECT sum((2 * n) * n), sum(n * n) / 2, sum((1 / n) * n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -263,7 +263,7 @@ SELECT 1 + min(n * n), min(n * n) - 1, 1 - min(n * n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -273,7 +273,7 @@ SELECT min((2 * n) * n), min(n * n) / 2, min((1 / n) * n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -283,7 +283,7 @@ SELECT 1 + max(n * n), max(n * n) - 1, 1 - max(n * n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -293,7 +293,7 @@ SELECT max((2 * n) * n), max(n * n) / 2, max((1 / n) * n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -303,7 +303,7 @@ SELECT sum((1 + 1) + n), sum((1 + n) - 1), sum((1 + 1) - n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -313,7 +313,7 @@ SELECT sum(1 + (2 * n)), sum(1 + (n / 2)), sum(1 + (1 / n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -323,7 +323,7 @@ SELECT min((1 + 1) + n), (1 + min(n)) - 1, min((1 + 1) - n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -333,7 +333,7 @@ SELECT 1 + min(2 * n), 1 + min(n / 2), 1 + min(1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -343,7 +343,7 @@ SELECT max((1 + 1) + n), (1 + max(n)) - 1, max((1 + 1) - n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -353,7 +353,7 @@ SELECT 1 + max(2 * n), 1 + max(n / 2), 1 + max(1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -363,7 +363,7 @@ SELECT sum((-1 + n) + -1), sum((n - -1) + -1), sum((-1 - n) + -1) -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -373,7 +373,7 @@ SELECT (-2 * sum(n)) * -1, (sum(n) / -2) / -1, sum(-1 / n) / -1 -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -383,7 +383,7 @@ SELECT (-1 + min(n)) + -1, (min(n) - -1) + -1, (-1 - min(n)) + -1 -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -393,7 +393,7 @@ SELECT (-2 * min(n)) * -1, (min(n) / -2) / -1, max(-1 / n) / -1 -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -403,7 +403,7 @@ SELECT (-1 + max(n)) + -1, (max(n) - -1) + -1, (-1 - max(n)) + -1 -FROM +FROM ( SELECT number AS n FROM numbers(10) @@ -413,43 +413,43 @@ SELECT (-2 * max(n)) * -1, (max(n) / -2) / -1, min(-1 / n) / -1 -FROM +FROM ( SELECT number AS n FROM numbers(10) ) SELECT ((sum(n + 1) + 
sum(1 + n)) + sum(n - 1)) + sum(1 - n) -FROM +FROM ( SELECT number AS n FROM numbers(10) ) SELECT (((sum(n) * 2) + (2 * sum(n))) + (sum(n) / 2)) + sum(1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) ) SELECT (((min(n) + 1) + (1 + min(n))) + (min(n) - 1)) + (1 - min(n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) ) SELECT (((min(n) * 2) + (2 * min(n))) + (min(n) / 2)) + min(1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) ) SELECT (((max(n) + 1) + (1 + max(n))) + (max(n) - 1)) + (1 - max(n)) -FROM +FROM ( SELECT number AS n FROM numbers(10) ) SELECT (((max(n) * 2) + (2 * max(n))) + (max(n) / 2)) + max(1 / n) -FROM +FROM ( SELECT number AS n FROM numbers(10) diff --git a/tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func.sql b/tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func_long.sql similarity index 100% rename from tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func.sql rename to tests/queries/0_stateless/01271_optimize_arithmetic_operations_in_aggr_func_long.sql diff --git a/tests/queries/0_stateless/01305_duplicate_order_by_and_distinct.reference b/tests/queries/0_stateless/01305_duplicate_order_by_and_distinct.reference index c076e56a91b..10f8bbfd392 100644 --- a/tests/queries/0_stateless/01305_duplicate_order_by_and_distinct.reference +++ b/tests/queries/0_stateless/01305_duplicate_order_by_and_distinct.reference @@ -1,8 +1,8 @@ SELECT number -FROM +FROM ( SELECT number - FROM + FROM ( SELECT DISTINCT number FROM numbers(3) @@ -13,10 +13,10 @@ ORDER BY number ASC 1 2 SELECT DISTINCT number -FROM +FROM ( SELECT DISTINCT number - FROM + FROM ( SELECT DISTINCT number FROM numbers(3) @@ -29,10 +29,10 @@ ORDER BY number ASC 1 2 SELECT number -FROM +FROM ( SELECT number - FROM + FROM ( SELECT DISTINCT number % 2 AS number FROM numbers(3) @@ -42,10 +42,10 @@ ORDER BY number ASC 0 1 SELECT DISTINCT number -FROM +FROM ( SELECT DISTINCT number - FROM + FROM ( SELECT DISTINCT number % 2 AS number FROM numbers(3) diff --git a/tests/queries/0_stateless/01321_aggregate_functions_of_group_by_keys.reference b/tests/queries/0_stateless/01321_aggregate_functions_of_group_by_keys.reference index 5eaaf24208e..555593d04bc 100644 --- a/tests/queries/0_stateless/01321_aggregate_functions_of_group_by_keys.reference +++ b/tests/queries/0_stateless/01321_aggregate_functions_of_group_by_keys.reference @@ -74,7 +74,7 @@ GROUP BY number % 5 ORDER BY a ASC SELECT foo -FROM +FROM ( SELECT number AS foo FROM numbers(1) @@ -155,7 +155,7 @@ GROUP BY number % 5 ORDER BY a ASC SELECT foo -FROM +FROM ( SELECT anyLast(number) AS foo FROM numbers(1) diff --git a/tests/queries/0_stateless/01323_redundant_functions_in_order_by.reference b/tests/queries/0_stateless/01323_redundant_functions_in_order_by.reference index c8421b9869e..b32ad433730 100644 --- a/tests/queries/0_stateless/01323_redundant_functions_in_order_by.reference +++ b/tests/queries/0_stateless/01323_redundant_functions_in_order_by.reference @@ -16,21 +16,21 @@ 2 2 3 3 SELECT groupArray(x) -FROM +FROM ( SELECT number AS x FROM numbers(3) ORDER BY x ASC ) SELECT groupArray(x) -FROM +FROM ( SELECT number AS x FROM numbers(3) ORDER BY x ASC ) SELECT groupArray(x) -FROM +FROM ( SELECT number AS x FROM numbers(3) @@ -43,7 +43,7 @@ SELECT a, b, c -FROM +FROM ( SELECT number + 2 AS key FROM numbers(4) @@ -84,7 +84,7 @@ ORDER BY 2 2 3 3 SELECT groupArray(x) -FROM +FROM ( SELECT number AS x FROM numbers(3) @@ -93,7 +93,7 @@ FROM exp(x) ASC ) SELECT groupArray(x) -FROM +FROM 
( SELECT number AS x FROM numbers(3) @@ -102,7 +102,7 @@ FROM exp(exp(x)) ASC ) SELECT groupArray(x) -FROM +FROM ( SELECT number AS x FROM numbers(3) @@ -115,7 +115,7 @@ SELECT a, b, c -FROM +FROM ( SELECT number + 2 AS key FROM numbers(4) diff --git a/tests/queries/0_stateless/01372_wrong_order_by_removal.reference b/tests/queries/0_stateless/01372_wrong_order_by_removal.reference index f1f1bcef6e5..0c4d92a26b9 100644 --- a/tests/queries/0_stateless/01372_wrong_order_by_removal.reference +++ b/tests/queries/0_stateless/01372_wrong_order_by_removal.reference @@ -1,7 +1,7 @@ SELECT k, groupArrayMovingSum(v) -FROM +FROM ( SELECT k, diff --git a/tests/queries/0_stateless/01390_remove_injective_in_uniq.reference b/tests/queries/0_stateless/01390_remove_injective_in_uniq.reference index 94e1dbc5da7..eaff98b39cb 100644 --- a/tests/queries/0_stateless/01390_remove_injective_in_uniq.reference +++ b/tests/queries/0_stateless/01390_remove_injective_in_uniq.reference @@ -4,7 +4,7 @@ SELECT uniqHLL12(x), uniqCombined(x), uniqCombined64(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) @@ -15,7 +15,7 @@ SELECT uniqHLL12(x + y), uniqCombined(x + y), uniqCombined64(x + y) -FROM +FROM ( SELECT number % 2 AS x, @@ -28,7 +28,7 @@ SELECT uniqHLL12(x), uniqCombined(x), uniqCombined64(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) @@ -39,7 +39,7 @@ SELECT uniqHLL12(x), uniqCombined(x), uniqCombined64(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) @@ -50,7 +50,7 @@ SELECT uniqHLL12(x), uniqCombined(x), uniqCombined64(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) @@ -61,13 +61,13 @@ SELECT uniqHLL12(x), uniqCombined(x), uniqCombined64(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqExact(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) @@ -80,7 +80,7 @@ SELECT uniqHLL12(x), uniqCombined(x), uniqCombined64(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) @@ -91,7 +91,7 @@ SELECT uniqHLL12(x + y), uniqCombined(x + y), uniqCombined64(x + y) -FROM +FROM ( SELECT number % 2 AS x, @@ -104,7 +104,7 @@ SELECT uniqHLL12(-x), uniqCombined(-x), uniqCombined64(-x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) @@ -115,7 +115,7 @@ SELECT uniqHLL12(bitNot(x)), uniqCombined(bitNot(x)), uniqCombined64(bitNot(x)) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) @@ -126,7 +126,7 @@ SELECT uniqHLL12(bitNot(-x)), uniqCombined(bitNot(-x)), uniqCombined64(bitNot(-x)) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) @@ -137,13 +137,13 @@ SELECT uniqHLL12(-bitNot(-x)), uniqCombined(-bitNot(-x)), uniqCombined64(-bitNot(-x)) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqExact(-bitNot(-x)) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) diff --git a/tests/queries/0_stateless/01399_http_request_headers.reference b/tests/queries/0_stateless/01399_http_request_headers.reference index 99ba2b1787f..90a10a9818d 100644 --- a/tests/queries/0_stateless/01399_http_request_headers.reference +++ b/tests/queries/0_stateless/01399_http_request_headers.reference @@ -1,4 +1,5 @@ 1 +1 Code: 516 1 Code: 516 diff --git a/tests/queries/0_stateless/01399_http_request_headers.sh b/tests/queries/0_stateless/01399_http_request_headers.sh index 9b07f018230..f06e7ffc32b 100755 --- a/tests/queries/0_stateless/01399_http_request_headers.sh +++ b/tests/queries/0_stateless/01399_http_request_headers.sh @@ -4,6 +4,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . 
"$CURDIR"/../shell_config.sh +${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_URL}" -H 'EmptyHeader;' -d 'SELECT 1' ${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_URL}" -H 'X-ClickHouse-User: default' -d 'SELECT 1' ${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_URL}" -H 'X-ClickHouse-User: header_test' -d 'SELECT 1' | grep -o 'Code: 516' ${CLICKHOUSE_CURL} -sS "${CLICKHOUSE_URL}" -H 'X-ClickHouse-Key: ' -d 'SELECT 1' diff --git a/tests/queries/0_stateless/01412_row_from_totals.sql b/tests/queries/0_stateless/01412_row_from_totals.sql index 170eb06721a..63c2a111369 100644 --- a/tests/queries/0_stateless/01412_row_from_totals.sql +++ b/tests/queries/0_stateless/01412_row_from_totals.sql @@ -11,7 +11,7 @@ insert into tracking_events_tmp select 2, '2020-07-10' from numbers(1881); insert into tracking_events_tmp select 2, '2020-07-11' from numbers(1623); SELECT EventDate -FROM +FROM ( SELECT EventDate FROM tracking_events_tmp AS t1 diff --git a/tests/queries/0_stateless/01414_push_predicate_when_contains_with_clause.reference b/tests/queries/0_stateless/01414_push_predicate_when_contains_with_clause.reference index 13c8fe551c7..bbda231662e 100644 --- a/tests/queries/0_stateless/01414_push_predicate_when_contains_with_clause.reference +++ b/tests/queries/0_stateless/01414_push_predicate_when_contains_with_clause.reference @@ -3,7 +3,7 @@ SELECT number, square_number -FROM +FROM ( WITH number * 2 AS square_number SELECT diff --git a/tests/queries/0_stateless/01455_duplicate_distinct_optimization.reference b/tests/queries/0_stateless/01455_duplicate_distinct_optimization.reference index 2c54899f9f5..82e887e1b92 100644 --- a/tests/queries/0_stateless/01455_duplicate_distinct_optimization.reference +++ b/tests/queries/0_stateless/01455_duplicate_distinct_optimization.reference @@ -1,13 +1,13 @@ SELECT DISTINCT number FROM numbers(1) SELECT number -FROM +FROM ( SELECT DISTINCT number FROM numbers(1) ) SELECT DISTINCT number * 2 -FROM +FROM ( SELECT DISTINCT number * 2, @@ -15,7 +15,7 @@ FROM FROM numbers(1) ) SELECT number -FROM +FROM ( SELECT DISTINCT number * 2 AS number FROM numbers(1) @@ -23,7 +23,7 @@ FROM SELECT b, a -FROM +FROM ( SELECT DISTINCT number % 2 AS a, @@ -31,7 +31,7 @@ FROM FROM numbers(100) ) SELECT DISTINCT a -FROM +FROM ( SELECT DISTINCT number % 2 AS a, @@ -39,10 +39,10 @@ FROM FROM numbers(100) ) SELECT a -FROM +FROM ( SELECT DISTINCT a - FROM + FROM ( SELECT DISTINCT number % 2 AS a, @@ -51,12 +51,12 @@ FROM ) ) SELECT DISTINCT a -FROM +FROM ( SELECT a, b - FROM + FROM ( SELECT DISTINCT number % 2 AS a, @@ -67,12 +67,12 @@ FROM SELECT a, b -FROM +FROM ( SELECT b, a - FROM + FROM ( SELECT DISTINCT number AS a, @@ -83,13 +83,13 @@ FROM SELECT a, b -FROM +FROM ( SELECT b, a, a + b - FROM + FROM ( SELECT DISTINCT number % 2 AS a, @@ -98,10 +98,10 @@ FROM ) ) SELECT DISTINCT a -FROM +FROM ( SELECT a - FROM + FROM ( SELECT DISTINCT number % 2 AS a, @@ -110,21 +110,21 @@ FROM ) ) SELECT DISTINCT number -FROM +FROM ( SELECT DISTINCT number FROM numbers(1) ) AS t1 CROSS JOIN numbers(2) AS t2 SELECT number -FROM +FROM ( SELECT DISTINCT number FROM numbers(1) AS t1 CROSS JOIN numbers(2) AS t2 ) SELECT DISTINCT number -FROM +FROM ( SELECT DISTINCT number FROM numbers(1) diff --git a/tests/queries/0_stateless/01495_subqueries_in_with_statement_4.reference b/tests/queries/0_stateless/01495_subqueries_in_with_statement_4.reference index 6a713b14181..c83f681fc54 100644 --- a/tests/queries/0_stateless/01495_subqueries_in_with_statement_4.reference +++ 
b/tests/queries/0_stateless/01495_subqueries_in_with_statement_4.reference @@ -1,5 +1,5 @@ 0 0 -WITH it AS +WITH it AS ( SELECT * FROM numbers(1) @@ -7,4 +7,5 @@ WITH it AS SELECT number, number -FROM it AS i +FROM +it AS i diff --git a/tests/queries/0_stateless/01515_with_global_and_with_propagation.reference b/tests/queries/0_stateless/01515_with_global_and_with_propagation.reference index 76c46dd798f..dd611b45ccd 100644 --- a/tests/queries/0_stateless/01515_with_global_and_with_propagation.reference +++ b/tests/queries/0_stateless/01515_with_global_and_with_propagation.reference @@ -13,7 +13,7 @@ WITH 1 AS x SELECT x WITH 1 AS x SELECT x -FROM +FROM ( WITH 1 AS x SELECT x @@ -22,7 +22,7 @@ WITH 1 AS x SELECT y, x -FROM +FROM ( WITH 2 AS x SELECT 2 AS y @@ -39,7 +39,7 @@ WITH 2 AS x SELECT x WITH 5 AS q1, - x AS + x AS ( WITH 5 AS q1 SELECT @@ -51,4 +51,5 @@ WITH SELECT b, a -FROM x +FROM +x diff --git a/tests/queries/0_stateless/01532_having_with_totals.sql b/tests/queries/0_stateless/01532_having_with_totals.sql index 10f55c8c135..290799c1354 100644 --- a/tests/queries/0_stateless/01532_having_with_totals.sql +++ b/tests/queries/0_stateless/01532_having_with_totals.sql @@ -3,7 +3,7 @@ create table local_t engine Log as select 1 a; SELECT '127.0.0.{1,2}'; SELECT * -FROM +FROM ( SELECT a FROM remote('127.0.0.{1,2}', currentDatabase(), local_t) @@ -17,7 +17,7 @@ WHERE a IN SELECT '127.0.0.1'; SELECT * -FROM +FROM ( SELECT a FROM remote('127.0.0.1', currentDatabase(), local_t) diff --git a/tests/queries/0_stateless/01532_min_max_with_modifiers.sql b/tests/queries/0_stateless/01532_min_max_with_modifiers.sql index 0c8651c0f01..364b110d8c1 100644 --- a/tests/queries/0_stateless/01532_min_max_with_modifiers.sql +++ b/tests/queries/0_stateless/01532_min_max_with_modifiers.sql @@ -11,7 +11,7 @@ SELECT min(x) AS lower, max(x) + 1 AS upper, upper - lower AS range -FROM +FROM ( SELECT arrayJoin([1, 2]) AS x ) diff --git a/tests/queries/0_stateless/01559_aggregate_null_for_empty_fix.sql b/tests/queries/0_stateless/01559_aggregate_null_for_empty_fix.sql index 5955dee37f8..3434a049073 100644 --- a/tests/queries/0_stateless/01559_aggregate_null_for_empty_fix.sql +++ b/tests/queries/0_stateless/01559_aggregate_null_for_empty_fix.sql @@ -1,5 +1,5 @@ SELECT MAX(aggr) -FROM +FROM ( SELECT MAX(-1) AS aggr FROM system.one @@ -11,7 +11,7 @@ FROM ); SELECT MaX(aggr) -FROM +FROM ( SELECT mAX(-1) AS aggr FROM system.one @@ -22,7 +22,7 @@ FROM WHERE 1 ); SELECT MaX(aggr) -FROM +FROM ( SELECT mAX(-1) AS aggr FROM system.one @@ -33,7 +33,7 @@ FROM WHERE 1 ); SELECT MaX(aggr) -FROM +FROM ( SELECT mAX(-1) AS aggr FROM system.one @@ -45,7 +45,7 @@ FROM ); SET aggregate_functions_null_for_empty=1; SELECT MAX(aggr) -FROM +FROM ( SELECT MAX(-1) AS aggr FROM system.one @@ -57,7 +57,7 @@ FROM ); SELECT MaX(aggr) -FROM +FROM ( SELECT mAX(-1) AS aggr FROM system.one @@ -68,7 +68,7 @@ FROM WHERE 1 ); SELECT MaX(aggr) -FROM +FROM ( SELECT mAX(-1) AS aggr FROM system.one @@ -79,7 +79,7 @@ FROM WHERE 1 ); SELECT MaX(aggr) -FROM +FROM ( SELECT mAX(-1) AS aggr FROM system.one diff --git a/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.sql b/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.sql index c5e677570ea..2c3ae41864e 100644 --- a/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.sql +++ b/tests/queries/0_stateless/01560_DateTime_and_DateTime64_comparision.sql @@ -31,7 +31,7 @@ SELECT dt64 != dt, toDateTime(dt64) != dt, dt64 != toDateTime64(dt, 1, 'UTC') -FROM +FROM ( 
WITH toDateTime('2015-05-18 07:40:11') as value SELECT diff --git a/tests/queries/0_stateless/01560_monotonicity_check_multiple_args_bug.sql b/tests/queries/0_stateless/01560_monotonicity_check_multiple_args_bug.sql index befc13be8eb..b475a5bdd3c 100644 --- a/tests/queries/0_stateless/01560_monotonicity_check_multiple_args_bug.sql +++ b/tests/queries/0_stateless/01560_monotonicity_check_multiple_args_bug.sql @@ -1,7 +1,7 @@ WITH arrayJoin(range(2)) AS delta SELECT toDate(time) + toIntervalDay(delta) AS dt -FROM +FROM ( SELECT toDateTime('2020.11.12 19:02:04') AS time ) @@ -10,7 +10,7 @@ ORDER BY dt ASC; WITH arrayJoin([0, 1]) AS delta SELECT toDate(time) + toIntervalDay(delta) AS dt -FROM +FROM ( SELECT toDateTime('2020.11.12 19:02:04') AS time ) diff --git a/tests/queries/0_stateless/01568_window_functions_distributed.reference b/tests/queries/0_stateless/01568_window_functions_distributed.reference index 29d3e5ea885..7d5a95046f7 100644 --- a/tests/queries/0_stateless/01568_window_functions_distributed.reference +++ b/tests/queries/0_stateless/01568_window_functions_distributed.reference @@ -10,7 +10,7 @@ select max(identity(dummy + 1)) over () from remote('127.0.0.{1,2}', system, one 1 1 drop table if exists t_01568; -create table t_01568 engine Log as select intDiv(number, 3) p, number from numbers(9); +create table t_01568 engine Memory as select intDiv(number, 3) p, number from numbers(9); select sum(number) over w, max(number) over w from t_01568 window w as (partition by p); 3 2 3 2 @@ -49,4 +49,12 @@ select groupArray(groupArray(number)) over (rows unbounded preceding) from remot [[0,3,6,0,3,6]] [[0,3,6,0,3,6],[1,4,7,1,4,7]] [[0,3,6,0,3,6],[1,4,7,1,4,7],[2,5,8,2,5,8]] +select groupArray(groupArray(number)) over (rows unbounded preceding) from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) settings distributed_group_by_no_merge=1; +[[0,3,6]] +[[0,3,6],[1,4,7]] +[[0,3,6],[1,4,7],[2,5,8]] +[[0,3,6]] +[[0,3,6],[1,4,7]] +[[0,3,6],[1,4,7],[2,5,8]] +select groupArray(groupArray(number)) over (rows unbounded preceding) from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) settings distributed_group_by_no_merge=2; -- { serverError 48 } drop table t_01568; diff --git a/tests/queries/0_stateless/01568_window_functions_distributed.sql b/tests/queries/0_stateless/01568_window_functions_distributed.sql index 7d9d1ea5c92..bc82e1ed6ac 100644 --- a/tests/queries/0_stateless/01568_window_functions_distributed.sql +++ b/tests/queries/0_stateless/01568_window_functions_distributed.sql @@ -9,7 +9,7 @@ select max(identity(dummy + 1)) over () from remote('127.0.0.{1,2}', system, one drop table if exists t_01568; -create table t_01568 engine Log as select intDiv(number, 3) p, number from numbers(9); +create table t_01568 engine Memory as select intDiv(number, 3) p, number from numbers(9); select sum(number) over w, max(number) over w from t_01568 window w as (partition by p); @@ -19,5 +19,7 @@ select distinct sum(number) over w, max(number) over w from remote('127.0.0.{1,2 -- window functions + aggregation w/shards select groupArray(groupArray(number)) over (rows unbounded preceding) from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3); +select groupArray(groupArray(number)) over (rows unbounded preceding) from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) settings distributed_group_by_no_merge=1; +select groupArray(groupArray(number)) over (rows unbounded preceding) from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) settings 
distributed_group_by_no_merge=2; -- { serverError 48 } drop table t_01568; diff --git a/tests/queries/0_stateless/01582_deterministic_function_with_predicate.reference b/tests/queries/0_stateless/01582_deterministic_function_with_predicate.reference index a1b4bfdcc03..b764691f5b6 100644 --- a/tests/queries/0_stateless/01582_deterministic_function_with_predicate.reference +++ b/tests/queries/0_stateless/01582_deterministic_function_with_predicate.reference @@ -1,8 +1,8 @@ SELECT count() -FROM +FROM ( SELECT number - FROM + FROM ( SELECT number FROM numbers(1000000) diff --git a/tests/queries/0_stateless/01593_functions_in_order_by.reference b/tests/queries/0_stateless/01593_functions_in_order_by.reference index 534b29af3e6..4f4ea02f95c 100644 --- a/tests/queries/0_stateless/01593_functions_in_order_by.reference +++ b/tests/queries/0_stateless/01593_functions_in_order_by.reference @@ -1,7 +1,7 @@ SELECT msg, toDateTime(intDiv(ms, 1000)) AS time -FROM +FROM ( SELECT \'hello\' AS msg, diff --git a/tests/queries/0_stateless/01593_functions_in_order_by.sql b/tests/queries/0_stateless/01593_functions_in_order_by.sql index 2d38e45fff7..5aa6aef9e12 100644 --- a/tests/queries/0_stateless/01593_functions_in_order_by.sql +++ b/tests/queries/0_stateless/01593_functions_in_order_by.sql @@ -1,6 +1,6 @@ EXPLAIN SYNTAX SELECT msg, toDateTime(intDiv(ms, 1000)) AS time -FROM +FROM ( SELECT 'hello' AS msg, diff --git a/tests/queries/0_stateless/01635_nullable_fuzz.sql b/tests/queries/0_stateless/01635_nullable_fuzz.sql index c134578b221..f49fe39d350 100644 --- a/tests/queries/0_stateless/01635_nullable_fuzz.sql +++ b/tests/queries/0_stateless/01635_nullable_fuzz.sql @@ -4,7 +4,7 @@ SELECT '', number, NULL AS k -FROM +FROM ( SELECT materialize(NULL) OR materialize(-9223372036854775808), diff --git a/tests/queries/0_stateless/01636_nullable_fuzz2.sql b/tests/queries/0_stateless/01636_nullable_fuzz2.sql index a40da51c38c..49ee7626b4e 100644 --- a/tests/queries/0_stateless/01636_nullable_fuzz2.sql +++ b/tests/queries/0_stateless/01636_nullable_fuzz2.sql @@ -13,7 +13,7 @@ insert into tracking_events_tmp select 2, '2020-07-10' from numbers(1881); insert into tracking_events_tmp select 2, '2020-07-11' from numbers(1623); SELECT EventDate -FROM +FROM ( SELECT EventDate FROM tracking_events_tmp AS t1 diff --git a/tests/queries/0_stateless/01650_expressions_merge_bug.sql b/tests/queries/0_stateless/01650_expressions_merge_bug.sql index f28f663b2d4..3cab4dbd5a6 100644 --- a/tests/queries/0_stateless/01650_expressions_merge_bug.sql +++ b/tests/queries/0_stateless/01650_expressions_merge_bug.sql @@ -7,7 +7,7 @@ SELECT 9223372036854775807 ), NULL -FROM +FROM ( SELECT DISTINCT NULL, diff --git a/tests/queries/0_stateless/01670_neighbor_lc_bug.sql b/tests/queries/0_stateless/01670_neighbor_lc_bug.sql index de9afcf1495..2d99225aa89 100644 --- a/tests/queries/0_stateless/01670_neighbor_lc_bug.sql +++ b/tests/queries/0_stateless/01670_neighbor_lc_bug.sql @@ -33,7 +33,7 @@ SELECT val_low, neighbor(val_low, -1) AS low_m1, neighbor(val_low, 1) AS low_p1 -FROM +FROM ( SELECT * FROM neighbor_test diff --git a/tests/queries/0_stateless/01710_projection_fetch.reference b/tests/queries/0_stateless/01710_projection_fetch.reference index fd20a585633..54e5bff80a9 100644 --- a/tests/queries/0_stateless/01710_projection_fetch.reference +++ b/tests/queries/0_stateless/01710_projection_fetch.reference @@ -9,3 +9,9 @@ 2 2 3 3 4 4 +0 +CREATE TABLE default.tp_2\n(\n `x` Int32,\n `y` Int32,\n PROJECTION p\n (\n SELECT \n x,\n y\n ORDER BY x\n 
),\n PROJECTION pp\n (\n SELECT \n x,\n count()\n GROUP BY x\n )\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01710_projection_fetch_default\', \'2\')\nORDER BY y\nSETTINGS min_rows_for_compact_part = 2, min_rows_for_wide_part = 4, min_bytes_for_compact_part = 16, min_bytes_for_wide_part = 32, index_granularity = 8192 +2 +CREATE TABLE default.tp_2\n(\n `x` Int32,\n `y` Int32,\n PROJECTION p\n (\n SELECT \n x,\n y\n ORDER BY x\n ),\n PROJECTION pp\n (\n SELECT \n x,\n count()\n GROUP BY x\n )\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01710_projection_fetch_default\', \'2\')\nORDER BY y\nSETTINGS min_rows_for_compact_part = 2, min_rows_for_wide_part = 4, min_bytes_for_compact_part = 16, min_bytes_for_wide_part = 32, index_granularity = 8192 +CREATE TABLE default.tp_2\n(\n `x` Int32,\n `y` Int32,\n PROJECTION p\n (\n SELECT \n x,\n y\n ORDER BY x\n ),\n PROJECTION pp\n (\n SELECT \n x,\n count()\n GROUP BY x\n )\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01710_projection_fetch_default\', \'2\')\nORDER BY y\nSETTINGS min_rows_for_compact_part = 2, min_rows_for_wide_part = 4, min_bytes_for_compact_part = 16, min_bytes_for_wide_part = 32, index_granularity = 8192 +CREATE TABLE default.tp_2\n(\n `x` Int32,\n `y` Int32,\n PROJECTION p\n (\n SELECT \n x,\n y\n ORDER BY x\n )\n)\nENGINE = ReplicatedMergeTree(\'/clickhouse/tables/01710_projection_fetch_default\', \'2\')\nORDER BY y\nSETTINGS min_rows_for_compact_part = 2, min_rows_for_wide_part = 4, min_bytes_for_compact_part = 16, min_bytes_for_wide_part = 32, index_granularity = 8192 diff --git a/tests/queries/0_stateless/01710_projection_fetch.sql b/tests/queries/0_stateless/01710_projection_fetch.sql index c12dec4cbcc..06790317808 100644 --- a/tests/queries/0_stateless/01710_projection_fetch.sql +++ b/tests/queries/0_stateless/01710_projection_fetch.sql @@ -15,6 +15,27 @@ insert into tp_1 select number, number from numbers(5); system sync replica tp_2; select * from tp_2 order by x; +-- test projection creation, materialization, clear and drop +alter table tp_1 add projection pp (select x, count() group by x); +system sync replica tp_2; +select count() from system.projection_parts where table = 'tp_2' and name = 'pp' and active; +show create table tp_2; + +-- all other three operations are mutations +set mutations_sync = 2; +alter table tp_1 materialize projection pp; +select count() from system.projection_parts where table = 'tp_2' and name = 'pp' and active; +show create table tp_2; + +alter table tp_1 clear projection pp; +system sync replica tp_2; +select * from system.projection_parts where table = 'tp_2' and name = 'pp' and active; +show create table tp_2; + +alter table tp_1 drop projection pp; +system sync replica tp_2; +select * from system.projection_parts where table = 'tp_2' and name = 'pp' and active; +show create table tp_2; + drop table if exists tp_1; drop table if exists tp_2; - diff --git a/tests/queries/0_stateless/01732_explain_syntax_union_query.reference b/tests/queries/0_stateless/01732_explain_syntax_union_query.reference index fe5eb01a7ed..ccafa916b9f 100644 --- a/tests/queries/0_stateless/01732_explain_syntax_union_query.reference +++ b/tests/queries/0_stateless/01732_explain_syntax_union_query.reference @@ -7,7 +7,7 @@ UNION ALL SELECT 1 UNION ALL SELECT 1 - +- SELECT 1 UNION ALL ( @@ -19,9 +19,9 @@ UNION ALL ) UNION ALL SELECT 1 - +- SELECT x -FROM +FROM ( SELECT 1 AS x UNION ALL @@ -35,9 +35,9 @@ FROM UNION ALL SELECT 1 ) - +- SELECT x -FROM +FROM ( SELECT 1 AS x UNION ALL @@ -45,15 
+45,15 @@ FROM UNION ALL SELECT 1 ) - +- SELECT 1 UNION DISTINCT SELECT 1 UNION DISTINCT SELECT 1 - +- SELECT 1 - +- ( SELECT 1 diff --git a/tests/queries/0_stateless/01732_explain_syntax_union_query.sql b/tests/queries/0_stateless/01732_explain_syntax_union_query.sql index 0dd1e19e765..c35021090f1 100644 --- a/tests/queries/0_stateless/01732_explain_syntax_union_query.sql +++ b/tests/queries/0_stateless/01732_explain_syntax_union_query.sql @@ -13,7 +13,7 @@ UNION ALL SELECT 1 ); -SELECT ' '; +SELECT '-'; EXPLAIN SYNTAX SELECT 1 @@ -30,7 +30,7 @@ UNION ALL SELECT 1 ); -SELECT ' '; +SELECT '-'; EXPLAIN SYNTAX SELECT x @@ -51,11 +51,11 @@ FROM ) ); -SELECT ' '; +SELECT '-'; EXPLAIN SYNTAX SELECT x -FROM +FROM ( SELECT 1 AS x UNION ALL @@ -66,7 +66,7 @@ FROM ) ); -SELECT ' '; +SELECT '-'; EXPLAIN SYNTAX SELECT 1 @@ -75,12 +75,12 @@ SELECT 1 UNION DISTINCT SELECT 1; -SELECT ' '; +SELECT '-'; EXPLAIN SYNTAX (((((((((((((((SELECT 1))))))))))))))); -SELECT ' '; +SELECT '-'; EXPLAIN SYNTAX (((((((((((((((SELECT 1 UNION DISTINCT SELECT 1))) UNION DISTINCT SELECT 1)))) UNION ALL SELECT 1)))))))); diff --git a/tests/queries/0_stateless/01754_clickhouse_format_backslash.reference b/tests/queries/0_stateless/01754_clickhouse_format_backslash.reference index 328483d9867..28519b2f92f 100644 --- a/tests/queries/0_stateless/01754_clickhouse_format_backslash.reference +++ b/tests/queries/0_stateless/01754_clickhouse_format_backslash.reference @@ -1,5 +1,5 @@ SELECT * \ -FROM \ +FROM \ ( \ SELECT 1 AS x \ UNION ALL \ diff --git a/tests/queries/0_stateless/01798_uniq_theta_sketch.reference b/tests/queries/0_stateless/01798_uniq_theta_sketch.reference index e5f3fe4911e..0455a06036f 100644 --- a/tests/queries/0_stateless/01798_uniq_theta_sketch.reference +++ b/tests/queries/0_stateless/01798_uniq_theta_sketch.reference @@ -32,13 +32,13 @@ uniqTheta decimals (101,101,101) uniqTheta remove injective SELECT uniqTheta(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqTheta(x + y) -FROM +FROM ( SELECT number % 2 AS x, @@ -46,37 +46,37 @@ FROM FROM numbers(10) ) SELECT uniqTheta(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqTheta(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqTheta(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqTheta(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqTheta(x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqTheta(x + y) -FROM +FROM ( SELECT number % 2 AS x, @@ -84,25 +84,25 @@ FROM FROM numbers(10) ) SELECT uniqTheta(-x) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqTheta(bitNot(x)) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqTheta(bitNot(-x)) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) ) SELECT uniqTheta(-bitNot(-x)) -FROM +FROM ( SELECT number % 2 AS x FROM numbers(10) diff --git a/tests/queries/0_stateless/01852_hints_enum_name.sh b/tests/queries/0_stateless/01852_hints_enum_name.sh index bffde6e6c8c..1e7d09602e9 100755 --- a/tests/queries/0_stateless/01852_hints_enum_name.sh +++ b/tests/queries/0_stateless/01852_hints_enum_name.sh @@ -4,5 +4,5 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . 
"$CURDIR"/../shell_config.sh -$CLICKHOUSE_CLIENT --query="SELECT CAST('Helo', 'Enum(\'Hello\' = 1, \'World\' = 2)')" 2>&1 | grep -q "may be you meant: \['Hello'\]" && echo 'OK' || echo 'FAIL' +$CLICKHOUSE_CLIENT --query="SELECT CAST('Helo' AS Enum('Hello' = 1, 'World' = 2))" 2>&1 | grep -q -F "maybe you meant: ['Hello']" && echo 'OK' || echo 'FAIL' diff --git a/tests/queries/0_stateless/01855_jit_comparison_constant_result.reference b/tests/queries/0_stateless/01855_jit_comparison_constant_result.reference index a9e2f17562a..e97edac16d6 100644 --- a/tests/queries/0_stateless/01855_jit_comparison_constant_result.reference +++ b/tests/queries/0_stateless/01855_jit_comparison_constant_result.reference @@ -1,3 +1,11 @@ +ComparisionOperator column with same column +1 +1 +1 +1 +1 +1 +ComparisionOperator column with alias on same column 1 1 1 diff --git a/tests/queries/0_stateless/01855_jit_comparison_constant_result.sql b/tests/queries/0_stateless/01855_jit_comparison_constant_result.sql index b8d06e218e0..51cf9aa1d17 100644 --- a/tests/queries/0_stateless/01855_jit_comparison_constant_result.sql +++ b/tests/queries/0_stateless/01855_jit_comparison_constant_result.sql @@ -1,6 +1,8 @@ SET compile_expressions = 1; SET min_count_to_compile_expression = 0; +SELECT 'ComparisionOperator column with same column'; + DROP TABLE IF EXISTS test_table; CREATE TABLE test_table (a UInt64) ENGINE = MergeTree() ORDER BY tuple(); INSERT INTO test_table VALUES (1); @@ -13,3 +15,22 @@ SELECT test_table.a FROM test_table ORDER BY (test_table.a <= test_table.a) + 1; SELECT test_table.a FROM test_table ORDER BY (test_table.a == test_table.a) + 1; SELECT test_table.a FROM test_table ORDER BY (test_table.a != test_table.a) + 1; + +DROP TABLE test_table; + +SELECT 'ComparisionOperator column with alias on same column'; + +DROP TABLE IF EXISTS test_table; +CREATE TABLE test_table (a UInt64, b ALIAS a, c ALIAS b) ENGINE = MergeTree() ORDER BY tuple(); +INSERT INTO test_table VALUES (1); + +SELECT test_table.a FROM test_table ORDER BY (test_table.a > test_table.b) + 1 AND (test_table.a > test_table.c) + 1; +SELECT test_table.a FROM test_table ORDER BY (test_table.a >= test_table.b) + 1 AND (test_table.a >= test_table.c) + 1; + +SELECT test_table.a FROM test_table ORDER BY (test_table.a < test_table.b) + 1 AND (test_table.a < test_table.c) + 1; +SELECT test_table.a FROM test_table ORDER BY (test_table.a <= test_table.b) + 1 AND (test_table.a <= test_table.c) + 1; + +SELECT test_table.a FROM test_table ORDER BY (test_table.a == test_table.b) + 1 AND (test_table.a == test_table.c) + 1; +SELECT test_table.a FROM test_table ORDER BY (test_table.a != test_table.b) + 1 AND (test_table.a != test_table.c) + 1; + +DROP TABLE test_table; diff --git a/tests/queries/0_stateless/01860_Distributed__shard_num_GROUP_BY.reference b/tests/queries/0_stateless/01860_Distributed__shard_num_GROUP_BY.reference new file mode 100644 index 00000000000..fa0301316ae --- /dev/null +++ b/tests/queries/0_stateless/01860_Distributed__shard_num_GROUP_BY.reference @@ -0,0 +1,18 @@ +1 1 +2 1 +1 1 +2 1 +2 1 +3 1 +2 1 +3 1 +1 1 +2 1 +1 1 +2 1 +1 +2 +1 +2 +1 1 +2 1 diff --git a/tests/queries/0_stateless/01860_Distributed__shard_num_GROUP_BY.sql b/tests/queries/0_stateless/01860_Distributed__shard_num_GROUP_BY.sql new file mode 100644 index 00000000000..91215fd8ee6 --- /dev/null +++ b/tests/queries/0_stateless/01860_Distributed__shard_num_GROUP_BY.sql @@ -0,0 +1,16 @@ +-- GROUP BY _shard_num +SELECT _shard_num, count() FROM remote('127.0.0.{1,2}', system.one) 
GROUP BY _shard_num ORDER BY _shard_num; +SELECT _shard_num s, count() FROM remote('127.0.0.{1,2}', system.one) GROUP BY _shard_num ORDER BY _shard_num; + +SELECT _shard_num + 1, count() FROM remote('127.0.0.{1,2}', system.one) GROUP BY _shard_num + 1 ORDER BY _shard_num + 1; +SELECT _shard_num + 1 s, count() FROM remote('127.0.0.{1,2}', system.one) GROUP BY _shard_num + 1 ORDER BY _shard_num + 1; + +SELECT _shard_num + dummy, count() FROM remote('127.0.0.{1,2}', system.one) GROUP BY _shard_num + dummy ORDER BY _shard_num + dummy; +SELECT _shard_num + dummy s, count() FROM remote('127.0.0.{1,2}', system.one) GROUP BY _shard_num + dummy ORDER BY _shard_num + dummy; + +SELECT _shard_num FROM remote('127.0.0.{1,2}', system.one) ORDER BY _shard_num; +SELECT _shard_num s FROM remote('127.0.0.{1,2}', system.one) ORDER BY _shard_num; + +SELECT _shard_num s, count() FROM remote('127.0.0.{1,2}', system.one) GROUP BY s order by s; + +select materialize(_shard_num), * from remote('127.{1,2}', system.one) limit 1 by dummy format Null; diff --git a/tests/queries/0_stateless/01866_datetime64_cmp_with_constant.reference b/tests/queries/0_stateless/01866_datetime64_cmp_with_constant.reference new file mode 100644 index 00000000000..db516fa83d4 --- /dev/null +++ b/tests/queries/0_stateless/01866_datetime64_cmp_with_constant.reference @@ -0,0 +1,12 @@ +dt64 <= const dt +dt64 <= dt +dt <= const dt64 +dt <= dt64 +dt64 = const dt +dt64 = dt +dt = const dt64 +dt = dt64 +dt64 >= const dt +dt64 >= dt +dt >= const dt64 +dt >= dt64 diff --git a/tests/queries/0_stateless/01866_datetime64_cmp_with_constant.sql b/tests/queries/0_stateless/01866_datetime64_cmp_with_constant.sql new file mode 100644 index 00000000000..e6782656887 --- /dev/null +++ b/tests/queries/0_stateless/01866_datetime64_cmp_with_constant.sql @@ -0,0 +1,40 @@ +CREATE TABLE dt64test +( + `dt64_column` DateTime64(3), + `dt_column` DateTime DEFAULT toDateTime(dt64_column) +) +ENGINE = MergeTree +PARTITION BY toYYYYMM(dt64_column) +ORDER BY dt64_column; + +INSERT INTO dt64test (`dt64_column`) VALUES ('2020-01-13 13:37:00'); + +SELECT 'dt64 < const dt' FROM dt64test WHERE dt64_column < toDateTime('2020-01-13 13:37:00'); +SELECT 'dt64 < dt' FROM dt64test WHERE dt64_column < materialize(toDateTime('2020-01-13 13:37:00')); +SELECT 'dt < const dt64' FROM dt64test WHERE dt_column < toDateTime64('2020-01-13 13:37:00', 3); +SELECT 'dt < dt64' FROM dt64test WHERE dt_column < materialize(toDateTime64('2020-01-13 13:37:00', 3)); + +SELECT 'dt64 <= const dt' FROM dt64test WHERE dt64_column <= toDateTime('2020-01-13 13:37:00'); +SELECT 'dt64 <= dt' FROM dt64test WHERE dt64_column <= materialize(toDateTime('2020-01-13 13:37:00')); +SELECT 'dt <= const dt64' FROM dt64test WHERE dt_column <= toDateTime64('2020-01-13 13:37:00', 3); +SELECT 'dt <= dt64' FROM dt64test WHERE dt_column <= materialize(toDateTime64('2020-01-13 13:37:00', 3)); + +SELECT 'dt64 = const dt' FROM dt64test WHERE dt64_column = toDateTime('2020-01-13 13:37:00'); +SELECT 'dt64 = dt' FROM dt64test WHERE dt64_column = materialize(toDateTime('2020-01-13 13:37:00')); +SELECT 'dt = const dt64' FROM dt64test WHERE dt_column = toDateTime64('2020-01-13 13:37:00', 3); +SELECT 'dt = dt64' FROM dt64test WHERE dt_column = materialize(toDateTime64('2020-01-13 13:37:00', 3)); + +SELECT 'dt64 >= const dt' FROM dt64test WHERE dt64_column >= toDateTime('2020-01-13 13:37:00'); +SELECT 'dt64 >= dt' FROM dt64test WHERE dt64_column >= materialize(toDateTime('2020-01-13 13:37:00')); +SELECT 'dt >= const dt64' FROM dt64test 
WHERE dt_column >= toDateTime64('2020-01-13 13:37:00', 3); +SELECT 'dt >= dt64' FROM dt64test WHERE dt_column >= materialize(toDateTime64('2020-01-13 13:37:00', 3)); + +SELECT 'dt64 > const dt' FROM dt64test WHERE dt64_column > toDateTime('2020-01-13 13:37:00'); +SELECT 'dt64 > dt' FROM dt64test WHERE dt64_column > materialize(toDateTime('2020-01-13 13:37:00')); +SELECT 'dt > const dt64' FROM dt64test WHERE dt_column > toDateTime64('2020-01-13 13:37:00', 3); +SELECT 'dt > dt64' FROM dt64test WHERE dt_column > materialize(toDateTime64('2020-01-13 13:37:00', 3)); + +SELECT 'dt64 != const dt' FROM dt64test WHERE dt64_column != toDateTime('2020-01-13 13:37:00'); +SELECT 'dt64 != dt' FROM dt64test WHERE dt64_column != materialize(toDateTime('2020-01-13 13:37:00')); +SELECT 'dt != const dt64' FROM dt64test WHERE dt_column != toDateTime64('2020-01-13 13:37:00', 3); +SELECT 'dt != dt64' FROM dt64test WHERE dt_column != materialize(toDateTime64('2020-01-13 13:37:00', 3)); diff --git a/tests/queries/0_stateless/01867_fix_storage_memory_mutation.reference b/tests/queries/0_stateless/01867_fix_storage_memory_mutation.reference new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/queries/0_stateless/01867_fix_storage_memory_mutation.sql b/tests/queries/0_stateless/01867_fix_storage_memory_mutation.sql new file mode 100644 index 00000000000..4cb80036d73 --- /dev/null +++ b/tests/queries/0_stateless/01867_fix_storage_memory_mutation.sql @@ -0,0 +1,32 @@ +DROP TABLE IF EXISTS mem_test; + +CREATE TABLE mem_test +( + `a` Int64, + `b` Int64 +) +ENGINE = Memory; + +SET max_block_size = 3; + +INSERT INTO mem_test SELECT + number, + number +FROM numbers(100); + +ALTER TABLE mem_test + UPDATE a = 0 WHERE b = 99; +ALTER TABLE mem_test + UPDATE a = 0 WHERE b = 99; +ALTER TABLE mem_test + UPDATE a = 0 WHERE b = 99; +ALTER TABLE mem_test + UPDATE a = 0 WHERE b = 99; +ALTER TABLE mem_test + UPDATE a = 0 WHERE b = 99; + +SELECT * +FROM mem_test +FORMAT Null; + +DROP TABLE mem_test; diff --git a/tests/queries/0_stateless/01871_merge_tree_compile_expressions.reference b/tests/queries/0_stateless/01871_merge_tree_compile_expressions.reference new file mode 100644 index 00000000000..a6905f8ba44 --- /dev/null +++ b/tests/queries/0_stateless/01871_merge_tree_compile_expressions.reference @@ -0,0 +1 @@ +999 diff --git a/tests/queries/0_stateless/01871_merge_tree_compile_expressions.sql b/tests/queries/0_stateless/01871_merge_tree_compile_expressions.sql new file mode 100644 index 00000000000..f8cad868187 --- /dev/null +++ b/tests/queries/0_stateless/01871_merge_tree_compile_expressions.sql @@ -0,0 +1,17 @@ +DROP TABLE IF EXISTS data_01875_1; +DROP TABLE IF EXISTS data_01875_2; +DROP TABLE IF EXISTS data_01875_3; + +SET compile_expressions=true; + +-- CREATE TABLE will use global profile with default min_count_to_compile_expression=3 +-- so retry 3 times +CREATE TABLE data_01875_1 Engine=MergeTree ORDER BY number PARTITION BY bitShiftRight(number,8) AS SELECT * FROM numbers(16384); +CREATE TABLE data_01875_2 Engine=MergeTree ORDER BY number PARTITION BY bitShiftRight(number,8) AS SELECT * FROM numbers(16384); +CREATE TABLE data_01875_3 Engine=MergeTree ORDER BY number PARTITION BY bitShiftRight(number,8) AS SELECT * FROM numbers(16384); + +SELECT number FROM data_01875_3 WHERE number = 999; + +DROP TABLE data_01875_1; +DROP TABLE data_01875_2; +DROP TABLE data_01875_3; diff --git a/tests/queries/0_stateless/01874_select_from_trailing_whitespaces.reference 
b/tests/queries/0_stateless/01874_select_from_trailing_whitespaces.reference new file mode 100644 index 00000000000..a52505659d1 --- /dev/null +++ b/tests/queries/0_stateless/01874_select_from_trailing_whitespaces.reference @@ -0,0 +1,114 @@ +# select * from system.one a +SELECT * +FROM system.one AS a +# /* oneline */ select * from system.one a +SELECT * FROM system.one AS a +# select * from (select * from system.one) b, system.one a +SELECT * +FROM +( + SELECT * + FROM system.one +) AS b, system.one AS a +# /* oneline */ select * from (select * from system.one) b, system.one a +SELECT * FROM (SELECT * FROM system.one) AS b, system.one AS a +# select * from system.one a, (select * from system.one) b, system.one c +SELECT * +FROM system.one AS a, +( + SELECT * + FROM system.one +) AS b, system.one AS c +# /* oneline */ select * from system.one a, (select * from system.one) b, system.one c +SELECT * FROM system.one AS a, (SELECT * FROM system.one) AS b, system.one AS c +# select * from system.one a, (select * from system.one) b, system.one c, (select * from system.one) d +SELECT * +FROM system.one AS a, +( + SELECT * + FROM system.one +) AS b, system.one AS c, +( + SELECT * + FROM system.one +) AS d +# /* oneline */ select * from system.one a, (select * from system.one) b, system.one c, (select * from system.one) d +SELECT * FROM system.one AS a, (SELECT * FROM system.one) AS b, system.one AS c, (SELECT * FROM system.one) AS d +# select * from system.one union all select * from system.one +SELECT * +FROM system.one +UNION ALL +SELECT * +FROM system.one +# /* oneline */ select * from system.one union all select * from system.one +SELECT * FROM system.one UNION ALL SELECT * FROM system.one +# select * from system.one union all (select * from system.one) +SELECT * +FROM system.one +UNION ALL +SELECT * +FROM system.one +# /* oneline */ select * from system.one union all (select * from system.one) +SELECT * FROM system.one UNION ALL SELECT * FROM system.one +# select 1 union all (select 1 union distinct select 1) +SELECT 1 +UNION ALL +( + SELECT 1 + UNION DISTINCT + SELECT 1 +) +# /* oneline */ select 1 union all (select 1 union distinct select 1) +SELECT 1 UNION ALL (SELECT 1 UNION DISTINCT SELECT 1) +# select * from system.one array join arr as row +SELECT * +FROM system.one +ARRAY JOIN arr AS row +# /* oneline */ select * from system.one array join arr as row +SELECT * FROM system.one ARRAY JOIN arr AS row +# select 1 in 1 +SELECT 1 IN (1) +# /* oneline */ select 1 in 1 +SELECT 1 IN (1) +# select 1 in (select 1) +SELECT 1 IN ( + SELECT 1 + ) +# /* oneline */ select 1 in (select 1) +SELECT 1 IN (SELECT 1) +# select 1 in f(1) +SELECT 1 IN f(1) +# /* oneline */ select 1 in f(1) +SELECT 1 IN f(1) +# select 1 in ((select 1) as sub) +SELECT 1 IN (( + SELECT 1 + ) AS sub) +# /* oneline */ select 1 in ((select 1) as sub) +SELECT 1 IN ((SELECT 1) AS sub) +# with it as ( select * from numbers(1) ) select it.number, i.number from it as i +WITH it AS + ( + SELECT * + FROM numbers(1) + ) +SELECT + it.number, + i.number +FROM it AS i +# /* oneline */ with it as ( select * from numbers(1) ) select it.number, i.number from it as i +WITH it AS (SELECT * FROM numbers(1)) SELECT it.number, i.number FROM it AS i +# SELECT x FROM ( SELECT 1 AS x UNION ALL ( SELECT 1 UNION ALL SELECT 1)) +SELECT x +FROM +( + SELECT 1 AS x + UNION ALL + ( + SELECT 1 + UNION ALL + SELECT 1 + ) +) +# /* oneline */ SELECT x FROM ( SELECT 1 AS x UNION ALL ( SELECT 1 UNION ALL SELECT 1)) +SELECT x FROM (SELECT 1 AS x UNION ALL (SELECT 1 
UNION ALL SELECT 1)) diff --git a/tests/queries/0_stateless/01874_select_from_trailing_whitespaces.sh b/tests/queries/0_stateless/01874_select_from_trailing_whitespaces.sh new file mode 100755 index 00000000000..f416ac33d69 --- /dev/null +++ b/tests/queries/0_stateless/01874_select_from_trailing_whitespaces.sh @@ -0,0 +1,30 @@ +#!/usr/bin/env bash + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh + +set -e + +queries=( + "select * from system.one a" + "select * from (select * from system.one) b, system.one a" + "select * from system.one a, (select * from system.one) b, system.one c" + "select * from system.one a, (select * from system.one) b, system.one c, (select * from system.one) d" + "select * from system.one union all select * from system.one" + "select * from system.one union all (select * from system.one)" + "select 1 union all (select 1 union distinct select 1)" + "select * from system.one array join arr as row" + "select 1 in 1" + "select 1 in (select 1)" + "select 1 in f(1)" + "select 1 in ((select 1) as sub)" + "with it as ( select * from numbers(1) ) select it.number, i.number from it as i" + "SELECT x FROM ( SELECT 1 AS x UNION ALL ( SELECT 1 UNION ALL SELECT 1))" +) +for q in "${queries[@]}"; do + echo "# $q" + $CLICKHOUSE_FORMAT <<<"$q" + echo "# /* oneline */ $q" + $CLICKHOUSE_FORMAT --oneline <<<"$q" +done diff --git a/tests/queries/1_stateful/00031_array_enumerate_uniq.sql b/tests/queries/1_stateful/00031_array_enumerate_uniq.sql index 1898fc20579..f0392a17024 100644 --- a/tests/queries/1_stateful/00031_array_enumerate_uniq.sql +++ b/tests/queries/1_stateful/00031_array_enumerate_uniq.sql @@ -1,5 +1,5 @@ SELECT UserID, arrayEnumerateUniq(groupArray(SearchPhrase)) AS arr -FROM +FROM ( SELECT UserID, SearchPhrase FROM test.hits diff --git a/tests/queries/1_stateful/00063_loyalty_joins.sql b/tests/queries/1_stateful/00063_loyalty_joins.sql index 7713c65838c..1e7011ea909 100644 --- a/tests/queries/1_stateful/00063_loyalty_joins.sql +++ b/tests/queries/1_stateful/00063_loyalty_joins.sql @@ -23,7 +23,7 @@ ORDER BY loyalty ASC; SELECT loyalty, count() -FROM +FROM ( SELECT UserID FROM test.hits @@ -46,12 +46,12 @@ ORDER BY loyalty ASC; SELECT loyalty, count() -FROM +FROM ( SELECT loyalty, UserID - FROM + FROM ( SELECT UserID FROM test.hits @@ -81,7 +81,7 @@ FROM test.hits ANY INNER JOIN SELECT UserID, toInt8(if(yandex > google, yandex / (yandex + google), -google / (yandex + google)) * 10) AS loyalty - FROM + FROM ( SELECT UserID, diff --git a/tests/queries/skip_list.json b/tests/queries/skip_list.json index cba09040311..cbe7d4868be 100644 --- a/tests/queries/skip_list.json +++ b/tests/queries/skip_list.json @@ -572,6 +572,7 @@ "01153_attach_mv_uuid", "01152_cross_replication", "01154_move_partition_long", + "01155_rename_move_materialized_view", "01185_create_or_replace_table", "01190_full_attach_syntax", "01191_rename_dictionary", diff --git a/tests/testflows/ldap/role_mapping/tests/user_dn_detection.py b/tests/testflows/ldap/role_mapping/tests/user_dn_detection.py index b1a74d6e6b5..147da8a5dcc 100644 --- a/tests/testflows/ldap/role_mapping/tests/user_dn_detection.py +++ b/tests/testflows/ldap/role_mapping/tests/user_dn_detection.py @@ -33,7 +33,7 @@ def check_config(self, entries, valid=True, ldap_server="openldap1", user="user1 @TestScenario @Tags("config") @Requirements( - # FIXME + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_BaseDN("1.0") ) def config_invalid_base_dn(self): 
"""Check when invalid `base_dn` is specified in the user_dn_detection section. @@ -62,7 +62,7 @@ def config_invalid_base_dn(self): @TestScenario @Tags("config") @Requirements( - # FIXME + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_BaseDN("1.0") ) def config_empty_base_dn(self): """Check when empty `base_dn` is specified in the user_dn_detection section. @@ -90,7 +90,7 @@ def config_empty_base_dn(self): @TestScenario @Tags("config") @Requirements( - # FIXME + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_BaseDN("1.0") ) def config_missing_base_dn(self): """Check when missing `base_dn` is specified in the user_dn_detection section. @@ -145,7 +145,7 @@ def config_invalid_search_filter(self): @TestScenario @Tags("config") @Requirements( - # FIXME + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_SearchFilter("1.0") ) def config_missing_search_filter(self): """Check when missing `search_filter` is specified in the user_dn_detection section. @@ -172,7 +172,7 @@ def config_missing_search_filter(self): @TestScenario @Tags("config") @Requirements( - # FIXME + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_SearchFilter("1.0") ) def config_empty_search_filter(self): """Check when empty `search_filter` is specified in the user_dn_detection section. @@ -200,7 +200,8 @@ def config_empty_search_filter(self): @TestScenario @Tags("config") @Requirements( - # FIXME + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_BaseDN("1.0"), + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_SearchFilter("1.0") ) def config_valid(self): """Check valid config with valid user_dn_detection section. @@ -228,7 +229,8 @@ def config_valid(self): @TestScenario @Tags("config") @Requirements( - # FIXME + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_BaseDN("1.0"), + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_SearchFilter("1.0") ) def config_valid_tls_connection(self): """Check valid config with valid user_dn_detection section when @@ -256,6 +258,9 @@ def config_valid_tls_connection(self): check_config(entries=entries, valid=True, ldap_server="openldap2", user="user2", password="user2") @TestOutline(Scenario) +@Requirements( + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection_Scope("1.0") +) @Examples("scope base_dn", [ ("base", "cn=user1,ou=users,dc=company,dc=com"), ("one_level","ou=users,dc=company,dc=com"), @@ -399,9 +404,6 @@ def setup_different_bind_dn_and_user_dn(self, uid, map_by, user_dn_detection): role_mappings=role_mappings, restart=True) @TestScenario -@Requirements( - # FIXME: -) def map_roles_by_user_dn_when_base_dn_and_user_dn_are_different(self): """Check the case when we map roles using user_dn then the first user has uid of second user and second user @@ -429,9 +431,6 @@ def map_roles_by_user_dn_when_base_dn_and_user_dn_are_different(self): assert f"GRANT role0_{uid} TO second_user" in r.output, error() @TestScenario -@Requirements( - # FIXME: -) def map_roles_by_bind_dn_when_base_dn_and_user_dn_are_different(self): """Check the case when we map roles by bind_dn when bind_dn and user_dn are different. @@ -457,7 +456,7 @@ def map_roles_by_bind_dn_when_base_dn_and_user_dn_are_different(self): @TestFeature @Name("user dn detection") @Requirements( - #RQ_SRS_014_LDAP_UserDNDetection("1.0") + RQ_SRS_014_LDAP_RoleMapping_Configuration_Server_UserDNDetection("1.0") ) def feature(self): """Check LDAP user DN detection. 
diff --git a/tests/testflows/regression.py b/tests/testflows/regression.py
index 21e65ef73c5..2547463a91d 100755
--- a/tests/testflows/regression.py
+++ b/tests/testflows/regression.py
@@ -29,7 +29,7 @@ def regression(self, local, clickhouse_binary_path, stress=None, parallel=None):
         run_scenario(pool, tasks, Feature(test=load("map_type.regression", "regression")), args)
         run_scenario(pool, tasks, Feature(test=load("window_functions.regression", "regression")), args)
         run_scenario(pool, tasks, Feature(test=load("datetime64_extended_range.regression", "regression")), args)
-        # run_scenario(pool, tasks, Feature(test=load("kerberos.regression", "regression")), args)
+        #run_scenario(pool, tasks, Feature(test=load("kerberos.regression", "regression")), args)
     finally:
         join(tasks)
diff --git a/utils/CMakeLists.txt b/utils/CMakeLists.txt
index 3da8612e6c1..bd6453e406b 100644
--- a/utils/CMakeLists.txt
+++ b/utils/CMakeLists.txt
@@ -39,7 +39,7 @@ if (NOT DEFINED ENABLE_UTILS OR ENABLE_UTILS)
     endif ()
 
     # memcpy_jart.S contains position dependent code
-    if (NOT CMAKE_POSITION_INDEPENDENT_CODE AND NOT OS_DARWIN AND NOT OS_SUNOS)
+    if (NOT CMAKE_POSITION_INDEPENDENT_CODE AND NOT OS_DARWIN AND NOT OS_SUNOS AND NOT ARCH_AARCH64)
         add_subdirectory (memcpy-bench)
     endif ()
 endif ()
diff --git a/utils/list-versions/version_date.tsv b/utils/list-versions/version_date.tsv
index 1b09f8c167a..a2773b22cec 100644
--- a/utils/list-versions/version_date.tsv
+++ b/utils/list-versions/version_date.tsv
@@ -1,3 +1,5 @@
+v21.5.5.12-stable	2021-05-20
+v21.4.7.3-stable	2021-05-19
 v21.4.6.55-stable	2021-04-30
 v21.4.5.46-stable	2021-04-24
 v21.4.4.30-stable	2021-04-16
diff --git a/website/css/blog.css b/website/css/blog.css
index 064dbf3d975..089856b8e00 100644
--- a/website/css/blog.css
+++ b/website/css/blog.css
@@ -11,14 +11,6 @@ body.blog .dropdown-item:focus {
     background: #eee;
 }
 
-.comment-even {
-    background: #fff;
-}
-
-.comment-odd, .comments-bg {
-    background: #f8f9fa;
-}
-
 @media (prefers-color-scheme: dark) {
     body.blog .dropdown-item {
         color: #fff !important;
@@ -33,12 +25,4 @@ body.blog .dropdown-item:focus {
     .blog .social-icon {
         background: #444451;
     }
-
-    .comment-even {
-        background: #111;
-    }
-
-    .comment-odd, .comments-bg {
-        background: #333;
-    }
 }
diff --git a/website/js/embedd.min.js b/website/js/embedd.min.js
deleted file mode 100644
index bda09649faa..00000000000
--- a/website/js/embedd.min.js
+++ /dev/null
Blob]"===E.call(e)}function h(e){return"[object Function]"===E.call(e)}function m(e){return f(e)&&h(e.pipe)}function v(e){return"undefined"!=typeof URLSearchParams&&e instanceof URLSearchParams}function y(e){return e.replace(/^\s*/,"").replace(/\s*$/,"")}function g(){return("undefined"==typeof navigator||"ReactNative"!==navigator.product)&&("undefined"!=typeof window&&"undefined"!=typeof document)}function b(e,t){if(null!==e&&void 0!==e)if("object"!=typeof e&&(e=[e]),r(e))for(var n=0,o=e.length;n1)for(var n=1;n=200&&e<300}};c.headers={common:{Accept:"application/json, text/plain, */*"}},o.forEach(["delete","get","head"],function(e){c.headers[e]={}}),o.forEach(["post","put","patch"],function(e){c.headers[e]=o.merge(u)}),e.exports=c}).call(t,n(2))},function(e,t,n){(function(e,n,r,o){!function(e,n){n(t)}(0,function(t){"use strict";function i(e,t){t|=0;for(var n=Math.max(e.length-t,0),r=Array(n),o=0;o-1&&e%1==0&&e<=Ot}function w(e){return null!=e&&b(e.length)&&!g(e)}function x(){}function j(e){return function(){if(null!==e){var t=e;e=null,t.apply(this,arguments)}}}function S(e,t){for(var n=-1,r=Array(e);++n-1&&e%1==0&&eo?0:o+t),n=n>o?o:n,n<0&&(n+=o),o=t>n?0:n-t>>>0,t>>>=0;for(var i=Array(o);++r=r?e:ee(e,t,n)}function ne(e,t){for(var n=e.length;n--&&G(t,e[n],0)>-1;);return n}function re(e,t){for(var n=-1,r=e.length;++n-1;);return n}function oe(e){return e.split("")}function ie(e){return xn.test(e)}function ue(e){return e.match(_n)||[]}function ce(e){return ie(e)?ue(e):oe(e)}function ae(e){return null==e?"":Z(e)}function se(e,t,n){if((e=ae(e))&&(n||void 0===t))return e.replace(Cn,"");if(!e||!(t=Z(t)))return e;var r=ce(e),o=ce(t);return te(r,re(r,o),ne(r,o)+1).join("")}function fe(e){return e=e.toString().replace(In,""),e=e.match(Rn)[2].replace(" ",""),e=e?e.split(Fn):[],e=e.map(function(e){return se(e.replace(Un,""))})}function le(e,t){var n={};X(e,function(e,t){function r(t,n){var r=K(o,function(e){return t[e]});r.push(n),d(e).apply(null,r)}var o,i=p(e),u=!i&&1===e.length||i&&0===e.length;if(Bt(e))o=e.slice(0,-1),e=e[e.length-1],n[t]=o.concat(o.length>0?r:e);else if(u)n[t]=e;else{if(o=fe(e),0===e.length&&!i&&0===o.length)throw new Error("autoInject task functions require explicit parameters.");i||o.pop(),n[t]=o.concat(r)}}),vn(n,t)}function pe(){this.head=this.tail=null,this.length=0}function de(e,t){e.length=1,e.head=e.tail=t}function he(e,t,n){function r(e,t,n){if(null!=n&&"function"!=typeof n)throw new Error("task callback must be a function");if(f.started=!0,Bt(e)||(e=[e]),0===e.length&&f.idle())return ft(function(){f.drain()});for(var r=0,o=e.length;r0&&c.splice(i,1),o.callback.apply(o,arguments),null!=t&&f.error(t,o.data)}u<=f.concurrency-f.buffer&&f.unsaturated(),f.idle()&&f.drain(),f.process()}}if(null==t)t=1;else if(0===t)throw new Error("Concurrency must not be zero");var i=d(e),u=0,c=[],a=!1,s=!1,f={_tasks:new pe,concurrency:t,payload:n,saturated:x,unsaturated:x,buffer:t/4,empty:x,drain:x,error:x,started:!1,paused:!1,push:function(e,t){r(e,!1,t)},kill:function(){f.drain=x,f._tasks.empty()},unshift:function(e,t){r(e,!0,t)},remove:function(e){f._tasks.remove(e)},process:function(){if(!s){for(s=!0;!f.paused&&u2&&(o=i(arguments,1)),r[t]=o,n(e)})},function(e){n(e,r)})}function Pe(e,t){Ne(sn,e,t)}function De(e,t,n){Ne(B(t),e,n)}function He(e,t){if(t=j(t||x),!Bt(e))return t(new TypeError("First argument to race must be an array of functions"));if(!e.length)return t();for(var n=0,r=e.length;nr?1:0}var o=d(t);fn(e,function(e,t){o(e,function(n,r){if(n)return 
t(n);t(null,{value:e,criteria:r})})},function(e,t){if(e)return n(e);n(null,K(t.sort(r),Ce("value")))})}function Ke(e,t,n){var r=d(e);return ct(function(o,i){function u(){var t=e.name||"anonymous",r=new Error('Callback function "'+t+'" timed out.');r.code="ETIMEDOUT",n&&(r.info=n),a=!0,i(r)}var c,a=!1;o.push(function(){a||(i.apply(null,arguments),clearTimeout(c))}),c=setTimeout(u,t),r.apply(null,o)})}function Ye(e,t,n,r){for(var o=-1,i=gr(yr((t-e)/(n||1)),0),u=Array(i);i--;)u[r?i:++o]=e,e+=n;return u}function Ze(e,t,n,r){var o=d(n);pn(Ye(0,e,1),t,o,r)}function et(e,t,n,r){arguments.length<=3&&(r=n,n=t,t=Bt(e)?[]:{}),r=j(r||x);var o=d(n);sn(e,function(e,n,r){o(t,e,n,r)},function(e){r(e,t)})}function tt(e,t){var n,r=null;t=t||x,Wn(e,function(e,t){d(e)(function(e,o){n=arguments.length>2?i(arguments,1):o,r=e,t(!e)})},function(){t(r,n)})}function nt(e){return function(){return(e.unmemoized||e).apply(null,arguments)}}function rt(e,t,n){n=M(n||x);var r=d(t);if(!e())return n(null);var o=function(t){if(t)return n(t);if(e())return r(o);var u=i(arguments,1);n.apply(null,[null].concat(u))};r(o)}function ot(e,t,n){rt(function(){return!e.apply(this,arguments)},t,n)}var it,ut=function(e){var t=i(arguments,1);return function(){var n=i(arguments);return e.apply(null,t.concat(n))}},ct=function(e){return function(){var t=i(arguments),n=t.pop();e.call(this,t,n)}},at="function"==typeof e&&e,st="object"==typeof n&&"function"==typeof n.nextTick;it=at?e:st?n.nextTick:c;var ft=a(it),lt="function"==typeof Symbol,pt="object"==typeof r&&r&&r.Object===Object&&r,dt="object"==typeof self&&self&&self.Object===Object&&self,ht=pt||dt||Function("return this")(),mt=ht.Symbol,vt=Object.prototype,yt=vt.hasOwnProperty,gt=vt.toString,bt=mt?mt.toStringTag:void 0,wt=Object.prototype,xt=wt.toString,jt="[object Null]",St="[object Undefined]",Et=mt?mt.toStringTag:void 0,kt="[object AsyncFunction]",Tt="[object Function]",Lt="[object GeneratorFunction]",At="[object Proxy]",Ot=9007199254740991,_t={},Ct="function"==typeof Symbol&&Symbol.iterator,Rt=function(e){return Ct&&e[Ct]&&e[Ct]()},Ft="[object Arguments]",Ut=Object.prototype,It=Ut.hasOwnProperty,qt=Ut.propertyIsEnumerable,Mt=k(function(){return arguments}())?k:function(e){return E(e)&&It.call(e,"callee")&&!qt.call(e,"callee")},Bt=Array.isArray,Nt="object"==typeof t&&t&&!t.nodeType&&t,Pt=Nt&&"object"==typeof o&&o&&!o.nodeType&&o,Dt=Pt&&Pt.exports===Nt,Ht=Dt?ht.Buffer:void 0,Vt=Ht?Ht.isBuffer:void 0,zt=Vt||T,$t=9007199254740991,Xt=/^(?:0|[1-9]\d*)$/,Qt={};Qt["[object Float32Array]"]=Qt["[object Float64Array]"]=Qt["[object Int8Array]"]=Qt["[object Int16Array]"]=Qt["[object Int32Array]"]=Qt["[object Uint8Array]"]=Qt["[object Uint8ClampedArray]"]=Qt["[object Uint16Array]"]=Qt["[object Uint32Array]"]=!0,Qt["[object Arguments]"]=Qt["[object Array]"]=Qt["[object ArrayBuffer]"]=Qt["[object Boolean]"]=Qt["[object DataView]"]=Qt["[object Date]"]=Qt["[object Error]"]=Qt["[object Function]"]=Qt["[object Map]"]=Qt["[object Number]"]=Qt["[object Object]"]=Qt["[object RegExp]"]=Qt["[object Set]"]=Qt["[object String]"]=Qt["[object WeakMap]"]=!1;var Wt="object"==typeof t&&t&&!t.nodeType&&t,Jt=Wt&&"object"==typeof o&&o&&!o.nodeType&&o,Gt=Jt&&Jt.exports===Wt,Kt=Gt&&pt.process,Yt=function(){try{var e=Jt&&Jt.require&&Jt.require("util").types;return e||Kt&&Kt.binding&&Kt.binding("util")}catch(e){}}(),Zt=Yt&&Yt.isTypedArray,en=Zt?function(e){return function(t){return e(t)}}(Zt):A,tn=Object.prototype,nn=tn.hasOwnProperty,rn=Object.prototype,on=function(e,t){return function(n){return 
e(t(n))}}(Object.keys,Object),un=Object.prototype,cn=un.hasOwnProperty,an=P(N,1/0),sn=function(e,t,n){(w(e)?D:an)(e,d(t),n)},fn=H(V),ln=h(fn),pn=z(V),dn=P(pn,1),hn=h(dn),mn=function(e){return function(t,n,r){for(var o=-1,i=Object(t),u=r(t),c=u.length;c--;){var a=u[e?c:++o];if(!1===n(i[a],a,i))break}return t}}(),vn=function(e,t,n){function r(e,t){y.push(function(){a(e,t)})}function o(){if(0===y.length&&0===h)return n(null,p);for(;y.length&&h2&&(r=i(arguments,1)),t){var o={};X(p,function(e,t){o[t]=e}),o[e]=r,m=!0,v=Object.create(null),n(t,o)}else p[e]=r,c(e)});h++;var o=d(t[t.length-1]);t.length>1?o(p,r):o(r)}}function s(t){var n=[];return X(e,function(e,r){Bt(e)&&G(e,t,0)>=0&&n.push(r)}),n}"function"==typeof t&&(n=t,t=null),n=j(n||x);var f=R(e),l=f.length;if(!l)return n(null);t||(t=l);var p={},h=0,m=!1,v=Object.create(null),y=[],g=[],b={};X(e,function(t,n){if(!Bt(t))return r(n,[t]),void g.push(n);var o=t.slice(0,t.length-1),i=o.length;if(0===i)return r(n,t),void g.push(n);b[n]=i,$(o,function(c){if(!e[c])throw new Error("async.auto task `"+n+"` has a non-existent dependency `"+c+"` in "+o.join(", "));u(c,function(){0===--i&&r(n,t)})})}),function(){for(var e,t=0;g.length;)e=g.pop(),t++,$(s(e),function(e){0==--b[e]&&g.push(e)});if(t!==l)throw new Error("async.auto cannot execute tasks due to a recursive dependency")}(),o()},yn="[object Symbol]",gn=1/0,bn=mt?mt.prototype:void 0,wn=bn?bn.toString:void 0,xn=RegExp("[\\u200d\\ud800-\\udfff\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff\\ufe0e\\ufe0f]"),jn="[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]",Sn="\\ud83c[\\udffb-\\udfff]",En="(?:\\ud83c[\\udde6-\\uddff]){2}",kn="[\\ud800-\\udbff][\\udc00-\\udfff]",Tn="(?:[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]|\\ud83c[\\udffb-\\udfff])?",Ln="(?:\\u200d(?:"+["[^\\ud800-\\udfff]",En,kn].join("|")+")[\\ufe0e\\ufe0f]?"+Tn+")*",An="[\\ufe0e\\ufe0f]?"+Tn+Ln,On="(?:"+["[^\\ud800-\\udfff]"+jn+"?",jn,En,kn,"[\\ud800-\\udfff]"].join("|")+")",_n=RegExp(Sn+"(?="+Sn+")|"+On+An,"g"),Cn=/^\s+|\s+$/g,Rn=/^(?:async\s+)?(function)?\s*[^\(]*\(\s*([^\)]*)\)/m,Fn=/,/,Un=/(=.+)?(\s*)$/,In=/((\/\/.*$)|(\/\*[\s\S]*?\*\/))/gm;pe.prototype.removeLink=function(e){return e.prev?e.prev.next=e.next:this.head=e.next,e.next?e.next.prev=e.prev:this.tail=e.prev,e.prev=e.next=null,this.length-=1,e},pe.prototype.empty=function(){for(;this.head;)this.shift();return this},pe.prototype.insertAfter=function(e,t){t.prev=e,t.next=e.next,e.next?e.next.prev=t:this.tail=t,e.next=t,this.length+=1},pe.prototype.insertBefore=function(e,t){t.prev=e.prev,t.next=e,e.prev?e.prev.next=t:this.head=t,e.prev=t,this.length+=1},pe.prototype.unshift=function(e){this.head?this.insertBefore(this.head,e):de(this,e)},pe.prototype.push=function(e){this.tail?this.insertAfter(this.tail,e):de(this,e)},pe.prototype.shift=function(){return this.head&&this.removeLink(this.head)},pe.prototype.pop=function(){return this.tail&&this.removeLink(this.tail)},pe.prototype.toArray=function(){for(var e=Array(this.length),t=this.head,n=0;n=o.priority;)o=o.next;for(var i=0,u=e.length;it)return!1;var n=t-e,r=Math.floor(n/60),o=Math.floor(r/60),i=Math.floor(o/24);return 1===i?"1 day ago":i>0?i+" days ago":1===o?"1 hour ago":o>0?o+" hours ago":1===r?"1 minute ago":r>0?r+" minutes ago":"a few seconds ago"}function u(e){function t(e,t){if(!e)throw new Error("No URL has been specified");if(s[e])return void t(null,s[e]);f.default.get(e).then(function(n){s[e]=n.data,t(null,n.data)}).catch(t)}function n(t){var n=t.sub,r=t.id;return 
n&&r?e.base+"/r/"+n+"/comments/"+r+".json":!(n||!r)&&e.base+r}function r(e,r){var o=e.hits.filter(function(e){return!!e.num_comments});a.default.map(o.slice(0,10),function(e,r){var o=e.id,i=e.subreddit;if("undefined"===o)throw new Error("No ID specified");t(n({sub:i,id:o}),r)},r)}function o(t){var n=t.comment,r=t.op,i=t.depth,u=i||0,c=e.commentFmt(n);if(c.depth=u,c.subreddit=r.subreddit,r.permalink&&(c.permalink=e.base+r.permalink,c.thread=e.base+r.permalink+n.id),n.children&&n.children.length>0){var a=u+1;c.hasReplies=!0,c.replies=n.children.reduce(function(e,t){return t.author&&e.push(o({comment:t,op:r,depth:a})),e},[]),c.loadMore=c.replies.length>4}return c}function i(t,n){n(null,t.map(function(t){var n=e.threadFmt(t),r=n.children.reduce(function(e,t){return t.author&&e.push(o({comment:t,op:n})),e},[]);return{op:n,comments:r}}))}function u(t,n){var r=function e(n,r,o){if(o>t.length-1)return{score:n,threads:t.length,comments:r,multiple:function(){return this.threads>1}};var i=t[o];return e(n+=i.op.points,r.concat(i.comments),o+1)}(0,[],0),o=r.comments.sort(function(e,t){return t.score-e.score}),i=e.limit||o.length;r.comments=o.slice(0,i),r.next=o.slice(i),r.hasMore=!!r.next.length,n(null,r)}if(!e)throw new Error("No spec object has been specified");if(!e.submitUrl)throw new Error("submitUrl isnt defined");if(!e.dataFmt)throw new Error("dataFmt method isnt defined");if(!e.commentFmt)throw new Error("commentFmt method isnt defined");if(!e.threadFmt)throw new Error("threadFmt method isnt defined");0===e.limit&&(e.limit=null);var c={},s={};return c.submitUrl=e.submitUrl,c.hasComments=function(n){a.default.waterfall([a.default.apply(t,e.query),e.dataFmt],function(e,t){if(e)throw new Error(e);var r=t.hits.filter(function(e){return!!e.num_comments});n(null,!!r.length)})},c.getComments=function(n){a.default.waterfall([a.default.apply(t,e.query),e.dataFmt,r,i,u],n)},c}Object.defineProperty(t,"__esModule",{value:!0}),t.decode=o,t.parseDate=i,t.embeddConstructor=u;var c=n(4),a=r(c),s=n(17),f=r(s)},function(e,t,n){"use strict";e.exports=function(e,t){return function(){for(var n=new Array(arguments.length),r=0;r comment}}{{/comments}}",r,{comment:d});if(u.insertAdjacentHTML("beforeend",a),!r.hasMore){var s=document.querySelector(".embedd-container .more-btn");s?s.style.display="none":window.removeEventListener("scroll",n,!1)}t()}function u(e){e.target.parentNode.parentNode.parentNode.classList.toggle("closed")}function a(e){function t(e,n){e&&3!==n?e instanceof Text||"block"===f(e)?t(e.nextSibling,n):(e.style.display="block",t(e.nextSibling,n+1)):r.querySelector(".viewMore").style.display="none"}var n=e.currentTarget,r=n.parentElement;t(r.querySelector(".children").firstChild,0)}function f(e){return e.currentStyle?e.currentStyle.display:window.getComputedStyle(e,null).getPropertyValue("display")}var h={},m=document.currentScript,v=(m.parentNode,document.getElementById("embedd-comments"));if(v)return h.config={element:v,url:location.protocol+"//"+location.host+location.pathname,dark:!1,service:"hn",serviceName:"HackerNews",both:!0,loadMore:!0,infiniteScroll:!1,limit:5,debug:!1},"string"==typeof 
h.config.element&&(h.config.element=document.querySelector(h.config.element)),h.config.element.className="embedd-container",h.config.loadMore&&h.config.infiniteScroll&&(h.config.loadMore=!1),h.clients={},h.config.both&&(h.clients.reddit=(0,l.default)(h.config),h.clients.hn=(0,s.default)(h.config)),h.config.both||"reddit"!==h.config.service||(h.clients.reddit=(0,l.default)(h.config)),h.config.both||"hn"!==h.config.service||(h.clients.hn=(0,s.default)(h.config)),h.init=function(){var t=h.clients,n=t.reddit,o=t.hn,u=h.clients[h.config.service],c={};o&&(c.hasHn=o.hasComments),n&&(c.hasReddit=n.hasComments),c.data=u.getComments,i.default.series(c,function(t,n){if(t)throw new Error(t);n.submitUrl=u.submitUrl,h=e(h,n),r(h)})},h})().init()},function(e,t,n){(function(e){function r(e,t){this._id=e,this._clearFn=t}var o=Function.prototype.apply;t.setTimeout=function(){return new r(o.call(setTimeout,window,arguments),clearTimeout)},t.setInterval=function(){return new r(o.call(setInterval,window,arguments),clearInterval)},t.clearTimeout=t.clearInterval=function(e){e&&e.close()},r.prototype.unref=r.prototype.ref=function(){},r.prototype.close=function(){this._clearFn.call(window,this._id)},t.enroll=function(e,t){clearTimeout(e._idleTimeoutId),e._idleTimeout=t},t.unenroll=function(e){clearTimeout(e._idleTimeoutId),e._idleTimeout=-1},t._unrefActive=t.active=function(e){clearTimeout(e._idleTimeoutId);var t=e._idleTimeout;t>=0&&(e._idleTimeoutId=setTimeout(function(){e._onTimeout&&e._onTimeout()},t))},n(13),t.setImmediate="undefined"!=typeof self&&self.setImmediate||void 0!==e&&e.setImmediate||this&&this.setImmediate,t.clearImmediate="undefined"!=typeof self&&self.clearImmediate||void 0!==e&&e.clearImmediate||this&&this.clearImmediate}).call(t,n(1))},function(e,t,n){(function(e,t){!function(e,n){"use strict";function r(e){"function"!=typeof e&&(e=new Function(""+e));for(var t=new Array(arguments.length-1),n=0;n"'`=\/]/g,function(e){return g[e]})}function a(t,n){function o(e){if("string"==typeof e&&(e=e.split(w,2)),!m(e)||2!==e.length)throw new Error("Invalid tags: "+e);i=new RegExp(r(e[0])+"\\s*"),c=new RegExp("\\s*"+r(e[1])),a=new RegExp("\\s*"+r("}"+e[1]))}if(!t)return[];var i,c,a,p=[],d=[],h=[],v=!1,y=!1;o(n||e.tags);for(var g,E,k,T,L,A,O=new l(t);!O.eos();){if(g=O.pos,k=O.scanUntil(i))for(var _=0,C=k.length;_0?i[i.length-1][4]:r;break;default:o.push(t)}return r}function l(e){this.string=e,this.tail=e,this.pos=0}function p(e,t){this.view=e,this.cache={".":this.view},this.parent=t}function d(){this.cache={}}var h=Object.prototype.toString,m=Array.isArray||function(e){return"[object Array]"===h.call(e)},v=RegExp.prototype.test,y=/\S/,g={"&":"&","<":"<",">":">",'"':""","'":"'","/":"/","`":"`","=":"="},b=/\s*/,w=/\s+/,x=/\s*=/,j=/\s*\}/,S=/#|\^|\/|>|\{|&|=|!/;l.prototype.eos=function(){return""===this.tail},l.prototype.scan=function(e){var t=this.tail.match(e);if(!t||0!==t.index)return"";var n=t[0];return this.tail=this.tail.substring(n.length),this.pos+=n.length,n},l.prototype.scanUntil=function(e){var t,n=this.tail.search(e);switch(n){case-1:t=this.tail,this.tail="";break;case 0:t="";break;default:t=this.tail.substring(0,n),this.tail=this.tail.substring(n)}return this.pos+=t.length,t},p.prototype.push=function(e){return new p(e,this)},p.prototype.lookup=function(e){var n,r=this.cache;if(r.hasOwnProperty(e))n=r[e];else{for(var 
i,u,c=this,a=!1;c;){if(e.indexOf(".")>0)for(n=c.view,i=e.split("."),u=0;null!=n&&u"===i?u=this.renderPartial(o,t,n,r):"&"===i?u=this.unescapedValue(o,t):"name"===i?u=this.escapedValue(o,t):"text"===i&&(u=this.rawValue(o)),void 0!==u&&(c+=u);return c},d.prototype.renderSection=function(e,n,r,o){function i(e){return u.render(e,n,r)}var u=this,c="",a=n.lookup(e[1]);if(a){if(m(a))for(var s=0,f=a.length;s - * @license MIT - */ -e.exports=function(e){return null!=e&&null!=e.constructor&&"function"==typeof e.constructor.isBuffer&&e.constructor.isBuffer(e)}},function(e,t,n){"use strict";function r(e){this.defaults=e,this.interceptors={request:new u,response:new u}}var o=n(3),i=n(0),u=n(28),c=n(29);r.prototype.request=function(e){"string"==typeof e&&(e=i.merge({url:arguments[0]},arguments[1])),e=i.merge(o,{method:"get"},this.defaults,e),e.method=e.method.toLowerCase();var t=[c,void 0],n=Promise.resolve(e);for(this.interceptors.request.forEach(function(e){t.unshift(e.fulfilled,e.rejected)}),this.interceptors.response.forEach(function(e){t.push(e.fulfilled,e.rejected)});t.length;)n=n.then(t.shift(),t.shift());return n},i.forEach(["delete","get","head","options"],function(e){r.prototype[e]=function(t,n){return this.request(i.merge(n||{},{method:e,url:t}))}}),i.forEach(["post","put","patch"],function(e){r.prototype[e]=function(t,n,r){return this.request(i.merge(r||{},{method:e,url:t,data:n}))}}),e.exports=r},function(e,t,n){"use strict";var r=n(0);e.exports=function(e,t){r.forEach(e,function(n,r){r!==t&&r.toUpperCase()===t.toUpperCase()&&(e[t]=n,delete e[r])})}},function(e,t,n){"use strict";var r=n(8);e.exports=function(e,t,n){var o=n.config.validateStatus;n.status&&o&&!o(n.status)?t(r("Request failed with status code "+n.status,n.config,null,n.request,n)):e(n)}},function(e,t,n){"use strict";e.exports=function(e,t,n,r,o){return e.config=t,n&&(e.code=n),e.request=r,e.response=o,e}},function(e,t,n){"use strict";function r(e){return encodeURIComponent(e).replace(/%40/gi,"@").replace(/%3A/gi,":").replace(/%24/g,"$").replace(/%2C/gi,",").replace(/%20/g,"+").replace(/%5B/gi,"[").replace(/%5D/gi,"]")}var o=n(0);e.exports=function(e,t,n){if(!t)return e;var i;if(n)i=n(t);else if(o.isURLSearchParams(t))i=t.toString();else{var u=[];o.forEach(t,function(e,t){null!==e&&void 0!==e&&(o.isArray(e)?t+="[]":e=[e],o.forEach(e,function(e){o.isDate(e)?e=e.toISOString():o.isObject(e)&&(e=JSON.stringify(e)),u.push(r(t)+"="+r(e))}))}),i=u.join("&")}return i&&(e+=(-1===e.indexOf("?")?"?":"&")+i),e}},function(e,t,n){"use strict";var r=n(0),o=["age","authorization","content-length","content-type","etag","expires","from","host","if-modified-since","if-unmodified-since","last-modified","location","max-forwards","proxy-authorization","referer","retry-after","user-agent"];e.exports=function(e){var t,n,i,u={};return e?(r.forEach(e.split("\n"),function(e){if(i=e.indexOf(":"),t=r.trim(e.substr(0,i)).toLowerCase(),n=r.trim(e.substr(i+1)),t){if(u[t]&&o.indexOf(t)>=0)return;u[t]="set-cookie"===t?(u[t]?u[t]:[]).concat([n]):u[t]?u[t]+", "+n:n}}),u):u}},function(e,t,n){"use strict";var r=n(0);e.exports=r.isStandardBrowserEnv()?function(){function e(e){var t=e;return n&&(o.setAttribute("href",t),t=o.href),o.setAttribute("href",t),{href:o.href,protocol:o.protocol?o.protocol.replace(/:$/,""):"",host:o.host,search:o.search?o.search.replace(/^\?/,""):"",hash:o.hash?o.hash.replace(/^#/,""):"",hostname:o.hostname,port:o.port,pathname:"/"===o.pathname.charAt(0)?o.pathname:"/"+o.pathname}}var 
t,n=/(msie|trident)/i.test(navigator.userAgent),o=document.createElement("a");return t=e(window.location.href),function(n){var o=r.isString(n)?e(n):n;return o.protocol===t.protocol&&o.host===t.host}}():function(){return function(){return!0}}()},function(e,t,n){"use strict";var r=n(0);e.exports=r.isStandardBrowserEnv()?function(){return{write:function(e,t,n,o,i,u){var c=[];c.push(e+"="+encodeURIComponent(t)),r.isNumber(n)&&c.push("expires="+new Date(n).toGMTString()),r.isString(o)&&c.push("path="+o),r.isString(i)&&c.push("domain="+i),!0===u&&c.push("secure"),document.cookie=c.join("; ")},read:function(e){var t=document.cookie.match(new RegExp("(^|;\\s*)("+e+")=([^;]*)"));return t?decodeURIComponent(t[3]):null},remove:function(e){this.write(e,"",Date.now()-864e5)}}}():function(){return{write:function(){},read:function(){return null},remove:function(){}}}()},function(e,t,n){"use strict";function r(){this.handlers=[]}var o=n(0);r.prototype.use=function(e,t){return this.handlers.push({fulfilled:e,rejected:t}),this.handlers.length-1},r.prototype.eject=function(e){this.handlers[e]&&(this.handlers[e]=null)},r.prototype.forEach=function(e){o.forEach(this.handlers,function(t){null!==t&&e(t)})},e.exports=r},function(e,t,n){"use strict";function r(e){e.cancelToken&&e.cancelToken.throwIfRequested()}var o=n(0),i=n(30),u=n(9),c=n(3),a=n(31),s=n(32);e.exports=function(e){return r(e),e.baseURL&&!a(e.url)&&(e.url=s(e.baseURL,e.url)),e.headers=e.headers||{},e.data=i(e.data,e.headers,e.transformRequest),e.headers=o.merge(e.headers.common||{},e.headers[e.method]||{},e.headers||{}),o.forEach(["delete","get","head","post","put","patch","common"],function(t){delete e.headers[t]}),(e.adapter||c.adapter)(e).then(function(t){return r(e),t.data=i(t.data,t.headers,e.transformResponse),t},function(t){return u(t)||(r(e),t&&t.response&&(t.response.data=i(t.response.data,t.response.headers,e.transformResponse))),Promise.reject(t)})}},function(e,t,n){"use strict";var r=n(0);e.exports=function(e,t,n){return r.forEach(n,function(n){e=n(e,t)}),e}},function(e,t,n){"use strict";e.exports=function(e){return/^([a-z][a-z\d\+\-\.]*:)?\/\//i.test(e)}},function(e,t,n){"use strict";e.exports=function(e,t){return t?e.replace(/\/+$/,"")+"/"+t.replace(/^\/+/,""):e}},function(e,t,n){"use strict";function r(e){if("function"!=typeof e)throw new TypeError("executor must be a function.");var t;this.promise=new Promise(function(e){t=e});var n=this;e(function(e){n.reason||(n.reason=new o(e),t(n.reason))})}var o=n(10);r.prototype.throwIfRequested=function(){if(this.reason)throw this.reason},r.source=function(){var e;return{token:new r(function(t){e=t}),cancel:e}},e.exports=r},function(e,t,n){"use strict";e.exports=function(e){return function(t){return e.apply(null,t)}}},function(e,t,n){"use strict";function r(e){if(!e)throw new Error("The Reddit constructor requires a spec object");var t=e.url,n=e.limit,r={};return r.base="https://www.reddit.com",r.searchQs="/search.json?q=url:",r.query=r.base+r.searchQs+t,r.submitUrl="https://www.reddit.com/submit",r.limit=n,r.dataFmt=function(e,t){e.hits=e.data.children.map(function(e){return e=e.data}),t(null,e)},r.commentFmt=function(e){return{author:e.author,author_link:"https://www.reddit.com/user/"+e.author,body_html:(0,o.decode)(e.body_html),created:(0,o.parseDate)(e.created_utc),id:e.id,score:e.score,replies:null,hasReplies:!1,isEven:function(){return this.depth%2==0},isOdd:function(){return this.depth%2==1},lowScore:function(){return this.score<0}}},r.threadFmt=function(e){var t=function e(t){return 
t.points=t.score,t.replies&&(t.children=t.replies.data.children.map(function(t){return t=t.data,t.replies&&(t.children=e(t)),t})),t},n=e[0].data.children[0].data;return n.points=n.score,n.children=e[1].data.children.map(function(e){return e=e.data,t(e)}),n},(0,o.embeddConstructor)(r)}Object.defineProperty(t,"__esModule",{value:!0}),t.default=r;var o=n(5)},function(e,t){e.exports='
\n {{#data}}\n
\n {{#score}}\n

{{score}} upvotes {{#multiple}}over {{threads}} threads{{/multiple}} on {{config.serviceName}}

\n {{/score}}\n {{^score}}\n \n Start discussion on {{config.serviceName}}\n \n {{/score}}\n
\n\n {{#config.both}}\n {{#hasReddit}}\n {{#hasHn}}\n
\n \n\n \n
\n {{/hasHn}}\n {{/hasReddit}}\n {{/config.both}}\n\n
\n {{#comments}}\n {{> comment}}\n {{/comments}}\n
\n\n {{#config.loadMore}}\n
\n \n
\n {{/config.loadMore}}\n\n {{/data}}\n
\n'},function(e,t){e.exports='
\n\n \n\n {{#body_html}}\n\n {{& body_html}}\n {{/body_html}}\n\n \n\n {{#hasReplies}}\n
\n {{#replies}}\n {{> comment}}\n {{/replies}}\n\n {{#loadMore}}\n \n {{/loadMore}}\n
\n\n {{/hasReplies}}\n\n
\n'}]); diff --git a/website/templates/blog/content.html b/website/templates/blog/content.html index cd1af29c752..b14616608a6 100644 --- a/website/templates/blog/content.html +++ b/website/templates/blog/content.html @@ -63,21 +63,5 @@ {% endfor %} {% endif %} -{% if not page.meta.is_index and language == 'en' %} - {## end row ##} - {## end container ##} -
-    {## new container ##}
-    {## new row ##}
-
-{% endif %}
 {% include "templates/blog/footer.html" %}