Merge branch 'apply_ttl_if_not_calculated' into recompression_in_background

This commit is contained in:
alesapin 2020-09-03 12:07:03 +03:00
commit aa47d0aabc
142 changed files with 2524 additions and 518 deletions


@@ -1,3 +1,196 @@
## ClickHouse release 20.7
### ClickHouse release v20.7.2.30-stable, 2020-08-31
#### Backward Incompatible Change
* Function `modulo` (operator `%`) with at least one floating point number as argument calculates the remainder of division directly on floating point numbers, without converting both arguments to integers (see the example after this list). This makes the behaviour compatible with most DBMS. This is also applicable to Date and DateTime data types. Added alias `mod`. This closes [#7323](https://github.com/ClickHouse/ClickHouse/issues/7323). [#12585](https://github.com/ClickHouse/ClickHouse/pull/12585) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Deprecate special printing of zero Date/DateTime values as `0000-00-00` and `0000-00-00 00:00:00`. [#12442](https://github.com/ClickHouse/ClickHouse/pull/12442) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The function `groupArrayMoving*` was not working for distributed queries. Its result was calculated with an incorrect data type (without promotion to the largest type). The function `groupArrayMovingAvg` returned an integer number, which was inconsistent with the `avg` function. This fixes [#12568](https://github.com/ClickHouse/ClickHouse/issues/12568). [#12622](https://github.com/ClickHouse/ClickHouse/pull/12622) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add sanity check for MergeTree settings. If the settings are incorrect, the server will refuse to start or to create a table, printing a detailed explanation to the user. [#13153](https://github.com/ClickHouse/ClickHouse/pull/13153) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Protect against cases when a user sets `background_pool_size` to a value lower than `number_of_free_entries_in_pool_to_execute_mutation` or `number_of_free_entries_in_pool_to_lower_max_size_of_merge`. In these cases ALTERs won't work, or the maximum size of a merge will be too limited. An exception explaining what to do is thrown. This closes [#10897](https://github.com/ClickHouse/ClickHouse/issues/10897). [#12728](https://github.com/ClickHouse/ClickHouse/pull/12728) ([alexey-milovidov](https://github.com/alexey-milovidov)).
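A quick illustration of the new `modulo` semantics (a sketch; the literals are chosen only for illustration):

```sql
-- The remainder is now computed directly on floating point numbers.
SELECT 7.5 % 2;      -- 1.5 (previously both arguments were converted to integers, giving 1)
SELECT mod(7.5, 2);  -- 1.5, using the newly added `mod` alias
```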
#### New Feature
* Polygon dictionary type that provides efficient "reverse geocoding" lookups - to find the region containing given coordinates in a dictionary of many polygons (world map). It uses a carefully optimized algorithm with recursive grids to maintain low CPU and memory usage. [#9278](https://github.com/ClickHouse/ClickHouse/pull/9278) ([achulkov2](https://github.com/achulkov2)).
* Added support of LDAP authentication for preconfigured users ("Simple Bind" method). [#11234](https://github.com/ClickHouse/ClickHouse/pull/11234) ([Denis Glazachev](https://github.com/traceon)).
* Introduce setting `alter_partition_verbose_result` which outputs information about touched parts for some types of `ALTER TABLE ... PARTITION ...` queries (currently `ATTACH` and `FREEZE`). Closes [#8076](https://github.com/ClickHouse/ClickHouse/issues/8076). [#13017](https://github.com/ClickHouse/ClickHouse/pull/13017) ([alesapin](https://github.com/alesapin)).
* Add `bayesAB` function for bayesian-ab-testing. [#12327](https://github.com/ClickHouse/ClickHouse/pull/12327) ([achimbab](https://github.com/achimbab)).
* Added `system.crash_log` table into which stack traces for fatal errors are collected. This table should normally be empty. [#12316](https://github.com/ClickHouse/ClickHouse/pull/12316) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added HTTP headers `X-ClickHouse-Database` and `X-ClickHouse-Format` which may be used to set the default database and output format. [#12981](https://github.com/ClickHouse/ClickHouse/pull/12981) ([hcz](https://github.com/hczhcz)).
* Add `minMap` and `maxMap` functions support to `SimpleAggregateFunction`. [#12662](https://github.com/ClickHouse/ClickHouse/pull/12662) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Add setting `allow_non_metadata_alters` which restricts execution of `ALTER` queries that modify data on disk. Disabled by default. Closes [#11547](https://github.com/ClickHouse/ClickHouse/issues/11547). [#12635](https://github.com/ClickHouse/ClickHouse/pull/12635) ([alesapin](https://github.com/alesapin)).
* A function `formatRow` is added to support turning arbitrary expressions into a string via given format. It's useful for manipulating SQL outputs and is quite versatile combined with the `columns` function. [#12574](https://github.com/ClickHouse/ClickHouse/pull/12574) ([Amos Bird](https://github.com/amosbird)).
* Add `FROM_UNIXTIME` function for compatibility with MySQL (see the sketch after this list). Related to [#12149](https://github.com/ClickHouse/ClickHouse/issues/12149). [#12484](https://github.com/ClickHouse/ClickHouse/pull/12484) ([flynn](https://github.com/ucasFL)).
* Allow Nullable types as keys in MergeTree tables if the `allow_nullable_key` table setting is enabled. This closes https://github.com/ClickHouse/ClickHouse/issues/5319. [#12433](https://github.com/ClickHouse/ClickHouse/pull/12433) ([Amos Bird](https://github.com/amosbird)).
* Integration with [COS](https://intl.cloud.tencent.com/product/cos). [#12386](https://github.com/ClickHouse/ClickHouse/pull/12386) ([fastio](https://github.com/fastio)).
* Add `mapAdd` and `mapSubtract` functions for adding/subtracting key-mapped values (see the sketch after this list). [#11735](https://github.com/ClickHouse/ClickHouse/pull/11735) ([Ildus Kurbangaliev](https://github.com/ildus)).
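A brief sketch of two of the new functions above; the literal values are illustrative, and the exact output of `FROM_UNIXTIME` depends on the server timezone:

```sql
-- mapAdd sums values with matching keys across (keys, values) tuples:
SELECT mapAdd(([1, 2], [10, 10]), ([1, 3], [5, 5]));  -- ([1, 2, 3], [15, 10, 5])

-- FROM_UNIXTIME converts a unix timestamp to DateTime, as in MySQL:
SELECT FROM_UNIXTIME(423543535);                      -- 1983-06-04 10:58:55 in UTC
```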
#### Bug Fix
* Fix premature `ON CLUSTER` timeouts for queries that must be executed on a single replica. Fixes [#6704](https://github.com/ClickHouse/ClickHouse/issues/6704), [#7228](https://github.com/ClickHouse/ClickHouse/issues/7228), [#13361](https://github.com/ClickHouse/ClickHouse/issues/13361), [#11884](https://github.com/ClickHouse/ClickHouse/issues/11884). [#13450](https://github.com/ClickHouse/ClickHouse/pull/13450) ([alesapin](https://github.com/alesapin)).
* Fix crash in mark inclusion search introduced in https://github.com/ClickHouse/ClickHouse/pull/12277. [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition in external dictionaries with cache layout which can lead to a server crash. [#12566](https://github.com/ClickHouse/ClickHouse/pull/12566) ([alesapin](https://github.com/alesapin)).
* Fix visible data clobbering by progress bar in client in interactive mode. This fixes [#12562](https://github.com/ClickHouse/ClickHouse/issues/12562) and [#13369](https://github.com/ClickHouse/ClickHouse/issues/13369) and [#13584](https://github.com/ClickHouse/ClickHouse/issues/13584) and fixes [#12964](https://github.com/ClickHouse/ClickHouse/issues/12964). [#13691](https://github.com/ClickHouse/ClickHouse/pull/13691) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect sorting order for `LowCardinality` columns when ORDER BY multiple columns is used. This fixes [#13958](https://github.com/ClickHouse/ClickHouse/issues/13958). [#14223](https://github.com/ClickHouse/ClickHouse/pull/14223) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Removed hardcoded timeout, which wrongly overruled `query_wait_timeout_milliseconds` setting for cache-dictionary. [#14105](https://github.com/ClickHouse/ClickHouse/pull/14105) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed wrong mount point in extra info for `Poco::Exception: no space left on device`. [#14050](https://github.com/ClickHouse/ClickHouse/pull/14050) ([tavplubix](https://github.com/tavplubix)).
* Fix wrong query optimization of SELECT queries with `DISTINCT` keyword when subqueries also have `DISTINCT` and the `optimize_duplicate_order_by_and_distinct` setting is enabled. [#13925](https://github.com/ClickHouse/ClickHouse/pull/13925) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed potential deadlock when renaming `Distributed` table. [#13922](https://github.com/ClickHouse/ClickHouse/pull/13922) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect sorting for `FixedString` columns when ORDER BY multiple columns is used. Fixes [#13182](https://github.com/ClickHouse/ClickHouse/issues/13182). [#13887](https://github.com/ClickHouse/ClickHouse/pull/13887) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix potentially lower precision of `topK`/`topKWeighted` aggregations (with non-default parameters). [#13817](https://github.com/ClickHouse/ClickHouse/pull/13817) ([Azat Khuzhin](https://github.com/azat)).
* Fix failure when reading from a MergeTree table with an INDEX of type SET when it is compared against NULL. This fixes [#13686](https://github.com/ClickHouse/ClickHouse/issues/13686). [#13793](https://github.com/ClickHouse/ClickHouse/pull/13793) ([Amos Bird](https://github.com/amosbird)).
* Fix step overflow in function `range()`. [#13790](https://github.com/ClickHouse/ClickHouse/pull/13790) ([Azat Khuzhin](https://github.com/azat)).
* Fixed `Directory not empty` error when concurrently executing `DROP DATABASE` and `CREATE TABLE`. [#13756](https://github.com/ClickHouse/ClickHouse/pull/13756) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add range check for `h3KRing` function. This fixes [#13633](https://github.com/ClickHouse/ClickHouse/issues/13633). [#13752](https://github.com/ClickHouse/ClickHouse/pull/13752) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix race condition between DETACH and background merges. Parts may revive after detach. This is continuation of [#8602](https://github.com/ClickHouse/ClickHouse/issues/8602) that did not fix the issue but introduced a test that started to fail in very rare cases, demonstrating the issue. [#13746](https://github.com/ClickHouse/ClickHouse/pull/13746) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix logging Settings.Names/Values when `log_queries_min_type` is greater than `QUERY_START`. [#13737](https://github.com/ClickHouse/ClickHouse/pull/13737) ([Azat Khuzhin](https://github.com/azat)).
* Fix incorrect message in `clickhouse-server.init` while checking user and group. [#13711](https://github.com/ClickHouse/ClickHouse/pull/13711) ([ylchou](https://github.com/ylchou)).
* Do not optimize `any(arrayJoin())` to `arrayJoin()` under `optimize_move_functions_out_of_any`. [#13681](https://github.com/ClickHouse/ClickHouse/pull/13681) ([Azat Khuzhin](https://github.com/azat)).
* Fixed possible deadlock in concurrent `ALTER ... REPLACE/MOVE PARTITION ...` queries. [#13626](https://github.com/ClickHouse/ClickHouse/pull/13626) ([tavplubix](https://github.com/tavplubix)).
* Fixed the behaviour when a cache dictionary sometimes returned the default value instead of the present value from the source. [#13624](https://github.com/ClickHouse/ClickHouse/pull/13624) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix secondary indices corruption in compact parts (compact parts are an experimental feature). [#13538](https://github.com/ClickHouse/ClickHouse/pull/13538) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong code in function `netloc`. This fixes [#13335](https://github.com/ClickHouse/ClickHouse/issues/13335). [#13446](https://github.com/ClickHouse/ClickHouse/pull/13446) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix error in `parseDateTimeBestEffort` function when unix timestamp was passed as an argument. This fixes [#13362](https://github.com/ClickHouse/ClickHouse/issues/13362). [#13441](https://github.com/ClickHouse/ClickHouse/pull/13441) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix invalid return type for comparison of tuples with `NULL` elements. Fixes [#12461](https://github.com/ClickHouse/ClickHouse/issues/12461). [#13420](https://github.com/ClickHouse/ClickHouse/pull/13420) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix wrong optimization caused `aggregate function any(x) is found inside another aggregate function in query` error with `SET optimize_move_functions_out_of_any = 1` and aliases inside `any()`. [#13419](https://github.com/ClickHouse/ClickHouse/pull/13419) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix possible race in `StorageMemory`. [#13416](https://github.com/ClickHouse/ClickHouse/pull/13416) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix empty output for `Arrow` and `Parquet` formats in case the query returns zero rows, since empty output is not valid for these formats. [#13399](https://github.com/ClickHouse/ClickHouse/pull/13399) ([hcz](https://github.com/hczhcz)).
* Fix select queries with constant columns and prefix of primary key in `ORDER BY` clause. [#13396](https://github.com/ClickHouse/ClickHouse/pull/13396) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `PrettyCompactMonoBlock` for clickhouse-local. Fix extremes/totals with `PrettyCompactMonoBlock`. Fixes [#7746](https://github.com/ClickHouse/ClickHouse/issues/7746). [#13394](https://github.com/ClickHouse/ClickHouse/pull/13394) ([Azat Khuzhin](https://github.com/azat)).
* Fixed deadlock in `system.text_log`. [#12452](https://github.com/ClickHouse/ClickHouse/pull/12452) ([alexey-milovidov](https://github.com/alexey-milovidov)). It is a part of [#12339](https://github.com/ClickHouse/ClickHouse/issues/12339). This fixes [#12325](https://github.com/ClickHouse/ClickHouse/issues/12325). [#13386](https://github.com/ClickHouse/ClickHouse/pull/13386) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed `File(TSVWithNames*)` (header was written multiple times), fixed `clickhouse-local --format CSVWithNames*` (lacks header, broken after [#12197](https://github.com/ClickHouse/ClickHouse/issues/12197)), fixed `clickhouse-local --format CSVWithNames*` with zero rows (lacks header). [#13343](https://github.com/ClickHouse/ClickHouse/pull/13343) ([Azat Khuzhin](https://github.com/azat)).
* Fix segfault when function `groupArrayMovingSum` deserializes empty state. Fixes [#13339](https://github.com/ClickHouse/ClickHouse/issues/13339). [#13341](https://github.com/ClickHouse/ClickHouse/pull/13341) ([alesapin](https://github.com/alesapin)).
* Throw error on `arrayJoin()` function in `JOIN ON` section. [#13330](https://github.com/ClickHouse/ClickHouse/pull/13330) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix crash in `LEFT ASOF JOIN` with `join_use_nulls=1`. [#13291](https://github.com/ClickHouse/ClickHouse/pull/13291) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix possible error `Totals having transform was already added to pipeline` in case of a query from delayed replica. [#13290](https://github.com/ClickHouse/ClickHouse/pull/13290) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* The server may crash if user passed specifically crafted arguments to the function `h3ToChildren`. This fixes [#13275](https://github.com/ClickHouse/ClickHouse/issues/13275). [#13277](https://github.com/ClickHouse/ClickHouse/pull/13277) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix potentially low performance and slightly incorrect result for `uniqExact`, `topK`, `sumDistinct` and similar aggregate functions called on Float types with `NaN` values. It also triggered an assert in debug builds. This fixes [#12491](https://github.com/ClickHouse/ClickHouse/issues/12491). [#13254](https://github.com/ClickHouse/ClickHouse/pull/13254) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix assertion in KeyCondition when the primary key contains an expression with a monotonic function and the query contains a comparison with a constant whose type is different. This fixes [#12465](https://github.com/ClickHouse/ClickHouse/issues/12465). [#13251](https://github.com/ClickHouse/ClickHouse/pull/13251) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Return the passed number for numbers with MSB set in function `roundUpToPowerOfTwoOrZero()`. It prevents potential errors in case of overflow of array sizes. [#13234](https://github.com/ClickHouse/ClickHouse/pull/13234) ([Azat Khuzhin](https://github.com/azat)).
* Fix function `if` with a nullable constexpr as condition that is not a literal NULL. Fixes [#12463](https://github.com/ClickHouse/ClickHouse/issues/12463). [#13226](https://github.com/ClickHouse/ClickHouse/pull/13226) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix assert in the `arrayElement` function when array elements are Nullable and the array subscript is also Nullable. This fixes [#12172](https://github.com/ClickHouse/ClickHouse/issues/12172). [#13224](https://github.com/ClickHouse/ClickHouse/pull/13224) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix DateTime64 conversion functions with constant argument. [#13205](https://github.com/ClickHouse/ClickHouse/pull/13205) ([Azat Khuzhin](https://github.com/azat)).
* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes https://github.com/ClickHouse/ClickHouse/issues/5779, https://github.com/ClickHouse/ClickHouse/issues/12527. [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix access to `redis` dictionary after connection was dropped once. It may happen with `cache` and `direct` dictionary layouts. [#13082](https://github.com/ClickHouse/ClickHouse/pull/13082) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong index analysis with functions. It could lead to some data parts being skipped when reading from `MergeTree` tables. Fixes [#13060](https://github.com/ClickHouse/ClickHouse/issues/13060). Fixes [#12406](https://github.com/ClickHouse/ClickHouse/issues/12406). [#13081](https://github.com/ClickHouse/ClickHouse/pull/13081) ([Anton Popov](https://github.com/CurtizJ)).
* Fix error `Cannot convert column because it is constant but values of constants are different in source and result` for remote queries which use deterministic functions in scope of query, but not deterministic between queries, like `now()`, `now64()`, `randConstant()`. Fixes [#11327](https://github.com/ClickHouse/ClickHouse/issues/11327). [#13075](https://github.com/ClickHouse/ClickHouse/pull/13075) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix crash which was possible for queries with `ORDER BY` tuple and small `LIMIT`. Fixes [#12623](https://github.com/ClickHouse/ClickHouse/issues/12623). [#13009](https://github.com/ClickHouse/ClickHouse/pull/13009) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `Block structure mismatch` error for queries with `UNION` and `JOIN`. Fixes [#12602](https://github.com/ClickHouse/ClickHouse/issues/12602). [#12989](https://github.com/ClickHouse/ClickHouse/pull/12989) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Corrected `merge_with_ttl_timeout` logic which did not work well when expiration affected more than one partition over one time interval. (Authored by @excitoon). [#12982](https://github.com/ClickHouse/ClickHouse/pull/12982) ([Alexander Kazakov](https://github.com/Akazz)).
* Fix columns duplication for range hashed dictionary created from DDL query. This fixes [#10605](https://github.com/ClickHouse/ClickHouse/issues/10605). [#12857](https://github.com/ClickHouse/ClickHouse/pull/12857) ([alesapin](https://github.com/alesapin)).
* Fix unnecessary limiting for the number of threads for selects from local replica. [#12840](https://github.com/ClickHouse/ClickHouse/pull/12840) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix rare bug when `ALTER DELETE` and `ALTER MODIFY COLUMN` queries were executed simultaneously as a single mutation. The bug led to an incorrect number of rows in `count.txt` and, as a consequence, incorrect data in the part. Also fix a small bug with simultaneous `ALTER RENAME COLUMN` and `ALTER ADD COLUMN`. [#12760](https://github.com/ClickHouse/ClickHouse/pull/12760) ([alesapin](https://github.com/alesapin)).
* Fix wrong credentials being used when the `clickhouse` dictionary source queries remote tables. [#12756](https://github.com/ClickHouse/ClickHouse/pull/12756) ([sundyli](https://github.com/sundy-li)).
* Fix `CAST(Nullable(String), Enum())`. [#12745](https://github.com/ClickHouse/ClickHouse/pull/12745) ([Azat Khuzhin](https://github.com/azat)).
* Fix performance with large tuples, which are interpreted as functions in the `IN` section. This is the case when a user writes `WHERE x IN tuple(1, 2, ...)` instead of `WHERE x IN (1, 2, ...)` for some obscure reason (see the example after this list). [#12700](https://github.com/ClickHouse/ClickHouse/pull/12700) ([Anton Popov](https://github.com/CurtizJ)).
* Fix memory tracking for input_format_parallel_parsing (by attaching thread to group). [#12672](https://github.com/ClickHouse/ClickHouse/pull/12672) ([Azat Khuzhin](https://github.com/azat)).
* Fix wrong optimization `optimize_move_functions_out_of_any=1` in case of `any(func(<lambda>))`. [#12664](https://github.com/ClickHouse/ClickHouse/pull/12664) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed bloom filter index with const expressions. This fixes [#10572](https://github.com/ClickHouse/ClickHouse/issues/10572). [#12659](https://github.com/ClickHouse/ClickHouse/pull/12659) ([Winter Zhang](https://github.com/zhang2014)).
* Fix SIGSEGV in StorageKafka when broker is unavailable (and not only). [#12658](https://github.com/ClickHouse/ClickHouse/pull/12658) ([Azat Khuzhin](https://github.com/azat)).
* Add support for function `if` with `Array(UUID)` arguments. This fixes [#11066](https://github.com/ClickHouse/ClickHouse/issues/11066). [#12648](https://github.com/ClickHouse/ClickHouse/pull/12648) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* `CREATE USER IF NOT EXISTS` now doesn't throw an exception if the user exists. This fixes https://github.com/ClickHouse/ClickHouse/issues/12507. [#12646](https://github.com/ClickHouse/ClickHouse/pull/12646) ([Vitaly Baranov](https://github.com/vitlibar)).
* Exception `There is no supertype...` can be thrown during `ALTER ... UPDATE` in unexpected cases (e.g. when subtracting from UInt64 column). This fixes [#7306](https://github.com/ClickHouse/ClickHouse/issues/7306). This fixes [#4165](https://github.com/ClickHouse/ClickHouse/issues/4165). [#12633](https://github.com/ClickHouse/ClickHouse/pull/12633) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible `Pipeline stuck` error for queries with external sorting. Fixes [#12617](https://github.com/ClickHouse/ClickHouse/issues/12617). [#12618](https://github.com/ClickHouse/ClickHouse/pull/12618) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix error `Output of TreeExecutor is not sorted` for `OPTIMIZE DEDUPLICATE`. Fixes [#11572](https://github.com/ClickHouse/ClickHouse/issues/11572). [#12613](https://github.com/ClickHouse/ClickHouse/pull/12613) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix the issue when alias on result of function `any` can be lost during query optimization. [#12593](https://github.com/ClickHouse/ClickHouse/pull/12593) ([Anton Popov](https://github.com/CurtizJ)).
* Remove data for Distributed tables (blocks from async INSERTs) on DROP TABLE. [#12556](https://github.com/ClickHouse/ClickHouse/pull/12556) ([Azat Khuzhin](https://github.com/azat)).
* Now ClickHouse will recalculate checksums for parts when file `checksums.txt` is absent. Broken since [#9827](https://github.com/ClickHouse/ClickHouse/issues/9827). [#12545](https://github.com/ClickHouse/ClickHouse/pull/12545) ([alesapin](https://github.com/alesapin)).
* Fix bug which led to broken old parts after an `ALTER DELETE` query when `enable_mixed_granularity_parts=1`. Fixes [#12536](https://github.com/ClickHouse/ClickHouse/issues/12536). [#12543](https://github.com/ClickHouse/ClickHouse/pull/12543) ([alesapin](https://github.com/alesapin)).
* Fixed race condition in live view tables which could cause data duplication. `LIVE VIEW` is an experimental feature. [#12519](https://github.com/ClickHouse/ClickHouse/pull/12519) ([vzakaznikov](https://github.com/vzakaznikov)).
* Fix backwards compatibility in binary format of `AggregateFunction(avg, ...)` values. This fixes [#12342](https://github.com/ClickHouse/ClickHouse/issues/12342). [#12486](https://github.com/ClickHouse/ClickHouse/pull/12486) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash in JOIN with a dictionary when joining over an expression of the dictionary key: `t JOIN dict ON expr(dict.id) = t.id`. Disable dictionary join optimisation for this case. [#12458](https://github.com/ClickHouse/ClickHouse/pull/12458) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix overflow when very large LIMIT or OFFSET is specified. This fixes [#10470](https://github.com/ClickHouse/ClickHouse/issues/10470). This fixes [#11372](https://github.com/ClickHouse/ClickHouse/issues/11372). [#12427](https://github.com/ClickHouse/ClickHouse/pull/12427) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Kafka: fix SIGSEGV if there is a message with an error in the middle of the batch. [#12302](https://github.com/ClickHouse/ClickHouse/pull/12302) ([Azat Khuzhin](https://github.com/azat)).
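For the `IN` performance fix above, both spellings are now equally efficient; the table `t` and its column `x` are hypothetical and used only for illustration:

```sql
-- Canonical form:
SELECT count() FROM t WHERE x IN (1, 2, 3);

-- tuple() form: previously interpreted as a function call and slow for large lists,
-- now handled as efficiently as the form above.
SELECT count() FROM t WHERE x IN tuple(1, 2, 3);
```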
#### Improvement
* Keep a smaller amount of logs in ZooKeeper. Avoid excessive growth of ZooKeeper nodes in case of offline replicas when there are many servers/tables/inserts. [#13100](https://github.com/ClickHouse/ClickHouse/pull/13100) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now exceptions are forwarded to the client if an error happens during an ALTER query or mutation. Closes [#11329](https://github.com/ClickHouse/ClickHouse/issues/11329). [#12666](https://github.com/ClickHouse/ClickHouse/pull/12666) ([alesapin](https://github.com/alesapin)).
* Add `QueryTimeMicroseconds`, `SelectQueryTimeMicroseconds` and `InsertQueryTimeMicroseconds` to `system.events`, along with system.metrics, processes, query_log, etc. [#13028](https://github.com/ClickHouse/ClickHouse/pull/13028) ([ianton-ru](https://github.com/ianton-ru)).
* Added `SelectedRows` and `SelectedBytes` to `system.events`, along with system.metrics, processes, query_log, etc. [#12638](https://github.com/ClickHouse/ClickHouse/pull/12638) ([ianton-ru](https://github.com/ianton-ru)).
* Added `current_database` information to `system.query_log`. [#12652](https://github.com/ClickHouse/ClickHouse/pull/12652) ([Amos Bird](https://github.com/amosbird)).
* Allow `TabSeparatedRaw` as input format. [#12009](https://github.com/ClickHouse/ClickHouse/pull/12009) ([hcz](https://github.com/hczhcz)).
* Now `joinGet` supports multi-key lookup. [#12418](https://github.com/ClickHouse/ClickHouse/pull/12418) ([Amos Bird](https://github.com/amosbird)).
* Allow `*Map` aggregate functions to work on Arrays with NULLs. Fixes [#13157](https://github.com/ClickHouse/ClickHouse/issues/13157). [#13225](https://github.com/ClickHouse/ClickHouse/pull/13225) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Avoid overflow in parsing of DateTime values that will lead to negative unix timestamp in their timezone (for example, `1970-01-01 00:00:00` in Moscow). Saturate to zero instead. This fixes [#3470](https://github.com/ClickHouse/ClickHouse/issues/3470). This fixes [#4172](https://github.com/ClickHouse/ClickHouse/issues/4172). [#12443](https://github.com/ClickHouse/ClickHouse/pull/12443) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* AvroConfluent: skip Kafka tombstone records and support skipping broken records. [#13203](https://github.com/ClickHouse/ClickHouse/pull/13203) ([Andrew Onyshchuk](https://github.com/oandrew)).
* Fix wrong error for long queries. It was possible to get a syntax error other than `Max query size exceeded` for a correct query. [#13928](https://github.com/ClickHouse/ClickHouse/pull/13928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix data race in the `lgamma` function. This race was caught only by `tsan`; no side effects actually occurred. [#13842](https://github.com/ClickHouse/ClickHouse/pull/13842) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a 'Week'-interval formatting for ATTACH/ALTER/CREATE QUOTA-statements. [#13417](https://github.com/ClickHouse/ClickHouse/pull/13417) ([vladimir-golovchenko](https://github.com/vladimir-golovchenko)).
* Now broken parts are also reported when encountered in compact part processing. Compact parts are an experimental feature. [#13282](https://github.com/ClickHouse/ClickHouse/pull/13282) ([Amos Bird](https://github.com/amosbird)).
* Fix assert in `geohashesInBox`. This fixes [#12554](https://github.com/ClickHouse/ClickHouse/issues/12554). [#13229](https://github.com/ClickHouse/ClickHouse/pull/13229) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix assert in `parseDateTimeBestEffort`. This fixes [#12649](https://github.com/ClickHouse/ClickHouse/issues/12649). [#13227](https://github.com/ClickHouse/ClickHouse/pull/13227) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Minor optimization in Processors/PipelineExecutor: break out of a loop as soon as it makes sense to do so. [#13058](https://github.com/ClickHouse/ClickHouse/pull/13058) ([Mark Papadakis](https://github.com/markpapadakis)).
* Support `TRUNCATE` without the `TABLE` keyword (see the sketch after this list). [#12653](https://github.com/ClickHouse/ClickHouse/pull/12653) ([Winter Zhang](https://github.com/zhang2014)).
* Fix explain query format being overwritten by the default. This fixes https://github.com/ClickHouse/ClickHouse/issues/12432. [#12541](https://github.com/ClickHouse/ClickHouse/pull/12541) ([BohuTANG](https://github.com/BohuTANG)).
* Allow setting JOIN kind and type in a more standard way: `LEFT SEMI JOIN` instead of `SEMI LEFT JOIN` (see the sketch after this list). For now both are correct. [#12520](https://github.com/ClickHouse/ClickHouse/pull/12520) ([Artem Zuikov](https://github.com/4ertus2)).
* Change the default value of `multiple_joins_rewriter_version` to 2. It enables the new multiple-joins rewriter that knows about column names. [#12469](https://github.com/ClickHouse/ClickHouse/pull/12469) ([Artem Zuikov](https://github.com/4ertus2)).
* Add several metrics for requests to S3 storages. [#12464](https://github.com/ClickHouse/ClickHouse/pull/12464) ([ianton-ru](https://github.com/ianton-ru)).
* Use correct default secure port for clickhouse-benchmark with `--secure` argument. This fixes [#11044](https://github.com/ClickHouse/ClickHouse/issues/11044). [#12440](https://github.com/ClickHouse/ClickHouse/pull/12440) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Rollback insertion errors in `Log`, `TinyLog`, `StripeLog` engines. In previous versions an insertion error led to an inconsistent table state (this worked as documented, and it is normal for these table engines). This fixes [#12402](https://github.com/ClickHouse/ClickHouse/issues/12402). [#12426](https://github.com/ClickHouse/ClickHouse/pull/12426) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Implement `RENAME DATABASE` and `RENAME DICTIONARY` for `Atomic` database engine - Add implicit `{uuid}` macro, which can be used in ZooKeeper path for `ReplicatedMergeTree`. It works with `CREATE ... ON CLUSTER ...` queries. Set `show_table_uuid_in_table_create_query_if_not_nil` to `true` to use it. - Make `ReplicatedMergeTree` engine arguments optional, `/clickhouse/tables/{uuid}/{shard}/` and `{replica}` are used by default. Closes [#12135](https://github.com/ClickHouse/ClickHouse/issues/12135). - Minor fixes. - These changes break backward compatibility of `Atomic` database engine. Previously created `Atomic` databases must be manually converted to new format. Atomic database is an experimental feature. [#12343](https://github.com/ClickHouse/ClickHouse/pull/12343) ([tavplubix](https://github.com/tavplubix)).
* Separated `AWSAuthV4Signer` into different logger, removed excessive `AWSClient: AWSClient` from log messages. [#12320](https://github.com/ClickHouse/ClickHouse/pull/12320) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Better exception message in disk access storage. [#12625](https://github.com/ClickHouse/ClickHouse/pull/12625) ([alesapin](https://github.com/alesapin)).
* Better exception for function `in` with invalid number of arguments. [#12529](https://github.com/ClickHouse/ClickHouse/pull/12529) ([Anton Popov](https://github.com/CurtizJ)).
* Fix error message about adaptive granularity. [#12624](https://github.com/ClickHouse/ClickHouse/pull/12624) ([alesapin](https://github.com/alesapin)).
* Fix SETTINGS parse after FORMAT. [#12480](https://github.com/ClickHouse/ClickHouse/pull/12480) ([Azat Khuzhin](https://github.com/azat)).
* If a MergeTree table did not contain ORDER BY or PARTITION BY, it was possible to request an ALTER to CLEAR all the columns, and the ALTER would get stuck. Fixed [#7941](https://github.com/ClickHouse/ClickHouse/issues/7941). [#12382](https://github.com/ClickHouse/ClickHouse/pull/12382) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Avoid re-loading completion from the history file after each query (to avoid history overlaps with other client sessions). [#13086](https://github.com/ClickHouse/ClickHouse/pull/13086) ([Azat Khuzhin](https://github.com/azat)).
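A sketch of the two syntax improvements referenced above; `t1` and `t2` are hypothetical tables:

```sql
-- JOIN kind and type in the more standard order; the old spelling still works:
SELECT t1.x FROM t1 LEFT SEMI JOIN t2 ON t1.x = t2.x;  -- same as: SEMI LEFT JOIN

-- TRUNCATE without the TABLE keyword:
TRUNCATE t1;  -- same as: TRUNCATE TABLE t1
```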
#### Performance Improvement
* Lower memory usage for some operations up to 2 times. [#12424](https://github.com/ClickHouse/ClickHouse/pull/12424) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Optimize PK lookup for queries that match exact PK range. [#12277](https://github.com/ClickHouse/ClickHouse/pull/12277) ([Ivan Babrou](https://github.com/bobrik)).
* Slightly optimize very short queries with `LowCardinality`. [#14129](https://github.com/ClickHouse/ClickHouse/pull/14129) ([Anton Popov](https://github.com/CurtizJ)).
* Slightly improve performance of aggregation by UInt8/UInt16 keys. [#13091](https://github.com/ClickHouse/ClickHouse/pull/13091) and [#13055](https://github.com/ClickHouse/ClickHouse/pull/13055) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Push down `LIMIT` step for query plan (inside subqueries). [#13016](https://github.com/ClickHouse/ClickHouse/pull/13016) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Parallel primary key lookup and skipping index stages on parts, as described in [#11564](https://github.com/ClickHouse/ClickHouse/issues/11564). [#12589](https://github.com/ClickHouse/ClickHouse/pull/12589) ([Ivan Babrou](https://github.com/bobrik)).
* Convert String-type arguments of functions `if` and `transform` into enums if `optimize_if_transform_strings_to_enum = 1` is set. [#12515](https://github.com/ClickHouse/ClickHouse/pull/12515) ([Artem Zuikov](https://github.com/4ertus2)).
* Replace monotonic functions with their argument in `ORDER BY` if `optimize_monotonous_functions_in_order_by = 1` is set. [#12467](https://github.com/ClickHouse/ClickHouse/pull/12467) ([Artem Zuikov](https://github.com/4ertus2)).
* Add an `ORDER BY` optimization that rewrites `ORDER BY x, f(x)` to `ORDER BY x` if `optimize_redundant_functions_in_order_by = 1` is set (see the sketch after this list). [#12404](https://github.com/ClickHouse/ClickHouse/pull/12404) ([Artem Zuikov](https://github.com/4ertus2)).
* Allow predicate pushdown when a subquery contains a `WITH` clause. This fixes [#12293](https://github.com/ClickHouse/ClickHouse/issues/12293). [#12663](https://github.com/ClickHouse/ClickHouse/pull/12663) ([Winter Zhang](https://github.com/zhang2014)).
* Improve performance of reading from compact parts. Compact parts are an experimental feature. [#12492](https://github.com/ClickHouse/ClickHouse/pull/12492) ([Anton Popov](https://github.com/CurtizJ)).
* Attempt to implement streaming optimization in `DiskS3`. DiskS3 is an experimental feature. [#12434](https://github.com/ClickHouse/ClickHouse/pull/12434) ([Vladimir Chebotarev](https://github.com/excitoon)).
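A sketch of the new `ORDER BY` rewrites, with a hypothetical table `t` and column `x`:

```sql
SET optimize_redundant_functions_in_order_by = 1;
SELECT * FROM t ORDER BY x, exp(x);  -- exp(x) is redundant after x: rewritten to ORDER BY x

SET optimize_monotonous_functions_in_order_by = 1;
SELECT * FROM t ORDER BY exp(x);     -- exp is monotonic: rewritten to ORDER BY x
```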
#### Build/Testing/Packaging Improvement
* Use `shellcheck` for sh tests linting. [#13200](https://github.com/ClickHouse/ClickHouse/pull/13200) [#13207](https://github.com/ClickHouse/ClickHouse/pull/13207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add script which sets labels for pull requests in the GitHub hook. [#13183](https://github.com/ClickHouse/ClickHouse/pull/13183) ([alesapin](https://github.com/alesapin)).
* Remove some of the recursive submodules. See [#13378](https://github.com/ClickHouse/ClickHouse/issues/13378). [#13379](https://github.com/ClickHouse/ClickHouse/pull/13379) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Ensure that all the submodules are from proper URLs. Continuation of [#13379](https://github.com/ClickHouse/ClickHouse/issues/13379). This fixes [#13378](https://github.com/ClickHouse/ClickHouse/issues/13378). [#13397](https://github.com/ClickHouse/ClickHouse/pull/13397) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added support for user-declared settings, which can be accessed from inside queries. This is needed when ClickHouse engine is used as a component of another system. [#13013](https://github.com/ClickHouse/ClickHouse/pull/13013) ([Vitaly Baranov](https://github.com/vitlibar)).
* Added testing for RBAC functionality of INSERT privilege in TestFlows. Expanded tables on which SELECT is being tested. Added Requirements to match new table engine tests. [#13340](https://github.com/ClickHouse/ClickHouse/pull/13340) ([MyroTk](https://github.com/MyroTk)).
* Fix timeout error during server restart in the stress test. [#13321](https://github.com/ClickHouse/ClickHouse/pull/13321) ([alesapin](https://github.com/alesapin)).
* Now the fast test will wait for the server with retries. [#13284](https://github.com/ClickHouse/ClickHouse/pull/13284) ([alesapin](https://github.com/alesapin)).
* Function `materialize()` (the function for ClickHouse testing) now works for NULL as expected - by transforming it into a non-constant column (see the sketch after this list). [#13212](https://github.com/ClickHouse/ClickHouse/pull/13212) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix libunwind build in AArch64. This fixes [#13204](https://github.com/ClickHouse/ClickHouse/issues/13204). [#13208](https://github.com/ClickHouse/ClickHouse/pull/13208) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Even more retries in zkutil gtest to prevent test flakiness. [#13165](https://github.com/ClickHouse/ClickHouse/pull/13165) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Small fixes to the RBAC TestFlows. [#13152](https://github.com/ClickHouse/ClickHouse/pull/13152) ([vzakaznikov](https://github.com/vzakaznikov)).
* Fixing `00960_live_view_watch_events_live.py` test. [#13108](https://github.com/ClickHouse/ClickHouse/pull/13108) ([vzakaznikov](https://github.com/vzakaznikov)).
* Improve cache purge in documentation deploy script. [#13107](https://github.com/ClickHouse/ClickHouse/pull/13107) ([alesapin](https://github.com/alesapin)).
* Rewrote some orphan tests to gtest. Removed useless includes from tests. [#13073](https://github.com/ClickHouse/ClickHouse/pull/13073) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added tests for RBAC functionality of `SELECT` privilege in TestFlows. [#13061](https://github.com/ClickHouse/ClickHouse/pull/13061) ([Ritaank Tiwari](https://github.com/ritaank)).
* Rerun some tests in fast test check. [#12992](https://github.com/ClickHouse/ClickHouse/pull/12992) ([alesapin](https://github.com/alesapin)).
* Fix MSan error in "rdkafka" library. This closes [#12990](https://github.com/ClickHouse/ClickHouse/issues/12990). Updated `rdkafka` to version 1.5 (master). [#12991](https://github.com/ClickHouse/ClickHouse/pull/12991) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix UBSan report in base64 if tests were run on server with AVX-512. This fixes [#12318](https://github.com/ClickHouse/ClickHouse/issues/12318). Author: @qoega. [#12441](https://github.com/ClickHouse/ClickHouse/pull/12441) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix UBSan report in HDFS library. This closes [#12330](https://github.com/ClickHouse/ClickHouse/issues/12330). [#12453](https://github.com/ClickHouse/ClickHouse/pull/12453) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Check that we are able to restore a backup from an old version to the new version. This closes [#8979](https://github.com/ClickHouse/ClickHouse/issues/8979). [#12959](https://github.com/ClickHouse/ClickHouse/pull/12959) ([alesapin](https://github.com/alesapin)).
* Do not build the helper_container image inside integration tests. Build the docker container in CI and use the pre-built helper_container in integration tests. [#12953](https://github.com/ClickHouse/ClickHouse/pull/12953) ([Ilya Yatsishin](https://github.com/qoega)).
* Add a test for `ALTER TABLE CLEAR COLUMN` query for primary key columns. [#12951](https://github.com/ClickHouse/ClickHouse/pull/12951) ([alesapin](https://github.com/alesapin)).
* Increased timeouts in testflows tests. [#12949](https://github.com/ClickHouse/ClickHouse/pull/12949) ([vzakaznikov](https://github.com/vzakaznikov)).
* Fix build of test under Mac OS X. This closes [#12767](https://github.com/ClickHouse/ClickHouse/issues/12767). [#12772](https://github.com/ClickHouse/ClickHouse/pull/12772) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Connector-ODBC updated to mysql-connector-odbc-8.0.21. [#12739](https://github.com/ClickHouse/ClickHouse/pull/12739) ([Ilya Yatsishin](https://github.com/qoega)).
* Adding RBAC syntax tests in TestFlows. [#12642](https://github.com/ClickHouse/ClickHouse/pull/12642) ([vzakaznikov](https://github.com/vzakaznikov)).
* Improve performance of TestKeeper. This will speed up tests with heavy usage of Replicated tables. [#12505](https://github.com/ClickHouse/ClickHouse/pull/12505) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now we check that the server is able to start after stress test runs. This fixes [#12473](https://github.com/ClickHouse/ClickHouse/issues/12473). [#12496](https://github.com/ClickHouse/ClickHouse/pull/12496) ([alesapin](https://github.com/alesapin)).
* Update fmtlib to master (7.0.1). [#12446](https://github.com/ClickHouse/ClickHouse/pull/12446) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add docker image for fast tests. [#12294](https://github.com/ClickHouse/ClickHouse/pull/12294) ([alesapin](https://github.com/alesapin)).
* Rework configuration paths for integration tests. [#12285](https://github.com/ClickHouse/ClickHouse/pull/12285) ([Ilya Yatsishin](https://github.com/qoega)).
* Add compiler option to control that stack frames are not too large. This will help to run the code in fibers with small stack size. [#11524](https://github.com/ClickHouse/ClickHouse/pull/11524) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update gitignore-files. [#13447](https://github.com/ClickHouse/ClickHouse/pull/13447) ([vladimir-golovchenko](https://github.com/vladimir-golovchenko)).
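For the `materialize()` change above, a minimal check (the expected result is sketched in the comment):

```sql
-- materialize() wraps a constant into a full column and now handles NULL too:
SELECT materialize(NULL) AS x, toTypeName(x);  -- NULL of type Nullable(Nothing), as a non-constant column
```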
## ClickHouse release 20.6
### ClickHouse release v20.6.3.28-stable


@@ -78,8 +78,9 @@ if (COMPILER_GCC)
         -Wno-deprecated-declarations -Wno-class-memaccess)
 elseif (COMPILER_CLANG)
     set (SUPPRESS_WARNINGS -Wno-non-virtual-dtor -Wno-sign-compare -Wno-strict-aliasing -Wno-deprecated-declarations)
+    set (CAPNP_PRIVATE_CXX_FLAGS -fno-char8_t)
 endif ()

-target_compile_options(kj PRIVATE ${SUPPRESS_WARNINGS})
-target_compile_options(capnp PRIVATE ${SUPPRESS_WARNINGS})
-target_compile_options(capnpc PRIVATE ${SUPPRESS_WARNINGS})
+target_compile_options(kj PRIVATE ${SUPPRESS_WARNINGS} ${CAPNP_PRIVATE_CXX_FLAGS})
+target_compile_options(capnp PRIVATE ${SUPPRESS_WARNINGS} ${CAPNP_PRIVATE_CXX_FLAGS})
+target_compile_options(capnpc PRIVATE ${SUPPRESS_WARNINGS} ${CAPNP_PRIVATE_CXX_FLAGS})

debian/rules

@@ -18,7 +18,7 @@ ifeq ($(CCACHE_PREFIX),distcc)
     THREADS_COUNT=$(shell distcc -j)
 endif
 ifeq ($(THREADS_COUNT),)
-    THREADS_COUNT=$(shell nproc || grep -c ^processor /proc/cpuinfo || sysctl -n hw.ncpu || echo 4)
+    THREADS_COUNT=$(shell echo $$(( $$(nproc || grep -c ^processor /proc/cpuinfo || sysctl -n hw.ncpu || echo 8) / 2 )) )
 endif
 DEB_BUILD_OPTIONS+=parallel=$(THREADS_COUNT)


@@ -6,7 +6,7 @@ RUN apt-get update \
     && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
         --yes --no-install-recommends --verbose-versions \
     && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
-    && wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
+    && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
     && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
     && apt-key add /tmp/llvm-snapshot.gpg.key \
     && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \


@@ -7,7 +7,7 @@ RUN apt-get update \
     && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
         --yes --no-install-recommends --verbose-versions \
     && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
-    && wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
+    && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
     && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
     && apt-key add /tmp/llvm-snapshot.gpg.key \
     && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
@@ -55,7 +55,6 @@ RUN apt-get update \
             cmake \
             gdb \
             rename \
-            wget \
             build-essential \
             --yes --no-install-recommends
@@ -83,14 +82,14 @@ RUN git clone https://github.com/tpoechtrager/cctools-port.git \
     && rm -rf cctools-port

 # Download toolchain for Darwin
-RUN wget https://github.com/phracker/MacOSX-SDKs/releases/download/10.14-beta4/MacOSX10.14.sdk.tar.xz
+RUN wget -nv https://github.com/phracker/MacOSX-SDKs/releases/download/10.14-beta4/MacOSX10.14.sdk.tar.xz

 # Download toolchain for ARM
 # It contains all required headers and libraries. Note that it's named as "gcc" but actually we are using clang for cross compiling.
-RUN wget "https://developer.arm.com/-/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz?revision=2e88a73f-d233-4f96-b1f4-d8b36e9bb0b9&la=en" -O gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz
+RUN wget -nv "https://developer.arm.com/-/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz?revision=2e88a73f-d233-4f96-b1f4-d8b36e9bb0b9&la=en" -O gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz

 # Download toolchain for FreeBSD 11.3
-RUN wget https://clickhouse-datasets.s3.yandex.net/toolchains/toolchains/freebsd-11.3-toolchain.tar.xz
+RUN wget -nv https://clickhouse-datasets.s3.yandex.net/toolchains/toolchains/freebsd-11.3-toolchain.tar.xz

 COPY build.sh /
 CMD ["/bin/bash", "/build.sh"]


@@ -7,7 +7,7 @@ RUN apt-get update \
     && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
         --yes --no-install-recommends --verbose-versions \
     && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
-    && wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
+    && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
     && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
     && apt-key add /tmp/llvm-snapshot.gpg.key \
     && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
@@ -34,7 +34,7 @@ RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
 ENV APACHE_PUBKEY_HASH="bba6987b63c63f710fd4ed476121c588bc3812e99659d27a855f8c4d312783ee66ad6adfce238765691b04d62fa3688f"
 RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
-    && wget -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
+    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
     && echo "${APACHE_PUBKEY_HASH} /tmp/arrow-keyring.deb" | sha384sum -c \
     && dpkg -i /tmp/arrow-keyring.deb


@@ -7,7 +7,7 @@ RUN apt-get update \
     && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
         --yes --no-install-recommends --verbose-versions \
     && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
-    && wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
+    && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
     && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
     && apt-key add /tmp/llvm-snapshot.gpg.key \
     && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \


@@ -15,7 +15,7 @@ RUN apt-get --allow-unauthenticated update -y \
         gpg-agent \
         git

-RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | sudo apt-key add -
+RUN wget -nv -O - https://apt.kitware.com/keys/kitware-archive-latest.asc | sudo apt-key add -
 RUN sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main'
 RUN sudo echo "deb [trusted=yes] http://apt.llvm.org/bionic/ llvm-toolchain-bionic-8 main" >> /etc/apt/sources.list


@@ -7,7 +7,7 @@ RUN apt-get update \
     && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
         --yes --no-install-recommends --verbose-versions \
     && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
-    && wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
+    && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
     && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
     && apt-key add /tmp/llvm-snapshot.gpg.key \
     && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
@@ -61,7 +61,6 @@ RUN apt-get update \
         software-properties-common \
         tzdata \
         unixodbc \
-        wget \
         --yes --no-install-recommends

 # This symlink required by gcc to find lld compiler
@@ -70,7 +69,7 @@ RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld
 ARG odbc_driver_url="https://github.com/ClickHouse/clickhouse-odbc/releases/download/v1.1.4.20200302/clickhouse-odbc-1.1.4-Linux.tar.gz"
 RUN mkdir -p /tmp/clickhouse-odbc-tmp \
-    && wget --quiet -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
+    && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
     && cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
     && odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \
     && odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \


@@ -2,6 +2,15 @@
 <profiles>
     <default>
         <max_execution_time>10</max_execution_time>
+        <!--
+            Don't let the fuzzer change this setting (I've actually seen it
+            do this before).
+        -->
+        <constraints>
+            <max_execution_time>
+                <max>10</max>
+            </max_execution_time>
+        </constraints>
     </default>
 </profiles>
 </yandex>
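With the constraint above in place, the fuzzer (or any client using this profile) can no longer change the setting; a sketch of the effect from a client session:

```sql
SET max_execution_time = 300;
-- Fails: the profile constraint caps max_execution_time at 10,
-- so this SET statement throws a setting-constraint violation error.
```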


@@ -91,7 +91,7 @@ function fuzz
     ./clickhouse-client --query "select elapsed, query from system.processes" ||:
     killall clickhouse-server ||:

-    for x in {1..10}
+    for _ in {1..10}
     do
         if ! pgrep -f clickhouse-server
         then
@ -172,8 +172,59 @@ case "$stage" in
echo "failure" > status.txt echo "failure" > status.txt
echo "Fuzzer failed ($fuzzer_exit_code). See the logs" > description.txt echo "Fuzzer failed ($fuzzer_exit_code). See the logs" > description.txt
fi fi
;&
"report")
cat > report.html <<EOF ||:
<!DOCTYPE html>
<html lang="en">
<link rel="preload" as="font" href="https://yastatic.net/adv-www/_/sUYVCPUAQE7ExrvMS7FoISoO83s.woff2" type="font/woff2" crossorigin="anonymous"/>
<style>
@font-face {
font-family:'Yandex Sans Display Web';
src:url(https://yastatic.net/adv-www/_/H63jN0veW07XQUIA2317lr9UIm8.eot);
src:url(https://yastatic.net/adv-www/_/H63jN0veW07XQUIA2317lr9UIm8.eot?#iefix) format('embedded-opentype'),
url(https://yastatic.net/adv-www/_/sUYVCPUAQE7ExrvMS7FoISoO83s.woff2) format('woff2'),
url(https://yastatic.net/adv-www/_/v2Sve_obH3rKm6rKrtSQpf-eB7U.woff) format('woff'),
url(https://yastatic.net/adv-www/_/PzD8hWLMunow5i3RfJ6WQJAL7aI.ttf) format('truetype'),
url(https://yastatic.net/adv-www/_/lF_KG5g4tpQNlYIgA0e77fBSZ5s.svg#YandexSansDisplayWeb-Regular) format('svg');
font-weight:400;
font-style:normal;
font-stretch:normal
}
exit $task_exit_code body { font-family: "Yandex Sans Display Web", Arial, sans-serif; background: #EEE; }
h1 { margin-left: 10px; }
th, td { border: 0; padding: 5px 10px 5px 10px; text-align: left; vertical-align: top; line-height: 1.5; background-color: #FFF;
td { white-space: pre; font-family: Monospace, Courier New; }
border: 0; box-shadow: 0 0 0 1px rgba(0, 0, 0, 0.05), 0 8px 25px -5px rgba(0, 0, 0, 0.1); }
a { color: #06F; text-decoration: none; }
a:hover, a:active { color: #F40; text-decoration: underline; }
table { border: 0; }
.main { margin-left: 10%; }
p.links a { padding: 5px; margin: 3px; background: #FFF; line-height: 2; white-space: nowrap; box-shadow: 0 0 0 1px rgba(0, 0, 0, 0.05), 0 8px 25px -5px rgba(0, 0, 0, 0.1); }
th { cursor: pointer; }
</style>
<title>AST Fuzzer for PR #${PR_TO_TEST} @ ${SHA_TO_TEST}</title>
</head>
<body>
<div class="main">
<h1>AST Fuzzer for PR #${PR_TO_TEST} @ ${SHA_TO_TEST}</h1>
<p class="links">
<a href="fuzzer.log">fuzzer.log</a>
<a href="server.log">server.log</a>
<a href="main.log">main.log</a>
</p>
<table>
<tr><th>Test name</th><th>Test status</th><th>Description</th></tr>
<tr><td>AST Fuzzer</td><td>$(cat status.txt)</td><td>$(cat description.txt)</td></tr>
</table>
</body>
</html>
EOF
;& ;&
esac esac
exit $task_exit_code
View File
@@ -46,7 +46,7 @@ RUN set -eux; \
    \
# this "case" statement is generated via "update.sh"
    \
-   if ! wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
    if ! wget -nv -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
        echo >&2 "error: failed to download 'docker-${DOCKER_VERSION}' from '${DOCKER_CHANNEL}' for '${x86_64}'"; \
        exit 1; \
    fi; \
View File
@@ -5,3 +5,4 @@ services:
    restart: always
    ports:
      - 6380:6379
    command: redis-server --requirepass "clickhouse"
View File
@@ -536,7 +536,9 @@ create table queries engine File(TSVWithNamesAndTypes, 'report/queries.tsv')
    left join query_display_names
        on query_metric_stats.test = query_display_names.test
        and query_metric_stats.query_index = query_display_names.query_index
-where metric_name = 'server_time'
-- 'server_time' is rounded down to ms, which might be bad for very short queries.
-- Use 'client_time' instead.
where metric_name = 'client_time'
order by test, query_index, metric_name
;
@@ -888,7 +890,10 @@ for log in *-err.log
do
    test=$(basename "$log" "-err.log")
    {
-       grep -H -m2 -i '\(Exception\|Error\):[^:]' "$log" \
        # The second grep is a heuristic for error messages like
        # "socket.timeout: timed out".
        grep -h -m2 -i '\(Exception\|Error\):[^:]' "$log" \
        || grep -h -m2 -i '^[^ ]\+: ' "$log" \
        || head -2 "$log"
    } | sed "s/^/$test\t/" >> run-errors.tsv ||:
done
View File
@@ -168,12 +168,6 @@ def nextRowAnchor():
    global table_anchor
    return f'{table_anchor}.{row_anchor + 1}'

-def setRowAnchor(anchor_row_part):
-    global row_anchor
-    global table_anchor
-    row_anchor = anchor_row_part
-    return currentRowAnchor()

def advanceRowAnchor():
    global row_anchor
    global table_anchor
@@ -480,11 +474,12 @@ if args.report == 'main':
    total_runs = (nominal_runs + 1) * 2  # one prewarm run, two servers
    attrs = ['' for c in columns]
    for r in rows:
        anchor = f'{currentTableAnchor()}.{r[0]}'
        if float(r[6]) > 1.5 * total_runs:
            # FIXME should be 15s max -- investigate parallel_insert
            slow_average_tests += 1
            attrs[6] = f'style="background: {color_bad}"'
-           errors_explained.append([f'<a href="./all-queries.html#all-query-times.{r[0]}.0">The test \'{r[0]}\' is too slow to run as a whole. Investigate whether the create and fill queries can be sped up'])
            errors_explained.append([f'<a href="#{anchor}">The test \'{r[0]}\' is too slow to run as a whole. Investigate whether the create and fill queries can be sped up'])
        else:
            attrs[6] = ''
@@ -495,7 +490,7 @@ if args.report == 'main':
        else:
            attrs[5] = ''

-       text += tableRow(r, attrs)
        text += tableRow(r, attrs, anchor)

    text += tableEnd()
    tables.append(text)
View File
@@ -12,8 +12,8 @@ RUN apt-get update --yes \
        strace \
        --yes --no-install-recommends

-#RUN wget -q -O - http://files.viva64.com/etc/pubkey.txt | sudo apt-key add -
#RUN wget -nv -O - http://files.viva64.com/etc/pubkey.txt | sudo apt-key add -
-#RUN sudo wget -O /etc/apt/sources.list.d/viva64.list http://files.viva64.com/etc/viva64.list
#RUN sudo wget -nv -O /etc/apt/sources.list.d/viva64.list http://files.viva64.com/etc/viva64.list
#
#RUN apt-get --allow-unauthenticated update -y \
#    && env DEBIAN_FRONTEND=noninteractive \
@@ -24,10 +24,10 @@ ENV PKG_VERSION="pvs-studio-latest"

RUN set -x \
    && export PUBKEY_HASHSUM="486a0694c7f92e96190bbfac01c3b5ac2cb7823981db510a28f744c99eabbbf17a7bcee53ca42dc6d84d4323c2742761" \
-   && wget https://files.viva64.com/etc/pubkey.txt -O /tmp/pubkey.txt \
    && wget -nv https://files.viva64.com/etc/pubkey.txt -O /tmp/pubkey.txt \
    && echo "${PUBKEY_HASHSUM} /tmp/pubkey.txt" | sha384sum -c \
    && apt-key add /tmp/pubkey.txt \
-   && wget "https://files.viva64.com/${PKG_VERSION}.deb" \
    && wget -nv "https://files.viva64.com/${PKG_VERSION}.deb" \
    && { debsig-verify ${PKG_VERSION}.deb \
    || echo "WARNING: Some file was just downloaded from the internet without any validation and we are installing it into the system"; } \
    && dpkg -i "${PKG_VERSION}.deb"
View File
@@ -26,7 +26,7 @@ RUN apt-get update -y \
        zookeeperd

RUN mkdir -p /tmp/clickhouse-odbc-tmp \
-   && wget --quiet -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
    && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
    && cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
    && odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \
    && odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
View File
@@ -71,7 +71,7 @@ RUN apt-get --allow-unauthenticated update -y \
        zookeeperd

RUN mkdir -p /tmp/clickhouse-odbc-tmp \
-   && wget --quiet -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
    && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
    && cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
    && odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \
    && odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
View File
@@ -33,7 +33,7 @@ RUN apt-get update -y \
        qemu-user-static

RUN mkdir -p /tmp/clickhouse-odbc-tmp \
-   && wget --quiet -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
    && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
    && cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
    && odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \
    && odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
View File
@@ -44,7 +44,7 @@ RUN set -eux; \
    \
# this "case" statement is generated via "update.sh"
    \
-   if ! wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
    if ! wget -nv -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
        echo >&2 "error: failed to download 'docker-${DOCKER_VERSION}' from '${DOCKER_CHANNEL}' for '${x86_64}'"; \
        exit 1; \
    fi; \
View File
@@ -7,7 +7,7 @@ toc_title: RabbitMQ

This engine allows integrating ClickHouse with [RabbitMQ](https://www.rabbitmq.com).

`RabbitMQ` lets you:

- Publish or subscribe to data flows.
- Process streams as they become available.

@@ -44,8 +44,8 @@ Optional parameters:

- `rabbitmq_routing_key_list` – A comma-separated list of routing keys.
- `rabbitmq_row_delimiter` – Delimiter character, which ends the message.
- `rabbitmq_num_consumers` – The number of consumers per table. Default: `1`. Specify more consumers if the throughput of one consumer is insufficient.
- `rabbitmq_num_queues` – The number of queues per consumer. Default: `1`. Specify more queues if the capacity of one queue per consumer is insufficient. A single queue can contain up to 50K messages at the same time.
- `rabbitmq_transactional_channel` – Wrap `INSERT` queries in transactions. Default: `0`.

Required configuration:

@@ -72,7 +72,7 @@ Example:

## Description {#description}

`SELECT` is not particularly useful for reading messages (except for debugging), because each message can be read only once. It is more practical to create real-time threads using [materialized views](../../../sql-reference/statements/create/view.md). To do this:

1. Use the engine to create a RabbitMQ consumer and consider it a data stream.
2. Create a table with the desired structure.

@@ -86,13 +86,13 @@ There can be no more than one exchange per table. One exchange can be shared bet

Exchange type options:

- `direct` - Routing is based on the exact matching of keys. Example table key list: `key1,key2,key3,key4,key5`; a message key can equal any of them.
- `fanout` - Routing to all tables (where the exchange name is the same) regardless of the keys.
- `topic` - Routing is based on patterns with dot-separated keys. Examples: `*.logs`, `records.*.*.2020`, `*.2018,*.2019,*.2020`.
- `headers` - Routing is based on `key=value` matches with the setting `x-match=all` or `x-match=any`. Example table key list: `x-match=all,format=logs,type=report,year=2020`.
- `consistent-hash` - Data is evenly distributed between all bound tables (where the exchange name is the same). Note that this exchange type must be enabled with the RabbitMQ plugin: `rabbitmq-plugins enable rabbitmq_consistent_hash_exchange`.

If the exchange type is not specified, the default is `fanout`, and the routing keys for data publishing must be randomized in the range `[1, num_consumers]` for every message/batch (or in the range `[1, num_consumers * num_queues]` if `rabbitmq_num_queues` is set). This table configuration works quicker than any other, especially when the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` parameters are set.
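To make the routing keys meaningful rather than randomized, set an explicit exchange type; a hedged sketch (hypothetical table and exchange names) following the examples in this document:

``` sql
CREATE TABLE queue (
    key UInt64,
    value UInt64
) ENGINE = RabbitMQ SETTINGS rabbitmq_host_port = 'localhost:5672',
    rabbitmq_exchange_name = 'exchange1',
    rabbitmq_exchange_type = 'direct',          -- messages routed by exact key match
    rabbitmq_routing_key_list = 'key1,key2',
    rabbitmq_format = 'JSONEachRow';
```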
If the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` parameters are specified along with `rabbitmq_exchange_type`, then:
View File
@@ -31,7 +31,7 @@ For a description of request parameters, see [statement description](../../../sq

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one row out of all the rows with the same sorting key:
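As a hedged illustration of the `ver` parameter (hypothetical table; any of the allowed types behaves the same way):

``` sql
CREATE TABLE readings (
    sensor_id  UInt32,
    value      Float64,
    updated_at DateTime  -- used as the version column
) ENGINE = ReplacingMergeTree(updated_at)
ORDER BY sensor_id;
-- After merges, only the row with the maximum updated_at per sensor_id survives.
```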
View File
@@ -121,7 +121,7 @@ To find out why we need two rows for each change, see [Algorithm](#table_engines

**Notes on Usage**

1. The program that writes the data should remember the state of an object to be able to cancel it. The “cancel” string should contain copies of the primary key fields and the version of the “state” string, and the opposite `Sign`. This increases the initial size of storage but allows writing the data quickly.
2. Long growing arrays in columns reduce the efficiency of the engine due to the load for writing. The more straightforward the data, the better the efficiency.
3. `SELECT` results depend strongly on the consistency of the history of object changes. Be accurate when preparing data for inserting. You can get unpredictable results with inconsistent data, such as negative values for non-negative metrics like session depth.
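A hedged sketch of note 1 (hypothetical table with columns `UserID, PageViews, Duration, Sign Int8`): the cancel row repeats the key and state fields of the row it revokes, with the opposite `Sign`:

``` sql
INSERT INTO visits VALUES (4324182021466249494, 5, 146, 1);   -- state row
INSERT INTO visits VALUES (4324182021466249494, 5, 146, -1);  -- cancel: same fields, Sign = -1
INSERT INTO visits VALUES (4324182021466249494, 6, 185, 1);   -- new state row
```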
View File
@@ -36,7 +36,7 @@ Examples:
$ curl 'http://localhost:8123/?query=SELECT%201'
1
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1
$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
View File
@@ -33,7 +33,7 @@ Para obtener una descripción de los parámetros de solicitud, consulte [descrip

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one row out of all the rows with the same primary key:
View File
@@ -38,7 +38,7 @@ Ejemplos:
$ curl 'http://localhost:8123/?query=SELECT%201'
1
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1
$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
View File
@@ -33,7 +33,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one row out of all the rows with the same primary key:
View File
@@ -38,7 +38,7 @@ Ok.
$ curl 'http://localhost:8123/?query=SELECT%201'
1
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1
$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
View File
@@ -33,7 +33,7 @@ Pour une description des paramètres de requête, voir [demande de description](

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one row out of all the rows with the same primary key:
View File
@@ -38,7 +38,7 @@ Exemple:
$ curl 'http://localhost:8123/?query=SELECT%201'
1
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1
$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
View File
@@ -33,7 +33,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one row out of all the rows with the same primary key:
View File
@@ -38,7 +38,7 @@ When using the GET method, 'readonly' is set.
$ curl 'http://localhost:8123/?query=SELECT%201'
1
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1
$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
View File
@@ -1,14 +1,11 @@
---
toc_priority: 47
toc_title: "\u65E5\u4ED8"
---

# Date {#date}

A date type. Stored as the number of days since 1970-01-01, in two bytes as an unsigned integer. Allows storing values from just after the start of the Unix Epoch up to an upper threshold defined by a constant at the compilation stage (currently, up to the year 2106, but the last fully supported year is 2105).

The minimum value is output as 1970-01-01.

The date value is stored without the time zone.
View File
@@ -0,0 +1,122 @@
---
toc_priority: 6
toc_title: RabbitMQ
---

# RabbitMQ {#rabbitmq-engine}

This engine works with [RabbitMQ](https://www.rabbitmq.com).

`RabbitMQ` lets you:

- Publish or subscribe to data flows.
- Process streams as they become available.

## Creating a Table {#table_engine-rabbitmq-creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
) ENGINE = RabbitMQ SETTINGS
    rabbitmq_host_port = 'host:port',
    rabbitmq_exchange_name = 'exchange_name',
    rabbitmq_format = 'data_format'[,]
    [rabbitmq_exchange_type = 'exchange_type',]
    [rabbitmq_routing_key_list = 'key1,key2,...',]
    [rabbitmq_row_delimiter = 'delimiter_symbol',]
    [rabbitmq_num_consumers = N,]
    [rabbitmq_num_queues = N,]
    [rabbitmq_transactional_channel = 0]
```

Required parameters:

- `rabbitmq_host_port` – The server address (`host:port`). For example: `localhost:5672`.
- `rabbitmq_exchange_name` – The RabbitMQ exchange name.
- `rabbitmq_format` – The message format. Uses the same notation as the SQL `FORMAT` function, for example, `JSONEachRow`. For more information, see the [Formats for input and output data](../../../interfaces/formats.md) section.

Optional parameters:

- `rabbitmq_exchange_type` – The RabbitMQ exchange type: `direct`, `fanout`, `topic`, `headers`, `consistent-hash`. Default: `fanout`.
- `rabbitmq_routing_key_list` – A comma-separated list of routing keys.
- `rabbitmq_row_delimiter` – Delimiter character, which ends the message.
- `rabbitmq_num_consumers` – The number of consumers per table. Default: `1`. Specify more consumers if the throughput of one consumer is insufficient.
- `rabbitmq_num_queues` – The number of queues per consumer. Default: `1`. Specify more queues if the capacity of one queue per consumer is insufficient. A single queue can hold up to 50 thousand messages at the same time.
- `rabbitmq_transactional_channel` – Wrap `INSERT` queries in transactions. Default: `0`.

Required configuration:

The RabbitMQ server configuration is added using the ClickHouse config file.

``` xml
<rabbitmq>
    <username>root</username>
    <password>clickhouse</password>
</rabbitmq>
```

Example:

``` sql
CREATE TABLE queue (
    key UInt64,
    value UInt64
) ENGINE = RabbitMQ SETTINGS rabbitmq_host_port = 'localhost:5672',
    rabbitmq_exchange_name = 'exchange1',
    rabbitmq_format = 'JSONEachRow',
    rabbitmq_num_consumers = 5;
```

## Description {#description}

`SELECT` is not particularly useful for reading messages (except for debugging), because each message can be read only once. It is more practical to create real-time threads using [materialized views](../../../sql-reference/statements/create/view.md). To do this:

1. Use the engine to create a RabbitMQ consumer and treat it as a data stream.
2. Create a table with the desired structure.
3. Create a materialized view that converts data from the engine and puts it into the previously created table.

When a materialized view joins the engine, it starts collecting data in the background. This allows you to continually receive messages from RabbitMQ and convert them to the required format using `SELECT`.

One RabbitMQ table can have as many materialized views as you like.

Data is routed based on the `rabbitmq_exchange_type` and `rabbitmq_routing_key_list` parameters.

There can be no more than one exchange per table. One exchange can be shared between multiple tables: it enables routing to multiple tables at the same time.

Exchange type options:

- `direct` - Routing is based on the exact matching of keys. Example table key list: `key1,key2,key3,key4,key5`; a message key can equal any of them.
- `fanout` - Routing to all tables (where the exchange name is the same) regardless of the keys.
- `topic` - Routing is based on patterns with dot-separated keys. For example: `*.logs`, `records.*.*.2020`, `*.2018,*.2019,*.2020`.
- `headers` - Routing is based on `key=value` matches with the setting `x-match=all` or `x-match=any`. Example table key list: `x-match=all,format=logs,type=report,year=2020`.
- `consistent-hash` - Data is evenly distributed between all bound tables (where the exchange name is the same). Note that this exchange type must be enabled with the RabbitMQ plugin: `rabbitmq-plugins enable rabbitmq_consistent_hash_exchange`.

If the exchange type is not specified, `fanout` is used by default. In that case, the routing keys for data publishing must be randomized in the range `[1, num_consumers]` for every message/batch (or in the range `[1, num_consumers * num_queues]` if `rabbitmq_num_queues` is set). This table configuration works quicker than any other, especially when the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` parameters are set.

If the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` parameters are specified along with `rabbitmq_exchange_type`, then:

- the `rabbitmq-consistent-hash-exchange` plugin must be enabled;
- the `message_id` property of the published messages must be specified (unique for each message/batch).

Example:

``` sql
CREATE TABLE queue (
    key UInt64,
    value UInt64
) ENGINE = RabbitMQ SETTINGS rabbitmq_host_port = 'localhost:5672',
    rabbitmq_exchange_name = 'exchange1',
    rabbitmq_exchange_type = 'headers',
    rabbitmq_routing_key_list = 'format=logs,type=report,year=2020',
    rabbitmq_format = 'JSONEachRow',
    rabbitmq_num_consumers = 5;

CREATE TABLE daily (key UInt64, value UInt64)
    ENGINE = MergeTree();

CREATE MATERIALIZED VIEW consumer TO daily
    AS SELECT key, value FROM queue;

SELECT key, value FROM daily ORDER BY key;
```
View File
@@ -25,7 +25,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

**ReplacingMergeTree Parameters**

- `ver` — the column with the version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one row out of all the rows with the same sorting key value:
View File
@@ -116,7 +116,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

**Notes on Usage**

1. The program that writes the data should remember the state of an object to be able to cancel it. The cancel row should contain copies of the primary key fields and the version of the state row, and the opposite `Sign`. This increases the initial size of storage but allows writing the data quickly.
2. Long growing arrays in columns reduce the efficiency of the engine due to the load for writing. The more straightforward the data, the higher the efficiency.
3. `SELECT` results depend strongly on the consistency of the history of object changes. Be accurate when preparing data for inserting. You can get unpredictable results with inconsistent data, such as negative values for non-negative metrics like session depth.
View File
@@ -31,7 +31,7 @@ Ok.
$ curl 'http://localhost:8123/?query=SELECT%201'
1
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1
$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
View File
@@ -401,13 +401,91 @@ INSERT INTO test VALUES (lower('Hello')), (lower('world')), (lower('INSERT')), (

Sets the type of [JOIN](../../sql-reference/statements/select/join.md) behavior. When merging tables, empty cells may appear. ClickHouse fills them differently based on this setting.

Possible values:

- 0 — The empty cells are filled with the default value of the corresponding field type.
- 1 — `JOIN` behaves the same way as in standard SQL. The type of the corresponding field is converted to [Nullable](../../sql-reference/data-types/nullable.md#data_type-nullable), and empty cells are filled with [NULL](../../sql-reference/syntax.md).

## partial_merge_join_optimizations {#partial_merge_join_optimizations}

Disables all optimizations for [JOIN](../../sql-reference/statements/select/join.md) queries with the partial merge join algorithm.

Optimizations are enabled by default, which can lead to wrong results. If you see suspicious results in your queries, disable the optimizations with this setting. The optimizations can differ between versions of the ClickHouse server.

Possible values:

- 0 — Optimizations disabled.
- 1 — Optimizations enabled.

Default value: 1.

## partial_merge_join_rows_in_right_blocks {#partial_merge_join_rows_in_right_blocks}

Limits the size of the right-hand join data blocks for [JOIN](../../sql-reference/statements/select/join.md) queries with the partial merge join algorithm.

The ClickHouse server:

1. Splits the right-hand join data into blocks with the specified number of rows.
2. Indexes each block with its minimum and maximum values.
3. Unloads the prepared blocks to disk, if possible.

Possible values:

- A positive integer. Recommended range of values: [1000, 100000].

Default value: 65536.

## join_on_disk_max_files_to_merge {#join_on_disk_max_files_to_merge}

Sets the number of files allowed for parallel sorting while executing MergeJoin operations on disk.

The bigger the value of this setting, the more RAM is used and the less disk I/O is needed.

Possible values:

- A positive integer greater than 2.

Default value: 64.

## temporary_files_codec {#temporary_files_codec}

Sets the compression codec for the temporary files on disk that are used for sorting and joining.

Possible values:

- LZ4 — apply compression using the [LZ4](https://ru.wikipedia.org/wiki/LZ4) algorithm.
- NONE — apply no compression.

Default value: LZ4.
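A hedged sketch tying the settings above together (the values are the documented defaults or points inside the recommended ranges, not tuning advice):

``` sql
SET partial_merge_join_optimizations = 0;            -- disable if results look suspicious
SET partial_merge_join_rows_in_right_blocks = 10000; -- within the recommended [1000, 100000]
SET join_on_disk_max_files_to_merge = 64;            -- the default
SET temporary_files_codec = 'LZ4';                   -- the default
```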
## any_join_distinct_right_table_keys {#any_join_distinct_right_table_keys}

Enables the legacy behavior of the ClickHouse server for `ANY INNER|LEFT JOIN` operations.

!!! note "Warning"
    Use this setting only for backward compatibility, if your use cases depend on the legacy `JOIN` behavior.

When the legacy behavior is enabled:

- The results of `t1 ANY LEFT JOIN t2` and `t2 ANY RIGHT JOIN t1` operations are not equal, because ClickHouse uses the logic with many-to-one left-to-right table key mapping.
- The results of `ANY INNER JOIN` operations contain all rows from the left table, as the `SEMI LEFT JOIN` operation does.

When the legacy behavior is disabled:

- The results of `t1 ANY LEFT JOIN t2` and `t2 ANY RIGHT JOIN t1` operations are equal, because ClickHouse uses the logic that provides one-to-many key mapping in `ANY RIGHT JOIN` operations.
- The results of `ANY INNER JOIN` operations contain one row per key from both the left and the right tables.

Possible values:

- 0 — The legacy behavior is disabled.
- 1 — The legacy behavior is enabled.

Default value: 0.

See also:

- [JOIN strictness](../../sql-reference/statements/select/join.md#select-join-strictness)

## max\_block\_size {#setting-max_block_size}

In ClickHouse, data is processed by blocks (sets of column parts). The internal processing cycles for a single block are efficient enough, but there are noticeable expenditures on each block. The `max_block_size` setting is a recommendation for what block size (in number of rows) to load from tables. The block size shouldn't be too small, so that the per-block expenditures stay negligible, and not too large, so that a query with LIMIT that completes after the first block is processed quickly. The goal is to avoid consuming too much memory when extracting a large number of columns in multiple threads, and to preserve at least some cache locality.
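A hedged example of the trade-off described above (hypothetical query over the standard `system.query_log` table): a smaller block size lets a `LIMIT` query finish after less work, at the cost of more per-block overhead:

``` sql
SELECT event_time, query
FROM system.query_log
ORDER BY event_time DESC
LIMIT 10
SETTINGS max_block_size = 1024;
```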
View File
@@ -24,7 +24,7 @@

The utility should be run manually as follows:

``` bash
$ clickhouse-copier --daemon --config zookeeper.xml --task-path /task/path --base-dir /path/to/dir
```

Launch parameters:
View File
@@ -36,7 +36,9 @@ FROM <left_table>

!!! note "Note"
    The default strictness value can be overridden using the [join\_default\_strictness](../../../operations/settings/settings.md#settings-join_default_strictness) setting.

The behavior of the ClickHouse server for `ANY JOIN` operations depends on the [any_join_distinct_right_table_keys](../../../operations/settings/settings.md#any_join_distinct_right_table_keys) setting.

### ASOF JOIN Usage {#asof-join-usage}

`ASOF JOIN` is useful when you need to join records that have no exact match.
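A hedged `ASOF JOIN` sketch (hypothetical tables): for each event, take the most recent quote at or before the event time for the same symbol:

``` sql
SELECT e.symbol, e.event_time, q.price
FROM events AS e
ASOF INNER JOIN quotes AS q
    ON e.symbol = q.symbol AND e.event_time >= q.price_time;
```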
View File
@@ -33,7 +33,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one row out of all the rows with the same primary key:
View File
@@ -38,7 +38,7 @@ When using the GET method, 'readonly' is set. In other words…
$ curl 'http://localhost:8123/?query=SELECT%201'
1
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1
$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
View File
@@ -2,44 +2,47 @@

The most powerful table engines in ClickHouse are the `MergeTree` engine and the other engines of this family (`*MergeTree`).

Engines of the `MergeTree` family are designed for inserting a very large amount of data into a table. The data is quickly written to the table part by part, and the parts are merged in the background according to certain rules. This approach is much more efficient than continually rewriting the stored data on insert.

Main features:

- Stores data sorted by primary key.

    This allows you to create a small sparse index that speeds up data retrieval.

- Supports data partitioning, if the [partitioning key](custom-partitioning-key.md) is specified.

    Some partitioned operations in ClickHouse are faster than general operations on the same data with the same result set. ClickHouse also automatically cuts off the partition data where the partitioning key is specified in the query, which likewise improves query performance.

- Supports data replication.

    The `ReplicatedMergeTree` family of tables provides data replication. For more information, see the [Data replication](replication.md) section.

- Supports data sampling.

    If necessary, you can set a data sampling method for the table.

!!! note "Note"
    The [Merge](../special/merge.md#merge) engine does not belong to the `*MergeTree` family.

## Creating a Table {#table_engine-mergetree-creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [TTL expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [TTL expr2],
    ...
    INDEX index_name1 expr1 TYPE type1(...) GRANULARITY value1,
    INDEX index_name2 expr2 TYPE type2(...) GRANULARITY value2
) ENGINE = MergeTree()
ORDER BY expr
[PARTITION BY expr]
[PRIMARY KEY expr]
[SAMPLE BY expr]
[TTL expr [DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'], ...]
[SETTINGS name=value, ...]
```

For a description of the parameters above, see the [CREATE query description](../../../engines/table-engines/mergetree-family/mergetree.md).

@@ -62,7 +65,7 @@ Clickhouse 中最强大的表引擎当属 `MergeTree` (合并树)引擎及

For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column of type [Date](../../../engines/table-engines/mergetree-family/mergetree.md). The partition names have the `"YYYYMM"` format.

- `PRIMARY KEY` - The primary key, optional, if you need to [choose a primary key that differs from the sorting key](#choosing-a-primary-key-that-differs-from-the-sorting-key).

    By default the primary key is the same as the sorting key (which is specified by the `ORDER BY` clause).
    Thus in most cases it is unnecessary to specify a separate `PRIMARY KEY` clause.

@@ -72,17 +75,19 @@ Clickhouse 中最强大的表引擎当属 `MergeTree` (合并树)引擎及

    If a sampling expression is used, the primary key must contain it. Example:
    `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`

- `TTL` - A list of rules that specify the storage duration of rows and define the logic of automatic movement of data parts between disks and volumes, optional.

    The expression must contain at least one column of type `Date` or `DateTime`, for example:
    `TTL date + INTERVAL 1 DAY`

    The type of the rule `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'` specifies the action to perform when the condition (the specified time is reached) is satisfied: removing the expired rows, or moving the data part (if all rows in the part satisfy the expression) to the specified disk (`TO DISK 'xxx'`) or volume (`TO VOLUME 'xxx'`). The default rule type is removal (`DELETE`). A list of multiple rules can be specified, but there should be no more than one `DELETE` rule.

    For more details, see [TTL for columns and tables](#table_engine-mergetree-ttl).

- `SETTINGS` — Additional parameters that control the behavior of `MergeTree`:

    - `index_granularity` — The index granularity: the number of data rows between adjacent "marks" of the index. Default value: 8192. See [Data storage](#mergetree-data-storage).
    - `index_granularity_bytes` — The index granularity in bytes. Default value: 10Mb. To restrict the granularity by the number of rows only, set to 0 (not recommended).
    - `enable_mixed_granularity_parts` — Enables or disables controlling the index granularity size via `index_granularity_bytes`. Before version 19.11, only the `index_granularity` setting could be used for restricting the granularity size. The `index_granularity_bytes` setting improves ClickHouse performance when selecting data from tables with big rows (tens and hundreds of megabytes); if your tables have big rows, you can enable this setting to improve the performance of `SELECT` queries.
    - `use_minimalistic_part_header_in_zookeeper` — Enables minimal data part headers in ZooKeeper. If `use_minimalistic_part_header_in_zookeeper=1`, ZooKeeper stores less data. For more information, see the [setting description](../../../operations/server-configuration-parameters/settings.md#server-settings-use_minimalistic_part_header_in_zookeeper) in the "Server configuration parameters" chapter.

@@ -90,18 +95,21 @@ Clickhouse 中最强大的表引擎当属 `MergeTree` (合并树)引擎及

    <a name="mergetree_setting-merge_with_ttl_timeout"></a>

    - `merge_with_ttl_timeout` — The minimum interval between merges with TTL, in seconds. Default value: 86400 (1 day).
    - `write_final_mark` — Enables writing the final index mark at the end of the data part. Default value: 1 (do not change).
    - `merge_max_block_size` — The maximum number of rows in a block for merge operations. Default value: 8192.
    - `storage_policy` — The storage policy. See [Using multiple block devices for data storage](#table_engine-mergetree-multiple-volumes).
    - `min_bytes_for_wide_part`, `min_rows_for_wide_part` — The minimum number of bytes/rows in a data part that can be stored in `Wide` format. You can set none, one, or both of these settings. See [Data storage](#mergetree-data-storage).

**Example configuration**

``` sql
ENGINE MergeTree() PARTITION BY toYYYYMM(EventDate) ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity=8192
```

In this example, we set partitioning by month.

We also set an expression for sampling as a hash of the user ID. This lets you pseudo-randomize the data in the table for each `CounterID` and `EventDate`. If you define a [SAMPLE](../../../engines/table-engines/mergetree-family/mergetree.md#select-sample-clause) clause when selecting the data, ClickHouse will return an evenly pseudo-random data sample for a subset of users.

`index_granularity` could be omitted, because 8192 is the default value.

<details markdown="1">

@@ -133,15 +141,20 @@ Clickhouse 中最强大的表引擎当属 `MergeTree` (合并树)引擎及

## Data Storage {#mergetree-data-storage}

A table consists of data parts sorted by primary key.

When data is inserted into a table, separate data parts are created, and each of them is lexicographically sorted by primary key. For example, if the primary key is `(CounterID, Date)`, the data in a part is sorted first by `CounterID`, and within each `CounterID` value, by `Date`.

Data belonging to different partitions is separated into different parts. In the background, ClickHouse merges data parts for more efficient storage. Parts belonging to different partitions are not merged. The merge mechanism does not guarantee that all rows with the same primary key end up in the same data part.

Data parts can be stored in `Wide` or `Compact` format. In `Wide` format, each column is stored in a separate file in the filesystem; in `Compact` format, all columns are stored in one file. The `Compact` format can be used to improve performance for small but frequent inserts.

The data storage format is controlled by the `min_bytes_for_wide_part` and `min_rows_for_wide_part` settings of the table engine. If the number of bytes or rows in a data part is less than the corresponding setting's value, the part is stored in `Compact` format. Otherwise, it is stored in `Wide` format.

Each data part is logically divided into granules. A granule is the smallest indivisible data set that ClickHouse reads when selecting data. ClickHouse does not split rows or values, so each granule always contains an integer number of rows. The first row of a granule is marked with the value of the primary key for that row.

For each data part, ClickHouse creates an index file that stores these marks. For each column, whether it is in the primary key or not, ClickHouse also stores the same marks. These marks let you find data directly in the column files.

The granule size is restricted by the `index_granularity` and `index_granularity_bytes` settings of the table engine. Depending on the size of the rows, the number of rows in a granule lies in the `[1, index_granularity]` range. The size of a granule can exceed `index_granularity_bytes` if the size of a single row is greater than the value of the setting; in this case, the size of the granule equals the size of the row.

## Primary Keys and Indexes in Queries {#primary-keys-and-indexes-in-queries}

@@ -162,56 +175,53 @@ ClickHouse 会为每个数据片段创建一个索引文件,索引文件包含

The examples above show that it is usually more effective to use an index than a full scan.

A sparse index allows extra data to be read. When reading a single range of the primary key, up to `index_granularity * 2` extra rows in each data block can be read.

Sparse indexes allow you to work with a very large number of table rows, because in most cases such indexes fit in RAM.

ClickHouse does not require a unique primary key, so you can insert multiple rows with the same primary key.

### Selecting the Primary Key {#zhu-jian-de-xuan-ze}

The number of columns in the primary key is not explicitly limited. Depending on the data structure, you can include more or fewer columns in the primary key. This may:

- Improve the performance of the index.

    If the current primary key is `(a, b)`, adding another column `c` will improve the performance in the following cases:

    - Queries use column `c` as a condition.
    - Long data ranges (several times longer than `index_granularity`) with identical values for `(a, b)` are common. In other words, adding another column allows your queries to skip very long data ranges.

- Improve data compression.

    ClickHouse sorts data by primary key, so the higher the consistency, the better the compression.

- Provide additional logic when merging data in the [CollapsingMergeTree](collapsingmergetree.md#table_engine-collapsingmergetree) and [SummingMergeTree](summingmergetree.md) engines.

    In this case it makes sense to specify a *sorting key* that differs from the primary key.

A long primary key negatively affects insert performance and memory consumption, but extra columns in the primary key do not affect the performance of `SELECT` queries.

You can create a table without a primary key using the `ORDER BY tuple()` syntax; see the sketch after this paragraph. In this case, ClickHouse stores data in the order of insertion. If you want to preserve the data order when inserting data with `INSERT ... SELECT` queries, set [max\_insert\_threads = 1](../../../operations/settings/settings.md#settings-max-insert-threads).

To select data in the original order, use [single-threaded](../../../operations/settings/settings.md#settings-max_threads) `SELECT` queries.
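A hedged sketch of the insertion-order behavior described above (hypothetical table names):

``` sql
-- No primary key: data is kept in insertion order.
CREATE TABLE events_raw (message String) ENGINE = MergeTree() ORDER BY tuple();

-- A single insert thread preserves the order of the source query.
INSERT INTO events_raw
SELECT message FROM source_events
SETTINGS max_insert_threads = 1;
```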
### Choosing a Primary Key that Differs from the Sorting Key {#choosing-a-primary-key-that-differs-from-the-sorting-key}

It is possible to specify a primary key that differs from the sorting key. In this case, the sorting key is used to sort rows inside data parts, while the primary key is used to write marks to the index file. The primary key expression tuple must be a prefix of the sorting key expression tuple.

This feature is helpful when using the [SummingMergeTree](summingmergetree.md) and [AggregatingMergeTree](aggregatingmergetree.md) table engines. Commonly when using these engines, the table has two types of columns: *dimensions* and *measures*. Typical queries aggregate the values of measure columns with an arbitrary `GROUP BY` and filter by dimensions. Because SummingMergeTree and AggregatingMergeTree aggregate rows with the same value of the sorting key, it is natural to add all dimensions to it. But this results in a sorting key that consists of a large number of columns, one that must be constantly updated with newly added dimensions.

In this case, it makes sense to leave only a few columns in the primary key, enough to provide efficient range scans, and add the remaining dimension columns to the sorting key tuple.

An [ALTER](../../../sql-reference/statements/alter.md) of the sorting key is a lightweight operation, because when a new column is simultaneously added to the table and to the sorting key, the existing data parts do not need to be changed. Since the old sorting key is a prefix of the new sorting key and there is no data in the newly added column, the data at the moment of the table modification is sorted by both the old and the new sorting keys.
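A hedged sketch of such a lightweight change (hypothetical table and column names): the new column is added and appended to the sorting key in one statement, so the existing parts stay valid:

``` sql
ALTER TABLE hits
    ADD COLUMN browser String,
    MODIFY ORDER BY (CounterID, EventDate, browser);
```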
因此,在索引键的一个或多个区间上快速地跑查询都是可能的。下面例子中,指定标签;指定标签和日期范围;指定标签和日期;指定多个标签和日期范围等运行查询,都会非常快。 ### 索引和分区在查询中的应用 {#use-of-indexes-and-partitions-in-queries}
对于 `SELECT` 查询ClickHouse 分析是否可以使用索引。如果 `WHERE/PREWHERE` 子句具有下面这些表达式(作为谓词链接一子项或整个)则可以使用索引:包含一个表示与主键/分区键中的部分字段或全部字段相等/不等的比较表达式;基于主键/分区键的字段上的 `IN` 或 固定前缀的`LIKE` 表达式;基于主键/分区键的字段上的某些函数;基于主键/分区键的表达式的逻辑表达式。 <!-- It is too hard for me to translate this section as the original text completely. So I did it with my own understanding. If you have good idea, please help me. -->
<!-- It is hard for me to translate this section too, but I think change the sentence struct is helpful for understanding. So I change the phraseology-->
因此,在索引键的一个或多个区间上快速地执行查询都是可能的。下面例子中,指定标签;指定标签和日期范围;指定标签和日期;指定多个标签和日期范围等执行查询,都会非常快。
当引擎配置如下时: 当引擎配置如下时:
@ -237,11 +247,18 @@ SELECT count() FROM table WHERE CounterID = 34 OR URL LIKE '%upyachka%'
要检查 ClickHouse 执行一个查询时能否使用索引,可设置 [force\_index\_by\_date](../../../operations/settings/settings.md#settings-force_index_by_date) 和 [force\_primary\_key](../../../operations/settings/settings.md) 。 要检查 ClickHouse 执行一个查询时能否使用索引,可设置 [force\_index\_by\_date](../../../operations/settings/settings.md#settings-force_index_by_date) 和 [force\_primary\_key](../../../operations/settings/settings.md) 。
按月分区的分区键是只能读取包含适当范围日期的数据块。这种情况下,数据块会包含很多天(最多整月)的数据。在块中,数据按主键排序,主键第一列可能不包含日期。因此,仅使用日期而没有带主键前缀条件的查询将会导致读取超过这个日期范围 按月分区的分区键是只能读取包含适当范围日期的数据块。这种情况下,数据块会包含很多天(最多整月)的数据。在块中,数据按主键排序,主键第一列可能不包含日期。因此,仅使用日期而没有带主键前几个字段作为条件的查询将会导致需要读取超过这个指定日期以外的数据
### 跳数索引(分段汇总索引,实验性的) {#tiao-shu-suo-yin-fen-duan-hui-zong-suo-yin-shi-yan-xing-de} ### 部分单调主键的使用
需要设置 `allow_experimental_data_skipping_indices` 为 1 才能使用此索引。(执行 `SET allow_experimental_data_skipping_indices = 1`)。 考虑这样的场景,比如一个月中的几天。它们在一个月的范围内形成一个[单调序列](https://zh.wikipedia.org/wiki/单调函数) 但如果扩展到更大的时间范围它们就不再单调了。这就是一个部分单调序列。如果用户使用部分单调的主键创建表ClickHouse同样会创建一个稀疏索引。当用户从这类表中查询数据时ClickHouse 会对查询条件进行分析。如果用户希望获取两个索引标记之间的数据并且这两个标记在一个月以内ClickHouse 可以在这种特殊情况下使用到索引,因为它可以计算出查询参数与索引标记之间的距离。
如果查询参数范围内的主键不是单调序列,那么 ClickHouse 无法使用索引。在这种情况下ClickHouse 会进行全表扫描。
ClickHouse 在任何主键代表一个部分单调序列的情况下都会使用这个逻辑。
### 跳数索引 {#tiao-shu-suo-yin-fen-duan-hui-zong-suo-yin-shi-yan-xing-de}
此索引在 `CREATE` 语句的列部分里定义。 此索引在 `CREATE` 语句的列部分里定义。
@ -249,12 +266,14 @@ SELECT count() FROM table WHERE CounterID = 34 OR URL LIKE '%upyachka%'
INDEX index_name expr TYPE type(...) GRANULARITY granularity_value INDEX index_name expr TYPE type(...) GRANULARITY granularity_value
``` ```
`*MergeTree` 系列的表都能指定跳数索引。 `*MergeTree` 系列的表可以指定跳数索引。
这些索引是由数据块按粒度分割后的每部分在指定表达式上汇总信息 `granularity_value` 组成(粒度大小用表引擎里 `index_granularity` 的指定)。 这些索引是由数据块按粒度分割后的每部分在指定表达式上汇总信息 `granularity_value` 组成(粒度大小用表引擎里 `index_granularity` 的指定)。
这些汇总信息有助于用 `where` 语句跳过大片不满足的数据,从而减少 `SELECT` 查询从磁盘读取的数据量, 这些汇总信息有助于用 `where` 语句跳过大片不满足的数据,从而减少 `SELECT` 查询从磁盘读取的数据量,
示例 这些索引会在数据块上聚合指定表达式的信息,这些信息以 granularity_value 指定的粒度组成 (粒度的大小通过在表引擎中定义 index_granularity 定义)。这些汇总信息有助于跳过大片不满足 `where` 条件的数据,从而减少 `SELECT` 查询从磁盘读取的数据量。
**示例**
``` sql ``` sql
CREATE TABLE table_name CREATE TABLE table_name
@ -282,19 +301,27 @@ SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) >= 1234
存储指定表达式的极值(如果表达式是 `tuple` ,则存储 `tuple` 中每个元素的极值),这些信息用于跳过数据块,类似主键。 存储指定表达式的极值(如果表达式是 `tuple` ,则存储 `tuple` 中每个元素的极值),这些信息用于跳过数据块,类似主键。
- `set(max_rows)` - `set(max_rows)`
存储指定表达式的惟一值(不超过 `max_rows` 个,`max_rows=0` 则表示『无限制』)。这些信息可用于检查 `WHERE` 表达式是否满足某个数据块 存储指定表达式的不重复值(不超过 `max_rows` 个,`max_rows=0` 则表示『无限制』)。这些信息可用于检查 数据块是否满足 `WHERE` 条件
- `ngrambf_v1(n, size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)` - `ngrambf_v1(n, size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)`
存储包含数据块中所有 n 元短语的 [布隆过滤器](https://en.wikipedia.org/wiki/Bloom_filter) 。只可用在字符串上。 存储一个包含数据块中所有 n元短语ngram 的 [布隆过滤器](https://en.wikipedia.org/wiki/Bloom_filter) 。只可用在字符串上。
可用于优化 `equals` `like``in` 表达式的性能。 可用于优化 `equals` `like``in` 表达式的性能。
`n` 短语长度。 `n` 短语长度。
`size_of_bloom_filter_in_bytes` 布隆过滤器大小单位字节。因为压缩得好可以指定比较大的值如256或512 `size_of_bloom_filter_in_bytes` 布隆过滤器大小,单位字节。(因为压缩得好,可以指定比较大的值,如 256 512
`number_of_hash_functions` 布隆过滤器中使用的 hash 函数的个数。 `number_of_hash_functions` 布隆过滤器中使用的哈希函数的个数。
`random_seed` hash 函数的随机种子。 `random_seed` 哈希函数的随机种子。
- `tokenbf_v1(size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)` - `tokenbf_v1(size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)`
`ngrambf_v1` 类似,不同于 ngrams 存储字符串指定长度的所有片段。它只存储被非字母数字符分割的片段。 `ngrambf_v1` 类似,不同于 ngrams 存储字符串指定长度的所有片段。它只存储被非字母数字符分割的片段。
- `bloom_filter(bloom_filter([false_positive])` 为指定的列存储布隆过滤器
可选的参数 false_positive 用来指定从布隆过滤器收到错误响应的几率。取值范围是 (0,1)默认值0.025
支持的数据类型:`Int*`, `UInt*`, `Float*`, `Enum`, `Date`, `DateTime`, `String`, `FixedString`, `Array`, `LowCardinality`, `Nullable`
以下函数会用到这个索引: [equals](../../../sql-reference/functions/comparison-functions.md), [notEquals](../../../sql-reference/functions/comparison-functions.md), [in](../../../sql-reference/functions/in-functions.md), [notIn](../../../sql-reference/functions/in-functions.md), [has](../../../sql-reference/functions/array-functions.md)
<!-- --> <!-- -->
``` sql ``` sql
@ -303,17 +330,62 @@ INDEX sample_index2 (u64 * length(str), i32 + f64 * 100, date, str) TYPE set(100
INDEX sample_index3 (lower(str), str) TYPE ngrambf_v1(3, 256, 2, 0) GRANULARITY 4 INDEX sample_index3 (lower(str), str) TYPE ngrambf_v1(3, 256, 2, 0) GRANULARITY 4
``` ```
## 并发数据访问 {#bing-fa-shu-ju-fang-wen} #### 函数支持 {#functions-support}
WHERE 子句中的条件包含对列的函数调用如果列是索引的一部分ClickHouse 会在执行函数时尝试使用索引。不同的函数对索引的支持是不同的。
`set` 索引会对所有函数生效,其他索引对函数的生效情况见下表
| 函数 (操作符) / 索引 | primary key | minmax | ngrambf\_v1 | tokenbf\_v1 | bloom\_filter |
|------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------|
| [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
| [notEquals(!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✔ |
| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✗ | ✗ | ✗ |
| [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
| [endsWith](../../../sql-reference/functions/string-functions.md#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ |
| [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ |
| [in](../../../sql-reference/functions/in-functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ |
| [notIn](../../../sql-reference/functions/in-functions.md#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ |
| [less (\<)](../../../sql-reference/functions/comparison-functions.md#function-less) | ✔ | ✔ | ✗ | ✗ | ✗ |
| [greater (\>)](../../../sql-reference/functions/comparison-functions.md#function-greater) | ✔ | ✔ | ✗ | ✗ | ✗ |
| [lessOrEquals (\<=)](../../../sql-reference/functions/comparison-functions.md#function-lessorequals) | ✔ | ✔ | ✗ | ✗ | ✗ |
| [greaterOrEquals (\>=)](../../../sql-reference/functions/comparison-functions.md#function-greaterorequals) | ✔ | ✔ | ✗ | ✗ | ✗ |
| [empty](../../../sql-reference/functions/array-functions.md#function-empty) | ✔ | ✔ | ✗ | ✗ | ✗ |
| [notEmpty](../../../sql-reference/functions/array-functions.md#function-notempty) | ✔ | ✔ | ✗ | ✗ | ✗ |
| hasToken | ✗ | ✗ | ✗ | ✔ | ✗ |
Functions with a constant argument that is smaller than the n-gram size cannot use `ngrambf_v1` for query optimization.

!!! note "Note"
    Bloom filters can have false-positive matches, so the `ngrambf_v1`, `tokenbf_v1`, and `bloom_filter` indexes cannot be used for optimizing negated conditions, for example:
- Conditions that can be optimized:
- `s LIKE '%test%'`
- `NOT s NOT LIKE '%test%'`
- `s = 1`
- `NOT s != 1`
- `startsWith(s, 'test')`
- Conditions that cannot be optimized:
- `NOT s LIKE '%test%'`
- `s NOT LIKE '%test%'`
- `NOT s = 1`
- `s != 1`
- `NOT startsWith(s, 'test')`
## Concurrent Data Access {#concurrent-data-access}

For concurrent table access, we use multi-versioning. In other words, when a table is simultaneously read and updated, data is read from a set of parts that is current at the time of the query. There are no lengthy locks, and inserts do not block reads.

Reading from a table is automatically parallelized.

## TTL for Columns and Tables {#table_engine-mergetree-ttl}

TTL sets the lifetime of values; it can be configured both for the whole table and for each individual column. Table-level TTL can also specify the logic of automatically moving data between disks and volumes.

TTL expressions must evaluate to a [Date](../../../engines/table-engines/mergetree-family/mergetree.md) or [DateTime](../../../engines/table-engines/mergetree-family/mergetree.md) data type.

Examples:

``` sql
TTL time_column
@ -327,15 +399,15 @@ TTL date_time + INTERVAL 1 MONTH
TTL date_time + INTERVAL 15 HOUR
```

### Column TTL {#mergetree-column-ttl}

When the values in a column expire, ClickHouse replaces them with the default value for the column's data type. If all the column values in a data part expire, ClickHouse removes this column from the data part in the filesystem.

The `TTL` clause cannot be used for key columns.

Examples:

Creating a table with `TTL`:

``` sql
CREATE TABLE example_table
@ -368,11 +440,21 @@ ALTER TABLE example_table
### Table TTL {#mergetree-table-ttl}

A table can have an expression for the removal of expired rows and several expressions for the automatic movement of parts between disks or volumes. When rows in the table expire, ClickHouse deletes all the corresponding rows. For moving parts, all rows of a part must satisfy the movement criteria.

``` sql
TTL expr [DELETE|TO DISK 'aaa'|TO VOLUME 'bbb'], ...
```
The type of TTL rule follows each TTL expression; it determines the action to perform once the expression is satisfied (the specified time is reached):

- `DELETE` - delete expired rows (the default action);
- `TO DISK 'aaa'` - move the part to the disk `aaa`;
- `TO VOLUME 'bbb'` - move the part to the volume `bbb`.

Examples:

Creating a table with TTL:

``` sql
CREATE TABLE example_table
@ -383,7 +465,9 @@ CREATE TABLE example_table
ENGINE = MergeTree
PARTITION BY toYYYYMM(d)
ORDER BY d
TTL d + INTERVAL 1 MONTH [DELETE],
d + INTERVAL 1 WEEK TO VOLUME 'aaa',
d + INTERVAL 2 WEEK TO DISK 'bbb';
```

Altering the `TTL` of the table:
@ -395,14 +479,179 @@ ALTER TABLE example_table
**Removing Data**

Data with an expired TTL is removed when ClickHouse merges data parts.

When ClickHouse sees that data is expired, it performs an off-schedule merge. To control the frequency of such merges, you can set `merge_with_ttl_timeout`. If the value is too low, many off-schedule merges will be performed, which may consume a lot of resources.

If you run a `SELECT` query between merges, you may get expired data. To avoid this, use the [OPTIMIZE](../../../engines/table-engines/mergetree-family/mergetree.md#misc_operations-optimize) query before the `SELECT`.
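For instance, a minimal sketch of that pattern, reusing `example_table` from the examples above (`FINAL` forces a merge even if the data is already in one part):

``` sql
OPTIMIZE TABLE example_table FINAL;
SELECT * FROM example_table;
```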
## Using Multiple Block Devices for Data Storage {#table_engine-mergetree-multiple-volumes}

### Introduction {#introduction}

MergeTree family table engines can store data on multiple block devices. This is useful for tables that can potentially be split into "hot" and "cold" parts: recent data is queried regularly but requires only a small amount of space, while detailed historical data is rarely used. If several disks are available, the "hot" data can be placed on fast disks (for example, NVMe SSDs or in memory), and the "cold" data on relatively slow ones (for example, HDDs).

A data part is the minimum movable unit for `MergeTree` engine tables. The data belonging to one part is stored on one disk. Data parts can be moved between disks in the background, as well as by means of [ALTER](../../../sql-reference/statements/alter.md#alter_move-partition) queries.

### Terms {#terms}

- Disk — a block device mounted to the filesystem.
- Default disk — the disk that stores the path specified in the [path](../../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-path) server setting.
- Volume — an ordered set of equal disks (similar to [JBOD](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures)).
- Storage policy — a set of volumes and the rules for moving data between them.

### Configuration {#table_engine-mergetree-multiple-volumes_configure}

Disks, volumes, and storage policies should be declared inside the `<storage_configuration>` tag, either in the main `config.xml` file or in a separate file in the `config.d` directory.

Configuration structure:
``` xml
<storage_configuration>
<disks>
<disk_name_1> <!-- disk name -->
<path>/mnt/fast_ssd/clickhouse/</path>
</disk_name_1>
<disk_name_2>
<path>/mnt/hdd1/clickhouse/</path>
<keep_free_space_bytes>10485760</keep_free_space_bytes>
</disk_name_2>
<disk_name_3>
<path>/mnt/hdd2/clickhouse/</path>
<keep_free_space_bytes>10485760</keep_free_space_bytes>
</disk_name_3>
...
</disks>
...
</storage_configuration>
```
Tags:

- `<disk_name_N>` — the disk name, which must be different from the names of all other disks.
- `path` — the path under which the server will store data (the `data` and `shadow` directories); it should end with '/'.
- `keep_free_space_bytes` — the amount of disk space to keep free.

The order of disk definitions is not important.

Storage policy configuration:
``` xml
<storage_configuration>
...
<policies>
<policy_name_1>
<volumes>
<volume_name_1>
<disk>disk_name_from_disks_configuration</disk>
<max_data_part_size_bytes>1073741824</max_data_part_size_bytes>
</volume_name_1>
<volume_name_2>
<!-- configuration -->
</volume_name_2>
<!-- more volumes -->
</volumes>
<move_factor>0.2</move_factor>
</policy_name_1>
<policy_name_2>
<!-- configuration -->
</policy_name_2>
<!-- more policies -->
</policies>
...
</storage_configuration>
```
Tags:

- `policy_name_N` — the policy name; policy names must be unique.
- `volume_name_N` — the volume name; volume names must be unique.
- `disk` — a disk within the volume.
- `max_data_part_size_bytes` — the maximum size of a part that can be stored on any of the volume's disks.
- `move_factor` — when the amount of available space gets lower than this factor, data automatically starts to move to the next volume, if any (0.1 by default).

Configuration examples:
``` xml
<storage_configuration>
...
<policies>
<hdd_in_order> <!-- policy name -->
<volumes>
<single> <!-- volume name -->
<disk>disk1</disk>
<disk>disk2</disk>
</single>
</volumes>
</hdd_in_order>
<moving_from_ssd_to_hdd>
<volumes>
<hot>
<disk>fast_ssd</disk>
<max_data_part_size_bytes>1073741824</max_data_part_size_bytes>
</hot>
<cold>
<disk>disk1</disk>
</cold>
</volumes>
<move_factor>0.2</move_factor>
</moving_from_ssd_to_hdd>
</policies>
...
</storage_configuration>
```
In the given example, the `hdd_in_order` policy implements the [round-robin](https://zh.wikipedia.org/wiki/循环制) approach. Since the policy defines only one volume (`single`), the data parts are stored on all of its disks in circular order. Such a policy can be quite useful when several similar disks are mounted to the system but RAID is not configured. Keep in mind that each individual disk drive is not reliable, and you might want to compensate for that with a replication factor of 3 or more.

If there are different kinds of disks available in the system, the `moving_from_ssd_to_hdd` policy can be used instead. The volume `hot` consists of an SSD disk (`fast_ssd`), and the maximum size of a part that can be stored on this volume is 1GB. All parts larger than 1GB are stored directly on the `cold` volume, which contains the HDD disk `disk1`.

Also, once the disk `fast_ssd` gets filled by more than 80%, data will be transferred to `disk1` by a background process.

The order of volume enumeration within a storage policy is important: once a volume is overfilled, data is moved to the next one. The order of disk enumeration is important as well, because data is stored on the disks in turn.

When creating a table, one of the configured storage policies can be applied to it:
``` sql
CREATE TABLE table_with_non_default_policy (
EventDate Date,
OrderID UInt64,
BannerID UInt64,
SearchPhrase String
) ENGINE = MergeTree
ORDER BY (OrderID, BannerID)
PARTITION BY toYYYYMM(EventDate)
SETTINGS storage_policy = 'moving_from_ssd_to_hdd'
```
The `default` storage policy implies using only one volume, which consists of only one disk given in `<path>`. Once a table is created, its storage policy cannot be changed.

The number of threads performing background moves of data parts can be changed by the [background\_move\_pool\_size](../../../operations/settings/settings.md#background_move_pool_size) setting.
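As a quick check of what the server actually loaded, the configured policies can be inspected through the `system.storage_policies` system table (a sketch; the exact set of columns may vary between versions):

``` sql
SELECT policy_name, volume_name, disks, max_data_part_size, move_factor
FROM system.storage_policies;
```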
### Details {#details}

In the case of `MergeTree` tables, data gets to disks in different ways:

- As a result of an insert (an `INSERT` query).
- During background merges and [mutations](../../../sql-reference/statements/alter.md#alter-mutations).
- When downloading from another replica.
- As a result of partition freezing: [ALTER TABLE … FREEZE PARTITION](../../../sql-reference/statements/alter.md#alter_freeze-partition).

In all these cases except for mutations and partition freezing, a part is stored on a volume and a disk according to the following logic:

1. The first volume (in the order of definition) that has enough disk space for storing the part (`unreserved_space > current_part_size`) and allows storing parts of the given size (`max_data_part_size_bytes > current_part_size`) is chosen.
2. Within this volume, the disk that follows the one used for storing the previous chunk of data and that has free space larger than the part size (`unreserved_space - keep_free_space_bytes > current_part_size`) is chosen.

Under the hood, mutations and partition freezing make use of [hard links](https://en.wikipedia.org/wiki/Hard_link). Hard links between different disks are not supported, so in such cases the resulting parts are stored on the same disks as the initial ones.

In the background, parts are moved between volumes on the basis of the amount of free space (the `move_factor` parameter) according to the order the volumes are declared in the configuration file. Data is never transferred from the last volume and never into the first one. Background moves can be monitored using the system tables [system.part\_log](../../../operations/system-tables/part_log.md#system_tables-part-log) (field `type = MOVE_PART`) and [system.parts](../../../operations/system-tables/parts.md#system_tables-parts) (fields `path` and `disk`). Detailed information can also be found in the server logs.
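A minimal sketch of such monitoring queries (the table name reuses the example above; column names follow the system tables just referenced and may differ between versions):

``` sql
-- Completed background part moves recorded in the part log.
SELECT event_time, part_name, path_on_disk
FROM system.part_log
WHERE type = 'MOVE_PART';

-- Current placement of the active parts of one table.
SELECT name, disk_name, path
FROM system.parts
WHERE table = 'table_with_non_default_policy' AND active;
```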
A user can force moving a part or a partition from one volume to another using the query [ALTER TABLE … MOVE PART\|PARTITION … TO VOLUME\|DISK …](../../../sql-reference/statements/alter.md#alter_move-partition); all the restrictions for background operations are taken into account. The query initiates a move on its own and does not wait for background operations to complete. The user gets an error message if not enough free space is available or if any of the required conditions are not met.
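For example, a forced move might look like this (the table comes from the example above; the partition value and the target volume `cold` are illustrative):

``` sql
ALTER TABLE table_with_non_default_policy MOVE PARTITION '202009' TO VOLUME 'cold';
```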
Moving data does not interfere with data replication, so different storage policies can be specified for the same table on different replicas.

After background merges and mutations are completed, the old parts are removed only after a certain amount of time (`old_parts_lifetime`). During this time they are not moved to other volumes or disks, so until the parts are finally removed they are still taken into account for the evaluation of the occupied disk space.

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/mergetree/) <!--hide-->
View File
@ -25,7 +25,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
**Parameters**

- `ver` — the version column. Type: `UInt*`, `Date`, or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one row out of all rows with the same primary key:

- The last row in the selection, if `ver` is not set.
View File
@ -23,7 +23,7 @@ Ok.
$ curl 'http://localhost:8123/?query=SELECT%201'
1
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1
$ GET 'http://localhost:8123/?query=SELECT 1'
View File
@ -847,32 +847,26 @@ private:
        }

        // Parse and execute what we've read.
        const auto * new_end = processWithFuzzing(text);

        if (new_end > &text[0])
        {
            const auto rest_size = text.size() - (new_end - &text[0]);
            memcpy(&text[0], new_end, rest_size);
            text.resize(rest_size);
        }
        else
        {
            // We didn't read enough text to parse a query. Will read more.
        }

        // Ensure that we're still connected to the server. If the server died,
        // the reconnect is going to fail with an exception, and the fuzzer
        // will exit. The ping() would be the best match here, but it's
        // private, probably for a good reason that the protocol doesn't allow
        // pings at any possible moment.
        connection->forceConnected(connection_parameters.timeouts);

        if (text.size() > 4 * 1024)
        {
@ -880,9 +874,6 @@ private:
            // and we still cannot parse a single query in it. Abort.
            std::cerr << "Read too much text and still can't parse a query."
                " Aborting." << std::endl;
            exit(1);
        }
    }
View File
@ -281,41 +281,33 @@ void AccessControlManager::addStoragesFromMainConfig(
    String config_dir = std::filesystem::path{config_path}.remove_filename().string();
    String dbms_dir = config.getString("path", DBMS_DEFAULT_PATH);
    String include_from_path = config.getString("include_from", "/etc/metrika.xml");
    bool has_user_directories = config.has("user_directories");

    /// If path to users' config isn't absolute, try guess its root (current) dir.
    /// At first, try to find it in dir of main config, after will use current dir.
    String users_config_path = config.getString("users_config", "");
    if (users_config_path.empty())
    {
        if (!has_user_directories)
            users_config_path = config_path;
    }
    else if (std::filesystem::path{users_config_path}.is_relative() && std::filesystem::exists(config_dir + users_config_path))
        users_config_path = config_dir + users_config_path;

    if (!users_config_path.empty())
    {
        if (users_config_path != config_path)
            checkForUsersNotInMainConfig(config, config_path, users_config_path, getLogger());
        addUsersConfigStorage(users_config_path, include_from_path, dbms_dir, get_zookeeper_function);
    }

    String disk_storage_dir = config.getString("access_control_path", "");
    if (!disk_storage_dir.empty())
        addDiskStorage(disk_storage_dir);

    if (has_user_directories)
        addStoragesFromUserDirectoriesConfig(config, "user_directories", config_dir, dbms_dir, include_from_path, get_zookeeper_function);
}
View File
@ -96,6 +96,22 @@ public:
    /// Returns all the flags related to a dictionary.
    static AccessFlags allDictionaryFlags();

    /// Returns all the flags which could be granted on the global level.
    /// The same as allFlags().
    static AccessFlags allFlagsGrantableOnGlobalLevel();

    /// Returns all the flags which could be granted on the database level.
    /// Returns allDatabaseFlags() | allTableFlags() | allDictionaryFlags() | allColumnFlags().
    static AccessFlags allFlagsGrantableOnDatabaseLevel();

    /// Returns all the flags which could be granted on the table level.
    /// Returns allTableFlags() | allDictionaryFlags() | allColumnFlags().
    static AccessFlags allFlagsGrantableOnTableLevel();

    /// Returns all the flags which could be granted on the column level.
    /// The same as allColumnFlags().
    static AccessFlags allFlagsGrantableOnColumnLevel();

private:
    static constexpr size_t NUM_FLAGS = 128;
    using Flags = std::bitset<NUM_FLAGS>;
@ -193,6 +209,10 @@ public:
    const Flags & getTableFlags() const { return all_flags_for_target[TABLE]; }
    const Flags & getColumnFlags() const { return all_flags_for_target[COLUMN]; }
    const Flags & getDictionaryFlags() const { return all_flags_for_target[DICTIONARY]; }
    const Flags & getAllFlagsGrantableOnGlobalLevel() const { return getAllFlags(); }
    const Flags & getAllFlagsGrantableOnDatabaseLevel() const { return all_flags_grantable_on_database_level; }
    const Flags & getAllFlagsGrantableOnTableLevel() const { return all_flags_grantable_on_table_level; }
    const Flags & getAllFlagsGrantableOnColumnLevel() const { return getColumnFlags(); }

private:
    enum NodeType
@ -381,6 +401,9 @@ private:
        }
        for (const auto & child : start_node->children)
            collectAllFlags(child.get());

        all_flags_grantable_on_table_level = all_flags_for_target[TABLE] | all_flags_for_target[DICTIONARY] | all_flags_for_target[COLUMN];
        all_flags_grantable_on_database_level = all_flags_for_target[DATABASE] | all_flags_grantable_on_table_level;
    }

    Impl()
@ -431,6 +454,8 @@ private:
    std::vector<Flags> access_type_to_flags_mapping;
    Flags all_flags;
    Flags all_flags_for_target[static_cast<size_t>(DICTIONARY) + 1];
    Flags all_flags_grantable_on_database_level;
    Flags all_flags_grantable_on_table_level;
};
@ -447,6 +472,10 @@ inline AccessFlags AccessFlags::allDatabaseFlags() { return Impl<>::instance().g
inline AccessFlags AccessFlags::allTableFlags() { return Impl<>::instance().getTableFlags(); }
inline AccessFlags AccessFlags::allColumnFlags() { return Impl<>::instance().getColumnFlags(); }
inline AccessFlags AccessFlags::allDictionaryFlags() { return Impl<>::instance().getDictionaryFlags(); }
inline AccessFlags AccessFlags::allFlagsGrantableOnGlobalLevel() { return Impl<>::instance().getAllFlagsGrantableOnGlobalLevel(); }
inline AccessFlags AccessFlags::allFlagsGrantableOnDatabaseLevel() { return Impl<>::instance().getAllFlagsGrantableOnDatabaseLevel(); }
inline AccessFlags AccessFlags::allFlagsGrantableOnTableLevel() { return Impl<>::instance().getAllFlagsGrantableOnTableLevel(); }
inline AccessFlags AccessFlags::allFlagsGrantableOnColumnLevel() { return Impl<>::instance().getAllFlagsGrantableOnColumnLevel(); }

inline AccessFlags operator |(AccessType left, AccessType right) { return AccessFlags(left) | right; }
inline AccessFlags operator &(AccessType left, AccessType right) { return AccessFlags(left) & right; }
View File
@ -1,5 +1,4 @@
#include <Access/AccessRights.h>
#include <common/logger_useful.h>
#include <boost/container/small_vector.hpp>
#include <boost/range/adaptor/map.hpp>
@ -8,12 +7,6 @@
namespace DB
{

namespace
{
    using Kind = AccessRightsElementWithOptions::Kind;
@ -214,30 +207,14 @@ namespace
        COLUMN_LEVEL,
    };

    AccessFlags getAllGrantableFlags(Level level)
    {
        switch (level)
        {
            case GLOBAL_LEVEL: return AccessFlags::allFlagsGrantableOnGlobalLevel();
            case DATABASE_LEVEL: return AccessFlags::allFlagsGrantableOnDatabaseLevel();
            case TABLE_LEVEL: return AccessFlags::allFlagsGrantableOnTableLevel();
            case COLUMN_LEVEL: return AccessFlags::allFlagsGrantableOnColumnLevel();
        }
        __builtin_unreachable();
    }
@ -276,21 +253,7 @@ public:
    void grant(const AccessFlags & flags_)
    {
        AccessFlags flags_to_add = flags_ & getAllGrantableFlags();
        addGrantsRec(flags_to_add);
        optimizeTree();
    }
@ -456,8 +419,8 @@ public:
    }

private:
    AccessFlags getAllGrantableFlags() const { return ::DB::getAllGrantableFlags(level); }
    AccessFlags getChildAllGrantableFlags() const { return ::DB::getAllGrantableFlags(static_cast<Level>(level + 1)); }

    Node * tryGetChild(const std::string_view & name) const
    {
@ -480,7 +443,7 @@ private:
        Node & new_child = (*children)[*new_child_name];
        new_child.node_name = std::move(new_child_name);
        new_child.level = static_cast<Level>(level + 1);
        new_child.flags = flags & new_child.getAllGrantableFlags();
        return new_child;
    }
@ -496,12 +459,12 @@ private:
    bool canEraseChild(const Node & child) const
    {
        return ((flags & child.getAllGrantableFlags()) == child.flags) && !child.children;
    }

    void addGrantsRec(const AccessFlags & flags_)
    {
        if (auto flags_to_add = flags_ & getAllGrantableFlags())
        {
            flags |= flags_to_add;
            if (children)
@ -547,7 +510,7 @@ private:
        const AccessFlags & parent_flags)
    {
        auto flags = node.flags;
        auto parent_fl = parent_flags & node.getAllGrantableFlags();
        auto revokes = parent_fl - flags;
        auto grants = flags - parent_fl;
@ -576,9 +539,9 @@ private:
        const Node * node_go,
        const AccessFlags & parent_flags_go)
    {
        auto grantable_flags = ::DB::getAllGrantableFlags(static_cast<Level>(full_name.size()));
        auto parent_fl = parent_flags & grantable_flags;
        auto parent_fl_go = parent_flags_go & grantable_flags;
        auto flags = node ? node->flags : parent_fl;
        auto flags_go = node_go ? node_go->flags : parent_fl_go;
        auto revokes = parent_fl - flags;
@ -672,8 +635,8 @@ private:
        }
        max_flags_with_children |= max_among_children;
        AccessFlags add_flags = getAllGrantableFlags() - getChildAllGrantableFlags();
        min_flags_with_children &= min_among_children | add_flags;
    }

    void makeUnionRec(const Node & rhs)
@ -689,7 +652,7 @@ private:
        for (auto & [lhs_childname, lhs_child] : *children)
        {
            if (!rhs.tryGetChild(lhs_childname))
                lhs_child.flags |= rhs.flags & lhs_child.getAllGrantableFlags();
        }
    }
}
@ -738,7 +701,7 @@ private:
    if (new_flags != flags)
    {
        new_flags &= getAllGrantableFlags();
        flags_added |= static_cast<bool>(new_flags - flags);
        flags_removed |= static_cast<bool>(flags - new_flags);
        flags = new_flags;
View File
@ -71,6 +71,8 @@ struct AccessRightsElement
    {
    }

    bool empty() const { return !access_flags || (!any_column && columns.empty()); }

    auto toTuple() const { return std::tie(access_flags, any_database, database, any_table, table, any_column, columns); }
    friend bool operator==(const AccessRightsElement & left, const AccessRightsElement & right) { return left.toTuple() == right.toTuple(); }
    friend bool operator!=(const AccessRightsElement & left, const AccessRightsElement & right) { return !(left == right); }
@ -86,6 +88,9 @@ struct AccessRightsElement
    /// If the database is empty, replaces it with `new_database`. Otherwise does nothing.
    void replaceEmptyDatabase(const String & new_database);

    /// Resets flags which cannot be granted.
    void removeNonGrantableFlags();

    /// Returns a human-readable representation like "SELECT, UPDATE(x, y) ON db.table".
    String toString() const;
};
@ -111,6 +116,9 @@ struct AccessRightsElementWithOptions : public AccessRightsElement
    friend bool operator==(const AccessRightsElementWithOptions & left, const AccessRightsElementWithOptions & right) { return left.toTuple() == right.toTuple(); }
    friend bool operator!=(const AccessRightsElementWithOptions & left, const AccessRightsElementWithOptions & right) { return !(left == right); }

    /// Resets flags which cannot be granted.
    void removeNonGrantableFlags();

    /// Returns a human-readable representation like "GRANT SELECT, UPDATE(x, y) ON db.table".
    String toString() const;
};
@ -120,9 +128,14 @@ struct AccessRightsElementWithOptions : public AccessRightsElement
class AccessRightsElements : public std::vector<AccessRightsElement>
{
public:
    bool empty() const { return std::all_of(begin(), end(), [](const AccessRightsElement & e) { return e.empty(); }); }

    /// Replaces the empty database with `new_database`.
    void replaceEmptyDatabase(const String & new_database);

    /// Resets flags which cannot be granted.
    void removeNonGrantableFlags();

    /// Returns a human-readable representation like "GRANT SELECT, UPDATE(x, y) ON db.table".
    String toString() const;
};
@ -134,6 +147,9 @@ public:
    /// Replaces the empty database with `new_database`.
    void replaceEmptyDatabase(const String & new_database);

    /// Resets flags which cannot be granted.
    void removeNonGrantableFlags();

    /// Returns a human-readable representation like "GRANT SELECT, UPDATE(x, y) ON db.table".
    String toString() const;
};
@ -157,4 +173,34 @@ inline void AccessRightsElementsWithOptions::replaceEmptyDatabase(const String &
        element.replaceEmptyDatabase(new_database);
}

inline void AccessRightsElement::removeNonGrantableFlags()
{
    if (!any_column)
        access_flags &= AccessFlags::allFlagsGrantableOnColumnLevel();
    else if (!any_table)
        access_flags &= AccessFlags::allFlagsGrantableOnTableLevel();
    else if (!any_database)
        access_flags &= AccessFlags::allFlagsGrantableOnDatabaseLevel();
    else
        access_flags &= AccessFlags::allFlagsGrantableOnGlobalLevel();
}

inline void AccessRightsElementWithOptions::removeNonGrantableFlags()
{
    if (kind == Kind::GRANT)
        AccessRightsElement::removeNonGrantableFlags();
}

inline void AccessRightsElements::removeNonGrantableFlags()
{
    for (auto & element : *this)
        element.removeNonGrantableFlags();
}

inline void AccessRightsElementsWithOptions::removeNonGrantableFlags()
{
    for (auto & element : *this)
        element.removeNonGrantableFlags();
}

}
View File
@ -13,6 +13,7 @@ namespace DB
namespace ErrorCodes
{
    extern const int UNKNOWN_EXCEPTION;
    extern const int LOGICAL_ERROR;
}

namespace MySQLReplication
@ -103,19 +104,19 @@ namespace MySQLReplication
            = header.event_size - EVENT_HEADER_LENGTH - 4 - 4 - 1 - 2 - 2 - status_len - schema_len - 1 - CHECKSUM_CRC32_SIGNATURE_LENGTH;
        query.resize(len);
        payload.readStrict(reinterpret_cast<char *>(query.data()), len);
        if (query.starts_with("BEGIN") || query.starts_with("COMMIT"))
        {
            typ = QUERY_EVENT_MULTI_TXN_FLAG;
        }
        else if (query.starts_with("XA"))
        {
            if (query.starts_with("XA ROLLBACK"))
                throw ReplicationError("ParseQueryEvent: Unsupported query event:" + query, ErrorCodes::LOGICAL_ERROR);
            typ = QUERY_EVENT_XA;
        }
        else if (query.starts_with("SAVEPOINT"))
        {
            throw ReplicationError("ParseQueryEvent: Unsupported query event:" + query, ErrorCodes::LOGICAL_ERROR);
        }
    }
View File
@ -185,31 +185,4 @@ bool DataTypeDateTime::equals(const IDataType & rhs) const
    return typeid(rhs) == typeid(*this);
}

}
View File
@ -201,65 +201,4 @@ bool DataTypeDateTime64::equals(const IDataType & rhs) const
    return false;
}

}
View File
@ -165,7 +165,6 @@ DataTypeFactory::DataTypeFactory()
    registerDataTypeDecimal(*this);
    registerDataTypeDate(*this);
    registerDataTypeDateTime(*this);
    registerDataTypeString(*this);
    registerDataTypeFixedString(*this);
    registerDataTypeEnum(*this);
View File
@ -82,7 +82,6 @@ void registerDataTypeInterval(DataTypeFactory & factory);
void registerDataTypeLowCardinality(DataTypeFactory & factory);
void registerDataTypeDomainIPv4AndIPv6(DataTypeFactory & factory);
void registerDataTypeDomainSimpleAggregateFunction(DataTypeFactory & factory);
void registerDataTypeDomainGeo(DataTypeFactory & factory);
}
View File
@ -0,0 +1,118 @@
#include <Core/Field.h>
#include <Parsers/IAST.h>
#include <Parsers/ASTLiteral.h>
#include <DataTypes/IDataType.h>
#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypeDateTime64.h>
#include <DataTypes/DataTypeFactory.h>
namespace DB
{
namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}
enum class ArgumentKind
{
Optional,
Mandatory
};
String getExceptionMessage(
const String & message, size_t argument_index, const char * argument_name,
const std::string & context_data_type_name, Field::Types::Which field_type)
{
return std::string("Parameter #") + std::to_string(argument_index) + " '"
+ argument_name + "' for " + context_data_type_name
+ message
+ ", expected: " + Field::Types::toString(field_type) + " literal.";
}
template <typename T, ArgumentKind Kind>
std::conditional_t<Kind == ArgumentKind::Optional, std::optional<T>, T>
getArgument(const ASTPtr & arguments, size_t argument_index, const char * argument_name [[maybe_unused]], const std::string context_data_type_name)
{
using NearestResultType = NearestFieldType<T>;
const auto field_type = Field::TypeToEnum<NearestResultType>::value;
const ASTLiteral * argument = nullptr;
if (!arguments || arguments->children.size() <= argument_index
|| !(argument = arguments->children[argument_index]->as<ASTLiteral>())
|| argument->value.getType() != field_type)
{
if constexpr (Kind == ArgumentKind::Optional)
return {};
else
{
if (argument && argument->value.getType() != field_type)
throw Exception(getExceptionMessage(String(" has wrong type: ") + argument->value.getTypeName(),
argument_index, argument_name, context_data_type_name, field_type), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
else
throw Exception(getExceptionMessage(" is missing", argument_index, argument_name, context_data_type_name, field_type),
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
}
}
return argument->value.get<NearestResultType>();
}
static DataTypePtr create(const ASTPtr & arguments)
{
if (!arguments || arguments->children.empty())
return std::make_shared<DataTypeDateTime>();
const auto scale = getArgument<UInt64, ArgumentKind::Optional>(arguments, 0, "scale", "DateTime");
const auto timezone = getArgument<String, ArgumentKind::Optional>(arguments, !!scale, "timezone", "DateTime");
if (!scale && !timezone)
throw Exception(getExceptionMessage(" has wrong type: ", 0, "scale", "DateTime", Field::Types::Which::UInt64),
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
/// If scale is defined, the data type is DateTime when scale = 0; otherwise, the data type is DateTime64
if (scale && scale.value() != 0)
return std::make_shared<DataTypeDateTime64>(scale.value(), timezone.value_or(String{}));
return std::make_shared<DataTypeDateTime>(timezone.value_or(String{}));
}
static DataTypePtr create32(const ASTPtr & arguments)
{
if (!arguments || arguments->children.empty())
return std::make_shared<DataTypeDateTime>();
if (arguments->children.size() != 1)
throw Exception("DateTime32 data type can optionally have only one argument - time zone name", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
const auto timezone = getArgument<String, ArgumentKind::Mandatory>(arguments, 0, "timezone", "DateTime32");
return std::make_shared<DataTypeDateTime>(timezone);
}
static DataTypePtr create64(const ASTPtr & arguments)
{
if (!arguments || arguments->children.empty())
return std::make_shared<DataTypeDateTime64>(DataTypeDateTime64::default_scale);
if (arguments->children.size() > 2)
throw Exception("DateTime64 data type can optionally have two argument - scale and time zone name", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
const auto scale = getArgument<UInt64, ArgumentKind::Mandatory>(arguments, 0, "scale", "DateTime64");
const auto timezone = getArgument<String, ArgumentKind::Optional>(arguments, 1, "timezone", "DateTime64");
return std::make_shared<DataTypeDateTime64>(scale, timezone.value_or(String{}));
}
void registerDataTypeDateTime(DataTypeFactory & factory)
{
factory.registerDataType("DateTime", create, DataTypeFactory::CaseInsensitive);
factory.registerDataType("DateTime32", create32, DataTypeFactory::CaseInsensitive);
factory.registerDataType("DateTime64", create64, DataTypeFactory::CaseInsensitive);
factory.registerAlias("TIMESTAMP", "DateTime", DataTypeFactory::CaseInsensitive);
}
}
View File
@ -38,6 +38,7 @@ SRCS(
    getMostSubtype.cpp
    IDataType.cpp
    NestedUtils.cpp
    registerDataTypeDateTime.cpp
)
View File
@ -33,6 +33,11 @@ static constexpr size_t PRINT_MESSAGE_EACH_N_OBJECTS = 256;
static constexpr size_t PRINT_MESSAGE_EACH_N_SECONDS = 5;
static constexpr size_t METADATA_FILE_BUFFER_SIZE = 32768;

namespace ErrorCodes
{
    extern const int NOT_IMPLEMENTED;
}

namespace
{
    void tryAttachTable(
@ -249,6 +254,9 @@ void DatabaseOrdinary::alterTable(const Context & context, const StorageID & tab
    auto & ast_create_query = ast->as<ASTCreateQuery &>();

    if (ast_create_query.as_table_function)
        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Cannot alter table {} because it was created AS table function", backQuote(table_name));

    ASTPtr new_columns = InterpreterCreateQuery::formatColumns(metadata.columns);
    ASTPtr new_indices = InterpreterCreateQuery::formatIndices(metadata.secondary_indices);
    ASTPtr new_constraints = InterpreterCreateQuery::formatConstraints(metadata.constraints);
View File
@ -13,6 +13,7 @@ namespace DB
namespace ErrorCodes
{
    extern const int UNSUPPORTED_METHOD;
    extern const int LOGICAL_ERROR;
}
@ -239,12 +240,15 @@ std::string ExternalQueryBuilder::composeLoadIdsQuery(const std::vector<UInt64>
}

std::string ExternalQueryBuilder::composeLoadKeysQuery(
    const Columns & key_columns, const std::vector<size_t> & requested_rows, LoadKeysMethod method, size_t partition_key_prefix)
{
    if (!dict_struct.key)
        throw Exception{"Composite key required for method", ErrorCodes::UNSUPPORTED_METHOD};

    if (key_columns.size() != dict_struct.key->size())
        throw Exception{"The size of key_columns does not equal to the size of dictionary key", ErrorCodes::LOGICAL_ERROR};

    WriteBufferFromOwnString out;
    writeString("SELECT ", out);
View File
@ -52,12 +52,14 @@ namespace DB
        const String & host_,
        UInt16 port_,
        UInt8 db_index_,
        const String & password_,
        RedisStorageType storage_type_,
        const Block & sample_block_)
        : dict_struct{dict_struct_}
        , host{host_}
        , port{port_}
        , db_index{db_index_}
        , password{password_}
        , storage_type{storage_type_}
        , sample_block{sample_block_}
        , client{std::make_shared<Poco::Redis::Client>(host, port)}
@ -78,16 +80,22 @@ namespace DB
                    ErrorCodes::INVALID_CONFIG_PARAMETER};
            // suppose key[0] is primary key, key[1] is secondary key
        }

        if (!password.empty())
        {
            RedisCommand command("AUTH");
            command << password;
            String reply = client->execute<String>(command);
            if (reply != "OK")
                throw Exception{"Authentication failed with reason "
                    + reply, ErrorCodes::INTERNAL_REDIS_ERROR};
        }

        if (db_index != 0)
        {
            RedisCommand command("SELECT");
            command << std::to_string(db_index);
            String reply = client->execute<String>(command);
            if (reply != "OK")
                throw Exception{"Selecting database with index " + DB::toString(db_index)
                    + " failed with reason " + reply, ErrorCodes::INTERNAL_REDIS_ERROR};
        }
@ -104,6 +112,7 @@ namespace DB
            config_.getString(config_prefix_ + ".host"),
            config_.getUInt(config_prefix_ + ".port"),
            config_.getUInt(config_prefix_ + ".db_index", 0),
            config_.getString(config_prefix_ + ".password", ""),
            parseStorageType(config_.getString(config_prefix_ + ".storage_type", "")),
            sample_block_)
    {
@ -115,6 +124,7 @@ namespace DB
            other.host,
            other.port,
            other.db_index,
            other.password,
            other.storage_type,
            other.sample_block}
    {
View File
@ -41,6 +41,7 @@ namespace ErrorCodes
            const std::string & host,
            UInt16 port,
            UInt8 db_index,
            const std::string & password,
            RedisStorageType storage_type,
            const Block & sample_block);
@ -91,6 +92,7 @@ namespace ErrorCodes
        const std::string host;
        const UInt16 port;
        const UInt8 db_index;
        const std::string password;
        const RedisStorageType storage_type;
        Block sample_block;
View File
@ -1120,6 +1120,8 @@ void SSDComplexKeyCacheStorage::update(
    AbsentIdHandler && on_key_not_found,
    const DictionaryLifetime lifetime)
{
    assert(key_columns.size() == key_types.size());

    auto append_block = [&key_types, this](
        const Columns & new_keys,
        const SSDComplexKeyCachePartition::Attributes & new_attributes,
@ -1447,6 +1449,12 @@ void SSDComplexKeyCacheDictionary::getItemsNumberImpl(
    const Columns & key_columns, const DataTypes & key_types,
    ResultArrayType<OutputType> & out, DefaultGetter && get_default) const
{
    assert(dict_struct.key);
    assert(key_columns.size() == key_types.size());
    assert(key_columns.size() == dict_struct.key->size());

    dict_struct.validateKeyTypes(key_types);

    const auto now = std::chrono::system_clock::now();

    TemporalComplexKeysPool not_found_pool;
@ -1527,6 +1535,8 @@ void SSDComplexKeyCacheDictionary::getItemsStringImpl(
    ColumnString * out,
    DefaultGetter && get_default) const
{
    dict_struct.validateKeyTypes(key_types);

    const auto now = std::chrono::system_clock::now();
    TemporalComplexKeysPool not_found_pool;
View File
@ -427,9 +427,8 @@ private:
using SSDComplexKeyCachePartitionPtr = std::shared_ptr<SSDComplexKeyCachePartition>;

/** Class for managing SSDCachePartition and getting data from source.
  */
class SSDComplexKeyCacheStorage
{
public:
@ -515,9 +514,8 @@ private:
};

/** Dictionary interface
  */
class SSDComplexKeyCacheDictionary final : public IDictionaryBase
{
public:
View File
@ -36,6 +36,7 @@ void registerFunctionsConversion(FunctionFactory & factory)
    factory.registerFunction<FunctionToDate>();
    factory.registerFunction<FunctionToDateTime>();
    factory.registerFunction<FunctionToDateTime32>();
    factory.registerFunction<FunctionToDateTime64>();
    factory.registerFunction<FunctionToUUID>();
    factory.registerFunction<FunctionToString>();
@ -93,6 +94,9 @@ void registerFunctionsConversion(FunctionFactory & factory)
    factory.registerFunction<FunctionParseDateTimeBestEffortUS>();
    factory.registerFunction<FunctionParseDateTimeBestEffortOrZero>();
    factory.registerFunction<FunctionParseDateTimeBestEffortOrNull>();
    factory.registerFunction<FunctionParseDateTime32BestEffort>();
    factory.registerFunction<FunctionParseDateTime32BestEffortOrZero>();
    factory.registerFunction<FunctionParseDateTime32BestEffortOrNull>();
    factory.registerFunction<FunctionParseDateTime64BestEffort>();
    factory.registerFunction<FunctionParseDateTime64BestEffortOrZero>();
    factory.registerFunction<FunctionParseDateTime64BestEffortOrNull>();
View File
@ -977,6 +977,7 @@ struct ConvertImpl<DataTypeFixedString, DataTypeString, Name>
/// Declared early because used below. /// Declared early because used below.
struct NameToDate { static constexpr auto name = "toDate"; }; struct NameToDate { static constexpr auto name = "toDate"; };
struct NameToDateTime { static constexpr auto name = "toDateTime"; }; struct NameToDateTime { static constexpr auto name = "toDateTime"; };
struct NameToDateTime32 { static constexpr auto name = "toDateTime32"; };
struct NameToDateTime64 { static constexpr auto name = "toDateTime64"; }; struct NameToDateTime64 { static constexpr auto name = "toDateTime64"; };
struct NameToString { static constexpr auto name = "toString"; }; struct NameToString { static constexpr auto name = "toString"; };
struct NameToDecimal32 { static constexpr auto name = "toDecimal32"; }; struct NameToDecimal32 { static constexpr auto name = "toDecimal32"; };
@ -1003,6 +1004,26 @@ DEFINE_NAME_TO_INTERVAL(Year)
#undef DEFINE_NAME_TO_INTERVAL #undef DEFINE_NAME_TO_INTERVAL
struct NameParseDateTimeBestEffort;
struct NameParseDateTimeBestEffortOrZero;
struct NameParseDateTimeBestEffortOrNull;
template<typename Name, typename ToDataType>
static inline bool isDateTime64(const ColumnsWithTypeAndName & arguments, const ColumnNumbers & arguments_index = {})
{
if constexpr (std::is_same_v<ToDataType, DataTypeDateTime64>)
return true;
else if constexpr (std::is_same_v<Name, NameToDateTime> || std::is_same_v<Name, NameParseDateTimeBestEffort>
|| std::is_same_v<Name, NameParseDateTimeBestEffortOrZero> || std::is_same_v<Name, NameParseDateTimeBestEffortOrNull>)
{
if (arguments_index.empty())
return (arguments.size() == 2 && isUnsignedInteger(arguments[1].type)) || arguments.size() == 3;
else
return (arguments_index.size() == 2 && isUnsignedInteger(arguments[arguments_index[1]].type)) || arguments_index.size() == 3;
}
return false;
}
template <typename ToDataType, typename Name, typename MonotonicityImpl>
class FunctionConvert : public IFunction
@@ -1034,10 +1055,16 @@ public:
FunctionArgumentDescriptors mandatory_args = {{"Value", nullptr, nullptr, nullptr}};
FunctionArgumentDescriptors optional_args;
-if constexpr (to_decimal || to_datetime64)
if constexpr (to_decimal)
{
mandatory_args.push_back({"scale", &isNativeInteger, &isColumnConst, "const Integer"});
}
if (!to_decimal && isDateTime64<Name, ToDataType>(arguments))
{
mandatory_args.push_back({"scale", &isNativeInteger, &isColumnConst, "const Integer"});
}
// toString(DateTime or DateTime64, [timezone: String])
if ((std::is_same_v<Name, NameToString> && arguments.size() > 0 && (isDateTime64(arguments[0].type) || isDateTime(arguments[0].type)))
// toUnixTimestamp(value[, timezone : String])
@@ -1080,16 +1107,22 @@ public:
UInt32 scale [[maybe_unused]] = DataTypeDateTime64::default_scale;
// DateTime64 requires more arguments: scale and timezone. Since timezone is optional, scale should be first.
-if constexpr (to_datetime64)
if (isDateTime64<Name, ToDataType>(arguments))
{
timezone_arg_position += 1;
scale = static_cast<UInt32>(arguments[1].column->get64(0));
if (to_datetime64 || scale != 0) /// toDateTime('xxxx-xx-xx xx:xx:xx', 0) returns DateTime
return std::make_shared<DataTypeDateTime64>(scale,
extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0));
return std::make_shared<DataTypeDateTime>(extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0));
}
if constexpr (std::is_same_v<ToDataType, DataTypeDateTime>)
return std::make_shared<DataTypeDateTime>(extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0));
-else if constexpr (to_datetime64)
-return std::make_shared<DataTypeDateTime64>(scale, extractTimeZoneNameFromFunctionArguments(arguments, timezone_arg_position, 0));
else if constexpr (std::is_same_v<ToDataType, DataTypeDateTime64>)
throw Exception("LOGICAL ERROR: It is a bug.", ErrorCodes::LOGICAL_ERROR);
else
return std::make_shared<ToDataType>();
}
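Read together, the hunks above make the optional scale argument decide the result type at analysis time. A minimal SQL sketch of the intended behaviour (illustrative session, not part of the diff):

    SELECT toDateTime('2020-01-01 00:00:00', 0);            -- scale 0 keeps plain DateTime
    SELECT toDateTime('2020-01-01 00:00:00.123', 3);        -- non-zero scale yields DateTime64(3)
    SELECT toDateTime('2020-01-01 00:00:00.12', 2, 'UTC');  -- the optional timezone follows the scale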
@@ -1208,6 +1241,22 @@ private:
return true;
};
if (isDateTime64<Name, ToDataType>(block.getColumnsWithTypeAndName(), arguments))
{
/// For toDateTime('xxxx-xx-xx xx:xx:xx.00', 2[, 'timezone']) we need to convert it to DateTime64
const ColumnWithTypeAndName & scale_column = block.getByPosition(arguments[1]);
UInt32 scale = extractToDecimalScale(scale_column);
if (to_datetime64 || scale != 0) /// When scale = 0, the data type is DateTime; otherwise it is DateTime64
{
if (!callOnIndexAndDataType<DataTypeDateTime64>(from_type->getTypeId(), call))
throw Exception("Illegal type " + block.getByPosition(arguments[0]).type->getName() + " of argument of function " + getName(),
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
return;
}
}
bool done = callOnIndexAndDataType<ToDataType>(from_type->getTypeId(), call);
if (!done)
{
@@ -1262,7 +1311,8 @@ public:
DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
{
DataTypePtr res;
-if constexpr (to_datetime64)
if (isDateTime64<Name, ToDataType>(arguments))
{
validateFunctionArgumentTypes(*this, arguments,
FunctionArgumentDescriptors{{"string", isStringOrFixedString, nullptr, "String or FixedString"}},
@@ -1272,11 +1322,12 @@ public:
{"timezone", isStringOrFixedString, isColumnConst, "const String or FixedString"},
});
-UInt64 scale = DataTypeDateTime64::default_scale;
UInt64 scale = to_datetime64 ? DataTypeDateTime64::default_scale : 0;
if (arguments.size() > 1)
scale = extractToDecimalScale(arguments[1]);
const auto timezone = extractTimeZoneNameFromFunctionArguments(arguments, 2, 0);
-res = std::make_shared<DataTypeDateTime64>(scale, timezone);
res = scale == 0 ? res = std::make_shared<DataTypeDateTime>(timezone) : std::make_shared<DataTypeDateTime64>(scale, timezone);
}
else
{
@@ -1323,6 +1374,8 @@ public:
if constexpr (std::is_same_v<ToDataType, DataTypeDateTime>)
res = std::make_shared<DataTypeDateTime>(extractTimeZoneNameFromFunctionArguments(arguments, 1, 0));
else if constexpr (std::is_same_v<ToDataType, DataTypeDateTime64>)
throw Exception("LOGICAL ERROR: It is a bug.", ErrorCodes::LOGICAL_ERROR);
else if constexpr (to_decimal)
{
UInt64 scale = extractToDecimalScale(arguments[1]);
@@ -1340,42 +1393,53 @@ public:
return res;
}
-void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) const override
-{
-const IDataType * from_type = block.getByPosition(arguments[0]).type.get();
-bool ok = true;
-if constexpr (to_decimal || to_datetime64)
-{
-const UInt32 scale = assert_cast<const ToDataType &>(*removeNullable(block.getByPosition(result).type)).getScale();
-if (checkAndGetDataType<DataTypeString>(from_type))
-{
-ConvertThroughParsing<DataTypeString, ToDataType, Name, exception_mode, parsing_mode>::execute(
-block, arguments, result, input_rows_count, scale);
-}
-else if (checkAndGetDataType<DataTypeFixedString>(from_type))
-{
-ConvertThroughParsing<DataTypeFixedString, ToDataType, Name, exception_mode, parsing_mode>::execute(
-block, arguments, result, input_rows_count, scale);
-}
-else
-ok = false;
-}
-else
-{
-if (checkAndGetDataType<DataTypeString>(from_type))
-{
-ConvertThroughParsing<DataTypeString, ToDataType, Name, exception_mode, parsing_mode>::execute(
-block, arguments, result, input_rows_count);
-}
-else if (checkAndGetDataType<DataTypeFixedString>(from_type))
-{
-ConvertThroughParsing<DataTypeFixedString, ToDataType, Name, exception_mode, parsing_mode>::execute(
-block, arguments, result, input_rows_count);
-}
-else
-ok = false;
-}
template <typename ConvertToDataType>
bool executeInternal(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count, UInt32 scale = 0) const
{
const IDataType * from_type = block.getByPosition(arguments[0]).type.get();
if (checkAndGetDataType<DataTypeString>(from_type))
{
ConvertThroughParsing<DataTypeString, ConvertToDataType, Name, exception_mode, parsing_mode>::execute(
block, arguments, result, input_rows_count, scale);
return true;
}
else if (checkAndGetDataType<DataTypeFixedString>(from_type))
{
ConvertThroughParsing<DataTypeFixedString, ConvertToDataType, Name, exception_mode, parsing_mode>::execute(
block, arguments, result, input_rows_count, scale);
return true;
}
return false;
}
void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) const override
{
bool ok = true;
if constexpr (to_decimal)
ok = executeInternal<ToDataType>(block, arguments, result, input_rows_count,
assert_cast<const ToDataType &>(*removeNullable(block.getByPosition(result).type)).getScale());
else
{
if (isDateTime64<Name, ToDataType>(block.getColumnsWithTypeAndName(), arguments))
{
UInt64 scale = to_datetime64 ? DataTypeDateTime64::default_scale : 0;
if (arguments.size() > 1)
scale = extractToDecimalScale(block.getColumnsWithTypeAndName()[arguments[1]]);
if (scale == 0)
ok = executeInternal<DataTypeDateTime>(block, arguments, result, input_rows_count);
else
{
ok = executeInternal<DataTypeDateTime64>(block, arguments, result, input_rows_count, static_cast<UInt32>(scale));
}
}
else
{
ok = executeInternal<ToDataType>(block, arguments, result, input_rows_count);
}
}
if (!ok)
@@ -1638,6 +1702,7 @@ using FunctionToFloat32 = FunctionConvert<DataTypeFloat32, NameToFloat32, ToNumb
using FunctionToFloat64 = FunctionConvert<DataTypeFloat64, NameToFloat64, ToNumberMonotonicity<Float64>>;
using FunctionToDate = FunctionConvert<DataTypeDate, NameToDate, ToDateMonotonicity>;
using FunctionToDateTime = FunctionConvert<DataTypeDateTime, NameToDateTime, ToDateTimeMonotonicity>;
using FunctionToDateTime32 = FunctionConvert<DataTypeDateTime, NameToDateTime32, ToDateTimeMonotonicity>;
using FunctionToDateTime64 = FunctionConvert<DataTypeDateTime64, NameToDateTime64, UnknownMonotonicity>;
using FunctionToUUID = FunctionConvert<DataTypeUUID, NameToUUID, ToNumberMonotonicity<UInt128>>;
using FunctionToString = FunctionConvert<DataTypeString, NameToString, ToStringMonotonicity>;
@@ -1767,6 +1832,9 @@ struct NameParseDateTimeBestEffort { static constexpr auto name = "parseDateTime
struct NameParseDateTimeBestEffortUS { static constexpr auto name = "parseDateTimeBestEffortUS"; };
struct NameParseDateTimeBestEffortOrZero { static constexpr auto name = "parseDateTimeBestEffortOrZero"; };
struct NameParseDateTimeBestEffortOrNull { static constexpr auto name = "parseDateTimeBestEffortOrNull"; };
struct NameParseDateTime32BestEffort { static constexpr auto name = "parseDateTime32BestEffort"; };
struct NameParseDateTime32BestEffortOrZero { static constexpr auto name = "parseDateTime32BestEffortOrZero"; };
struct NameParseDateTime32BestEffortOrNull { static constexpr auto name = "parseDateTime32BestEffortOrNull"; };
struct NameParseDateTime64BestEffort { static constexpr auto name = "parseDateTime64BestEffort"; };
struct NameParseDateTime64BestEffortOrZero { static constexpr auto name = "parseDateTime64BestEffortOrZero"; };
struct NameParseDateTime64BestEffortOrNull { static constexpr auto name = "parseDateTime64BestEffortOrNull"; };
@@ -1781,6 +1849,13 @@ using FunctionParseDateTimeBestEffortOrZero = FunctionConvertFromString<
using FunctionParseDateTimeBestEffortOrNull = FunctionConvertFromString<
DataTypeDateTime, NameParseDateTimeBestEffortOrNull, ConvertFromStringExceptionMode::Null, ConvertFromStringParsingMode::BestEffort>;
using FunctionParseDateTime32BestEffort = FunctionConvertFromString<
DataTypeDateTime, NameParseDateTime32BestEffort, ConvertFromStringExceptionMode::Throw, ConvertFromStringParsingMode::BestEffort>;
using FunctionParseDateTime32BestEffortOrZero = FunctionConvertFromString<
DataTypeDateTime, NameParseDateTime32BestEffortOrZero, ConvertFromStringExceptionMode::Zero, ConvertFromStringParsingMode::BestEffort>;
using FunctionParseDateTime32BestEffortOrNull = FunctionConvertFromString<
DataTypeDateTime, NameParseDateTime32BestEffortOrNull, ConvertFromStringExceptionMode::Null, ConvertFromStringParsingMode::BestEffort>;
using FunctionParseDateTime64BestEffort = FunctionConvertFromString<
DataTypeDateTime64, NameParseDateTime64BestEffort, ConvertFromStringExceptionMode::Throw, ConvertFromStringParsingMode::BestEffort>;
using FunctionParseDateTime64BestEffortOrZero = FunctionConvertFromString<
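The new 32-bit aliases reuse the best-effort parsing machinery but always target DateTime; a hedged sketch of the registered names, with behaviour inferred from the exception modes above (illustrative inputs):

    SELECT parseDateTime32BestEffort('31/08/2020 14:30:00');   -- DateTime on success, exception on failure
    SELECT parseDateTime32BestEffortOrZero('not a date');      -- zero value instead of an exception
    SELECT parseDateTime32BestEffortOrNull('not a date');      -- NULL instead of an exception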
View File
@@ -1,6 +1,6 @@
#!/usr/bin/env bash
-[ ! -f public_suffix_list.dat ] && wget -O public_suffix_list.dat https://publicsuffix.org/list/public_suffix_list.dat
[ ! -f public_suffix_list.dat ] && wget -nv -O public_suffix_list.dat https://publicsuffix.org/list/public_suffix_list.dat
echo '%language=C++
%define lookup-function-name is_valid
View File
@@ -447,6 +447,19 @@ void ScopeStack::addAction(const ExpressionAction & action)
}
}
void ScopeStack::addActionNoInput(const ExpressionAction & action)
{
size_t level = 0;
Names required = action.getNeededColumns();
for (const auto & elem : required)
level = std::max(level, getColumnLevel(elem));
Names added;
stack[level].actions->add(action, added);
stack[level].new_columns.insert(added.begin(), added.end());
}
ExpressionActionsPtr ScopeStack::popLevel()
{
ExpressionActionsPtr res = stack.back().actions;
@@ -549,7 +562,7 @@ void ActionsMatcher::visit(const ASTFunction & node, const ASTPtr & ast, Data &
/// It could have been possible to implement arrayJoin which keeps source column,
/// but in this case it will always be replicated (as many arrays), which is expensive.
String tmp_name = data.getUniqueName("_array_join_" + arg->getColumnName());
-data.addAction(ExpressionAction::copyColumn(arg->getColumnName(), tmp_name));
data.addActionNoInput(ExpressionAction::copyColumn(arg->getColumnName(), tmp_name));
data.addAction(ExpressionAction::arrayJoin(tmp_name, result_name));
}
View File
@@ -12,6 +12,7 @@ namespace DB
class Context;
class ASTFunction;
struct ExpressionAction;
class ExpressionActions;
using ExpressionActionsPtr = std::shared_ptr<ExpressionActions>;
@@ -49,6 +50,8 @@ struct ScopeStack
size_t getColumnLevel(const std::string & name);
void addAction(const ExpressionAction & action);
/// For arrayJoin() to avoid double columns in the input.
void addActionNoInput(const ExpressionAction & action);
ExpressionActionsPtr popLevel();
@@ -115,6 +118,10 @@ public:
{
actions_stack.addAction(action);
}
void addActionNoInput(const ExpressionAction & action)
{
actions_stack.addActionNoInput(action);
}
const Block & getSampleBlock() const
{
View File
@@ -13,6 +13,7 @@
#include <Processors/Transforms/ConvertingTransform.h>
#include <Processors/Sources/RemoteSource.h>
#include <Processors/Sources/DelayedSource.h>
#include <Processors/QueryPlan/QueryPlan.h>
namespace ProfileEvents
{
@@ -68,14 +69,19 @@ SelectStreamFactory::SelectStreamFactory(
namespace
{
-QueryPipeline createLocalStream(
auto createLocalPipe(
const ASTPtr & query_ast, const Block & header, const Context & context, QueryProcessingStage::Enum processed_stage)
{
checkStackSize();
-InterpreterSelectQuery interpreter{query_ast, context, SelectQueryOptions(processed_stage)};
InterpreterSelectQuery interpreter(query_ast, context, SelectQueryOptions(processed_stage));
auto query_plan = std::make_unique<QueryPlan>();
-auto pipeline = interpreter.execute().pipeline;
interpreter.buildQueryPlan(*query_plan);
auto pipeline = std::move(*query_plan->buildQueryPipeline());
/// Avoid it going out-of-scope for EXPLAIN
pipeline.addQueryPlan(std::move(query_plan));
pipeline.addSimpleTransform([&](const Block & source_header)
{
@@ -94,7 +100,7 @@ QueryPipeline createLocalStream(
/// return std::make_shared<MaterializingBlockInputStream>(stream);
pipeline.setMaxThreads(1);
-return pipeline;
return QueryPipeline::getPipe(std::move(pipeline));
}
String formattedAST(const ASTPtr & ast)
@@ -130,7 +136,7 @@ void SelectStreamFactory::createForShard(
auto emplace_local_stream = [&]()
{
-pipes.emplace_back(QueryPipeline::getPipe(createLocalStream(modified_query_ast, header, context, processed_stage)));
pipes.emplace_back(createLocalPipe(modified_query_ast, header, context, processed_stage));
};
String modified_query = formattedAST(modified_query_ast);
@@ -270,7 +276,7 @@ void SelectStreamFactory::createForShard(
}
if (try_results.empty() || local_delay < max_remote_delay)
-return QueryPipeline::getPipe(createLocalStream(modified_query_ast, header, context, stage));
return createLocalPipe(modified_query_ast, header, context, stage);
else
{
std::vector<IConnectionPool::Entry> connections;
View File
@@ -101,7 +101,7 @@ BlockIO InterpreterAlterQuery::execute()
switch (command.type)
{
case LiveViewCommand::REFRESH:
-live_view->refresh(context);
live_view->refresh();
break;
}
}
View File
@@ -269,7 +269,9 @@ BlockInputStreamPtr InterpreterExplainQuery::executeImpl()
if (settings.graph)
{
-auto processors = Pipe::detachProcessors(QueryPipeline::getPipe(std::move(*pipeline)));
/// Pipe holds QueryPlan, should not go out-of-scope
auto pipe = QueryPipeline::getPipe(std::move(*pipeline));
const auto & processors = pipe.getProcessors();
if (settings.compact)
printPipelineCompact(processors, buffer, settings.query_pipeline_options.header);
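For context, this branch serves the graph output of EXPLAIN PIPELINE; a sketch of the kind of query that reaches it (assuming the `graph` setting named in the code above is exposed to EXPLAIN in this release line):

    EXPLAIN PIPELINE graph = 1 SELECT sum(number) FROM numbers_mt(10000) GROUP BY number % 4;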
View File
@@ -29,7 +29,6 @@ namespace
current_access.grant(access_to_grant);
}
AccessRightsElements getFilteredAccessRightsElementsToRevoke(
const AccessRights & current_access, const AccessRightsElements & access_to_revoke, bool grant_option)
{
@@ -214,6 +213,7 @@ BlockIO InterpreterGrantQuery::execute()
auto access = context.getAccess();
auto & access_control = context.getAccessControlManager();
query.replaceEmptyDatabaseWithCurrent(context.getCurrentDatabase());
query.removeNonGrantableFlags();
RolesOrUsersSet roles_from_query;
if (query.roles)
View File
@@ -18,6 +18,7 @@
#include <DataTypes/DataTypeNullable.h>
#include <Parsers/MySQL/ASTDeclareIndex.h>
#include <Common/quoteString.h>
#include <Common/assert_cast.h>
#include <Interpreters/Context.h>
#include <Interpreters/InterpreterCreateQuery.h>
#include <Storages/IStorage.h>
@@ -124,8 +125,37 @@ static NamesAndTypesList getNames(const ASTFunction & expr, const Context & cont
return expression->getRequiredColumnsWithTypes();
}
static NamesAndTypesList modifyPrimaryKeysToNonNullable(const NamesAndTypesList & primary_keys, NamesAndTypesList & columns)
{
/// https://dev.mysql.com/doc/refman/5.7/en/create-table.html#create-table-indexes-keys
/// PRIMARY KEY:
/// A unique index where all key columns must be defined as NOT NULL.
/// If they are not explicitly declared as NOT NULL, MySQL declares them so implicitly (and silently).
/// A table can have only one PRIMARY KEY. The name of a PRIMARY KEY is always PRIMARY,
/// which thus cannot be used as the name for any other kind of index.
NamesAndTypesList non_nullable_primary_keys;
for (const auto & primary_key : primary_keys)
{
if (!primary_key.type->isNullable())
non_nullable_primary_keys.emplace_back(primary_key);
else
{
non_nullable_primary_keys.emplace_back(
NameAndTypePair(primary_key.name, assert_cast<const DataTypeNullable *>(primary_key.type.get())->getNestedType()));
for (auto & column : columns)
{
if (column.name == primary_key.name)
column.type = assert_cast<const DataTypeNullable *>(column.type.get())->getNestedType();
}
}
}
return non_nullable_primary_keys;
}
static inline std::tuple<NamesAndTypesList, NamesAndTypesList, NamesAndTypesList, NameSet> getKeys(
-ASTExpressionList * columns_define, ASTExpressionList * indices_define, const Context & context, const NamesAndTypesList & columns)
ASTExpressionList * columns_define, ASTExpressionList * indices_define, const Context & context, NamesAndTypesList & columns)
{
NameSet increment_columns;
auto keys = makeASTFunction("tuple");
@@ -171,8 +201,9 @@ static inline std::tuple<NamesAndTypesList, NamesAndTypesList, NamesAndTypesList
}
}
-return std::make_tuple(
-getNames(*primary_keys, context, columns), getNames(*unique_keys, context, columns), getNames(*keys, context, columns), increment_columns);
const auto & primary_keys_names_and_types = getNames(*primary_keys, context, columns);
const auto & non_nullable_primary_keys_names_and_types = modifyPrimaryKeysToNonNullable(primary_keys_names_and_types, columns);
return std::make_tuple(non_nullable_primary_keys_names_and_types, getNames(*unique_keys, context, columns), getNames(*keys, context, columns), increment_columns);
}
static String getUniqueColumnName(NamesAndTypesList columns_name_and_type, const String & prefix)
@@ -201,14 +232,13 @@ static String getUniqueColumnName(NamesAndTypesList columns_name_and_type, const
static ASTPtr getPartitionPolicy(const NamesAndTypesList & primary_keys)
{
-const auto & numbers_partition = [&](const String & column_name, bool is_nullable, size_t type_max_size)
const auto & numbers_partition = [&](const String & column_name, size_t type_max_size) -> ASTPtr
{
-ASTPtr column = std::make_shared<ASTIdentifier>(column_name);
-if (is_nullable)
-column = makeASTFunction("assumeNotNull", column);
-return makeASTFunction("intDiv", column, std::make_shared<ASTLiteral>(UInt64(type_max_size / 1000)));
if (type_max_size <= 1000)
return std::make_shared<ASTIdentifier>(column_name);
return makeASTFunction("intDiv", std::make_shared<ASTIdentifier>(column_name),
std::make_shared<ASTLiteral>(UInt64(type_max_size / 1000)));
};
ASTPtr best_partition;
@@ -219,16 +249,12 @@ static ASTPtr getPartitionPolicy(const NamesAndTypesList & primary_keys)
WhichDataType which(type);
if (which.isNullable())
-{
-type = (static_cast<const DataTypeNullable &>(*type)).getNestedType();
-which = WhichDataType(type);
-}
throw Exception("LOGICAL ERROR: MySQL primary key must be not null, it is a bug.", ErrorCodes::LOGICAL_ERROR);
if (which.isDateOrDateTime())
{
/// In any case, date or datetime is always the best partitioning key
-ASTPtr res = std::make_shared<ASTIdentifier>(primary_key.name);
-return makeASTFunction("toYYYYMM", primary_key.type->isNullable() ? makeASTFunction("assumeNotNull", res) : res);
return makeASTFunction("toYYYYMM", std::make_shared<ASTIdentifier>(primary_key.name));
}
if (type->haveMaximumSizeOfValue() && (!best_size || type->getSizeOfValueInMemory() < best_size))
@@ -236,25 +262,22 @@ static ASTPtr getPartitionPolicy(const NamesAndTypesList & primary_keys)
if (which.isInt8() || which.isUInt8())
{
best_size = type->getSizeOfValueInMemory();
-best_partition = std::make_shared<ASTIdentifier>(primary_key.name);
-if (primary_key.type->isNullable())
-best_partition = makeASTFunction("assumeNotNull", best_partition);
best_partition = numbers_partition(primary_key.name, std::numeric_limits<UInt8>::max());
}
else if (which.isInt16() || which.isUInt16())
{
best_size = type->getSizeOfValueInMemory();
-best_partition = numbers_partition(primary_key.name, primary_key.type->isNullable(), std::numeric_limits<UInt16>::max());
best_partition = numbers_partition(primary_key.name, std::numeric_limits<UInt16>::max());
}
else if (which.isInt32() || which.isUInt32())
{
best_size = type->getSizeOfValueInMemory();
-best_partition = numbers_partition(primary_key.name, primary_key.type->isNullable(), std::numeric_limits<UInt32>::max());
best_partition = numbers_partition(primary_key.name, std::numeric_limits<UInt32>::max());
}
else if (which.isInt64() || which.isUInt64())
{
best_size = type->getSizeOfValueInMemory();
-best_partition = numbers_partition(primary_key.name, primary_key.type->isNullable(), std::numeric_limits<UInt64>::max());
best_partition = numbers_partition(primary_key.name, std::numeric_limits<UInt64>::max());
}
}
}
@@ -266,12 +289,12 @@ static ASTPtr getOrderByPolicy(
const NamesAndTypesList & primary_keys, const NamesAndTypesList & unique_keys, const NamesAndTypesList & keys, const NameSet & increment_columns)
{
NameSet order_by_columns_set;
-std::deque<std::vector<String>> order_by_columns_list;
std::deque<NamesAndTypesList> order_by_columns_list;
const auto & add_order_by_expression = [&](const NamesAndTypesList & names_and_types)
{
-std::vector<String> increment_keys;
-std::vector<String> non_increment_keys;
NamesAndTypesList increment_keys;
NamesAndTypesList non_increment_keys;
for (const auto & [name, type] : names_and_types)
{
@@ -280,13 +303,13 @@ static ASTPtr getOrderByPolicy(
if (increment_columns.count(name))
{
-increment_keys.emplace_back(name);
order_by_columns_set.emplace(name);
increment_keys.emplace_back(NameAndTypePair(name, type));
}
else
{
order_by_columns_set.emplace(name);
-non_increment_keys.emplace_back(name);
non_increment_keys.emplace_back(NameAndTypePair(name, type));
}
}
@@ -305,8 +328,13 @@ static ASTPtr getOrderByPolicy(
for (const auto & order_by_columns : order_by_columns_list)
{
-for (const auto & order_by_column : order_by_columns)
-order_by_expression->arguments->children.emplace_back(std::make_shared<ASTIdentifier>(order_by_column));
for (const auto & [name, type] : order_by_columns)
{
order_by_expression->arguments->children.emplace_back(std::make_shared<ASTIdentifier>(name));
if (type->isNullable())
order_by_expression->arguments->children.back() = makeASTFunction("assumeNotNull", order_by_expression->arguments->children.back());
}
}
return order_by_expression;
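The net effect on rewritten DDL can be read straight from the updated test expectations below; for example, with INT mapping to Int32:

    -- MySQL source:
    CREATE TABLE `test_database`.`test_table_1` (`key` INT PRIMARY KEY)
    -- ClickHouse rewrite (primary key forced to non-Nullable, partition key no longer needs assumeNotNull):
    CREATE TABLE test_database.test_table_1 (`key` Int32, `_sign` Int8() MATERIALIZED 1,
        `_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)
        PARTITION BY intDiv(key, 4294967) ORDER BY tuple(key)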
View File
@@ -103,21 +103,12 @@ TEST(MySQLCreateRewritten, PartitionPolicy)
{"TIMESTAMP", "DateTime", " PARTITION BY toYYYYMM(key)"}, {"BOOLEAN", "Int8", " PARTITION BY key"}
};
-const auto & replace_string = [](const String & str, const String & old_str, const String & new_str)
-{
-String new_string = str;
-size_t pos = new_string.find(old_str);
-if (pos != std::string::npos)
-new_string = new_string.replace(pos, old_str.size(), new_str);
-return new_string;
-};
for (const auto & [test_type, mapped_type, partition_policy] : test_types)
{
EXPECT_EQ(queryToString(tryRewrittenCreateQuery(
"CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " PRIMARY KEY)", context_holder.context)),
-"CREATE TABLE test_database.test_table_1 (`key` Nullable(" + mapped_type + "), `_sign` Int8() MATERIALIZED 1, "
-"`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + replace_string(partition_policy, "key", "assumeNotNull(key)") + " ORDER BY tuple(key)");
"CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `_sign` Int8() MATERIALIZED 1, "
"`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY tuple(key)");
EXPECT_EQ(queryToString(tryRewrittenCreateQuery(
"CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " NOT NULL PRIMARY KEY)", context_holder.context)),
@@ -126,6 +117,45 @@ TEST(MySQLCreateRewritten, PartitionPolicy)
}
}
TEST(MySQLCreateRewritten, OrderbyPolicy)
{
tryRegisterFunctions();
const auto & context_holder = getContext();
std::vector<std::tuple<String, String, String>> test_types
{
{"TINYINT", "Int8", " PARTITION BY key"}, {"SMALLINT", "Int16", " PARTITION BY intDiv(key, 65)"},
{"MEDIUMINT", "Int32", " PARTITION BY intDiv(key, 4294967)"}, {"INT", "Int32", " PARTITION BY intDiv(key, 4294967)"},
{"INTEGER", "Int32", " PARTITION BY intDiv(key, 4294967)"}, {"BIGINT", "Int64", " PARTITION BY intDiv(key, 18446744073709551)"},
{"FLOAT", "Float32", ""}, {"DOUBLE", "Float64", ""}, {"VARCHAR(10)", "String", ""}, {"CHAR(10)", "String", ""},
{"Date", "Date", " PARTITION BY toYYYYMM(key)"}, {"DateTime", "DateTime", " PARTITION BY toYYYYMM(key)"},
{"TIMESTAMP", "DateTime", " PARTITION BY toYYYYMM(key)"}, {"BOOLEAN", "Int8", " PARTITION BY key"}
};
for (const auto & [test_type, mapped_type, partition_policy] : test_types)
{
EXPECT_EQ(queryToString(tryRewrittenCreateQuery(
"CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " PRIMARY KEY, `key2` " + test_type + " UNIQUE KEY)", context_holder.context)),
"CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `key2` Nullable(" + mapped_type + "), `_sign` Int8() MATERIALIZED 1, "
"`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY (key, assumeNotNull(key2))");
EXPECT_EQ(queryToString(tryRewrittenCreateQuery(
"CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " NOT NULL PRIMARY KEY, `key2` " + test_type + " NOT NULL UNIQUE KEY)", context_holder.context)),
"CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `key2` " + mapped_type + ", `_sign` Int8() MATERIALIZED 1, "
"`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY (key, key2)");
EXPECT_EQ(queryToString(tryRewrittenCreateQuery(
"CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + " KEY UNIQUE KEY)", context_holder.context)),
"CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `_sign` Int8() MATERIALIZED 1, "
"`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY tuple(key)");
EXPECT_EQ(queryToString(tryRewrittenCreateQuery(
"CREATE TABLE `test_database`.`test_table_1` (`key` " + test_type + ", `key2` " + test_type + " UNIQUE KEY, PRIMARY KEY(`key`, `key2`))", context_holder.context)),
"CREATE TABLE test_database.test_table_1 (`key` " + mapped_type + ", `key2` " + mapped_type + ", `_sign` Int8() MATERIALIZED 1, "
"`_version` UInt64() MATERIALIZED 1) ENGINE = ReplacingMergeTree(_version)" + partition_policy + " ORDER BY (key, key2)");
}
}
TEST(MySQLCreateRewritten, RewrittenQueryWithPrimaryKey)
{
tryRegisterFunctions();
View File
@@ -20,6 +20,7 @@ namespace ErrorCodes
extern const int TOO_DEEP_AST;
extern const int CYCLIC_ALIASES;
extern const int UNKNOWN_QUERY_PARAMETER;
extern const int BAD_ARGUMENTS;
}
@@ -151,6 +152,13 @@ void QueryNormalizer::visitChildren(const ASTPtr & node, Data & data)
{
if (const auto * func_node = node->as<ASTFunction>())
{
if (func_node->query)
{
if (func_node->name != "view")
throw Exception("Query argument can only be used in the `view` TableFunction", ErrorCodes::BAD_ARGUMENTS);
/// Don't go into query argument.
return;
}
/// We skip the first argument. We also assume that the lambda function can not have parameters.
size_t first_pos = 0;
if (func_node->name == "lambda")
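The guard above reserves query arguments for the view table function; a short sketch of both sides of that rule (illustrative queries):

    SELECT * FROM view(SELECT number FROM numbers(10)) LIMIT 3;  -- allowed: view() takes a query argument
    SELECT sum(SELECT 1);                                        -- rejected with BAD_ARGUMENTS by this check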
View File
@@ -18,6 +18,7 @@
#include <Parsers/ASTLiteral.h>
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTColumnsMatcher.h>
#include <Parsers/ASTColumnsTransformers.h>
namespace DB
@@ -135,8 +136,8 @@ void TranslateQualifiedNamesMatcher::visit(ASTFunction & node, const ASTPtr &, D
void TranslateQualifiedNamesMatcher::visit(const ASTQualifiedAsterisk &, const ASTPtr & ast, Data & data)
{
-if (ast->children.size() != 1)
-throw Exception("Logical error: qualified asterisk must have exactly one child", ErrorCodes::LOGICAL_ERROR);
if (ast->children.empty())
throw Exception("Logical error: qualified asterisk must have children", ErrorCodes::LOGICAL_ERROR);
auto & ident = ast->children[0];
@@ -242,6 +243,10 @@ void TranslateQualifiedNamesMatcher::visit(ASTExpressionList & node, const ASTPt
first_table = false;
}
for (const auto & transformer : asterisk->children)
{
IASTColumnsTransformer::transform(transformer, node.children);
}
}
else if (const auto * asterisk_pattern = child->as<ASTColumnsMatcher>())
{
@@ -258,6 +263,11 @@ void TranslateQualifiedNamesMatcher::visit(ASTExpressionList & node, const ASTPt
first_table = false;
}
// ColumnsMatcher's transformers start to appear at child 1
for (auto it = asterisk_pattern->children.begin() + 1; it != asterisk_pattern->children.end(); ++it)
{
IASTColumnsTransformer::transform(*it, node.children);
}
}
else if (const auto * qualified_asterisk = child->as<ASTQualifiedAsterisk>())
{
@@ -274,6 +284,11 @@ void TranslateQualifiedNamesMatcher::visit(ASTExpressionList & node, const ASTPt
break;
}
}
// QualifiedAsterisk's transformers start to appear at child 1
for (auto it = qualified_asterisk->children.begin() + 1; it != qualified_asterisk->children.end(); ++it)
{
IASTColumnsTransformer::transform(*it, node.children);
}
}
else
node.children.emplace_back(child);
View File
@@ -13,9 +13,14 @@ ASTPtr ASTAsterisk::clone() const
void ASTAsterisk::appendColumnName(WriteBuffer & ostr) const { ostr.write('*'); }
-void ASTAsterisk::formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const
void ASTAsterisk::formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
settings.ostr << "*";
for (const auto & child : children)
{
settings.ostr << ' ';
child->formatImpl(settings, state, frame);
}
}
}
View File
@@ -9,6 +9,9 @@ namespace DB
struct AsteriskSemantic;
struct AsteriskSemanticImpl;
/** SELECT * is expanded to all visible columns of the source table.
* Optional transformers can be attached to further manipulate these expanded columns.
*/
class ASTAsterisk : public IAST
{
public:
View File
@@ -28,10 +28,15 @@ void ASTColumnsMatcher::updateTreeHashImpl(SipHash & hash_state) const
IAST::updateTreeHashImpl(hash_state);
}
-void ASTColumnsMatcher::formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const
void ASTColumnsMatcher::formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "COLUMNS" << (settings.hilite ? hilite_none : "") << "("
<< quoteString(original_pattern) << ")";
for (ASTs::const_iterator it = children.begin() + 1; it != children.end(); ++it)
{
settings.ostr << ' ';
(*it)->formatImpl(settings, state, frame);
}
}
void ASTColumnsMatcher::setPattern(String pattern)
View File
@@ -23,6 +23,7 @@ struct AsteriskSemanticImpl;
/** SELECT COLUMNS('regexp') is expanded to multiple columns like * (asterisk).
* Optional transformers can be attached to further manipulate these expanded columns.
*/
class ASTColumnsMatcher : public IAST
{
View File
@@ -0,0 +1,159 @@
#include <map>
#include "ASTColumnsTransformers.h"
#include <IO/WriteHelpers.h>
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTIdentifier.h>
#include <Common/SipHash.h>
#include <Common/quoteString.h>
namespace DB
{
namespace ErrorCodes
{
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}
void IASTColumnsTransformer::transform(const ASTPtr & transformer, ASTs & nodes)
{
if (const auto * apply = transformer->as<ASTColumnsApplyTransformer>())
{
apply->transform(nodes);
}
else if (const auto * except = transformer->as<ASTColumnsExceptTransformer>())
{
except->transform(nodes);
}
else if (const auto * replace = transformer->as<ASTColumnsReplaceTransformer>())
{
replace->transform(nodes);
}
}
void ASTColumnsApplyTransformer::formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "APPLY" << (settings.hilite ? hilite_none : "") << "(" << func_name << ")";
}
void ASTColumnsApplyTransformer::transform(ASTs & nodes) const
{
for (auto & column : nodes)
{
column = makeASTFunction(func_name, column);
}
}
void ASTColumnsExceptTransformer::formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "EXCEPT" << (settings.hilite ? hilite_none : "") << "(";
for (ASTs::const_iterator it = children.begin(); it != children.end(); ++it)
{
if (it != children.begin())
{
settings.ostr << ", ";
}
(*it)->formatImpl(settings, state, frame);
}
settings.ostr << ")";
}
void ASTColumnsExceptTransformer::transform(ASTs & nodes) const
{
nodes.erase(
std::remove_if(
nodes.begin(),
nodes.end(),
[this](const ASTPtr & node_child)
{
if (const auto * id = node_child->as<ASTIdentifier>())
{
for (const auto & except_child : children)
{
if (except_child->as<const ASTIdentifier &>().name == id->shortName())
return true;
}
}
return false;
}),
nodes.end());
}
void ASTColumnsReplaceTransformer::Replacement::formatImpl(
const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
expr->formatImpl(settings, state, frame);
settings.ostr << (settings.hilite ? hilite_keyword : "") << " AS " << (settings.hilite ? hilite_none : "") << name;
}
void ASTColumnsReplaceTransformer::formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "REPLACE" << (settings.hilite ? hilite_none : "") << "(";
for (ASTs::const_iterator it = children.begin(); it != children.end(); ++it)
{
if (it != children.begin())
{
settings.ostr << ", ";
}
(*it)->formatImpl(settings, state, frame);
}
settings.ostr << ")";
}
void ASTColumnsReplaceTransformer::replaceChildren(ASTPtr & node, const ASTPtr & replacement, const String & name)
{
for (auto & child : node->children)
{
if (const auto * id = child->as<ASTIdentifier>())
{
if (id->shortName() == name)
child = replacement;
}
else
replaceChildren(child, replacement, name);
}
}
void ASTColumnsReplaceTransformer::transform(ASTs & nodes) const
{
std::map<String, ASTPtr> replace_map;
for (const auto & replace_child : children)
{
auto & replacement = replace_child->as<Replacement &>();
if (replace_map.find(replacement.name) != replace_map.end())
throw Exception(
"Expressions in columns transformer REPLACE should not contain the same replacement more than once",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
replace_map.emplace(replacement.name, replacement.expr);
}
for (auto & column : nodes)
{
if (const auto * id = column->as<ASTIdentifier>())
{
auto replace_it = replace_map.find(id->shortName());
if (replace_it != replace_map.end())
{
column = replace_it->second;
column->setAlias(replace_it->first);
}
}
else if (auto * ast_with_alias = dynamic_cast<ASTWithAlias *>(column.get()))
{
auto replace_it = replace_map.find(ast_with_alias->alias);
if (replace_it != replace_map.end())
{
auto new_ast = replace_it->second->clone();
ast_with_alias->alias = ""; // remove the old alias as it's useless after replace transformation
replaceChildren(new_ast, column, replace_it->first);
column = new_ast;
column->setAlias(replace_it->first);
}
}
}
}
}
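Together with the parser changes later in this commit, these transformers back the user-facing syntax; a hedged sketch of typical usage (table and column names are illustrative):

    SELECT * EXCEPT (secret) FROM t;                      -- drop matching columns from the expansion
    SELECT * APPLY (sum) FROM t;                          -- wrap every expanded column in sum(...)
    SELECT COLUMNS('^c') REPLACE (c1 + 1 AS c1) FROM t;   -- substitute an expression, keeping the alias
    SELECT t.* EXCEPT (id) APPLY (max) FROM t;            -- transformers chain left to right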
View File
@@ -0,0 +1,85 @@
#pragma once
#include <Parsers/IAST.h>
namespace DB
{
class IASTColumnsTransformer : public IAST
{
public:
virtual void transform(ASTs & nodes) const = 0;
static void transform(const ASTPtr & transformer, ASTs & nodes);
};
class ASTColumnsApplyTransformer : public IASTColumnsTransformer
{
public:
String getID(char) const override { return "ColumnsApplyTransformer"; }
ASTPtr clone() const override
{
auto res = std::make_shared<ASTColumnsApplyTransformer>(*this);
return res;
}
void transform(ASTs & nodes) const override;
String func_name;
protected:
void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;
};
class ASTColumnsExceptTransformer : public IASTColumnsTransformer
{
public:
String getID(char) const override { return "ColumnsExceptTransformer"; }
ASTPtr clone() const override
{
auto clone = std::make_shared<ASTColumnsExceptTransformer>(*this);
clone->cloneChildren();
return clone;
}
void transform(ASTs & nodes) const override;
protected:
void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;
};
class ASTColumnsReplaceTransformer : public IASTColumnsTransformer
{
public:
class Replacement : public IAST
{
public:
String getID(char) const override { return "ColumnsReplaceTransformer::Replacement"; }
ASTPtr clone() const override
{
auto replacement = std::make_shared<Replacement>(*this);
replacement->name = name;
replacement->expr = expr->clone();
replacement->children.push_back(replacement->expr);
return replacement;
}
String name;
ASTPtr expr;
protected:
void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;
};
String getID(char) const override { return "ColumnsReplaceTransformer"; }
ASTPtr clone() const override
{
auto clone = std::make_shared<ASTColumnsReplaceTransformer>(*this);
clone->cloneChildren();
return clone;
}
void transform(ASTs & nodes) const override;
protected:
void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;
private:
static void replaceChildren(ASTPtr & node, const ASTPtr & replacement, const String & name);
};
}
View File
@@ -48,6 +48,7 @@ ASTPtr ASTFunction::clone() const
auto res = std::make_shared<ASTFunction>(*this);
res->children.clear();
if (query) { res->query = query->clone(); res->children.push_back(res->query); }
if (arguments) { res->arguments = arguments->clone(); res->children.push_back(res->arguments); }
if (parameters) { res->parameters = parameters->clone(); res->children.push_back(res->parameters); }
@@ -118,6 +119,18 @@ void ASTFunction::formatImplWithoutAlias(const FormatSettings & settings, Format
nested_need_parens.need_parens = true;
nested_dont_need_parens.need_parens = false;
if (query)
{
std::string nl_or_nothing = settings.one_line ? "" : "\n";
std::string indent_str = settings.one_line ? "" : std::string(4u * frame.indent, ' ');
settings.ostr << (settings.hilite ? hilite_function : "") << name << "(" << nl_or_nothing;
FormatStateStacked frame_nested = frame;
frame_nested.need_parens = false;
++frame_nested.indent;
query->formatImpl(settings, state, frame_nested);
settings.ostr << nl_or_nothing << indent_str << ")";
return;
}
/// Should this function be written as an operator?
bool written = false;
if (arguments && !parameters)
View File
@@ -13,6 +13,7 @@ class ASTFunction : public ASTWithAlias
{
public:
String name;
ASTPtr query; // It's possible for a function to accept a query as its only argument.
ASTPtr arguments;
/// parameters - for parametric aggregate function. Example: quantile(0.9)(x) - what in first parens are 'parameters'.
ASTPtr parameters;
View File
@@ -144,4 +144,12 @@ void ASTGrantQuery::replaceCurrentUserTagWithName(const String & current_user_na
if (to_roles)
to_roles->replaceCurrentUserTagWithName(current_user_name);
}
void ASTGrantQuery::removeNonGrantableFlags()
{
if (kind == Kind::GRANT)
access_rights_elements.removeNonGrantableFlags();
}
}
View File
@@ -33,6 +33,7 @@ public:
void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;
void replaceEmptyDatabaseWithCurrent(const String & current_database);
void replaceCurrentUserTagWithName(const String & current_user_name) const;
void removeNonGrantableFlags();
ASTPtr getRewrittenASTWithoutOnCluster(const std::string &) const override { return removeOnCluster<ASTGrantQuery>(clone()); }
};
}
View File
@@ -16,6 +16,11 @@ void ASTQualifiedAsterisk::formatImpl(const FormatSettings & settings, FormatSta
const auto & qualifier = children.at(0);
qualifier->formatImpl(settings, state, frame);
settings.ostr << ".*";
for (ASTs::const_iterator it = children.begin() + 1; it != children.end(); ++it)
{
settings.ostr << ' ';
(*it)->formatImpl(settings, state, frame);
}
}
}
View File
@@ -11,6 +11,7 @@ struct AsteriskSemanticImpl;
/** Something like t.*
* It will have qualifier as its child ASTIdentifier.
* Optional transformers can be attached to further manipulate these expanded columns.
*/
class ASTQualifiedAsterisk : public IAST
{
View File
@@ -18,9 +18,13 @@
#include <Parsers/ASTQueryParameter.h>
#include <Parsers/ASTTTLElement.h>
#include <Parsers/ASTOrderByElement.h>
#include <Parsers/ASTSelectWithUnionQuery.h>
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTSubquery.h>
#include <Parsers/ASTFunctionWithKeyValueArguments.h>
#include <Parsers/ASTColumnsTransformers.h>
#include <Parsers/parseIdentifierOrStringLiteral.h>
#include <Parsers/parseIntervalKind.h>
#include <Parsers/ExpressionListParsers.h>
#include <Parsers/ParserSelectWithUnionQuery.h>
@@ -217,10 +221,12 @@ bool ParserFunction::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
ParserIdentifier id_parser;
ParserKeyword distinct("DISTINCT");
ParserExpressionList contents(false);
ParserSelectWithUnionQuery select;
bool has_distinct_modifier = false;
ASTPtr identifier;
ASTPtr query;
ASTPtr expr_list_args;
ASTPtr expr_list_params;
@@ -231,8 +237,36 @@ bool ParserFunction::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
return false;
++pos;
if (distinct.ignore(pos, expected))
has_distinct_modifier = true;
else
{
auto old_pos = pos;
auto maybe_an_subquery = pos->type == TokenType::OpeningRoundBracket;
if (select.parse(pos, query, expected))
{
auto & select_ast = query->as<ASTSelectWithUnionQuery &>();
if (select_ast.list_of_selects->children.size() == 1 && maybe_an_subquery)
{
// It's a subquery. Bail out.
pos = old_pos;
}
else
{
if (pos->type != TokenType::ClosingRoundBracket)
return false;
++pos;
auto function_node = std::make_shared<ASTFunction>();
tryGetIdentifierNameInto(identifier, function_node->name);
function_node->query = query;
function_node->children.push_back(function_node->query);
node = function_node;
return true;
}
}
}
const char * contents_begin = pos->begin;
if (!contents.parse(pos, expr_list_args, expected))
@@ -1172,17 +1206,131 @@ bool ParserColumnsMatcher::parseImpl(Pos & pos, ASTPtr & node, Expected & expect
auto res = std::make_shared<ASTColumnsMatcher>();
res->setPattern(regex_node->as<ASTLiteral &>().value.get<String>());
res->children.push_back(regex_node);
ParserColumnsTransformers transformers_p;
ASTPtr transformer;
while (transformers_p.parse(pos, transformer, expected))
{
res->children.push_back(transformer);
}
node = std::move(res);
return true;
}
-bool ParserAsterisk::parseImpl(Pos & pos, ASTPtr & node, Expected &)
bool ParserColumnsTransformers::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
{
ParserKeyword apply("APPLY");
ParserKeyword except("EXCEPT");
ParserKeyword replace("REPLACE");
ParserKeyword as("AS");
if (apply.ignore(pos, expected))
{
if (pos->type != TokenType::OpeningRoundBracket)
return false;
++pos;
String func_name;
if (!parseIdentifierOrStringLiteral(pos, expected, func_name))
return false;
if (pos->type != TokenType::ClosingRoundBracket)
return false;
++pos;
auto res = std::make_shared<ASTColumnsApplyTransformer>();
res->func_name = func_name;
node = std::move(res);
return true;
}
else if (except.ignore(pos, expected))
{
if (pos->type != TokenType::OpeningRoundBracket)
return false;
++pos;
ASTs identifiers;
auto parse_id = [&identifiers, &pos, &expected]
{
ASTPtr identifier;
if (!ParserIdentifier().parse(pos, identifier, expected))
return false;
identifiers.emplace_back(std::move(identifier));
return true;
};
if (!ParserList::parseUtil(pos, expected, parse_id, false))
return false;
if (pos->type != TokenType::ClosingRoundBracket)
return false;
++pos;
auto res = std::make_shared<ASTColumnsExceptTransformer>();
res->children = std::move(identifiers);
node = std::move(res);
return true;
}
else if (replace.ignore(pos, expected))
{
if (pos->type != TokenType::OpeningRoundBracket)
return false;
++pos;
ASTs replacements;
ParserExpression element_p;
ParserIdentifier ident_p;
auto parse_id = [&]
{
ASTPtr expr;
if (!element_p.parse(pos, expr, expected))
return false;
if (!as.ignore(pos, expected))
return false;
ASTPtr ident;
if (!ident_p.parse(pos, ident, expected))
return false;
auto replacement = std::make_shared<ASTColumnsReplaceTransformer::Replacement>();
replacement->name = getIdentifierName(ident);
replacement->expr = std::move(expr);
replacements.emplace_back(std::move(replacement));
return true;
};
if (!ParserList::parseUtil(pos, expected, parse_id, false))
return false;
if (pos->type != TokenType::ClosingRoundBracket)
return false;
++pos;
auto res = std::make_shared<ASTColumnsReplaceTransformer>();
res->children = std::move(replacements);
node = std::move(res);
return true;
}
return false;
}
bool ParserAsterisk::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
{
if (pos->type == TokenType::Asterisk)
{
++pos;
-node = std::make_shared<ASTAsterisk>();
auto asterisk = std::make_shared<ASTAsterisk>();
ParserColumnsTransformers transformers_p;
ASTPtr transformer;
while (transformers_p.parse(pos, transformer, expected))
{
asterisk->children.push_back(transformer);
}
node = asterisk;
return true;
}
return false;
@@ -1204,6 +1352,12 @@ bool ParserQualifiedAsterisk::parseImpl(Pos & pos, ASTPtr & node, Expected & exp
auto res = std::make_shared<ASTQualifiedAsterisk>();
res->children.push_back(node);
ParserColumnsTransformers transformers_p;
ASTPtr transformer;
while (transformers_p.parse(pos, transformer, expected))
{
res->children.push_back(transformer);
}
node = std::move(res);
return true;
}
View File
@@ -88,6 +88,15 @@ protected:
bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override;
};
/** *, t.*, db.table.*, COLUMNS('<regular expression>') APPLY(...) or EXCEPT(...) or REPLACE(...)
*/
class ParserColumnsTransformers : public IParserBase
{
protected:
const char * getName() const override { return "COLUMNS transformers"; }
bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override;
};
/** A function, for example, f(x, y + 1, g(z)).
* Or an aggregate function: sum(x + f(y)), corr(x, y). The syntax is the same as the usual function.
* Or a parametric aggregate function: quantile(0.9)(x + y).
View File
@@ -46,10 +46,10 @@ static inline bool parseColumnDeclareOptions(IParser::Pos & pos, ASTPtr & node,
    OptionDescribe("DEFAULT", "default", std::make_unique<ParserExpression>()),
    OptionDescribe("ON UPDATE", "on_update", std::make_unique<ParserExpression>()),
    OptionDescribe("AUTO_INCREMENT", "auto_increment", std::make_unique<ParserAlwaysTrue>()),
    OptionDescribe("UNIQUE", "unique_key", std::make_unique<ParserAlwaysTrue>()),
    OptionDescribe("UNIQUE KEY", "unique_key", std::make_unique<ParserAlwaysTrue>()),
    OptionDescribe("KEY", "primary_key", std::make_unique<ParserAlwaysTrue>()),
    OptionDescribe("PRIMARY KEY", "primary_key", std::make_unique<ParserAlwaysTrue>()),
    OptionDescribe("COMMENT", "comment", std::make_unique<ParserStringLiteral>()),
    OptionDescribe("CHARACTER SET", "charset_name", std::make_unique<ParserCharsetName>()),
    OptionDescribe("COLLATE", "collate", std::make_unique<ParserCharsetName>()),
View File
@@ -14,6 +14,7 @@ namespace DB
{
namespace ErrorCodes
{
    extern const int INVALID_GRANT;
    extern const int SYNTAX_ERROR;
}

@@ -156,6 +157,29 @@
    }
    void removeNonGrantableFlags(AccessRightsElements & elements)
    {
        for (auto & element : elements)
        {
            if (element.empty())
                continue;
            auto old_flags = element.access_flags;
            element.removeNonGrantableFlags();
            if (!element.empty())
                continue;

            if (!element.any_column)
                throw Exception(old_flags.toString() + " cannot be granted on the column level", ErrorCodes::INVALID_GRANT);
            else if (!element.any_table)
                throw Exception(old_flags.toString() + " cannot be granted on the table level", ErrorCodes::INVALID_GRANT);
            else if (!element.any_database)
                throw Exception(old_flags.toString() + " cannot be granted on the database level", ErrorCodes::INVALID_GRANT);
            else
                throw Exception(old_flags.toString() + " cannot be granted", ErrorCodes::INVALID_GRANT);
        }
    }
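The check reports against the narrowest scope the element names: if stripping non-grantable flags leaves nothing, the error says "column level", "table level", or "database level" accordingly. A standalone sketch of the same decision logic, with hypothetical flag names and types rather than the ClickHouse AccessFlags API:

#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <string>

enum Flags : uint32_t { SELECT = 1, INSERT = 2, CREATE_USER = 4 };

/// Which flags are meaningful at each level (assumed values for the sketch).
constexpr uint32_t column_level_flags   = SELECT;
constexpr uint32_t table_level_flags    = SELECT | INSERT;
constexpr uint32_t database_level_flags = SELECT | INSERT;

struct Element
{
    uint32_t access_flags = 0;
    bool any_column = true;   /// true means "not restricted to specific columns"
    bool any_table = true;
    bool any_database = true;
};

void checkGrantable(Element element)
{
    uint32_t old_flags = element.access_flags;
    /// Keep only the flags valid at the narrowest level this element names.
    if (!element.any_column)
        element.access_flags &= column_level_flags;
    else if (!element.any_table)
        element.access_flags &= table_level_flags;
    else if (!element.any_database)
        element.access_flags &= database_level_flags;

    if (element.access_flags || !old_flags)
        return;   /// something survived, or there was nothing to check

    if (!element.any_column)
        throw std::runtime_error("cannot be granted on the column level");
    if (!element.any_table)
        throw std::runtime_error("cannot be granted on the table level");
    if (!element.any_database)
        throw std::runtime_error("cannot be granted on the database level");
    throw std::runtime_error("cannot be granted");
}

int main()
{
    try
    {
        /// Analogous to GRANT CREATE USER ON db.table: not a table-level privilege.
        checkGrantable({CREATE_USER, true, false, true});
    }
    catch (const std::exception & e)
    {
        std::cout << e.what() << "\n";   /// prints: cannot be granted on the table level
    }
}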
    bool parseRoles(IParser::Pos & pos, Expected & expected, Kind kind, bool id_mode, std::shared_ptr<ASTRolesOrUsersSet> & roles)
    {
        return IParserBase::wrapParseImpl(pos, [&]

@@ -274,6 +298,9 @@ bool ParserGrantQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
    if (admin_option && !elements.empty())
        throw Exception("ADMIN OPTION should be specified for roles", ErrorCodes::SYNTAX_ERROR);

    if (kind == Kind::GRANT)
        removeNonGrantableFlags(elements);

    auto query = std::make_shared<ASTGrantQuery>();
    node = query;
View File
@@ -10,6 +10,7 @@ SRCS(
    ASTAsterisk.cpp
    ASTColumnDeclaration.cpp
    ASTColumnsMatcher.cpp
    ASTColumnsTransformers.cpp
    ASTConstraintDeclaration.cpp
    ASTCreateQuery.cpp
    ASTCreateQuotaQuery.cpp
View File
@@ -469,7 +469,16 @@ void PipelineExecutor::wakeUpExecutor(size_t thread_num)

void PipelineExecutor::executeSingleThread(size_t thread_num, size_t num_threads)
{
    try
    {
        executeStepImpl(thread_num, num_threads);
    }
    catch (...)
    {
        /// In case of exception from executor itself, stop other threads.
        finish();
        throw;
    }

#ifndef NDEBUG
    auto & context = executor_contexts[thread_num];
View File
@@ -102,6 +102,8 @@ Pipe::Holder & Pipe::Holder::operator=(Holder && rhs)
    storage_holders.insert(storage_holders.end(), rhs.storage_holders.begin(), rhs.storage_holders.end());
    interpreter_context.insert(interpreter_context.end(),
        rhs.interpreter_context.begin(), rhs.interpreter_context.end());
    for (auto & plan : rhs.query_plans)
        query_plans.emplace_back(std::move(plan));

    return *this;
}
View File
@@ -1,6 +1,7 @@
#pragma once
#include <Processors/IProcessor.h>
#include <Processors/Sources/SourceWithProgress.h>
#include <Processors/QueryPlan/QueryPlan.h>

namespace DB
{
@@ -8,6 +9,8 @@ namespace DB
class Pipe;
using Pipes = std::vector<Pipe>;

class QueryPipeline;

class IStorage;
using StoragePtr = std::shared_ptr<IStorage>;
@@ -86,6 +89,8 @@ public:
    /// Get processors from Pipe. Use it with caution, it is easy to lose totals and extremes ports.
    static Processors detachProcessors(Pipe pipe) { return std::move(pipe.processors); }
    /// Get processors from Pipe w/o destroying pipe (used for EXPLAIN to keep QueryPlan).
    const Processors & getProcessors() const { return processors; }

    /// Specify quotas and limits for every ISourceWithProgress.
    void setLimits(const SourceWithProgress::LocalLimits & limits);
@@ -96,6 +101,8 @@ public:
    /// These methods are from QueryPipeline. Needed to make conversion from pipeline to pipe possible.
    void addInterpreterContext(std::shared_ptr<Context> context) { holder.interpreter_context.emplace_back(std::move(context)); }
    void addStorageHolder(StoragePtr storage) { holder.storage_holders.emplace_back(std::move(storage)); }
    /// For queries with nested interpreters (i.e. StorageDistributed)
    void addQueryPlan(std::unique_ptr<QueryPlan> plan) { holder.query_plans.emplace_back(std::move(plan)); }

private:
    /// Destruction order: processors, header, locks, temporary storages, local contexts
@@ -113,6 +120,7 @@ private:
    std::vector<std::shared_ptr<Context>> interpreter_context;
    std::vector<StoragePtr> storage_holders;
    std::vector<TableLockHolder> table_locks;
    std::vector<std::unique_ptr<QueryPlan>> query_plans;
};

Holder holder;
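The holder is what lets a Pipe produced by a nested interpreter outlive that interpreter: owning handles (contexts, storages, and now query plans) ride along inside the pipe and are destroyed with it. A minimal standalone sketch of the idea, with illustrative types rather than the ClickHouse Pipe API:

#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct QueryPlanLike
{
    explicit QueryPlanLike(std::string name_) : name(std::move(name_)) {}
    ~QueryPlanLike() { std::cout << "destroyed plan: " << name << "\n"; }
    std::string name;
};

struct PipeLike
{
    struct Holder
    {
        std::vector<std::unique_ptr<QueryPlanLike>> query_plans;
    };
    Holder holder;

    /// For queries with nested interpreters: the inner plan must live as long as the pipe.
    void addQueryPlan(std::unique_ptr<QueryPlanLike> plan)
    {
        holder.query_plans.emplace_back(std::move(plan));
    }
};

PipeLike buildNestedPipe()
{
    PipeLike pipe;
    pipe.addQueryPlan(std::make_unique<QueryPlanLike>("distributed subplan"));
    return pipe;   /// the holder, and with it the plan, moves out together with the pipe
}

int main()
{
    {
        PipeLike pipe = buildNestedPipe();
        std::cout << "pipe alive, plan alive\n";
    }   /// holder destroyed here -> "destroyed plan" prints here, not earlier
    std::cout << "pipe destroyed\n";
}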
View File
@@ -21,6 +21,8 @@ class QueryPipelineProcessorsCollector;
struct AggregatingTransformParams;
using AggregatingTransformParamsPtr = std::shared_ptr<AggregatingTransformParams>;

class QueryPlan;

class QueryPipeline
{
public:
@@ -93,6 +95,7 @@ public:
    void addTableLock(const TableLockHolder & lock) { pipe.addTableLock(lock); }
    void addInterpreterContext(std::shared_ptr<Context> context) { pipe.addInterpreterContext(std::move(context)); }
    void addStorageHolder(StoragePtr storage) { pipe.addStorageHolder(std::move(storage)); }
    void addQueryPlan(std::unique_ptr<QueryPlan> plan) { pipe.addQueryPlan(std::move(plan)); }

    /// For compatibility with IBlockInputStream.
    void setProgressCallback(const ProgressCallback & callback);
View File
@@ -518,9 +518,10 @@ void StorageLiveView::drop()
    condition.notify_all();
}

void StorageLiveView::refresh()
{
    // Lock is already acquired exclusively from InterpreterAlterQuery.cpp InterpreterAlterQuery::execute() method.
    // So, reacquiring the lock is not needed and would result in an exception.
    {
        std::lock_guard lock(mutex);
        if (getNewBlocks())
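The convention being encoded: the interpreter acquires the exclusive table lock once, and refresh() runs under it rather than locking again, since the lock is not recursive. A hypothetical standalone sketch of this caller-holds-lock pattern, with std mutexes standing in for ClickHouse's table locks:

#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::timed_mutex table_lock;   /// stand-in for the storage's exclusive table lock
std::mutex state_mutex;        /// stand-in for StorageLiveView::mutex

/// Precondition: the caller already holds table_lock exclusively.
void refresh()
{
    std::lock_guard lock(state_mutex);
    std::cout << "recomputing live view blocks\n";
}

void interpreterExecute()
{
    std::lock_guard exclusive(table_lock);   /// acquired once, in the caller
    refresh();                               /// must not try to take table_lock again
}

int main()
{
    interpreterExecute();

    /// Why refresh() must not re-acquire: the lock is not recursive, so a second
    /// acquisition while it is held simply blocks (shown here from another thread).
    table_lock.lock();
    std::thread waiter([]
    {
        if (!table_lock.try_lock_for(std::chrono::milliseconds(50)))
            std::cout << "second acquisition times out while the lock is held\n";
        else
            table_lock.unlock();
    });
    waiter.join();
    table_lock.unlock();
}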
View File
@@ -122,7 +122,7 @@ public:
    void startup() override;
    void shutdown() override;

    void refresh();

    Pipe read(
        const Names & column_names,
View File
@@ -1042,6 +1042,37 @@ void IMergeTreeDataPart::accumulateColumnSizes(ColumnToSize & column_to_size) const
    column_to_size[column_name] = size.data_compressed;
}
bool IMergeTreeDataPart::checkAllTTLCalculated(const StorageMetadataPtr & metadata_snapshot) const
{
    if (!metadata_snapshot->hasAnyTTL())
        return false;

    if (metadata_snapshot->hasRowsTTL())
    {
        if (isEmpty()) /// All rows were finally deleted and we don't store TTL
            return true;
        else if (ttl_infos.table_ttl.min == 0)
            return false;
    }

    for (const auto & [column, desc] : metadata_snapshot->getColumnTTLs())
    {
        /// Part has this column, but we haven't calculated TTL for it
        if (!ttl_infos.columns_ttl.count(column) && getColumns().contains(column))
            return false;
    }

    for (const auto & move_desc : metadata_snapshot->getMoveTTLs())
    {
        /// Move TTL is not calculated
        if (!ttl_infos.moves_ttl.count(move_desc.result_column))
            return false;
    }

    return true;
}
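In words: a part counts as fully calculated only when every TTL rule in the metadata snapshot has a matching entry in the part's ttl_infos, covering the rows TTL, per-column TTLs for columns the part actually stores, and move TTLs. A standalone sketch of the same predicate over plain containers, with illustrative types rather than the IMergeTreeDataPart API:

#include <iostream>
#include <map>
#include <set>
#include <string>

struct TTLInfo { long min = 0; long max = 0; };

struct PartTTLInfos
{
    TTLInfo table_ttl;
    std::map<std::string, TTLInfo> columns_ttl;
    std::map<std::string, TTLInfo> moves_ttl;
};

bool checkAllTTLCalculated(
    const PartTTLInfos & ttl_infos,
    bool has_rows_ttl,                                /// from table metadata
    const std::set<std::string> & columns_with_ttl,   /// from table metadata
    const std::set<std::string> & move_ttl_columns,   /// from table metadata
    const std::set<std::string> & part_columns,
    bool part_is_empty)
{
    if (has_rows_ttl)
    {
        if (part_is_empty)
            return true;                   /// all rows expired; nothing left to track
        if (ttl_infos.table_ttl.min == 0)
            return false;                  /// rows TTL declared but never computed
    }
    for (const auto & column : columns_with_ttl)
        if (!ttl_infos.columns_ttl.count(column) && part_columns.count(column))
            return false;                  /// column is present, its TTL is missing
    for (const auto & result_column : move_ttl_columns)
        if (!ttl_infos.moves_ttl.count(result_column))
            return false;                  /// move TTL is missing
    return true;
}

int main()
{
    PartTTLInfos infos;
    infos.table_ttl.min = 1598918400;      /// rows TTL computed
    std::cout << std::boolalpha
              << checkAllTTLCalculated(infos, true, {"event_date"}, {}, {"event_date"}, false)
              << "\n";                     /// false: column TTL for event_date missing
}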
bool isCompactPart(const MergeTreeDataPartPtr & data_part)
{
    return (data_part && data_part->getType() == MergeTreeDataPartType::COMPACT);
Some files were not shown because too many files have changed in this diff.