Merge branch 'master' into revolg-DOCSUP-1673-Doc_the_output_format_pretty_max_value_width_setting
commit 352d54fa6b

193  CHANGELOG.md

@@ -1,3 +1,196 @@

## ClickHouse release 20.7

### ClickHouse release v20.7.2.30-stable, 2020-08-31

#### Backward Incompatible Change

* Function `modulo` (operator `%`) with at least one floating point number as argument will calculate the remainder of division directly on floating point numbers, without converting both arguments to integers. This makes the behaviour compatible with most DBMS and also applies to the Date and DateTime data types. Added alias `mod`. This closes [#7323](https://github.com/ClickHouse/ClickHouse/issues/7323). [#12585](https://github.com/ClickHouse/ClickHouse/pull/12585) ([alexey-milovidov](https://github.com/alexey-milovidov)).
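
For illustration, a minimal sketch of the new semantics (values chosen arbitrarily):

```sql
-- The remainder is now computed directly on the floating point values.
SELECT 10.5 % 3 AS rem, mod(10.5, 3) AS rem_alias;
-- Both return 1.5; previously both arguments were first converted to integers.
```
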
* Deprecate special printing of zero Date/DateTime values as `0000-00-00` and `0000-00-00 00:00:00`. [#12442](https://github.com/ClickHouse/ClickHouse/pull/12442) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The function `groupArrayMoving*` was not working for distributed queries. Its result was calculated within an incorrect data type (without promotion to the largest type). The function `groupArrayMovingAvg` was returning an integer number that was inconsistent with the `avg` function. This fixes [#12568](https://github.com/ClickHouse/ClickHouse/issues/12568). [#12622](https://github.com/ClickHouse/ClickHouse/pull/12622) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add sanity check for MergeTree settings. If the settings are incorrect, the server will refuse to start or to create a table, printing a detailed explanation to the user. [#13153](https://github.com/ClickHouse/ClickHouse/pull/13153) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Protect from the cases when a user may set `background_pool_size` to a value lower than `number_of_free_entries_in_pool_to_execute_mutation` or `number_of_free_entries_in_pool_to_lower_max_size_of_merge`. In these cases ALTERs won't work, or the maximum size of merge will be too limited. An exception explaining what to do is now thrown. This closes [#10897](https://github.com/ClickHouse/ClickHouse/issues/10897). [#12728](https://github.com/ClickHouse/ClickHouse/pull/12728) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### New Feature

* Polygon dictionary type that provides efficient "reverse geocoding" lookups - to find the region by coordinates in a dictionary of many polygons (world map). It uses a carefully optimized algorithm with recursive grids to maintain low CPU and memory usage. [#9278](https://github.com/ClickHouse/ClickHouse/pull/9278) ([achulkov2](https://github.com/achulkov2)).
* Added support for LDAP authentication for preconfigured users ("Simple Bind" method). [#11234](https://github.com/ClickHouse/ClickHouse/pull/11234) ([Denis Glazachev](https://github.com/traceon)).
* Introduce setting `alter_partition_verbose_result` which outputs information about touched parts for some types of `ALTER TABLE ... PARTITION ...` queries (currently `ATTACH` and `FREEZE`). Closes [#8076](https://github.com/ClickHouse/ClickHouse/issues/8076). [#13017](https://github.com/ClickHouse/ClickHouse/pull/13017) ([alesapin](https://github.com/alesapin)).
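
A hypothetical usage sketch (the table name is made up, and the partition value depends on the table's `PARTITION BY` expression):

```sql
SET alter_partition_verbose_result = 1;
-- The query result now lists the parts touched by the operation.
ALTER TABLE t ATTACH PARTITION 202008;
```
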
* Add `bayesAB` function for bayesian-ab-testing. [#12327](https://github.com/ClickHouse/ClickHouse/pull/12327) ([achimbab](https://github.com/achimbab)).
* Added `system.crash_log` table into which stack traces for fatal errors are collected. This table should be empty. [#12316](https://github.com/ClickHouse/ClickHouse/pull/12316) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added http headers `X-ClickHouse-Database` and `X-ClickHouse-Format` which may be used to set default database and output format. [#12981](https://github.com/ClickHouse/ClickHouse/pull/12981) ([hcz](https://github.com/hczhcz)).
* Add `minMap` and `maxMap` functions support to `SimpleAggregateFunction`. [#12662](https://github.com/ClickHouse/ClickHouse/pull/12662) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Add setting `allow_non_metadata_alters` which restricts execution of `ALTER` queries that modify data on disk. Disabled by default. Closes [#11547](https://github.com/ClickHouse/ClickHouse/issues/11547). [#12635](https://github.com/ClickHouse/ClickHouse/pull/12635) ([alesapin](https://github.com/alesapin)).
* A function `formatRow` is added to support turning arbitrary expressions into a string via a given format. It's useful for manipulating SQL outputs and is quite versatile when combined with the `columns` function. [#12574](https://github.com/ClickHouse/ClickHouse/pull/12574) ([Amos Bird](https://github.com/amosbird)).
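
A small sketch of the idea (CSV is just one possible format):

```sql
-- Render each row of the expression list as a CSV-formatted string.
SELECT formatRow('CSV', number, toString(number * 2)) FROM numbers(3);
```
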
* Add `FROM_UNIXTIME` function for compatibility with MySQL, related to [12149](https://github.com/ClickHouse/ClickHouse/issues/12149). [#12484](https://github.com/ClickHouse/ClickHouse/pull/12484) ([flynn](https://github.com/ucasFL)).
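
For example (timestamps are arbitrary):

```sql
SELECT FROM_UNIXTIME(423543535);              -- same as toDateTime(423543535)
SELECT FROM_UNIXTIME(1234334543, '%Y-%m-%d'); -- with a MySQL-style format string
```
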
* Allow Nullable types as keys in MergeTree tables if `allow_nullable_key` table setting is enabled. https://github.com/ClickHouse/ClickHouse/issues/5319. [#12433](https://github.com/ClickHouse/ClickHouse/pull/12433) ([Amos Bird](https://github.com/amosbird)).
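
A minimal sketch (table name hypothetical):

```sql
CREATE TABLE t_nullable_key
(
    k Nullable(Int32),
    v String
)
ENGINE = MergeTree
ORDER BY k
SETTINGS allow_nullable_key = 1;
```
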
* Integration with [COS](https://intl.cloud.tencent.com/product/cos). [#12386](https://github.com/ClickHouse/ClickHouse/pull/12386) ([fastio](https://github.com/fastio)).
* Add `mapAdd` and `mapSubtract` functions for adding/subtracting key-mapped values. [#11735](https://github.com/ClickHouse/ClickHouse/pull/11735) ([Ildus Kurbangaliev](https://github.com/ildus)).
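
A quick sketch of the tuple-of-arrays form (values arbitrary; the explicit key type mirrors the documented usage):

```sql
-- Keys are merged; values for matching keys are summed.
SELECT mapAdd(([toUInt8(1), 2], [10, 10]), ([toUInt8(1), 3], [5, 5]));
-- Expected result: ([1, 2, 3], [15, 10, 5])
```
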

#### Bug Fix

* Fix premature `ON CLUSTER` timeouts for queries that must be executed on a single replica. Fixes [#6704](https://github.com/ClickHouse/ClickHouse/issues/6704), [#7228](https://github.com/ClickHouse/ClickHouse/issues/7228), [#13361](https://github.com/ClickHouse/ClickHouse/issues/13361), [#11884](https://github.com/ClickHouse/ClickHouse/issues/11884). [#13450](https://github.com/ClickHouse/ClickHouse/pull/13450) ([alesapin](https://github.com/alesapin)).
* Fix crash in mark inclusion search introduced in https://github.com/ClickHouse/ClickHouse/pull/12277. [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition in external dictionaries with cache layout which could lead to a server crash. [#12566](https://github.com/ClickHouse/ClickHouse/pull/12566) ([alesapin](https://github.com/alesapin)).
* Fix visible data clobbering by progress bar in client in interactive mode. This fixes [#12562](https://github.com/ClickHouse/ClickHouse/issues/12562) and [#13369](https://github.com/ClickHouse/ClickHouse/issues/13369) and [#13584](https://github.com/ClickHouse/ClickHouse/issues/13584) and fixes [#12964](https://github.com/ClickHouse/ClickHouse/issues/12964). [#13691](https://github.com/ClickHouse/ClickHouse/pull/13691) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect sorting order for `LowCardinality` columns when ORDER BY multiple columns is used. This fixes [#13958](https://github.com/ClickHouse/ClickHouse/issues/13958). [#14223](https://github.com/ClickHouse/ClickHouse/pull/14223) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Removed hardcoded timeout, which wrongly overruled `query_wait_timeout_milliseconds` setting for cache-dictionary. [#14105](https://github.com/ClickHouse/ClickHouse/pull/14105) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed wrong mount point in extra info for `Poco::Exception: no space left on device`. [#14050](https://github.com/ClickHouse/ClickHouse/pull/14050) ([tavplubix](https://github.com/tavplubix)).
* Fix wrong query optimization of select queries with `DISTINCT` keyword when subqueries also have `DISTINCT` in case `optimize_duplicate_order_by_and_distinct` setting is enabled. [#13925](https://github.com/ClickHouse/ClickHouse/pull/13925) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed potential deadlock when renaming `Distributed` table. [#13922](https://github.com/ClickHouse/ClickHouse/pull/13922) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect sorting for `FixedString` columns when ORDER BY multiple columns is used. Fixes [#13182](https://github.com/ClickHouse/ClickHouse/issues/13182). [#13887](https://github.com/ClickHouse/ClickHouse/pull/13887) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix potentially lower precision of `topK`/`topKWeighted` aggregations (with non-default parameters). [#13817](https://github.com/ClickHouse/ClickHouse/pull/13817) ([Azat Khuzhin](https://github.com/azat)).
* Fix failure when reading from a MergeTree table with an INDEX of type SET when it is compared against NULL. This fixes [#13686](https://github.com/ClickHouse/ClickHouse/issues/13686). [#13793](https://github.com/ClickHouse/ClickHouse/pull/13793) ([Amos Bird](https://github.com/amosbird)).
* Fix step overflow in function `range()`. [#13790](https://github.com/ClickHouse/ClickHouse/pull/13790) ([Azat Khuzhin](https://github.com/azat)).
* Fixed `Directory not empty` error when concurrently executing `DROP DATABASE` and `CREATE TABLE`. [#13756](https://github.com/ClickHouse/ClickHouse/pull/13756) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add range check for `h3KRing` function. This fixes [#13633](https://github.com/ClickHouse/ClickHouse/issues/13633). [#13752](https://github.com/ClickHouse/ClickHouse/pull/13752) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix race condition between DETACH and background merges. Parts may revive after detach. This is continuation of [#8602](https://github.com/ClickHouse/ClickHouse/issues/8602) that did not fix the issue but introduced a test that started to fail in very rare cases, demonstrating the issue. [#13746](https://github.com/ClickHouse/ClickHouse/pull/13746) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix logging of Settings.Names/Values when `log_queries_min_type` is greater than `QUERY_START`. [#13737](https://github.com/ClickHouse/ClickHouse/pull/13737) ([Azat Khuzhin](https://github.com/azat)).
* Fix incorrect message in `clickhouse-server.init` while checking user and group. [#13711](https://github.com/ClickHouse/ClickHouse/pull/13711) ([ylchou](https://github.com/ylchou)).
* Do not optimize `any(arrayJoin())` to `arrayJoin()` under `optimize_move_functions_out_of_any`. [#13681](https://github.com/ClickHouse/ClickHouse/pull/13681) ([Azat Khuzhin](https://github.com/azat)).
* Fixed possible deadlock in concurrent `ALTER ... REPLACE/MOVE PARTITION ...` queries. [#13626](https://github.com/ClickHouse/ClickHouse/pull/13626) ([tavplubix](https://github.com/tavplubix)).
* Fixed the behaviour where a cache dictionary sometimes returned the default value instead of the value present in the source. [#13624](https://github.com/ClickHouse/ClickHouse/pull/13624) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix secondary indices corruption in compact parts (compact parts are an experimental feature). [#13538](https://github.com/ClickHouse/ClickHouse/pull/13538) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong code in function `netloc`. This fixes [#13335](https://github.com/ClickHouse/ClickHouse/issues/13335). [#13446](https://github.com/ClickHouse/ClickHouse/pull/13446) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix error in `parseDateTimeBestEffort` function when a unix timestamp was passed as an argument. This fixes [#13362](https://github.com/ClickHouse/ClickHouse/issues/13362). [#13441](https://github.com/ClickHouse/ClickHouse/pull/13441) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix invalid return type for comparison of tuples with `NULL` elements. Fixes [#12461](https://github.com/ClickHouse/ClickHouse/issues/12461). [#13420](https://github.com/ClickHouse/ClickHouse/pull/13420) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix wrong optimization that caused an `aggregate function any(x) is found inside another aggregate function in query` error with `SET optimize_move_functions_out_of_any = 1` and aliases inside `any()`. [#13419](https://github.com/ClickHouse/ClickHouse/pull/13419) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix possible race in `StorageMemory`. [#13416](https://github.com/ClickHouse/ClickHouse/pull/13416) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix empty output for `Arrow` and `Parquet` formats when a query returns zero rows, since empty output is not valid for these formats. [#13399](https://github.com/ClickHouse/ClickHouse/pull/13399) ([hcz](https://github.com/hczhcz)).
* Fix select queries with constant columns and prefix of primary key in `ORDER BY` clause. [#13396](https://github.com/ClickHouse/ClickHouse/pull/13396) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `PrettyCompactMonoBlock` for clickhouse-local. Fix extremes/totals with `PrettyCompactMonoBlock`. Fixes [#7746](https://github.com/ClickHouse/ClickHouse/issues/7746). [#13394](https://github.com/ClickHouse/ClickHouse/pull/13394) ([Azat Khuzhin](https://github.com/azat)).
* Fixed deadlock in system.text_log. [#12452](https://github.com/ClickHouse/ClickHouse/pull/12452) ([alexey-milovidov](https://github.com/alexey-milovidov)). It is a part of [#12339](https://github.com/ClickHouse/ClickHouse/issues/12339). This fixes [#12325](https://github.com/ClickHouse/ClickHouse/issues/12325). [#13386](https://github.com/ClickHouse/ClickHouse/pull/13386) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed `File(TSVWithNames*)` (header was written multiple times), fixed `clickhouse-local --format CSVWithNames*` (lacks header, broken after [#12197](https://github.com/ClickHouse/ClickHouse/issues/12197)), fixed `clickhouse-local --format CSVWithNames*` with zero rows (lacks header). [#13343](https://github.com/ClickHouse/ClickHouse/pull/13343) ([Azat Khuzhin](https://github.com/azat)).
* Fix segfault when function `groupArrayMovingSum` deserializes empty state. Fixes [#13339](https://github.com/ClickHouse/ClickHouse/issues/13339). [#13341](https://github.com/ClickHouse/ClickHouse/pull/13341) ([alesapin](https://github.com/alesapin)).
* Throw error on `arrayJoin()` function in `JOIN ON` section. [#13330](https://github.com/ClickHouse/ClickHouse/pull/13330) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix crash in `LEFT ASOF JOIN` with `join_use_nulls=1`. [#13291](https://github.com/ClickHouse/ClickHouse/pull/13291) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix possible error `Totals having transform was already added to pipeline` in case of a query from delayed replica. [#13290](https://github.com/ClickHouse/ClickHouse/pull/13290) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible server crash when a user passes specifically crafted arguments to the function `h3ToChildren`. This fixes [#13275](https://github.com/ClickHouse/ClickHouse/issues/13275). [#13277](https://github.com/ClickHouse/ClickHouse/pull/13277) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix potentially low performance and slightly incorrect result for `uniqExact`, `topK`, `sumDistinct` and similar aggregate functions called on Float types with `NaN` values. It also triggered assert in debug build. This fixes [#12491](https://github.com/ClickHouse/ClickHouse/issues/12491). [#13254](https://github.com/ClickHouse/ClickHouse/pull/13254) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix assertion in KeyCondition when primary key contains expression with monotonic function and query contains comparison with constant whose type is different. This fixes [#12465](https://github.com/ClickHouse/ClickHouse/issues/12465). [#13251](https://github.com/ClickHouse/ClickHouse/pull/13251) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Return the passed number for numbers with MSB set in function `roundUpToPowerOfTwoOrZero()`. It prevents potential errors in case of overflow of array sizes. [#13234](https://github.com/ClickHouse/ClickHouse/pull/13234) ([Azat Khuzhin](https://github.com/azat)).
* Fix function `if` with a nullable constexpr as condition that is not a literal NULL. Fixes [#12463](https://github.com/ClickHouse/ClickHouse/issues/12463). [#13226](https://github.com/ClickHouse/ClickHouse/pull/13226) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix assert in `arrayElement` function when array elements are Nullable and the array subscript is also Nullable. This fixes [#12172](https://github.com/ClickHouse/ClickHouse/issues/12172). [#13224](https://github.com/ClickHouse/ClickHouse/pull/13224) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix DateTime64 conversion functions with constant argument. [#13205](https://github.com/ClickHouse/ClickHouse/pull/13205) ([Azat Khuzhin](https://github.com/azat)).
* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes https://github.com/ClickHouse/ClickHouse/issues/5779, https://github.com/ClickHouse/ClickHouse/issues/12527. [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix access to `redis` dictionary after connection was dropped once. It may happen with `cache` and `direct` dictionary layouts. [#13082](https://github.com/ClickHouse/ClickHouse/pull/13082) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong index analysis with functions. It could lead to some data parts being skipped when reading from `MergeTree` tables. Fixes [#13060](https://github.com/ClickHouse/ClickHouse/issues/13060). Fixes [#12406](https://github.com/ClickHouse/ClickHouse/issues/12406). [#13081](https://github.com/ClickHouse/ClickHouse/pull/13081) ([Anton Popov](https://github.com/CurtizJ)).
* Fix error `Cannot convert column because it is constant but values of constants are different in source and result` for remote queries which use deterministic functions in scope of query, but not deterministic between queries, like `now()`, `now64()`, `randConstant()`. Fixes [#11327](https://github.com/ClickHouse/ClickHouse/issues/11327). [#13075](https://github.com/ClickHouse/ClickHouse/pull/13075) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix crash which was possible for queries with `ORDER BY` tuple and small `LIMIT`. Fixes [#12623](https://github.com/ClickHouse/ClickHouse/issues/12623). [#13009](https://github.com/ClickHouse/ClickHouse/pull/13009) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `Block structure mismatch` error for queries with `UNION` and `JOIN`. Fixes [#12602](https://github.com/ClickHouse/ClickHouse/issues/12602). [#12989](https://github.com/ClickHouse/ClickHouse/pull/12989) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Corrected `merge_with_ttl_timeout` logic which did not work well when expiration affected more than one partition over one time interval. (Authored by @excitoon). [#12982](https://github.com/ClickHouse/ClickHouse/pull/12982) ([Alexander Kazakov](https://github.com/Akazz)).
* Fix columns duplication for range hashed dictionary created from DDL query. This fixes [#10605](https://github.com/ClickHouse/ClickHouse/issues/10605). [#12857](https://github.com/ClickHouse/ClickHouse/pull/12857) ([alesapin](https://github.com/alesapin)).
* Fix unnecessary limiting for the number of threads for selects from local replica. [#12840](https://github.com/ClickHouse/ClickHouse/pull/12840) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix rare bug when `ALTER DELETE` and `ALTER MODIFY COLUMN` queries were executed simultaneously as a single mutation. The bug led to an incorrect amount of rows in `count.txt` and, as a consequence, incorrect data in the part. Also, fix a small bug with simultaneous `ALTER RENAME COLUMN` and `ALTER ADD COLUMN`. [#12760](https://github.com/ClickHouse/ClickHouse/pull/12760) ([alesapin](https://github.com/alesapin)).
* Fix wrong credentials being used when the `clickhouse` dictionary source queries remote tables. [#12756](https://github.com/ClickHouse/ClickHouse/pull/12756) ([sundyli](https://github.com/sundy-li)).
* Fix `CAST(Nullable(String), Enum())`. [#12745](https://github.com/ClickHouse/ClickHouse/pull/12745) ([Azat Khuzhin](https://github.com/azat)).
* Fix performance with large tuples, which are interpreted as functions in `IN` section. The case when user writes `WHERE x IN tuple(1, 2, ...)` instead of `WHERE x IN (1, 2, ...)` for some obscure reason. [#12700](https://github.com/ClickHouse/ClickHouse/pull/12700) ([Anton Popov](https://github.com/CurtizJ)).
* Fix memory tracking for input_format_parallel_parsing (by attaching thread to group). [#12672](https://github.com/ClickHouse/ClickHouse/pull/12672) ([Azat Khuzhin](https://github.com/azat)).
* Fix wrong optimization `optimize_move_functions_out_of_any=1` in case of `any(func(<lambda>))`. [#12664](https://github.com/ClickHouse/ClickHouse/pull/12664) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed bloom filter index with const expression. This fixes [#10572](https://github.com/ClickHouse/ClickHouse/issues/10572). [#12659](https://github.com/ClickHouse/ClickHouse/pull/12659) ([Winter Zhang](https://github.com/zhang2014)).
* Fix SIGSEGV in StorageKafka when broker is unavailable (and not only). [#12658](https://github.com/ClickHouse/ClickHouse/pull/12658) ([Azat Khuzhin](https://github.com/azat)).
* Add support for function `if` with `Array(UUID)` arguments. This fixes [#11066](https://github.com/ClickHouse/ClickHouse/issues/11066). [#12648](https://github.com/ClickHouse/ClickHouse/pull/12648) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* CREATE USER IF NOT EXISTS now doesn't throw an exception if the user exists. This fixes https://github.com/ClickHouse/ClickHouse/issues/12507. [#12646](https://github.com/ClickHouse/ClickHouse/pull/12646) ([Vitaly Baranov](https://github.com/vitlibar)).
* Exception `There is no supertype...` can be thrown during `ALTER ... UPDATE` in unexpected cases (e.g. when subtracting from UInt64 column). This fixes [#7306](https://github.com/ClickHouse/ClickHouse/issues/7306). This fixes [#4165](https://github.com/ClickHouse/ClickHouse/issues/4165). [#12633](https://github.com/ClickHouse/ClickHouse/pull/12633) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible `Pipeline stuck` error for queries with external sorting. Fixes [#12617](https://github.com/ClickHouse/ClickHouse/issues/12617). [#12618](https://github.com/ClickHouse/ClickHouse/pull/12618) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix error `Output of TreeExecutor is not sorted` for `OPTIMIZE DEDUPLICATE`. Fixes [#11572](https://github.com/ClickHouse/ClickHouse/issues/11572). [#12613](https://github.com/ClickHouse/ClickHouse/pull/12613) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix the issue when alias on result of function `any` can be lost during query optimization. [#12593](https://github.com/ClickHouse/ClickHouse/pull/12593) ([Anton Popov](https://github.com/CurtizJ)).
* Remove data for Distributed tables (blocks from async INSERTs) on DROP TABLE. [#12556](https://github.com/ClickHouse/ClickHouse/pull/12556) ([Azat Khuzhin](https://github.com/azat)).
* Now ClickHouse will recalculate checksums for parts when file `checksums.txt` is absent. Broken since [#9827](https://github.com/ClickHouse/ClickHouse/issues/9827). [#12545](https://github.com/ClickHouse/ClickHouse/pull/12545) ([alesapin](https://github.com/alesapin)).
* Fix bug which led to broken old parts after `ALTER DELETE` query when `enable_mixed_granularity_parts=1`. Fixes [#12536](https://github.com/ClickHouse/ClickHouse/issues/12536). [#12543](https://github.com/ClickHouse/ClickHouse/pull/12543) ([alesapin](https://github.com/alesapin)).
* Fix race condition in live view tables which could cause data duplication. LIVE VIEW is an experimental feature. [#12519](https://github.com/ClickHouse/ClickHouse/pull/12519) ([vzakaznikov](https://github.com/vzakaznikov)).
* Fix backwards compatibility in binary format of `AggregateFunction(avg, ...)` values. This fixes [#12342](https://github.com/ClickHouse/ClickHouse/issues/12342). [#12486](https://github.com/ClickHouse/ClickHouse/pull/12486) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix crash in JOIN with dictionary when we are joining over expression of dictionary key: `t JOIN dict ON expr(dict.id) = t.id`. Disable dictionary join optimisation for this case. [#12458](https://github.com/ClickHouse/ClickHouse/pull/12458) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix overflow when very large LIMIT or OFFSET is specified. This fixes [#10470](https://github.com/ClickHouse/ClickHouse/issues/10470). This fixes [#11372](https://github.com/ClickHouse/ClickHouse/issues/11372). [#12427](https://github.com/ClickHouse/ClickHouse/pull/12427) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* kafka: fix SIGSEGV if there is a message with error in the middle of the batch. [#12302](https://github.com/ClickHouse/ClickHouse/pull/12302) ([Azat Khuzhin](https://github.com/azat)).

#### Improvement

* Keep smaller amount of logs in ZooKeeper. Avoid excessive growing of ZooKeeper nodes in case of offline replicas when having many servers/tables/inserts. [#13100](https://github.com/ClickHouse/ClickHouse/pull/13100) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now exceptions are forwarded to the client if an error happened during ALTER or mutation. Closes [#11329](https://github.com/ClickHouse/ClickHouse/issues/11329). [#12666](https://github.com/ClickHouse/ClickHouse/pull/12666) ([alesapin](https://github.com/alesapin)).
* Add `QueryTimeMicroseconds`, `SelectQueryTimeMicroseconds` and `InsertQueryTimeMicroseconds` to `system.events`, along with system.metrics, processes, query_log, etc. [#13028](https://github.com/ClickHouse/ClickHouse/pull/13028) ([ianton-ru](https://github.com/ianton-ru)).
* Added `SelectedRows` and `SelectedBytes` to `system.events`, along with system.metrics, processes, query_log, etc. [#12638](https://github.com/ClickHouse/ClickHouse/pull/12638) ([ianton-ru](https://github.com/ianton-ru)).
* Added `current_database` information to `system.query_log`. [#12652](https://github.com/ClickHouse/ClickHouse/pull/12652) ([Amos Bird](https://github.com/amosbird)).
* Allow `TabSeparatedRaw` as input format. [#12009](https://github.com/ClickHouse/ClickHouse/pull/12009) ([hcz](https://github.com/hczhcz)).
* Now `joinGet` supports multi-key lookup. [#12418](https://github.com/ClickHouse/ClickHouse/pull/12418) ([Amos Bird](https://github.com/amosbird)).
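
A sketch of a multi-key lookup (table name and values hypothetical):

```sql
CREATE TABLE kv (k1 UInt32, k2 String, v String) ENGINE = Join(ANY, LEFT, k1, k2);
INSERT INTO kv VALUES (1, 'a', 'hello');
-- All join keys are passed after the value column name.
SELECT joinGet('kv', 'v', toUInt32(1), 'a');
```
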
* Allow `*Map` aggregate functions to work on Arrays with NULLs. Fixes [#13157](https://github.com/ClickHouse/ClickHouse/issues/13157). [#13225](https://github.com/ClickHouse/ClickHouse/pull/13225) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Avoid overflow in parsing of DateTime values that will lead to negative unix timestamp in their timezone (for example, `1970-01-01 00:00:00` in Moscow). Saturate to zero instead. This fixes [#3470](https://github.com/ClickHouse/ClickHouse/issues/3470). This fixes [#4172](https://github.com/ClickHouse/ClickHouse/issues/4172). [#12443](https://github.com/ClickHouse/ClickHouse/pull/12443) ([alexey-milovidov](https://github.com/alexey-milovidov)).
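
For example, parsing a wall-clock time that precedes the unix epoch in its timezone:

```sql
-- '1970-01-01 00:00:00' in Moscow is before the epoch (UTC+3 at that date),
-- so the value now saturates to zero instead of overflowing.
SELECT toDateTime('1970-01-01 00:00:00', 'Europe/Moscow');
```
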
* AvroConfluent: skip Kafka tombstone records and support skipping broken records. [#13203](https://github.com/ClickHouse/ClickHouse/pull/13203) ([Andrew Onyshchuk](https://github.com/oandrew)).
* Fix wrong error for long queries. It was possible to get a syntax error other than `Max query size exceeded` for a correct query. [#13928](https://github.com/ClickHouse/ClickHouse/pull/13928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix data race in `lgamma` function. This race was caught only in `tsan`; no side effects really happened. [#13842](https://github.com/ClickHouse/ClickHouse/pull/13842) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix 'Week' interval formatting for ATTACH/ALTER/CREATE QUOTA statements. [#13417](https://github.com/ClickHouse/ClickHouse/pull/13417) ([vladimir-golovchenko](https://github.com/vladimir-golovchenko)).
* Now broken parts are also reported when encountered in compact part processing. Compact parts are an experimental feature. [#13282](https://github.com/ClickHouse/ClickHouse/pull/13282) ([Amos Bird](https://github.com/amosbird)).
* Fix assert in `geohashesInBox`. This fixes [#12554](https://github.com/ClickHouse/ClickHouse/issues/12554). [#13229](https://github.com/ClickHouse/ClickHouse/pull/13229) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix assert in `parseDateTimeBestEffort`. This fixes [#12649](https://github.com/ClickHouse/ClickHouse/issues/12649). [#13227](https://github.com/ClickHouse/ClickHouse/pull/13227) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Minor optimization in Processors/PipelineExecutor: breaking out of a loop because it makes sense to do so. [#13058](https://github.com/ClickHouse/ClickHouse/pull/13058) ([Mark Papadakis](https://github.com/markpapadakis)).
* Support TRUNCATE table without TABLE keyword. [#12653](https://github.com/ClickHouse/ClickHouse/pull/12653) ([Winter Zhang](https://github.com/zhang2014)).
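
That is (table name hypothetical):

```sql
TRUNCATE TABLE t;  -- existing syntax
TRUNCATE t;        -- now also accepted
```
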
* Fix `EXPLAIN` query format being overwritten by the default format, issue [#12432](https://github.com/ClickHouse/ClickHouse/issues/12432). [#12541](https://github.com/ClickHouse/ClickHouse/pull/12541) ([BohuTANG](https://github.com/BohuTANG)).
* Allow to set JOIN kind and type in a more standard way: `LEFT SEMI JOIN` instead of `SEMI LEFT JOIN`. For now both are correct. [#12520](https://github.com/ClickHouse/ClickHouse/pull/12520) ([Artem Zuikov](https://github.com/4ertus2)).
* Change the default value of `multiple_joins_rewriter_version` to 2. It enables the new multiple-joins rewriter that is aware of column names. [#12469](https://github.com/ClickHouse/ClickHouse/pull/12469) ([Artem Zuikov](https://github.com/4ertus2)).
* Add several metrics for requests to S3 storages. [#12464](https://github.com/ClickHouse/ClickHouse/pull/12464) ([ianton-ru](https://github.com/ianton-ru)).
* Use correct default secure port for clickhouse-benchmark with `--secure` argument. This fixes [#11044](https://github.com/ClickHouse/ClickHouse/issues/11044). [#12440](https://github.com/ClickHouse/ClickHouse/pull/12440) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Rollback insertion errors in `Log`, `TinyLog`, `StripeLog` engines. In previous versions an insertion error led to an inconsistent table state (this worked as documented, and it is normal for these table engines). This fixes [#12402](https://github.com/ClickHouse/ClickHouse/issues/12402). [#12426](https://github.com/ClickHouse/ClickHouse/pull/12426) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Implement `RENAME DATABASE` and `RENAME DICTIONARY` for `Atomic` database engine - Add implicit `{uuid}` macro, which can be used in ZooKeeper path for `ReplicatedMergeTree`. It works with `CREATE ... ON CLUSTER ...` queries. Set `show_table_uuid_in_table_create_query_if_not_nil` to `true` to use it. - Make `ReplicatedMergeTree` engine arguments optional, `/clickhouse/tables/{uuid}/{shard}/` and `{replica}` are used by default. Closes [#12135](https://github.com/ClickHouse/ClickHouse/issues/12135). - Minor fixes. - These changes break backward compatibility of `Atomic` database engine. Previously created `Atomic` databases must be manually converted to new format. Atomic database is an experimental feature. [#12343](https://github.com/ClickHouse/ClickHouse/pull/12343) ([tavplubix](https://github.com/tavplubix)).
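
A hedged sketch of the `{uuid}` macro (cluster, database, and table names are hypothetical, and the `Atomic` engine is experimental):

```sql
-- Inside an Atomic database, {uuid} expands to the table's UUID in the ZooKeeper path.
CREATE TABLE db.t ON CLUSTER my_cluster (d Date, x UInt32)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}/', '{replica}')
PARTITION BY d
ORDER BY x;
```
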
* Separated `AWSAuthV4Signer` into different logger, removed excessive `AWSClient: AWSClient` from log messages. [#12320](https://github.com/ClickHouse/ClickHouse/pull/12320) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Better exception message in disk access storage. [#12625](https://github.com/ClickHouse/ClickHouse/pull/12625) ([alesapin](https://github.com/alesapin)).
* Better exception for function `in` with invalid number of arguments. [#12529](https://github.com/ClickHouse/ClickHouse/pull/12529) ([Anton Popov](https://github.com/CurtizJ)).
* Fix error message about adaptive granularity. [#12624](https://github.com/ClickHouse/ClickHouse/pull/12624) ([alesapin](https://github.com/alesapin)).
* Fix SETTINGS parse after FORMAT. [#12480](https://github.com/ClickHouse/ClickHouse/pull/12480) ([Azat Khuzhin](https://github.com/azat)).
* If a MergeTree table did not contain ORDER BY or PARTITION BY, it was possible to request an ALTER to CLEAR all the columns, and the ALTER would get stuck. Fixed [#7941](https://github.com/ClickHouse/ClickHouse/issues/7941). [#12382](https://github.com/ClickHouse/ClickHouse/pull/12382) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Avoid re-loading completion from the history file after each query (to avoid history overlaps with other client sessions). [#13086](https://github.com/ClickHouse/ClickHouse/pull/13086) ([Azat Khuzhin](https://github.com/azat)).

#### Performance Improvement

* Lower memory usage for some operations up to 2 times. [#12424](https://github.com/ClickHouse/ClickHouse/pull/12424) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Optimize PK lookup for queries that match exact PK range. [#12277](https://github.com/ClickHouse/ClickHouse/pull/12277) ([Ivan Babrou](https://github.com/bobrik)).
* Slightly optimize very short queries with `LowCardinality`. [#14129](https://github.com/ClickHouse/ClickHouse/pull/14129) ([Anton Popov](https://github.com/CurtizJ)).
* Slightly improve performance of aggregation by UInt8/UInt16 keys. [#13091](https://github.com/ClickHouse/ClickHouse/pull/13091) and [#13055](https://github.com/ClickHouse/ClickHouse/pull/13055) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Push down `LIMIT` step for query plan (inside subqueries). [#13016](https://github.com/ClickHouse/ClickHouse/pull/13016) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Parallel primary key lookup and skipping index stages on parts, as described in [#11564](https://github.com/ClickHouse/ClickHouse/issues/11564). [#12589](https://github.com/ClickHouse/ClickHouse/pull/12589) ([Ivan Babrou](https://github.com/bobrik)).
* Convert String-type arguments of the functions `if` and `transform` into Enum if `optimize_if_transform_strings_to_enum = 1` is set. [#12515](https://github.com/ClickHouse/ClickHouse/pull/12515) ([Artem Zuikov](https://github.com/4ertus2)).
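
A sketch (column values arbitrary):

```sql
SET optimize_if_transform_strings_to_enum = 1;
-- The String result column can now be computed as an Enum internally.
SELECT transform(number, [2, 4], ['hello', 'world'], 'goodbye') FROM numbers(5);
```
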
* Replace monotonic functions with their argument in `ORDER BY` if `optimize_monotonous_functions_in_order_by = 1` is set. [#12467](https://github.com/ClickHouse/ClickHouse/pull/12467) ([Artem Zuikov](https://github.com/4ertus2)).
* Add an ORDER BY optimization that rewrites `ORDER BY x, f(x)` as `ORDER BY x` if `optimize_redundant_functions_in_order_by = 1` is set. A sketch of both rewrites follows this entry. [#12404](https://github.com/ClickHouse/ClickHouse/pull/12404) ([Artem Zuikov](https://github.com/4ertus2)).
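
A sketch of both rewrites (table and column names hypothetical):

```sql
SET optimize_monotonous_functions_in_order_by = 1;
SET optimize_redundant_functions_in_order_by = 1;

SELECT * FROM t ORDER BY exp(x);     -- can be executed as ORDER BY x
SELECT * FROM t ORDER BY x, exp(x);  -- can be executed as ORDER BY x
```
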
* Allow predicate pushdown when a subquery contains a `WITH` clause. This fixes [#12293](https://github.com/ClickHouse/ClickHouse/issues/12293). [#12663](https://github.com/ClickHouse/ClickHouse/pull/12663) ([Winter Zhang](https://github.com/zhang2014)).
* Improve performance of reading from compact parts. Compact parts are an experimental feature. [#12492](https://github.com/ClickHouse/ClickHouse/pull/12492) ([Anton Popov](https://github.com/CurtizJ)).
* Attempt to implement streaming optimization in `DiskS3`. DiskS3 is an experimental feature. [#12434](https://github.com/ClickHouse/ClickHouse/pull/12434) ([Vladimir Chebotarev](https://github.com/excitoon)).

#### Build/Testing/Packaging Improvement

* Use `shellcheck` for sh tests linting. [#13200](https://github.com/ClickHouse/ClickHouse/pull/13200) [#13207](https://github.com/ClickHouse/ClickHouse/pull/13207) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add script which sets labels for pull requests in the GitHub hook. [#13183](https://github.com/ClickHouse/ClickHouse/pull/13183) ([alesapin](https://github.com/alesapin)).
* Remove some of the recursive submodules. See [#13378](https://github.com/ClickHouse/ClickHouse/issues/13378). [#13379](https://github.com/ClickHouse/ClickHouse/pull/13379) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Ensure that all the submodules are from proper URLs. Continuation of [#13379](https://github.com/ClickHouse/ClickHouse/issues/13379). This fixes [#13378](https://github.com/ClickHouse/ClickHouse/issues/13378). [#13397](https://github.com/ClickHouse/ClickHouse/pull/13397) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added support for user-declared settings, which can be accessed from inside queries. This is needed when ClickHouse engine is used as a component of another system. [#13013](https://github.com/ClickHouse/ClickHouse/pull/13013) ([Vitaly Baranov](https://github.com/vitlibar)).
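
A minimal sketch, assuming the server config declares a matching custom settings prefix (e.g. `custom_` in `custom_settings_prefixes`) and that the `getSetting` function is available:

```sql
SET custom_a = 123;
SELECT getSetting('custom_a');  -- returns 123
```
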
* Added testing for RBAC functionality of INSERT privilege in TestFlows. Expanded tables on which SELECT is being tested. Added Requirements to match new table engine tests. [#13340](https://github.com/ClickHouse/ClickHouse/pull/13340) ([MyroTk](https://github.com/MyroTk)).
* Fix timeout error during server restart in the stress test. [#13321](https://github.com/ClickHouse/ClickHouse/pull/13321) ([alesapin](https://github.com/alesapin)).
* Now the fast test will wait for the server with retries. [#13284](https://github.com/ClickHouse/ClickHouse/pull/13284) ([alesapin](https://github.com/alesapin)).
* Function `materialize()` (the function for ClickHouse testing) will work for NULL as expected - by transforming it to non-constant column. [#13212](https://github.com/ClickHouse/ClickHouse/pull/13212) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix libunwind build in AArch64. This fixes [#13204](https://github.com/ClickHouse/ClickHouse/issues/13204). [#13208](https://github.com/ClickHouse/ClickHouse/pull/13208) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Even more retries in zkutil gtest to prevent test flakiness. [#13165](https://github.com/ClickHouse/ClickHouse/pull/13165) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Small fixes to the RBAC TestFlows. [#13152](https://github.com/ClickHouse/ClickHouse/pull/13152) ([vzakaznikov](https://github.com/vzakaznikov)).
* Fixing `00960_live_view_watch_events_live.py` test. [#13108](https://github.com/ClickHouse/ClickHouse/pull/13108) ([vzakaznikov](https://github.com/vzakaznikov)).
* Improve cache purge in documentation deploy script. [#13107](https://github.com/ClickHouse/ClickHouse/pull/13107) ([alesapin](https://github.com/alesapin)).
* Rewrote some orphan tests to gtest. Removed useless includes from tests. [#13073](https://github.com/ClickHouse/ClickHouse/pull/13073) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added tests for RBAC functionality of `SELECT` privilege in TestFlows. [#13061](https://github.com/ClickHouse/ClickHouse/pull/13061) ([Ritaank Tiwari](https://github.com/ritaank)).
* Rerun some tests in fast test check. [#12992](https://github.com/ClickHouse/ClickHouse/pull/12992) ([alesapin](https://github.com/alesapin)).
* Fix MSan error in "rdkafka" library. This closes [#12990](https://github.com/ClickHouse/ClickHouse/issues/12990). Updated `rdkafka` to version 1.5 (master). [#12991](https://github.com/ClickHouse/ClickHouse/pull/12991) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix UBSan report in base64 if tests were run on server with AVX-512. This fixes [#12318](https://github.com/ClickHouse/ClickHouse/issues/12318). Author: @qoega. [#12441](https://github.com/ClickHouse/ClickHouse/pull/12441) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix UBSan report in HDFS library. This closes [#12330](https://github.com/ClickHouse/ClickHouse/issues/12330). [#12453](https://github.com/ClickHouse/ClickHouse/pull/12453) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Check that a backup made with an old version can be restored with a new version. This closes [#8979](https://github.com/ClickHouse/ClickHouse/issues/8979). [#12959](https://github.com/ClickHouse/ClickHouse/pull/12959) ([alesapin](https://github.com/alesapin)).
* Do not build the helper_container image inside integration tests. Build the docker container in CI and use the pre-built helper_container in integration tests. [#12953](https://github.com/ClickHouse/ClickHouse/pull/12953) ([Ilya Yatsishin](https://github.com/qoega)).
* Add a test for `ALTER TABLE CLEAR COLUMN` query for primary key columns. [#12951](https://github.com/ClickHouse/ClickHouse/pull/12951) ([alesapin](https://github.com/alesapin)).
* Increased timeouts in testflows tests. [#12949](https://github.com/ClickHouse/ClickHouse/pull/12949) ([vzakaznikov](https://github.com/vzakaznikov)).
* Fix build of test under Mac OS X. This closes [#12767](https://github.com/ClickHouse/ClickHouse/issues/12767). [#12772](https://github.com/ClickHouse/ClickHouse/pull/12772) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Connector-ODBC updated to mysql-connector-odbc-8.0.21. [#12739](https://github.com/ClickHouse/ClickHouse/pull/12739) ([Ilya Yatsishin](https://github.com/qoega)).
* Adding RBAC syntax tests in TestFlows. [#12642](https://github.com/ClickHouse/ClickHouse/pull/12642) ([vzakaznikov](https://github.com/vzakaznikov)).
* Improve performance of TestKeeper. This will speed up tests with heavy usage of Replicated tables. [#12505](https://github.com/ClickHouse/ClickHouse/pull/12505) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now we check that server is able to start after stress tests run. This fixes [#12473](https://github.com/ClickHouse/ClickHouse/issues/12473). [#12496](https://github.com/ClickHouse/ClickHouse/pull/12496) ([alesapin](https://github.com/alesapin)).
* Update fmtlib to master (7.0.1). [#12446](https://github.com/ClickHouse/ClickHouse/pull/12446) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add docker image for fast tests. [#12294](https://github.com/ClickHouse/ClickHouse/pull/12294) ([alesapin](https://github.com/alesapin)).
* Rework configuration paths for integration tests. [#12285](https://github.com/ClickHouse/ClickHouse/pull/12285) ([Ilya Yatsishin](https://github.com/qoega)).
* Add compiler option to control that stack frames are not too large. This will help to run the code in fibers with small stack size. [#11524](https://github.com/ClickHouse/ClickHouse/pull/11524) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update gitignore-files. [#13447](https://github.com/ClickHouse/ClickHouse/pull/13447) ([vladimir-golovchenko](https://github.com/vladimir-golovchenko)).

## ClickHouse release 20.6

### ClickHouse release v20.6.3.28-stable

@@ -17,5 +17,5 @@ ClickHouse is an open-source column-oriented database management system that all

## Upcoming Events

* [ClickHouse at ByteDance (in Chinese)](https://mp.weixin.qq.com/s/Em-HjPylO8D7WPui4RREAQ) on August 28, 2020.
* [ClickHouse Data Integration Virtual Meetup](https://www.eventbrite.com/e/clickhouse-september-virtual-meetup-data-integration-tickets-117421895049) on September 10, 2020.
* [ClickHouse talk at Ya.Subbotnik (in Russian)](https://ya.cc/t/cIBI-3yECj5JF) on September 12, 2020.

@@ -38,18 +38,18 @@ namespace common
     }
 
     template <>
-    inline bool addOverflow(bInt256 x, bInt256 y, bInt256 & res)
+    inline bool addOverflow(wInt256 x, wInt256 y, wInt256 & res)
     {
         res = x + y;
-        return (y > 0 && x > std::numeric_limits<bInt256>::max() - y) ||
-            (y < 0 && x < std::numeric_limits<bInt256>::min() - y);
+        return (y > 0 && x > std::numeric_limits<wInt256>::max() - y) ||
+            (y < 0 && x < std::numeric_limits<wInt256>::min() - y);
     }
 
     template <>
-    inline bool addOverflow(bUInt256 x, bUInt256 y, bUInt256 & res)
+    inline bool addOverflow(wUInt256 x, wUInt256 y, wUInt256 & res)
     {
         res = x + y;
-        return x > std::numeric_limits<bUInt256>::max() - y;
+        return x > std::numeric_limits<wUInt256>::max() - y;
     }
 
     template <typename T>
@@ -86,15 +86,15 @@ namespace common
     }
 
     template <>
-    inline bool subOverflow(bInt256 x, bInt256 y, bInt256 & res)
+    inline bool subOverflow(wInt256 x, wInt256 y, wInt256 & res)
     {
         res = x - y;
-        return (y < 0 && x > std::numeric_limits<bInt256>::max() + y) ||
-            (y > 0 && x < std::numeric_limits<bInt256>::min() + y);
+        return (y < 0 && x > std::numeric_limits<wInt256>::max() + y) ||
+            (y > 0 && x < std::numeric_limits<wInt256>::min() + y);
     }
 
     template <>
-    inline bool subOverflow(bUInt256 x, bUInt256 y, bUInt256 & res)
+    inline bool subOverflow(wUInt256 x, wUInt256 y, wUInt256 & res)
     {
         res = x - y;
         return x < y;
@@ -137,19 +137,19 @@ namespace common
     }
 
     template <>
-    inline bool mulOverflow(bInt256 x, bInt256 y, bInt256 & res)
+    inline bool mulOverflow(wInt256 x, wInt256 y, wInt256 & res)
    {
         res = x * y;
         if (!x || !y)
             return false;
 
-        bInt256 a = (x > 0) ? x : -x;
-        bInt256 b = (y > 0) ? y : -y;
+        wInt256 a = (x > 0) ? x : -x;
+        wInt256 b = (y > 0) ? y : -y;
         return (a * b) / b != a;
     }
 
     template <>
-    inline bool mulOverflow(bUInt256 x, bUInt256 y, bUInt256 & res)
+    inline bool mulOverflow(wUInt256 x, wUInt256 y, wUInt256 & res)
     {
         res = x * y;
         if (!x || !y)
@@ -6,7 +6,7 @@
 #include <string>
 #include <type_traits>
 
-#include <boost/multiprecision/cpp_int.hpp>
+#include <common/wide_integer.h>
 
 using Int8 = int8_t;
 using Int16 = int16_t;
@@ -25,12 +25,11 @@ using UInt64 = uint64_t;
 
 using Int128 = __int128;
 
-/// We have to use 127 and 255 bit integers to safe a bit for a sign serialization
-//using bInt256 = boost::multiprecision::int256_t;
-using bInt256 = boost::multiprecision::number<boost::multiprecision::cpp_int_backend<
-    255, 255, boost::multiprecision::signed_magnitude, boost::multiprecision::unchecked, void> >;
-using bUInt256 = boost::multiprecision::uint256_t;
+using wInt256 = std::wide_integer<256, signed>;
+using wUInt256 = std::wide_integer<256, unsigned>;
+
+static_assert(sizeof(wInt256) == 32);
+static_assert(sizeof(wUInt256) == 32);
 
 using String = std::string;
@@ -44,7 +43,7 @@ struct is_signed
 };
 
 template <> struct is_signed<Int128> { static constexpr bool value = true; };
-template <> struct is_signed<bInt256> { static constexpr bool value = true; };
+template <> struct is_signed<wInt256> { static constexpr bool value = true; };
 
 template <typename T>
 inline constexpr bool is_signed_v = is_signed<T>::value;
@@ -55,7 +54,7 @@ struct is_unsigned
     static constexpr bool value = std::is_unsigned_v<T>;
 };
 
-template <> struct is_unsigned<bUInt256> { static constexpr bool value = true; };
+template <> struct is_unsigned<wUInt256> { static constexpr bool value = true; };
 
 template <typename T>
 inline constexpr bool is_unsigned_v = is_unsigned<T>::value;
@@ -69,8 +68,8 @@ struct is_integer
 };
 
 template <> struct is_integer<Int128> { static constexpr bool value = true; };
-template <> struct is_integer<bInt256> { static constexpr bool value = true; };
-template <> struct is_integer<bUInt256> { static constexpr bool value = true; };
+template <> struct is_integer<wInt256> { static constexpr bool value = true; };
+template <> struct is_integer<wUInt256> { static constexpr bool value = true; };
 
 template <typename T>
 inline constexpr bool is_integer_v = is_integer<T>::value;
@@ -93,9 +92,9 @@ struct make_unsigned
     typedef std::make_unsigned_t<T> type;
 };
 
-template <> struct make_unsigned<__int128> { using type = unsigned __int128; };
-template <> struct make_unsigned<bInt256> { using type = bUInt256; };
-template <> struct make_unsigned<bUInt256> { using type = bUInt256; };
+template <> struct make_unsigned<Int128> { using type = unsigned __int128; };
+template <> struct make_unsigned<wInt256> { using type = wUInt256; };
+template <> struct make_unsigned<wUInt256> { using type = wUInt256; };
 
 template <typename T> using make_unsigned_t = typename make_unsigned<T>::type;
@@ -105,8 +104,8 @@ struct make_signed
     typedef std::make_signed_t<T> type;
 };
 
-template <> struct make_signed<bInt256> { typedef bInt256 type; };
-template <> struct make_signed<bUInt256> { typedef bInt256 type; };
+template <> struct make_signed<wInt256> { using type = wInt256; };
+template <> struct make_signed<wUInt256> { using type = wInt256; };
 
 template <typename T> using make_signed_t = typename make_signed<T>::type;
@@ -116,8 +115,20 @@ struct is_big_int
     static constexpr bool value = false;
 };
 
-template <> struct is_big_int<bUInt256> { static constexpr bool value = true; };
-template <> struct is_big_int<bInt256> { static constexpr bool value = true; };
+template <> struct is_big_int<wInt256> { static constexpr bool value = true; };
+template <> struct is_big_int<wUInt256> { static constexpr bool value = true; };
 
 template <typename T>
 inline constexpr bool is_big_int_v = is_big_int<T>::value;
 
+template <typename T>
+inline std::string bigintToString(const T & x)
+{
+    return to_string(x);
+}
+
+template <typename To, typename From>
+inline To bigint_cast(const From & x [[maybe_unused]])
+{
+    return static_cast<To>(x);
+}

249  base/common/wide_integer.h (new file)

@@ -0,0 +1,249 @@
#pragma once

///////////////////////////////////////////////////////////////
// Distributed under the Boost Software License, Version 1.0.
// (See at http://www.boost.org/LICENSE_1_0.txt)
///////////////////////////////////////////////////////////////

/* Divide and multiply
 *
 *
 * Copyright (c) 2008
 * Evan Teran
 *
 * Permission to use, copy, modify, and distribute this software and its
 * documentation for any purpose and without fee is hereby granted, provided
 * that the above copyright notice appears in all copies and that both the
 * copyright notice and this permission notice appear in supporting
 * documentation, and that the same name not be used in advertising or
 * publicity pertaining to distribution of the software without specific,
 * written prior permission. We make no representations about the
 * suitability this software for any purpose. It is provided "as is"
 * without express or implied warranty.
 */

#include <climits> // CHAR_BIT
#include <cmath>
#include <cstdint>
#include <limits>
#include <type_traits>

namespace std
{
template <size_t Bits, typename Signed>
class wide_integer;

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
struct common_type<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>>;

template <size_t Bits, typename Signed, typename Arithmetic>
struct common_type<wide_integer<Bits, Signed>, Arithmetic>;

template <typename Arithmetic, size_t Bits, typename Signed>
struct common_type<Arithmetic, wide_integer<Bits, Signed>>;

template <size_t Bits, typename Signed>
class wide_integer
{
public:
    using base_type = uint8_t;
    using signed_base_type = int8_t;

    // ctors
    wide_integer() = default;

    template <typename T>
    constexpr wide_integer(T rhs) noexcept;
    template <typename T>
    constexpr wide_integer(std::initializer_list<T> il) noexcept;

    // assignment
    template <size_t Bits2, typename Signed2>
    constexpr wide_integer<Bits, Signed> & operator=(const wide_integer<Bits2, Signed2> & rhs) noexcept;

    template <typename Arithmetic>
    constexpr wide_integer<Bits, Signed> & operator=(Arithmetic rhs) noexcept;

    template <typename Arithmetic>
    constexpr wide_integer<Bits, Signed> & operator*=(const Arithmetic & rhs);

    template <typename Arithmetic>
    constexpr wide_integer<Bits, Signed> & operator/=(const Arithmetic & rhs);

    template <typename Arithmetic>
    constexpr wide_integer<Bits, Signed> & operator+=(const Arithmetic & rhs) noexcept(is_same<Signed, unsigned>::value);

    template <typename Arithmetic>
    constexpr wide_integer<Bits, Signed> & operator-=(const Arithmetic & rhs) noexcept(is_same<Signed, unsigned>::value);

    template <typename Integral>
    constexpr wide_integer<Bits, Signed> & operator%=(const Integral & rhs);

    template <typename Integral>
    constexpr wide_integer<Bits, Signed> & operator&=(const Integral & rhs) noexcept;

    template <typename Integral>
    constexpr wide_integer<Bits, Signed> & operator|=(const Integral & rhs) noexcept;

    template <typename Integral>
    constexpr wide_integer<Bits, Signed> & operator^=(const Integral & rhs) noexcept;

    constexpr wide_integer<Bits, Signed> & operator<<=(int n);
    constexpr wide_integer<Bits, Signed> & operator>>=(int n) noexcept;

    constexpr wide_integer<Bits, Signed> & operator++() noexcept(is_same<Signed, unsigned>::value);
    constexpr wide_integer<Bits, Signed> operator++(int) noexcept(is_same<Signed, unsigned>::value);
    constexpr wide_integer<Bits, Signed> & operator--() noexcept(is_same<Signed, unsigned>::value);
    constexpr wide_integer<Bits, Signed> operator--(int) noexcept(is_same<Signed, unsigned>::value);

    // observers

    constexpr explicit operator bool() const noexcept;

    template <class T>
    using __integral_not_wide_integer_class = typename std::enable_if<std::is_arithmetic<T>::value, T>::type;

    template <class T, class = __integral_not_wide_integer_class<T>>
    constexpr operator T() const noexcept;

    constexpr operator long double() const noexcept;
    constexpr operator double() const noexcept;
    constexpr operator float() const noexcept;

    struct _impl;

private:
    template <size_t Bits2, typename Signed2>
    friend class wide_integer;

    friend class numeric_limits<wide_integer<Bits, signed>>;
    friend class numeric_limits<wide_integer<Bits, unsigned>>;

    base_type m_arr[_impl::arr_size];
};

template <typename T>
static constexpr bool ArithmeticConcept() noexcept;
template <class T1, class T2>
using __only_arithmetic = typename std::enable_if<ArithmeticConcept<T1>() && ArithmeticConcept<T2>()>::type;

template <typename T>
static constexpr bool IntegralConcept() noexcept;
template <class T, class T2>
using __only_integer = typename std::enable_if<IntegralConcept<T>() && IntegralConcept<T2>()>::type;

// Unary operators
template <size_t Bits, typename Signed>
constexpr wide_integer<Bits, Signed> operator~(const wide_integer<Bits, Signed> & lhs) noexcept;

template <size_t Bits, typename Signed>
constexpr wide_integer<Bits, Signed> operator-(const wide_integer<Bits, Signed> & lhs) noexcept(is_same<Signed, unsigned>::value);

template <size_t Bits, typename Signed>
constexpr wide_integer<Bits, Signed> operator+(const wide_integer<Bits, Signed> & lhs) noexcept(is_same<Signed, unsigned>::value);

// Binary operators
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
operator*(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator*(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
operator/(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator/(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
operator+(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator+(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
operator-(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator-(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
operator%(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
std::common_type_t<Integral, Integral2> constexpr operator%(const Integral & rhs, const Integral2 & lhs);
|
||||
|
||||
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
|
||||
std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
|
||||
operator&(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
|
||||
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
|
||||
std::common_type_t<Integral, Integral2> constexpr operator&(const Integral & rhs, const Integral2 & lhs);
|
||||
|
||||
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
|
||||
std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
|
||||
operator|(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
|
||||
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
|
||||
std::common_type_t<Integral, Integral2> constexpr operator|(const Integral & rhs, const Integral2 & lhs);
|
||||
|
||||
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
|
||||
std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
|
||||
operator^(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
|
||||
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
|
||||
std::common_type_t<Integral, Integral2> constexpr operator^(const Integral & rhs, const Integral2 & lhs);
|
||||
|
||||
// TODO: Integral
|
||||
template <size_t Bits, typename Signed>
|
||||
constexpr wide_integer<Bits, Signed> operator<<(const wide_integer<Bits, Signed> & lhs, int n) noexcept;
|
||||
template <size_t Bits, typename Signed>
|
||||
constexpr wide_integer<Bits, Signed> operator>>(const wide_integer<Bits, Signed> & lhs, int n) noexcept;
|
||||
|
||||
template <size_t Bits, typename Signed, typename Int, typename = std::enable_if_t<!std::is_same_v<Int, int>>>
|
||||
constexpr wide_integer<Bits, Signed> operator<<(const wide_integer<Bits, Signed> & lhs, Int n) noexcept
|
||||
{
|
||||
return lhs << int(n);
|
||||
}
|
||||
template <size_t Bits, typename Signed, typename Int, typename = std::enable_if_t<!std::is_same_v<Int, int>>>
|
||||
constexpr wide_integer<Bits, Signed> operator>>(const wide_integer<Bits, Signed> & lhs, Int n) noexcept
|
||||
{
|
||||
return lhs >> int(n);
|
||||
}
|
||||
|
||||
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
|
||||
constexpr bool operator<(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
|
||||
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
|
||||
constexpr bool operator<(const Arithmetic & rhs, const Arithmetic2 & lhs);
|
||||
|
||||
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
|
||||
constexpr bool operator>(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
|
||||
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
|
||||
constexpr bool operator>(const Arithmetic & rhs, const Arithmetic2 & lhs);
|
||||
|
||||
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
|
||||
constexpr bool operator<=(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
|
||||
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
|
||||
constexpr bool operator<=(const Arithmetic & rhs, const Arithmetic2 & lhs);
|
||||
|
||||
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
|
||||
constexpr bool operator>=(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
|
||||
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
|
||||
constexpr bool operator>=(const Arithmetic & rhs, const Arithmetic2 & lhs);
|
||||
|
||||
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
|
||||
constexpr bool operator==(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
|
||||
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
|
||||
constexpr bool operator==(const Arithmetic & rhs, const Arithmetic2 & lhs);
|
||||
|
||||
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
|
||||
constexpr bool operator!=(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
|
||||
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
|
||||
constexpr bool operator!=(const Arithmetic & rhs, const Arithmetic2 & lhs);
|
||||
|
||||
template <size_t Bits, typename Signed>
|
||||
std::string to_string(const wide_integer<Bits, Signed> & n);
|
||||
|
||||
template <size_t Bits, typename Signed>
|
||||
struct hash<wide_integer<Bits, Signed>>;
|
||||
|
||||
}
|
||||
|
||||
#include "wide_integer_impl.h"
|
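For orientation, a minimal usage sketch of the interface declared above. This is not part of the header; it assumes the template lives in namespace `std` (as the surrounding `common_type`/`numeric_limits`/`hash` specializations suggest) and that mixed wide/builtin arithmetic resolves via those `common_type` specializations. The `UInt256` alias is illustrative only.

```cpp
#include "wide_integer.h"
#include <iostream>

int main()
{
    // Illustrative alias; the header itself does not declare it.
    using UInt256 = std::wide_integer<256, unsigned>;

    UInt256 x = 1;   // implicit conversion via the templated ctor
    x <<= 200;       // operator<<= declared above
    UInt256 y = x + 42;  // wide/builtin mix, resolved through common_type
    std::cout << to_string(y) << '\n';  // to_string overload declared above
}
```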
base/common/wide_integer_impl.h (new file, 1301 lines). File diff suppressed because it is too large.
@@ -32,6 +32,8 @@ PEERDIR(
     contrib/restricted/cityhash-1.0.2
 )
 
+CFLAGS(-g0)
+
 SRCS(
     argsToConfig.cpp
     coverage.cpp
@@ -31,6 +31,8 @@ PEERDIR(
     contrib/restricted/cityhash-1.0.2
 )
 
+CFLAGS(-g0)
+
 SRCS(
     <? find . -name '*.cpp' | grep -v -F tests/ | grep -v -F Replxx | grep -v -F Readline | sed 's/^\.\// /' | sort ?>
 )
@@ -6,6 +6,8 @@ PEERDIR(
     clickhouse/src/Common
 )
 
+CFLAGS(-g0)
+
 SRCS(
     BaseDaemon.cpp
     GraphiteWriter.cpp
@@ -4,6 +4,8 @@ PEERDIR(
     clickhouse/src/Common
 )
 
+CFLAGS(-g0)
+
 SRCS(
     ExtendedLogChannel.cpp
     Loggers.cpp
@@ -116,7 +116,7 @@ void Connection::connect(const char* db,
         throw ConnectionFailed(errorMessage(driver.get()), mysql_errno(driver.get()));
 
     /// Enables auto-reconnect.
-    my_bool reconnect = true;
+    bool reconnect = true;
     if (mysql_options(driver.get(), MYSQL_OPT_RECONNECT, reinterpret_cast<const char *>(&reconnect)))
         throw ConnectionFailed(errorMessage(driver.get()), mysql_errno(driver.get()));
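The one-line change above matters because MySQL 8 removed the `my_bool` typedef from the public client headers, so a plain `bool` keeps this call compiling against both old and new libmysqlclient. A minimal sketch of the resulting call, isolated from the surrounding class:

```cpp
#include <mysql/mysql.h>

// Sketch: enable auto-reconnect portably. `my_bool` no longer exists in
// the MySQL 8 headers, so the flag is declared as a plain bool and passed
// by address, exactly as in the patched Connection::connect above.
void enable_auto_reconnect(MYSQL * driver)
{
    bool reconnect = true;
    mysql_options(driver, MYSQL_OPT_RECONNECT, reinterpret_cast<const char *>(&reconnect));
}
```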
@@ -1,5 +1,7 @@
 LIBRARY()
 
+CFLAGS(-g0)
+
 SRCS(
     readpassphrase.c
 )
@@ -2,6 +2,8 @@ LIBRARY()
 
 ADDINCL(GLOBAL clickhouse/base/widechar_width)
 
+CFLAGS(-g0)
+
 SRCS(
     widechar_width.cpp
 )
@@ -1,9 +1,9 @@
 # This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54438)
+SET(VERSION_REVISION 54440)
 SET(VERSION_MAJOR 20)
-SET(VERSION_MINOR 8)
+SET(VERSION_MINOR 10)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 5d60ab33a511efd149c7c3de77c0dd4b81e65b13)
-SET(VERSION_DESCRIBE v20.8.1.1-prestable)
-SET(VERSION_STRING 20.8.1.1)
+SET(VERSION_GITHASH 11a247d2f42010c1a17bf678c3e00a4bc89b23f8)
+SET(VERSION_DESCRIBE v20.10.1.1-prestable)
+SET(VERSION_STRING 20.10.1.1)
 # end of autochange
@@ -6,6 +6,11 @@ endif()
 
 if ((ENABLE_CCACHE OR NOT DEFINED ENABLE_CCACHE) AND NOT COMPILER_MATCHES_CCACHE)
     find_program (CCACHE_FOUND ccache)
+    if (CCACHE_FOUND)
+        set(ENABLE_CCACHE_BY_DEFAULT 1)
+    else()
+        set(ENABLE_CCACHE_BY_DEFAULT 0)
+    endif()
 endif()
 
 if (NOT CCACHE_FOUND AND NOT DEFINED ENABLE_CCACHE AND NOT COMPILER_MATCHES_CCACHE)
@@ -13,7 +18,7 @@ if (NOT CCACHE_FOUND AND NOT DEFINED ENABLE_CCACHE AND NOT COMPILER_MATCHES_CCAC
         "Setting it up will significantly reduce compilation time for 2nd and consequent builds")
 endif()
 
-option(ENABLE_CCACHE "Speedup re-compilations using ccache" ${CCACHE_FOUND})
+option(ENABLE_CCACHE "Speedup re-compilations using ccache" ${ENABLE_CCACHE_BY_DEFAULT})
 
 if (NOT ENABLE_CCACHE)
     return()
@@ -24,7 +29,7 @@ if (CCACHE_FOUND AND NOT COMPILER_MATCHES_CCACHE)
     string(REGEX REPLACE "ccache version ([0-9\\.]+).*" "\\1" CCACHE_VERSION ${CCACHE_VERSION})
 
     if (CCACHE_VERSION VERSION_GREATER "3.2.0" OR NOT CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
-        #message(STATUS "Using ${CCACHE_FOUND} ${CCACHE_VERSION}")
+        message(STATUS "Using ${CCACHE_FOUND} ${CCACHE_VERSION}")
         set_property (GLOBAL PROPERTY RULE_LAUNCH_COMPILE ${CCACHE_FOUND})
         set_property (GLOBAL PROPERTY RULE_LAUNCH_LINK ${CCACHE_FOUND})
     else ()
@@ -2,8 +2,8 @@ option (USE_INTERNAL_BOOST_LIBRARY "Use internal Boost library" ${NOT_UNBUNDLED}
 
 if (NOT USE_INTERNAL_BOOST_LIBRARY)
     # 1.70 like in contrib/boost
-    # 1.67 on CI
-    set(BOOST_VERSION 1.67)
+    # 1.71 on CI
+    set(BOOST_VERSION 1.71)
 
     find_package(Boost ${BOOST_VERSION} COMPONENTS
         system
@@ -74,12 +74,12 @@ target_link_libraries(capnpc PUBLIC capnp)
 
 # The library has substandard code
 if (COMPILER_GCC)
-    set (SUPPRESS_WARNINGS -Wno-non-virtual-dtor -Wno-sign-compare -Wno-strict-aliasing -Wno-maybe-uninitialized
-        -Wno-deprecated-declarations -Wno-class-memaccess)
+    set (SUPPRESS_WARNINGS -w)
 elseif (COMPILER_CLANG)
-    set (SUPPRESS_WARNINGS -Wno-non-virtual-dtor -Wno-sign-compare -Wno-strict-aliasing -Wno-deprecated-declarations)
+    set (SUPPRESS_WARNINGS -w)
+    set (CAPNP_PRIVATE_CXX_FLAGS -fno-char8_t)
 endif ()
 
-target_compile_options(kj PRIVATE ${SUPPRESS_WARNINGS})
-target_compile_options(capnp PRIVATE ${SUPPRESS_WARNINGS})
-target_compile_options(capnpc PRIVATE ${SUPPRESS_WARNINGS})
+target_compile_options(kj PRIVATE ${SUPPRESS_WARNINGS} ${CAPNP_PRIVATE_CXX_FLAGS})
+target_compile_options(capnp PRIVATE ${SUPPRESS_WARNINGS} ${CAPNP_PRIVATE_CXX_FLAGS})
+target_compile_options(capnpc PRIVATE ${SUPPRESS_WARNINGS} ${CAPNP_PRIVATE_CXX_FLAGS})
debian/changelog (vendored): 4 changed lines
@@ -1,5 +1,5 @@
-clickhouse (20.8.1.1) unstable; urgency=low
+clickhouse (20.10.1.1) unstable; urgency=low
 
   * Modified source code
 
--- clickhouse-release <clickhouse-release@yandex-team.ru>  Fri, 07 Aug 2020 21:45:46 +0300
+-- clickhouse-release <clickhouse-release@yandex-team.ru>  Tue, 08 Sep 2020 17:04:39 +0300
debian/clickhouse-server.init (vendored): 23 changed lines
@@ -67,13 +67,6 @@ if uname -mpi | grep -q 'x86_64'; then
 fi
 
 
-SUPPORTED_COMMANDS="{start|stop|status|restart|forcestop|forcerestart|reload|condstart|condstop|condrestart|condreload|initdb}"
-is_supported_command()
-{
-    echo "$SUPPORTED_COMMANDS" | grep -E "(\{|\|)$1(\||})" &> /dev/null
-}
-
-
 is_running()
 {
     pgrep --pidfile "$CLICKHOUSE_PIDFILE" $(echo "${PROGRAM}" | cut -c1-15) 1> /dev/null 2> /dev/null
@@ -283,13 +276,12 @@ use_cron()
     fi
     return 0
 }
 
+# returns false if cron disabled (with systemd)
 enable_cron()
 {
     use_cron && sed -i 's/^#*//' "$CLICKHOUSE_CRONFILE"
 }
 
+# returns false if cron disabled (with systemd)
 disable_cron()
 {
     use_cron && sed -i 's/^#*/#/' "$CLICKHOUSE_CRONFILE"
@@ -312,15 +304,14 @@ main()
     EXIT_STATUS=0
     case "$1" in
         start)
-            start && enable_cron
+            service_or_func start && enable_cron
             ;;
         stop)
+            # disable_cron returns false if cron disabled (with systemd) - not checking return status
             disable_cron
-            stop
+            service_or_func stop
             ;;
         restart)
-            restart && enable_cron
+            service_or_func restart && enable_cron
             ;;
         forcestop)
             disable_cron
@@ -330,7 +321,7 @@ main()
             forcerestart && enable_cron
             ;;
         reload)
-            restart
+            service_or_func restart
             ;;
         condstart)
             is_running || service_or_func start
@@ -354,7 +345,7 @@ main()
             disable_cron
             ;;
         *)
-            echo "Usage: $0 $SUPPORTED_COMMANDS"
+            echo "Usage: $0 {start|stop|status|restart|forcestop|forcerestart|reload|condstart|condstop|condrestart|condreload|initdb}"
             exit 2
             ;;
     esac
debian/rules (vendored): 2 changed lines
@@ -18,7 +18,7 @@ ifeq ($(CCACHE_PREFIX),distcc)
     THREADS_COUNT=$(shell distcc -j)
 endif
 ifeq ($(THREADS_COUNT),)
-    THREADS_COUNT=$(shell nproc || grep -c ^processor /proc/cpuinfo || sysctl -n hw.ncpu || echo 4)
+    THREADS_COUNT=$(shell echo $$(( $$(nproc || grep -c ^processor /proc/cpuinfo || sysctl -n hw.ncpu || echo 8) / 2 )) )
 endif
 DEB_BUILD_OPTIONS+=parallel=$(THREADS_COUNT)
@ -6,7 +6,7 @@ RUN apt-get update \
|
||||
&& apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
|
||||
--yes --no-install-recommends --verbose-versions \
|
||||
&& export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
|
||||
&& wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
|
||||
&& wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
|
||||
&& echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
|
||||
&& apt-key add /tmp/llvm-snapshot.gpg.key \
|
||||
&& export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
|
||||
|
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.8.1.*
+ARG version=20.10.1.*
 
 RUN apt-get update \
     && apt-get install --yes --no-install-recommends \
@@ -1,5 +1,5 @@
 # docker build -t yandex/clickhouse-binary-builder .
-FROM ubuntu:19.10
+FROM ubuntu:20.04
 
 ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=10
@@ -7,7 +7,7 @@ RUN apt-get update \
     && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
         --yes --no-install-recommends --verbose-versions \
     && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
-    && wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
+    && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
     && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
     && apt-key add /tmp/llvm-snapshot.gpg.key \
     && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
@@ -32,6 +32,8 @@ RUN apt-get update \
         curl \
         gcc-9 \
         g++-9 \
+        gcc-10 \
+        g++-10 \
         llvm-${LLVM_VERSION} \
         clang-${LLVM_VERSION} \
         lld-${LLVM_VERSION} \
@@ -55,7 +57,6 @@ RUN apt-get update \
         cmake \
         gdb \
         rename \
-        wget \
         build-essential \
         --yes --no-install-recommends
@@ -83,14 +84,14 @@ RUN git clone https://github.com/tpoechtrager/cctools-port.git \
     && rm -rf cctools-port
 
 # Download toolchain for Darwin
-RUN wget https://github.com/phracker/MacOSX-SDKs/releases/download/10.14-beta4/MacOSX10.14.sdk.tar.xz
+RUN wget -nv https://github.com/phracker/MacOSX-SDKs/releases/download/10.14-beta4/MacOSX10.14.sdk.tar.xz
 
 # Download toolchain for ARM
 # It contains all required headers and libraries. Note that it's named as "gcc" but actually we are using clang for cross compiling.
-RUN wget "https://developer.arm.com/-/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz?revision=2e88a73f-d233-4f96-b1f4-d8b36e9bb0b9&la=en" -O gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz
+RUN wget -nv "https://developer.arm.com/-/media/Files/downloads/gnu-a/8.3-2019.03/binrel/gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz?revision=2e88a73f-d233-4f96-b1f4-d8b36e9bb0b9&la=en" -O gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz
 
 # Download toolchain for FreeBSD 11.3
-RUN wget https://clickhouse-datasets.s3.yandex.net/toolchains/toolchains/freebsd-11.3-toolchain.tar.xz
+RUN wget -nv https://clickhouse-datasets.s3.yandex.net/toolchains/toolchains/freebsd-11.3-toolchain.tar.xz
 
 COPY build.sh /
 CMD ["/bin/bash", "/build.sh"]
@@ -7,7 +7,7 @@ RUN apt-get update \
     && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
         --yes --no-install-recommends --verbose-versions \
     && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
-    && wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
+    && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
     && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
     && apt-key add /tmp/llvm-snapshot.gpg.key \
     && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
@@ -34,7 +34,7 @@ RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
 ENV APACHE_PUBKEY_HASH="bba6987b63c63f710fd4ed476121c588bc3812e99659d27a855f8c4d312783ee66ad6adfce238765691b04d62fa3688f"
 
 RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
-    && wget -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
+    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
     && echo "${APACHE_PUBKEY_HASH} /tmp/arrow-keyring.deb" | sha384sum -c \
     && dpkg -i /tmp/arrow-keyring.deb
@@ -143,7 +143,7 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_typ
 
     if unbundled:
         # TODO: fix build with ENABLE_RDKAFKA
-        cmake_flags.append('-DUNBUNDLED=1 -DUSE_INTERNAL_RDKAFKA_LIBRARY=1') # too old version in ubuntu 19.10
+        cmake_flags.append('-DUNBUNDLED=1 -DUSE_INTERNAL_RDKAFKA_LIBRARY=1 -DENABLE_ARROW=0 -DENABLE_ORC=0 -DENABLE_PARQUET=0')
 
     if split_binary:
         cmake_flags.append('-DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1')
@@ -1,7 +1,7 @@
 FROM ubuntu:20.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.8.1.*
+ARG version=20.10.1.*
 ARG gosu_ver=1.10
 
 RUN apt-get update \
@@ -1,7 +1,7 @@
 FROM ubuntu:18.04
 
 ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.8.1.*
+ARG version=20.10.1.*
 
 RUN apt-get update && \
     apt-get install -y apt-transport-https dirmngr && \
@@ -7,7 +7,7 @@ RUN apt-get update \
     && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
         --yes --no-install-recommends --verbose-versions \
     && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
-    && wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
+    && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
     && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
     && apt-key add /tmp/llvm-snapshot.gpg.key \
     && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
@@ -15,7 +15,7 @@ RUN apt-get --allow-unauthenticated update -y \
         gpg-agent \
         git
 
-RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | sudo apt-key add -
+RUN wget -nv -O - https://apt.kitware.com/keys/kitware-archive-latest.asc | sudo apt-key add -
 RUN sudo apt-add-repository 'deb https://apt.kitware.com/ubuntu/ bionic main'
 RUN sudo echo "deb [trusted=yes] http://apt.llvm.org/bionic/ llvm-toolchain-bionic-8 main" >> /etc/apt/sources.list
@@ -7,7 +7,7 @@ RUN apt-get update \
     && apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \
         --yes --no-install-recommends --verbose-versions \
     && export LLVM_PUBKEY_HASH="bda960a8da687a275a2078d43c111d66b1c6a893a3275271beedf266c1ff4a0cdecb429c7a5cccf9f486ea7aa43fd27f" \
-    && wget -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
+    && wget -nv -O /tmp/llvm-snapshot.gpg.key https://apt.llvm.org/llvm-snapshot.gpg.key \
     && echo "${LLVM_PUBKEY_HASH} /tmp/llvm-snapshot.gpg.key" | sha384sum -c \
     && apt-key add /tmp/llvm-snapshot.gpg.key \
     && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
@@ -61,7 +61,6 @@ RUN apt-get update \
         software-properties-common \
         tzdata \
         unixodbc \
-        wget \
         --yes --no-install-recommends
 
 # This symlink required by gcc to find lld compiler
@@ -70,7 +69,7 @@ RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld
 ARG odbc_driver_url="https://github.com/ClickHouse/clickhouse-odbc/releases/download/v1.1.4.20200302/clickhouse-odbc-1.1.4-Linux.tar.gz"
 
 RUN mkdir -p /tmp/clickhouse-odbc-tmp \
-    && wget --quiet -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
+    && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
     && cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
     && odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \
     && odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
@@ -5,8 +5,7 @@ trap "exit" INT TERM
 trap 'kill $(jobs -pr) ||:' EXIT
 
 # This script is separated into two stages, cloning and everything else, so
-# that we can run the "everything else" stage from the cloned source (we don't
-# do this yet).
+# that we can run the "everything else" stage from the cloned source.
 stage=${stage:-}
 
 # A variable to pass additional flags to CMake.
@@ -16,7 +15,6 @@ stage=${stage:-}
 # empty parameter.
 read -ra FASTTEST_CMAKE_FLAGS <<< "${FASTTEST_CMAKE_FLAGS:-}"
 
-ls -la
 
 function kill_clickhouse
 {
@@ -60,6 +58,7 @@ function clone_root
     git clone https://github.com/ClickHouse/ClickHouse.git | ts '%Y-%m-%d %H:%M:%S' | tee /test_output/clone_log.txt
     cd ClickHouse
+    CLICKHOUSE_DIR=$(pwd)
+    export CLICKHOUSE_DIR
 
     if [ "$PULL_REQUEST_NUMBER" != "0" ]; then
@@ -211,6 +210,9 @@ TESTS_TO_SKIP=(
     # to make some progress.
     00646_url_engine
     00974_query_profiler
+
+    # Look at DistributedFilesToInsert, so cannot run in parallel.
+    01460_DistributedFilesToInsert
 )
 
 clickhouse-test -j 4 --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee /test_output/test_log.txt
@@ -248,12 +250,20 @@ fi
 
 case "$stage" in
 "")
+    ls -la
     ;&
 "clone_root")
     clone_root
-    # TODO bootstrap into the cloned script here. Add this on Sep 1 2020 or
-    # later, so that most of the old branches are updated with this code.
+
+    # Pass control to the script from cloned sources, unless asked otherwise.
+    if ! [ -v FASTTEST_LOCAL_SCRIPT ]
+    then
+        stage=run "$CLICKHOUSE_DIR/docker/test/fasttest/run.sh"
+        exit $?
+    fi
    ;&
 "run")
    run
    ;&
@@ -2,6 +2,15 @@
 <profiles>
     <default>
         <max_execution_time>10</max_execution_time>
+        <!--
+            Don't let the fuzzer change this setting (I've actually seen it
+            do this before).
+        -->
+        <constraints>
+            <max_execution_time>
+                <max>10</max>
+            </max_execution_time>
+        </constraints>
     </default>
 </profiles>
 </yandex>
@@ -91,7 +91,7 @@ function fuzz
 
     ./clickhouse-client --query "select elapsed, query from system.processes" ||:
     killall clickhouse-server ||:
-    for x in {1..10}
+    for _ in {1..10}
     do
         if ! pgrep -f clickhouse-server
         then
@@ -172,8 +172,59 @@ case "$stage" in
         echo "failure" > status.txt
         echo "Fuzzer failed ($fuzzer_exit_code). See the logs" > description.txt
     fi
     ;&
+"report")
+    cat > report.html <<EOF ||:
+<!DOCTYPE html>
+<html lang="en">
+<link rel="preload" as="font" href="https://yastatic.net/adv-www/_/sUYVCPUAQE7ExrvMS7FoISoO83s.woff2" type="font/woff2" crossorigin="anonymous"/>
+<style>
+@font-face {
+    font-family:'Yandex Sans Display Web';
+    src:url(https://yastatic.net/adv-www/_/H63jN0veW07XQUIA2317lr9UIm8.eot);
+    src:url(https://yastatic.net/adv-www/_/H63jN0veW07XQUIA2317lr9UIm8.eot?#iefix) format('embedded-opentype'),
+        url(https://yastatic.net/adv-www/_/sUYVCPUAQE7ExrvMS7FoISoO83s.woff2) format('woff2'),
+        url(https://yastatic.net/adv-www/_/v2Sve_obH3rKm6rKrtSQpf-eB7U.woff) format('woff'),
+        url(https://yastatic.net/adv-www/_/PzD8hWLMunow5i3RfJ6WQJAL7aI.ttf) format('truetype'),
+        url(https://yastatic.net/adv-www/_/lF_KG5g4tpQNlYIgA0e77fBSZ5s.svg#YandexSansDisplayWeb-Regular) format('svg');
+    font-weight:400;
+    font-style:normal;
+    font-stretch:normal
+}
+
+body { font-family: "Yandex Sans Display Web", Arial, sans-serif; background: #EEE; }
+h1 { margin-left: 10px; }
+th, td { border: 0; padding: 5px 10px 5px 10px; text-align: left; vertical-align: top; line-height: 1.5; background-color: #FFF;
+td { white-space: pre; font-family: Monospace, Courier New; }
+border: 0; box-shadow: 0 0 0 1px rgba(0, 0, 0, 0.05), 0 8px 25px -5px rgba(0, 0, 0, 0.1); }
+a { color: #06F; text-decoration: none; }
+a:hover, a:active { color: #F40; text-decoration: underline; }
+table { border: 0; }
+.main { margin-left: 10%; }
+p.links a { padding: 5px; margin: 3px; background: #FFF; line-height: 2; white-space: nowrap; box-shadow: 0 0 0 1px rgba(0, 0, 0, 0.05), 0 8px 25px -5px rgba(0, 0, 0, 0.1); }
+th { cursor: pointer; }
+
+</style>
+<title>AST Fuzzer for PR #${PR_TO_TEST} @ ${SHA_TO_TEST}</title>
+</head>
+<body>
+<div class="main">
+
+<h1>AST Fuzzer for PR #${PR_TO_TEST} @ ${SHA_TO_TEST}</h1>
+<p class="links">
+<a href="fuzzer.log">fuzzer.log</a>
+<a href="server.log">server.log</a>
+<a href="main.log">main.log</a>
+</p>
+<table>
+<tr><th>Test name</th><th>Test status</th><th>Description</th></tr>
+<tr><td>AST Fuzzer</td><td>$(cat status.txt)</td><td>$(cat description.txt)</td></tr>
+</table>
+</body>
+</html>
+EOF
+    ;&
 esac
 
 exit $task_exit_code
@@ -46,7 +46,7 @@ RUN set -eux; \
     \
     # this "case" statement is generated via "update.sh"
     \
-    if ! wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
+    if ! wget -nv -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
         echo >&2 "error: failed to download 'docker-${DOCKER_VERSION}' from '${DOCKER_CHANNEL}' for '${x86_64}'"; \
         exit 1; \
     fi; \
@@ -7,3 +7,4 @@ services:
             MYSQL_ROOT_PASSWORD: clickhouse
         ports:
             - 3308:3306
+        command: --server_id=100 --log-bin='mysql-bin-1.log' --default-time-zone='+3:00' --gtid-mode="ON" --enforce-gtid-consistency
@@ -1,10 +0,0 @@
-version: '2.3'
-services:
-    mysql5_7:
-        image: mysql:5.7
-        restart: always
-        environment:
-            MYSQL_ROOT_PASSWORD: clickhouse
-        ports:
-            - 33307:3306
-        command: --server_id=100 --log-bin='mysql-bin-1.log' --default-time-zone='+3:00' --gtid-mode="ON" --enforce-gtid-consistency
@@ -5,3 +5,4 @@ services:
         restart: always
         ports:
             - 6380:6379
+        command: redis-server --requirepass "clickhouse"
@@ -536,7 +536,9 @@ create table queries engine File(TSVWithNamesAndTypes, 'report/queries.tsv')
     left join query_display_names
         on query_metric_stats.test = query_display_names.test
             and query_metric_stats.query_index = query_display_names.query_index
-    where metric_name = 'server_time'
+    -- 'server_time' is rounded down to ms, which might be bad for very short queries.
+    -- Use 'client_time' instead.
+    where metric_name = 'client_time'
     order by test, query_index, metric_name
     ;
@@ -563,40 +565,54 @@ create table unstable_queries_report engine File(TSV, 'report/unstable-queries.t
         toDecimal64(stat_threshold, 3), unstable_fail, test, query_index, query_display_name
     from queries where unstable_show order by stat_threshold desc;
 
-create table test_time_changes engine File(TSV, 'report/test-time-changes.tsv') as
-    select test, queries, average_time_change from (
-        select test, count(*) queries,
-            sum(left) as left, sum(right) as right,
-            (right - left) / right average_time_change
-        from queries
-        group by test
-        order by abs(average_time_change) desc
-    )
-    ;
-
-create table unstable_tests engine File(TSV, 'report/unstable-tests.tsv') as
-    select test, sum(unstable_show) total_unstable, sum(changed_show) total_changed
+create view test_speedup as
+    select
+        test,
+        exp2(avg(log2(left / right))) times_speedup,
+        count(*) queries,
+        unstable + changed bad,
+        sum(changed_show) changed,
+        sum(unstable_show) unstable
     from queries
     group by test
-    order by total_unstable + total_changed desc
+    order by times_speedup desc
     ;
 
+create view total_speedup as
+    select
+        'Total' test,
+        exp2(avg(log2(times_speedup))) times_speedup,
+        sum(queries) queries,
+        unstable + changed bad,
+        sum(changed) changed,
+        sum(unstable) unstable
+    from test_speedup
+    ;
+
 create table test_perf_changes_report engine File(TSV, 'report/test-perf-changes.tsv') as
-    select test,
-        queries,
-        coalesce(total_unstable, 0) total_unstable,
-        coalesce(total_changed, 0) total_changed,
-        total_unstable + total_changed total_bad,
-        coalesce(toString(toDecimal64(average_time_change, 3)), '??') average_time_change_str
-    from test_time_changes
-    full join unstable_tests
-    using test
-    where (abs(average_time_change) > 0.05 and queries > 5)
-        or (total_bad > 0)
-    order by total_bad desc, average_time_change desc
-    settings join_use_nulls = 1
+    with
+        (times_speedup >= 1
+            ? '-' || toString(toDecimal64(times_speedup, 3)) || 'x'
+            : '+' || toString(toDecimal64(1 / times_speedup, 3)) || 'x')
+        as times_speedup_str
+    select test, times_speedup_str, queries, bad, changed, unstable
+    -- Not sure what's the precedence of UNION ALL vs WHERE & ORDER BY, hence all
+    -- the braces.
+    from (
+        (
+            select * from total_speedup
+        ) union all (
+            select * from test_speedup
+            where
+                (times_speedup >= 1 ? times_speedup : (1 / times_speedup)) >= 1.005
+                or bad
+        )
+    )
+    order by test = 'Total' desc, times_speedup desc
     ;
 
 
 create view total_client_time_per_query as select *
     from file('analyze/client-times.tsv', TSV,
         'test text, query_index int, client float, server float');
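The `exp2(avg(log2(left / right)))` expression in the new views is a geometric mean of the per-query time ratios, so a 2x speedup and a 2x slowdown cancel out instead of biasing an arithmetic average. A small standalone sketch of the same aggregation (the helper name is illustrative, mirroring the SQL formula only):

```cpp
#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

// Geometric mean of old/new time ratios, as in the test_speedup view:
// exp2(avg(log2(left / right))).
double times_speedup(const std::vector<std::pair<double, double>> & left_right)
{
    double sum_log = 0;
    for (const auto & [left, right] : left_right)
        sum_log += std::log2(left / right);
    return std::exp2(sum_log / left_right.size());
}

int main()
{
    // One query got 2x faster, another 2x slower: the geometric mean is 1.
    std::cout << times_speedup({{2.0, 1.0}, {1.0, 2.0}}) << '\n';
}
```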
@@ -888,7 +904,10 @@ for log in *-err.log
 do
     test=$(basename "$log" "-err.log")
     {
-        grep -H -m2 -i '\(Exception\|Error\):[^:]' "$log" \
+        # The second grep is a heuristic for error messages like
+        # "socket.timeout: timed out".
+        grep -h -m2 -i '\(Exception\|Error\):[^:]' "$log" \
+            || grep -h -m2 -i '^[^ ]\+: ' "$log" \
         || head -2 "$log"
     } | sed "s/^/$test\t/" >> run-errors.tsv ||:
 done
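The second pattern added above is a fallback heuristic: if no explicit `Exception:`/`Error:` line is found, any line that begins with a token followed by `: ` (e.g. `socket.timeout: timed out`) is treated as an error. A sketch of the same two-stage matching (hypothetical helper, not from the repository; the final `head -2` fallback is omitted):

```cpp
#include <regex>
#include <string>
#include <vector>

// Prefer explicit Exception/Error lines; otherwise accept any
// "token: message" line, as in "socket.timeout: timed out".
std::vector<std::string> pick_error_lines(const std::vector<std::string> & log, size_t limit = 2)
{
    static const std::regex primary("(Exception|Error):[^:]", std::regex::icase);
    static const std::regex fallback("^[^ ]+: ");

    std::vector<std::string> out;
    for (const auto & re : {primary, fallback})
    {
        for (const auto & line : log)
            if (out.size() < limit && std::regex_search(line, re))
                out.push_back(line);
        if (!out.empty())
            break;  // the fallback only runs when the primary found nothing
    }
    return out;
}
```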
@@ -262,6 +262,13 @@ for query_index, q in enumerate(test_queries):
             print(f'query\t{query_index}\t{run_id}\t{conn_index}\t{c.last_query.elapsed}')
             server_seconds += c.last_query.elapsed
 
+            if c.last_query.elapsed > 10:
+                # Stop processing pathologically slow queries, to avoid timing out
+                # the entire test task. This shouldn't really happen, so we don't
+                # need much handling for this case and can just exit.
+                print(f'The query no. {query_index} is taking too long to run ({c.last_query.elapsed} s)', file=sys.stderr)
+                exit(2)
+
     client_seconds = time.perf_counter() - start_seconds
     print(f'client-time\t{query_index}\t{client_seconds}\t{server_seconds}')
@@ -168,12 +168,6 @@ def nextRowAnchor():
     global table_anchor
     return f'{table_anchor}.{row_anchor + 1}'
 
-def setRowAnchor(anchor_row_part):
-    global row_anchor
-    global table_anchor
-    row_anchor = anchor_row_part
-    return currentRowAnchor()
-
 def advanceRowAnchor():
     global row_anchor
     global table_anchor
@@ -376,7 +370,7 @@ if args.report == 'main':
     columns = [
         'Old, s', # 0
         'New, s', # 1
-        'Times speedup / slowdown', # 2
+        'Ratio of speedup (-) or slowdown (+)', # 2
         'Relative difference (new − old) / old', # 3
         'p < 0.001 threshold', # 4
         # Failed # 5
@@ -453,7 +447,7 @@ if args.report == 'main':
     addSimpleTable('Skipped tests', ['Test', 'Reason'], skipped_tests_rows)
 
     addSimpleTable('Test performance changes',
-        ['Test', 'Queries', 'Unstable', 'Changed perf', 'Total not OK', 'Avg relative time diff'],
+        ['Test', 'Ratio of speedup (-) or slowdown (+)', 'Queries', 'Total not OK', 'Changed perf', 'Unstable'],
        tsvRows('report/test-perf-changes.tsv'))
 
    def add_test_times():
@@ -480,11 +474,12 @@ if args.report == 'main':
         total_runs = (nominal_runs + 1) * 2  # one prewarm run, two servers
         attrs = ['' for c in columns]
         for r in rows:
+            anchor = f'{currentTableAnchor()}.{r[0]}'
             if float(r[6]) > 1.5 * total_runs:
                 # FIXME should be 15s max -- investigate parallel_insert
                 slow_average_tests += 1
                 attrs[6] = f'style="background: {color_bad}"'
-                errors_explained.append([f'<a href="./all-queries.html#all-query-times.{r[0]}.0">The test \'{r[0]}\' is too slow to run as a whole. Investigate whether the create and fill queries can be sped up'])
+                errors_explained.append([f'<a href="#{anchor}">The test \'{r[0]}\' is too slow to run as a whole. Investigate whether the create and fill queries can be sped up'])
             else:
                 attrs[6] = ''
 
@@ -495,7 +490,7 @@ if args.report == 'main':
             else:
                 attrs[5] = ''
 
-            text += tableRow(r, attrs)
+            text += tableRow(r, attrs, anchor)
 
         text += tableEnd()
         tables.append(text)
@@ -652,7 +647,7 @@ elif args.report == 'all-queries':
         # Unstable #1
         'Old, s', #2
         'New, s', #3
-        'Times speedup / slowdown', #4
+        'Ratio of speedup (-) or slowdown (+)', #4
         'Relative difference (new − old) / old', #5
         'p < 0.001 threshold', #6
         'Test', #7
@@ -12,8 +12,8 @@ RUN apt-get update --yes \
         strace \
     --yes --no-install-recommends
 
-#RUN wget -q -O - http://files.viva64.com/etc/pubkey.txt | sudo apt-key add -
-#RUN sudo wget -O /etc/apt/sources.list.d/viva64.list http://files.viva64.com/etc/viva64.list
+#RUN wget -nv -O - http://files.viva64.com/etc/pubkey.txt | sudo apt-key add -
+#RUN sudo wget -nv -O /etc/apt/sources.list.d/viva64.list http://files.viva64.com/etc/viva64.list
 #
 #RUN apt-get --allow-unauthenticated update -y \
 #    && env DEBIAN_FRONTEND=noninteractive \
@@ -24,10 +24,10 @@ ENV PKG_VERSION="pvs-studio-latest"
 
 RUN set -x \
     && export PUBKEY_HASHSUM="486a0694c7f92e96190bbfac01c3b5ac2cb7823981db510a28f744c99eabbbf17a7bcee53ca42dc6d84d4323c2742761" \
-    && wget https://files.viva64.com/etc/pubkey.txt -O /tmp/pubkey.txt \
+    && wget -nv https://files.viva64.com/etc/pubkey.txt -O /tmp/pubkey.txt \
     && echo "${PUBKEY_HASHSUM} /tmp/pubkey.txt" | sha384sum -c \
     && apt-key add /tmp/pubkey.txt \
-    && wget "https://files.viva64.com/${PKG_VERSION}.deb" \
+    && wget -nv "https://files.viva64.com/${PKG_VERSION}.deb" \
     && { debsig-verify ${PKG_VERSION}.deb \
     || echo "WARNING: Some file was just downloaded from the internet without any validation and we are installing it into the system"; } \
     && dpkg -i "${PKG_VERSION}.deb"
@@ -29,17 +29,26 @@ if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then
     ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/
 fi
 
-echo "TSAN_OPTIONS='verbosity=1000 halt_on_error=1 history_size=7'" >> /etc/environment
-echo "TSAN_SYMBOLIZER_PATH=/usr/lib/llvm-10/bin/llvm-symbolizer" >> /etc/environment
-echo "UBSAN_OPTIONS='print_stacktrace=1'" >> /etc/environment
-echo "ASAN_SYMBOLIZER_PATH=/usr/lib/llvm-10/bin/llvm-symbolizer" >> /etc/environment
-echo "UBSAN_SYMBOLIZER_PATH=/usr/lib/llvm-10/bin/llvm-symbolizer" >> /etc/environment
-echo "LLVM_SYMBOLIZER_PATH=/usr/lib/llvm-10/bin/llvm-symbolizer" >> /etc/environment
+function start()
+{
+    counter=0
+    until clickhouse-client --query "SELECT 1"
+    do
+        if [ "$counter" -gt 120 ]
+        then
+            echo "Cannot start clickhouse-server"
+            cat /var/log/clickhouse-server/stdout.log
+            tail -n1000 /var/log/clickhouse-server/stderr.log
+            tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
+            break
+        fi
+        timeout 120 service clickhouse-server start
+        sleep 0.5
+        counter=$(($counter + 1))
+    done
+}
 
 service zookeeper start
 sleep 5
-service clickhouse-server start
-sleep 5
+start
 /s3downloader --dataset-names $DATASETS
 chmod 777 -R /var/lib/clickhouse
 clickhouse-client --query "SHOW DATABASES"
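The `start()` helper added above replaces a fixed `sleep 5` with polling: it retries a trivial query until the server answers, giving up after 120 attempts at 0.5 s intervals. A sketch of the same wait-until-ready pattern (the `ping` callback is a stand-in for `clickhouse-client --query "SELECT 1"`):

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Poll until the server responds, giving up (but not aborting) after
// 120 attempts, mirroring the shell loop in start().
bool wait_until_ready(const std::function<bool()> & ping)
{
    for (int counter = 0; counter <= 120; ++counter)
    {
        if (ping())
            return true;
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    return false; // the caller dumps the server logs, as in the shell version
}
```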
@@ -2,6 +2,7 @@
 # -*- coding: utf-8 -*-
 import os
 import sys
+import time
 import tarfile
 import logging
 import argparse
@@ -16,6 +17,8 @@ AVAILABLE_DATASETS = {
     'visits': 'visits_v1.tar',
 }
 
+RETRIES_COUNT = 5
+
 def _get_temp_file_name():
     return os.path.join(tempfile._get_default_tempdir(), next(tempfile._get_candidate_names()))
@@ -24,25 +27,37 @@ def build_url(base_url, dataset):
 
 def dowload_with_progress(url, path):
     logging.info("Downloading from %s to temp path %s", url, path)
-    with open(path, 'w') as f:
-        response = requests.get(url, stream=True)
-        response.raise_for_status()
-        total_length = response.headers.get('content-length')
-        if total_length is None or int(total_length) == 0:
-            logging.info("No content-length, will download file without progress")
-            f.write(response.content)
-        else:
-            dl = 0
-            total_length = int(total_length)
-            logging.info("Content length is %ld bytes", total_length)
-            for data in response.iter_content(chunk_size=4096):
-                dl += len(data)
-                f.write(data)
-                if sys.stdout.isatty():
-                    done = int(50 * dl / total_length)
-                    percent = int(100 * float(dl) / total_length)
-                    sys.stdout.write("\r[{}{}] {}%".format('=' * done, ' ' * (50-done), percent))
-                    sys.stdout.flush()
+    for i in range(RETRIES_COUNT):
+        try:
+            with open(path, 'w') as f:
+                response = requests.get(url, stream=True)
+                response.raise_for_status()
+                total_length = response.headers.get('content-length')
+                if total_length is None or int(total_length) == 0:
+                    logging.info("No content-length, will download file without progress")
+                    f.write(response.content)
+                else:
+                    dl = 0
+                    total_length = int(total_length)
+                    logging.info("Content length is %ld bytes", total_length)
+                    for data in response.iter_content(chunk_size=4096):
+                        dl += len(data)
+                        f.write(data)
+                        if sys.stdout.isatty():
+                            done = int(50 * dl / total_length)
+                            percent = int(100 * float(dl) / total_length)
+                            sys.stdout.write("\r[{}{}] {}%".format('=' * done, ' ' * (50-done), percent))
+                            sys.stdout.flush()
+            break
+        except Exception as ex:
+            sys.stdout.write("\n")
+            time.sleep(3)
+            logging.info("Exception while downloading %s, retry %s", ex, i + 1)
+            if os.path.exists(path):
+                os.remove(path)
+    else:
+        raise Exception("Cannot download dataset from {}, all retries exceeded".format(url))
 
     sys.stdout.write("\n")
     logging.info("Downloading finished")
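The change above wraps the download in a bounded retry loop: a fixed number of attempts with a short pause, deleting the partial file after each failure, and raising only once all retries are exhausted (Python's `for`/`else`). A sketch of the same policy in isolation (the two callbacks are stand-ins for the request and the file cleanup):

```cpp
#include <chrono>
#include <functional>
#include <stdexcept>
#include <thread>

constexpr int retries_count = 5; // mirrors RETRIES_COUNT above

// Retry a download a fixed number of times, cleaning up the partial file
// between attempts; throw only after every attempt has failed.
void download_with_retries(const std::function<void()> & attempt_download,
                           const std::function<void()> & remove_partial_file)
{
    for (int i = 0; i < retries_count; ++i)
    {
        try
        {
            attempt_download();
            return; // mirrors the `break` on success
        }
        catch (const std::exception &)
        {
            std::this_thread::sleep_for(std::chrono::seconds(3));
            remove_partial_file();
        }
    }
    throw std::runtime_error("Cannot download dataset, all retries exceeded");
}
```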
@@ -71,14 +71,26 @@ ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config
 ln -s --backup=simple --suffix=_original.xml \
     /usr/share/clickhouse-test/config/query_masking_rules.xml /etc/clickhouse-server/config.d/
 
+function start()
+{
+    counter=0
+    until clickhouse-client --query "SELECT 1"
+    do
+        if [ "$counter" -gt 120 ]
+        then
+            echo "Cannot start clickhouse-server"
+            cat /var/log/clickhouse-server/stdout.log
+            tail -n1000 /var/log/clickhouse-server/stderr.log
+            tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
+            break
+        fi
+        timeout 120 service clickhouse-server start
+        sleep 0.5
+        counter=$(($counter + 1))
+    done
+}
+
 service zookeeper start
 
 sleep 5
 
-start_clickhouse
-
-sleep 5
+start
 
 if ! /s3downloader --dataset-names $DATASETS; then
     echo "Cannot download datatsets"
@@ -2,6 +2,7 @@
 # -*- coding: utf-8 -*-
 import os
 import sys
+import time
 import tarfile
 import logging
 import argparse
@@ -16,6 +17,8 @@ AVAILABLE_DATASETS = {
     'visits': 'visits_v1.tar',
 }
 
+RETRIES_COUNT = 5
+
 def _get_temp_file_name():
     return os.path.join(tempfile._get_default_tempdir(), next(tempfile._get_candidate_names()))
@@ -24,25 +27,37 @@ def build_url(base_url, dataset):
 
 def dowload_with_progress(url, path):
     logging.info("Downloading from %s to temp path %s", url, path)
-    with open(path, 'w') as f:
-        response = requests.get(url, stream=True)
-        response.raise_for_status()
-        total_length = response.headers.get('content-length')
-        if total_length is None or int(total_length) == 0:
-            logging.info("No content-length, will download file without progress")
-            f.write(response.content)
-        else:
-            dl = 0
-            total_length = int(total_length)
-            logging.info("Content length is %ld bytes", total_length)
-            for data in response.iter_content(chunk_size=4096):
-                dl += len(data)
-                f.write(data)
-                if sys.stdout.isatty():
-                    done = int(50 * dl / total_length)
-                    percent = int(100 * float(dl) / total_length)
-                    sys.stdout.write("\r[{}{}] {}%".format('=' * done, ' ' * (50-done), percent))
-                    sys.stdout.flush()
+    for i in range(RETRIES_COUNT):
+        try:
+            with open(path, 'w') as f:
+                response = requests.get(url, stream=True)
+                response.raise_for_status()
+                total_length = response.headers.get('content-length')
+                if total_length is None or int(total_length) == 0:
+                    logging.info("No content-length, will download file without progress")
+                    f.write(response.content)
+                else:
+                    dl = 0
+                    total_length = int(total_length)
+                    logging.info("Content length is %ld bytes", total_length)
+                    for data in response.iter_content(chunk_size=4096):
+                        dl += len(data)
+                        f.write(data)
+                        if sys.stdout.isatty():
+                            done = int(50 * dl / total_length)
+                            percent = int(100 * float(dl) / total_length)
+                            sys.stdout.write("\r[{}{}] {}%".format('=' * done, ' ' * (50-done), percent))
+                            sys.stdout.flush()
+            break
+        except Exception as ex:
+            sys.stdout.write("\n")
+            time.sleep(3)
+            logging.info("Exception while downloading %s, retry %s", ex, i + 1)
+            if os.path.exists(path):
+                os.remove(path)
+    else:
+        raise Exception("Cannot download dataset from {}, all retries exceeded".format(url))
 
     sys.stdout.write("\n")
     logging.info("Downloading finished")
@@ -26,7 +26,7 @@ RUN apt-get update -y \
         zookeeperd
 
 RUN mkdir -p /tmp/clickhouse-odbc-tmp \
-    && wget --quiet -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
+    && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
     && cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
     && odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \
     && odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
|
@ -71,7 +71,7 @@ RUN apt-get --allow-unauthenticated update -y \
|
||||
zookeeperd
|
||||
|
||||
RUN mkdir -p /tmp/clickhouse-odbc-tmp \
|
||||
&& wget --quiet -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
|
||||
&& wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
|
||||
&& cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
|
||||
&& odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \
|
||||
&& odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
|
||||
|
@ -33,7 +33,7 @@ RUN apt-get update -y \
|
||||
qemu-user-static
|
||||
|
||||
RUN mkdir -p /tmp/clickhouse-odbc-tmp \
|
||||
&& wget --quiet -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
|
||||
&& wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
|
||||
&& cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
|
||||
&& odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \
|
||||
&& odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
|
||||
|
@ -44,7 +44,7 @@ RUN set -eux; \
|
||||
\
|
||||
# this "case" statement is generated via "update.sh"
|
||||
\
|
||||
if ! wget -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
|
||||
if ! wget -nv -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
|
||||
echo >&2 "error: failed to download 'docker-${DOCKER_VERSION}' from '${DOCKER_CHANNEL}' for '${x86_64}'"; \
|
||||
exit 1; \
|
||||
fi; \
|
||||
|
@ -32,7 +32,8 @@ SETTINGS
|
||||
[kafka_num_consumers = N,]
|
||||
[kafka_max_block_size = 0,]
|
||||
[kafka_skip_broken_messages = N,]
|
||||
[kafka_commit_every_batch = 0]
|
||||
[kafka_commit_every_batch = 0,]
|
||||
[kafka_thread_per_consumer = 0]
|
||||
```
|
||||
|
||||
Required parameters:
|
||||
@ -50,6 +51,7 @@ Optional parameters:
|
||||
- `kafka_max_block_size` - The maximum batch size (in messages) for poll (default: `max_block_size`).
|
||||
- `kafka_skip_broken_messages` – Kafka message parser tolerance to schema-incompatible messages per block. Default: `0`. If `kafka_skip_broken_messages = N` then the engine skips *N* Kafka messages that cannot be parsed (a message equals a row of data).
|
||||
- `kafka_commit_every_batch` - Commit every consumed and handled batch instead of a single commit after writing a whole block (default: `0`).
|
||||
- `kafka_thread_per_consumer` - Provide independent thread for each consumer (default: `0`). When enabled, every consumer flush the data independently, in parallel (otherwise - rows from several consumers squashed to form one block).
|
||||
|
||||
Examples:
|
||||
|
||||
|
@ -7,7 +7,7 @@ toc_title: RabbitMQ
|
||||
|
||||
This engine allows integrating ClickHouse with [RabbitMQ](https://www.rabbitmq.com).
|
||||
|
||||
RabbitMQ lets you:
|
||||
`RabbitMQ` lets you:
|
||||
|
||||
- Publish or subscribe to data flows.
|
||||
- Process streams as they become available.
|
||||
@ -27,9 +27,15 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
|
||||
[rabbitmq_exchange_type = 'exchange_type',]
|
||||
[rabbitmq_routing_key_list = 'key1,key2,...',]
|
||||
[rabbitmq_row_delimiter = 'delimiter_symbol',]
|
||||
[rabbitmq_schema = '',]
|
||||
[rabbitmq_num_consumers = N,]
|
||||
[rabbitmq_num_queues = N,]
|
||||
[rabbitmq_transactional_channel = 0]
|
||||
[rabbitmq_queue_base = 'queue',]
|
||||
[rabbitmq_deadletter_exchange = 'dl-exchange',]
|
||||
[rabbitmq_persistent = 0,]
|
||||
[rabbitmq_skip_broken_messages = N,]
|
||||
[rabbitmq_max_block_size = N,]
|
||||
[rabbitmq_flush_interval_ms = N]
|
||||
```
|
||||
|
||||
Required parameters:
|
||||
@ -40,12 +46,18 @@ Required parameters:
|
||||
|
||||
Optional parameters:
|
||||
|
||||
- `rabbitmq_exchange_type` – The type of RabbitMQ exchange: `direct`, `fanout`, `topic`, `headers`, `consistent-hash`. Default: `fanout`.
|
||||
- `rabbitmq_exchange_type` – The type of RabbitMQ exchange: `direct`, `fanout`, `topic`, `headers`, `consistent_hash`. Default: `fanout`.
|
||||
- `rabbitmq_routing_key_list` – A comma-separated list of routing keys.
|
||||
- `rabbitmq_row_delimiter` – Delimiter character, which ends the message.
|
||||
- `rabbitmq_schema` – Parameter that must be used if the format requires a schema definition. For example, [Cap’n Proto](https://capnproto.org/) requires the path to the schema file and the name of the root `schema.capnp:Message` object.
|
||||
- `rabbitmq_num_consumers` – The number of consumers per table. Default: `1`. Specify more consumers if the throughput of one consumer is insufficient.
|
||||
- `rabbitmq_num_queues` – The number of queues per consumer. Default: `1`. Specify more queues if the capacity of one queue per consumer is insufficient. Single queue can contain up to 50K messages at the same time.
|
||||
- `rabbitmq_transactional_channel` – Wrap insert queries in transactions. Default: `0`.
|
||||
- `rabbitmq_num_queues` – The number of queues per consumer. Default: `1`. Specify more queues if the capacity of one queue per consumer is insufficient.
|
||||
- `rabbitmq_queue_base` - Specify a base name for queues that will be declared. By default, queues are declared unique to tables based on db and table names.
|
||||
- `rabbitmq_deadletter_exchange` - Specify name for a [dead letter exchange](https://www.rabbitmq.com/dlx.html). You can create another table with this exchange name and collect messages in cases when they are republished to dead letter exchange. By default dead letter exchange is not specified.
|
||||
- `persistent` - If set to 1 (true), in insert query delivery mode will be set to 2 (marks messages as 'persistent'). Default: `0`.
|
||||
- `rabbitmq_skip_broken_messages` – RabbitMQ message parser tolerance to schema-incompatible messages per block. Default: `0`. If `rabbitmq_skip_broken_messages = N` then the engine skips *N* RabbitMQ messages that cannot be parsed (a message equals a row of data).
|
||||
- `rabbitmq_max_block_size`
|
||||
- `rabbitmq_flush_interval_ms`
|
||||
|
||||
Required configuration:
|
||||
|
||||
@ -72,7 +84,7 @@ Example:
|
||||
|
||||
## Description {#description}

`SELECT` is not particularly useful for reading messages (except for debugging), because each message can be read only once. It is more practical to create real-time threads using [materialized views](../../../sql-reference/statements/create/view.md). To do this:

1. Use the engine to create a RabbitMQ consumer and consider it a data stream.
2. Create a table with the desired structure.
There can be no more than one exchange per table. One exchange can be shared between many tables.

Exchange type options:

- `direct` - Routing is based on the exact matching of keys. Example table key list: `key1,key2,key3,key4,key5`, a message key can equal any of them.
- `fanout` - Routing to all tables (where the exchange name is the same) regardless of the keys.
- `topic` - Routing is based on patterns with dot-separated keys. Examples: `*.logs`, `records.*.*.2020`, `*.2018,*.2019,*.2020`.
- `headers` - Routing is based on `key=value` matches with a setting `x-match=all` or `x-match=any`. Example table key list: `x-match=all,format=logs,type=report,year=2020`.
- `consistent_hash` - Data is evenly distributed between all bound tables (where the exchange name is the same). Note that this exchange type must be enabled with the RabbitMQ plugin: `rabbitmq-plugins enable rabbitmq_consistent_hash_exchange`.

If the exchange type is not specified, then the default is `fanout`, and routing keys for data publishing must be randomized in the range `[1, num_consumers]` for every message/batch (or in the range `[1, num_consumers * num_queues]` if `rabbitmq_num_queues` is set). This table configuration works quicker than any other, especially when the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` parameters are set.

The `rabbitmq_queue_base` setting may be used for the following cases:

- To let different tables share queues, so that multiple consumers can be registered for the same queues, which improves performance. If using the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` settings, the exact match of queues is achieved when these parameters are the same.
- To be able to restore reading from certain durable queues when not all messages were successfully consumed. To resume consumption from one specific queue, set its name in the `rabbitmq_queue_base` setting and do not specify `rabbitmq_num_consumers` and `rabbitmq_num_queues` (they default to 1); see the sketch after this list. To resume consumption from all queues which were declared for a specific table, just specify the same settings: `rabbitmq_queue_base`, `rabbitmq_num_consumers`, `rabbitmq_num_queues`. By default, queue names are unique to tables. Note: this makes sense only if messages are sent with delivery mode 2, i.e. marked 'persistent', durable.
- To reuse queues, as they are declared durable and not auto-deleted.
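
A minimal sketch of resuming consumption from one durable queue; the host, exchange, and queue names here are illustrative:

``` sql
  CREATE TABLE rabbit_resume (key UInt64, value UInt64)
    ENGINE = RabbitMQ
    SETTINGS rabbitmq_host_port = 'localhost:5672',
             rabbitmq_exchange_name = 'exchange1',
             rabbitmq_format = 'JSONEachRow',
             rabbitmq_queue_base = 'durable-queue-1';
```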

To improve performance, received messages are grouped into blocks the size of [max\_insert\_block\_size](../../../operations/server-configuration-parameters/settings.md#settings-max_insert_block_size). If the block wasn’t formed within [stream\_flush\_interval\_ms](../../../operations/server-configuration-parameters/settings.md) milliseconds, the data will be flushed to the table regardless of the completeness of the block.

If the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` settings are specified along with `rabbitmq_exchange_type`, then:

- The `rabbitmq_consistent_hash_exchange` plugin must be enabled.
- The `message_id` property of the published messages must be specified (unique for each message/batch).
For an insert query, message metadata is added to each published message: a `messageID` and a `republished` flag (true if the message was published more than once). It can be accessed via the message headers.

Do not use the same table for inserts and materialized views.

Example:

``` sql
    rabbitmq_num_consumers = 5;

  CREATE TABLE daily (key UInt64, value UInt64)
    ENGINE = MergeTree() ORDER BY key;

  CREATE MATERIALIZED VIEW consumer TO daily
    AS SELECT key, value FROM queue;

  SELECT key, value FROM daily ORDER BY key;
```

## Virtual Columns {#virtual-columns}

- `_exchange_name` - RabbitMQ exchange name.
- `_channel_id` - ChannelID, on which the consumer that received the message was declared.
- `_delivery_tag` - DeliveryTag of the received message. Scoped per channel.
- `_redelivered` - `redelivered` flag of the message.
- `_message_id` - MessageID of the received message; non-empty if it was set when the message was published.
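
A quick way to inspect these columns is a debugging `SELECT` directly from the engine table (a sketch; remember that each message is read only once, so in practice virtual columns are usually read through a materialized view):

``` sql
SELECT _exchange_name, _channel_id, _delivery_tag, _redelivered, _message_id, *
FROM queue
LIMIT 5;
```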

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date`, `DateTime` or `DateTime64`. Optional parameter.
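
A minimal usage sketch; the table and column names here are illustrative:

``` sql
CREATE TABLE replacing_example
(
    key UInt64,
    value String,
    updated_at DateTime
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY key;
```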

When merging, `ReplacingMergeTree` leaves only one row out of all the rows with the same sorting key.

**Notes on Usage**

1. The program that writes the data should remember the state of an object to be able to cancel it. The “cancel” string should contain copies of the primary key fields and the version of the “state” string, and the opposite `Sign`. This increases the initial size of storage but allows writing the data quickly.
2. Long growing arrays in columns reduce the efficiency of the engine due to the load for writing. The more straightforward the data, the better the efficiency.
3. `SELECT` results depend strongly on the consistency of the history of object changes. Be accurate when preparing data for inserting. You can get unpredictable results with inconsistent data, such as negative values for non-negative metrics like session depth.
Examples:

``` bash
$ curl 'http://localhost:8123/?query=SELECT%201'
1

$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1

$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
```
@ -72,6 +72,7 @@ toc_title: Adopters
|
||||
| <a href="https://www.qingcloud.com/" class="favicon">QINGCLOUD</a> | Cloud services | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/4.%20Cloud%20%2B%20TSDB%20for%20ClickHouse%20张健%20QingCloud.pdf) |
|
||||
| <a href="https://qrator.net" class="favicon">Qrator</a> | DDoS protection | Main product | — | — | [Blog Post, March 2019](https://blog.qrator.net/en/clickhouse-ddos-mitigation_37/) |
|
||||
| <a href="https://rambler.ru" class="favicon">Rambler</a> | Internet services | Analytics | — | — | [Talk in Russian, April 2018](https://medium.com/@ramblertop/разработка-api-clickhouse-для-рамблер-топ-100-f4c7e56f3141) |
|
||||
| <a href="https://retell.cc/" class="favicon">Retell</a> | Speech synthesis | Analytics | — | — | [Blog Article, August 2020](https://vc.ru/services/153732-kak-sozdat-audiostati-na-vashem-sayte-i-zachem-eto-nuzhno) |
|
||||
| <a href="https://rspamd.com/" class="favicon">Rspamd</a> | Antispam | Analytics | — | — | [Official Website](https://rspamd.com/doc/modules/clickhouse.html) |
|
||||
| <a href="https://www.s7.ru" class="favicon">S7 Airlines</a> | Airlines | Metrics, Logging | — | — | [Talk in Russian, March 2019](https://www.youtube.com/watch?v=nwG68klRpPg&t=15s) |
|
||||
| <a href="https://www.scireum.de/" class="favicon">scireum GmbH</a> | e-Commerce | Main product | — | — | [Talk in German, February 2020](https://www.youtube.com/watch?v=7QWAn5RbyR4) |
|
||||
|

- [ALTER USER](../sql-reference/statements/alter/user.md#alter-user-statement)
- [DROP USER](../sql-reference/statements/drop.md)
- [SHOW CREATE USER](../sql-reference/statements/show.md#show-create-user-statement)
- [SHOW USERS](../sql-reference/statements/show.md#show-users-statement)

### Settings Applying {#access-control-settings-applying}

- [SET ROLE](../sql-reference/statements/set-role.md)
- [SET DEFAULT ROLE](../sql-reference/statements/set-role.md#set-default-role-statement)
- [SHOW CREATE ROLE](../sql-reference/statements/show.md#show-create-role-statement)
- [SHOW ROLES](../sql-reference/statements/show.md#show-roles-statement)

Privileges can be granted to a role by the [GRANT](../sql-reference/statements/grant.md) query. To revoke privileges from a role, ClickHouse provides the [REVOKE](../sql-reference/statements/revoke.md) query.

- [ALTER ROW POLICY](../sql-reference/statements/alter/row-policy.md#alter-row-policy-statement)
- [DROP ROW POLICY](../sql-reference/statements/drop.md#drop-row-policy-statement)
- [SHOW CREATE ROW POLICY](../sql-reference/statements/show.md#show-create-row-policy-statement)
- [SHOW POLICIES](../sql-reference/statements/show.md#show-policies-statement)

## Settings Profile {#settings-profiles-management}

- [ALTER SETTINGS PROFILE](../sql-reference/statements/alter/settings-profile.md#alter-settings-profile-statement)
- [DROP SETTINGS PROFILE](../sql-reference/statements/drop.md#drop-settings-profile-statement)
- [SHOW CREATE SETTINGS PROFILE](../sql-reference/statements/show.md#show-create-settings-profile-statement)
- [SHOW PROFILES](../sql-reference/statements/show.md#show-profiles-statement)

## Quota {#quotas-management}

- [ALTER QUOTA](../sql-reference/statements/alter/quota.md#alter-quota-statement)
- [DROP QUOTA](../sql-reference/statements/drop.md#drop-quota-statement)
- [SHOW CREATE QUOTA](../sql-reference/statements/show.md#show-create-quota-statement)
- [SHOW QUOTA](../sql-reference/statements/show.md#show-quota-statement)
- [SHOW QUOTAS](../sql-reference/statements/show.md#show-quotas-statement)

## Enabling SQL-driven Access Control and Account Management {#enabling-access-control}

Default value: 0.

## distributed\_group\_by\_no\_merge {#distributed-group-by-no-merge}

Do not merge aggregation states from different servers for distributed query processing. You can use this if it is certain that there are different keys on different shards.

Possible values:

- 0 — Disabled (final query processing is done on the initiator node).
- 1 — Do not merge aggregation states from different servers for distributed query processing (the query is completely processed on the shard, the initiator only proxies the data).
- 2 — Same as 1, but apply `ORDER BY` and `LIMIT` on the initiator (can be used for queries with `ORDER BY` and/or `LIMIT`).

**Example**

```sql
SELECT *
FROM remote('127.0.0.{2,3}', system.one)
GROUP BY dummy
LIMIT 1
SETTINGS distributed_group_by_no_merge = 1
FORMAT PrettyCompactMonoBlock

┌─dummy─┐
│     0 │
│     0 │
└───────┘
```

```sql
SELECT *
FROM remote('127.0.0.{2,3}', system.one)
GROUP BY dummy
LIMIT 1
SETTINGS distributed_group_by_no_merge = 2
FORMAT PrettyCompactMonoBlock

┌─dummy─┐
│     0 │
└───────┘
```

Default value: 0

## optimize\_skip\_unused\_shards {#optimize-skip-unused-shards}

Enables or disables skipping of unused shards for [SELECT](../../sql-reference/statements/select/index.md) queries that have a sharding key condition in `WHERE/PREWHERE` (assuming that the data is distributed by sharding key, otherwise the setting does nothing).

Default value: 0

## optimize\_distributed\_group\_by\_sharding\_key {#optimize-distributed-group-by-sharding-key}

Optimizes `GROUP BY sharding_key` queries by avoiding costly aggregation on the initiator server (which reduces memory usage for the query on the initiator server).

The following types of queries are supported (and all combinations of them):

- `SELECT DISTINCT [..., ]sharding_key[, ...] FROM dist`
- `SELECT ... FROM dist GROUP BY sharding_key[, ...]`
- `SELECT ... FROM dist GROUP BY sharding_key[, ...] ORDER BY x`
- `SELECT ... FROM dist GROUP BY sharding_key[, ...] LIMIT 1`
- `SELECT ... FROM dist GROUP BY sharding_key[, ...] LIMIT 1 BY x`

The following types of queries are not supported (support for some of them may be added later):

- `SELECT ... GROUP BY sharding_key[, ...] WITH TOTALS`
- `SELECT ... GROUP BY sharding_key[, ...] WITH ROLLUP`
- `SELECT ... GROUP BY sharding_key[, ...] WITH CUBE`
- `SELECT ... GROUP BY sharding_key[, ...] SETTINGS extremes=1`

Possible values:

- 0 — Disabled.
- 1 — Enabled.

Default value: 0

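For example, with a hypothetical Distributed table `dist` sharded by `user_id`, the optimization applies to a query such as this sketch (note the `optimize_skip_unused_shards` requirement mentioned in the note below):

```sql
SELECT user_id, count()
FROM dist
GROUP BY user_id
SETTINGS optimize_distributed_group_by_sharding_key = 1,
         optimize_skip_unused_shards = 1
```
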
See also:

- [distributed\_group\_by\_no\_merge](#distributed-group-by-no-merge)
- [optimize\_skip\_unused\_shards](#optimize-skip-unused-shards)

!!! note "Note"
    Right now it requires `optimize_skip_unused_shards` (the reason behind this is that one day it may be enabled by default, and it will work correctly only if data was inserted via a Distributed table, i.e. data is distributed according to the sharding_key).

## optimize\_throw\_if\_noop {#setting-optimize_throw_if_noop}

Enables or disables throwing an exception if an [OPTIMIZE](../../sql-reference/statements/misc.md#misc_operations-optimize) query didn’t perform a merge.

Locking timeout is used to protect from deadlocks while executing read/write operations.

Possible values:

- Positive integer (in seconds).
- 0 — No locking timeout.

Default value: `120` seconds.

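For example, a session that runs heavy `ALTER`s could raise the timeout; a sketch, assuming this is the `lock_acquire_timeout` setting (the heading is not visible in this excerpt):

```sql
SET lock_acquire_timeout = 300;
```
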
## output_format_pretty_max_value_width {#output_format_pretty_max_value_width}

# system.grants {#system_tables-grants}

Privileges granted to ClickHouse user accounts.

Columns:
- `user_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — User name.

- `role_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Role assigned to user account.

- `access_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Access parameters for ClickHouse user account.

- `database` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Name of a database.

- `table` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Name of a table.

- `column` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Name of a column to which access is granted.

- `is_partial_revoke` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Logical value. It shows whether some privileges have been revoked. Possible values:
    - `0` — The row describes a grant.
    - `1` — The row describes a partial revoke.

- `grant_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Permission is granted `WITH GRANT OPTION`, see [GRANT](../../sql-reference/statements/grant.md#grant-privigele-syntax).

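A quick sanity check of what a given account can do might look like this sketch (the user name is illustrative):

``` sql
SELECT user_name, access_type, database, `table`
FROM system.grants
WHERE user_name = 'default';
```
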
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/grants) <!--hide-->

Columns:

- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Query starting date.
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Query starting time.
- `query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Start time of query execution.
- `query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Start time of query execution with microsecond precision.
- `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution in milliseconds.
- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of rows read from all tables and table functions participating in the query. It includes usual subqueries and subqueries for `IN` and `JOIN`. For distributed queries `read_rows` includes the total number of rows read at all replicas. Each replica sends its `read_rows` value, and the server-initiator of the query summarizes all received and local values. Cache volumes don’t affect this value.
- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of bytes read from all tables and table functions participating in the query. It includes usual subqueries and subqueries for `IN` and `JOIN`. For distributed queries `read_bytes` includes the total number of bytes read at all replicas. Each replica sends its `read_bytes` value, and the server-initiator of the query summarizes all received and local values. Cache volumes don’t affect this value.

Columns:

- `event_date` ([Date](../../sql-reference/data-types/date.md)) — The date when the thread has finished execution of the query.
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the thread has finished execution of the query.
- `query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Start time of query execution.
- `query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Start time of query execution with microsecond precision.
- `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution.
- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of read rows.
- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of read bytes.

- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — The total query execution time, in seconds (wall time).
- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Maximum of query execution time.

## See Also {#see-also}

- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/quota_usage) <!--hide-->

- `apply_to_list` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — List of user names/[roles](../../operations/access-rights.md#role-management) that the quota should be applied to.
- `apply_to_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — List of user names/roles that the quota should not apply to.

## See Also {#see-also}

- [SHOW QUOTAS](../../sql-reference/statements/show.md#show-quotas-statement)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/quotas) <!--hide-->

- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — The total query execution time, in seconds (wall time).
- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Maximum of query execution time.

## See Also {#see-also}

- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/quotas_usage) <!--hide-->

Contains the role grants for users and roles.

Columns:

- `user_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — User name.

- `role_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Role name.

- `granted_role_name` ([String](../../sql-reference/data-types/string.md)) — Name of the role granted to the `role_name` role. To grant one role to another one, use `GRANT role1 TO role2`.

- `granted_role_is_default` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Flag that shows whether `granted_role` is a default role. Possible values:
    - 1 — `granted_role` is a default role.
    - 0 — `granted_role` is not a default role.

- `with_admin_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Flag that shows whether `granted_role` is a role with the [ADMIN OPTION](../../sql-reference/statements/grant.md#admin-option-privilege) privilege. Possible values:
    - 1 — The role has the `ADMIN OPTION` privilege.
    - 0 — The role is without the `ADMIN OPTION` privilege.

Columns:

- `id` ([UUID](../../sql-reference/data-types/uuid.md)) — Role ID.
- `storage` ([String](../../sql-reference/data-types/string.md)) — Path to the storage of roles. Configured in the `access_control_path` parameter.

## See Also {#see-also}

- [SHOW ROLES](../../sql-reference/statements/show.md#show-roles-statement)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/roles) <!--hide-->

# system.row_policies {#system_tables-row_policies}

Contains filters for one particular table, as well as a list of roles and/or users which should use this row policy.

Columns:
- `name` ([String](../../sql-reference/data-types/string.md)) — Name of a row policy.

- `short_name` ([String](../../sql-reference/data-types/string.md)) — Short name of a row policy. Names of row policies are compound, for example: `myfilter ON mydb.mytable`. Here `myfilter ON mydb.mytable` is the name of the row policy and `myfilter` is its short name.

- `database` ([String](../../sql-reference/data-types/string.md)) — Database name.

- `table` ([String](../../sql-reference/data-types/string.md)) — Table name.

- `id` ([UUID](../../sql-reference/data-types/uuid.md)) — Row policy ID.

- `storage` ([String](../../sql-reference/data-types/string.md)) — Name of the directory where the row policy is stored.

- `select_filter` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Condition which is used to filter rows.

- `is_restrictive` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Shows whether the row policy restricts access to rows, see [CREATE ROW POLICY](../../sql-reference/statements/create/row-policy.md#create-row-policy-as). Possible values:
    - `0` — The row policy is defined with the `AS PERMISSIVE` clause.
    - `1` — The row policy is defined with the `AS RESTRICTIVE` clause.

- `apply_to_all` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Shows whether the row policy is set for all roles and/or users.

- `apply_to_list` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — List of the roles and/or users to which the row policy is applied.

- `apply_to_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — The row policy is applied to all roles and/or users except the listed ones.

## See Also {#see-also}

- [SHOW POLICIES](../../sql-reference/statements/show.md#show-policies-statement)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/row_policies) <!--hide-->

# system.settings_profile_elements {#system_tables-settings_profile_elements}

Describes the content of the settings profile:

- Constraints.
- Roles and users that the setting applies to.
- Parent settings profiles.

Columns:
- `profile_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Setting profile name.

- `user_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — User name.

- `role_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Role name.

- `index` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Sequential number of the settings profile element.

- `setting_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Setting name.

- `value` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Setting value.

- `min` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — The minimum value of the setting. `NULL` if not set.

- `max` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — The maximum value of the setting. `NULL` if not set.

- `readonly` ([Nullable](../../sql-reference/data-types/nullable.md)([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges))) — Profile that allows only read queries.

- `inherit_profile` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — A parent profile for this settings profile. `NULL` if not set. When set, the settings profile inherits all the settings’ values and constraints (`min`, `max`, `readonly`) from its parent profiles.

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/settings_profile_elements) <!--hide-->

# system.settings_profiles {#system_tables-settings_profiles}

Contains properties of configured setting profiles.

Columns:
- `name` ([String](../../sql-reference/data-types/string.md)) — Setting profile name.

- `id` ([UUID](../../sql-reference/data-types/uuid.md)) — Setting profile ID.

- `storage` ([String](../../sql-reference/data-types/string.md)) — Path to the storage of setting profiles. Configured in the `access_control_path` parameter.

- `num_elements` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of elements for this profile in the `system.settings_profile_elements` table.

- `apply_to_all` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Shows whether the settings profile is set for all roles and/or users.

- `apply_to_list` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — List of the roles and/or users to which the settings profile is applied.

- `apply_to_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — The settings profile is applied to all roles and/or users except the listed ones.

## See Also {#see-also}

- [SHOW PROFILES](../../sql-reference/statements/show.md#show-profiles-statement)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/settings_profiles) <!--hide-->

- [Introspection Functions](../../sql-reference/functions/introspection.md) — Which introspection functions are available and how to use them.
- [system.trace_log](../system-tables/trace_log.md) — Contains stack traces collected by the sampling query profiler.
- [arrayMap](../../sql-reference/functions/array-functions.md#array-map) — Description and usage example of the `arrayMap` function.
- [arrayFilter](../../sql-reference/functions/array-functions.md#array-filter) — Description and usage example of the `arrayFilter` function.

[Original article](https://clickhouse.tech/docs/en/operations/system-tables/stack_trace) <!--hide-->

# system.users {#system_tables-users}

Contains a list of [user accounts](../../operations/access-rights.md#user-account-management) configured at the server.

Columns:
- `name` ([String](../../sql-reference/data-types/string.md)) — User name.

- `id` ([UUID](../../sql-reference/data-types/uuid.md)) — User ID.

- `storage` ([String](../../sql-reference/data-types/string.md)) — Path to the storage of users. Configured in the `access_control_path` parameter.

- `auth_type` ([Enum8](../../sql-reference/data-types/enum.md)('no_password' = 0, 'plaintext_password' = 1, 'sha256_password' = 2, 'double_sha1_password' = 3)) — Shows the authentication type. There are multiple ways of user identification: with no password, with a plain text password, with a [SHA256](https://ru.wikipedia.org/wiki/SHA-2)-encoded password or with a [double SHA-1](https://ru.wikipedia.org/wiki/SHA-1)-encoded password.

- `auth_params` ([String](../../sql-reference/data-types/string.md)) — Authentication parameters in the JSON format, depending on the `auth_type`.

- `host_ip` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — IP addresses of hosts that are allowed to connect to the ClickHouse server.

- `host_names` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — Names of hosts that are allowed to connect to the ClickHouse server.

- `host_names_regexp` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — Regular expression for host names that are allowed to connect to the ClickHouse server.

- `host_names_like` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — Names of hosts that are allowed to connect to the ClickHouse server, set using the LIKE predicate.

- `default_roles_all` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Shows whether all granted roles are set for the user by default.

- `default_roles_list` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — List of granted roles provided by default.

- `default_roles_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — All the granted roles set as default except the listed ones.

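To see how accounts are configured, a simple query over this table works (a sketch):

``` sql
SELECT name, auth_type, host_ip
FROM system.users;
```
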
## See Also {#see-also}

- [SHOW USERS](../../sql-reference/statements/show.md#show-users-statement)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/users) <!--hide-->

Always disable transparent huge pages. It interferes with memory allocators, which leads to significant performance degradation.

``` bash
$ echo 'madvise' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```

Use `perf top` to watch the time spent in the kernel for memory management.

A tuple of elements, each having an individual [type](../../sql-reference/data-types/index.md#data_types).

Tuples are used for temporary column grouping. Columns can be grouped when an IN expression is used in a query, and for specifying certain formal parameters of lambda functions. For more information, see the sections [IN operators](../../sql-reference/operators/in.md) and [Higher order functions](../../sql-reference/functions/index.md#higher-order-functions).

Tuples can be the result of a query. In this case, for text formats other than JSON, values are comma-separated in brackets. In JSON formats, tuples are output as arrays (in square brackets).

---
toc_priority: 34
toc_title: Arithmetic
---

---
toc_priority: 35
toc_title: Arrays
---

# Array Functions {#functions-for-working-with-arrays}

## empty {#function-empty}

Elements set to `NULL` are handled as normal values.

## arrayCount(\[func,\] arr1, …) {#array-count}

Returns the number of elements in the `arr` array for which `func` returns something other than 0. If `func` is not specified, it returns the number of non-zero elements in the array.

Note that the `arrayCount` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

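For example, counting the even numbers in an array (a minimal sketch):

``` sql
SELECT arrayCount(x -> x % 2 = 0, [1, 2, 3, 4, 5]) AS res;
```

``` text
┌─res─┐
│   2 │
└─────┘
```
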
## countEqual(arr, x) {#countequalarr-x}

Returns the number of elements in the array equal to x. Equivalent to `arrayCount(elem -> elem = x, arr)`.

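`NULL` elements are handled as normal values, so the function can be used to count them:

``` sql
SELECT countEqual([1, 2, NULL, NULL], NULL) AS res;
```

``` text
┌─res─┐
│   2 │
└─────┘
```
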
- `NaN` values are right before `NULL`.
- `Inf` values are right before `NaN`.

Note that `arraySort` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. In this case, sorting order is determined by the result of the lambda function applied to the elements of the array.

Let’s consider the following example:

- `NaN` values are right before `NULL`.
- `-Inf` values are right before `NaN`.

Note that the `arrayReverseSort` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. An example is shown below.

``` sql
SELECT arrayReverseSort((x) -> -x, [1, 2, 3]) as res;
```

Result:

``` text
┌─arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])─┐
│                                          0.75 │
└───────────────────────────────────────────────┘
```

## arrayMap(func, arr1, …) {#array-map}

Returns an array obtained by applying the `func` function to each element of the `arr` array.

Examples:

``` sql
SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
```

``` text
┌─res─────┐
│ [3,4,5] │
└─────────┘
```

The following example shows how to create a tuple of elements from different arrays:

``` sql
SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
```

``` text
┌─res─────────────────┐
│ [(1,4),(2,5),(3,6)] │
└─────────────────────┘
```

Note that the `arrayMap` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayFilter(func, arr1, …) {#array-filter}

Returns an array containing only the elements in `arr1` for which `func` returns something other than 0.

Examples:

``` sql
SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
```

``` text
┌─res───────────┐
│ ['abc World'] │
└───────────────┘
```

``` sql
SELECT
    arrayFilter(
        (i, x) -> x LIKE '%World%',
        arrayEnumerate(arr),
        ['Hello', 'abc World'] AS arr)
    AS res
```

``` text
┌─res─┐
│ [2] │
└─────┘
```

Note that the `arrayFilter` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayFill(func, arr1, …) {#array-fill}

Scan through `arr1` from the first element to the last element and replace `arr1[i]` by `arr1[i - 1]` if `func` returns 0. The first element of `arr1` will not be replaced.

Examples:

``` sql
SELECT arrayFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
```

``` text
┌─res──────────────────────────────┐
│ [1,1,3,11,12,12,12,5,6,14,14,14] │
└──────────────────────────────────┘
```

Note that the `arrayFill` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayReverseFill(func, arr1, …) {#array-reverse-fill}

Scan through `arr1` from the last element to the first element and replace `arr1[i]` by `arr1[i + 1]` if `func` returns 0. The last element of `arr1` will not be replaced.

Examples:

``` sql
SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
```

``` text
┌─res────────────────────────────────┐
│ [1,3,3,11,12,5,5,5,6,14,NULL,NULL] │
└────────────────────────────────────┘
```

Note that the `arrayReverseFill` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arraySplit(func, arr1, …) {#array-split}

Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the left hand side of the element. The array will not be split before the first element.

Examples:

``` sql
SELECT arraySplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
```

``` text
┌─res─────────────┐
│ [[1,2,3],[4,5]] │
└─────────────────┘
```

Note that the `arraySplit` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayReverseSplit(func, arr1, …) {#array-reverse-split}

Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the right hand side of the element. The array will not be split after the last element.

Examples:

``` sql
SELECT arrayReverseSplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
```

``` text
┌─res───────────────┐
│ [[1],[2,3,4],[5]] │
└───────────────────┘
```

Note that the `arrayReverseSplit` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

## arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}

Returns 1 if there is at least one element in `arr` for which `func` returns something other than 0. Otherwise, it returns 0.

Note that the `arrayExists` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

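For example (a minimal sketch):

``` sql
SELECT arrayExists(x -> x > 10, [1, 5, 20]) AS res;
```

``` text
┌─res─┐
│   1 │
└─────┘
```
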
## arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}

Returns 1 if `func` returns something other than 0 for all the elements in `arr`. Otherwise, it returns 0.

Note that the `arrayAll` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

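For example (a minimal sketch):

``` sql
SELECT arrayAll(x -> x > 0, [1, 5, 20]) AS res;
```

``` text
┌─res─┐
│   1 │
└─────┘
```
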
## arrayFirst(func, arr1, …) {#array-first}

Returns the first element in the `arr1` array for which `func` returns something other than 0.

Note that the `arrayFirst` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

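For example (a minimal sketch):

``` sql
SELECT arrayFirst(x -> x > 3, [1, 2, 4, 8]) AS res;
```

``` text
┌─res─┐
│   4 │
└─────┘
```
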
## arrayFirstIndex(func, arr1, …) {#array-first-index}

Returns the index of the first element in the `arr1` array for which `func` returns something other than 0.

Note that the `arrayFirstIndex` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.

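For example (a minimal sketch; indexes are 1-based):

``` sql
SELECT arrayFirstIndex(x -> x > 3, [1, 2, 4, 8]) AS res;
```

``` text
┌─res─┐
│   3 │
└─────┘
```
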
## arraySum(\[func,\] arr1, …) {#array-sum}

Returns the sum of the `func` values. If the function is omitted, it just returns the sum of the array elements.

Note that the `arraySum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

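For example, summing the squares of the elements (a minimal sketch):

``` sql
SELECT arraySum(x -> x * x, [1, 2, 3]) AS res;
```

``` text
┌─res─┐
│  14 │
└─────┘
```
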
## arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}

Returns an array of partial sums of elements in the source array (a running sum). If the `func` function is specified, then the values of the array elements are converted by this function before summing.

Example:

``` sql
SELECT arrayCumSum([1, 1, 1, 1]) AS res
```

``` text
┌─res──────────┐
│ [1, 2, 3, 4] │
└──────────────┘
```

Note that the `arrayCumSum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

## arrayCumSumNonNegative(arr) {#arraycumsumnonnegativearr}

Same as `arrayCumSum`: returns an array of partial sums of elements in the source array (a running sum). Unlike `arrayCumSum`, whenever the running sum drops below zero, it is replaced with zero, and the subsequent calculation continues from zero. For example:

``` sql
SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res
```

``` text
┌─res───────┐
│ [1,2,0,1] │
└───────────┘
```

Note that the `arrayCumSumNonNegative` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

[Original article](https://clickhouse.tech/docs/en/query_language/functions/array_functions/) <!--hide-->

---
toc_priority: 57
toc_title: Higher-Order
---

# Higher-order Functions {#higher-order-functions}

## `->` operator, lambda(params, expr) function {#operator-lambdaparams-expr-function}

Allows describing a lambda function for passing to a higher-order function. The left side of the arrow has a formal parameter, which is any ID, or multiple formal parameters – any IDs in a tuple. The right side of the arrow has an expression that can use these formal parameters, as well as any table columns.

Examples: `x -> 2 * x`, `str -> str != Referer`.

Higher-order functions can only accept lambda functions as their functional argument.

A lambda function that accepts multiple arguments can be passed to a higher-order function. In this case, the higher-order function is passed several arrays of identical length that these arguments will correspond to.

For some functions, such as [arrayCount](#higher_order_functions-array-count) or [arraySum](#higher-order-functions-array-sum), the first argument (the lambda function) can be omitted. In this case, identical mapping is assumed.

A lambda function can’t be omitted for the following functions:

- [arrayMap](#higher_order_functions-array-map)
- [arrayFilter](#higher_order_functions-array-filter)
- [arrayFill](#higher_order_functions-array-fill)
- [arrayReverseFill](#higher_order_functions-array-reverse-fill)
- [arraySplit](#higher_order_functions-array-split)
- [arrayReverseSplit](#higher_order_functions-array-reverse-split)
- [arrayFirst](#higher_order_functions-array-first)
- [arrayFirstIndex](#higher_order_functions-array-first-index)


### arraySort(\[func,\] arr1, …) {#arraysortfunc-arr1}

Returns an array that results from sorting the elements of `arr1` in ascending order. If the `func` function is specified, the sorting order is determined by the result of the function `func` applied to the elements of the array (arrays).

The [Schwartzian transform](https://en.wikipedia.org/wiki/Schwartzian_transform) is used to improve sorting efficiency.

Example:

``` sql
SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]) AS res;
```

``` text
┌─res────────────────┐
│ ['world', 'hello'] │
└────────────────────┘
```

For more information about the `arraySort` method, see the [Functions for Working With Arrays](../../sql-reference/functions/array-functions.md#array_functions-sort) section.

### arrayReverseSort(\[func,\] arr1, …) {#arrayreversesortfunc-arr1}

Returns an array that results from sorting the elements of `arr1` in descending order. If the `func` function is specified, the sorting order is determined by the result of the function `func` applied to the elements of the array (arrays).

Example:

``` sql
SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) AS res;
```

``` text
┌─res───────────────┐
│ ['hello','world'] │
└───────────────────┘
```

For more information about the `arrayReverseSort` method, see the [Functions for Working With Arrays](../../sql-reference/functions/array-functions.md#array_functions-reverse-sort) section.
[Original article](https://clickhouse.tech/docs/en/query_language/functions/higher_order_functions/) <!--hide-->
@@ -44,6 +44,21 @@ Functions have the following behaviors:

Functions can’t change the values of their arguments – any changes are returned as the result. Thus, the result of calculating separate functions does not depend on the order in which the functions are written in the query.

## Higher-order functions, `->` operator and lambda(params, expr) function {#higher-order-functions}

Higher-order functions can only accept lambda functions as their functional argument. To pass a lambda function to a higher-order function, use the `->` operator. The left side of the arrow has a formal parameter – any ID – or multiple formal parameters – any IDs in a tuple. The right side of the arrow has an expression that can use these formal parameters, as well as any table columns.

Examples:

```
x -> 2 * x
str -> str != Referer
```

A lambda function that accepts multiple arguments can also be passed to a higher-order function. In this case, the higher-order function is passed several arrays of identical length that these arguments will correspond to.

For some functions the first argument (the lambda function) can be omitted. In this case, identity mapping is assumed.
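
For example, the sketch below passes a two-argument lambda to `arrayMap` together with two arrays of equal length (an illustrative query; the result is shown for clarity):

``` sql
SELECT arrayMap((x, y) -> x + y, [1, 2, 3], [10, 20, 30]) AS res
```

``` text
┌─res────────┐
│ [11,22,33] │
└────────────┘
```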
## Error Handling {#error-handling}

Some functions might throw an exception if the data is invalid. In this case, the query is canceled and an error text is returned to the client. For distributed processing, when an exception occurs on one of the servers, the other servers also attempt to abort the query.
@@ -98,7 +98,7 @@ LIMIT 1
\G
```

The [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) function allows processing each individual element of the `trace` array with the `addressToLine` function. The result of this processing is shown in the `trace_source_code_lines` column of the output.
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows processing each individual element of the `trace` array with the `addressToLine` function. The result of this processing is shown in the `trace_source_code_lines` column of the output.

``` text
Row 1:
@@ -184,7 +184,7 @@ LIMIT 1
\G
```

The [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) function allows processing each individual element of the `trace` array with the `addressToSymbols` function. The result of this processing is shown in the `trace_symbols` column of the output.
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows processing each individual element of the `trace` array with the `addressToSymbols` function. The result of this processing is shown in the `trace_symbols` column of the output.

``` text
Row 1:
@@ -281,7 +281,7 @@ LIMIT 1
\G
```

The [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) function allows processing each individual element of the `trace` array with the `demangle` function. The result of this processing is shown in the `trace_functions` column of the output.
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows processing each individual element of the `trace` array with the `demangle` function. The result of this processing is shown in the `trace_functions` column of the output.

``` text
Row 1:
@@ -10,7 +10,7 @@ Changes settings profiles.
Syntax:

``` sql
ALTER SETTINGS PROFILE [IF EXISTS] name [ON CLUSTER cluster_name]
ALTER SETTINGS PROFILE [IF EXISTS] TO name [ON CLUSTER cluster_name]
[RENAME TO new_name]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | INHERIT 'profile_name'] [,...]
```
@@ -10,7 +10,7 @@ Creates a [settings profile](../../../operations/access-rights.md#settings-profi
Syntax:

``` sql
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name]
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] TO name [ON CLUSTER cluster_name]
[SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | INHERIT 'profile_name'] [,...]
```
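
For instance, a profile that pins `max_memory_usage` within a range and assigns it to a user might look like this (a hedged sketch; the profile and user names are hypothetical):

``` sql
CREATE SETTINGS PROFILE max_memory_usage_profile
    SETTINGS max_memory_usage = 100000001 MIN 90000000 MAX 110000000
    TO robin
```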
@@ -136,7 +136,7 @@ ENGINE = <Engine>
...
```

If a codec is specified, the default codec doesn’t apply. Codecs can be combined in a pipeline, for example, `CODEC(Delta, ZSTD)`. To select the best codec combination for your project, run benchmarks similar to those described in the Altinity [New Encodings to Improve ClickHouse Efficiency](https://www.altinity.com/blog/2019/7/new-encodings-to-improve-clickhouse) article.
If a codec is specified, the default codec doesn’t apply. Codecs can be combined in a pipeline, for example, `CODEC(Delta, ZSTD)`. To select the best codec combination for your project, run benchmarks similar to those described in the Altinity [New Encodings to Improve ClickHouse Efficiency](https://www.altinity.com/blog/2019/7/new-encodings-to-improve-clickhouse) article. One thing to note is that a codec can’t be applied to an `ALIAS` column type.
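
As an illustration, a codec pipeline on a column might be declared like this (a minimal sketch; the table and column names are hypothetical):

``` sql
CREATE TABLE codec_example
(
    ts DateTime CODEC(Delta, ZSTD),
    value Float64 CODEC(Gorilla)
)
ENGINE = MergeTree
ORDER BY ts
```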
!!! warning "Warning"
    You can’t decompress ClickHouse database files with external utilities like `lz4`. Instead, use the special [clickhouse-compressor](https://github.com/ClickHouse/ClickHouse/tree/master/programs/compressor) utility.
@@ -148,7 +148,7 @@ SHOW CREATE [ROW] POLICY name ON [database.]table

Shows parameters that were used at a [quota creation](../../sql-reference/statements/create/quota.md).

### Syntax {#show-create-row-policy-syntax}
### Syntax {#show-create-quota-syntax}

``` sql
SHOW CREATE QUOTA [name | CURRENT]
```
@@ -158,10 +158,70 @@ SHOW CREATE QUOTA [name | CURRENT]

Shows parameters that were used at a [settings profile creation](../../sql-reference/statements/create/settings-profile.md).

### Syntax {#show-create-row-policy-syntax}
### Syntax {#show-create-settings-profile-syntax}

``` sql
SHOW CREATE [SETTINGS] PROFILE name
```
## SHOW USERS {#show-users-statement}

Returns a list of [user account](../../operations/access-rights.md#user-account-management) names. To view user account parameters, see the system table [system.users](../../operations/system-tables/users.md#system_tables-users).

### Syntax {#show-users-syntax}

``` sql
SHOW USERS
```

## SHOW ROLES {#show-roles-statement}

Returns a list of [roles](../../operations/access-rights.md#role-management). To view other parameters, see the system tables [system.roles](../../operations/system-tables/roles.md#system_tables-roles) and [system.role-grants](../../operations/system-tables/role-grants.md#system_tables-role_grants).

### Syntax {#show-roles-syntax}

``` sql
SHOW [CURRENT|ENABLED] ROLES
```

## SHOW PROFILES {#show-profiles-statement}

Returns a list of [settings profiles](../../operations/access-rights.md#settings-profiles-management). To view settings profile parameters, see the system table [settings_profiles](../../operations/system-tables/settings_profiles.md#system_tables-settings_profiles).

### Syntax {#show-profiles-syntax}

``` sql
SHOW [SETTINGS] PROFILES
```

## SHOW POLICIES {#show-policies-statement}

Returns a list of [row policies](../../operations/access-rights.md#row-policy-management) for the specified table. To view row policy parameters, see the system table [system.row_policies](../../operations/system-tables/row_policies.md#system_tables-row_policies).

### Syntax {#show-policies-syntax}

``` sql
SHOW [ROW] POLICIES [ON [db.]table]
```

## SHOW QUOTAS {#show-quotas-statement}

Returns a list of [quotas](../../operations/access-rights.md#quotas-management). To view quota parameters, see the system table [system.quotas](../../operations/system-tables/quotas.md#system_tables-quotas).

### Syntax {#show-quotas-syntax}

``` sql
SHOW QUOTAS
```

## SHOW QUOTA {#show-quota-statement}

Returns the [quota](../../operations/quotas.md) consumption for all users or for the current user. To view other parameters, see the system tables [system.quotas_usage](../../operations/system-tables/quotas_usage.md#system_tables-quotas_usage) and [system.quota_usage](../../operations/system-tables/quota_usage.md#system_tables-quota_usage).

### Syntax {#show-quota-syntax}

``` sql
SHOW [CURRENT] QUOTA
```

[Original article](https://clickhouse.tech/docs/en/query_language/show/) <!--hide-->
@@ -33,7 +33,7 @@ For a description of the request parameters, see the [descrip

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date`, `DateTime` or `DateTime64`. Optional parameter.
- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one of all the rows with the same primary key:
@@ -38,7 +38,7 @@ Examples:
$ curl 'http://localhost:8123/?query=SELECT%201'
1

$ wget -O- -q 'http://localhost:8123/?query=SELECT 1'
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1

$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
@@ -33,7 +33,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date`, `DateTime` or `DateTime64`. Optional parameter.
- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` keeps only one of all the rows with the same primary key:
@@ -38,7 +38,7 @@ Ok.
$ curl 'http://localhost:8123/?query=SELECT%201'
1

$ wget -O- -q 'http://localhost:8123/?query=SELECT 1'
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1

$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
@@ -33,7 +33,7 @@ For a description of the query parameters, see the [request description](

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date`, `DateTime` or `DateTime64`. Optional parameter.
- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` leaves only one of all the rows with the same primary key:
@@ -38,7 +38,7 @@ Example:
$ curl 'http://localhost:8123/?query=SELECT%201'
1

$ wget -O- -q 'http://localhost:8123/?query=SELECT 1'
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1

$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
@@ -33,7 +33,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

**ReplacingMergeTree Parameters**

- `ver` — column with version. Type `UInt*`, `Date`, `DateTime` or `DateTime64`. Optional parameter.
- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` keeps only one of all the rows with the same primary key:
@@ -38,7 +38,7 @@ When using the GET method, 'readonly' is set.
$ curl 'http://localhost:8123/?query=SELECT%201'
1

$ wget -O- -q 'http://localhost:8123/?query=SELECT 1'
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1

$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
@@ -1,14 +1,11 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 47
toc_title: "\u65E5\u4ED8"
---

# Date {#date}

A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch up to an upper threshold defined by a constant at the compilation stage (currently until 2106, but fully supported
The minimum value is output as 1970-01-01.
A date type. Stored as the number of days since 1970-01-01 in a two-byte unsigned integer. Allows storing values from just after the start of UNIX time up to an upper threshold defined by a constant at the conversion stage (currently until 2106, but the last fully supported year is 2105).

The date value is stored without a time zone.
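
For reference, a hedged sketch of how a `Date` value looks in practice (illustrative query and output):

``` sql
SELECT toDate('2019-12-31') AS d, toTypeName(d) AS type
```

``` text
┌──────────d─┬─type─┐
│ 2019-12-31 │ Date │
└────────────┴──────┘
```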
122
docs/ru/engines/table-engines/integrations/rabbitmq.md
Normal file
@@ -0,0 +1,122 @@
---
toc_priority: 6
toc_title: RabbitMQ
---

# RabbitMQ {#rabbitmq-engine}

The engine works with [RabbitMQ](https://www.rabbitmq.com).

`RabbitMQ` lets you:

- Publish or subscribe to data flows.
- Process streams as they become available.

## Creating a Table {#table_engine-rabbitmq-creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1],
    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2],
    ...
) ENGINE = RabbitMQ SETTINGS
    rabbitmq_host_port = 'host:port',
    rabbitmq_exchange_name = 'exchange_name',
    rabbitmq_format = 'data_format'[,]
    [rabbitmq_exchange_type = 'exchange_type',]
    [rabbitmq_routing_key_list = 'key1,key2,...',]
    [rabbitmq_row_delimiter = 'delimiter_symbol',]
    [rabbitmq_num_consumers = N,]
    [rabbitmq_num_queues = N,]
    [rabbitmq_transactional_channel = 0]
```
Required parameters:

- `rabbitmq_host_port` – the server address (`host:port`). For example: `localhost:5672`.
- `rabbitmq_exchange_name` – the RabbitMQ exchange name.
- `rabbitmq_format` – the message format. Uses the same notation as the SQL `FORMAT` function, for example, `JSONEachRow`. For details, see the [Input and Output Formats](../../../interfaces/formats.md) section.

Optional parameters:

- `rabbitmq_exchange_type` – the RabbitMQ exchange type: `direct`, `fanout`, `topic`, `headers`, `consistent-hash`. Default: `fanout`.
- `rabbitmq_routing_key_list` – a comma-separated list of routing keys.
- `rabbitmq_row_delimiter` – the delimiter character that terminates a message.
- `rabbitmq_num_consumers` – the number of consumers per table. Default: `1`. Specify more consumers if the throughput of one consumer is insufficient.
- `rabbitmq_num_queues` – the number of queues per consumer. Default: `1`. Specify more queues if the throughput of one queue per consumer is insufficient. A single queue holds up to 50 thousand messages at a time.
- `rabbitmq_transactional_channel` – wrap `INSERT` queries in transactions. Default: `0`.

Required configuration:

The RabbitMQ server configuration is added via the ClickHouse configuration file.

``` xml
<rabbitmq>
    <username>root</username>
    <password>clickhouse</password>
</rabbitmq>
```
Example:

``` sql
CREATE TABLE queue (
    key UInt64,
    value UInt64
) ENGINE = RabbitMQ SETTINGS rabbitmq_host_port = 'localhost:5672',
                             rabbitmq_exchange_name = 'exchange1',
                             rabbitmq_format = 'JSONEachRow',
                             rabbitmq_num_consumers = 5;
```
## Description {#description}

A `SELECT` query is not particularly useful for reading messages (except for debugging), because each message can be read only once. It is more practical to create real-time streams using [materialized views](../../../sql-reference/statements/create/view.md). To do this:

1. Use the engine to create a RabbitMQ consumer and consider it a data stream.
2. Create a table with the required structure.
3. Create a materialized view that converts data from the engine and puts it into the previously created table.

When the materialized view joins the engine, it starts collecting data in the background. This allows you to continually receive messages from RabbitMQ and convert them to the required format using `SELECT`.
One RabbitMQ table can have an unlimited number of materialized views.

Data is channeled via the `rabbitmq_exchange_type` and `rabbitmq_routing_key_list` settings.
There can be no more than one exchange per table. One exchange can be shared between multiple tables: this enables routing into multiple tables at the same time.

Exchange type options:

- `direct` - routing is based on the exact matching of keys. Example table key list: `key1,key2,key3,key4,key5`; a message key can match any of them.
- `fanout` - routing to all tables (where the exchange name is the same) regardless of the keys.
- `topic` - routing is based on patterns with dot-separated keys. Examples: `*.logs`, `records.*.*.2020`, `*.2018,*.2019,*.2020`.
- `headers` - routing is based on `key=value` matches with the setting `x-match=all` or `x-match=any`. Example table key list: `x-match=all,format=logs,type=report,year=2020`.
- `consistent-hash` - data is evenly distributed between all bound tables (where the exchange name is the same). Note that this exchange type must be enabled with the RabbitMQ plugin: `rabbitmq-plugins enable rabbitmq_consistent_hash_exchange`.

If the exchange type is not specified, `fanout` is used by default. In that case, the routing keys for data publishing must be randomized in the range `[1, num_consumers]` per message/batch (or in the range `[1, num_consumers * num_queues]` if `rabbitmq_num_queues` is set). This table configuration works faster than any other, especially when the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` parameters are set.

If the `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` parameters are set together with the `rabbitmq_exchange_type` parameter, then:

- the `rabbitmq-consistent-hash-exchange` plugin must be enabled.
- the `message_id` property of the published messages must be defined (unique for each message/batch).

Example:
``` sql
CREATE TABLE queue (
    key UInt64,
    value UInt64
) ENGINE = RabbitMQ SETTINGS rabbitmq_host_port = 'localhost:5672',
                             rabbitmq_exchange_name = 'exchange1',
                             rabbitmq_exchange_type = 'headers',
                             rabbitmq_routing_key_list = 'format=logs,type=report,year=2020',
                             rabbitmq_format = 'JSONEachRow',
                             rabbitmq_num_consumers = 5;

CREATE TABLE daily (key UInt64, value UInt64)
  ENGINE = MergeTree();

CREATE MATERIALIZED VIEW consumer TO daily
  AS SELECT key, value FROM queue;

SELECT key, value FROM daily ORDER BY key;
```
@@ -1,3 +1,8 @@
---
toc_priority: 30
toc_title: MergeTree
---

# MergeTree {#table_engines-mergetree}

The `MergeTree` engine and the other engines of this family (`*MergeTree`) are the most functional ClickHouse table engines.
@@ -28,8 +33,8 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
    INDEX index_name1 expr1 TYPE type1(...) GRANULARITY value1,
    INDEX index_name2 expr2 TYPE type2(...) GRANULARITY value2
) ENGINE = MergeTree()
ORDER BY expr
[PARTITION BY expr]
[ORDER BY expr]
[PRIMARY KEY expr]
[SAMPLE BY expr]
[TTL expr [DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'], ...]
@@ -38,27 +43,42 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

For a description of the parameters, see the [CREATE query description](../../../engines/table-engines/mergetree-family/mergetree.md).

!!! note "Note"
    `INDEX` is an experimental feature, see [Data Skipping Indexes](#table_engine-mergetree-data_skipping-indexes).

### Query clauses {#mergetree-query-clauses}

- `ENGINE` — the name and parameters of the engine. `ENGINE = MergeTree()`. `MergeTree` has no parameters.

- `PARTITION BY` — the [partitioning key](custom-partitioning-key.md). For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the [Date](../../../engines/table-engines/mergetree-family/mergetree.md) type. Here the partition names have the `"YYYYMM"` format.
- `ORDER BY` — the sorting key.

    A tuple of columns or arbitrary expressions. Example: `ORDER BY (CounterID, EventDate)`.

- `ORDER BY` — the sorting key. A tuple of columns or arbitrary expressions. Example: `ORDER BY (CounterID, EventDate)`.
    ClickHouse uses the sorting key as the primary key if the primary key is not defined in the `PRIMARY KEY` clause.

- `PRIMARY KEY` — the primary key, if it [differs from the sorting key](#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki). By default the primary key is the same as the sorting key (which is specified by the `ORDER BY` clause), so in most cases there is no need to specify a separate `PRIMARY KEY` clause.
    To disable sorting, use the `ORDER BY tuple()` syntax. See [Selecting the primary key](#vybor-pervichnogo-kliucha).

- `SAMPLE BY` — an expression for sampling. If a sampling expression is used, the primary key must contain it. Example: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.
- `PARTITION BY` — the [partitioning key](custom-partitioning-key.md). Optional.

- `TTL` — a list of rules that specify the storage duration of rows and define the logic of automatic movement of parts to certain volumes or disks. The expression must return a `Date` or `DateTime` column. Example: `TTL date + INTERVAL 1 DAY`.
    - The rule type `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'` specifies the action performed on the part: deleting rows (pruning), or moving the part (if the condition holds for all rows of the part) to the specified disk (`TO DISK 'xxx'`) or volume (`TO VOLUME 'xxx'`).
    - The default rule behavior is deleting rows (`DELETE`). The list of rules may contain only one expression with the `DELETE` behavior.
    - For details, see [TTL for columns and tables](#table_engine-mergetree-ttl)
    For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the [Date](../../../engines/table-engines/mergetree-family/mergetree.md) type. Here the partition names have the `"YYYYMM"` format.

- `SETTINGS` — additional parameters that control the behavior of `MergeTree`:
- `PRIMARY KEY` — the primary key, if it [differs from the sorting key](#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki). Optional.

    By default the primary key is the same as the sorting key (which is specified by the `ORDER BY` clause), so in most cases there is no need to specify a separate `PRIMARY KEY` clause.

- `SAMPLE BY` — an expression for sampling. Optional.

    If a sampling expression is used, the primary key must contain it. Example: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.

- `TTL` — a list of rules that specify the storage duration of rows and define the logic of automatic movement of parts to certain volumes or disks. Optional.

    The expression must return a `Date` or `DateTime` column. Example: `TTL date + INTERVAL 1 DAY`.

    The rule type `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'` specifies the action performed on the part: deleting rows (pruning), or moving the part (if the condition holds for all rows of the part) to the specified disk (`TO DISK 'xxx'`) or volume (`TO VOLUME 'xxx'`). The default rule behavior is deleting rows (`DELETE`). The list of rules may contain only one expression with the `DELETE` behavior.

    For details, see [TTL for columns and tables](#table_engine-mergetree-ttl)

- `SETTINGS` — additional parameters that control the behavior of `MergeTree` (optional):

    - `index_granularity` — the maximum number of data rows between index marks. Default value: 8192. See [Data Storage](#mergetree-data-storage).
    - `index_granularity_bytes` — the maximum size of a data granule in bytes. Default value: 10Mb. To restrict the granule size only by the number of rows, set 0 (not recommended). See [Data Storage](#mergetree-data-storage).
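
To make the clauses concrete, here is a hedged sketch of a table definition that uses most of them (the table, columns, and values are illustrative only):

``` sql
CREATE TABLE visits
(
    CounterID UInt32,
    EventDate Date,
    UserID UInt64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID)
TTL EventDate + INTERVAL 3 MONTH
SETTINGS index_granularity = 8192
```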
@@ -180,6 +200,14 @@ ClickHouse does not require a unique primary key

A long primary key negatively affects insert performance and memory consumption, but extra columns in the primary key do not affect the performance of ClickHouse `SELECT` queries.

You can create a table without a primary key using the `ORDER BY tuple()` syntax. In this case, ClickHouse stores data in the order of insertion. If you want to preserve data order when inserting data with `INSERT ... SELECT` queries, set [max\_insert\_threads = 1](../../../operations/settings/settings.md#settings-max-insert-threads).

To select data in the original order, use
[single-threaded](../../../operations/settings/settings.md#settings-max_threads) `SELECT` queries.

### A primary key that differs from the sorting key {#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki}

It is possible to define a primary key (an expression whose values are written to the index file for
@@ -25,7 +25,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

**ReplacingMergeTree Parameters**

- `ver` — the column with the version, of type `UInt*`, `Date`, `DateTime`, or `DateTime64`. Optional parameter.
- `ver` — the column with the version, of type `UInt*`, `Date`, or `DateTime`. Optional parameter.

When merging, `ReplacingMergeTree` keeps only one of all the rows with the same sorting key value:
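
A hedged sketch of how `ver` is typically declared (the table and column names are illustrative):

``` sql
CREATE TABLE events
(
    key UInt64,
    ver UInt32,
    value String
)
ENGINE = ReplacingMergeTree(ver)
ORDER BY key
```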
@@ -116,7 +116,7 @@

**Usage notes**

1. The program that writes the data must remember the state of an object in order to be able to cancel it. The "cancel" row must be a copy of the previous "state" row with the opposite `Sign`. This increases the initial size of storage but allows writing the data quickly.
1. The program that writes the data must remember the state of an object in order to be able to cancel it. The "cancel" row must contain copies of the primary key fields and of the version of the "state" row, and the opposite `Sign`. This increases the initial size of storage but allows writing the data quickly.
2. Long growing arrays in columns reduce the efficiency of the engine due to the load for writing. The more straightforward the data, the higher the efficiency.
3. `SELECT` results depend strongly on the consistency of the object change history. Be accurate when preparing data for insertion. You can get unpredictable results with inconsistent data, such as negative values for non-negative metrics like session depth.
@@ -31,7 +31,7 @@ Ok.
$ curl 'http://localhost:8123/?query=SELECT%201'
1

$ wget -O- -q 'http://localhost:8123/?query=SELECT 1'
$ wget -nv -O- 'http://localhost:8123/?query=SELECT 1'
1

$ echo -ne 'GET /?query=SELECT%201 HTTP/1.0\r\n\r\n' | nc localhost 8123
8
docs/ru/interfaces/third-party/gui.md
vendored
@@ -89,6 +89,14 @@

[clickhouse-flamegraph](https://github.com/Slach/clickhouse-flamegraph) is a specialized tool for visualizing `system.trace_log` as a [flamegraph](http://www.brendangregg.com/flamegraphs.html).

### clickhouse-plantuml {#clickhouse-plantuml}

[clickhouse-plantuml](https://pypi.org/project/clickhouse-plantuml/) is a script that generates [PlantUML](https://plantuml.com/) diagrams of table schemas.

### xeus-clickhouse {#xeus-clickhouse}

[xeus-clickhouse](https://github.com/wangfenjin/xeus-clickhouse) is a Jupyter kernel for ClickHouse that supports querying ClickHouse data using SQL in Jupyter.

## Commercial {#kommercheskie}

### DataGrip {#datagrip}
@@ -16,9 +16,12 @@

Substitutions can also be performed from ZooKeeper. To do this, specify the attribute `from_zk = "/path/to/node"` in an element. The element value is replaced with the contents of the `/path/to/node` node in ZooKeeper. You can also put an entire XML subtree in a ZooKeeper node; it will be inserted in full into the source element.

`config.xml` can specify a separate config with user, profile, and quota settings. The relative path to it is set in the `users_config` element. By default it is `users.xml`. If `users_config` is omitted, the user, profile, and quota settings are specified directly in `config.xml`.
The `users_config` element of `config.xml` can specify the relative path to a configuration file with user, profile, and quota settings. The default value of `users_config` is `users.xml`. If `users_config` is omitted, the user, profile, and quota settings can be specified directly in `config.xml`.

For `users_config`, overrides and substitutions may also exist in files from the `users_config.d` directory (for example, `users.d`). For example, you can have a separate config file for each user:
User settings can be split into separate files similar to `config.xml` and `config.d`.
The directory name is defined in the same way as the `users_config` file name, with `.d` substituted for `.xml`.
The `users.d` directory is used by default, just as `users.xml` is used for `users_config`.
For example, you can have a separate config file for each user:

``` bash
$ cat /etc/clickhouse-server/users.d/alice.xml
```
@@ -401,13 +401,91 @@ INSERT INTO test VALUES (lower('Hello')), (lower('world')), (lower('INSERT')), (

Sets the type of [JOIN](../../sql-reference/statements/select/join.md) behavior. When merging tables, empty cells may appear. ClickHouse fills them differently depending on this setting.

Possible values:

- 0 — the empty cells are filled with the default value of the corresponding field type.
- 1 — `JOIN` behaves the same way as in standard SQL. The type of the corresponding field is converted to [Nullable](../../sql-reference/data-types/nullable.md#data_type-nullable), and empty cells are filled with [NULL](../../sql-reference/syntax.md).
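
A hedged illustration of the standard-SQL behavior (the tables and values are illustrative):

``` sql
SET join_use_nulls = 1;

SELECT a.x, b.y
FROM (SELECT 1 AS x) AS a
LEFT JOIN (SELECT 2 AS x, 20 AS y) AS b ON a.x = b.x;
```

``` text
┌─x─┬────y─┐
│ 1 │ ᴺᵁᴸᴸ │
└───┴──────┘
```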
## partial_merge_join_optimizations {#partial_merge_join_optimizations}

Disables all optimizations for [JOIN](../../sql-reference/statements/select/join.md) queries that use the partial merge join algorithm.

The optimizations are enabled by default, which may lead to wrong results. If you see suspicious results in your queries, disable the optimizations with this setting. The optimizations can differ between ClickHouse server versions.

Possible values:

- 0 — optimizations disabled.
- 1 — optimizations enabled.

Default value: 1.

## partial_merge_join_rows_in_right_blocks {#partial_merge_join_rows_in_right_blocks}

Limits the size of the right-hand join data blocks for [JOIN](../../sql-reference/statements/select/join.md) queries that use the partial merge join algorithm.

The ClickHouse server:

1. Splits the right-hand join data into blocks with the specified number of rows.
2. Indexes the minimum and maximum values of each block.
3. Unloads the prepared blocks to disk, if possible.

Possible values:

- A positive integer. The recommended range of values is [1000, 100000].

Default value: 65536.

## join_on_disk_max_files_to_merge {#join_on_disk_max_files_to_merge}

Sets the number of files allowed for parallel sorting when MergeJoin operations are executed on disk.

The bigger the value of the setting, the more RAM is used and the less disk I/O is needed.

Possible values:

- A positive integer greater than 2.

Default value: 64.

## temporary_files_codec {#temporary_files_codec}

Sets the compression method for temporary files on disk that are used in sorting and joining operations.

Possible values:

- LZ4 — apply compression using the [LZ4](https://ru.wikipedia.org/wiki/LZ4) algorithm.
- NONE — do not apply compression.

Default value: LZ4.
## any_join_distinct_right_table_keys {#any_join_distinct_right_table_keys}

Enables the legacy behavior of the ClickHouse server for `ANY INNER|LEFT JOIN` operations.

!!! note "Attention"
    Use this setting only for backward compatibility, if your use cases depend on the legacy `JOIN` behavior.

When the legacy behavior is enabled:

- The results of `t1 ANY LEFT JOIN t2` and `t2 ANY RIGHT JOIN t1` operations are not equal, because ClickHouse uses logic with many-to-one left-to-right table key matching.
- The results of `ANY INNER JOIN` operations contain all rows from the left table, like the `SEMI LEFT JOIN` operation.

When the legacy behavior is disabled:

- The results of `t1 ANY LEFT JOIN t2` and `t2 ANY RIGHT JOIN t1` operations are equal, because ClickHouse uses one-to-many key matching logic in `ANY RIGHT JOIN` operations.
- The results of `ANY INNER JOIN` operations contain one row per key from both the left and right tables.

Possible values:

- 0 — the legacy behavior is disabled.
- 1 — the legacy behavior is enabled.

Default value: 0.

See also:

- [JOIN strictness](../../sql-reference/statements/select/join.md#select-join-strictness)

## max\_block\_size {#setting-max_block_size}

In ClickHouse, data is processed by blocks (sets of column parts). The internal processing cycles for a single block are efficient enough, but there are noticeable costs per block. The `max_block_size` setting is a recommendation for what block size (in number of rows) to load from tables. The block size shouldn't be too small, so that the per-block costs don't become noticeable, but not too large, so that a query with a LIMIT that completes after the first block is processed quickly. The goal is to avoid consuming too much memory when extracting a large number of columns in multiple threads and to preserve at least some cache locality.
@@ -520,31 +598,6 @@ ClickHouse uses this parameter when reading data

Default value: 0.

## network_compression_method {#network_compression_method}

Sets the data compression method used for communication between servers and between a server and [clickhouse-client](../../interfaces/cli.md).

Possible values:

- `LZ4` — sets the LZ4 compression method.
- `ZSTD` — sets the ZSTD compression method.

Default value: `LZ4`.

See also:

- [network_zstd_compression_level](#network_zstd_compression_level)

## network_zstd_compression_level {#network_zstd_compression_level}

Adjusts the level of ZSTD compression. Used only when [network_compression_method](#network_compression_method) is set to `ZSTD`.

Possible values:

- A positive integer from 1 to 15.

Default value: `1`.

## log\_queries {#settings-log-queries}

Enables query logging.

@@ -557,6 +610,60 @@ ClickHouse uses this parameter when reading data
log_queries=1
```
## log\_queries\_min\_type {#settings-log-queries-min-type}

Sets the minimal logging level for `query_log`.

Possible values:
- `QUERY_START` (`=1`)
- `QUERY_FINISH` (`=2`)
- `EXCEPTION_BEFORE_START` (`=3`)
- `EXCEPTION_WHILE_PROCESSING` (`=4`)

Default value: `QUERY_START`.

Can be used to limit which entries go to `query_log`. For example, if you are only interested in errors, you can use `EXCEPTION_WHILE_PROCESSING`:

``` text
log_queries_min_type='EXCEPTION_WHILE_PROCESSING'
```
## log\_query\_threads {#settings-log-query-threads}

Enables logging of information about query execution threads.

@@ -571,7 +678,7 @@ log_query_threads=1

## max\_insert\_block\_size {#settings-max_insert_block_size}

Forms blocks of the specified size (in number of rows) when inserting into a table.
Forms blocks of the specified size when inserting into a table.
This setting only applies in the cases when the server forms the blocks itself.
For example, for an INSERT via the HTTP interface, the server parses the data format and forms blocks of the specified size.
When using clickhouse-client, the client parses the data itself, and the max\_insert\_block\_size setting on the server does not affect the size of the inserted blocks.
@@ -946,7 +1053,6 @@ SELECT area/period FROM account_orders FORMAT JSON;
                "type": "Float64"
            }
        ],

        "data":
        [
            {
@@ -959,9 +1065,7 @@ SELECT area/period FROM account_orders FORMAT JSON;
                "divide(area, period)": null
            }
        ],

        "rows": 3,

        "statistics":
        {
            "elapsed": 0.003648093,
@@ -982,7 +1086,6 @@ SELECT area/period FROM account_orders FORMAT JSON;
                "type": "Float64"
            }
        ],

        "data":
        [
            {
@@ -995,9 +1098,7 @@ SELECT area/period FROM account_orders FORMAT JSON;
                "divide(area, period)": "-inf"
            }
        ],

        "rows": 3,

        "statistics":
        {
            "elapsed": 0.000070241,
@@ -1007,6 +1108,7 @@
        }
```
## format\_csv\_delimiter {#settings-format_csv_delimiter}

The character interpreted as a delimiter in CSV data. By default, the delimiter is `,`.

@@ -1220,7 +1322,7 @@ ClickHouse throws an exception

Default value: 0

## force\_optimize\_skip\_unused\_shards {#force-optimize-skip-unused-shards}
## force\_optimize\_skip\_unused\_shards {#settings-force_optimize_skip_unused_shards}

Enables or disables query execution if the [optimize_skip_unused_shards](#optimize-skip-unused-shards) setting is enabled and skipping of unused shards is impossible. If skipping is impossible and this setting is enabled, ClickHouse throws an exception.
@@ -1234,19 +1336,30 @@

## force\_optimize\_skip\_unused\_shards\_nesting {#settings-force_optimize_skip_unused_shards_nesting}

Controls the [`force_optimize_skip_unused_shards`](#force-optimize-skip-unused-shards) setting (hence it still requires `optimize_skip_unused_shards`) depending on the nesting level of a distributed query (the case when you have a `Distributed` table that looks at another `Distributed` table).
Controls the [`force_optimize_skip_unused_shards`](#settings-force_optimize_skip_unused_shards) setting (hence it still requires `optimize_skip_unused_shards`) depending on the nesting level of a distributed query (the case when you have a `Distributed` table that looks at another `Distributed` table).

Possible values:

- 0 - Disabled, `force_optimize_skip_unused_shards` works on all levels.
- 1 — Enables `force_optimize_skip_unused_shards` only for the first level.
- 2 — Enables `force_optimize_skip_unused_shards` up to the second level.
- 0 — Disabled, `force_optimize_skip_unused_shards` always works.
- 1 — Enables `force_optimize_skip_unused_shards` only for the first nesting level.
- 2 — Enables `force_optimize_skip_unused_shards` for the first and second nesting levels.

Default value: 0

## force\_optimize\_skip\_unused\_shards\_no\_nested {#settings-force_optimize_skip_unused_shards_no_nested}

Resets [`optimize_skip_unused_shards`](#settings-force_optimize_skip_unused_shards) for nested `Distributed` tables.

Possible values:

- 1 — Enabled.
- 0 — Disabled.

Default value: 0

## optimize\_throw\_if\_noop {#setting-optimize_throw_if_noop}

Enables or disables throwing an exception when an [OPTIMIZE](../../sql-reference/statements/misc.md#misc_operations-optimize) query does not perform a merge.

By default, `OPTIMIZE` returns successfully even if it did nothing. This setting lets you distinguish such situations and get the reason in an exception message.
@@ -1367,7 +1480,7 @@ Default value: 0.
- [Sampling Query Profiler](../optimizing-performance/sampling-query-profiler.md)
- System table [trace\_log](../../operations/system-tables/trace_log.md#system_tables-trace_log)

## background_pool_size {#background_pool_size}
## background\_pool\_size {#background_pool_size}

Sets the number of threads that perform background operations in table engines (for example, merges in [MergeTree](../../engines/table-engines/mergetree-family/index.md) engine tables). This setting is applied at ClickHouse server start and can't be changed in a user session. It lets you manage the CPU and disk load. A smaller pool uses less CPU and disk resources, but background processes advance slower, which might eventually affect query performance.

@@ -1381,7 +1494,7 @@

Enables parallel distributed `INSERT ... SELECT` queries.

If we execute the `INSERT INTO distributed_table_a SELECT ... FROM distributed_table_b` query and both tables are in the same cluster, then regardless of whether the tables are [replicated](../../engines/table-engines/mergetree-family/replication.md) or not, the query is processed locally on every shard.

Allowed values:

@@ -1431,7 +1544,7 @@

Default value: 0.

**See also:**

- [Data Replication](../../engines/table-engines/mergetree-family/replication.md)

@@ -1448,7 +1561,7 @@ Possible values:

Default value: 0.

**Example**

Consider the `null_in` table:
@@ -1499,7 +1612,7 @@ SELECT idx, i FROM null_in WHERE i IN (1, NULL) SETTINGS transform_null_in = 1;
└──────┴───────┘
```

**See also**

- [NULL processing in the IN operator](../../sql-reference/operators/in.md#in-null-processing)

@@ -1610,8 +1723,8 @@

Possible values:

- 0 — mutations execute asynchronously.
- 1 — the query waits for all mutations to complete on the current server.
- 2 — the query waits for all mutations to complete on all replicas (if they exist).

Default value: `0`.
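
A hedged usage sketch (the table name and condition are hypothetical):

``` sql
ALTER TABLE hits DELETE WHERE EventDate < '2019-01-01'
SETTINGS mutations_sync = 2;  -- wait until the mutation completes on all replicas
```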
@@ -1697,4 +1810,17 @@ SELECT range(number) FROM system.numbers LIMIT 5 FORMAT PrettyCompactNoEscapes;
└───────────────┘
```

## lock_acquire_timeout {#lock_acquire_timeout}

Sets how many seconds the server waits to acquire a table lock.

The timeout protects against deadlocks during read or write operations. If the timeout expires and the lock has not been acquired, the server returns an exception with the code `DEADLOCK_AVOIDED` and the message "Locking attempt timed out! Possible deadlock avoided. Client should retry.".

Possible values:

- A positive integer (in seconds).
- 0 — no timeout.

Default value: `120` seconds.
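
For example, a per-query override might look like this (a hedged sketch; the table name is hypothetical):

``` sql
SELECT count() FROM hits
SETTINGS lock_acquire_timeout = 300;
```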
[Original article](https://clickhouse.tech/docs/ru/operations/settings/settings/) <!--hide-->
24
docs/ru/operations/system-tables/grants.md
Normal file
@@ -0,0 +1,24 @@
# system.grants {#system_tables-grants}

Privileges granted to ClickHouse user accounts.

Columns:
- `user_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — the name of the user account.

- `role_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — the role assigned to the user account.

- `access_type` ([Enum8](../../sql-reference/data-types/enum.md)) — the access parameters of the ClickHouse user account.

- `database` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — the name of the database.

- `table` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — the name of the table.

- `column` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — the name of the column to which access is granted.

- `is_partial_revoke` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — a logical value that shows whether some privileges have been revoked. Possible values:
    - `0` — the row describes a grant.
    - `1` — the row describes a partial revoke.

- `grant_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — the permission was granted `WITH GRANT OPTION`; for details, see [GRANT](../../sql-reference/statements/grant.md#grant-privigele-syntax).
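
A hedged query sketch for inspecting the table (the output is omitted since it depends on the configured accounts; `default` is the built-in account name):

``` sql
SELECT user_name, access_type, database, grant_option
FROM system.grants
WHERE user_name = 'default';
```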
[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/grants) <!--hide-->
@@ -7,10 +7,38 @@ toc_title: System tables

## Introduction {#system-tables-introduction}

System tables are used for implementing part of the system's functionality, and for providing access to information about how the system is working.
You can't delete a system table (but you can perform DETACH).
System tables don't have files with data or files with metadata on disk. The server creates all system tables when it starts.
System tables are read-only.
System tables are located in the system database.
System tables provide information about:

- Server states, processes, and environment.
- Server internal processes.

System tables:

- Are located in the `system` database.
- Are available only for reading data.
- Can't be dropped or altered, but can be detached.

The `metric_log`, `query_log`, `query_thread_log`, and `trace_log` system tables store data in the file system. The other system tables store their data in RAM. The ClickHouse server creates such system tables at startup.

### Sources of system metrics

To collect system metrics, the ClickHouse server uses:

- the `CAP_NET_ADMIN` capability.
- [procfs](https://ru.wikipedia.org/wiki/Procfs) (Linux only).

**procfs**

If the ClickHouse server does not have the `CAP_NET_ADMIN` capability, it tries to fall back to `ProcfsMetricsProvider`. `ProcfsMetricsProvider` allows collecting per-query system metrics (for CPU and I/O).

If procfs is supported and enabled on the system, the ClickHouse server collects the following metrics:

- `OSCPUVirtualTimeMicroseconds`
- `OSCPUWaitMicroseconds`
- `OSIOWaitMicroseconds`
- `OSReadChars`
- `OSWriteChars`
- `OSReadBytes`
- `OSWriteBytes`
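
These per-query metrics end up in the `ProfileEvents` of the query log; a hedged sketch of how to look at them (query logging must be enabled, and the column layout may differ between ClickHouse versions):

``` sql
SELECT query, ProfileEvents.Names, ProfileEvents.Values
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 1;
```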
[Original article](https://clickhouse.tech/docs/ru/operations/system-tables/) <!--hide-->
Some files were not shown because too many files have changed in this diff.