Merge branch 'master' into consistent_metadata3
commit 3847ea892d

CHANGELOG.md
@@ -1,5 +1,48 @@

## ClickHouse release v20.4

### ClickHouse release v20.4.3.16-stable, 2020-05-23

#### Bug Fix

* Removed logging from the mutation finalization task if nothing was finalized. [#11109](https://github.com/ClickHouse/ClickHouse/pull/11109) ([alesapin](https://github.com/alesapin)).
* Fixed a memory leak in `registerDiskS3`. [#11074](https://github.com/ClickHouse/ClickHouse/pull/11074) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fixed potential data loss during termination of a Kafka engine table. [#11048](https://github.com/ClickHouse/ClickHouse/pull/11048) ([filimonov](https://github.com/filimonov)).
* Fixed `parseDateTime64BestEffort` argument resolution bugs. [#11038](https://github.com/ClickHouse/ClickHouse/pull/11038) ([Vasily Nemkov](https://github.com/Enmk)).
* Fixed a very rare potential use-after-free error in `MergeTree` if a table was not created successfully. [#10986](https://github.com/ClickHouse/ClickHouse/pull/10986), [#10970](https://github.com/ClickHouse/ClickHouse/pull/10970) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed metadata (relative path for rename) and data (relative path for symlink) handling for the Atomic database. [#10980](https://github.com/ClickHouse/ClickHouse/pull/10980) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a server crash on concurrent `ALTER` and `DROP DATABASE` queries with the `Atomic` database engine. [#10968](https://github.com/ClickHouse/ClickHouse/pull/10968) ([tavplubix](https://github.com/tavplubix)).
* Fixed incorrect raw data size in the `getRawData()` method. [#10964](https://github.com/ClickHouse/ClickHouse/pull/10964) ([Igr](https://github.com/ObjatieGroba)).
* Fixed incompatibility of two-level aggregation between versions 20.1 and earlier. It happens when different ClickHouse versions are used on the initiator node and on remote nodes, the `GROUP BY` result is large, and aggregation is performed by a single `String` field; it leads to several unmerged rows for a single key in the result. [#10952](https://github.com/ClickHouse/ClickHouse/pull/10952) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed sending of partially written files by `DistributedBlockOutputStream`. [#10940](https://github.com/ClickHouse/ClickHouse/pull/10940) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a crash in `SELECT count(notNullIn(NULL, []))`. [#10920](https://github.com/ClickHouse/ClickHouse/pull/10920) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a hang that sometimes happened during `DROP` of a `Kafka` engine table (or during server restarts). [#10910](https://github.com/ClickHouse/ClickHouse/pull/10910) ([filimonov](https://github.com/filimonov)).
* Fixed the impossibility of executing multiple `ALTER RENAME` operations like `a TO b, c TO a`. [#10895](https://github.com/ClickHouse/ClickHouse/pull/10895) ([alesapin](https://github.com/alesapin)).
* Fixed a possible race when getting an aggregate function state from multiple threads for the same column. The only way it can happen is when the `finalizeAggregation` function is used while reading from a table with the `Memory` engine that stores `AggregateFunction` state for a `quantile*` function. [#10890](https://github.com/ClickHouse/ClickHouse/pull/10890) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed backward compatibility with tuples in `Distributed` tables. [#10889](https://github.com/ClickHouse/ClickHouse/pull/10889) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed `SIGSEGV` in `StringHashTable` when a key does not exist. [#10870](https://github.com/ClickHouse/ClickHouse/pull/10870) ([Azat Khuzhin](https://github.com/azat)).
* Fixed `WATCH` hanging after a `LiveView` table was dropped from a database with the `Atomic` engine. [#10859](https://github.com/ClickHouse/ClickHouse/pull/10859) ([tavplubix](https://github.com/tavplubix)).
* Fixed a bug in `ReplicatedMergeTree` that might cause some `ALTER` or `OPTIMIZE` queries to hang, waiting for a replica after it becomes inactive. [#10849](https://github.com/ClickHouse/ClickHouse/pull/10849) ([tavplubix](https://github.com/tavplubix)).
* Now constraints are updated if a column participating in a `CONSTRAINT` expression was renamed. Fixes [#10844](https://github.com/ClickHouse/ClickHouse/issues/10844). [#10847](https://github.com/ClickHouse/ClickHouse/pull/10847) ([alesapin](https://github.com/alesapin)).
* Fixed a potential read of uninitialized memory in cache dictionaries. [#10834](https://github.com/ClickHouse/ClickHouse/pull/10834) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed column order after `Block::sortColumns()`. [#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)).
* Fixed an issue with the `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `UBSan` and `MSan` reports in `DateLUT`. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew)).
* Fixed `parallel_view_processing` behavior. Now all insertions into a `MATERIALIZED VIEW` are finished even if an exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the `-OrNull` and `-OrDefault` combinators when combined with `-State`. [#10741](https://github.com/ClickHouse/ClickHouse/pull/10741) ([hcz](https://github.com/hczhcz)).
* Fixed a possible buffer overflow in the `h3EdgeAngle` function. [#10711](https://github.com/ClickHouse/ClickHouse/pull/10711) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed a bug that blocked concurrent alters when a table has many parts. [#10659](https://github.com/ClickHouse/ClickHouse/pull/10659) ([alesapin](https://github.com/alesapin)).
* Fixed a `nullptr` dereference in `StorageBuffer` if the server was shut down before table startup. [#10641](https://github.com/ClickHouse/ClickHouse/pull/10641) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `optimize_skip_unused_shards` with `LowCardinality`. [#10611](https://github.com/ClickHouse/ClickHouse/pull/10611) ([Azat Khuzhin](https://github.com/azat)).
* Fixed condition-variable handling for synchronous mutations: in some cases signals to that condition variable could be lost, a classic lost-wakeup (see the sketch after this list). [#10588](https://github.com/ClickHouse/ClickHouse/pull/10588) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fixed a possible crash when `createDictionary()` is called before `loadStoredObject()` has finished. [#10587](https://github.com/ClickHouse/ClickHouse/pull/10587) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed `SELECT` of an `ALIAS` column whose default expression type differs from the column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)).
* Implemented comparison between `DateTime64` and `String` values. [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)).
* Disabled the `GROUP BY` sharding_key optimization by default (the `optimize_distributed_group_by_sharding_key` setting was introduced and turned off by default, because analyzing the sharding key is tricky; a simple example is an `if` in the sharding key) and fixed it for `WITH ROLLUP/CUBE/TOTALS`. [#10516](https://github.com/ClickHouse/ClickHouse/pull/10516) ([Azat Khuzhin](https://github.com/azat)).
* Fixed [#10263](https://github.com/ClickHouse/ClickHouse/issues/10263). [#10486](https://github.com/ClickHouse/ClickHouse/pull/10486) ([Azat Khuzhin](https://github.com/azat)).
* Added tests for the `max_rows_to_sort` setting. [#10268](https://github.com/ClickHouse/ClickHouse/pull/10268) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added backward compatibility for bloom filter index creation. [#10551](https://github.com/ClickHouse/ClickHouse/issues/10551). [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)).
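The condition-variable fix for synchronous mutations above ([#10588](https://github.com/ClickHouse/ClickHouse/pull/10588)) is the classic lost-wakeup problem. A minimal C++ sketch of the safe pattern (not the actual ClickHouse code): the waiter blocks on a predicate guarded by the mutex, so a notification fired before the waiter arrives is never lost.

``` cpp
#include <condition_variable>
#include <mutex>

std::mutex mutex;
std::condition_variable cv;
bool mutation_done = false;  /// predicate guarded by `mutex`

void waitForMutation()
{
    std::unique_lock lock(mutex);
    /// Wait on the predicate, not on the bare signal: if notify_all() fired
    /// before we got here, mutation_done is already true and we return
    /// immediately instead of sleeping forever.
    cv.wait(lock, [] { return mutation_done; });
}

void finishMutation()
{
    {
        std::lock_guard lock(mutex);
        mutation_done = true;  /// set the predicate under the same mutex
    }
    cv.notify_all();
}
```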

### ClickHouse release v20.4.2.9, 2020-05-12

#### Backward Incompatible Change
@@ -280,6 +323,57 @@

## ClickHouse release v20.3

### ClickHouse release v20.3.10.75-lts, 2020-05-23

#### Bug Fix

* Removed logging from the mutation finalization task if nothing was finalized. [#11109](https://github.com/ClickHouse/ClickHouse/pull/11109) ([alesapin](https://github.com/alesapin)).
* Fixed `parseDateTime64BestEffort` argument resolution bugs. [#11038](https://github.com/ClickHouse/ClickHouse/pull/11038) ([Vasily Nemkov](https://github.com/Enmk)).
* Fixed incorrect raw data size in the `getRawData()` method. [#10964](https://github.com/ClickHouse/ClickHouse/pull/10964) ([Igr](https://github.com/ObjatieGroba)).
* Fixed incompatibility of two-level aggregation between versions 20.1 and earlier. It happens when different ClickHouse versions are used on the initiator node and on remote nodes, the `GROUP BY` result is large, and aggregation is performed by a single `String` field; it leads to several unmerged rows for a single key in the result. [#10952](https://github.com/ClickHouse/ClickHouse/pull/10952) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed backward compatibility with tuples in `Distributed` tables. [#10889](https://github.com/ClickHouse/ClickHouse/pull/10889) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed `SIGSEGV` in `StringHashTable` when a key does not exist. [#10870](https://github.com/ClickHouse/ClickHouse/pull/10870) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a bug in `ReplicatedMergeTree` that might cause some `ALTER` or `OPTIMIZE` queries to hang, waiting for a replica after it becomes inactive. [#10849](https://github.com/ClickHouse/ClickHouse/pull/10849) ([tavplubix](https://github.com/tavplubix)).
* Fixed column order after `Block::sortColumns()`. [#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)).
* Fixed an issue with the `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `UBSan` and `MSan` reports in `DateLUT`. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew)).
* Fixed `parallel_view_processing` behavior. Now all insertions into a `MATERIALIZED VIEW` are finished even if an exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the `-OrNull` and `-OrDefault` combinators when combined with `-State`. [#10741](https://github.com/ClickHouse/ClickHouse/pull/10741) ([hcz](https://github.com/hczhcz)).
* Fixed a crash in `generateRandom` with nested types. Fixes [#10583](https://github.com/ClickHouse/ClickHouse/issues/10583). [#10734](https://github.com/ClickHouse/ClickHouse/pull/10734) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed data corruption for a `LowCardinality(FixedString)` key column in `SummingMergeTree` that could happen after a merge. Fixes [#10489](https://github.com/ClickHouse/ClickHouse/issues/10489). [#10721](https://github.com/ClickHouse/ClickHouse/pull/10721) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a possible buffer overflow in the `h3EdgeAngle` function. [#10711](https://github.com/ClickHouse/ClickHouse/pull/10711) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed disappearing totals. Totals could be filtered out if the query had a join or a subquery with an external `WHERE` condition. Fixes [#10674](https://github.com/ClickHouse/ClickHouse/issues/10674). [#10698](https://github.com/ClickHouse/ClickHouse/pull/10698) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed multiple usages of the `IN` operator with an identical set in one query. [#10686](https://github.com/ClickHouse/ClickHouse/pull/10686) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed a bug that caused HTTP requests to get stuck on client close when `readonly=2` and `cancel_http_readonly_queries_on_client_close=1`. Fixes [#7939](https://github.com/ClickHouse/ClickHouse/issues/7939), [#7019](https://github.com/ClickHouse/ClickHouse/issues/7019), [#7736](https://github.com/ClickHouse/ClickHouse/issues/7736), [#7091](https://github.com/ClickHouse/ClickHouse/issues/7091). [#10684](https://github.com/ClickHouse/ClickHouse/pull/10684) ([tavplubix](https://github.com/tavplubix)).
* Fixed the order of parameters in the `AggregateTransform` constructor. [#10667](https://github.com/ClickHouse/ClickHouse/pull/10667) ([palasonic1](https://github.com/palasonic1)).
* Fixed the lack of parallel execution of remote queries with `distributed_aggregation_memory_efficient` enabled. Fixes [#10655](https://github.com/ClickHouse/ClickHouse/issues/10655). [#10664](https://github.com/ClickHouse/ClickHouse/pull/10664) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a possibly incorrect number of rows for queries with `LIMIT`. Fixes [#10566](https://github.com/ClickHouse/ClickHouse/issues/10566), [#10709](https://github.com/ClickHouse/ClickHouse/issues/10709). [#10660](https://github.com/ClickHouse/ClickHouse/pull/10660) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a bug that blocked concurrent alters when a table has many parts. [#10659](https://github.com/ClickHouse/ClickHouse/pull/10659) ([alesapin](https://github.com/alesapin)).
* Fixed a bug where the `SYSTEM DROP DNS CACHE` query also dropped the caches used to check whether a user is allowed to connect from certain IP addresses. [#10608](https://github.com/ClickHouse/ClickHouse/pull/10608) ([tavplubix](https://github.com/tavplubix)).
* Fixed incorrect scalar results inside the inner query of a `MATERIALIZED VIEW` if this query contained a dependent table. [#10603](https://github.com/ClickHouse/ClickHouse/pull/10603) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `SELECT` of an `ALIAS` column whose default expression type differs from the column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)).
* Implemented comparison between `DateTime64` and `String` values. [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)).
* Fixed index corruption that could occur in some cases after merging compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed the situation when a mutation finished all parts but hung in the `is_done=0` state. [#10526](https://github.com/ClickHouse/ClickHouse/pull/10526) ([alesapin](https://github.com/alesapin)).
* Fixed an overflow at the beginning of the Unix epoch for timezones with a fractional offset from `UTC`. This fixes [#9335](https://github.com/ClickHouse/ClickHouse/issues/9335). [#10513](https://github.com/ClickHouse/ClickHouse/pull/10513) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed improper shutdown of `Distributed` storage. [#10491](https://github.com/ClickHouse/ClickHouse/pull/10491) ([Azat Khuzhin](https://github.com/azat)).
* Fixed numeric overflow in `simpleLinearRegression` over large integers. [#10474](https://github.com/ClickHouse/ClickHouse/pull/10474) ([hcz](https://github.com/hczhcz)).

#### Build/Testing/Packaging Improvement

* Fix UBSan report in the LZ4 library. [#10631](https://github.com/ClickHouse/ClickHouse/pull/10631) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix clang-10 build. https://github.com/ClickHouse/ClickHouse/issues/10238. [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird)).
* Added failing tests about the `max_rows_to_sort` setting. [#10268](https://github.com/ClickHouse/ClickHouse/pull/10268) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added some improvements in printing diagnostic info in input formats. Fixes [#10204](https://github.com/ClickHouse/ClickHouse/issues/10204). [#10418](https://github.com/ClickHouse/ClickHouse/pull/10418) ([tavplubix](https://github.com/tavplubix)).
* Added CA certificates to the clickhouse-server Docker image. [#10476](https://github.com/ClickHouse/ClickHouse/pull/10476) ([filimonov](https://github.com/filimonov)).

#### Bug fix

* #10551. [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)).

### ClickHouse release v20.3.8.53, 2020-04-23

#### Bug Fix
@@ -630,6 +724,35 @@

## ClickHouse release v20.1

### ClickHouse release v20.1.12.86, 2020-05-26

#### Bug Fix

* Fixed incompatibility of two-level aggregation between versions 20.1 and earlier. It happens when different ClickHouse versions are used on the initiator node and on remote nodes, the `GROUP BY` result is large, and aggregation is performed by a single `String` field; it leads to several unmerged rows for a single key in the result. [#10952](https://github.com/ClickHouse/ClickHouse/pull/10952) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed data corruption for a `LowCardinality(FixedString)` key column in `SummingMergeTree` that could happen after a merge. Fixes [#10489](https://github.com/ClickHouse/ClickHouse/issues/10489). [#10721](https://github.com/ClickHouse/ClickHouse/pull/10721) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a bug that caused HTTP requests to get stuck on client close when `readonly=2` and `cancel_http_readonly_queries_on_client_close=1`. Fixes [#7939](https://github.com/ClickHouse/ClickHouse/issues/7939), [#7019](https://github.com/ClickHouse/ClickHouse/issues/7019), [#7736](https://github.com/ClickHouse/ClickHouse/issues/7736), [#7091](https://github.com/ClickHouse/ClickHouse/issues/7091). [#10684](https://github.com/ClickHouse/ClickHouse/pull/10684) ([tavplubix](https://github.com/tavplubix)).
* Fixed a bug where the `SYSTEM DROP DNS CACHE` query also dropped the caches used to check whether a user is allowed to connect from certain IP addresses. [#10608](https://github.com/ClickHouse/ClickHouse/pull/10608) ([tavplubix](https://github.com/tavplubix)).
* Fixed incorrect scalar results inside the inner query of a `MATERIALIZED VIEW` if this query contained a dependent table. [#10603](https://github.com/ClickHouse/ClickHouse/pull/10603) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the situation when a mutation finished all parts but hung in the `is_done=0` state. [#10526](https://github.com/ClickHouse/ClickHouse/pull/10526) ([alesapin](https://github.com/alesapin)).
* Fixed an overflow at the beginning of the Unix epoch for timezones with a fractional offset from UTC. This fixes [#9335](https://github.com/ClickHouse/ClickHouse/issues/9335). [#10513](https://github.com/ClickHouse/ClickHouse/pull/10513) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed improper shutdown of `Distributed` storage. [#10491](https://github.com/ClickHouse/ClickHouse/pull/10491) ([Azat Khuzhin](https://github.com/azat)).
* Fixed numeric overflow in `simpleLinearRegression` over large integers. [#10474](https://github.com/ClickHouse/ClickHouse/pull/10474) ([hcz](https://github.com/hczhcz)).
* Fixed removal of the metadata directory when attaching a database fails. [#10442](https://github.com/ClickHouse/ClickHouse/pull/10442) ([Winter Zhang](https://github.com/zhang2014)).
* Added a check of the number and types of arguments when creating a `BloomFilter` index [#9623](https://github.com/ClickHouse/ClickHouse/issues/9623). [#10431](https://github.com/ClickHouse/ClickHouse/pull/10431) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed the issue when a query with `ARRAY JOIN`, `ORDER BY` and `LIMIT` may return an incomplete result. This fixes [#10226](https://github.com/ClickHouse/ClickHouse/issues/10226). [#10427](https://github.com/ClickHouse/ClickHouse/pull/10427) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Prefer `fallback_to_stale_replicas` over `skip_unavailable_shards`. [#10422](https://github.com/ClickHouse/ClickHouse/pull/10422) ([Azat Khuzhin](https://github.com/azat)).
* Fixed wrong flattening of `Array(Tuple(...))` data types. This fixes [#10259](https://github.com/ClickHouse/ClickHouse/issues/10259). [#10390](https://github.com/ClickHouse/ClickHouse/pull/10390) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed wrong behavior in `HashTable` that caused a compilation error when trying to read a HashMap from a buffer. [#10386](https://github.com/ClickHouse/ClickHouse/pull/10386) ([palasonic1](https://github.com/palasonic1)).
* Fixed a possible `Pipeline stuck` error in `ConcatProcessor` that could happen in remote queries. [#10381](https://github.com/ClickHouse/ClickHouse/pull/10381) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the `Pipeline stuck` error with `max_rows_to_group_by` and `group_by_overflow_mode = 'break'`. [#10279](https://github.com/ClickHouse/ClickHouse/pull/10279) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed several bugs where data inserted with quorum and then deleted somehow (`DROP PARTITION`, `TTL`) led to stuck `INSERT`s or false-positive exceptions in `SELECT`s. This fixes [#9946](https://github.com/ClickHouse/ClickHouse/issues/9946). [#10188](https://github.com/ClickHouse/ClickHouse/pull/10188) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed incompatibility when versions prior to 18.12.17 are used on remote servers and a newer one on the initiating server, `GROUP BY` uses both fixed and non-fixed keys, and the two-level group-by method is activated. [#3254](https://github.com/ClickHouse/ClickHouse/pull/3254) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### Build/Testing/Packaging Improvement

* Added CA certificates to the clickhouse-server Docker image. [#10476](https://github.com/ClickHouse/ClickHouse/pull/10476) ([filimonov](https://github.com/filimonov)).

### ClickHouse release v20.1.10.70, 2020-04-17

#### Bug Fix
@@ -28,7 +28,7 @@ public:
    void exception() override { logException(); }

private:
-    Logger * log = &Logger::get("ServerErrorHandler");
+    Poco::Logger * log = &Poco::Logger::get("ServerErrorHandler");

    void logException()
    {
@@ -9,28 +9,33 @@
#include <Common/CurrentThread.h>

-/// TODO Remove this.
-using Poco::Logger;
-using Poco::Message;
-using DB::LogsLevel;
-using DB::CurrentThread;
+namespace
+{
+    template <typename... Ts> constexpr size_t numArgs(Ts &&...) { return sizeof...(Ts); }
+    template <typename T, typename... Ts> constexpr auto firstArg(T && x, Ts &&...) { return std::forward<T>(x); }
+}

/// Logs a message to a specified logger with that level.
+/// If more than one argument is provided,
+/// the first argument is interpreted as a template with {}-substitutions
+/// and the latter arguments are treated as values to substitute.
+/// If only one argument is provided, it is treated as a message without substitutions.

#define LOG_IMPL(logger, priority, PRIORITY, ...) do \
{ \
-    const bool is_clients_log = (CurrentThread::getGroup() != nullptr) && \
-        (CurrentThread::getGroup()->client_logs_level >= (priority)); \
+    const bool is_clients_log = (DB::CurrentThread::getGroup() != nullptr) && \
+        (DB::CurrentThread::getGroup()->client_logs_level >= (priority)); \
    if ((logger)->is((PRIORITY)) || is_clients_log) \
    { \
-        std::string formatted_message = fmt::format(__VA_ARGS__); \
+        std::string formatted_message = numArgs(__VA_ARGS__) > 1 ? fmt::format(__VA_ARGS__) : firstArg(__VA_ARGS__); \
        if (auto channel = (logger)->getChannel()) \
        { \
            std::string file_function; \
            file_function += __FILE__; \
            file_function += "; "; \
            file_function += __PRETTY_FUNCTION__; \
-            Message poco_message((logger)->name(), formatted_message, \
+            Poco::Message poco_message((logger)->name(), formatted_message, \
                (PRIORITY), file_function.c_str(), __LINE__); \
            channel->log(poco_message); \
        } \
@@ -38,9 +43,18 @@ using DB::CurrentThread;
} while (false)

-#define LOG_TRACE(logger, ...) LOG_IMPL(logger, LogsLevel::trace, Message::PRIO_TRACE, __VA_ARGS__)
-#define LOG_DEBUG(logger, ...) LOG_IMPL(logger, LogsLevel::debug, Message::PRIO_DEBUG, __VA_ARGS__)
-#define LOG_INFO(logger, ...) LOG_IMPL(logger, LogsLevel::information, Message::PRIO_INFORMATION, __VA_ARGS__)
-#define LOG_WARNING(logger, ...) LOG_IMPL(logger, LogsLevel::warning, Message::PRIO_WARNING, __VA_ARGS__)
-#define LOG_ERROR(logger, ...) LOG_IMPL(logger, LogsLevel::error, Message::PRIO_ERROR, __VA_ARGS__)
-#define LOG_FATAL(logger, ...) LOG_IMPL(logger, LogsLevel::error, Message::PRIO_FATAL, __VA_ARGS__)
+#define LOG_TRACE(logger, ...) LOG_IMPL(logger, DB::LogsLevel::trace, Poco::Message::PRIO_TRACE, __VA_ARGS__)
+#define LOG_DEBUG(logger, ...) LOG_IMPL(logger, DB::LogsLevel::debug, Poco::Message::PRIO_DEBUG, __VA_ARGS__)
+#define LOG_INFO(logger, ...) LOG_IMPL(logger, DB::LogsLevel::information, Poco::Message::PRIO_INFORMATION, __VA_ARGS__)
+#define LOG_WARNING(logger, ...) LOG_IMPL(logger, DB::LogsLevel::warning, Poco::Message::PRIO_WARNING, __VA_ARGS__)
+#define LOG_ERROR(logger, ...) LOG_IMPL(logger, DB::LogsLevel::error, Poco::Message::PRIO_ERROR, __VA_ARGS__)
+#define LOG_FATAL(logger, ...) LOG_IMPL(logger, DB::LogsLevel::error, Poco::Message::PRIO_FATAL, __VA_ARGS__)
+
+/// Compatibility for external projects.
+#if defined(ARCADIA_BUILD)
+    using Poco::Logger;
+    using Poco::Message;
+    using DB::LogsLevel;
+    using DB::CurrentThread;
+#endif
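The interesting part of this hunk is the single-argument fast path in `LOG_IMPL`. A hedged sketch of how the `numArgs`/`firstArg` dispatch works, assuming an fmt version with run-time format-string parsing (as the patch used at the time); `FORMAT_MESSAGE` is a hypothetical stand-in for the expression inside `LOG_IMPL`, not a macro from the repository.

``` cpp
#include <cstddef>
#include <string>
#include <utility>
#include <fmt/format.h>

template <typename... Ts> constexpr size_t numArgs(Ts &&...) { return sizeof...(Ts); }
template <typename T, typename... Ts> constexpr auto firstArg(T && x, Ts &&...) { return std::forward<T>(x); }

/// With more than one argument the first one is an fmt template; with exactly
/// one it is passed through verbatim, so a plain message is never fed to the
/// format parser at run time.
#define FORMAT_MESSAGE(...) \
    (numArgs(__VA_ARGS__) > 1 ? fmt::format(__VA_ARGS__) : firstArg(__VA_ARGS__))

int main()
{
    std::string a = FORMAT_MESSAGE("just a message");  /// firstArg branch, no parsing
    std::string b = FORMAT_MESSAGE("value = {}", 42);  /// fmt::format branch
    return (a.size() + b.size()) > 0 ? 0 : 1;
}
```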
@@ -124,7 +124,7 @@ static void signalHandler(int sig, siginfo_t * info, void * context)
    const ucontext_t signal_context = *reinterpret_cast<ucontext_t *>(context);
    const StackTrace stack_trace(signal_context);

-    StringRef query_id = CurrentThread::getQueryId();   /// This is signal safe.
+    StringRef query_id = DB::CurrentThread::getQueryId();   /// This is signal safe.
    query_id.size = std::min(query_id.size, max_query_id_size);

    DB::writeBinary(sig, out);
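For context, the signal handler serializes the crash report (including the query id) into a pipe that a separate thread reads, and everything inside the handler must stay async-signal-safe. A hedged sketch of why the id is truncated first; the function name and the fixed buffer size are illustrative assumptions, not ClickHouse's actual code.

``` cpp
#include <algorithm>
#include <cstddef>
#include <unistd.h>

constexpr size_t max_query_id_size = 127;  /// assumption: fixed reader-side buffer

/// Only async-signal-safe operations are allowed here: no allocation, no locks.
/// Truncating to a fixed maximum keeps the payload bounded, so the reader can
/// always consume a complete record from the pipe.
void writeQueryIdToPipe(int fd, const char * query_id, size_t size)
{
    size = std::min(size, max_query_id_size);
    (void)::write(fd, &size, sizeof(size));  /// fixed-width length prefix
    (void)::write(fd, query_id, size);       /// truncated id, no allocation
}
```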
@@ -162,7 +162,7 @@ public:
    };

    explicit SignalListener(BaseDaemon & daemon_)
-        : log(&Logger::get("BaseDaemon"))
+        : log(&Poco::Logger::get("BaseDaemon"))
        , daemon(daemon_)
    {
    }
@@ -231,7 +231,7 @@ public:
    }

private:
-    Logger * log;
+    Poco::Logger * log;
    BaseDaemon & daemon;

    void onTerminate(const std::string & message, UInt32 thread_num) const
@@ -288,9 +288,9 @@ extern "C" void __sanitizer_set_death_callback(void (*)());

static void sanitizerDeathCallback()
{
-    Logger * log = &Logger::get("BaseDaemon");
+    Poco::Logger * log = &Poco::Logger::get("BaseDaemon");

-    StringRef query_id = CurrentThread::getQueryId();   /// This is signal safe.
+    StringRef query_id = DB::CurrentThread::getQueryId();   /// This is signal safe.

    {
        std::stringstream message;
@@ -498,10 +498,10 @@ void debugIncreaseOOMScore()
    }
    catch (const Poco::Exception & e)
    {
-        LOG_WARNING(&Logger::root(), "Failed to adjust OOM score: '{}'.", e.displayText());
+        LOG_WARNING(&Poco::Logger::root(), "Failed to adjust OOM score: '{}'.", e.displayText());
        return;
    }
-    LOG_INFO(&Logger::root(), "Set OOM score adjustment to {}", new_score);
+    LOG_INFO(&Poco::Logger::root(), "Set OOM score adjustment to {}", new_score);
}
#else
void debugIncreaseOOMScore() {}
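For reference, a hedged sketch of the mechanism `debugIncreaseOOMScore()` relies on (the helper name is hypothetical, not the ClickHouse implementation): on Linux, writing an integer in [-1000, 1000] to `/proc/self/oom_score_adj` changes how eagerly the kernel's OOM killer targets the process.

``` cpp
#include <fstream>

/// Higher values make the OOM killer prefer this process.
bool setOOMScoreAdj(int new_score)
{
    std::ofstream out("/proc/self/oom_score_adj");
    if (!out)
        return false;  /// not Linux, or insufficient permissions
    out << new_score;
    return static_cast<bool>(out);  /// failbit is set if the write failed
}
```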
@@ -715,7 +715,7 @@ void BaseDaemon::initializeTerminationAndSignalProcessing()

void BaseDaemon::logRevision() const
{
-    Logger::root().information("Starting " + std::string{VERSION_FULL}
+    Poco::Logger::root().information("Starting " + std::string{VERSION_FULL}
        + " with revision " + std::to_string(ClickHouseRevision::get())
        + ", PID " + std::to_string(getpid()));
}
@@ -732,7 +732,7 @@ void BaseDaemon::handleNotification(Poco::TaskFailedNotification *_tfn)
{
    task_failed = true;
    Poco::AutoPtr<Poco::TaskFailedNotification> fn(_tfn);
-    Logger * lg = &(logger());
+    Poco::Logger * lg = &(logger());
    LOG_ERROR(lg, "Task '{}' failed. Daemon is shutting down. Reason - {}", fn->task()->name(), fn->reason().displayText());
    ServerApplication::terminate();
}
@@ -4,6 +4,7 @@

namespace DB
{

void OwnFormattingChannel::logExtended(const ExtendedLogMessage & msg)
{
    if (pChannel && priority >= msg.base.getPriority())
@@ -28,5 +29,4 @@ void OwnFormattingChannel::log(const Poco::Message & msg)

OwnFormattingChannel::~OwnFormattingChannel() = default;

}
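The `priority >= msg.base.getPriority()` check above follows Poco's convention that a numerically smaller priority is more severe (`PRIO_FATAL == 1` up to `PRIO_TRACE == 8`). A minimal standalone sketch of the same filtering pattern in a custom Poco channel; this is an illustration, not the actual `OwnFormattingChannel`.

``` cpp
#include <Poco/AutoPtr.h>
#include <Poco/Channel.h>
#include <Poco/Message.h>

/// Forwards only messages at least as severe as the configured threshold.
class FilterChannel : public Poco::Channel
{
public:
    FilterChannel(Poco::AutoPtr<Poco::Channel> inner_, Poco::Message::Priority threshold_)
        : inner(inner_), threshold(threshold_) {}

    void log(const Poco::Message & msg) override
    {
        if (inner && msg.getPriority() <= threshold)  /// smaller number == more severe
            inner->log(msg);
    }

private:
    Poco::AutoPtr<Poco::Channel> inner;
    Poco::Message::Priority threshold;
};
```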
@@ -69,7 +69,6 @@ void OwnSplitChannel::logSplit(const Poco::Message & msg)
        logs_queue->emplace(std::move(columns));
    }

    /// Also log to system.text_log table, if message is not too noisy
    auto text_log_max_priority_loaded = text_log_max_priority.load(std::memory_order_relaxed);
    if (text_log_max_priority_loaded && msg.getPriority() <= text_log_max_priority_loaded)
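The `memory_order_relaxed` load above is sufficient because the threshold only gates a filter; no other memory is synchronized through it, and another thread may change it at any time. A hedged standalone sketch of the pattern (the names are illustrative, not the actual `OwnSplitChannel` members):

``` cpp
#include <atomic>

std::atomic<int> text_log_max_priority{0};  /// 0 means "logging to text_log is disabled"

bool shouldLogToTextLog(int message_priority)
{
    /// Read once with a relaxed load: the value carries no ordering obligations,
    /// it only decides whether this message goes to system.text_log.
    const int threshold = text_log_max_priority.load(std::memory_order_relaxed);
    /// Poco priorities: smaller number == more severe, so `<=` keeps messages
    /// at least as severe as the threshold.
    return threshold != 0 && message_priority <= threshold;
}
```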
@@ -55,6 +55,7 @@ function(protobuf_generate_cpp_impl SRCS HDRS MODES OUTPUT_FILE_EXTS PLUGIN)
    endif()

    set (intermediate_dir ${CMAKE_CURRENT_BINARY_DIR}/intermediate)
+   file (MAKE_DIRECTORY ${intermediate_dir})

    set (protoc_args)
    foreach (mode ${MODES})
@@ -112,7 +113,7 @@ if (PROTOBUF_GENERATE_CPP_SCRIPT_MODE)
    set (intermediate_dir ${DIR}/intermediate)
    set (intermediate_output "${intermediate_dir}/${FILENAME}")

-   if (COMPILER_ID STREQUAL "Clang")
+   if (COMPILER_ID MATCHES "Clang")
        set (pragma_push "#pragma clang diagnostic push\n")
        set (pragma_pop "#pragma clang diagnostic pop\n")
        set (pragma_disable_warnings "#pragma clang diagnostic ignored \"-Weverything\"\n")
contrib/librdkafka (vendored)
@@ -1 +1 @@
-Subproject commit 4ffe54b4f59ee5ae3767f9f25dc14651a3384d62
+Subproject commit b0d91bd74abb5f0e1ee972d326a317ad610f6300
@@ -3,20 +3,30 @@ set(RDKAFKA_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/librdkafka/src)
set(SRCS
    ${RDKAFKA_SOURCE_DIR}/crc32c.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_zstd.c
    # ${RDKAFKA_SOURCE_DIR}/lz4.c
    # ${RDKAFKA_SOURCE_DIR}/lz4frame.c
    # ${RDKAFKA_SOURCE_DIR}/lz4hc.c
    # ${RDKAFKA_SOURCE_DIR}/rdxxhash.c
    # ${RDKAFKA_SOURCE_DIR}/regexp.c
    ${RDKAFKA_SOURCE_DIR}/rdaddr.c
    ${RDKAFKA_SOURCE_DIR}/rdavl.c
    ${RDKAFKA_SOURCE_DIR}/rdbuf.c
    ${RDKAFKA_SOURCE_DIR}/rdcrc32.c
    ${RDKAFKA_SOURCE_DIR}/rddl.c
    ${RDKAFKA_SOURCE_DIR}/rdfnv1a.c
    ${RDKAFKA_SOURCE_DIR}/rdhdrhistogram.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_admin.c # looks optional
    ${RDKAFKA_SOURCE_DIR}/rdkafka_assignor.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_aux.c # looks optional
    ${RDKAFKA_SOURCE_DIR}/rdkafka_background.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_broker.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_buf.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_cert.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_cgrp.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_conf.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_coord.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_error.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_event.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_feature.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_idempotence.c
@@ -24,6 +34,7 @@ set(SRCS
    ${RDKAFKA_SOURCE_DIR}/rdkafka_metadata.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_metadata_cache.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_mock.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_mock_cgrp.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_mock_handlers.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_msg.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_msgset_reader.c
@@ -38,9 +49,11 @@ set(SRCS
    ${RDKAFKA_SOURCE_DIR}/rdkafka_request.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_roundrobin_assignor.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl.c
    # ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_cyrus.c # needed to support Kerberos, requires cyrus-sasl
    ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_oauthbearer.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_plain.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_scram.c
    # ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_win32.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_ssl.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_subscription.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_timer.c
@@ -48,6 +61,7 @@ set(SRCS
    ${RDKAFKA_SOURCE_DIR}/rdkafka_transport.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_interceptor.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_header.c
    ${RDKAFKA_SOURCE_DIR}/rdkafka_txnmgr.c
    ${RDKAFKA_SOURCE_DIR}/rdlist.c
    ${RDKAFKA_SOURCE_DIR}/rdlog.c
    ${RDKAFKA_SOURCE_DIR}/rdmurmur2.c
@@ -80,7 +80,9 @@ RUN apt-get --allow-unauthenticated update -y \
        pigz \
        moreutils \
        libcctz-dev \
-        libldap2-dev
+        libldap2-dev \
+        libsasl2-dev \
+        heimdal-multidev
@@ -32,8 +32,8 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
    ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/; \
    ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/; \
    ln -s /usr/lib/llvm-9/bin/llvm-symbolizer /usr/bin/llvm-symbolizer; \
-    if [ -n $USE_DATABASE_ATOMIC ] && [ $USE_DATABASE_ATOMIC -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/database_atomic_configd.xml /etc/clickhouse-server/config.d/; fi; \
-    if [ -n $USE_DATABASE_ATOMIC ] && [ $USE_DATABASE_ATOMIC -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/; fi; \
+    if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/database_atomic_configd.xml /etc/clickhouse-server/config.d/; fi; \
+    if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/; fi; \
    echo "TSAN_OPTIONS='verbosity=1000 halt_on_error=1 history_size=7'" >> /etc/environment; \
    echo "TSAN_SYMBOLIZER_PATH=/usr/lib/llvm-8/bin/llvm-symbolizer" >> /etc/environment; \
    echo "UBSAN_OPTIONS='print_stacktrace=1'" >> /etc/environment; \
@@ -78,9 +78,9 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
    ln -s /usr/share/clickhouse-test/config/server.key /etc/clickhouse-server/; \
    ln -s /usr/share/clickhouse-test/config/server.crt /etc/clickhouse-server/; \
    ln -s /usr/share/clickhouse-test/config/dhparam.pem /etc/clickhouse-server/; \
-    if [ -n $USE_POLYMORPHIC_PARTS ] && [ $USE_POLYMORPHIC_PARTS -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/polymorphic_parts.xml /etc/clickhouse-server/config.d/; fi; \
-    if [ -n $USE_DATABASE_ATOMIC ] && [ $USE_DATABASE_ATOMIC -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/database_atomic_configd.xml /etc/clickhouse-server/config.d/; fi; \
-    if [ -n $USE_DATABASE_ATOMIC ] && [ $USE_DATABASE_ATOMIC -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/; fi; \
+    if [[ -n "$USE_POLYMORPHIC_PARTS" ]] && [[ "$USE_POLYMORPHIC_PARTS" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/polymorphic_parts.xml /etc/clickhouse-server/config.d/; fi; \
+    if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/database_atomic_configd.xml /etc/clickhouse-server/config.d/; fi; \
+    if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/; fi; \
    ln -sf /usr/share/clickhouse-test/config/client_config.xml /etc/clickhouse-client/config.xml; \
    service zookeeper start; sleep 5; \
    service clickhouse-server start && sleep 5 && clickhouse-test --testname --shard --zookeeper $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
@@ -586,11 +586,11 @@ If the table doesn’t exist, ClickHouse will create it. If the structure of the
</query_log>
```

-## query\_thread\_log {#server_configuration_parameters-query-thread-log}
+## query\_thread\_log {#server_configuration_parameters-query_thread_log}

Setting for logging threads of queries received with the [log\_query\_threads=1](../settings/settings.md#settings-log-query-threads) setting.

-Queries are logged in the [system.query\_thread\_log](../../operations/system-tables.md#system_tables-query-thread-log) table, not in a separate file. You can change the name of the table in the `table` parameter (see below).
+Queries are logged in the [system.query\_thread\_log](../../operations/system-tables.md#system_tables-query_thread_log) table, not in a separate file. You can change the name of the table in the `table` parameter (see below).

Use the following parameters to configure logging:
@@ -598,7 +598,7 @@ log_queries_min_type='EXCEPTION_WHILE_PROCESSING'

Setting up query threads logging.

-Queries’ threads runned by ClickHouse with this setup are logged according to the rules in the [query\_thread\_log](../server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server configuration parameter.
+Queries’ threads run by ClickHouse with this setup are logged according to the rules in the [query\_thread\_log](../server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) server configuration parameter.

Example:
@@ -5,7 +5,7 @@ toc_title: System Tables

# System Tables {#system-tables}

-## Introduction
+## Introduction {#system-tables-introduction}

System tables provide information about:

@@ -18,9 +18,12 @@ System tables:
- Available only for reading data.
- Can't be dropped or altered, but can be detached.

-The `metric_log`, `query_log`, `query_thread_log`, `trace_log` system tables store data in a storage filesystem. Other system tables store their data in RAM. ClickHouse server creates such system tables at the start.
+Most system tables store their data in RAM. The ClickHouse server creates such system tables at start-up.

-### Sources of System Metrics
+The [metric_log](#system_tables-metric_log), [query_log](#system_tables-query_log), [query_thread_log](#system_tables-query_thread_log), and [trace_log](#system_tables-trace_log) system tables store data in the storage filesystem. You can alter them or remove them from disk manually. If you remove one of these tables from disk, the ClickHouse server creates it again at the time of the next data write. The storage period for these tables is not limited, and the ClickHouse server doesn't delete their data automatically; you need to organize the removal of outdated logs yourself, for example with [TTL](../sql-reference/statements/alter.md#manipulations-with-table-ttl) settings.
+
+### Sources of System Metrics {#system-tables-sources-of-system-metrics}

For collecting system metrics ClickHouse server uses:
@ -587,97 +590,150 @@ Columns:
|
||||
- `source_file` (LowCardinality(String)) — Source file from which the logging was done.
|
||||
- `source_line` (UInt64) — Source line from which the logging was done.
|
||||
|
||||
## system.query\_log {#system_tables-query_log}
|
||||
## system.query_log {#system_tables-query_log}
|
||||
|
||||
Contains information about execution of queries. For each query, you can see processing start time, duration of processing, error messages and other information.
|
||||
Contains information about executed queries, for example, start time, duration of processing, error messages.
|
||||
|
||||
!!! note "Note"
|
||||
The table doesn’t contain input data for `INSERT` queries.
|
||||
|
||||
ClickHouse creates this table only if the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server parameter is specified. This parameter sets the logging rules, such as the logging interval or the name of the table the queries will be logged in.
|
||||
You can change settings of queries logging in the [query_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) section of the server configuration.
|
||||
|
||||
To enable query logging, set the [log\_queries](settings/settings.md#settings-log-queries) parameter to 1. For details, see the [Settings](settings/settings.md) section.
|
||||
You can disable queries logging by setting [log_queries = 0](settings/settings.md#settings-log-queries). We don't recommend to turn off logging because information in this table is important for solving issues.
|
||||
|
||||
The flushing period of logs is set in `flush_interval_milliseconds` parameter of the [query_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server settings section. To force flushing logs, use the [SYSTEM FLUSH LOGS](../sql-reference/statements/system.md#query_language-system-flush_logs) query.
|
||||
|
||||
ClickHouse doesn't delete logs from the table automatically. See [Introduction](#system-tables-introduction) for more details.
|
||||
|
||||
The `system.query_log` table registers two kinds of queries:
|
||||
|
||||
1. Initial queries that were run directly by the client.
|
||||
2. Child queries that were initiated by other queries (for distributed query execution). For these types of queries, information about the parent queries is shown in the `initial_*` columns.
|
||||
|
||||
Each query creates one or two rows in the `query_log` table, depending on the status (see the `type` column) of the query:
|
||||
|
||||
1. If the query execution was successful, two rows with the `QueryStart` and `QueryFinish` types are created .
|
||||
2. If an error occurred during query processing, two events with the `QueryStart` and `ExceptionWhileProcessing` types are created .
|
||||
3. If an error occurred before launching the query, a single event with the `ExceptionBeforeStart` type is created.
|
||||
|
||||
Columns:
|
||||
|
||||
- `type` (`Enum8`) — Type of event that occurred when executing the query. Values:
|
||||
- `type` ([Enum8](../sql-reference/data-types/enum.md)) — Type of an event that occurred when executing the query. Values:
|
||||
- `'QueryStart' = 1` — Successful start of query execution.
|
||||
- `'QueryFinish' = 2` — Successful end of query execution.
|
||||
- `'ExceptionBeforeStart' = 3` — Exception before the start of query execution.
|
||||
- `'ExceptionWhileProcessing' = 4` — Exception during the query execution.
|
||||
- `event_date` (Date) — Query starting date.
|
||||
- `event_time` (DateTime) — Query starting time.
|
||||
- `query_start_time` (DateTime) — Start time of query execution.
|
||||
- `query_duration_ms` (UInt64) — Duration of query execution.
|
||||
- `read_rows` (UInt64) — Number of read rows.
|
||||
- `read_bytes` (UInt64) — Number of read bytes.
|
||||
- `written_rows` (UInt64) — For `INSERT` queries, the number of written rows. For other queries, the column value is 0.
|
||||
- `written_bytes` (UInt64) — For `INSERT` queries, the number of written bytes. For other queries, the column value is 0.
|
||||
- `result_rows` (UInt64) — Number of rows in the result.
|
||||
- `result_bytes` (UInt64) — Number of bytes in the result.
|
||||
- `memory_usage` (UInt64) — Memory consumption by the query.
|
||||
- `query` (String) — Query string.
|
||||
- `exception` (String) — Exception message.
|
||||
- `stack_trace` (String) — Stack trace (a list of methods called before the error occurred). An empty string, if the query is completed successfully.
|
||||
- `is_initial_query` (UInt8) — Query type. Possible values:
|
||||
- `event_date` ([Date](../sql-reference/data-types/date.md)) — Query starting date.
|
||||
- `event_time` ([DateTime](../sql-reference/data-types/datetime.md)) — Query starting time.
|
||||
- `query_start_time` ([DateTime](../sql-reference/data-types/datetime.md)) — Start time of query execution.
|
||||
- `query_duration_ms` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution in milliseconds.
|
||||
- `read_rows` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number or rows read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_rows` includes the total number of rows read at all replicas. Each replica sends it's `read_rows` value, and the server-initiator of the query summarize all received and local values. The cache volumes doesn't affect this value.
|
||||
- `read_bytes` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number or bytes read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_bytes` includes the total number of rows read at all replicas. Each replica sends it's `read_bytes` value, and the server-initiator of the query summarize all received and local values. The cache volumes doesn't affect this value.
|
||||
- `written_rows` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the number of written rows. For other queries, the column value is 0.
|
||||
- `written_bytes` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the number of written bytes. For other queries, the column value is 0.
|
||||
- `result_rows` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of rows in a result of the `SELECT` query, or a number of rows in the `INSERT` query.
|
||||
- `result_bytes` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — RAM volume in bytes used to store a query result.
|
||||
- `memory_usage` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Memory consumption by the query.
|
||||
- `query` ([String](../sql-reference/data-types/string.md)) — Query string.
|
||||
- `exception` ([String](../sql-reference/data-types/string.md)) — Exception message.
|
||||
- `exception_code` ([Int32](../sql-reference/data-types/int-uint.md)) — Code of an exception.
|
||||
- `stack_trace` ([String](../sql-reference/data-types/string.md)) — [Stack trace](https://en.wikipedia.org/wiki/Stack_trace). An empty string, if the query was completed successfully.
|
||||
- `is_initial_query` ([UInt8](../sql-reference/data-types/int-uint.md)) — Query type. Possible values:
|
||||
- 1 — Query was initiated by the client.
|
||||
- 0 — Query was initiated by another query for distributed query execution.
|
||||
- `user` (String) — Name of the user who initiated the current query.
|
||||
- `query_id` (String) — ID of the query.
|
||||
- `address` (IPv6) — IP address that was used to make the query.
|
||||
- `port` (UInt16) — The client port that was used to make the query.
|
||||
- `initial_user` (String) — Name of the user who ran the initial query (for distributed query execution).
|
||||
- `initial_query_id` (String) — ID of the initial query (for distributed query execution).
|
||||
- `initial_address` (IPv6) — IP address that the parent query was launched from.
|
||||
- `initial_port` (UInt16) — The client port that was used to make the parent query.
|
||||
- `interface` (UInt8) — Interface that the query was initiated from. Possible values:
|
||||
- 0 — Query was initiated by another query as part of distributed query execution.
|
||||
- `user` ([String](../sql-reference/data-types/string.md)) — Name of the user who initiated the current query.
|
||||
- `query_id` ([String](../sql-reference/data-types/string.md)) — ID of the query.
|
||||
- `address` ([IPv6](../sql-reference/data-types/domains/ipv6.md)) — IP address that was used to make the query.
|
||||
- `port` ([UInt16](../sql-reference/data-types/int-uint.md)) — The client port that was used to make the query.
|
||||
- `initial_user` ([String](../sql-reference/data-types/string.md)) — Name of the user who ran the initial query (for distributed query execution).
|
||||
- `initial_query_id` ([String](../sql-reference/data-types/string.md)) — ID of the initial query (for distributed query execution).
|
||||
- `initial_address` ([IPv6](../sql-reference/data-types/domains/ipv6.md)) — IP address that the parent query was launched from.
|
||||
- `initial_port` ([UInt16](../sql-reference/data-types/int-uint.md)) — The client port that was used to make the parent query.
|
||||
- `interface` ([UInt8](../sql-reference/data-types/int-uint.md)) — Interface that the query was initiated from. Possible values:
|
||||
- 1 — TCP.
|
||||
- 2 — HTTP.
|
||||
- `os_user` (String) — OS’s username who runs [clickhouse-client](../interfaces/cli.md).
|
||||
- `client_hostname` (String) — Hostname of the client machine where the [clickhouse-client](../interfaces/cli.md) or another TCP client is run.
|
||||
- `client_name` (String) — The [clickhouse-client](../interfaces/cli.md) or another TCP client name.
|
||||
- `client_revision` (UInt32) — Revision of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
|
||||
- `client_version_major` (UInt32) — Major version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
|
||||
- `client_version_minor` (UInt32) — Minor version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
|
||||
- `client_version_patch` (UInt32) — Patch component of the [clickhouse-client](../interfaces/cli.md) or another TCP client version.
|
||||
- `os_user` ([String](../sql-reference/data-types/string.md)) — Operating system username who runs [clickhouse-client](../interfaces/cli.md).
|
||||
- `client_hostname` ([String](../sql-reference/data-types/string.md)) — Hostname of the client machine where the [clickhouse-client](../interfaces/cli.md) or another TCP client is run.
|
||||
- `client_name` ([String](../sql-reference/data-types/string.md)) — The [clickhouse-client](../interfaces/cli.md) or another TCP client name.
|
||||
- `client_revision` ([UInt32](../sql-reference/data-types/int-uint.md)) — Revision of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
|
||||
- `client_version_major` ([UInt32](../sql-reference/data-types/int-uint.md)) — Major version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
|
||||
- `client_version_minor` ([UInt32](../sql-reference/data-types/int-uint.md)) — Minor version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
|
||||
- `client_version_patch` ([UInt32](../sql-reference/data-types/int-uint.md)) — Patch component of the [clickhouse-client](../interfaces/cli.md) or another TCP client version.
|
||||
- `http_method` (UInt8) — HTTP method that initiated the query. Possible values:
|
||||
- 0 — The query was launched from the TCP interface.
|
||||
- 1 — `GET` method was used.
|
||||
- 2 — `POST` method was used.
|
||||
- `http_user_agent` (String) — The `UserAgent` header passed in the HTTP request.
|
||||
- `quota_key` (String) — The “quota key” specified in the [quotas](quotas.md) setting (see `keyed`).
|
||||
- `revision` (UInt32) — ClickHouse revision.
|
||||
- `thread_numbers` (Array(UInt32)) — Number of threads that are participating in query execution.
|
||||
- `ProfileEvents.Names` (Array(String)) — Counters that measure different metrics. The description of them could be found in the table [system.events](#system_tables-events)
|
||||
- `ProfileEvents.Values` (Array(UInt64)) — Values of metrics that are listed in the `ProfileEvents.Names` column.
|
||||
- `Settings.Names` (Array(String)) — Names of settings that were changed when the client ran the query. To enable logging changes to settings, set the `log_query_settings` parameter to 1.
|
||||
- `Settings.Values` (Array(String)) — Values of settings that are listed in the `Settings.Names` column.
|
||||
- `http_user_agent` ([String](../sql-reference/data-types/string.md)) — The `UserAgent` header passed in the HTTP request.
|
||||
- `quota_key` ([String](../sql-reference/data-types/string.md)) — The “quota key” specified in the [quotas](quotas.md) setting (see `keyed`).
|
||||
- `revision` ([UInt32](../sql-reference/data-types/int-uint.md)) — ClickHouse revision.
|
||||
- `thread_numbers` ([Array(UInt32)](../sql-reference/data-types/array.md)) — Number of threads that are participating in query execution.
|
||||
- `ProfileEvents.Names` ([Array(String)](../sql-reference/data-types/array.md)) — Counters that measure different metrics. The description of them could be found in the table [system.events](#system_tables-events)
|
||||
- `ProfileEvents.Values` ([Array(UInt64)](../sql-reference/data-types/array.md)) — Values of metrics that are listed in the `ProfileEvents.Names` column.
|
||||
- `Settings.Names` ([Array(String)](../sql-reference/data-types/array.md)) — Names of settings that were changed when the client ran the query. To enable logging changes to settings, set the `log_query_settings` parameter to 1.
|
||||
- `Settings.Values` ([Array(String)](../sql-reference/data-types/array.md)) — Values of settings that are listed in the `Settings.Names` column.

Each query creates one or two rows in the `query_log` table, depending on the status of the query:

1. If the query execution is successful, two events with types 1 and 2 are created (see the `type` column).
2. If an error occurred during query processing, two events with types 1 and 4 are created.
3. If an error occurred before launching the query, a single event with type 3 is created.

By default, logs are added to the table at intervals of 7.5 seconds. You can set this interval in the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server setting (see the `flush_interval_milliseconds` parameter). To flush the logs forcibly from the memory buffer into the table, use the `SYSTEM FLUSH LOGS` query.
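
For instance, a hedged sketch of forcing a flush and checking the freshest entries (assuming query logging is enabled):

``` sql
-- Flush the in-memory buffer, then look at the latest log rows.
SYSTEM FLUSH LOGS;

SELECT type, event_time, query_duration_ms, query
FROM system.query_log
ORDER BY event_time DESC
LIMIT 5;
```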

**Example**

``` sql
SELECT * FROM system.query_log LIMIT 1 FORMAT Vertical;
```

``` text
Row 1:
──────
type: QueryStart
event_date: 2020-05-13
event_time: 2020-05-13 14:02:28
query_start_time: 2020-05-13 14:02:28
query_duration_ms: 0
read_rows: 0
read_bytes: 0
written_rows: 0
written_bytes: 0
result_rows: 0
result_bytes: 0
memory_usage: 0
query: SELECT 1
exception_code: 0
exception:
stack_trace:
is_initial_query: 1
user: default
query_id: 5e834082-6f6d-4e34-b47b-cd1934f4002a
address: ::ffff:127.0.0.1
port: 57720
initial_user: default
initial_query_id: 5e834082-6f6d-4e34-b47b-cd1934f4002a
initial_address: ::ffff:127.0.0.1
initial_port: 57720
interface: 1
os_user: bayonet
client_hostname: clickhouse.ru-central1.internal
client_name: ClickHouse client
client_revision: 54434
client_version_major: 20
client_version_minor: 4
client_version_patch: 1
http_method: 0
http_user_agent:
quota_key:
revision: 54434
thread_ids: []
ProfileEvents.Names: []
ProfileEvents.Values: []
Settings.Names: ['use_uncompressed_cache','load_balancing','log_queries','max_memory_usage']
Settings.Values: ['0','random','1','10000000000']
```

When the table is deleted manually, it will be automatically created on the fly. Note that all the previous logs will be deleted.

!!! note "Note"
    The storage period for logs is unlimited. Logs aren’t automatically deleted from the table. You need to organize the removal of outdated logs yourself.

**See Also**

- [system.query_thread_log](#system_tables-query_thread_log) — This table contains information about each query execution thread.

You can specify an arbitrary partitioning key for the `system.query_log` table in the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server setting (see the `partition_by` parameter).
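
As a quick sanity check (an illustrative query, not part of the original text), you can confirm which partitioning key actually took effect:

``` sql
-- The PARTITION BY clause reflects the partition_by server setting.
SHOW CREATE TABLE system.query_log;
```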

## system.query\_thread\_log {#system_tables-query-thread-log}
## system.query_thread_log {#system_tables-query_thread_log}

The table contains information about each query execution thread.

ClickHouse creates this table only if the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server parameter is specified. This parameter sets the logging rules, such as the logging interval or the name of the table the queries will be logged in.
ClickHouse creates this table only if the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) server parameter is specified. This parameter sets the logging rules, such as the logging interval or the name of the table the queries will be logged in.

To enable query logging, set the [log\_query\_threads](settings/settings.md#settings-log-query-threads) parameter to 1. For details, see the [Settings](settings/settings.md) section.
@ -729,14 +785,14 @@ Columns:

- `ProfileEvents.Names` (Array(String)) — Counters that measure different metrics for this thread. Their descriptions can be found in the table [system.events](#system_tables-events).
- `ProfileEvents.Values` (Array(UInt64)) — Values of metrics for this thread that are listed in the `ProfileEvents.Names` column.

By default, logs are added to the table at intervals of 7.5 seconds. You can set this interval in the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server setting (see the `flush_interval_milliseconds` parameter). To flush the logs forcibly from the memory buffer into the table, use the `SYSTEM FLUSH LOGS` query.
By default, logs are added to the table at intervals of 7.5 seconds. You can set this interval in the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) server setting (see the `flush_interval_milliseconds` parameter). To flush the logs forcibly from the memory buffer into the table, use the `SYSTEM FLUSH LOGS` query.

When the table is deleted manually, it will be automatically created on the fly. Note that all the previous logs will be deleted.

!!! note "Note"
    The storage period for logs is unlimited. Logs aren’t automatically deleted from the table. You need to organize the removal of outdated logs yourself.

You can specify an arbitrary partitioning key for the `system.query_thread_log` table in the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server setting (see the `partition_by` parameter).
You can specify an arbitrary partitioning key for the `system.query_thread_log` table in the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) server setting (see the `partition_by` parameter).
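
A hedged sketch of using this table to see how parallel a recent query was (assumes `log_query_threads = 1` on the session that ran the queries):

``` sql
SYSTEM FLUSH LOGS;

-- Count the threads that participated in each of today's queries.
SELECT query_id, count() AS threads
FROM system.query_thread_log
WHERE event_date = today()
GROUP BY query_id
ORDER BY threads DESC
LIMIT 10;
```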
## system.trace\_log {#system_tables-trace_log}
@ -574,11 +574,11 @@ ClickHouse will check the conditions `min_part_size` and `min_part_size_rat

</query_log>
```

## query\_thread\_log {#server_configuration_parameters-query-thread-log}
## query\_thread\_log {#server_configuration_parameters-query_thread_log}

Configures logging of query execution threads for queries received with the [log\_query\_threads=1](../settings/settings.md#settings-log-query-threads) setting.

Queries are logged not to a separate file, but to the [system.query\_thread\_log](../../operations/server-configuration-parameters/settings.md#system_tables-query-thread-log) system table. You can change the name of this table in the `table` parameter (see below).
Queries are logged not to a separate file, but to the [system.query\_thread\_log](../../operations/server-configuration-parameters/settings.md#system_tables-query_thread_log) system table. You can change the name of this table in the `table` parameter (see below).

The following parameters are used to configure logging:
@ -536,7 +536,7 @@ log_queries=1

Enables logging of information about query execution threads.

Information about threads that execute queries received by ClickHouse with this setting enabled is logged according to the rules of the [query\_thread\_log](../server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server configuration parameter.
Information about threads that execute queries received by ClickHouse with this setting enabled is logged according to the rules of the [query\_thread\_log](../server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) server configuration parameter.

Example:
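
The example itself falls outside this hunk; a minimal illustrative snippet (my sketch, not the docs' original example) could be:

``` sql
SET log_query_threads = 1;

-- Rows for this query will appear in system.query_thread_log
-- after the next flush (or after SYSTEM FLUSH LOGS).
SELECT sum(number) FROM numbers(1000000);
```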
@ -1,4 +1,7 @@

# System tables {#sistemnye-tablitsy}
# System tables {#system-tables}

## Introduction {#system-tables-introduction}

System tables are used to implement part of the system's functionality and to provide access to information about how the system is working.
You cannot delete a system table (though you can perform DETACH).
@ -544,182 +547,156 @@ CurrentMetric_ReplicatedChecks: 0

- `source_file` (LowCardinality(String)) — Source file from which the log entry was made.
- `source_line` (UInt64) — Source line from which the log entry was made.

## system.query\_log {#system_tables-query_log}
## system.query_log {#system_tables-query_log}

Contains information about query execution. For each query, you can see the processing start time, the duration of processing, error messages, and other information.
Contains information about executed queries, for example, the processing start time, the duration of processing, and error messages.

!!! note "Note"
    The table does not contain input data for `INSERT` queries.

ClickHouse creates this table only if the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server configuration parameter is set. This parameter defines the logging rules, such as the logging interval or the name of the table that queries are logged to.
You can change the logging settings in the [query_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) section of the server configuration.

To enable logging, set the [log\_queries](settings/settings.md#settings-log-queries) parameter to 1. For details, see the [Settings](settings/settings.md#settings) section.
You can disable logging with the [log_queries = 0](settings/settings.md#settings-log-queries) setting. If possible, do not disable logging, because the information in this table is important for troubleshooting.

The period for flushing logs to the table is set by the `flush_interval_milliseconds` parameter in the [query_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) configuration section. To force flushing logs from the memory buffer into the table, use the [SYSTEM FLUSH LOGS](../sql-reference/statements/system.md#query_language-system-flush_logs) query.

ClickHouse does not delete logs from the table automatically. See [Introduction](#system-tables-introduction).

You can specify an arbitrary partitioning key for the `system.query_log` table in the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) configuration (the `partition_by` parameter).

If the table is deleted manually, it is automatically recreated on the fly. Note that all logs existing at the time of deletion will be removed.
The `system.query_log` table registers two kinds of queries:

1. Initial queries that were run directly by the client.
2. Child queries that were initiated by other queries (for distributed query execution). For child queries, information about the initial query is contained in the `initial_*` columns.

Depending on its status (the `type` column), each query creates one or two rows in the `query_log` table (see the sketch after this list):

1. If the query completed successfully, two events with the `QueryStart` and `QueryFinish` types are created.
2. If an error occurred during query processing, two events with the `QueryStart` and `ExceptionWhileProcessing` types are created.
3. If an error occurred before the query was launched, a single event with the `ExceptionBeforeStart` type is created.
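
A quick way to observe this status breakdown on a live server (an illustrative sketch, not from the original page):

``` sql
-- Distribution of event types over today's log entries.
SELECT type, count() AS events
FROM system.query_log
WHERE event_date = today()
GROUP BY type
ORDER BY events DESC;
```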

Columns:

- `type` (`Enum8`) — Type of event that occurred when executing the query. Values:
- `type` ([Enum8](../sql-reference/data-types/enum.md)) — Type of event that occurred when executing the query. Values:
    - `'QueryStart' = 1` — Successful start of query execution.
    - `'QueryFinish' = 2` — Successful end of query execution.
    - `'ExceptionBeforeStart' = 3` — Exception before the start of query processing.
    - `'ExceptionWhileProcessing' = 4` — Exception during query processing.
- `event_date` (Date) — Query start date.
- `event_time` (DateTime) — Query start time.
- `query_start_time` (DateTime) — Start time of query processing.
- `query_duration_ms` (UInt64) — Duration of query processing.
- `read_rows` (UInt64) — Number of rows read.
- `read_bytes` (UInt64) — Number of bytes read.
- `written_rows` (UInt64) — For `INSERT` queries, the number of rows written. For other queries, the column value is 0.
- `written_bytes` (UInt64) — For `INSERT` queries, the volume of written data in bytes. For other queries, the column value is 0.
- `result_rows` (UInt64) — Number of rows in the result.
- `result_bytes` (UInt64) — Size of the result in bytes.
- `memory_usage` (UInt64) — RAM consumed by the query.
- `query` (String) — Query text.
- `exception` (String) — Exception message, if the query ended with an exception.
- `stack_trace` (String) — Stack trace (a list of functions called in sequence before the error). An empty string if the query completed successfully.
- `is_initial_query` (UInt8) — Query kind. Possible values:
- `event_date` ([Date](../sql-reference/data-types/date.md)) — Query start date.
- `event_time` ([DateTime](../sql-reference/data-types/datetime.md)) — Query start time.
- `query_start_time` ([DateTime](../sql-reference/data-types/datetime.md)) — Start time of query processing.
- `query_duration_ms` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution in milliseconds.
- `read_rows` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of rows read from all tables and table functions participating in the query. Includes ordinary subqueries and subqueries for `IN` and `JOIN`. For distributed queries, `read_rows` includes the total number of rows read on all replicas. Each replica sends its own `read_rows` value, and the query initiator server sums all received and local values. Cache volumes are not taken into account.
- `read_bytes` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number of bytes read from all tables and table functions participating in the query. Includes ordinary subqueries and subqueries for `IN` and `JOIN`. For distributed queries, `read_bytes` includes the total number of bytes read on all replicas. Each replica sends its own `read_bytes` value, and the query initiator server sums all received and local values. Cache volumes are not taken into account.
- `written_rows` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the number of rows written. For other queries, the column value is 0.
- `written_bytes` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — For `INSERT` queries, the volume of written data in bytes. For other queries, the column value is 0.
- `result_rows` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of rows in the result of a `SELECT` query, or the number of rows in an `INSERT` query.
- `result_bytes` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — Volume of RAM in bytes used to store the query result.
- `memory_usage` ([UInt64](../sql-reference/data-types/int-uint.md#uint-ranges)) — RAM consumed by the query.
- `query` ([String](../sql-reference/data-types/string.md)) — Query text.
- `exception` ([String](../sql-reference/data-types/string.md)) — Exception message, if the query ended with an exception.
- `exception_code` ([Int32](../sql-reference/data-types/int-uint.md)) — Exception code.
- `stack_trace` ([String](../sql-reference/data-types/string.md)) — [Stack trace](https://en.wikipedia.org/wiki/Stack_trace). An empty string if the query completed successfully.
- `is_initial_query` ([UInt8](../sql-reference/data-types/int-uint.md)) — Query kind. Possible values:
    - 1 — The query was initiated by the client.
    - 0 — The query was initiated by another query in a distributed query.
- `user` (String) — User who ran the current query.
- `query_id` (String) — Query ID.
- `address` (IPv6) — IP address the query came from.
- `port` (UInt16) — Client port used to make the query.
- `initial_user` (String) — User who ran the initial query (for distributed query execution).
- `initial_query_id` (String) — ID of the parent query.
- `initial_address` (IPv6) — IP address the parent query came from.
- `initial_port` (UInt16) — Client port used to make the parent query.
- `interface` (UInt8) — Interface the query was initiated from. Possible values:
    - 0 — The query was initiated by another query during distributed query execution.
- `user` ([String](../sql-reference/data-types/string.md)) — User who ran the current query.
- `query_id` ([String](../sql-reference/data-types/string.md)) — Query ID.
- `address` ([IPv6](../sql-reference/data-types/domains/ipv6.md)) — IP address the query came from.
- `port` ([UInt16](../sql-reference/data-types/int-uint.md)) — Client port used to make the query.
- `initial_user` ([String](../sql-reference/data-types/string.md)) — User who ran the initial query (for distributed query execution).
- `initial_query_id` ([String](../sql-reference/data-types/string.md)) — ID of the parent query.
- `initial_address` ([IPv6](../sql-reference/data-types/domains/ipv6.md)) — IP address the parent query came from.
- `initial_port` ([UInt16](../sql-reference/data-types/int-uint.md)) — Client port used to make the parent query.
- `interface` ([UInt8](../sql-reference/data-types/int-uint.md)) — Interface the query was initiated from. Possible values:
    - 1 — TCP.
    - 2 — HTTP.
- `os_user` (String) — Name of the OS user who ran [clickhouse-client](../interfaces/cli.md).
- `client_hostname` (String) — Hostname of the machine from which [clickhouse-client](../interfaces/cli.md) or another TCP client connected.
- `client_name` (String) — The [clickhouse-client](../interfaces/cli.md) or another TCP client name.
- `client_revision` (UInt32) — Revision of [clickhouse-client](../interfaces/cli.md) or another TCP client.
- `client_version_major` (UInt32) — Major version of [clickhouse-client](../interfaces/cli.md) or another TCP client.
- `client_version_minor` (UInt32) — Minor version of [clickhouse-client](../interfaces/cli.md) or another TCP client.
- `client_version_patch` (UInt32) — Patch component of the [clickhouse-client](../interfaces/cli.md) or another TCP client version.
- `http_method` (UInt8) — HTTP method that initiated the query. Possible values:
- `os_user` ([String](../sql-reference/data-types/string.md)) — Name of the operating system user who ran [clickhouse-client](../interfaces/cli.md).
- `client_hostname` ([String](../sql-reference/data-types/string.md)) — Hostname of the machine from which [clickhouse-client](../interfaces/cli.md) or another TCP client connected.
- `client_name` ([String](../sql-reference/data-types/string.md)) — The [clickhouse-client](../interfaces/cli.md) or another TCP client name.
- `client_revision` ([UInt32](../sql-reference/data-types/int-uint.md)) — Revision of [clickhouse-client](../interfaces/cli.md) or another TCP client.
- `client_version_major` ([UInt32](../sql-reference/data-types/int-uint.md)) — Major version of [clickhouse-client](../interfaces/cli.md) or another TCP client.
- `client_version_minor` ([UInt32](../sql-reference/data-types/int-uint.md)) — Minor version of [clickhouse-client](../interfaces/cli.md) or another TCP client.
- `client_version_patch` ([UInt32](../sql-reference/data-types/int-uint.md)) — Patch component of the [clickhouse-client](../interfaces/cli.md) or another TCP client version.
- `http_method` ([UInt8](../sql-reference/data-types/int-uint.md)) — HTTP method that initiated the query. Possible values:
    - 0 — The query was launched from the TCP interface.
    - 1 — `GET`.
    - 2 — `POST`.
- `http_user_agent` (String) — The `UserAgent` HTTP header.
- `quota_key` (String) — The “quota key” specified in the [quotas](quotas.md) settings (see `keyed`).
- `revision` (UInt32) — ClickHouse revision.
- `thread_numbers` (Array(UInt32)) — Number of threads participating in query execution.
- `ProfileEvents.Names` (Array(String)) — Counters that measure different metrics. Their descriptions can be found in the table [system.events](#system_tables-events).
- `ProfileEvents.Values` (Array(UInt64)) — Values of the metrics listed in the `ProfileEvents.Names` column.
- `Settings.Names` (Array(String)) — Names of settings that were changed when the client ran the query. To enable logging of settings changes, set the `log_query_settings` parameter to 1.
- `Settings.Values` (Array(String)) — Values of the settings listed in the `Settings.Names` column.
- `http_user_agent` ([String](../sql-reference/data-types/string.md)) — The `UserAgent` HTTP header.
- `quota_key` ([String](../sql-reference/data-types/string.md)) — The “quota key” specified in the [quotas](quotas.md) settings (see `keyed`).
- `revision` ([UInt32](../sql-reference/data-types/int-uint.md)) — ClickHouse revision.
- `thread_numbers` ([Array(UInt32)](../sql-reference/data-types/array.md)) — Number of threads participating in query execution.
- `ProfileEvents.Names` ([Array(String)](../sql-reference/data-types/array.md)) — Counters that measure different metrics. Their descriptions can be found in the table [system.events](#system_tables-events).
- `ProfileEvents.Values` ([Array(UInt64)](../sql-reference/data-types/array.md)) — Values of the metrics listed in the `ProfileEvents.Names` column.
- `Settings.Names` ([Array(String)](../sql-reference/data-types/array.md)) — Names of settings that were changed when the client ran the query. To enable logging of settings changes, set the `log_query_settings` parameter to 1.
- `Settings.Values` ([Array(String)](../sql-reference/data-types/array.md)) — Values of the settings listed in the `Settings.Names` column.

Each query creates one or two rows in the `query_log` table, depending on the status of the query:

1. If the query completed successfully, two events with types 1 and 2 are created (see the `type` column).
2. If an error occurred during query processing, two events with types 1 and 4 are created.
3. If an error occurred before the query was launched, a single event with type 3 is created.

By default, rows are added to the log table at intervals of 7.5 seconds. You can set this interval in the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server configuration parameter (see the `flush_interval_milliseconds` parameter). To force flushing logs from the memory buffer into the table, use the `SYSTEM FLUSH LOGS` query.

**Example**

``` sql
SELECT * FROM system.query_log LIMIT 1 FORMAT Vertical;
```
``` text
Row 1:
──────
type: QueryStart
event_date: 2020-05-13
event_time: 2020-05-13 14:02:28
query_start_time: 2020-05-13 14:02:28
query_duration_ms: 0
read_rows: 0
read_bytes: 0
written_rows: 0
written_bytes: 0
result_rows: 0
result_bytes: 0
memory_usage: 0
query: SELECT 1
exception_code: 0
exception:
stack_trace:
is_initial_query: 1
user: default
query_id: 5e834082-6f6d-4e34-b47b-cd1934f4002a
address: ::ffff:127.0.0.1
port: 57720
initial_user: default
initial_query_id: 5e834082-6f6d-4e34-b47b-cd1934f4002a
initial_address: ::ffff:127.0.0.1
initial_port: 57720
interface: 1
os_user: bayonet
client_hostname: clickhouse.ru-central1.internal
client_name: ClickHouse client
client_revision: 54434
client_version_major: 20
client_version_minor: 4
client_version_patch: 1
http_method: 0
http_user_agent:
quota_key:
revision: 54434
thread_ids: []
ProfileEvents.Names: []
ProfileEvents.Values: []
Settings.Names: ['use_uncompressed_cache','load_balancing','log_queries','max_memory_usage']
Settings.Values: ['0','random','1','10000000000']
```

If the table is deleted manually, it is automatically recreated on the fly. Note that all logs existing at the time of deletion will be removed.

!!! note "Note"
    The storage period for logs is unlimited. Logs are not deleted from the table automatically. You need to organize the removal of outdated logs yourself.

**See Also**

- [system.query_thread_log](#system_tables-query_thread_log) — This table contains information about each query execution thread.

You can specify an arbitrary partitioning key for the `system.query_log` table in the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) configuration (the `partition_by` parameter).
## system.query\_log {#system_tables-query_log}

Contains information about execution of queries. For each query, you can see the processing start time, the duration of processing, error messages and other information.

!!! note "Note"
    The table doesn’t contain input data for `INSERT` queries.

ClickHouse creates this table only if the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server parameter is specified. This parameter sets the logging rules, such as the logging interval or the name of the table the queries will be logged in.

To enable query logging, set the [log\_queries](settings/settings.md#settings-log-queries) parameter to 1. For details, see the [Settings](settings/settings.md) section.

The `system.query_log` table registers two kinds of queries (the sketch after this list shows how to tie them together):

1. Initial queries that were run directly by the client.
2. Child queries that were initiated by other queries (for distributed query execution). For these types of queries, information about the parent queries is shown in the `initial_*` columns.
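
For example, a hedged sketch of finding an initial query together with its children via `initial_query_id` (the ID below is just the one from the sample output shown earlier):

``` sql
-- The initial query and any child queries it spawned on this node.
SELECT is_initial_query, query_id, query
FROM system.query_log
WHERE initial_query_id = '5e834082-6f6d-4e34-b47b-cd1934f4002a'
  AND type = 'QueryFinish';
```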

Columns:

- `type` (`Enum8`) — Type of event that occurred when executing the query. Values:
    - `'QueryStart' = 1` — Successful start of query execution.
    - `'QueryFinish' = 2` — Successful end of query execution.
    - `'ExceptionBeforeStart' = 3` — Exception before the start of query execution.
    - `'ExceptionWhileProcessing' = 4` — Exception during the query execution.
- `event_date` (Date) — Query starting date.
- `event_time` (DateTime) — Query starting time.
- `query_start_time` (DateTime) — Start time of query execution.
- `query_duration_ms` (UInt64) — Duration of query execution.
- `read_rows` (UInt64) — Number of read rows.
- `read_bytes` (UInt64) — Number of read bytes.
- `written_rows` (UInt64) — For `INSERT` queries, the number of written rows. For other queries, the column value is 0.
- `written_bytes` (UInt64) — For `INSERT` queries, the number of written bytes. For other queries, the column value is 0.
- `result_rows` (UInt64) — Number of rows in the result.
- `result_bytes` (UInt64) — Number of bytes in the result.
- `memory_usage` (UInt64) — Memory consumption by the query.
- `query` (String) — Query string.
- `exception` (String) — Exception message.
- `stack_trace` (String) — Stack trace (a list of methods called before the error occurred). An empty string, if the query is completed successfully.
- `is_initial_query` (UInt8) — Query type. Possible values:
    - 1 — Query was initiated by the client.
    - 0 — Query was initiated by another query for distributed query execution.
- `user` (String) — Name of the user who initiated the current query.
- `query_id` (String) — ID of the query.
- `address` (IPv6) — IP address that was used to make the query.
- `port` (UInt16) — The client port that was used to make the query.
- `initial_user` (String) — Name of the user who ran the initial query (for distributed query execution).
- `initial_query_id` (String) — ID of the initial query (for distributed query execution).
- `initial_address` (IPv6) — IP address that the parent query was launched from.
- `initial_port` (UInt16) — The client port that was used to make the parent query.
- `interface` (UInt8) — Interface that the query was initiated from. Possible values:
    - 1 — TCP.
    - 2 — HTTP.
- `os_user` (String) — OS username who runs [clickhouse-client](../interfaces/cli.md).
- `client_hostname` (String) — Hostname of the client machine where the [clickhouse-client](../interfaces/cli.md) or another TCP client is run.
- `client_name` (String) — The [clickhouse-client](../interfaces/cli.md) or another TCP client name.
- `client_revision` (UInt32) — Revision of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
- `client_version_major` (UInt32) — Major version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
- `client_version_minor` (UInt32) — Minor version of the [clickhouse-client](../interfaces/cli.md) or another TCP client.
- `client_version_patch` (UInt32) — Patch component of the [clickhouse-client](../interfaces/cli.md) or another TCP client version.
- `http_method` (UInt8) — HTTP method that initiated the query. Possible values:
    - 0 — The query was launched from the TCP interface.
    - 1 — `GET` method was used.
    - 2 — `POST` method was used.
- `http_user_agent` (String) — The `UserAgent` header passed in the HTTP request.
- `quota_key` (String) — The “quota key” specified in the [quotas](quotas.md) setting (see `keyed`).
- `revision` (UInt32) — ClickHouse revision.
- `thread_numbers` (Array(UInt32)) — Number of threads that are participating in query execution.
- `ProfileEvents.Names` (Array(String)) — Counters that measure different metrics. Their descriptions can be found in the table [system.events](#system_tables-events).
- `ProfileEvents.Values` (Array(UInt64)) — Values of metrics that are listed in the `ProfileEvents.Names` column (see the sketch after this list).
- `Settings.Names` (Array(String)) — Names of settings that were changed when the client ran the query. To enable logging changes to settings, set the `log_query_settings` parameter to 1.
- `Settings.Values` (Array(String)) — Values of settings that are listed in the `Settings.Names` column.
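
As an illustration of the paired `ProfileEvents` arrays (a sketch based on the column definitions above, not from the original page):

``` sql
-- Expand per-query profiling counters into (name, value) rows.
SELECT query_id, name, value
FROM system.query_log
ARRAY JOIN ProfileEvents.Names AS name, ProfileEvents.Values AS value
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 20;
```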

Each query creates one or two rows in the `query_log` table, depending on the status of the query:

1. If the query execution is successful, two events with types 1 and 2 are created (see the `type` column).
2. If an error occurred during query processing, two events with types 1 and 4 are created.
3. If an error occurred before launching the query, a single event with type 3 is created.

By default, logs are added to the table at intervals of 7.5 seconds. You can set this interval in the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server setting (see the `flush_interval_milliseconds` parameter). To flush the logs forcibly from the memory buffer into the table, use the `SYSTEM FLUSH LOGS` query.

When the table is deleted manually, it will be automatically created on the fly. Note that all the previous logs will be deleted.

!!! note "Note"
    The storage period for logs is unlimited. Logs aren’t automatically deleted from the table. You need to organize the removal of outdated logs yourself.

You can specify an arbitrary partitioning key for the `system.query_log` table in the [query\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-log) server setting (see the `partition_by` parameter).
## system.query\_thread\_log {#system_tables-query-thread-log}
## system.query_thread_log {#system_tables-query_thread_log}

Contains information about each thread that executes a query.

ClickHouse creates this table only if the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server configuration parameter is set. This parameter defines the logging rules, such as the logging interval or the name of the table that queries are logged to.
ClickHouse creates this table only if the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) server configuration parameter is set. This parameter defines the logging rules, such as the logging interval or the name of the table that queries are logged to.

To enable logging, set the [log\_query\_threads](settings/settings.md#settings-log-query-threads) parameter to 1. For details, see the [Settings](settings/settings.md#settings) section.
@ -770,16 +747,16 @@ ClickHouse creates this table only if the

- `ProfileEvents.Names` (Array(String)) — Counters that measure different metrics for this thread. Their descriptions can be found in the table [system.events](#system_tables-events).
- `ProfileEvents.Values` (Array(UInt64)) — Metrics for this thread that are listed in the `ProfileEvents.Names` column.

By default, rows are added to the log table at intervals of 7.5 seconds. You can set this interval in the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) server configuration parameter (see the `flush_interval_milliseconds` parameter). To force flushing logs from the memory buffer into the table, use the `SYSTEM FLUSH LOGS` query.
By default, rows are added to the log table at intervals of 7.5 seconds. You can set this interval in the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) server configuration parameter (see the `flush_interval_milliseconds` parameter). To force flushing logs from the memory buffer into the table, use the `SYSTEM FLUSH LOGS` query.

If the table is deleted manually, it is automatically recreated on the fly. Note that all logs existing at the time of deletion will be removed.

!!! note "Note"
    The storage period for logs is unlimited. Logs are not deleted from the table automatically. You need to organize the removal of outdated logs yourself.

You can specify an arbitrary partitioning key for the `system.query_thread_log` table in the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query-thread-log) configuration (the `partition_by` parameter).
You can specify an arbitrary partitioning key for the `system.query_thread_log` table in the [query\_thread\_log](server-configuration-parameters/settings.md#server_configuration_parameters-query_thread_log) configuration (the `partition_by` parameter).

## system.query_thread_log {#system_tables-query-thread-log}
## system.query_thread_log {#system_tables-query_thread_log}

Contains information about each query execution thread.
@ -1175,7 +1175,7 @@ private:
/// Poll for changes after a cancellation check, otherwise it never reached
/// because of progress updates from server.
if (connection->poll(poll_interval))
break;
break;
}

if (!receiveAndProcessPacket())

@ -1583,6 +1583,11 @@ private:
if (std::string::npos != embedded_stack_trace_pos && !config().getBool("stacktrace", false))
text.resize(embedded_stack_trace_pos);

/// If we probably have progress bar, we should add additional newline,
/// otherwise exception may display concatenated with the progress bar.
if (need_render_progress)
std::cerr << '\n';

std::cerr << "Received exception from server (version " << server_version << "):" << std::endl
<< "Code: " << e.code() << ". " << text << std::endl;
}
@ -4,6 +4,8 @@

#include <Common/ZooKeeper/ZooKeeper.h>
#include <Common/ZooKeeper/KeeperException.h>
#include <Common/setThreadName.h>

namespace DB
{

@ -177,7 +179,11 @@ void ClusterCopier::discoverTablePartitions(const ConnectionTimeouts & timeouts,
ThreadPool thread_pool(num_threads ? num_threads : 2 * getNumberOfPhysicalCPUCores());

for (const TaskShardPtr & task_shard : task_table.all_shards)
thread_pool.scheduleOrThrowOnError([this, timeouts, task_shard]() { discoverShardPartitions(timeouts, task_shard); });
thread_pool.scheduleOrThrowOnError([this, timeouts, task_shard]()
{
setThreadName("DiscoverPartns");
discoverShardPartitions(timeouts, task_shard);
});

LOG_DEBUG(log, "Waiting for {} setup jobs", thread_pool.active());
thread_pool.wait();

@ -609,8 +615,7 @@ TaskStatus ClusterCopier::tryMoveAllPiecesToDestinationTable(const TaskTable & t
size_t num_nodes = executeQueryOnCluster(
task_table.cluster_push,
query_alter_ast_string,
nullptr,
&settings_push,
settings_push,
PoolMode::GET_MANY,
ClusterExecutionMode::ON_EACH_NODE);

@ -638,8 +643,7 @@ TaskStatus ClusterCopier::tryMoveAllPiecesToDestinationTable(const TaskTable & t
UInt64 num_nodes = executeQueryOnCluster(
task_table.cluster_push,
query_deduplicate_ast_string,
nullptr,
&task_cluster->settings_push,
task_cluster->settings_push,
PoolMode::GET_MANY);

LOG_INFO(log, "Number of shard that executed OPTIMIZE DEDUPLICATE query successfully : {}", toString(num_nodes));

@ -818,8 +822,7 @@ bool ClusterCopier::tryDropPartitionPiece(
/// We have to drop partition_piece on each replica
size_t num_shards = executeQueryOnCluster(
cluster_push, query,
nullptr,
&settings_push,
settings_push,
PoolMode::GET_MANY,
ClusterExecutionMode::ON_EACH_NODE);

@ -1293,7 +1296,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
local_context.setSettings(task_cluster->settings_pull);
local_context.setSetting("skip_unavailable_shards", true);

Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_select_ast, local_context)->execute().in);
Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_select_ast, local_context)->execute().getInputStream());
count = (block) ? block.safeGetByPosition(0).column->getUInt(0) : 0;
}
@ -1356,9 +1359,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
String query = queryToString(create_query_push_ast);

LOG_DEBUG(log, "Create destination tables. Query: {}", query);
UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query,
create_query_push_ast, &task_cluster->settings_push,
PoolMode::GET_MANY);
UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query, task_cluster->settings_push, PoolMode::GET_MANY);
LOG_DEBUG(log, "Destination tables {} have been created on {} shards of {}", getQuotedTable(task_table.table_push), shards, task_table.cluster_push->getShardCount());
}

@ -1403,7 +1404,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
BlockIO io_select = InterpreterFactory::get(query_select_ast, context_select)->execute();
BlockIO io_insert = InterpreterFactory::get(query_insert_ast, context_insert)->execute();

input = io_select.in;
input = io_select.getInputStream();
output = io_insert.out;
}

@ -1479,9 +1480,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
String query = queryToString(create_query_push_ast);

LOG_DEBUG(log, "Create destination tables. Query: {}", query);
UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query,
create_query_push_ast, &task_cluster->settings_push,
PoolMode::GET_MANY);
UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query, task_cluster->settings_push, PoolMode::GET_MANY);
LOG_DEBUG(log, "Destination tables {} have been created on {} shards of {}", getQuotedTable(task_table.table_push), shards, task_table.cluster_push->getShardCount());
}
catch (...)

@ -1548,8 +1547,7 @@ void ClusterCopier::dropHelpingTables(const TaskTable & task_table)
/// We have to drop partition_piece on each replica
UInt64 num_nodes = executeQueryOnCluster(
cluster_push, query,
nullptr,
&settings_push,
settings_push,
PoolMode::GET_MANY,
ClusterExecutionMode::ON_EACH_NODE);

@ -1575,8 +1573,7 @@ void ClusterCopier::dropParticularPartitionPieceFromAllHelpingTables(const TaskT
/// We have to drop partition_piece on each replica
UInt64 num_nodes = executeQueryOnCluster(
cluster_push, query,
nullptr,
&settings_push,
settings_push,
PoolMode::GET_MANY,
ClusterExecutionMode::ON_EACH_NODE);

@ -1690,7 +1687,7 @@ std::set<String> ClusterCopier::getShardPartitions(const ConnectionTimeouts & ti

Context local_context = context;
local_context.setSettings(task_cluster->settings_pull);
Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_ast, local_context)->execute().in);
Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_ast, local_context)->execute().getInputStream());

std::set<String> res;
if (block)

@ -1735,7 +1732,7 @@ const auto & settings = context.getSettingsRef();

Context local_context = context;
local_context.setSettings(task_cluster->settings_pull);
return InterpreterFactory::get(query_ast, local_context)->execute().in->read().rows() != 0;
return InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()->read().rows() != 0;
}

bool ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTimeouts & timeouts,

@ -1774,7 +1771,7 @@ bool ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTi

Context local_context = context;
local_context.setSettings(task_cluster->settings_pull);
auto result = InterpreterFactory::get(query_ast, local_context)->execute().in->read().rows();
auto result = InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()->read().rows();
if (result != 0)
LOG_DEBUG(log, "Partition {} piece number {} is PRESENT on shard {}", partition_quoted_name, std::to_string(current_piece_number), task_shard.getDescription());
else
@ -1788,25 +1785,16 @@ bool ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTi
UInt64 ClusterCopier::executeQueryOnCluster(
const ClusterPtr & cluster,
const String & query,
const ASTPtr & query_ast_,
const Settings * settings,
const Settings & current_settings,
PoolMode pool_mode,
ClusterExecutionMode execution_mode,
UInt64 max_successful_executions_per_shard) const
{
Settings current_settings = settings ? *settings : task_cluster->settings_common;

auto num_shards = cluster->getShardsInfo().size();
std::vector<UInt64> per_shard_num_successful_replicas(num_shards, 0);

ASTPtr query_ast;
if (query_ast_ == nullptr)
{
ParserQuery p_query(query.data() + query.size());
query_ast = parseQuery(p_query, query, current_settings.max_query_size, current_settings.max_parser_depth);
}
else
query_ast = query_ast_;
ParserQuery p_query(query.data() + query.size());
ASTPtr query_ast = parseQuery(p_query, query, current_settings.max_query_size, current_settings.max_parser_depth);

/// We will have to execute query on each replica of a shard.
if (execution_mode == ClusterExecutionMode::ON_EACH_NODE)

@ -1815,8 +1803,10 @@ UInt64 ClusterCopier::executeQueryOnCluster(
std::atomic<size_t> origin_replicas_number;

/// We need to execute query on one replica at least
auto do_for_shard = [&] (UInt64 shard_index)
auto do_for_shard = [&] (UInt64 shard_index, Settings shard_settings)
{
setThreadName("QueryForShard");

const Cluster::ShardInfo & shard = cluster->getShardsInfo().at(shard_index);
UInt64 & num_successful_executions = per_shard_num_successful_replicas.at(shard_index);
num_successful_executions = 0;

@ -1846,10 +1836,10 @@ UInt64 ClusterCopier::executeQueryOnCluster(
/// Will try to make as many as possible queries
if (shard.hasRemoteConnections())
{
current_settings.max_parallel_replicas = num_remote_replicas ? num_remote_replicas : 1;
shard_settings.max_parallel_replicas = num_remote_replicas ? num_remote_replicas : 1;

auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(current_settings).getSaturated(current_settings.max_execution_time);
auto connections = shard.pool->getMany(timeouts, &current_settings, pool_mode);
auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(shard_settings).getSaturated(shard_settings.max_execution_time);
auto connections = shard.pool->getMany(timeouts, &shard_settings, pool_mode);

for (auto & connection : connections)
{

@ -1859,7 +1849,7 @@ UInt64 ClusterCopier::executeQueryOnCluster(
try
{
/// CREATE TABLE and DROP PARTITION queries return empty block
RemoteBlockInputStream stream{*connection, query, Block{}, context, &current_settings};
RemoteBlockInputStream stream{*connection, query, Block{}, context, &shard_settings};
NullBlockOutputStream output{Block{}};
copyData(stream, output);

@ -1878,7 +1868,7 @@ UInt64 ClusterCopier::executeQueryOnCluster(
ThreadPool thread_pool(std::min<UInt64>(num_shards, getNumberOfPhysicalCPUCores()));

for (UInt64 shard_index = 0; shard_index < num_shards; ++shard_index)
thread_pool.scheduleOrThrowOnError([=] { do_for_shard(shard_index); });
thread_pool.scheduleOrThrowOnError([=, shard_settings = current_settings] { do_for_shard(shard_index, std::move(shard_settings)); });

thread_pool.wait();
}

@ -1898,7 +1888,7 @@ UInt64 ClusterCopier::executeQueryOnCluster(
LOG_INFO(log, "There was an error while executing ALTER on each node. Query was executed on {} nodes. But had to be executed on {}", toString(successful_nodes), toString(origin_replicas_number.load()));
}

return successful_nodes;
}

}
@ -15,7 +15,6 @@ namespace DB
class ClusterCopier
{
public:

ClusterCopier(const String & task_path_,
const String & host_id_,
const String & proxy_database_name_,

@ -187,8 +186,7 @@ protected:
UInt64 executeQueryOnCluster(
const ClusterPtr & cluster,
const String & query,
const ASTPtr & query_ast_ = nullptr,
const Settings * settings = nullptr,
const Settings & current_settings,
PoolMode pool_mode = PoolMode::GET_ALL,
ClusterExecutionMode execution_mode = ClusterExecutionMode::ON_EACH_SHARD,
UInt64 max_successful_executions_per_shard = 0) const;

@ -114,7 +114,7 @@ void ClusterCopierApp::mainImpl()
registerDisks();

static const std::string default_database = "_local";
DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database));
DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database, *context));
context->setCurrentDatabase(default_database);

/// Initialize query scope just in case.

@ -118,13 +118,13 @@ void LocalServer::tryInitPath()
}

static void attachSystemTables()
static void attachSystemTables(const Context & context)
{
DatabasePtr system_database = DatabaseCatalog::instance().tryGetDatabase(DatabaseCatalog::SYSTEM_DATABASE);
if (!system_database)
{
/// TODO: add attachTableDelayed into DatabaseMemory to speedup loading
system_database = std::make_shared<DatabaseMemory>(DatabaseCatalog::SYSTEM_DATABASE);
system_database = std::make_shared<DatabaseMemory>(DatabaseCatalog::SYSTEM_DATABASE, context);
DatabaseCatalog::instance().attachDatabase(DatabaseCatalog::SYSTEM_DATABASE, system_database);
}

@ -135,7 +135,7 @@ static void attachSystemTables()
int LocalServer::main(const std::vector<std::string> & /*args*/)
try
{
Logger * log = &logger();
Poco::Logger * log = &logger();
ThreadStatus thread_status;
UseSSL use_ssl;

@ -202,7 +202,7 @@ try
* if such tables will not be dropped, clickhouse-server will not be able to load them due to security reasons.
*/
std::string default_database = config().getString("default_database", "_local");
DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database));
DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database, *context));
context->setCurrentDatabase(default_database);
applyCmdOptions();

@ -213,14 +213,14 @@ try

LOG_DEBUG(log, "Loading metadata from {}", context->getPath());
loadMetadataSystem(*context);
attachSystemTables();
attachSystemTables(*context);
loadMetadata(*context);
DatabaseCatalog::instance().loadDatabases();
LOG_DEBUG(log, "Loaded metadata.");
}
else
{
attachSystemTables();
attachSystemTables(*context);
}

processQueries();
@ -25,7 +25,7 @@ ODBCBlockInputStream::ODBCBlockInputStream(
, result{statement}
, iterator{result.begin()}
, max_block_size{max_block_size_}
, log(&Logger::get("ODBCBlockInputStream"))
, log(&Poco::Logger::get("ODBCBlockInputStream"))
{
if (sample_block.columns() != result.columnCount())
throw Exception{"RecordSet contains " + toString(result.columnCount()) + " columns while " + toString(sample_block.columns())

@ -94,7 +94,7 @@ ODBCBlockOutputStream::ODBCBlockOutputStream(Poco::Data::Session && session_,
, table_name(remote_table_name_)
, sample_block(sample_block_)
, quoting(quoting_)
, log(&Logger::get("ODBCBlockOutputStream"))
, log(&Poco::Logger::get("ODBCBlockOutputStream"))
{
description.init(sample_block);
}

@ -89,7 +89,7 @@ namespace CurrentMetrics
namespace
{

void setupTmpPath(Logger * log, const std::string & path)
void setupTmpPath(Poco::Logger * log, const std::string & path)
{
LOG_DEBUG(log, "Setting up {} to store temporary data in it", path);

@ -212,7 +212,7 @@ void Server::defineOptions(Poco::Util::OptionSet & options)

int Server::main(const std::vector<std::string> & /*args*/)
{
Logger * log = &logger();
Poco::Logger * log = &logger();
UseSSL use_ssl;

ThreadStatus thread_status;

@ -236,6 +236,14 @@ int Server::main(const std::vector<std::string> & /*args*/)
if (ThreadFuzzer::instance().isEffective())
LOG_WARNING(log, "ThreadFuzzer is enabled. Application will run slowly and unstable.");

#if !defined(NDEBUG) || !defined(__OPTIMIZE__)
LOG_WARNING(log, "Server was built in debug mode. It will work slowly.");
#endif

#if defined(ADDRESS_SANITIZER) || defined(THREAD_SANITIZER) || defined(MEMORY_SANITIZER)
LOG_WARNING(log, "Server was built with sanitizer. It will work slowly.");
#endif

/** Context contains all that query execution is dependent:
* settings, available functions, data types, aggregate functions, databases...
*/

@ -309,7 +309,7 @@ bool AllowedClientHosts::contains(const IPAddress & client_address) const
throw;
/// Try to ignore DNS errors: if host cannot be resolved, skip it and try next.
LOG_WARNING(
&Logger::get("AddressPatterns"),
&Poco::Logger::get("AddressPatterns"),
"Failed to check if the allowed client hosts contain address {}. {}, code = {}",
client_address.toString(), e.displayText(), e.code());
return false;

@ -342,7 +342,7 @@ bool AllowedClientHosts::contains(const IPAddress & client_address) const
throw;
/// Try to ignore DNS errors: if host cannot be resolved, skip it and try next.
LOG_WARNING(
&Logger::get("AddressPatterns"),
&Poco::Logger::get("AddressPatterns"),
"Failed to check if the allowed client hosts contain address {}. {}, code = {}",
client_address.toString(), e.displayText(), e.code());
return false;
@ -68,15 +68,27 @@ void ExtendedRoleSet::init(const ASTExtendedRoleSet & ast, const AccessControlMa
{
all = ast.all;

auto name_to_id = [id_mode{ast.id_mode}, manager](const String & name) -> UUID
auto name_to_id = [&ast, manager](const String & name) -> UUID
{
if (id_mode)
if (ast.id_mode)
return parse<UUID>(name);
assert(manager);
auto id = manager->find<User>(name);
if (id)
return *id;
return manager->getID<Role>(name);
if (ast.can_contain_users && ast.can_contain_roles)
{
auto id = manager->find<User>(name);
if (id)
return *id;
return manager->getID<Role>(name);
}
else if (ast.can_contain_users)
{
return manager->getID<User>(name);
}
else
{
assert(ast.can_contain_roles);
return manager->getID<Role>(name);
}
};

if (!ast.names.empty() && !all)

@ -74,6 +74,7 @@ if(USE_RDKAFKA)
endif()

if (USE_AWS_S3)
add_headers_and_sources(dbms Common/S3)
add_headers_and_sources(dbms Disks/S3)
endif()
@ -508,18 +508,18 @@ void Connection::sendScalarsData(Scalars & data)
"Sent data for {} scalars, total {} rows in {} sec., {} rows/sec., {} ({}/sec.), compressed {} times to {} ({}/sec.)",
data.size(), rows, elapsed,
static_cast<size_t>(rows / watch.elapsedSeconds()),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes / watch.elapsedSeconds()),
ReadableSize(maybe_compressed_out_bytes),
ReadableSize(maybe_compressed_out_bytes / watch.elapsedSeconds()),
static_cast<double>(maybe_compressed_out_bytes) / out_bytes,
formatReadableSizeWithBinarySuffix(out_bytes),
formatReadableSizeWithBinarySuffix(out_bytes / watch.elapsedSeconds()));
ReadableSize(out_bytes),
ReadableSize(out_bytes / watch.elapsedSeconds()));
else
LOG_DEBUG(log_wrapper.get(),
"Sent data for {} scalars, total {} rows in {} sec., {} rows/sec., {} ({}/sec.), no compression.",
data.size(), rows, elapsed,
static_cast<size_t>(rows / watch.elapsedSeconds()),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes / watch.elapsedSeconds()));
ReadableSize(maybe_compressed_out_bytes),
ReadableSize(maybe_compressed_out_bytes / watch.elapsedSeconds()));
}

namespace
@ -612,18 +612,18 @@ void Connection::sendExternalTablesData(ExternalTablesData & data)
"Sent data for {} external tables, total {} rows in {} sec., {} rows/sec., {} ({}/sec.), compressed {} times to {} ({}/sec.)",
data.size(), rows, elapsed,
static_cast<size_t>(rows / watch.elapsedSeconds()),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes / watch.elapsedSeconds()),
ReadableSize(maybe_compressed_out_bytes),
ReadableSize(maybe_compressed_out_bytes / watch.elapsedSeconds()),
static_cast<double>(maybe_compressed_out_bytes) / out_bytes,
formatReadableSizeWithBinarySuffix(out_bytes),
formatReadableSizeWithBinarySuffix(out_bytes / watch.elapsedSeconds()));
ReadableSize(out_bytes),
ReadableSize(out_bytes / watch.elapsedSeconds()));
else
LOG_DEBUG(log_wrapper.get(),
"Sent data for {} external tables, total {} rows in {} sec., {} rows/sec., {} ({}/sec.), no compression.",
data.size(), rows, elapsed,
static_cast<size_t>(rows / watch.elapsedSeconds()),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes / watch.elapsedSeconds()));
ReadableSize(maybe_compressed_out_bytes),
ReadableSize(maybe_compressed_out_bytes / watch.elapsedSeconds()));
}

std::optional<Poco::Net::SocketAddress> Connection::getResolvedAddress() const
@ -249,16 +249,16 @@ private:
{
}

Logger * get()
Poco::Logger * get()
{
if (!log)
log = &Logger::get("Connection (" + parent.getDescription() + ")");
log = &Poco::Logger::get("Connection (" + parent.getDescription() + ")");

return log;
}

private:
std::atomic<Logger *> log;
std::atomic<Poco::Logger *> log;
Connection & parent;
};

@ -56,7 +56,7 @@ public:
Protocol::Compression compression_ = Protocol::Compression::Enable,
Protocol::Secure secure_ = Protocol::Secure::Disable)
: Base(max_connections_,
&Logger::get("ConnectionPool (" + host_ + ":" + toString(port_) + ")")),
&Poco::Logger::get("ConnectionPool (" + host_ + ":" + toString(port_) + ")")),
host(host_),
port(port_),
default_database(default_database_),

@ -35,7 +35,7 @@ ConnectionPoolWithFailover::ConnectionPoolWithFailover(
LoadBalancing load_balancing,
time_t decrease_error_period_,
size_t max_error_cap_)
: Base(std::move(nested_pools_), decrease_error_period_, max_error_cap_, &Logger::get("ConnectionPoolWithFailover"))
: Base(std::move(nested_pools_), decrease_error_period_, max_error_cap_, &Poco::Logger::get("ConnectionPoolWithFailover"))
, default_load_balancing(load_balancing)
{
const std::string & local_hostname = getFQDNOrHostName();
@ -35,7 +35,7 @@ TimeoutSetter::~TimeoutSetter()
catch (std::exception & e)
{
// Sometimes caught on macOS
LOG_ERROR(&Logger::get("Client"), "TimeoutSetter: Can't reset timeouts: {}", e.what());
LOG_ERROR(&Poco::Logger::get("Client"), "TimeoutSetter: Can't reset timeouts: {}", e.what());
}
}
}
@ -18,8 +18,8 @@ void AlignedBuffer::alloc(size_t size, size_t alignment)
void * new_buf;
int res = ::posix_memalign(&new_buf, std::max(alignment, sizeof(void*)), size);
if (0 != res)
throwFromErrno("Cannot allocate memory (posix_memalign), size: "
+ formatReadableSizeWithBinarySuffix(size) + ", alignment: " + formatReadableSizeWithBinarySuffix(alignment) + ".",
throwFromErrno(fmt::format("Cannot allocate memory (posix_memalign), size: {}, alignment: {}.",
ReadableSize(size), ReadableSize(alignment)),
ErrorCodes::CANNOT_ALLOCATE_MEMORY, res);
buf = new_buf;
}

@ -129,7 +129,7 @@ public:

void * new_buf = ::realloc(buf, new_size);
if (nullptr == new_buf)
DB::throwFromErrno("Allocator: Cannot realloc from " + formatReadableSizeWithBinarySuffix(old_size) + " to " + formatReadableSizeWithBinarySuffix(new_size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
DB::throwFromErrno(fmt::format("Allocator: Cannot realloc from {} to {}.", ReadableSize(old_size), ReadableSize(new_size)), DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);

buf = new_buf;
if constexpr (clear_memory)
@ -145,7 +145,8 @@ public:
buf = clickhouse_mremap(buf, old_size, new_size, MREMAP_MAYMOVE,
PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
if (MAP_FAILED == buf)
DB::throwFromErrno("Allocator: Cannot mremap memory chunk from " + formatReadableSizeWithBinarySuffix(old_size) + " to " + formatReadableSizeWithBinarySuffix(new_size) + ".", DB::ErrorCodes::CANNOT_MREMAP);
DB::throwFromErrno(fmt::format("Allocator: Cannot mremap memory chunk from {} to {}.",
ReadableSize(old_size), ReadableSize(new_size)), DB::ErrorCodes::CANNOT_MREMAP);

/// No need for zero-fill, because mmap guarantees it.
}
@ -201,13 +202,13 @@ private:
if (size >= MMAP_THRESHOLD)
{
if (alignment > MMAP_MIN_ALIGNMENT)
throw DB::Exception("Too large alignment " + formatReadableSizeWithBinarySuffix(alignment) + ": more than page size when allocating "
+ formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::BAD_ARGUMENTS);
throw DB::Exception(fmt::format("Too large alignment {}: more than page size when allocating {}.",
ReadableSize(alignment), ReadableSize(size)), DB::ErrorCodes::BAD_ARGUMENTS);

buf = mmap(getMmapHint(), size, PROT_READ | PROT_WRITE,
mmap_flags, -1, 0);
if (MAP_FAILED == buf)
DB::throwFromErrno("Allocator: Cannot mmap " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
DB::throwFromErrno(fmt::format("Allocator: Cannot mmap {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);

/// No need for zero-fill, because mmap guarantees it.
}
@ -221,7 +222,7 @@ private:
buf = ::malloc(size);

if (nullptr == buf)
DB::throwFromErrno("Allocator: Cannot malloc " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
DB::throwFromErrno(fmt::format("Allocator: Cannot malloc {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
}
else
{
@ -229,7 +230,8 @@ private:
int res = posix_memalign(&buf, alignment, size);

if (0 != res)
DB::throwFromErrno("Cannot allocate memory (posix_memalign) " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY, res);
DB::throwFromErrno(fmt::format("Cannot allocate memory (posix_memalign) {}.", ReadableSize(size)),
DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY, res);

if constexpr (clear_memory)
memset(buf, 0, size);
@ -243,7 +245,7 @@ private:
if (size >= MMAP_THRESHOLD)
{
if (0 != munmap(buf, size))
DB::throwFromErrno("Allocator: Cannot munmap " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_MUNMAP);
DB::throwFromErrno(fmt::format("Allocator: Cannot munmap {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_MUNMAP);
}
else
{

@ -177,13 +177,13 @@ private:
{
ptr = mmap(address_hint, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (MAP_FAILED == ptr)
DB::throwFromErrno("Allocator: Cannot mmap " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
DB::throwFromErrno(fmt::format("Allocator: Cannot mmap {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
}

~Chunk()
{
if (ptr && 0 != munmap(ptr, size))
DB::throwFromErrno("Allocator: Cannot munmap " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_MUNMAP);
DB::throwFromErrno(fmt::format("Allocator: Cannot munmap {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_MUNMAP);
}

Chunk(Chunk && other) : ptr(other.ptr), size(other.size)

@ -278,7 +278,7 @@ private:
void * new_data = nullptr;
int res = posix_memalign(&new_data, alignment, prefix_size + new_size * sizeof(T));
if (0 != res)
throwFromErrno("Cannot allocate memory (posix_memalign) " + formatReadableSizeWithBinarySuffix(new_size) + ".",
throwFromErrno(fmt::format("Cannot allocate memory (posix_memalign) {}.", ReadableSize(new_size)),
ErrorCodes::CANNOT_ALLOCATE_MEMORY, res);

data_ptr = static_cast<char *>(new_data);
@ -66,21 +66,21 @@ ConfigProcessor::ConfigProcessor(
, name_pool(new Poco::XML::NamePool(65521))
, dom_parser(name_pool)
{
if (log_to_console && !Logger::has("ConfigProcessor"))
if (log_to_console && !Poco::Logger::has("ConfigProcessor"))
{
channel_ptr = new Poco::ConsoleChannel;
log = &Logger::create("ConfigProcessor", channel_ptr.get(), Poco::Message::PRIO_TRACE);
log = &Poco::Logger::create("ConfigProcessor", channel_ptr.get(), Poco::Message::PRIO_TRACE);
}
else
{
log = &Logger::get("ConfigProcessor");
log = &Poco::Logger::get("ConfigProcessor");
}
}

ConfigProcessor::~ConfigProcessor()
{
if (channel_ptr) /// This means we have created a new console logger in the constructor.
Logger::destroy("ConfigProcessor");
Poco::Logger::destroy("ConfigProcessor");
}

@ -116,7 +116,7 @@ private:

bool throw_on_bad_incl;

Logger * log;
Poco::Logger * log;
Poco::AutoPtr<Poco::Channel> channel_ptr;

Substitutions substitutions;

@ -69,7 +69,7 @@ private:

static constexpr auto reload_interval = std::chrono::seconds(2);

Poco::Logger * log = &Logger::get("ConfigReloader");
Poco::Logger * log = &Poco::Logger::get("ConfigReloader");

std::string path;
std::string include_from_path;

@ -202,7 +202,7 @@ bool DNSResolver::updateCache()
}

if (!lost_hosts.empty())
LOG_INFO(&Logger::get("DNSResolver"), "Cached hosts not found: {}", lost_hosts);
LOG_INFO(&Poco::Logger::get("DNSResolver"), "Cached hosts not found: {}", lost_hosts);

return updated;
}
@ -5,8 +5,10 @@
#include <Poco/String.h>
#include <common/logger_useful.h>
#include <IO/WriteHelpers.h>
#include <IO/ReadHelpers.h>
#include <IO/Operators.h>
#include <IO/ReadBufferFromString.h>
#include <IO/ReadBufferFromFile.h>
#include <common/demangle.h>
#include <Common/formatReadable.h>
#include <Common/filesystemHelpers.h>
@ -25,6 +27,8 @@ namespace ErrorCodes
extern const int STD_EXCEPTION;
extern const int UNKNOWN_EXCEPTION;
extern const int LOGICAL_ERROR;
extern const int CANNOT_ALLOCATE_MEMORY;
extern const int CANNOT_MREMAP;
}

@ -118,7 +122,7 @@ void throwFromErrnoWithPath(const std::string & s, const std::string & path, int

void tryLogCurrentException(const char * log_name, const std::string & start_of_message)
{
tryLogCurrentException(&Logger::get(log_name), start_of_message);
tryLogCurrentException(&Poco::Logger::get(log_name), start_of_message);
}

void tryLogCurrentException(Poco::Logger * logger, const std::string & start_of_message)
@ -144,18 +148,79 @@ static void getNoSpaceLeftInfoMessage(std::filesystem::path path, std::string &
path = path.parent_path();

auto fs = getStatVFS(path);
msg += "\nTotal space: " + formatReadableSizeWithBinarySuffix(fs.f_blocks * fs.f_bsize)
+ "\nAvailable space: " + formatReadableSizeWithBinarySuffix(fs.f_bavail * fs.f_bsize)
+ "\nTotal inodes: " + formatReadableQuantity(fs.f_files)
+ "\nAvailable inodes: " + formatReadableQuantity(fs.f_favail);

auto mount_point = getMountPoint(path).string();
msg += "\nMount point: " + mount_point;

fmt::format_to(std::back_inserter(msg),
"\nTotal space: {}\nAvailable space: {}\nTotal inodes: {}\nAvailable inodes: {}\nMount point: {}",
ReadableSize(fs.f_blocks * fs.f_bsize),
ReadableSize(fs.f_bavail * fs.f_bsize),
formatReadableQuantity(fs.f_files),
formatReadableQuantity(fs.f_favail),
mount_point);

#if defined(__linux__)
msg += "\nFilesystem: " + getFilesystemName(mount_point);
#endif
}

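Editor's note: the hunk above replaces chained string concatenation with fmt::format_to. A minimal self-contained sketch of that technique, with made-up values (not part of the diff):

#include <fmt/format.h>
#include <iterator>
#include <string>
#include <cstdio>

int main()
{
    std::string msg = "Disk info:";
    /// format_to with a back_inserter appends formatted text to the existing
    /// string, avoiding the temporaries created by operator+ chains.
    fmt::format_to(std::back_inserter(msg), "\nTotal space: {}\nMount point: {}", "512.00 GiB", "/var/lib");
    std::printf("%s\n", msg.c_str());
}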
/** It is possible that the system has enough memory,
 * but we are short of available memory mappings.
 * Provide a good diagnostic for the user in that case.
 */
static void getNotEnoughMemoryMessage(std::string & msg)
{
#if defined(__linux__)
try
{
static constexpr size_t buf_size = 4096;
char buf[buf_size];

UInt64 max_map_count = 0;
{
ReadBufferFromFile file("/proc/sys/vm/max_map_count", buf_size, -1, buf);
readText(max_map_count, file);
}

UInt64 num_maps = 0;
{
ReadBufferFromFile file("/proc/self/maps", buf_size, -1, buf);
while (!file.eof())
{
char * next_pos = find_first_symbols<'\n'>(file.position(), file.buffer().end());
file.position() = next_pos;

if (!file.hasPendingData())
continue;

if (*file.position() == '\n')
{
++num_maps;
++file.position();
}
}
}

if (num_maps > max_map_count * 0.99)
{
msg += fmt::format(
"\nIt looks like that the process is near the limit on number of virtual memory mappings."
"\nCurrent number of mappings (/proc/self/maps): {}."
"\nLimit on number of mappings (/proc/sys/vm/max_map_count): {}."
"\nYou should increase the limit for vm.max_map_count in /etc/sysctl.conf"
"\n",
num_maps, max_map_count);
}
}
catch (...)
{
msg += "\nCannot obtain additional info about memory usage.";
}
#else
(void)msg;
#endif
}

static std::string getExtraExceptionInfo(const std::exception & e)
{
String msg;
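Editor's note: the new diagnostic above estimates the number of memory mappings by counting lines of /proc/self/maps. A rough standalone equivalent, simplified to plain stdio (Linux only; not part of the diff):

#include <cstdio>

int main()
{
    std::FILE * f = std::fopen("/proc/self/maps", "r");
    if (!f)
        return 1;

    /// Each line of /proc/self/maps describes one mapping, so counting
    /// newlines approximates the mapping count used in the function above.
    unsigned long num_maps = 0;
    for (int c; (c = std::fgetc(f)) != EOF;)
        if (c == '\n')
            ++num_maps;

    std::fclose(f);
    std::printf("current mappings: %lu\n", num_maps);
    return 0;
}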
@ -170,6 +235,13 @@ static std::string getExtraExceptionInfo(const std::exception & e)
{
if (errno_exception->getErrno() == ENOSPC && errno_exception->getPath())
getNoSpaceLeftInfoMessage(errno_exception->getPath().value(), msg);
else if (errno_exception->code() == ErrorCodes::CANNOT_ALLOCATE_MEMORY
|| errno_exception->code() == ErrorCodes::CANNOT_MREMAP)
getNotEnoughMemoryMessage(msg);
}
else if (dynamic_cast<const std::bad_alloc *>(&e))
{
getNotEnoughMemoryMessage(msg);
}
}
catch (...)

@ -37,7 +37,7 @@ private:
Map map;
bool initialized = false;

Logger * log = &Logger::get("FileChecker");
Poco::Logger * log = &Poco::Logger::get("FileChecker");
};

}

@ -306,7 +306,7 @@ private:
auto it = cells.find(key);
if (it == cells.end())
{
LOG_ERROR(&Logger::get("LRUCache"), "LRUCache became inconsistent. There must be a bug in it.");
LOG_ERROR(&Poco::Logger::get("LRUCache"), "LRUCache became inconsistent. There must be a bug in it.");
abort();
}

@ -324,7 +324,7 @@ private:

if (current_size > (1ull << 63))
{
LOG_ERROR(&Logger::get("LRUCache"), "LRUCache became inconsistent. There must be a bug in it.");
LOG_ERROR(&Poco::Logger::get("LRUCache"), "LRUCache became inconsistent. There must be a bug in it.");
abort();
}
}

@ -9,6 +9,7 @@
#include <Common/Exception.h>
#include <IO/ReadBufferFromMemory.h>
#include <IO/ReadHelpers.h>
#include <common/logger_useful.h>


namespace DB
@ -19,6 +20,7 @@ namespace ErrorCodes
extern const int FILE_DOESNT_EXIST;
extern const int CANNOT_OPEN_FILE;
extern const int CANNOT_READ_FROM_FILE_DESCRIPTOR;
extern const int CANNOT_CLOSE_FILE;
}

static constexpr auto filename = "/proc/self/statm";
@ -35,7 +37,18 @@ MemoryStatisticsOS::MemoryStatisticsOS()
MemoryStatisticsOS::~MemoryStatisticsOS()
{
if (0 != ::close(fd))
tryLogCurrentException(__PRETTY_FUNCTION__);
{
try
{
throwFromErrno(
"File descriptor for \"" + std::string(filename) + "\" could not be closed. "
"Something seems to have gone wrong. Inspect errno.", ErrorCodes::CANNOT_CLOSE_FILE);
}
catch (const ErrnoException &)
{
DB::tryLogCurrentException(__PRETTY_FUNCTION__);
}
}
}

MemoryStatisticsOS::Data MemoryStatisticsOS::get() const
@ -49,12 +49,14 @@ MemoryTracker::~MemoryTracker()

void MemoryTracker::logPeakMemoryUsage() const
{
LOG_DEBUG(&Logger::get("MemoryTracker"), "Peak memory usage{}: {}.", (description ? " " + std::string(description) : ""), formatReadableSizeWithBinarySuffix(peak));
const auto * description = description_ptr.load(std::memory_order_relaxed);
LOG_DEBUG(&Poco::Logger::get("MemoryTracker"), "Peak memory usage{}: {}.", (description ? " " + std::string(description) : ""), ReadableSize(peak));
}

void MemoryTracker::logMemoryUsage(Int64 current) const
{
LOG_DEBUG(&Logger::get("MemoryTracker"), "Current memory usage{}: {}.", (description ? " " + std::string(description) : ""), formatReadableSizeWithBinarySuffix(current));
const auto * description = description_ptr.load(std::memory_order_relaxed);
LOG_DEBUG(&Poco::Logger::get("MemoryTracker"), "Current memory usage{}: {}.", (description ? " " + std::string(description) : ""), ReadableSize(current));
}

@ -85,7 +87,7 @@ void MemoryTracker::alloc(Int64 size)

std::stringstream message;
message << "Memory tracker";
if (description)
if (const auto * description = description_ptr.load(std::memory_order_relaxed))
message << " " << description;
message << ": fault injected. Would use " << formatReadableSizeWithBinarySuffix(will_be)
<< " (attempt to allocate chunk of " << size << " bytes)"
@ -117,7 +119,7 @@ void MemoryTracker::alloc(Int64 size)

std::stringstream message;
message << "Memory limit";
if (description)
if (const auto * description = description_ptr.load(std::memory_order_relaxed))
message << " " << description;
message << " exceeded: would use " << formatReadableSizeWithBinarySuffix(will_be)
<< " (attempt to allocate chunk of " << size << " bytes)"

@ -35,7 +35,7 @@ private:
CurrentMetrics::Metric metric = CurrentMetrics::end();

/// This description will be used as a prefix in log messages (if not nullptr)
const char * description = nullptr;
std::atomic<const char *> description_ptr = nullptr;

void updatePeak(Int64 will_be);
void logMemoryUsage(Int64 current) const;
@ -114,9 +114,9 @@ public:
metric = metric_;
}

void setDescription(const char * description_)
void setDescription(const char * description)
{
description = description_;
description_ptr.store(description, std::memory_order_relaxed);
}

/// Reset the accumulated data
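Editor's note: the MemoryTracker hunks above replace a plain const char * with std::atomic<const char *> so that setDescription() and concurrent logging reads never race on the pointer. A minimal sketch of the pattern with simplified names (not part of the diff):

#include <atomic>
#include <cstdio>

struct TrackerSketch
{
    std::atomic<const char *> description_ptr { nullptr };

    void setDescription(const char * description)
    {
        /// relaxed ordering is enough: the pointer itself is the only shared datum.
        description_ptr.store(description, std::memory_order_relaxed);
    }

    void logPeak(long peak) const
    {
        if (const auto * description = description_ptr.load(std::memory_order_relaxed))
            std::printf("Peak memory usage %s: %ld\n", description, peak);
        else
            std::printf("Peak memory usage: %ld\n", peak);
    }
};

int main()
{
    TrackerSketch t;
    t.setDescription("(for query)");
    t.logPeak(1024);
}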
@ -102,7 +102,7 @@ void LazyPipeFDs::tryIncreaseSize(int desired_size)
if (-1 == fcntl(fds_rw[1], F_SETPIPE_SZ, pipe_size * 2) && errno != EPERM)
throwFromErrno("Cannot increase pipe capacity to " + std::to_string(pipe_size * 2), ErrorCodes::CANNOT_FCNTL);

LOG_TRACE(log, "Pipe capacity is {}", formatReadableSizeWithBinarySuffix(std::min(pipe_size, desired_size)));
LOG_TRACE(log, "Pipe capacity is {}", ReadableSize(std::min(pipe_size, desired_size)));
}
#else
(void)desired_size;

@ -152,9 +152,9 @@ private:

protected:

Logger * log;
Poco::Logger * log;

PoolBase(unsigned max_items_, Logger * log_)
PoolBase(unsigned max_items_, Poco::Logger * log_)
: max_items(max_items_), log(log_)
{
items.reserve(max_items);

@ -57,7 +57,7 @@ public:
NestedPools nested_pools_,
time_t decrease_error_period_,
size_t max_error_cap_,
Logger * log_)
Poco::Logger * log_)
: nested_pools(std::move(nested_pools_))
, decrease_error_period(decrease_error_period_)
, max_error_cap(max_error_cap_)
@ -134,7 +134,7 @@ protected:
/// The time when error counts were last decreased.
time_t last_error_decrease_time = 0;

Logger * log;
Poco::Logger * log;
};

template <typename TNestedPool>

@ -7,6 +7,7 @@
#include <IO/ReadHelpers.h>

#include <common/find_symbols.h>
#include <common/logger_useful.h>

#include <cassert>
#include <sys/types.h>
@ -22,6 +23,7 @@ namespace ErrorCodes
{
extern const int FILE_DOESNT_EXIST;
extern const int CANNOT_OPEN_FILE;
extern const int CANNOT_CLOSE_FILE;
extern const int CANNOT_READ_FROM_FILE_DESCRIPTOR;
}

@ -39,6 +41,20 @@ namespace
errno == ENOENT ? ErrorCodes::FILE_DOESNT_EXIST : ErrorCodes::CANNOT_OPEN_FILE);
}

inline void emitErrorMsgWithFailedToCloseFile(const std::string & filename)
{
try
{
throwFromErrno(
"File descriptor for \"" + filename + "\" could not be closed. "
"Something seems to have gone wrong. Inspect errno.", ErrorCodes::CANNOT_CLOSE_FILE);
}
catch (const ErrnoException &)
{
DB::tryLogCurrentException(__PRETTY_FUNCTION__);
}
}

ssize_t readFromFD(const int fd, const char * filename, char * buf, size_t buf_size)
{
ssize_t res = 0;
@ -100,11 +116,11 @@ ProcfsMetricsProvider::ProcfsMetricsProvider(const pid_t /*tid*/)
ProcfsMetricsProvider::~ProcfsMetricsProvider()
{
if (stats_version >= 3 && 0 != ::close(thread_io_fd))
tryLogCurrentException(__PRETTY_FUNCTION__);
emitErrorMsgWithFailedToCloseFile(thread_io);
if (0 != ::close(thread_stat_fd))
tryLogCurrentException(__PRETTY_FUNCTION__);
emitErrorMsgWithFailedToCloseFile(thread_stat);
if (0 != ::close(thread_schedstat_fd))
tryLogCurrentException(__PRETTY_FUNCTION__);
emitErrorMsgWithFailedToCloseFile(thread_schedstat);
}
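Editor's note: both destructors above use the same throw-then-catch idiom to record a failed ::close together with its errno without ever letting an exception escape a destructor. A self-contained sketch of the idiom using only the standard library (fprintf stands in for tryLogCurrentException; not part of the diff):

#include <cerrno>
#include <cstdio>
#include <stdexcept>
#include <string>
#include <unistd.h>

static void emitErrorMsgOnFailedClose(int fd, const std::string & filename)
{
    if (0 == ::close(fd))
        return;
    try
    {
        /// Build an exception that captures errno...
        throw std::runtime_error(
            "File descriptor for \"" + filename + "\" could not be closed, errno = " + std::to_string(errno));
    }
    catch (const std::exception & e)
    {
        /// ...and catch it on the spot, so the caller only logs.
        std::fprintf(stderr, "%s\n", e.what());
    }
}

int main()
{
    emitErrorMsgOnFailedClose(-1, "/proc/self/statm");   /// fails: bad descriptor
}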
@ -79,7 +79,7 @@ namespace ErrorCodes

template <typename ProfilerImpl>
QueryProfilerBase<ProfilerImpl>::QueryProfilerBase(const UInt64 thread_id, const int clock_type, UInt32 period, const int pause_signal_)
: log(&Logger::get("QueryProfiler"))
: log(&Poco::Logger::get("QueryProfiler"))
, pause_signal(pause_signal_)
{
#if USE_UNWIND

@ -102,7 +102,7 @@ SensitiveDataMasker::SensitiveDataMasker(const Poco::Util::AbstractConfiguration
{
Poco::Util::AbstractConfiguration::Keys keys;
config.keys(config_prefix, keys);
Logger * logger = &Logger::get("SensitiveDataMaskerConfigRead");
Poco::Logger * logger = &Poco::Logger::get("SensitiveDataMaskerConfigRead");

std::set<std::string> used_names;

@ -43,9 +43,9 @@ StatusFile::StatusFile(const std::string & path_)
}

if (!contents.empty())
LOG_INFO(&Logger::get("StatusFile"), "Status file {} already exists - unclean restart. Contents:\n{}", path, contents);
LOG_INFO(&Poco::Logger::get("StatusFile"), "Status file {} already exists - unclean restart. Contents:\n{}", path, contents);
else
LOG_INFO(&Logger::get("StatusFile"), "Status file {} already exists and is empty - probably unclean hardware restart.", path);
LOG_INFO(&Poco::Logger::get("StatusFile"), "Status file {} already exists and is empty - probably unclean hardware restart.", path);
}

fd = ::open(path.c_str(), O_WRONLY | O_CREAT | O_CLOEXEC, 0666);
@ -90,10 +90,10 @@ StatusFile::StatusFile(const std::string & path_)
StatusFile::~StatusFile()
{
if (0 != close(fd))
LOG_ERROR(&Logger::get("StatusFile"), "Cannot close file {}, {}", path, errnoToString(ErrorCodes::CANNOT_CLOSE_FILE));
LOG_ERROR(&Poco::Logger::get("StatusFile"), "Cannot close file {}, {}", path, errnoToString(ErrorCodes::CANNOT_CLOSE_FILE));

if (0 != unlink(path.c_str()))
LOG_ERROR(&Logger::get("StatusFile"), "Cannot unlink file {}, {}", path, errnoToString(ErrorCodes::CANNOT_CLOSE_FILE));
LOG_ERROR(&Poco::Logger::get("StatusFile"), "Cannot unlink file {}, {}", path, errnoToString(ErrorCodes::CANNOT_CLOSE_FILE));
}

}

@ -116,6 +116,12 @@ inline bool isControlASCII(char c)
return static_cast<unsigned char>(c) <= 31;
}

inline bool isPrintableASCII(char c)
{
uint8_t uc = c;
return uc >= 32 && uc <= 126; /// 127 is ASCII DEL.
}

/// Works assuming isAlphaASCII.
inline char toLowerIfAlphaASCII(char c)
{

@ -234,14 +234,6 @@ void ThreadPoolImpl<Thread>::worker(typename std::list<Thread>::iterator thread_
--scheduled_jobs;
}

DB::tryLogCurrentException("ThreadPool",
std::string("Exception in ThreadPool(") +
"max_threads: " + std::to_string(max_threads)
+ ", max_free_threads: " + std::to_string(max_free_threads)
+ ", queue_size: " + std::to_string(queue_size)
+ ", shutdown_on_exception: " + std::to_string(shutdown_on_exception)
+ ").");

job_finished.notify_all();
new_job_or_shutdown.notify_all();
return;

@ -1,7 +1,9 @@
#include <Common/UTF8Helpers.h>
#include <Common/StringUtils/StringUtils.h>

#include <widechar_width.h>


namespace DB
{
namespace UTF8
@ -94,6 +96,42 @@ size_t computeWidth(const UInt8 * data, size_t size, size_t prefix) noexcept
size_t rollback = 0;
for (size_t i = 0; i < size; ++i)
{
/// Quickly skip regular ASCII

#if defined(__SSE2__)
const auto lower_bound = _mm_set1_epi8(32);
const auto upper_bound = _mm_set1_epi8(126);

while (i + 15 < size)
{
__m128i bytes = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&data[i]));

const uint16_t non_regular_width_mask = _mm_movemask_epi8(
_mm_or_si128(
_mm_cmplt_epi8(bytes, lower_bound),
_mm_cmpgt_epi8(bytes, upper_bound)));

if (non_regular_width_mask)
{
auto num_regular_chars = __builtin_ctz(non_regular_width_mask);
width += num_regular_chars;
i += num_regular_chars;
break;
}
else
{
i += 16;
width += 16;
}
}
#endif

while (i < size && isPrintableASCII(data[i]))
{
++width;
++i;
}

switch (decoder.decode(data[i]))
{
case UTF8Decoder::REJECT:
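Editor's note: the SSE2 fast path above classifies 16 bytes at once; _mm_movemask_epi8 packs the per-byte comparison results into a bit mask and __builtin_ctz finds the first byte that is not printable ASCII. A standalone illustration of that mask logic (assumes an SSE2 target and GCC/Clang; not part of the diff):

#include <emmintrin.h>
#include <cstdint>
#include <cstdio>

int main()
{
    const char data[16] = "abcdef\xC3\xA9 tail..";  /// 0xC3 starts a non-ASCII sequence

    __m128i bytes = _mm_loadu_si128(reinterpret_cast<const __m128i *>(data));
    /// Bytes below 32 or above 126 are flagged; bytes >= 0x80 appear negative
    /// to the signed byte comparisons, so non-ASCII input is flagged as well.
    const uint16_t mask = _mm_movemask_epi8(
        _mm_or_si128(
            _mm_cmplt_epi8(bytes, _mm_set1_epi8(32)),
            _mm_cmpgt_epi8(bytes, _mm_set1_epi8(126))));

    if (mask)
        std::printf("first non-regular byte at offset %d\n", __builtin_ctz(mask));   /// prints 6
    else
        std::printf("all 16 bytes are printable ASCII\n");
}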
@ -43,7 +43,7 @@ public:
private:
zkutil::ZooKeeperHolderPtr zookeeper_holder;
std::string path;
Logger * log = &Logger::get("zkutil::Increment");
Poco::Logger * log = &Poco::Logger::get("zkutil::Increment");
};

}

@ -39,7 +39,7 @@ public:
LeaderElection(DB::BackgroundSchedulePool & pool_, const std::string & path_, ZooKeeper & zookeeper_, LeadershipHandler handler_, const std::string & identifier_ = "")
: pool(pool_), path(path_), zookeeper(zookeeper_), handler(handler_), identifier(identifier_)
, log_name("LeaderElection (" + path + ")")
, log(&Logger::get(log_name))
, log(&Poco::Logger::get(log_name))
{
task = pool.createTask(log_name, [this] { threadFunction(); });
createNode();
@ -67,7 +67,7 @@ private:
LeadershipHandler handler;
std::string identifier;
std::string log_name;
Logger * log;
Poco::Logger * log;

EphemeralNodeHolderPtr node;
std::string node_name;

@ -21,7 +21,7 @@ namespace zkutil
zookeeper_holder(zookeeper_holder_),
lock_path(lock_prefix_ + "/" + lock_name_),
lock_message(lock_message_),
log(&Logger::get("zkutil::Lock"))
log(&Poco::Logger::get("zkutil::Lock"))
{
auto zookeeper = zookeeper_holder->getZooKeeper();
if (create_parent_path_)
@ -72,7 +72,7 @@ namespace zkutil

std::string lock_path;
std::string lock_message;
Logger * log;
Poco::Logger * log;

};
}

@ -48,7 +48,7 @@ static void check(int32_t code, const std::string & path)
void ZooKeeper::init(const std::string & implementation_, const std::string & hosts_, const std::string & identity_,
int32_t session_timeout_ms_, int32_t operation_timeout_ms_, const std::string & chroot_)
{
log = &Logger::get("ZooKeeper");
log = &Poco::Logger::get("ZooKeeper");
hosts = hosts_;
identity = identity_;
session_timeout_ms = session_timeout_ms_;

@ -269,7 +269,7 @@ private:

std::mutex mutex;

Logger * log = nullptr;
Poco::Logger * log = nullptr;
};

@ -70,7 +70,7 @@ private:
mutable std::mutex mutex;
ZooKeeper::Ptr ptr;

Logger * log = &Logger::get("ZooKeeperHolder");
Poco::Logger * log = &Poco::Logger::get("ZooKeeperHolder");

static std::string nullptr_exception_message;
};

@ -20,8 +20,8 @@ int main(int argc, char ** argv)
}

Poco::AutoPtr<Poco::ConsoleChannel> channel = new Poco::ConsoleChannel(std::cerr);
Logger::root().setChannel(channel);
Logger::root().setLevel("trace");
Poco::Logger::root().setChannel(channel);
Poco::Logger::root().setLevel("trace");

zkutil::ZooKeeper zk(argv[1]);
std::string unused;
@ -1,6 +1,8 @@
#pragma once

#include <string>
#include <fmt/format.h>


namespace DB
{
@ -20,3 +22,35 @@ std::string formatReadableSizeWithDecimalSuffix(double value, int precision = 2)
/// Prints the number as 123.45 billion.
void formatReadableQuantity(double value, DB::WriteBuffer & out, int precision = 2);
std::string formatReadableQuantity(double value, int precision = 2);


/// Wrapper around value. If used with fmt library (e.g. for log messages),
/// value is automatically formatted as size with binary suffix.
struct ReadableSize
{
double value;
explicit ReadableSize(double value_) : value(value_) {}
};

/// See https://fmt.dev/latest/api.html#formatting-user-defined-types
template <>
struct fmt::formatter<ReadableSize>
{
constexpr auto parse(format_parse_context & ctx)
{
auto it = ctx.begin();
auto end = ctx.end();

/// Only support {}.
if (it != end && *it != '}')
throw format_error("invalid format");

return it;
}

template <typename FormatContext>
auto format(const ReadableSize & size, FormatContext & ctx)
{
return format_to(ctx.out(), "{}", formatReadableSizeWithBinarySuffix(size.value));
}
};
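Editor's note: with the formatter specialization above in scope, a raw byte count wrapped in ReadableSize prints as a human-readable size anywhere fmt is used, which is what all the log-message hunks in this commit rely on. A self-contained sketch with a simplified stand-in for formatReadableSizeWithBinarySuffix (written against fmt of this commit's vintage, where format() may be non-const; not part of the diff):

#include <fmt/format.h>
#include <cstddef>
#include <string>

/// Simplified stand-in for the helper declared above, for this sketch only.
static std::string formatReadableSizeWithBinarySuffix(double value)
{
    const char * units[] = {"B", "KiB", "MiB", "GiB", "TiB"};
    std::size_t i = 0;
    while (value >= 1024 && i + 1 < 5) { value /= 1024; ++i; }
    return fmt::format("{:.2f} {}", value, units[i]);
}

struct ReadableSize
{
    double value;
    explicit ReadableSize(double value_) : value(value_) {}
};

template <>
struct fmt::formatter<ReadableSize>
{
    constexpr auto parse(format_parse_context & ctx) { return ctx.begin(); }

    template <typename FormatContext>
    auto format(const ReadableSize & size, FormatContext & ctx)
    {
        return format_to(ctx.out(), "{}", formatReadableSizeWithBinarySuffix(size.value));
    }
};

int main()
{
    /// Prints: Sent 1.50 MiB (512.00 KiB/sec.)
    fmt::print("Sent {} ({}/sec.)\n", ReadableSize(1.5 * 1024 * 1024), ReadableSize(512 * 1024));
}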
@ -11,15 +11,15 @@ struct ContextHolder
: shared_context(DB::Context::createShared())
, context(DB::Context::createGlobal(shared_context.get()))
{
context.makeGlobalContext();
context.setPath("./");
}

ContextHolder(ContextHolder &&) = default;
};

inline ContextHolder getContext()
inline const ContextHolder & getContext()
{
ContextHolder holder;
holder.context.makeGlobalContext();
holder.context.setPath("./");
static ContextHolder holder;
return holder;
}
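Editor's note: the getContext() change above is the classic function-local-static (Meyers) singleton: the holder is constructed once, on first use, and every caller then shares the same instance instead of rebuilding a fresh context. A minimal sketch of the idiom (not part of the diff):

#include <cstdio>

struct Expensive
{
    Expensive() { std::puts("constructed once"); }
};

inline const Expensive & instance()
{
    static Expensive holder;   /// initialized on first call; thread-safe since C++11
    return holder;
}

int main()
{
    instance();
    instance();   /// prints nothing the second time
}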
19 src/Common/tests/gtest_log.cpp Normal file
@ -0,0 +1,19 @@
#include <string>
#include <vector>
#include <common/logger_useful.h>
#include <gtest/gtest.h>

#include <Poco/Logger.h>
#include <Poco/AutoPtr.h>
#include <Poco/NullChannel.h>


TEST(Logger, Log)
{
Poco::Logger::root().setLevel("none");
Poco::Logger::root().setChannel(Poco::AutoPtr<Poco::NullChannel>(new Poco::NullChannel()));
Poco::Logger * log = &Poco::Logger::get("Log");

/// This test checks that we don't pass this string to fmtlib, because it is the only argument.
EXPECT_NO_THROW(LOG_INFO(log, "Hello {} World"));
}
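Editor's note: the new test pins down why a message with no further arguments must bypass fmtlib. With a runtime format string (as fmt of this commit's era used), a stray placeholder throws; a sketch of the failure the test guards against (not part of the diff; newer fmt versions reject this at compile time instead):

#include <fmt/format.h>
#include <string>
#include <cstdio>

int main()
{
    try
    {
        /// "{}" with no corresponding argument throws at runtime...
        std::string s = fmt::format("Hello {} World");
        std::printf("%s\n", s.c_str());
    }
    catch (const fmt::format_error & e)
    {
        /// ...which is why single-argument LOG_* calls pass the string through unformatted.
        std::printf("format_error: %s\n", e.what());
    }
}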
@ -111,7 +111,7 @@ void BackgroundSchedulePoolTaskInfo::execute()
static const int32_t slow_execution_threshold_ms = 200;

if (milliseconds >= slow_execution_threshold_ms)
LOG_TRACE(&Logger::get(log_name), "Execution took {} ms.", milliseconds);
LOG_TRACE(&Poco::Logger::get(log_name), "Execution took {} ms.", milliseconds);

{
std::lock_guard lock_schedule(schedule_mutex);
@ -156,7 +156,7 @@ BackgroundSchedulePool::BackgroundSchedulePool(size_t size_, CurrentMetrics::Met
, memory_metric(memory_metric_)
, thread_name(thread_name_)
{
LOG_INFO(&Logger::get("BackgroundSchedulePool/" + thread_name), "Create BackgroundSchedulePool with {} threads", size);
LOG_INFO(&Poco::Logger::get("BackgroundSchedulePool/" + thread_name), "Create BackgroundSchedulePool with {} threads", size);

threads.resize(size);
for (auto & thread : threads)
@ -179,7 +179,7 @@ BackgroundSchedulePool::~BackgroundSchedulePool()
queue.wakeUpAll();
delayed_thread.join();

LOG_TRACE(&Logger::get("BackgroundSchedulePool/" + thread_name), "Waiting for threads to finish.");
LOG_TRACE(&Poco::Logger::get("BackgroundSchedulePool/" + thread_name), "Waiting for threads to finish.");
for (auto & thread : threads)
thread.join();
}

@ -164,7 +164,7 @@ void ExternalTablesHandler::handlePart(const Poco::Net::MessageHeader & header,

/// Create table
NamesAndTypesList columns = sample_block.getNamesAndTypesList();
auto temporary_table = TemporaryTableHolder(context, ColumnsDescription{columns});
auto temporary_table = TemporaryTableHolder(context, ColumnsDescription{columns}, {});
auto storage = temporary_table.getTable();
context.addExternalTable(data->table_name, std::move(temporary_table));
BlockOutputStreamPtr output = storage->write(ASTPtr(), context);

@ -994,7 +994,7 @@ private:
class Sha256Password : public IPlugin
{
public:
Sha256Password(RSA & public_key_, RSA & private_key_, Logger * log_)
Sha256Password(RSA & public_key_, RSA & private_key_, Poco::Logger * log_)
: public_key(public_key_)
, private_key(private_key_)
, log(log_)
@ -1130,7 +1130,7 @@ public:
private:
RSA & public_key;
RSA & private_key;
Logger * log;
Poco::Logger * log;
String scramble;
};
#endif
@ -126,7 +126,7 @@ struct Settings : public SettingsCollection<Settings>
M(SettingBool, force_optimize_skip_unused_shards_no_nested, false, "Do not apply force_optimize_skip_unused_shards for nested Distributed tables.", 0) \
\
M(SettingBool, input_format_parallel_parsing, true, "Enable parallel parsing for some data formats.", 0) \
M(SettingUInt64, min_chunk_bytes_for_parallel_parsing, (1024 * 1024), "The minimum chunk size in bytes, which each thread will parse in parallel.", 0) \
M(SettingUInt64, min_chunk_bytes_for_parallel_parsing, (10 * 1024 * 1024), "The minimum chunk size in bytes, which each thread will parse in parallel.", 0) \
\
M(SettingUInt64, merge_tree_min_rows_for_concurrent_read, (20 * 8192), "If at least as many lines are read from one file, the reading can be parallelized.", 0) \
M(SettingUInt64, merge_tree_min_bytes_for_concurrent_read, (24 * 10 * 1024 * 1024), "If at least as many bytes are read from one file, the reading can be parallelized.", 0) \
@ -310,7 +310,7 @@ struct Settings : public SettingsCollection<Settings>
M(SettingUInt64, max_execution_speed, 0, "Maximum number of execution rows per second.", 0) \
M(SettingUInt64, min_execution_speed_bytes, 0, "Minimum number of execution bytes per second.", 0) \
M(SettingUInt64, max_execution_speed_bytes, 0, "Maximum number of execution bytes per second.", 0) \
M(SettingSeconds, timeout_before_checking_execution_speed, 0, "Check that the speed is not too low after the specified time has elapsed.", 0) \
M(SettingSeconds, timeout_before_checking_execution_speed, 10, "Check that the speed is not too low after the specified time has elapsed.", 0) \
\
M(SettingUInt64, max_columns_to_read, 0, "", 0) \
M(SettingUInt64, max_temporary_columns, 0, "", 0) \
@ -425,6 +425,8 @@ struct Settings : public SettingsCollection<Settings>
M(SettingSeconds, lock_acquire_timeout, DBMS_DEFAULT_LOCK_ACQUIRE_TIMEOUT_SEC, "How long locking request should wait before failing", 0) \
M(SettingBool, materialize_ttl_after_modify, true, "Apply TTL for old data, after ALTER MODIFY TTL query", 0) \
\
M(SettingBool, allow_experimental_geo_types, false, "Allow geo data types such as Point, Ring, Polygon, MultiPolygon", 0) \
\
/** Obsolete settings that do nothing but left for compatibility reasons. Remove each one after half a year of obsolescence. */ \
\
M(SettingBool, allow_experimental_low_cardinality_type, true, "Obsolete setting, does nothing. Will be removed after 2019-08-13", 0) \
@ -437,6 +439,7 @@ struct Settings : public SettingsCollection<Settings>
M(SettingUInt64, mark_cache_min_lifetime, 0, "Obsolete setting, does nothing. Will be removed after 2020-05-31", 0) \
M(SettingBool, partial_merge_join, false, "Obsolete. Use join_algorithm='prefer_partial_merge' instead.", 0) \
M(SettingUInt64, max_memory_usage_for_all_queries, 0, "Obsolete. Will be removed after 2020-10-20", 0) \
M(SettingBool, experimental_use_processors, true, "Obsolete setting, does nothing. Will be removed after 2020-11-29.", 0) \

DECLARE_SETTINGS_COLLECTION(LIST_OF_SETTINGS)

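Editor's note: the settings table above is an X-macro list; each M(...) row is expanded differently depending on how M is defined at the expansion site (member declaration, serialization, documentation, ...). A stripped-down sketch of the mechanism, not the real SettingsCollection machinery:

#include <cstdio>

using UInt64 = unsigned long long;

#define LIST_OF_SETTINGS_SKETCH(M) \
    M(UInt64, min_chunk_bytes_for_parallel_parsing, 10 * 1024 * 1024) \
    M(UInt64, timeout_before_checking_execution_speed, 10)

/// Expansion 1: declare one member per setting, with its default.
#define DECLARE_MEMBER(TYPE, NAME, DEFAULT) TYPE NAME = DEFAULT;
struct SettingsSketch { LIST_OF_SETTINGS_SKETCH(DECLARE_MEMBER) };

/// Expansion 2: print every setting of the instance `s` in scope.
#define PRINT_SETTING(TYPE, NAME, DEFAULT) std::printf(#NAME " = %llu\n", s.NAME);

int main()
{
    SettingsSketch s;
    LIST_OF_SETTINGS_SKETCH(PRINT_SETTING)
}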
@ -598,7 +598,7 @@ namespace details

void SettingsCollectionUtils::warningNameNotFound(const StringRef & name)
{
static auto * log = &Logger::get("Settings");
static auto * log = &Poco::Logger::get("Settings");
LOG_WARNING(log, "Unknown setting {}, skipping", name);
}

@ -60,7 +60,7 @@ Block AggregatingBlockInputStream::readImpl()
input_streams.emplace_back(temporary_inputs.back()->block_in);
}

LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), formatReadableSizeWithBinarySuffix(files.sum_size_compressed), formatReadableSizeWithBinarySuffix(files.sum_size_uncompressed));
LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), ReadableSize(files.sum_size_compressed), ReadableSize(files.sum_size_uncompressed));

impl = std::make_unique<MergingAggregatedMemoryEfficientBlockInputStream>(input_streams, params, final, 1, 1);
}

@ -47,7 +47,7 @@ protected:
/** From here we will get the completed blocks after the aggregation. */
std::unique_ptr<IBlockInputStream> impl;

Logger * log = &Logger::get("AggregatingBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("AggregatingBlockInputStream");
};

}

@ -13,6 +13,7 @@ limitations under the License. */

#include <DataStreams/IBlockInputStream.h>
#include <Processors/Sources/SourceWithProgress.h>
#include <Processors/Transforms/AggregatingTransform.h>


namespace DB
@ -38,7 +39,12 @@ protected:

Block res = *it;
++it;
return Chunk(res.getColumns(), res.rows());

auto info = std::make_shared<AggregatedChunkInfo>();
info->bucket_num = res.info.bucket_num;
info->is_overflows = res.info.is_overflows;

return Chunk(res.getColumns(), res.rows(), std::move(info));
}

private:

@ -168,7 +168,7 @@ private:
const SortDescription description;
String sign_column_name;

Logger * log = &Logger::get("CollapsingFinalBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("CollapsingFinalBlockInputStream");

bool first = true;

@ -21,7 +21,7 @@ ColumnGathererStream::ColumnGathererStream(
const String & column_name_, const BlockInputStreams & source_streams, ReadBuffer & row_sources_buf_,
size_t block_preferred_size_)
: column_name(column_name_), sources(source_streams.size()), row_sources_buf(row_sources_buf_)
, block_preferred_size(block_preferred_size_), log(&Logger::get("ColumnGathererStream"))
, block_preferred_size(block_preferred_size_), log(&Poco::Logger::get("ColumnGathererStream"))
{
if (source_streams.empty())
throw Exception("There are no streams to gather", ErrorCodes::EMPTY_DATA_PASSED);
@ -105,7 +105,7 @@ void ColumnGathererStream::readSuffixImpl()
else
LOG_DEBUG(log, "Gathered column {} ({} bytes/elem.) in {} sec., {} rows/sec., {}/sec.",
column_name, static_cast<double>(profile_info.bytes) / profile_info.rows, seconds,
profile_info.rows / seconds, formatReadableSizeWithBinarySuffix(profile_info.bytes / seconds));
profile_info.rows / seconds, ReadableSize(profile_info.bytes / seconds));
}

}

@ -44,7 +44,7 @@ private:
size_t bytes_to_transfer = 0;

using Logger = Poco::Logger;
Logger * log = &Logger::get("CreatingSetsBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("CreatingSetsBlockInputStream");

void createAll();
void createOne(SubqueryForSet & subquery);
@ -30,7 +30,8 @@ static void limitProgressingSpeed(size_t total_progress_size, size_t max_speed_i
{
UInt64 sleep_microseconds = desired_microseconds - total_elapsed_microseconds;

/// Never sleep more than one second (it should be enough to limit speed for a reasonable amount, and otherwise it's too easy to make query hang).
/// Never sleep more than one second (it should be enough to limit speed for a reasonable amount,
/// and otherwise it's too easy to make query hang).
sleep_microseconds = std::min(UInt64(1000000), sleep_microseconds);

sleepForMicroseconds(sleep_microseconds);
@ -45,14 +46,14 @@ void ExecutionSpeedLimits::throttle(
{
if ((min_execution_rps != 0 || max_execution_rps != 0
|| min_execution_bps != 0 || max_execution_bps != 0
|| (total_rows_to_read != 0 && timeout_before_checking_execution_speed != 0)) &&
(static_cast<Int64>(total_elapsed_microseconds) > timeout_before_checking_execution_speed.totalMicroseconds()))
|| (total_rows_to_read != 0 && timeout_before_checking_execution_speed != 0))
&& (static_cast<Int64>(total_elapsed_microseconds) > timeout_before_checking_execution_speed.totalMicroseconds()))
{
/// Do not count sleeps in throttlers
UInt64 throttler_sleep_microseconds = CurrentThread::getProfileEvents()[ProfileEvents::ThrottlerSleepMicroseconds];

double elapsed_seconds = 0;
if (throttler_sleep_microseconds > total_elapsed_microseconds)
if (total_elapsed_microseconds > throttler_sleep_microseconds)
elapsed_seconds = static_cast<double>(total_elapsed_microseconds - throttler_sleep_microseconds) / 1000000.0;

if (elapsed_seconds > 0)
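Editor's note: besides reformatting the condition, the hunk above appears to fix an inverted comparison: previously elapsed_seconds was only computed in the degenerate case where the throttler sleep exceeded the wall time, so in the common case it stayed 0 and the speed checks below were skipped. A tiny check with hypothetical numbers (not part of the diff):

#include <cstdint>
#include <cstdio>

int main()
{
    uint64_t total_elapsed_microseconds = 2500000;     /// 2.5 s of wall time
    uint64_t throttler_sleep_microseconds = 1000000;   /// 1.0 s slept in throttlers

    double elapsed_seconds = 0;
    if (total_elapsed_microseconds > throttler_sleep_microseconds)
        elapsed_seconds = static_cast<double>(total_elapsed_microseconds - throttler_sleep_microseconds) / 1000000.0;

    std::printf("%.2f\n", elapsed_seconds);   /// 1.50; the old inverted check left 0.00
}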
@ -58,7 +58,7 @@ InputStreamFromASTInsertQuery::InputStreamFromASTInsertQuery(

if (context.getSettingsRef().input_format_defaults_for_omitted_fields && ast_insert_query->table_id && !input_function)
{
StoragePtr storage = DatabaseCatalog::instance().getTable(ast_insert_query->table_id);
StoragePtr storage = DatabaseCatalog::instance().getTable(ast_insert_query->table_id, context);
auto column_defaults = storage->getColumns().getDefaults();
if (!column_defaults.empty())
res_stream = std::make_shared<AddingDefaultsBlockInputStream>(res_stream, column_defaults, context);

@ -264,7 +264,7 @@ void MergeSortingBlockInputStream::remerge()
}
merger.readSuffix();

LOG_DEBUG(log, "Memory usage is lowered from {} to {}", formatReadableSizeWithBinarySuffix(sum_bytes_in_blocks), formatReadableSizeWithBinarySuffix(new_sum_bytes_in_blocks));
LOG_DEBUG(log, "Memory usage is lowered from {} to {}", ReadableSize(sum_bytes_in_blocks), ReadableSize(new_sum_bytes_in_blocks));

/// If the memory consumption was not lowered enough - we will not perform remerge anymore. 2 is a guess.
if (new_sum_bytes_in_blocks * 2 > sum_bytes_in_blocks)

@ -104,7 +104,7 @@ private:
String codec;
size_t min_free_disk_space;

Logger * log = &Logger::get("MergeSortingBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("MergeSortingBlockInputStream");

Blocks blocks;
size_t sum_rows_in_blocks = 0;
@ -555,7 +555,7 @@ MergingAggregatedMemoryEfficientBlockInputStream::BlocksToMerge MergingAggregate
/// Not yet partitioned (split into buckets) block. Will partition it and place the result into 'splitted_blocks'.
if (input.block.info.bucket_num == -1 && input.block && input.splitted_blocks.empty())
{
LOG_TRACE(&Logger::get("MergingAggregatedMemoryEfficient"), "Having block without bucket: will split.");
LOG_TRACE(&Poco::Logger::get("MergingAggregatedMemoryEfficient"), "Having block without bucket: will split.");

input.splitted_blocks = aggregator.convertBlockToTwoLevel(input.block);
input.block = Block();
@ -96,7 +96,7 @@ private:
std::atomic<bool> has_overflows {false};
int current_bucket_num = -1;

Logger * log = &Logger::get("MergingAggregatedMemoryEfficientBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("MergingAggregatedMemoryEfficientBlockInputStream");


struct Input

@ -23,7 +23,7 @@ MergingSortedBlockInputStream::MergingSortedBlockInputStream(
: description(std::move(description_)), max_block_size(max_block_size_), limit(limit_), quiet(quiet_)
, source_blocks(inputs_.size())
, cursors(inputs_.size()), out_row_sources_buf(out_row_sources_buf_)
, log(&Logger::get("MergingSortedBlockInputStream"))
, log(&Poco::Logger::get("MergingSortedBlockInputStream"))
{
children.insert(children.end(), inputs_.begin(), inputs_.end());
header = children.at(0)->getHeader();
@ -269,7 +269,7 @@ void MergingSortedBlockInputStream::readSuffixImpl()
LOG_DEBUG(log, "Merge sorted {} blocks, {} rows in {} sec., {} rows/sec., {}/sec",
profile_info.blocks, profile_info.rows, seconds,
profile_info.rows / seconds,
formatReadableSizeWithBinarySuffix(profile_info.bytes / seconds));
ReadableSize(profile_info.bytes / seconds));
}

}

@ -82,7 +82,7 @@ Block ParallelAggregatingBlockInputStream::readImpl()
input_streams.emplace_back(temporary_inputs.back()->block_in);
}

LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), formatReadableSizeWithBinarySuffix(files.sum_size_compressed), formatReadableSizeWithBinarySuffix(files.sum_size_uncompressed));
LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), ReadableSize(files.sum_size_compressed), ReadableSize(files.sum_size_uncompressed));

impl = std::make_unique<MergingAggregatedMemoryEfficientBlockInputStream>(
input_streams, params, final, temporary_data_merge_threads, temporary_data_merge_threads);
@ -178,16 +178,16 @@ void ParallelAggregatingBlockInputStream::execute()
{
size_t rows = many_data[i]->size();
LOG_TRACE(log, "Aggregated. {} to {} rows (from {}) in {} sec. ({} rows/sec., {}/sec.)",
threads_data[i].src_rows, rows, formatReadableSizeWithBinarySuffix(threads_data[i].src_bytes),
threads_data[i].src_rows, rows, ReadableSize(threads_data[i].src_bytes),
elapsed_seconds, threads_data[i].src_rows / elapsed_seconds,
formatReadableSizeWithBinarySuffix(threads_data[i].src_bytes / elapsed_seconds));
ReadableSize(threads_data[i].src_bytes / elapsed_seconds));

total_src_rows += threads_data[i].src_rows;
total_src_bytes += threads_data[i].src_bytes;
}
LOG_TRACE(log, "Total aggregated. {} rows (from {}) in {} sec. ({} rows/sec., {}/sec.)",
total_src_rows, formatReadableSizeWithBinarySuffix(total_src_bytes), elapsed_seconds,
total_src_rows / elapsed_seconds, formatReadableSizeWithBinarySuffix(total_src_bytes / elapsed_seconds));
total_src_rows, ReadableSize(total_src_bytes), elapsed_seconds,
total_src_rows / elapsed_seconds, ReadableSize(total_src_bytes / elapsed_seconds));

/// If there was no data, and we aggregate without keys, we must return single row with the result of empty aggregation.
/// To do this, we pass a block with zero rows to aggregate.

@ -60,7 +60,7 @@ private:
std::atomic<bool> executed {false};
std::vector<std::unique_ptr<TemporaryFileStream>> temporary_inputs;

Logger * log = &Logger::get("ParallelAggregatingBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("ParallelAggregatingBlockInputStream");


ManyAggregatedDataVariants many_data;

@ -359,7 +359,7 @@ private:
/// Wait for the completion of all threads.
std::atomic<bool> joined_threads { false };

Logger * log = &Logger::get("ParallelInputsProcessor");
Poco::Logger * log = &Poco::Logger::get("ParallelInputsProcessor");
};


@ -59,7 +59,7 @@ PushingToViewsBlockOutputStream::PushingToViewsBlockOutputStream(

for (const auto & database_table : dependencies)
{
auto dependent_table = DatabaseCatalog::instance().getTable(database_table);
auto dependent_table = DatabaseCatalog::instance().getTable(database_table, context);

ASTPtr query;
BlockOutputStreamPtr out;
@ -274,7 +274,7 @@ void PushingToViewsBlockOutputStream::process(const Block & block, size_t view_n
StorageValues::create(
storage->getStorageID(), storage->getColumns(), block, storage->getVirtuals()));
select.emplace(view.query, local_context, SelectQueryOptions());
in = std::make_shared<MaterializingBlockInputStream>(select->execute().in);
in = std::make_shared<MaterializingBlockInputStream>(select->execute().getInputStream());

/// Squashing is needed here because the materialized view query can generate a lot of blocks
/// even when only one block is inserted into the parent table (e.g. if the query is a GROUP BY

@ -151,7 +151,7 @@ private:
PoolMode pool_mode = PoolMode::GET_MANY;
StorageID main_table = StorageID::createEmpty();

Logger * log = &Logger::get("RemoteBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("RemoteBlockInputStream");
};

}

@ -16,8 +16,8 @@ bool SizeLimits::check(UInt64 rows, UInt64 bytes, const char * what, int too_man
+ ", current rows: " + formatReadableQuantity(rows), too_many_rows_exception_code);

if (max_bytes && bytes > max_bytes)
throw Exception("Limit for " + std::string(what) + " exceeded, max bytes: " + formatReadableSizeWithBinarySuffix(max_bytes)
+ ", current bytes: " + formatReadableSizeWithBinarySuffix(bytes), too_many_bytes_exception_code);
throw Exception(fmt::format("Limit for {} exceeded, max bytes: {}, current bytes: {}",
std::string(what), ReadableSize(max_bytes), ReadableSize(bytes)), too_many_bytes_exception_code);

return true;
}

@ -28,7 +28,7 @@ TTLBlockInputStream::TTLBlockInputStream(
, current_time(current_time_)
, force(force_)
, old_ttl_infos(data_part->ttl_infos)
, log(&Logger::get(storage.getLogName() + " (TTLBlockInputStream)"))
, log(&Poco::Logger::get(storage.getLogName() + " (TTLBlockInputStream)"))
, date_lut(DateLUT::instance())
{
children.push_back(input_);
@ -52,7 +52,7 @@ private:
NameSet empty_columns;

size_t rows_removed = 0;
Logger * log;
Poco::Logger * log;
const DateLUTImpl & date_lut;

/// TODO rewrite defaults logic to evaluateMissingDefaults
@ -253,7 +253,7 @@ private:
bool started = false;
bool all_read = false;

Logger * log = &Logger::get("UnionBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("UnionBlockInputStream");
};

}

@ -20,8 +20,8 @@ try
using namespace DB;

Poco::AutoPtr<Poco::ConsoleChannel> channel = new Poco::ConsoleChannel(std::cerr);
Logger::root().setChannel(channel);
Logger::root().setLevel("trace");
Poco::Logger::root().setChannel(channel);
Poco::Logger::root().setLevel("trace");

Block block1;

@ -35,7 +35,7 @@ try
Names column_names;
column_names.push_back("WatchID");

StoragePtr table = DatabaseCatalog::instance().getTable({"default", "hits6"});
StoragePtr table = DatabaseCatalog::instance().getTable({"default", "hits6"}, context);

QueryProcessingStage::Enum stage = table->getQueryProcessingStage(context);
auto pipes = table->read(column_names, {}, context, stage, settings.max_block_size, settings.max_threads);
131 src/DataTypes/DataTypeCustomGeo.cpp Normal file
@ -0,0 +1,131 @@
+#include <Columns/ColumnsNumber.h>
+#include <Columns/ColumnTuple.h>
+#include <DataTypes/DataTypeArray.h>
+#include <DataTypes/DataTypeCustom.h>
+#include <DataTypes/DataTypeCustomSimpleTextSerialization.h>
+#include <DataTypes/DataTypeFactory.h>
+#include <DataTypes/DataTypeTuple.h>
+#include <DataTypes/DataTypesNumber.h>
+
+namespace DB
+{
+
+namespace
+{
+
+class DataTypeCustomPointSerialization : public DataTypeCustomSimpleTextSerialization
+{
+public:
+    void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
+    {
+        nestedDataType()->serializeAsText(column, row_num, ostr, settings);
+    }
+
+    void deserializeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
+    {
+        nestedDataType()->deserializeAsWholeText(column, istr, settings);
+    }
+
+    static DataTypePtr nestedDataType()
+    {
+        static auto data_type = DataTypePtr(std::make_unique<DataTypeTuple>(
+            DataTypes({std::make_unique<DataTypeFloat64>(), std::make_unique<DataTypeFloat64>()})));
+        return data_type;
+    }
+};
+
+class DataTypeCustomRingSerialization : public DataTypeCustomSimpleTextSerialization
+{
+public:
+    void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
+    {
+        nestedDataType()->serializeAsText(column, row_num, ostr, settings);
+    }
+
+    void deserializeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
+    {
+        nestedDataType()->deserializeAsWholeText(column, istr, settings);
+    }
+
+    static DataTypePtr nestedDataType()
+    {
+        static auto data_type = DataTypePtr(std::make_unique<DataTypeArray>(DataTypeCustomPointSerialization::nestedDataType()));
+        return data_type;
+    }
+};
+
+class DataTypeCustomPolygonSerialization : public DataTypeCustomSimpleTextSerialization
+{
+public:
+    void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
+    {
+        nestedDataType()->serializeAsText(column, row_num, ostr, settings);
+    }
+
+    void deserializeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
+    {
+        nestedDataType()->deserializeAsWholeText(column, istr, settings);
+    }
+
+    static DataTypePtr nestedDataType()
+    {
+        static auto data_type = DataTypePtr(std::make_unique<DataTypeArray>(DataTypeCustomRingSerialization::nestedDataType()));
+        return data_type;
+    }
+};
+
+class DataTypeCustomMultiPolygonSerialization : public DataTypeCustomSimpleTextSerialization
+{
+public:
+    void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
+    {
+        nestedDataType()->serializeAsText(column, row_num, ostr, settings);
+    }
+
+    void deserializeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
+    {
+        nestedDataType()->deserializeAsWholeText(column, istr, settings);
+    }
+
+    static DataTypePtr nestedDataType()
+    {
+        static auto data_type = DataTypePtr(std::make_unique<DataTypeArray>(DataTypeCustomPolygonSerialization::nestedDataType()));
+        return data_type;
+    }
+};
+
+}
+
+void registerDataTypeDomainGeo(DataTypeFactory & factory)
+{
+    // Custom type for a point, represented by its coordinates and stored as Tuple(Float64, Float64)
+    factory.registerSimpleDataTypeCustom("Point", []
+    {
+        return std::make_pair(DataTypeFactory::instance().get("Tuple(Float64, Float64)"),
+            std::make_unique<DataTypeCustomDesc>(std::make_unique<DataTypeCustomFixedName>("Point"), std::make_unique<DataTypeCustomPointSerialization>()));
+    });
+
+    // Custom type for a simple polygon without holes, stored as Array(Point)
+    factory.registerSimpleDataTypeCustom("Ring", []
+    {
+        return std::make_pair(DataTypeFactory::instance().get("Array(Point)"),
+            std::make_unique<DataTypeCustomDesc>(std::make_unique<DataTypeCustomFixedName>("Ring"), std::make_unique<DataTypeCustomRingSerialization>()));
+    });
+
+    // Custom type for a polygon with holes, stored as Array(Ring).
+    // The first element of the outer array is the outer shape of the polygon; all the following elements are holes.
+    factory.registerSimpleDataTypeCustom("Polygon", []
+    {
+        return std::make_pair(DataTypeFactory::instance().get("Array(Ring)"),
+            std::make_unique<DataTypeCustomDesc>(std::make_unique<DataTypeCustomFixedName>("Polygon"), std::make_unique<DataTypeCustomPolygonSerialization>()));
+    });
+
+    // Custom type for multiple polygons with holes, stored as Array(Polygon)
+    factory.registerSimpleDataTypeCustom("MultiPolygon", []
+    {
+        return std::make_pair(DataTypeFactory::instance().get("Array(Polygon)"),
+            std::make_unique<DataTypeCustomDesc>(std::make_unique<DataTypeCustomFixedName>("MultiPolygon"), std::make_unique<DataTypeCustomMultiPolygonSerialization>()));
+    });
+}
+
+}
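The four registrations build on one another: `Point` is stored as `Tuple(Float64, Float64)`, `Ring` as `Array(Point)`, `Polygon` as `Array(Ring)` (outer shape first, then holes), and `MultiPolygon` as `Array(Polygon)`. The same nesting, restated with plain STL containers purely for illustration (these aliases are not ClickHouse types; ClickHouse stores the data as columns):

    #include <array>
    #include <vector>

    using Point        = std::array<double, 2>;   // Tuple(Float64, Float64)
    using Ring         = std::vector<Point>;      // Array(Point)
    using Polygon      = std::vector<Ring>;       // Array(Ring): element 0 is the outer shape, the rest are holes
    using MultiPolygon = std::vector<Polygon>;    // Array(Polygon)

    int main()
    {
        Polygon square_with_hole = {
            {{0, 0}, {10, 0}, {10, 10}, {0, 10}},   // outer ring
            {{4, 4}, {6, 4}, {6, 6}, {4, 6}},       // hole
        };
        MultiPolygon mp = {square_with_hole};
        return mp.size() == 1 ? 0 : 1;
    }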
@@ -180,6 +180,7 @@ DataTypeFactory::DataTypeFactory()
     registerDataTypeLowCardinality(*this);
     registerDataTypeDomainIPv4AndIPv6(*this);
     registerDataTypeDomainSimpleAggregateFunction(*this);
+    registerDataTypeDomainGeo(*this);
 }

 DataTypeFactory & DataTypeFactory::instance()
@@ -83,5 +83,6 @@ void registerDataTypeLowCardinality(DataTypeFactory & factory);
 void registerDataTypeDomainIPv4AndIPv6(DataTypeFactory & factory);
 void registerDataTypeDomainSimpleAggregateFunction(DataTypeFactory & factory);
 void registerDataTypeDateTime64(DataTypeFactory & factory);
+void registerDataTypeDomainGeo(DataTypeFactory & factory);

 }
(Some files were not shown because too many files have changed in this diff.)