Albert Kidrachev 2020-06-01 15:11:36 +03:00
commit 02611de051
390 changed files with 3274 additions and 2621 deletions

View File

@ -1,5 +1,48 @@
## ClickHouse release v20.4
### ClickHouse release v20.4.3.16-stable 2020-05-23
#### Bug Fix
* Removed logging from mutation finalization task if nothing was finalized. [#11109](https://github.com/ClickHouse/ClickHouse/pull/11109) ([alesapin](https://github.com/alesapin)).
* Fixed memory leak in registerDiskS3. [#11074](https://github.com/ClickHouse/ClickHouse/pull/11074) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fixed potentially missed data during termination of a `Kafka` engine table. [#11048](https://github.com/ClickHouse/ClickHouse/pull/11048) ([filimonov](https://github.com/filimonov)).
* Fixed `parseDateTime64BestEffort` argument resolution bugs. [#11038](https://github.com/ClickHouse/ClickHouse/pull/11038) ([Vasily Nemkov](https://github.com/Enmk)).
* Fixed very rare potential use-after-free error in `MergeTree` if table was not created successfully. [#10986](https://github.com/ClickHouse/ClickHouse/pull/10986), [#10970](https://github.com/ClickHouse/ClickHouse/pull/10970) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed metadata (relative path for rename) and data (relative path for symlink) handling for Atomic database. [#10980](https://github.com/ClickHouse/ClickHouse/pull/10980) ([Azat Khuzhin](https://github.com/azat)).
* Fixed server crash on concurrent `ALTER` and `DROP DATABASE` queries with `Atomic` database engine. [#10968](https://github.com/ClickHouse/ClickHouse/pull/10968) ([tavplubix](https://github.com/tavplubix)).
* Fixed incorrect raw data size in `getRawData()` method. [#10964](https://github.com/ClickHouse/ClickHouse/pull/10964) ([Igr](https://github.com/ObjatieGroba)).
* Fixed incompatibility of two-level aggregation between version 20.1 and earlier versions. This incompatibility happens when different versions of ClickHouse are used on the initiator node and the remote nodes, the size of the `GROUP BY` result is large, and aggregation is performed by a single `String` field. It leads to several unmerged rows for a single key in the result. [#10952](https://github.com/ClickHouse/ClickHouse/pull/10952) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed sending partially written files by the `DistributedBlockOutputStream`. [#10940](https://github.com/ClickHouse/ClickHouse/pull/10940) ([Azat Khuzhin](https://github.com/azat)).
* Fixed crash in `SELECT count(notNullIn(NULL, []))`. [#10920](https://github.com/ClickHouse/ClickHouse/pull/10920) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the hang that sometimes happened during `DROP` of the `Kafka` table engine (or during server restarts). [#10910](https://github.com/ClickHouse/ClickHouse/pull/10910) ([filimonov](https://github.com/filimonov)).
* Fixed the impossibility of executing multiple `ALTER RENAME` like `a TO b, c TO a`. [#10895](https://github.com/ClickHouse/ClickHouse/pull/10895) ([alesapin](https://github.com/alesapin)).
* Fixed a possible race that could happen when getting a result from an aggregate function state from multiple threads for the same column. The only way it can happen is when the `finalizeAggregation` function is used while reading from a table with the `Memory` engine that stores an `AggregateFunction` state for a `quantile*` function. [#10890](https://github.com/ClickHouse/ClickHouse/pull/10890) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed backward compatibility with tuples in Distributed tables. [#10889](https://github.com/ClickHouse/ClickHouse/pull/10889) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed `SIGSEGV` in `StringHashTable` if such a key does not exist. [#10870](https://github.com/ClickHouse/ClickHouse/pull/10870) ([Azat Khuzhin](https://github.com/azat)).
* Fixed `WATCH` hangs after `LiveView` table was dropped from database with `Atomic` engine. [#10859](https://github.com/ClickHouse/ClickHouse/pull/10859) ([tavplubix](https://github.com/tavplubix)).
* Fixed a bug in `ReplicatedMergeTree` which might cause some `ALTER` or `OPTIMIZE` query to hang waiting for some replica after it becomes inactive. [#10849](https://github.com/ClickHouse/ClickHouse/pull/10849) ([tavplubix](https://github.com/tavplubix)).
* Now constraints are updated if the column participating in `CONSTRAINT` expression was renamed. Fixes [#10844](https://github.com/ClickHouse/ClickHouse/issues/10844). [#10847](https://github.com/ClickHouse/ClickHouse/pull/10847) ([alesapin](https://github.com/alesapin)).
* Fixed potential read of uninitialized memory in cache-dictionary. [#10834](https://github.com/ClickHouse/ClickHouse/pull/10834) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed columns order after `Block::sortColumns()`. [#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)).
* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `UBSan` and `MSan` report in `DateLUT`. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew)).
* Fixed `parallel_view_processing` behavior. Now all insertions into `MATERIALIZED VIEW` should be finished even if an exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed combinator `-OrNull` and `-OrDefault` when combined with `-State`. [#10741](https://github.com/ClickHouse/ClickHouse/pull/10741) ([hcz](https://github.com/hczhcz)).
* Fixed possible buffer overflow in function `h3EdgeAngle`. [#10711](https://github.com/ClickHouse/ClickHouse/pull/10711) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed a bug which locked concurrent `ALTER` queries when a table has a lot of parts. [#10659](https://github.com/ClickHouse/ClickHouse/pull/10659) ([alesapin](https://github.com/alesapin)).
* Fixed `nullptr` dereference in `StorageBuffer` if server was shutdown before table startup. [#10641](https://github.com/ClickHouse/ClickHouse/pull/10641) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `optimize_skip_unused_shards` with `LowCardinality`. [#10611](https://github.com/ClickHouse/ClickHouse/pull/10611) ([Azat Khuzhin](https://github.com/azat)).
* Fixed handling condition variable for synchronous mutations. In some cases signals to that condition variable could be lost. [#10588](https://github.com/ClickHouse/ClickHouse/pull/10588) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fixed possible crash when `createDictionary()` is called before `loadStoredObject()` has finished. [#10587](https://github.com/ClickHouse/ClickHouse/pull/10587) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed `SELECT` of an `ALIAS` column whose default expression type differs from the column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)).
* Implemented comparison between DateTime64 and String values. [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)).
* Disabled the `GROUP BY` sharding_key optimization by default (the setting `optimize_distributed_group_by_sharding_key` has been introduced and is turned off by default, because analyzing the sharding_key can be tricky; a simple example is an `if` in the sharding key) and fixed it for `WITH ROLLUP/CUBE/TOTALS`. [#10516](https://github.com/ClickHouse/ClickHouse/pull/10516) ([Azat Khuzhin](https://github.com/azat)).
* Fixed [#10263](https://github.com/ClickHouse/ClickHouse/issues/10263). [#10486](https://github.com/ClickHouse/ClickHouse/pull/10486) ([Azat Khuzhin](https://github.com/azat)).
* Added tests about `max_rows_to_sort` setting. [#10268](https://github.com/ClickHouse/ClickHouse/pull/10268) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added backward compatibility for bloom filter index creation. [#10551](https://github.com/ClickHouse/ClickHouse/issues/10551). [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)).
### ClickHouse release v20.4.2.9, 2020-05-12
#### Backward Incompatible Change
@ -280,6 +323,57 @@
## ClickHouse release v20.3
### ClickHouse release v20.3.10.75-lts 2020-05-23
#### Bug Fix
* Removed logging from mutation finalization task if nothing was finalized. [#11109](https://github.com/ClickHouse/ClickHouse/pull/11109) ([alesapin](https://github.com/alesapin)).
* Fixed `parseDateTime64BestEffort` argument resolution bugs. [#11038](https://github.com/ClickHouse/ClickHouse/pull/11038) ([Vasily Nemkov](https://github.com/Enmk)).
* Fixed incorrect raw data size in method `getRawData()`. [#10964](https://github.com/ClickHouse/ClickHouse/pull/10964) ([Igr](https://github.com/ObjatieGroba)).
* Fixed incompatibility of two-level aggregation between version 20.1 and earlier versions. This incompatibility happens when different versions of ClickHouse are used on the initiator node and the remote nodes, the size of the `GROUP BY` result is large, and aggregation is performed by a single `String` field. It leads to several unmerged rows for a single key in the result. [#10952](https://github.com/ClickHouse/ClickHouse/pull/10952) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed backward compatibility with tuples in `Distributed` tables. [#10889](https://github.com/ClickHouse/ClickHouse/pull/10889) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed `SIGSEGV` in `StringHashTable` if such a key does not exist. [#10870](https://github.com/ClickHouse/ClickHouse/pull/10870) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a bug in `ReplicatedMergeTree` which might cause some `ALTER` or `OPTIMIZE` query to hang waiting for some replica after it becomes inactive. [#10849](https://github.com/ClickHouse/ClickHouse/pull/10849) ([tavplubix](https://github.com/tavplubix)).
* Fixed columns order after `Block::sortColumns()`. [#10826](https://github.com/ClickHouse/ClickHouse/pull/10826) ([Azat Khuzhin](https://github.com/azat)).
* Fixed the issue with `ODBC` bridge when no quoting of identifiers is requested. Fixes [#7984](https://github.com/ClickHouse/ClickHouse/issues/7984). [#10821](https://github.com/ClickHouse/ClickHouse/pull/10821) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed `UBSan` and `MSan` report in `DateLUT`. [#10798](https://github.com/ClickHouse/ClickHouse/pull/10798) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect type conversion in key conditions. Fixes [#6287](https://github.com/ClickHouse/ClickHouse/issues/6287). [#10791](https://github.com/ClickHouse/ClickHouse/pull/10791) ([Andrew Onyshchuk](https://github.com/oandrew)).
* Fixed `parallel_view_processing` behavior. Now all insertions into `MATERIALIZED VIEW` should be finished even if an exception happened. Fixes [#10241](https://github.com/ClickHouse/ClickHouse/issues/10241). [#10757](https://github.com/ClickHouse/ClickHouse/pull/10757) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed combinator `-OrNull` and `-OrDefault` when combined with `-State`. [#10741](https://github.com/ClickHouse/ClickHouse/pull/10741) ([hcz](https://github.com/hczhcz)).
* Fixed crash in `generateRandom` with nested types. Fixes [#10583](https://github.com/ClickHouse/ClickHouse/issues/10583). [#10734](https://github.com/ClickHouse/ClickHouse/pull/10734) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed data corruption for `LowCardinality(FixedString)` key column in `SummingMergeTree` which could have happened after merge. Fixes [#10489](https://github.com/ClickHouse/ClickHouse/issues/10489). [#10721](https://github.com/ClickHouse/ClickHouse/pull/10721) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed possible buffer overflow in function `h3EdgeAngle`. [#10711](https://github.com/ClickHouse/ClickHouse/pull/10711) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed disappearing totals. Totals could have been filtered if the query had a join or a subquery with an external `WHERE` condition. Fixes [#10674](https://github.com/ClickHouse/ClickHouse/issues/10674). [#10698](https://github.com/ClickHouse/ClickHouse/pull/10698) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed multiple usages of `IN` operator with the identical set in one query. [#10686](https://github.com/ClickHouse/ClickHouse/pull/10686) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed a bug that caused HTTP requests to get stuck on client close when `readonly=2` and `cancel_http_readonly_queries_on_client_close=1`. Fixes [#7939](https://github.com/ClickHouse/ClickHouse/issues/7939), [#7019](https://github.com/ClickHouse/ClickHouse/issues/7019), [#7736](https://github.com/ClickHouse/ClickHouse/issues/7736), [#7091](https://github.com/ClickHouse/ClickHouse/issues/7091). [#10684](https://github.com/ClickHouse/ClickHouse/pull/10684) ([tavplubix](https://github.com/tavplubix)).
* Fixed order of parameters in `AggregateTransform` constructor. [#10667](https://github.com/ClickHouse/ClickHouse/pull/10667) ([palasonic1](https://github.com/palasonic1)).
* Fixed the lack of parallel execution of remote queries with `distributed_aggregation_memory_efficient` enabled. Fixes [#10655](https://github.com/ClickHouse/ClickHouse/issues/10655). [#10664](https://github.com/ClickHouse/ClickHouse/pull/10664) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed possible incorrect number of rows for queries with `LIMIT`. Fixes [#10566](https://github.com/ClickHouse/ClickHouse/issues/10566), [#10709](https://github.com/ClickHouse/ClickHouse/issues/10709). [#10660](https://github.com/ClickHouse/ClickHouse/pull/10660) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a bug which locked concurrent `ALTER` queries when a table has a lot of parts. [#10659](https://github.com/ClickHouse/ClickHouse/pull/10659) ([alesapin](https://github.com/alesapin)).
* Fixed a bug where the `SYSTEM DROP DNS CACHE` query also dropped the caches used to check whether a user is allowed to connect from certain IP addresses. [#10608](https://github.com/ClickHouse/ClickHouse/pull/10608) ([tavplubix](https://github.com/tavplubix)).
* Fixed incorrect scalar results inside the inner query of a `MATERIALIZED VIEW` in case this query contained a dependent table. [#10603](https://github.com/ClickHouse/ClickHouse/pull/10603) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed `SELECT` of an `ALIAS` column whose default expression type differs from the column type. [#10563](https://github.com/ClickHouse/ClickHouse/pull/10563) ([Azat Khuzhin](https://github.com/azat)).
* Implemented comparison between DateTime64 and String values. [#10560](https://github.com/ClickHouse/ClickHouse/pull/10560) ([Vasily Nemkov](https://github.com/Enmk)).
* Fixed index corruption, which may occur in some cases after merging compact parts into another compact part. [#10531](https://github.com/ClickHouse/ClickHouse/pull/10531) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed the situation when a mutation finished all parts but hung up with `is_done=0`. [#10526](https://github.com/ClickHouse/ClickHouse/pull/10526) ([alesapin](https://github.com/alesapin)).
* Fixed overflow at beginning of unix epoch for timezones with fractional offset from `UTC`. This fixes [#9335](https://github.com/ClickHouse/ClickHouse/issues/9335). [#10513](https://github.com/ClickHouse/ClickHouse/pull/10513) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed improper shutdown of `Distributed` storage. [#10491](https://github.com/ClickHouse/ClickHouse/pull/10491) ([Azat Khuzhin](https://github.com/azat)).
* Fixed numeric overflow in `simpleLinearRegression` over large integers. [#10474](https://github.com/ClickHouse/ClickHouse/pull/10474) ([hcz](https://github.com/hczhcz)).
#### Build/Testing/Packaging Improvement
* Fix UBSan report in LZ4 library. [#10631](https://github.com/ClickHouse/ClickHouse/pull/10631) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix clang-10 build. https://github.com/ClickHouse/ClickHouse/issues/10238. [#10370](https://github.com/ClickHouse/ClickHouse/pull/10370) ([Amos Bird](https://github.com/amosbird)).
* Added failing tests about `max_rows_to_sort` setting. [#10268](https://github.com/ClickHouse/ClickHouse/pull/10268) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added some improvements in printing diagnostic info in input formats. Fixes [#10204](https://github.com/ClickHouse/ClickHouse/issues/10204). [#10418](https://github.com/ClickHouse/ClickHouse/pull/10418) ([tavplubix](https://github.com/tavplubix)).
* Added CA certificates to clickhouse-server docker image. [#10476](https://github.com/ClickHouse/ClickHouse/pull/10476) ([filimonov](https://github.com/filimonov)).
#### Bug Fix
* Added backward compatibility for bloom filter index creation. [#10551](https://github.com/ClickHouse/ClickHouse/issues/10551). [#10569](https://github.com/ClickHouse/ClickHouse/pull/10569) ([Winter Zhang](https://github.com/zhang2014)).
### ClickHouse release v20.3.8.53, 2020-04-23
#### Bug Fix
@ -630,6 +724,35 @@
## ClickHouse release v20.1
### ClickHouse release v20.1.12.86, 2020-05-26
#### Bug Fix
* Fixed incompatibility of two-level aggregation between version 20.1 and earlier versions. This incompatibility happens when different versions of ClickHouse are used on the initiator node and the remote nodes, the size of the `GROUP BY` result is large, and aggregation is performed by a single `String` field. It leads to several unmerged rows for a single key in the result. [#10952](https://github.com/ClickHouse/ClickHouse/pull/10952) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed data corruption for `LowCardinality(FixedString)` key column in `SummingMergeTree` which could have happened after merge. Fixes [#10489](https://github.com/ClickHouse/ClickHouse/issues/10489). [#10721](https://github.com/ClickHouse/ClickHouse/pull/10721) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a bug that caused HTTP requests to get stuck on client close when `readonly=2` and `cancel_http_readonly_queries_on_client_close=1`. Fixes [#7939](https://github.com/ClickHouse/ClickHouse/issues/7939), [#7019](https://github.com/ClickHouse/ClickHouse/issues/7019), [#7736](https://github.com/ClickHouse/ClickHouse/issues/7736), [#7091](https://github.com/ClickHouse/ClickHouse/issues/7091). [#10684](https://github.com/ClickHouse/ClickHouse/pull/10684) ([tavplubix](https://github.com/tavplubix)).
* Fixed a bug where the `SYSTEM DROP DNS CACHE` query also dropped the caches used to check whether a user is allowed to connect from certain IP addresses. [#10608](https://github.com/ClickHouse/ClickHouse/pull/10608) ([tavplubix](https://github.com/tavplubix)).
* Fixed incorrect scalar results inside the inner query of a `MATERIALIZED VIEW` in case this query contained a dependent table. [#10603](https://github.com/ClickHouse/ClickHouse/pull/10603) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the situation when a mutation finished all parts but hung up with `is_done=0`. [#10526](https://github.com/ClickHouse/ClickHouse/pull/10526) ([alesapin](https://github.com/alesapin)).
* Fixed overflow at beginning of unix epoch for timezones with fractional offset from UTC. This fixes [#9335](https://github.com/ClickHouse/ClickHouse/issues/9335). [#10513](https://github.com/ClickHouse/ClickHouse/pull/10513) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed improper shutdown of Distributed storage. [#10491](https://github.com/ClickHouse/ClickHouse/pull/10491) ([Azat Khuzhin](https://github.com/azat)).
* Fixed numeric overflow in `simpleLinearRegression` over large integers. [#10474](https://github.com/ClickHouse/ClickHouse/pull/10474) ([hcz](https://github.com/hczhcz)).
* Fixed removing metadata directory when attach database fails. [#10442](https://github.com/ClickHouse/ClickHouse/pull/10442) ([Winter Zhang](https://github.com/zhang2014)).
* Added a check of number and type of arguments when creating `BloomFilter` index [#9623](https://github.com/ClickHouse/ClickHouse/issues/9623). [#10431](https://github.com/ClickHouse/ClickHouse/pull/10431) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed the issue when a query with `ARRAY JOIN`, `ORDER BY` and `LIMIT` may return incomplete result. This fixes [#10226](https://github.com/ClickHouse/ClickHouse/issues/10226). [#10427](https://github.com/ClickHouse/ClickHouse/pull/10427) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Prefer `fallback_to_stale_replicas` over `skip_unavailable_shards`. [#10422](https://github.com/ClickHouse/ClickHouse/pull/10422) ([Azat Khuzhin](https://github.com/azat)).
* Fixed wrong flattening of `Array(Tuple(...))` data types. This fixes [#10259](https://github.com/ClickHouse/ClickHouse/issues/10259). [#10390](https://github.com/ClickHouse/ClickHouse/pull/10390) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed wrong behavior in `HashTable` that caused compilation error when trying to read HashMap from buffer. [#10386](https://github.com/ClickHouse/ClickHouse/pull/10386) ([palasonic1](https://github.com/palasonic1)).
* Fixed possible `Pipeline stuck` error in `ConcatProcessor` which could have happened in remote query. [#10381](https://github.com/ClickHouse/ClickHouse/pull/10381) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed error `Pipeline stuck` with `max_rows_to_group_by` and `group_by_overflow_mode = 'break'`. [#10279](https://github.com/ClickHouse/ClickHouse/pull/10279) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed several bugs when some data was inserted with quorum and then deleted somehow (`DROP PARTITION`, `TTL`), which led to stuck `INSERT`s or false-positive exceptions in `SELECT`s. This fixes [#9946](https://github.com/ClickHouse/ClickHouse/issues/9946). [#10188](https://github.com/ClickHouse/ClickHouse/pull/10188) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fixed an incompatibility when versions prior to 18.12.17 are used on remote servers and a newer version is used on the initiating server, `GROUP BY` uses both fixed and non-fixed keys, and the two-level group-by method is activated. [#3254](https://github.com/ClickHouse/ClickHouse/pull/3254) ([alexey-milovidov](https://github.com/alexey-milovidov)).
#### Build/Testing/Packaging Improvement
* Added CA certificates to clickhouse-server docker image. [#10476](https://github.com/ClickHouse/ClickHouse/pull/10476) ([filimonov](https://github.com/filimonov)).
### ClickHouse release v20.1.10.70, 2020-04-17
#### Bug Fix

View File

@ -28,7 +28,7 @@ public:
void exception() override { logException(); }
private:
Logger * log = &Logger::get("ServerErrorHandler");
Poco::Logger * log = &Poco::Logger::get("ServerErrorHandler");
void logException()
{

View File

@ -9,13 +9,6 @@
#include <Common/CurrentThread.h>
/// TODO Remove this.
using Poco::Logger;
using Poco::Message;
using DB::LogsLevel;
using DB::CurrentThread;
namespace
{
template <typename... Ts> constexpr size_t numArgs(Ts &&...) { return sizeof...(Ts); }
@ -31,8 +24,8 @@ namespace
#define LOG_IMPL(logger, priority, PRIORITY, ...) do \
{ \
const bool is_clients_log = (CurrentThread::getGroup() != nullptr) && \
(CurrentThread::getGroup()->client_logs_level >= (priority)); \
const bool is_clients_log = (DB::CurrentThread::getGroup() != nullptr) && \
(DB::CurrentThread::getGroup()->client_logs_level >= (priority)); \
if ((logger)->is((PRIORITY)) || is_clients_log) \
{ \
std::string formatted_message = numArgs(__VA_ARGS__) > 1 ? fmt::format(__VA_ARGS__) : firstArg(__VA_ARGS__); \
@ -42,7 +35,7 @@ namespace
file_function += __FILE__; \
file_function += "; "; \
file_function += __PRETTY_FUNCTION__; \
Message poco_message((logger)->name(), formatted_message, \
Poco::Message poco_message((logger)->name(), formatted_message, \
(PRIORITY), file_function.c_str(), __LINE__); \
channel->log(poco_message); \
} \
@ -50,9 +43,18 @@ namespace
} while (false)
#define LOG_TRACE(logger, ...) LOG_IMPL(logger, LogsLevel::trace, Message::PRIO_TRACE, __VA_ARGS__)
#define LOG_DEBUG(logger, ...) LOG_IMPL(logger, LogsLevel::debug, Message::PRIO_DEBUG, __VA_ARGS__)
#define LOG_INFO(logger, ...) LOG_IMPL(logger, LogsLevel::information, Message::PRIO_INFORMATION, __VA_ARGS__)
#define LOG_WARNING(logger, ...) LOG_IMPL(logger, LogsLevel::warning, Message::PRIO_WARNING, __VA_ARGS__)
#define LOG_ERROR(logger, ...) LOG_IMPL(logger, LogsLevel::error, Message::PRIO_ERROR, __VA_ARGS__)
#define LOG_FATAL(logger, ...) LOG_IMPL(logger, LogsLevel::error, Message::PRIO_FATAL, __VA_ARGS__)
#define LOG_TRACE(logger, ...) LOG_IMPL(logger, DB::LogsLevel::trace, Poco::Message::PRIO_TRACE, __VA_ARGS__)
#define LOG_DEBUG(logger, ...) LOG_IMPL(logger, DB::LogsLevel::debug, Poco::Message::PRIO_DEBUG, __VA_ARGS__)
#define LOG_INFO(logger, ...) LOG_IMPL(logger, DB::LogsLevel::information, Poco::Message::PRIO_INFORMATION, __VA_ARGS__)
#define LOG_WARNING(logger, ...) LOG_IMPL(logger, DB::LogsLevel::warning, Poco::Message::PRIO_WARNING, __VA_ARGS__)
#define LOG_ERROR(logger, ...) LOG_IMPL(logger, DB::LogsLevel::error, Poco::Message::PRIO_ERROR, __VA_ARGS__)
#define LOG_FATAL(logger, ...) LOG_IMPL(logger, DB::LogsLevel::error, Poco::Message::PRIO_FATAL, __VA_ARGS__)
/// Compatibility for external projects.
#if defined(ARCADIA_BUILD)
using Poco::Logger;
using Poco::Message;
using DB::LogsLevel;
using DB::CurrentThread;
#endif
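
The hunk above only adds explicit `Poco::` / `DB::` qualification; the call convention of the `LOG_*` macros stays the same: a `Poco::Logger *` followed by an fmt-style format string and its arguments, exactly as in the call sites changed elsewhere in this commit (for example `LOG_WARNING(&Poco::Logger::root(), "Failed to adjust OOM score: '{}'.", e.displayText())`). A minimal, self-contained sketch of that convention, using a stand-in macro and logger rather than the real ClickHouse headers:

```cpp
// Sketch only: a stand-in for the real LOG_IMPL machinery, showing the
// fmt-style argument shape that LOG_DEBUG/LOG_WARNING/... expect.
#include <fmt/format.h>
#include <iostream>
#include <string>

struct DemoLogger                      // stand-in for Poco::Logger
{
    std::string name;
    void log(const std::string & text) const { std::cerr << name << ": " << text << '\n'; }
};

#define DEMO_LOG(logger, ...) (logger)->log(fmt::format(__VA_ARGS__))

int main()
{
    DemoLogger root{"root"};
    int new_score = 500;
    // Same shape as the call sites updated in this commit.
    DEMO_LOG(&root, "Set OOM score adjustment to {}", new_score);
}
```

The real macro additionally checks the logger's priority and the client log level before formatting, as the diff above shows.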

View File

@ -124,7 +124,7 @@ static void signalHandler(int sig, siginfo_t * info, void * context)
const ucontext_t signal_context = *reinterpret_cast<ucontext_t *>(context);
const StackTrace stack_trace(signal_context);
StringRef query_id = CurrentThread::getQueryId(); /// This is signal safe.
StringRef query_id = DB::CurrentThread::getQueryId(); /// This is signal safe.
query_id.size = std::min(query_id.size, max_query_id_size);
DB::writeBinary(sig, out);
@ -162,7 +162,7 @@ public:
};
explicit SignalListener(BaseDaemon & daemon_)
: log(&Logger::get("BaseDaemon"))
: log(&Poco::Logger::get("BaseDaemon"))
, daemon(daemon_)
{
}
@ -231,7 +231,7 @@ public:
}
private:
Logger * log;
Poco::Logger * log;
BaseDaemon & daemon;
void onTerminate(const std::string & message, UInt32 thread_num) const
@ -288,9 +288,9 @@ extern "C" void __sanitizer_set_death_callback(void (*)());
static void sanitizerDeathCallback()
{
Logger * log = &Logger::get("BaseDaemon");
Poco::Logger * log = &Poco::Logger::get("BaseDaemon");
StringRef query_id = CurrentThread::getQueryId(); /// This is signal safe.
StringRef query_id = DB::CurrentThread::getQueryId(); /// This is signal safe.
{
std::stringstream message;
@ -498,10 +498,10 @@ void debugIncreaseOOMScore()
}
catch (const Poco::Exception & e)
{
LOG_WARNING(&Logger::root(), "Failed to adjust OOM score: '{}'.", e.displayText());
LOG_WARNING(&Poco::Logger::root(), "Failed to adjust OOM score: '{}'.", e.displayText());
return;
}
LOG_INFO(&Logger::root(), "Set OOM score adjustment to {}", new_score);
LOG_INFO(&Poco::Logger::root(), "Set OOM score adjustment to {}", new_score);
}
#else
void debugIncreaseOOMScore() {}
@ -715,7 +715,7 @@ void BaseDaemon::initializeTerminationAndSignalProcessing()
void BaseDaemon::logRevision() const
{
Logger::root().information("Starting " + std::string{VERSION_FULL}
Poco::Logger::root().information("Starting " + std::string{VERSION_FULL}
+ " with revision " + std::to_string(ClickHouseRevision::get())
+ ", PID " + std::to_string(getpid()));
}
@ -732,7 +732,7 @@ void BaseDaemon::handleNotification(Poco::TaskFailedNotification *_tfn)
{
task_failed = true;
Poco::AutoPtr<Poco::TaskFailedNotification> fn(_tfn);
Logger *lg = &(logger());
Poco::Logger * lg = &(logger());
LOG_ERROR(lg, "Task '{}' failed. Daemon is shutting down. Reason - {}", fn->task()->name(), fn->reason().displayText());
ServerApplication::terminate();
}

View File

@ -4,6 +4,7 @@
namespace DB
{
void OwnFormattingChannel::logExtended(const ExtendedLogMessage & msg)
{
if (pChannel && priority >= msg.base.getPriority())
@ -28,5 +29,4 @@ void OwnFormattingChannel::log(const Poco::Message & msg)
OwnFormattingChannel::~OwnFormattingChannel() = default;
}

View File

@ -69,7 +69,6 @@ void OwnSplitChannel::logSplit(const Poco::Message & msg)
logs_queue->emplace(std::move(columns));
}
/// Also log to system.text_log table, if message is not too noisy
auto text_log_max_priority_loaded = text_log_max_priority.load(std::memory_order_relaxed);
if (text_log_max_priority_loaded && msg.getPriority() <= text_log_max_priority_loaded)

View File

@ -55,6 +55,7 @@ function(protobuf_generate_cpp_impl SRCS HDRS MODES OUTPUT_FILE_EXTS PLUGIN)
endif()
set (intermediate_dir ${CMAKE_CURRENT_BINARY_DIR}/intermediate)
file (MAKE_DIRECTORY ${intermediate_dir})
set (protoc_args)
foreach (mode ${MODES})
@ -112,7 +113,7 @@ if (PROTOBUF_GENERATE_CPP_SCRIPT_MODE)
set (intermediate_dir ${DIR}/intermediate)
set (intermediate_output "${intermediate_dir}/${FILENAME}")
if (COMPILER_ID STREQUAL "Clang")
if (COMPILER_ID MATCHES "Clang")
set (pragma_push "#pragma clang diagnostic push\n")
set (pragma_pop "#pragma clang diagnostic pop\n")
set (pragma_disable_warnings "#pragma clang diagnostic ignored \"-Weverything\"\n")

contrib/librdkafka vendored

@ -1 +1 @@
Subproject commit 4ffe54b4f59ee5ae3767f9f25dc14651a3384d62
Subproject commit b0d91bd74abb5f0e1ee972d326a317ad610f6300

View File

@ -3,20 +3,30 @@ set(RDKAFKA_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/librdkafka/src)
set(SRCS
${RDKAFKA_SOURCE_DIR}/crc32c.c
${RDKAFKA_SOURCE_DIR}/rdkafka_zstd.c
# ${RDKAFKA_SOURCE_DIR}/lz4.c
# ${RDKAFKA_SOURCE_DIR}/lz4frame.c
# ${RDKAFKA_SOURCE_DIR}/lz4hc.c
# ${RDKAFKA_SOURCE_DIR}/rdxxhash.c
# ${RDKAFKA_SOURCE_DIR}/regexp.c
${RDKAFKA_SOURCE_DIR}/rdaddr.c
${RDKAFKA_SOURCE_DIR}/rdavl.c
${RDKAFKA_SOURCE_DIR}/rdbuf.c
${RDKAFKA_SOURCE_DIR}/rdcrc32.c
${RDKAFKA_SOURCE_DIR}/rddl.c
${RDKAFKA_SOURCE_DIR}/rdfnv1a.c
${RDKAFKA_SOURCE_DIR}/rdhdrhistogram.c
${RDKAFKA_SOURCE_DIR}/rdkafka.c
${RDKAFKA_SOURCE_DIR}/rdkafka_admin.c # looks optional
${RDKAFKA_SOURCE_DIR}/rdkafka_assignor.c
${RDKAFKA_SOURCE_DIR}/rdkafka_aux.c # looks optional
${RDKAFKA_SOURCE_DIR}/rdkafka_background.c
${RDKAFKA_SOURCE_DIR}/rdkafka_broker.c
${RDKAFKA_SOURCE_DIR}/rdkafka_buf.c
${RDKAFKA_SOURCE_DIR}/rdkafka_cert.c
${RDKAFKA_SOURCE_DIR}/rdkafka_cgrp.c
${RDKAFKA_SOURCE_DIR}/rdkafka_conf.c
${RDKAFKA_SOURCE_DIR}/rdkafka_coord.c
${RDKAFKA_SOURCE_DIR}/rdkafka_error.c
${RDKAFKA_SOURCE_DIR}/rdkafka_event.c
${RDKAFKA_SOURCE_DIR}/rdkafka_feature.c
${RDKAFKA_SOURCE_DIR}/rdkafka_idempotence.c
@ -24,6 +34,7 @@ set(SRCS
${RDKAFKA_SOURCE_DIR}/rdkafka_metadata.c
${RDKAFKA_SOURCE_DIR}/rdkafka_metadata_cache.c
${RDKAFKA_SOURCE_DIR}/rdkafka_mock.c
${RDKAFKA_SOURCE_DIR}/rdkafka_mock_cgrp.c
${RDKAFKA_SOURCE_DIR}/rdkafka_mock_handlers.c
${RDKAFKA_SOURCE_DIR}/rdkafka_msg.c
${RDKAFKA_SOURCE_DIR}/rdkafka_msgset_reader.c
@ -38,9 +49,11 @@ set(SRCS
${RDKAFKA_SOURCE_DIR}/rdkafka_request.c
${RDKAFKA_SOURCE_DIR}/rdkafka_roundrobin_assignor.c
${RDKAFKA_SOURCE_DIR}/rdkafka_sasl.c
# ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_cyrus.c # needed to support Kerberos, requires cyrus-sasl
${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_oauthbearer.c
${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_plain.c
${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_scram.c
# ${RDKAFKA_SOURCE_DIR}/rdkafka_sasl_win32.c
${RDKAFKA_SOURCE_DIR}/rdkafka_ssl.c
${RDKAFKA_SOURCE_DIR}/rdkafka_subscription.c
${RDKAFKA_SOURCE_DIR}/rdkafka_timer.c
@ -48,6 +61,7 @@ set(SRCS
${RDKAFKA_SOURCE_DIR}/rdkafka_transport.c
${RDKAFKA_SOURCE_DIR}/rdkafka_interceptor.c
${RDKAFKA_SOURCE_DIR}/rdkafka_header.c
${RDKAFKA_SOURCE_DIR}/rdkafka_txnmgr.c
${RDKAFKA_SOURCE_DIR}/rdlist.c
${RDKAFKA_SOURCE_DIR}/rdlog.c
${RDKAFKA_SOURCE_DIR}/rdmurmur2.c

View File

@ -80,7 +80,9 @@ RUN apt-get --allow-unauthenticated update -y \
pigz \
moreutils \
libcctz-dev \
libldap2-dev
libldap2-dev \
libsasl2-dev \
heimdal-multidev

View File

@ -24,16 +24,14 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
ln -s /usr/share/clickhouse-test/config/listen.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/part_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/text_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/metric_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/log_queries.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/readonly.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/; \
ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/; \
ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/; \
ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/lib/llvm-9/bin/llvm-symbolizer /usr/bin/llvm-symbolizer; \
if [ -n $USE_DATABASE_ATOMIC ] && [ $USE_DATABASE_ATOMIC -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/database_atomic_configd.xml /etc/clickhouse-server/config.d/; fi; \
if [ -n $USE_DATABASE_ATOMIC ] && [ $USE_DATABASE_ATOMIC -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/; fi; \
if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/database_atomic_configd.xml /etc/clickhouse-server/config.d/; fi; \
if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/; fi; \
echo "TSAN_OPTIONS='verbosity=1000 halt_on_error=1 history_size=7'" >> /etc/environment; \
echo "TSAN_SYMBOLIZER_PATH=/usr/lib/llvm-8/bin/llvm-symbolizer" >> /etc/environment; \
echo "UBSAN_OPTIONS='print_stacktrace=1'" >> /etc/environment; \

View File

@ -59,9 +59,7 @@ ln -s /usr/share/clickhouse-test/config/zookeeper.xml /etc/clickhouse-server/con
ln -s /usr/share/clickhouse-test/config/listen.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/part_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/text_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/metric_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/query_masking_rules.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/log_queries.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/readonly.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/; \
ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/; \

View File

@ -62,9 +62,7 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
ln -s /usr/share/clickhouse-test/config/listen.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/part_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/text_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/metric_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/query_masking_rules.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/log_queries.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/readonly.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/; \
@ -78,9 +76,9 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
ln -s /usr/share/clickhouse-test/config/server.key /etc/clickhouse-server/; \
ln -s /usr/share/clickhouse-test/config/server.crt /etc/clickhouse-server/; \
ln -s /usr/share/clickhouse-test/config/dhparam.pem /etc/clickhouse-server/; \
if [ -n $USE_POLYMORPHIC_PARTS ] && [ $USE_POLYMORPHIC_PARTS -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/polymorphic_parts.xml /etc/clickhouse-server/config.d/; fi; \
if [ -n $USE_DATABASE_ATOMIC ] && [ $USE_DATABASE_ATOMIC -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/database_atomic_configd.xml /etc/clickhouse-server/config.d/; fi; \
if [ -n $USE_DATABASE_ATOMIC ] && [ $USE_DATABASE_ATOMIC -eq 1 ]; then ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/; fi; \
if [[ -n "$USE_POLYMORPHIC_PARTS" ]] && [[ "$USE_POLYMORPHIC_PARTS" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/polymorphic_parts.xml /etc/clickhouse-server/config.d/; fi; \
if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/database_atomic_configd.xml /etc/clickhouse-server/config.d/; fi; \
if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/; fi; \
ln -sf /usr/share/clickhouse-test/config/client_config.xml /etc/clickhouse-client/config.xml; \
service zookeeper start; sleep 5; \
service clickhouse-server start && sleep 5 && clickhouse-test --testname --shard --zookeeper $ADDITIONAL_OPTIONS $SKIP_TESTS_OPTION 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt

View File

@ -50,9 +50,7 @@ ln -s /usr/share/clickhouse-test/config/zookeeper.xml /etc/clickhouse-server/con
ln -s /usr/share/clickhouse-test/config/listen.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/part_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/text_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/metric_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/query_masking_rules.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/share/clickhouse-test/config/log_queries.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/readonly.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/; \

View File

@ -31,7 +31,6 @@ CMD dpkg -i package_folder/clickhouse-common-static_*.deb; \
dpkg -i package_folder/clickhouse-server_*.deb; \
dpkg -i package_folder/clickhouse-client_*.deb; \
dpkg -i package_folder/clickhouse-test_*.deb; \
ln -s /usr/share/clickhouse-test/config/log_queries.xml /etc/clickhouse-server/users.d/; \
ln -s /usr/share/clickhouse-test/config/part_log.xml /etc/clickhouse-server/config.d/; \
ln -s /usr/lib/llvm-9/bin/llvm-symbolizer /usr/bin/llvm-symbolizer; \
echo "TSAN_OPTIONS='halt_on_error=1 history_size=7 ignore_noninstrumented_modules=1 verbosity=1'" >> /etc/environment; \

View File

@ -1175,7 +1175,7 @@ private:
/// Poll for changes after a cancellation check, otherwise it never reached
/// because of progress updates from server.
if (connection->poll(poll_interval))
break;
break;
}
if (!receiveAndProcessPacket())
@ -1583,6 +1583,11 @@ private:
if (std::string::npos != embedded_stack_trace_pos && !config().getBool("stacktrace", false))
text.resize(embedded_stack_trace_pos);
/// If we probably have progress bar, we should add additional newline,
/// otherwise exception may display concatenated with the progress bar.
if (need_render_progress)
std::cerr << '\n';
std::cerr << "Received exception from server (version " << server_version << "):" << std::endl
<< "Code: " << e.code() << ". " << text << std::endl;
}

View File

@ -4,6 +4,8 @@
#include <Common/ZooKeeper/ZooKeeper.h>
#include <Common/ZooKeeper/KeeperException.h>
#include <Common/setThreadName.h>
namespace DB
{
@ -177,7 +179,11 @@ void ClusterCopier::discoverTablePartitions(const ConnectionTimeouts & timeouts,
ThreadPool thread_pool(num_threads ? num_threads : 2 * getNumberOfPhysicalCPUCores());
for (const TaskShardPtr & task_shard : task_table.all_shards)
thread_pool.scheduleOrThrowOnError([this, timeouts, task_shard]() { discoverShardPartitions(timeouts, task_shard); });
thread_pool.scheduleOrThrowOnError([this, timeouts, task_shard]()
{
setThreadName("DiscoverPartns");
discoverShardPartitions(timeouts, task_shard);
});
LOG_DEBUG(log, "Waiting for {} setup jobs", thread_pool.active());
thread_pool.wait();
@ -609,8 +615,7 @@ TaskStatus ClusterCopier::tryMoveAllPiecesToDestinationTable(const TaskTable & t
size_t num_nodes = executeQueryOnCluster(
task_table.cluster_push,
query_alter_ast_string,
nullptr,
&settings_push,
settings_push,
PoolMode::GET_MANY,
ClusterExecutionMode::ON_EACH_NODE);
@ -638,8 +643,7 @@ TaskStatus ClusterCopier::tryMoveAllPiecesToDestinationTable(const TaskTable & t
UInt64 num_nodes = executeQueryOnCluster(
task_table.cluster_push,
query_deduplicate_ast_string,
nullptr,
&task_cluster->settings_push,
task_cluster->settings_push,
PoolMode::GET_MANY);
LOG_INFO(log, "Number of shard that executed OPTIMIZE DEDUPLICATE query successfully : {}", toString(num_nodes));
@ -818,8 +822,7 @@ bool ClusterCopier::tryDropPartitionPiece(
/// We have to drop partition_piece on each replica
size_t num_shards = executeQueryOnCluster(
cluster_push, query,
nullptr,
&settings_push,
settings_push,
PoolMode::GET_MANY,
ClusterExecutionMode::ON_EACH_NODE);
@ -1293,7 +1296,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
local_context.setSettings(task_cluster->settings_pull);
local_context.setSetting("skip_unavailable_shards", true);
Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_select_ast, local_context)->execute().in);
Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_select_ast, local_context)->execute().getInputStream());
count = (block) ? block.safeGetByPosition(0).column->getUInt(0) : 0;
}
@ -1356,9 +1359,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
String query = queryToString(create_query_push_ast);
LOG_DEBUG(log, "Create destination tables. Query: {}", query);
UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query,
create_query_push_ast, &task_cluster->settings_push,
PoolMode::GET_MANY);
UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query, task_cluster->settings_push, PoolMode::GET_MANY);
LOG_DEBUG(log, "Destination tables {} have been created on {} shards of {}", getQuotedTable(task_table.table_push), shards, task_table.cluster_push->getShardCount());
}
@ -1403,7 +1404,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
BlockIO io_select = InterpreterFactory::get(query_select_ast, context_select)->execute();
BlockIO io_insert = InterpreterFactory::get(query_insert_ast, context_insert)->execute();
input = io_select.in;
input = io_select.getInputStream();
output = io_insert.out;
}
@ -1479,9 +1480,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
String query = queryToString(create_query_push_ast);
LOG_DEBUG(log, "Create destination tables. Query: {}", query);
UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query,
create_query_push_ast, &task_cluster->settings_push,
PoolMode::GET_MANY);
UInt64 shards = executeQueryOnCluster(task_table.cluster_push, query, task_cluster->settings_push, PoolMode::GET_MANY);
LOG_DEBUG(log, "Destination tables {} have been created on {} shards of {}", getQuotedTable(task_table.table_push), shards, task_table.cluster_push->getShardCount());
}
catch (...)
@ -1548,8 +1547,7 @@ void ClusterCopier::dropHelpingTables(const TaskTable & task_table)
/// We have to drop partition_piece on each replica
UInt64 num_nodes = executeQueryOnCluster(
cluster_push, query,
nullptr,
&settings_push,
settings_push,
PoolMode::GET_MANY,
ClusterExecutionMode::ON_EACH_NODE);
@ -1575,8 +1573,7 @@ void ClusterCopier::dropParticularPartitionPieceFromAllHelpingTables(const TaskT
/// We have to drop partition_piece on each replica
UInt64 num_nodes = executeQueryOnCluster(
cluster_push, query,
nullptr,
&settings_push,
settings_push,
PoolMode::GET_MANY,
ClusterExecutionMode::ON_EACH_NODE);
@ -1690,7 +1687,7 @@ std::set<String> ClusterCopier::getShardPartitions(const ConnectionTimeouts & ti
Context local_context = context;
local_context.setSettings(task_cluster->settings_pull);
Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_ast, local_context)->execute().in);
Block block = getBlockWithAllStreamData(InterpreterFactory::get(query_ast, local_context)->execute().getInputStream());
std::set<String> res;
if (block)
@ -1735,7 +1732,7 @@ const auto & settings = context.getSettingsRef();
Context local_context = context;
local_context.setSettings(task_cluster->settings_pull);
return InterpreterFactory::get(query_ast, local_context)->execute().in->read().rows() != 0;
return InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()->read().rows() != 0;
}
bool ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTimeouts & timeouts,
@ -1774,7 +1771,7 @@ bool ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTi
Context local_context = context;
local_context.setSettings(task_cluster->settings_pull);
auto result = InterpreterFactory::get(query_ast, local_context)->execute().in->read().rows();
auto result = InterpreterFactory::get(query_ast, local_context)->execute().getInputStream()->read().rows();
if (result != 0)
LOG_DEBUG(log, "Partition {} piece number {} is PRESENT on shard {}", partition_quoted_name, std::to_string(current_piece_number), task_shard.getDescription());
else
@ -1788,25 +1785,16 @@ bool ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTi
UInt64 ClusterCopier::executeQueryOnCluster(
const ClusterPtr & cluster,
const String & query,
const ASTPtr & query_ast_,
const Settings * settings,
const Settings & current_settings,
PoolMode pool_mode,
ClusterExecutionMode execution_mode,
UInt64 max_successful_executions_per_shard) const
{
Settings current_settings = settings ? *settings : task_cluster->settings_common;
auto num_shards = cluster->getShardsInfo().size();
std::vector<UInt64> per_shard_num_successful_replicas(num_shards, 0);
ASTPtr query_ast;
if (query_ast_ == nullptr)
{
ParserQuery p_query(query.data() + query.size());
query_ast = parseQuery(p_query, query, current_settings.max_query_size, current_settings.max_parser_depth);
}
else
query_ast = query_ast_;
ParserQuery p_query(query.data() + query.size());
ASTPtr query_ast = parseQuery(p_query, query, current_settings.max_query_size, current_settings.max_parser_depth);
/// We will have to execute query on each replica of a shard.
if (execution_mode == ClusterExecutionMode::ON_EACH_NODE)
@ -1815,8 +1803,10 @@ UInt64 ClusterCopier::executeQueryOnCluster(
std::atomic<size_t> origin_replicas_number;
/// We need to execute query on one replica at least
auto do_for_shard = [&] (UInt64 shard_index)
auto do_for_shard = [&] (UInt64 shard_index, Settings shard_settings)
{
setThreadName("QueryForShard");
const Cluster::ShardInfo & shard = cluster->getShardsInfo().at(shard_index);
UInt64 & num_successful_executions = per_shard_num_successful_replicas.at(shard_index);
num_successful_executions = 0;
@ -1846,10 +1836,10 @@ UInt64 ClusterCopier::executeQueryOnCluster(
/// Will try to make as many as possible queries
if (shard.hasRemoteConnections())
{
current_settings.max_parallel_replicas = num_remote_replicas ? num_remote_replicas : 1;
shard_settings.max_parallel_replicas = num_remote_replicas ? num_remote_replicas : 1;
auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(current_settings).getSaturated(current_settings.max_execution_time);
auto connections = shard.pool->getMany(timeouts, &current_settings, pool_mode);
auto timeouts = ConnectionTimeouts::getTCPTimeoutsWithFailover(shard_settings).getSaturated(shard_settings.max_execution_time);
auto connections = shard.pool->getMany(timeouts, &shard_settings, pool_mode);
for (auto & connection : connections)
{
@ -1859,7 +1849,7 @@ UInt64 ClusterCopier::executeQueryOnCluster(
try
{
/// CREATE TABLE and DROP PARTITION queries return empty block
RemoteBlockInputStream stream{*connection, query, Block{}, context, &current_settings};
RemoteBlockInputStream stream{*connection, query, Block{}, context, &shard_settings};
NullBlockOutputStream output{Block{}};
copyData(stream, output);
@ -1878,7 +1868,7 @@ UInt64 ClusterCopier::executeQueryOnCluster(
ThreadPool thread_pool(std::min<UInt64>(num_shards, getNumberOfPhysicalCPUCores()));
for (UInt64 shard_index = 0; shard_index < num_shards; ++shard_index)
thread_pool.scheduleOrThrowOnError([=] { do_for_shard(shard_index); });
thread_pool.scheduleOrThrowOnError([=, shard_settings = current_settings] { do_for_shard(shard_index, std::move(shard_settings)); });
thread_pool.wait();
}
@ -1898,7 +1888,7 @@ UInt64 ClusterCopier::executeQueryOnCluster(
LOG_INFO(log, "There was an error while executing ALTER on each node. Query was executed on {} nodes. But had to be executed on {}", toString(successful_nodes), toString(origin_replicas_number.load()));
}
return successful_nodes;
}
}
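
Besides dropping the optional `query_ast_`/`Settings *` parameters, the hunks above change `do_for_shard` so that each thread-pool task works on its own `Settings` copy (and names its thread) instead of mutating the shared `current_settings` from several threads. A rough illustration of that per-task-copy pattern, using `std::thread` instead of ClickHouse's `ThreadPool`; the names here are illustrative, not part of the codebase:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

struct Settings { std::size_t max_parallel_replicas = 1; };

void runOnEachShard(const Settings & common_settings, std::size_t num_shards)
{
    std::vector<std::thread> workers;
    for (std::size_t shard_index = 0; shard_index < num_shards; ++shard_index)
    {
        // Capture a copy by value -- mirrors `shard_settings = current_settings`
        // in the scheduleOrThrowOnError() lambda above, so per-shard tweaks
        // never touch the shared object.
        workers.emplace_back([shard_index, shard_settings = common_settings]() mutable
        {
            shard_settings.max_parallel_replicas = shard_index + 1;  // per-shard adjustment
            // ... run the query for this shard with shard_settings ...
        });
    }
    for (auto & worker : workers)
        worker.join();
}
```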

View File

@ -15,7 +15,6 @@ namespace DB
class ClusterCopier
{
public:
ClusterCopier(const String & task_path_,
const String & host_id_,
const String & proxy_database_name_,
@ -187,8 +186,7 @@ protected:
UInt64 executeQueryOnCluster(
const ClusterPtr & cluster,
const String & query,
const ASTPtr & query_ast_ = nullptr,
const Settings * settings = nullptr,
const Settings & current_settings,
PoolMode pool_mode = PoolMode::GET_ALL,
ClusterExecutionMode execution_mode = ClusterExecutionMode::ON_EACH_SHARD,
UInt64 max_successful_executions_per_shard = 0) const;

View File

@ -114,7 +114,7 @@ void ClusterCopierApp::mainImpl()
registerDisks();
static const std::string default_database = "_local";
DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database));
DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database, *context));
context->setCurrentDatabase(default_database);
/// Initialize query scope just in case.

View File

@ -118,13 +118,13 @@ void LocalServer::tryInitPath()
}
static void attachSystemTables()
static void attachSystemTables(const Context & context)
{
DatabasePtr system_database = DatabaseCatalog::instance().tryGetDatabase(DatabaseCatalog::SYSTEM_DATABASE);
if (!system_database)
{
/// TODO: add attachTableDelayed into DatabaseMemory to speedup loading
system_database = std::make_shared<DatabaseMemory>(DatabaseCatalog::SYSTEM_DATABASE);
system_database = std::make_shared<DatabaseMemory>(DatabaseCatalog::SYSTEM_DATABASE, context);
DatabaseCatalog::instance().attachDatabase(DatabaseCatalog::SYSTEM_DATABASE, system_database);
}
@ -135,7 +135,7 @@ static void attachSystemTables()
int LocalServer::main(const std::vector<std::string> & /*args*/)
try
{
Logger * log = &logger();
Poco::Logger * log = &logger();
ThreadStatus thread_status;
UseSSL use_ssl;
@ -202,7 +202,7 @@ try
* if such tables will not be dropped, clickhouse-server will not be able to load them due to security reasons.
*/
std::string default_database = config().getString("default_database", "_local");
DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database));
DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database, *context));
context->setCurrentDatabase(default_database);
applyCmdOptions();
@ -213,14 +213,14 @@ try
LOG_DEBUG(log, "Loading metadata from {}", context->getPath());
loadMetadataSystem(*context);
attachSystemTables();
attachSystemTables(*context);
loadMetadata(*context);
DatabaseCatalog::instance().loadDatabases();
LOG_DEBUG(log, "Loaded metadata.");
}
else
{
attachSystemTables();
attachSystemTables(*context);
}
processQueries();

View File

@ -25,7 +25,7 @@ ODBCBlockInputStream::ODBCBlockInputStream(
, result{statement}
, iterator{result.begin()}
, max_block_size{max_block_size_}
, log(&Logger::get("ODBCBlockInputStream"))
, log(&Poco::Logger::get("ODBCBlockInputStream"))
{
if (sample_block.columns() != result.columnCount())
throw Exception{"RecordSet contains " + toString(result.columnCount()) + " columns while " + toString(sample_block.columns())

View File

@ -94,7 +94,7 @@ ODBCBlockOutputStream::ODBCBlockOutputStream(Poco::Data::Session && session_,
, table_name(remote_table_name_)
, sample_block(sample_block_)
, quoting(quoting_)
, log(&Logger::get("ODBCBlockOutputStream"))
, log(&Poco::Logger::get("ODBCBlockOutputStream"))
{
description.init(sample_block);
}

View File

@ -89,7 +89,7 @@ namespace CurrentMetrics
namespace
{
void setupTmpPath(Logger * log, const std::string & path)
void setupTmpPath(Poco::Logger * log, const std::string & path)
{
LOG_DEBUG(log, "Setting up {} to store temporary data in it", path);
@ -212,7 +212,7 @@ void Server::defineOptions(Poco::Util::OptionSet & options)
int Server::main(const std::vector<std::string> & /*args*/)
{
Logger * log = &logger();
Poco::Logger * log = &logger();
UseSSL use_ssl;
ThreadStatus thread_status;
@ -236,6 +236,14 @@ int Server::main(const std::vector<std::string> & /*args*/)
if (ThreadFuzzer::instance().isEffective())
LOG_WARNING(log, "ThreadFuzzer is enabled. Application will run slowly and unstable.");
#if !defined(NDEBUG) || !defined(__OPTIMIZE__)
LOG_WARNING(log, "Server was built in debug mode. It will work slowly.");
#endif
#if defined(ADDRESS_SANITIZER) || defined(THREAD_SANITIZER) || defined(MEMORY_SANITIZER)
LOG_WARNING(log, "Server was built with sanitizer. It will work slowly.");
#endif
/** Context contains all that query execution is dependent:
* settings, available functions, data types, aggregate functions, databases...
*/

View File

@ -309,7 +309,7 @@ bool AllowedClientHosts::contains(const IPAddress & client_address) const
throw;
/// Try to ignore DNS errors: if host cannot be resolved, skip it and try next.
LOG_WARNING(
&Logger::get("AddressPatterns"),
&Poco::Logger::get("AddressPatterns"),
"Failed to check if the allowed client hosts contain address {}. {}, code = {}",
client_address.toString(), e.displayText(), e.code());
return false;
@ -342,7 +342,7 @@ bool AllowedClientHosts::contains(const IPAddress & client_address) const
throw;
/// Try to ignore DNS errors: if host cannot be resolved, skip it and try next.
LOG_WARNING(
&Logger::get("AddressPatterns"),
&Poco::Logger::get("AddressPatterns"),
"Failed to check if the allowed client hosts contain address {}. {}, code = {}",
client_address.toString(), e.displayText(), e.code());
return false;

View File

@ -74,6 +74,7 @@ if(USE_RDKAFKA)
endif()
if (USE_AWS_S3)
add_headers_and_sources(dbms Common/S3)
add_headers_and_sources(dbms Disks/S3)
endif()

View File

@ -508,18 +508,18 @@ void Connection::sendScalarsData(Scalars & data)
"Sent data for {} scalars, total {} rows in {} sec., {} rows/sec., {} ({}/sec.), compressed {} times to {} ({}/sec.)",
data.size(), rows, elapsed,
static_cast<size_t>(rows / watch.elapsedSeconds()),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes / watch.elapsedSeconds()),
ReadableSize(maybe_compressed_out_bytes),
ReadableSize(maybe_compressed_out_bytes / watch.elapsedSeconds()),
static_cast<double>(maybe_compressed_out_bytes) / out_bytes,
formatReadableSizeWithBinarySuffix(out_bytes),
formatReadableSizeWithBinarySuffix(out_bytes / watch.elapsedSeconds()));
ReadableSize(out_bytes),
ReadableSize(out_bytes / watch.elapsedSeconds()));
else
LOG_DEBUG(log_wrapper.get(),
"Sent data for {} scalars, total {} rows in {} sec., {} rows/sec., {} ({}/sec.), no compression.",
data.size(), rows, elapsed,
static_cast<size_t>(rows / watch.elapsedSeconds()),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes / watch.elapsedSeconds()));
ReadableSize(maybe_compressed_out_bytes),
ReadableSize(maybe_compressed_out_bytes / watch.elapsedSeconds()));
}
namespace
@ -612,18 +612,18 @@ void Connection::sendExternalTablesData(ExternalTablesData & data)
"Sent data for {} external tables, total {} rows in {} sec., {} rows/sec., {} ({}/sec.), compressed {} times to {} ({}/sec.)",
data.size(), rows, elapsed,
static_cast<size_t>(rows / watch.elapsedSeconds()),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes / watch.elapsedSeconds()),
ReadableSize(maybe_compressed_out_bytes),
ReadableSize(maybe_compressed_out_bytes / watch.elapsedSeconds()),
static_cast<double>(maybe_compressed_out_bytes) / out_bytes,
formatReadableSizeWithBinarySuffix(out_bytes),
formatReadableSizeWithBinarySuffix(out_bytes / watch.elapsedSeconds()));
ReadableSize(out_bytes),
ReadableSize(out_bytes / watch.elapsedSeconds()));
else
LOG_DEBUG(log_wrapper.get(),
"Sent data for {} external tables, total {} rows in {} sec., {} rows/sec., {} ({}/sec.), no compression.",
data.size(), rows, elapsed,
static_cast<size_t>(rows / watch.elapsedSeconds()),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes),
formatReadableSizeWithBinarySuffix(maybe_compressed_out_bytes / watch.elapsedSeconds()));
ReadableSize(maybe_compressed_out_bytes),
ReadableSize(maybe_compressed_out_bytes / watch.elapsedSeconds()));
}
std::optional<Poco::Net::SocketAddress> Connection::getResolvedAddress() const

View File

@ -249,16 +249,16 @@ private:
{
}
Logger * get()
Poco::Logger * get()
{
if (!log)
log = &Logger::get("Connection (" + parent.getDescription() + ")");
log = &Poco::Logger::get("Connection (" + parent.getDescription() + ")");
return log;
}
private:
std::atomic<Logger *> log;
std::atomic<Poco::Logger *> log;
Connection & parent;
};

View File

@ -56,7 +56,7 @@ public:
Protocol::Compression compression_ = Protocol::Compression::Enable,
Protocol::Secure secure_ = Protocol::Secure::Disable)
: Base(max_connections_,
&Logger::get("ConnectionPool (" + host_ + ":" + toString(port_) + ")")),
&Poco::Logger::get("ConnectionPool (" + host_ + ":" + toString(port_) + ")")),
host(host_),
port(port_),
default_database(default_database_),

View File

@ -35,7 +35,7 @@ ConnectionPoolWithFailover::ConnectionPoolWithFailover(
LoadBalancing load_balancing,
time_t decrease_error_period_,
size_t max_error_cap_)
: Base(std::move(nested_pools_), decrease_error_period_, max_error_cap_, &Logger::get("ConnectionPoolWithFailover"))
: Base(std::move(nested_pools_), decrease_error_period_, max_error_cap_, &Poco::Logger::get("ConnectionPoolWithFailover"))
, default_load_balancing(load_balancing)
{
const std::string & local_hostname = getFQDNOrHostName();

View File

@ -35,7 +35,7 @@ TimeoutSetter::~TimeoutSetter()
catch (std::exception & e)
{
// Sometimes caught on macOS
LOG_ERROR(&Logger::get("Client"), "TimeoutSetter: Can't reset timeouts: {}", e.what());
LOG_ERROR(&Poco::Logger::get("Client"), "TimeoutSetter: Can't reset timeouts: {}", e.what());
}
}
}

View File

@ -339,6 +339,17 @@ void ColumnDecimal<T>::getExtremes(Field & min, Field & max) const
max = NearestFieldType<T>(cur_max, scale);
}
TypeIndex columnDecimalDataType(const IColumn * column)
{
if (checkColumn<ColumnDecimal<Decimal32>>(column))
return TypeIndex::Decimal32;
else if (checkColumn<ColumnDecimal<Decimal64>>(column))
return TypeIndex::Decimal64;
else if (checkColumn<ColumnDecimal<Decimal128>>(column))
return TypeIndex::Decimal128;
return TypeIndex::Nothing;
}
template class ColumnDecimal<Decimal32>;
template class ColumnDecimal<Decimal64>;
template class ColumnDecimal<Decimal128>;

View File

@ -198,4 +198,6 @@ ColumnPtr ColumnDecimal<T>::indexImpl(const PaddedPODArray<Type> & indexes, size
return res;
}
TypeIndex columnDecimalDataType(const IColumn * column);
}

View File

@ -517,6 +517,33 @@ void ColumnVector<T>::getExtremes(Field & min, Field & max) const
max = NearestFieldType<T>(cur_max);
}
TypeIndex columnVectorDataType(const IColumn * column)
{
if (checkColumn<ColumnVector<UInt8>>(column))
return TypeIndex::UInt8;
else if (checkColumn<ColumnVector<UInt16>>(column))
return TypeIndex::UInt16;
else if (checkColumn<ColumnVector<UInt32>>(column))
return TypeIndex::UInt32;
else if (checkColumn<ColumnVector<UInt64>>(column))
return TypeIndex::UInt64;
else if (checkColumn<ColumnVector<Int8>>(column))
return TypeIndex::Int8;
else if (checkColumn<ColumnVector<Int16>>(column))
return TypeIndex::Int16;
else if (checkColumn<ColumnVector<Int32>>(column))
return TypeIndex::Int32;
else if (checkColumn<ColumnVector<Int64>>(column))
return TypeIndex::Int64;
else if (checkColumn<ColumnVector<Int128>>(column))
return TypeIndex::Int128;
else if (checkColumn<ColumnVector<Float32>>(column))
return TypeIndex::Float32;
else if (checkColumn<ColumnVector<Float64>>(column))
return TypeIndex::Float64;
return TypeIndex::Nothing;
}
/// Explicit template instantiations - to avoid code bloat in headers.
template class ColumnVector<UInt8>;
template class ColumnVector<UInt16>;

View File

@ -325,4 +325,6 @@ ColumnPtr ColumnVector<T>::indexImpl(const PaddedPODArray<Type> & indexes, size_
return res;
}
TypeIndex columnVectorDataType(const IColumn * column);
}

View File

@ -18,8 +18,8 @@ void AlignedBuffer::alloc(size_t size, size_t alignment)
void * new_buf;
int res = ::posix_memalign(&new_buf, std::max(alignment, sizeof(void*)), size);
if (0 != res)
throwFromErrno("Cannot allocate memory (posix_memalign), size: "
+ formatReadableSizeWithBinarySuffix(size) + ", alignment: " + formatReadableSizeWithBinarySuffix(alignment) + ".",
throwFromErrno(fmt::format("Cannot allocate memory (posix_memalign), size: {}, alignment: {}.",
ReadableSize(size), ReadableSize(alignment)),
ErrorCodes::CANNOT_ALLOCATE_MEMORY, res);
buf = new_buf;
}

View File

@ -129,7 +129,7 @@ public:
void * new_buf = ::realloc(buf, new_size);
if (nullptr == new_buf)
DB::throwFromErrno("Allocator: Cannot realloc from " + formatReadableSizeWithBinarySuffix(old_size) + " to " + formatReadableSizeWithBinarySuffix(new_size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
DB::throwFromErrno(fmt::format("Allocator: Cannot realloc from {} to {}.", ReadableSize(old_size), ReadableSize(new_size)), DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
buf = new_buf;
if constexpr (clear_memory)
@ -145,7 +145,8 @@ public:
buf = clickhouse_mremap(buf, old_size, new_size, MREMAP_MAYMOVE,
PROT_READ | PROT_WRITE, mmap_flags, -1, 0);
if (MAP_FAILED == buf)
DB::throwFromErrno("Allocator: Cannot mremap memory chunk from " + formatReadableSizeWithBinarySuffix(old_size) + " to " + formatReadableSizeWithBinarySuffix(new_size) + ".", DB::ErrorCodes::CANNOT_MREMAP);
DB::throwFromErrno(fmt::format("Allocator: Cannot mremap memory chunk from {} to {}.",
ReadableSize(old_size), ReadableSize(new_size)), DB::ErrorCodes::CANNOT_MREMAP);
/// No need for zero-fill, because mmap guarantees it.
}
@ -201,13 +202,13 @@ private:
if (size >= MMAP_THRESHOLD)
{
if (alignment > MMAP_MIN_ALIGNMENT)
throw DB::Exception("Too large alignment " + formatReadableSizeWithBinarySuffix(alignment) + ": more than page size when allocating "
+ formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::BAD_ARGUMENTS);
throw DB::Exception(fmt::format("Too large alignment {}: more than page size when allocating {}.",
ReadableSize(alignment), ReadableSize(size)), DB::ErrorCodes::BAD_ARGUMENTS);
buf = mmap(getMmapHint(), size, PROT_READ | PROT_WRITE,
mmap_flags, -1, 0);
if (MAP_FAILED == buf)
DB::throwFromErrno("Allocator: Cannot mmap " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
DB::throwFromErrno(fmt::format("Allocator: Cannot mmap {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
/// No need for zero-fill, because mmap guarantees it.
}
@ -221,7 +222,7 @@ private:
buf = ::malloc(size);
if (nullptr == buf)
DB::throwFromErrno("Allocator: Cannot malloc " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
DB::throwFromErrno(fmt::format("Allocator: Cannot malloc {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
}
else
{
@ -229,7 +230,8 @@ private:
int res = posix_memalign(&buf, alignment, size);
if (0 != res)
DB::throwFromErrno("Cannot allocate memory (posix_memalign) " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY, res);
DB::throwFromErrno(fmt::format("Cannot allocate memory (posix_memalign) {}.", ReadableSize(size)),
DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY, res);
if constexpr (clear_memory)
memset(buf, 0, size);
@ -243,7 +245,7 @@ private:
if (size >= MMAP_THRESHOLD)
{
if (0 != munmap(buf, size))
DB::throwFromErrno("Allocator: Cannot munmap " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_MUNMAP);
DB::throwFromErrno(fmt::format("Allocator: Cannot munmap {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_MUNMAP);
}
else
{

View File

@ -177,13 +177,13 @@ private:
{
ptr = mmap(address_hint, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (MAP_FAILED == ptr)
DB::throwFromErrno("Allocator: Cannot mmap " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
DB::throwFromErrno(fmt::format("Allocator: Cannot mmap {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_ALLOCATE_MEMORY);
}
~Chunk()
{
if (ptr && 0 != munmap(ptr, size))
DB::throwFromErrno("Allocator: Cannot munmap " + formatReadableSizeWithBinarySuffix(size) + ".", DB::ErrorCodes::CANNOT_MUNMAP);
DB::throwFromErrno(fmt::format("Allocator: Cannot munmap {}.", ReadableSize(size)), DB::ErrorCodes::CANNOT_MUNMAP);
}
Chunk(Chunk && other) : ptr(other.ptr), size(other.size)

View File

@ -278,7 +278,7 @@ private:
void * new_data = nullptr;
int res = posix_memalign(&new_data, alignment, prefix_size + new_size * sizeof(T));
if (0 != res)
throwFromErrno("Cannot allocate memory (posix_memalign) " + formatReadableSizeWithBinarySuffix(new_size) + ".",
throwFromErrno(fmt::format("Cannot allocate memory (posix_memalign) {}.", ReadableSize(new_size)),
ErrorCodes::CANNOT_ALLOCATE_MEMORY, res);
data_ptr = static_cast<char *>(new_data);

View File

@ -66,21 +66,21 @@ ConfigProcessor::ConfigProcessor(
, name_pool(new Poco::XML::NamePool(65521))
, dom_parser(name_pool)
{
if (log_to_console && !Logger::has("ConfigProcessor"))
if (log_to_console && !Poco::Logger::has("ConfigProcessor"))
{
channel_ptr = new Poco::ConsoleChannel;
log = &Logger::create("ConfigProcessor", channel_ptr.get(), Poco::Message::PRIO_TRACE);
log = &Poco::Logger::create("ConfigProcessor", channel_ptr.get(), Poco::Message::PRIO_TRACE);
}
else
{
log = &Logger::get("ConfigProcessor");
log = &Poco::Logger::get("ConfigProcessor");
}
}
ConfigProcessor::~ConfigProcessor()
{
if (channel_ptr) /// This means we have created a new console logger in the constructor.
Logger::destroy("ConfigProcessor");
Poco::Logger::destroy("ConfigProcessor");
}

View File

@ -116,7 +116,7 @@ private:
bool throw_on_bad_incl;
Logger * log;
Poco::Logger * log;
Poco::AutoPtr<Poco::Channel> channel_ptr;
Substitutions substitutions;

View File

@ -69,7 +69,7 @@ private:
static constexpr auto reload_interval = std::chrono::seconds(2);
Poco::Logger * log = &Logger::get("ConfigReloader");
Poco::Logger * log = &Poco::Logger::get("ConfigReloader");
std::string path;
std::string include_from_path;

View File

@ -202,7 +202,7 @@ bool DNSResolver::updateCache()
}
if (!lost_hosts.empty())
LOG_INFO(&Logger::get("DNSResolver"), "Cached hosts not found: {}", lost_hosts);
LOG_INFO(&Poco::Logger::get("DNSResolver"), "Cached hosts not found: {}", lost_hosts);
return updated;
}

View File

@ -5,8 +5,10 @@
#include <Poco/String.h>
#include <common/logger_useful.h>
#include <IO/WriteHelpers.h>
#include <IO/ReadHelpers.h>
#include <IO/Operators.h>
#include <IO/ReadBufferFromString.h>
#include <IO/ReadBufferFromFile.h>
#include <common/demangle.h>
#include <Common/formatReadable.h>
#include <Common/filesystemHelpers.h>
@ -25,6 +27,8 @@ namespace ErrorCodes
extern const int STD_EXCEPTION;
extern const int UNKNOWN_EXCEPTION;
extern const int LOGICAL_ERROR;
extern const int CANNOT_ALLOCATE_MEMORY;
extern const int CANNOT_MREMAP;
}
@ -118,7 +122,7 @@ void throwFromErrnoWithPath(const std::string & s, const std::string & path, int
void tryLogCurrentException(const char * log_name, const std::string & start_of_message)
{
tryLogCurrentException(&Logger::get(log_name), start_of_message);
tryLogCurrentException(&Poco::Logger::get(log_name), start_of_message);
}
void tryLogCurrentException(Poco::Logger * logger, const std::string & start_of_message)
@ -144,18 +148,79 @@ static void getNoSpaceLeftInfoMessage(std::filesystem::path path, std::string &
path = path.parent_path();
auto fs = getStatVFS(path);
msg += "\nTotal space: " + formatReadableSizeWithBinarySuffix(fs.f_blocks * fs.f_bsize)
+ "\nAvailable space: " + formatReadableSizeWithBinarySuffix(fs.f_bavail * fs.f_bsize)
+ "\nTotal inodes: " + formatReadableQuantity(fs.f_files)
+ "\nAvailable inodes: " + formatReadableQuantity(fs.f_favail);
auto mount_point = getMountPoint(path).string();
msg += "\nMount point: " + mount_point;
fmt::format_to(std::back_inserter(msg),
"\nTotal space: {}\nAvailable space: {}\nTotal inodes: {}\nAvailable inodes: {}\nMount point: {}",
ReadableSize(fs.f_blocks * fs.f_bsize),
ReadableSize(fs.f_bavail * fs.f_bsize),
formatReadableQuantity(fs.f_files),
formatReadableQuantity(fs.f_favail),
mount_point);
#if defined(__linux__)
msg += "\nFilesystem: " + getFilesystemName(mount_point);
#endif
}
/** It is possible that the system has enough memory,
 * but we have a shortage of available memory mappings.
 * Provide a good diagnostic to the user in that case.
 */
static void getNotEnoughMemoryMessage(std::string & msg)
{
#if defined(__linux__)
try
{
static constexpr size_t buf_size = 4096;
char buf[buf_size];
UInt64 max_map_count = 0;
{
ReadBufferFromFile file("/proc/sys/vm/max_map_count", buf_size, -1, buf);
readText(max_map_count, file);
}
UInt64 num_maps = 0;
{
ReadBufferFromFile file("/proc/self/maps", buf_size, -1, buf);
while (!file.eof())
{
char * next_pos = find_first_symbols<'\n'>(file.position(), file.buffer().end());
file.position() = next_pos;
if (!file.hasPendingData())
continue;
if (*file.position() == '\n')
{
++num_maps;
++file.position();
}
}
}
if (num_maps > max_map_count * 0.99)
{
msg += fmt::format(
"\nIt looks like that the process is near the limit on number of virtual memory mappings."
"\nCurrent number of mappings (/proc/self/maps): {}."
"\nLimit on number of mappings (/proc/sys/vm/max_map_count): {}."
"\nYou should increase the limit for vm.max_map_count in /etc/sysctl.conf"
"\n",
num_maps, max_map_count);
}
}
catch (...)
{
msg += "\nCannot obtain additional info about memory usage.";
}
#else
(void)msg;
#endif
}
static std::string getExtraExceptionInfo(const std::exception & e)
{
String msg;
@ -170,6 +235,13 @@ static std::string getExtraExceptionInfo(const std::exception & e)
{
if (errno_exception->getErrno() == ENOSPC && errno_exception->getPath())
getNoSpaceLeftInfoMessage(errno_exception->getPath().value(), msg);
else if (errno_exception->code() == ErrorCodes::CANNOT_ALLOCATE_MEMORY
|| errno_exception->code() == ErrorCodes::CANNOT_MREMAP)
getNotEnoughMemoryMessage(msg);
}
else if (dynamic_cast<const std::bad_alloc *>(&e))
{
getNotEnoughMemoryMessage(msg);
}
}
catch (...)

View File

@ -37,7 +37,7 @@ private:
Map map;
bool initialized = false;
Logger * log = &Logger::get("FileChecker");
Poco::Logger * log = &Poco::Logger::get("FileChecker");
};
}

View File

@ -306,7 +306,7 @@ private:
auto it = cells.find(key);
if (it == cells.end())
{
LOG_ERROR(&Logger::get("LRUCache"), "LRUCache became inconsistent. There must be a bug in it.");
LOG_ERROR(&Poco::Logger::get("LRUCache"), "LRUCache became inconsistent. There must be a bug in it.");
abort();
}
@ -324,7 +324,7 @@ private:
if (current_size > (1ull << 63))
{
LOG_ERROR(&Logger::get("LRUCache"), "LRUCache became inconsistent. There must be a bug in it.");
LOG_ERROR(&Poco::Logger::get("LRUCache"), "LRUCache became inconsistent. There must be a bug in it.");
abort();
}
}

View File

@ -50,13 +50,13 @@ MemoryTracker::~MemoryTracker()
void MemoryTracker::logPeakMemoryUsage() const
{
const auto * description = description_ptr.load(std::memory_order_relaxed);
LOG_DEBUG(&Logger::get("MemoryTracker"), "Peak memory usage{}: {}.", (description ? " " + std::string(description) : ""), formatReadableSizeWithBinarySuffix(peak));
LOG_DEBUG(&Poco::Logger::get("MemoryTracker"), "Peak memory usage{}: {}.", (description ? " " + std::string(description) : ""), ReadableSize(peak));
}
void MemoryTracker::logMemoryUsage(Int64 current) const
{
const auto * description = description_ptr.load(std::memory_order_relaxed);
LOG_DEBUG(&Logger::get("MemoryTracker"), "Current memory usage{}: {}.", (description ? " " + std::string(description) : ""), formatReadableSizeWithBinarySuffix(current));
LOG_DEBUG(&Poco::Logger::get("MemoryTracker"), "Current memory usage{}: {}.", (description ? " " + std::string(description) : ""), ReadableSize(current));
}

View File

@ -102,7 +102,7 @@ void LazyPipeFDs::tryIncreaseSize(int desired_size)
if (-1 == fcntl(fds_rw[1], F_SETPIPE_SZ, pipe_size * 2) && errno != EPERM)
throwFromErrno("Cannot increase pipe capacity to " + std::to_string(pipe_size * 2), ErrorCodes::CANNOT_FCNTL);
LOG_TRACE(log, "Pipe capacity is {}", formatReadableSizeWithBinarySuffix(std::min(pipe_size, desired_size)));
LOG_TRACE(log, "Pipe capacity is {}", ReadableSize(std::min(pipe_size, desired_size)));
}
#else
(void)desired_size;

View File

@ -152,9 +152,9 @@ private:
protected:
Logger * log;
Poco::Logger * log;
PoolBase(unsigned max_items_, Logger * log_)
PoolBase(unsigned max_items_, Poco::Logger * log_)
: max_items(max_items_), log(log_)
{
items.reserve(max_items);

View File

@ -57,7 +57,7 @@ public:
NestedPools nested_pools_,
time_t decrease_error_period_,
size_t max_error_cap_,
Logger * log_)
Poco::Logger * log_)
: nested_pools(std::move(nested_pools_))
, decrease_error_period(decrease_error_period_)
, max_error_cap(max_error_cap_)
@ -134,7 +134,7 @@ protected:
/// The time when error counts were last decreased.
time_t last_error_decrease_time = 0;
Logger * log;
Poco::Logger * log;
};
template <typename TNestedPool>

View File

@ -79,7 +79,7 @@ namespace ErrorCodes
template <typename ProfilerImpl>
QueryProfilerBase<ProfilerImpl>::QueryProfilerBase(const UInt64 thread_id, const int clock_type, UInt32 period, const int pause_signal_)
: log(&Logger::get("QueryProfiler"))
: log(&Poco::Logger::get("QueryProfiler"))
, pause_signal(pause_signal_)
{
#if USE_UNWIND

View File

@ -102,7 +102,7 @@ SensitiveDataMasker::SensitiveDataMasker(const Poco::Util::AbstractConfiguration
{
Poco::Util::AbstractConfiguration::Keys keys;
config.keys(config_prefix, keys);
Logger * logger = &Logger::get("SensitiveDataMaskerConfigRead");
Poco::Logger * logger = &Poco::Logger::get("SensitiveDataMaskerConfigRead");
std::set<std::string> used_names;

View File

@ -43,9 +43,9 @@ StatusFile::StatusFile(const std::string & path_)
}
if (!contents.empty())
LOG_INFO(&Logger::get("StatusFile"), "Status file {} already exists - unclean restart. Contents:\n{}", path, contents);
LOG_INFO(&Poco::Logger::get("StatusFile"), "Status file {} already exists - unclean restart. Contents:\n{}", path, contents);
else
LOG_INFO(&Logger::get("StatusFile"), "Status file {} already exists and is empty - probably unclean hardware restart.", path);
LOG_INFO(&Poco::Logger::get("StatusFile"), "Status file {} already exists and is empty - probably unclean hardware restart.", path);
}
fd = ::open(path.c_str(), O_WRONLY | O_CREAT | O_CLOEXEC, 0666);
@ -90,10 +90,10 @@ StatusFile::StatusFile(const std::string & path_)
StatusFile::~StatusFile()
{
if (0 != close(fd))
LOG_ERROR(&Logger::get("StatusFile"), "Cannot close file {}, {}", path, errnoToString(ErrorCodes::CANNOT_CLOSE_FILE));
LOG_ERROR(&Poco::Logger::get("StatusFile"), "Cannot close file {}, {}", path, errnoToString(ErrorCodes::CANNOT_CLOSE_FILE));
if (0 != unlink(path.c_str()))
LOG_ERROR(&Logger::get("StatusFile"), "Cannot unlink file {}, {}", path, errnoToString(ErrorCodes::CANNOT_CLOSE_FILE));
LOG_ERROR(&Poco::Logger::get("StatusFile"), "Cannot unlink file {}, {}", path, errnoToString(ErrorCodes::CANNOT_CLOSE_FILE));
}
}

View File

@ -116,6 +116,12 @@ inline bool isControlASCII(char c)
return static_cast<unsigned char>(c) <= 31;
}
inline bool isPrintableASCII(char c)
{
uint8_t uc = c;
return uc >= 32 && uc <= 126; /// 127 is ASCII DEL.
}
/// Works assuming isAlphaASCII.
inline char toLowerIfAlphaASCII(char c)
{

View File

@ -234,14 +234,6 @@ void ThreadPoolImpl<Thread>::worker(typename std::list<Thread>::iterator thread_
--scheduled_jobs;
}
DB::tryLogCurrentException("ThreadPool",
std::string("Exception in ThreadPool(") +
"max_threads: " + std::to_string(max_threads)
+ ", max_free_threads: " + std::to_string(max_free_threads)
+ ", queue_size: " + std::to_string(queue_size)
+ ", shutdown_on_exception: " + std::to_string(shutdown_on_exception)
+ ").");
job_finished.notify_all();
new_job_or_shutdown.notify_all();
return;

View File

@ -1,7 +1,9 @@
#include <Common/UTF8Helpers.h>
#include <Common/StringUtils/StringUtils.h>
#include <widechar_width.h>
namespace DB
{
namespace UTF8
@ -94,6 +96,42 @@ size_t computeWidth(const UInt8 * data, size_t size, size_t prefix) noexcept
size_t rollback = 0;
for (size_t i = 0; i < size; ++i)
{
/// Quickly skip regular ASCII
#if defined(__SSE2__)
const auto lower_bound = _mm_set1_epi8(32);
const auto upper_bound = _mm_set1_epi8(126);
while (i + 15 < size)
{
__m128i bytes = _mm_loadu_si128(reinterpret_cast<const __m128i *>(&data[i]));
const uint16_t non_regular_width_mask = _mm_movemask_epi8(
_mm_or_si128(
_mm_cmplt_epi8(bytes, lower_bound),
_mm_cmpgt_epi8(bytes, upper_bound)));
if (non_regular_width_mask)
{
auto num_regular_chars = __builtin_ctz(non_regular_width_mask);
width += num_regular_chars;
i += num_regular_chars;
break;
}
else
{
i += 16;
width += 16;
}
}
#endif
while (i < size && isPrintableASCII(data[i]))
{
++width;
++i;
}
switch (decoder.decode(data[i]))
{
case UTF8Decoder::REJECT:

View File

@ -43,7 +43,7 @@ public:
private:
zkutil::ZooKeeperHolderPtr zookeeper_holder;
std::string path;
Logger * log = &Logger::get("zkutil::Increment");
Poco::Logger * log = &Poco::Logger::get("zkutil::Increment");
};
}

View File

@ -39,7 +39,7 @@ public:
LeaderElection(DB::BackgroundSchedulePool & pool_, const std::string & path_, ZooKeeper & zookeeper_, LeadershipHandler handler_, const std::string & identifier_ = "")
: pool(pool_), path(path_), zookeeper(zookeeper_), handler(handler_), identifier(identifier_)
, log_name("LeaderElection (" + path + ")")
, log(&Logger::get(log_name))
, log(&Poco::Logger::get(log_name))
{
task = pool.createTask(log_name, [this] { threadFunction(); });
createNode();
@ -67,7 +67,7 @@ private:
LeadershipHandler handler;
std::string identifier;
std::string log_name;
Logger * log;
Poco::Logger * log;
EphemeralNodeHolderPtr node;
std::string node_name;

View File

@ -21,7 +21,7 @@ namespace zkutil
zookeeper_holder(zookeeper_holder_),
lock_path(lock_prefix_ + "/" + lock_name_),
lock_message(lock_message_),
log(&Logger::get("zkutil::Lock"))
log(&Poco::Logger::get("zkutil::Lock"))
{
auto zookeeper = zookeeper_holder->getZooKeeper();
if (create_parent_path_)
@ -72,7 +72,7 @@ namespace zkutil
std::string lock_path;
std::string lock_message;
Logger * log;
Poco::Logger * log;
};
}

View File

@ -48,7 +48,7 @@ static void check(int32_t code, const std::string & path)
void ZooKeeper::init(const std::string & implementation_, const std::string & hosts_, const std::string & identity_,
int32_t session_timeout_ms_, int32_t operation_timeout_ms_, const std::string & chroot_)
{
log = &Logger::get("ZooKeeper");
log = &Poco::Logger::get("ZooKeeper");
hosts = hosts_;
identity = identity_;
session_timeout_ms = session_timeout_ms_;

View File

@ -269,7 +269,7 @@ private:
std::mutex mutex;
Logger * log = nullptr;
Poco::Logger * log = nullptr;
};

View File

@ -70,7 +70,7 @@ private:
mutable std::mutex mutex;
ZooKeeper::Ptr ptr;
Logger * log = &Logger::get("ZooKeeperHolder");
Poco::Logger * log = &Poco::Logger::get("ZooKeeperHolder");
static std::string nullptr_exception_message;
};

View File

@ -20,8 +20,8 @@ int main(int argc, char ** argv)
}
Poco::AutoPtr<Poco::ConsoleChannel> channel = new Poco::ConsoleChannel(std::cerr);
Logger::root().setChannel(channel);
Logger::root().setLevel("trace");
Poco::Logger::root().setChannel(channel);
Poco::Logger::root().setLevel("trace");
zkutil::ZooKeeper zk(argv[1]);
std::string unused;

View File

@ -1,6 +1,8 @@
#pragma once
#include <string>
#include <fmt/format.h>
namespace DB
{
@ -20,3 +22,35 @@ std::string formatReadableSizeWithDecimalSuffix(double value, int precision = 2)
/// Prints the number as 123.45 billion.
void formatReadableQuantity(double value, DB::WriteBuffer & out, int precision = 2);
std::string formatReadableQuantity(double value, int precision = 2);
/// Wrapper around a value. If used with the fmt library (e.g. for log messages),
/// the value is automatically formatted as a size with a binary suffix.
struct ReadableSize
{
double value;
explicit ReadableSize(double value_) : value(value_) {}
};
/// See https://fmt.dev/latest/api.html#formatting-user-defined-types
template <>
struct fmt::formatter<ReadableSize>
{
constexpr auto parse(format_parse_context & ctx)
{
auto it = ctx.begin();
auto end = ctx.end();
/// Only support {}.
if (it != end && *it != '}')
throw format_error("invalid format");
return it;
}
template <typename FormatContext>
auto format(const ReadableSize & size, FormatContext & ctx)
{
return format_to(ctx.out(), "{}", formatReadableSizeWithBinarySuffix(size.value));
}
};
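This is the mechanism that lets the formatReadableSizeWithBinarySuffix(...) call sites throughout this commit shrink to ReadableSize(...): fmt picks up the formatter specialization whenever the wrapper appears behind a {} placeholder. A minimal standalone sketch of the same idiom, assuming the fmt API of that era, with a hypothetical HumanBytes wrapper and a hand-rolled suffix loop standing in for formatReadableSizeWithBinarySuffix:
#include <fmt/format.h>
#include <cstddef>
/// Hypothetical wrapper, standing in for ReadableSize.
struct HumanBytes
{
    double value;
    explicit HumanBytes(double value_) : value(value_) {}
};
template <>
struct fmt::formatter<HumanBytes>
{
    constexpr auto parse(format_parse_context & ctx) { return ctx.begin(); }   /// No format spec handling; only plain {} is used below.
    template <typename FormatContext>
    auto format(const HumanBytes & size, FormatContext & ctx)
    {
        static constexpr const char * units[] = {"B", "KiB", "MiB", "GiB", "TiB"};
        double v = size.value;
        size_t i = 0;
        while (v >= 1024 && i + 1 < sizeof(units) / sizeof(units[0]))
        {
            v /= 1024;
            ++i;
        }
        return format_to(ctx.out(), "{:.2f} {}", v, units[i]);
    }
};
int main()
{
    /// Prints "Sent 1.50 MiB in 0.20 sec."
    fmt::print("Sent {} in {:.2f} sec.\n", HumanBytes(1572864), 0.2);
}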

View File

@ -12,7 +12,7 @@ TEST(Logger, Log)
{
Poco::Logger::root().setLevel("none");
Poco::Logger::root().setChannel(Poco::AutoPtr<Poco::NullChannel>(new Poco::NullChannel()));
Logger * log = &Logger::get("Log");
Poco::Logger * log = &Poco::Logger::get("Log");
/// This test checks that we don't pass this string to fmtlib, because it is the only argument.
EXPECT_NO_THROW(LOG_INFO(log, "Hello {} World"));

View File

@ -111,7 +111,7 @@ void BackgroundSchedulePoolTaskInfo::execute()
static const int32_t slow_execution_threshold_ms = 200;
if (milliseconds >= slow_execution_threshold_ms)
LOG_TRACE(&Logger::get(log_name), "Execution took {} ms.", milliseconds);
LOG_TRACE(&Poco::Logger::get(log_name), "Execution took {} ms.", milliseconds);
{
std::lock_guard lock_schedule(schedule_mutex);
@ -156,7 +156,7 @@ BackgroundSchedulePool::BackgroundSchedulePool(size_t size_, CurrentMetrics::Met
, memory_metric(memory_metric_)
, thread_name(thread_name_)
{
LOG_INFO(&Logger::get("BackgroundSchedulePool/" + thread_name), "Create BackgroundSchedulePool with {} threads", size);
LOG_INFO(&Poco::Logger::get("BackgroundSchedulePool/" + thread_name), "Create BackgroundSchedulePool with {} threads", size);
threads.resize(size);
for (auto & thread : threads)
@ -179,7 +179,7 @@ BackgroundSchedulePool::~BackgroundSchedulePool()
queue.wakeUpAll();
delayed_thread.join();
LOG_TRACE(&Logger::get("BackgroundSchedulePool/" + thread_name), "Waiting for threads to finish.");
LOG_TRACE(&Poco::Logger::get("BackgroundSchedulePool/" + thread_name), "Waiting for threads to finish.");
for (auto & thread : threads)
thread.join();
}

View File

@ -164,7 +164,7 @@ void ExternalTablesHandler::handlePart(const Poco::Net::MessageHeader & header,
/// Create table
NamesAndTypesList columns = sample_block.getNamesAndTypesList();
auto temporary_table = TemporaryTableHolder(context, ColumnsDescription{columns});
auto temporary_table = TemporaryTableHolder(context, ColumnsDescription{columns}, {});
auto storage = temporary_table.getTable();
context.addExternalTable(data->table_name, std::move(temporary_table));
BlockOutputStreamPtr output = storage->write(ASTPtr(), context);

View File

@ -994,7 +994,7 @@ private:
class Sha256Password : public IPlugin
{
public:
Sha256Password(RSA & public_key_, RSA & private_key_, Logger * log_)
Sha256Password(RSA & public_key_, RSA & private_key_, Poco::Logger * log_)
: public_key(public_key_)
, private_key(private_key_)
, log(log_)
@ -1130,7 +1130,7 @@ public:
private:
RSA & public_key;
RSA & private_key;
Logger * log;
Poco::Logger * log;
String scramble;
};
#endif

View File

@ -126,7 +126,7 @@ struct Settings : public SettingsCollection<Settings>
M(SettingBool, force_optimize_skip_unused_shards_no_nested, false, "Do not apply force_optimize_skip_unused_shards for nested Distributed tables.", 0) \
\
M(SettingBool, input_format_parallel_parsing, true, "Enable parallel parsing for some data formats.", 0) \
M(SettingUInt64, min_chunk_bytes_for_parallel_parsing, (1024 * 1024), "The minimum chunk size in bytes, which each thread will parse in parallel.", 0) \
M(SettingUInt64, min_chunk_bytes_for_parallel_parsing, (10 * 1024 * 1024), "The minimum chunk size in bytes, which each thread will parse in parallel.", 0) \
\
M(SettingUInt64, merge_tree_min_rows_for_concurrent_read, (20 * 8192), "If at least as many lines are read from one file, the reading can be parallelized.", 0) \
M(SettingUInt64, merge_tree_min_bytes_for_concurrent_read, (24 * 10 * 1024 * 1024), "If at least as many bytes are read from one file, the reading can be parallelized.", 0) \
@ -310,7 +310,7 @@ struct Settings : public SettingsCollection<Settings>
M(SettingUInt64, max_execution_speed, 0, "Maximum number of execution rows per second.", 0) \
M(SettingUInt64, min_execution_speed_bytes, 0, "Minimum number of execution bytes per second.", 0) \
M(SettingUInt64, max_execution_speed_bytes, 0, "Maximum number of execution bytes per second.", 0) \
M(SettingSeconds, timeout_before_checking_execution_speed, 0, "Check that the speed is not too low after the specified time has elapsed.", 0) \
M(SettingSeconds, timeout_before_checking_execution_speed, 10, "Check that the speed is not too low after the specified time has elapsed.", 0) \
\
M(SettingUInt64, max_columns_to_read, 0, "", 0) \
M(SettingUInt64, max_temporary_columns, 0, "", 0) \
@ -425,6 +425,8 @@ struct Settings : public SettingsCollection<Settings>
M(SettingSeconds, lock_acquire_timeout, DBMS_DEFAULT_LOCK_ACQUIRE_TIMEOUT_SEC, "How long locking request should wait before failing", 0) \
M(SettingBool, materialize_ttl_after_modify, true, "Apply TTL for old data, after ALTER MODIFY TTL query", 0) \
\
M(SettingBool, allow_experimental_geo_types, false, "Allow geo data types such as Point, Ring, Polygon, MultiPolygon", 0) \
\
/** Obsolete settings that do nothing but left for compatibility reasons. Remove each one after half a year of obsolescence. */ \
\
M(SettingBool, allow_experimental_low_cardinality_type, true, "Obsolete setting, does nothing. Will be removed after 2019-08-13", 0) \
@ -437,6 +439,7 @@ struct Settings : public SettingsCollection<Settings>
M(SettingUInt64, mark_cache_min_lifetime, 0, "Obsolete setting, does nothing. Will be removed after 2020-05-31", 0) \
M(SettingBool, partial_merge_join, false, "Obsolete. Use join_algorithm='prefer_partial_merge' instead.", 0) \
M(SettingUInt64, max_memory_usage_for_all_queries, 0, "Obsolete. Will be removed after 2020-10-20", 0) \
M(SettingBool, experimental_use_processors, true, "Obsolete setting, does nothing. Will be removed after 2020-11-29.", 0) \
DECLARE_SETTINGS_COLLECTION(LIST_OF_SETTINGS)

View File

@ -598,7 +598,7 @@ namespace details
void SettingsCollectionUtils::warningNameNotFound(const StringRef & name)
{
static auto * log = &Logger::get("Settings");
static auto * log = &Poco::Logger::get("Settings");
LOG_WARNING(log, "Unknown setting {}, skipping", name);
}

View File

@ -60,7 +60,7 @@ Block AggregatingBlockInputStream::readImpl()
input_streams.emplace_back(temporary_inputs.back()->block_in);
}
LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), formatReadableSizeWithBinarySuffix(files.sum_size_compressed), formatReadableSizeWithBinarySuffix(files.sum_size_uncompressed));
LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), ReadableSize(files.sum_size_compressed), ReadableSize(files.sum_size_uncompressed));
impl = std::make_unique<MergingAggregatedMemoryEfficientBlockInputStream>(input_streams, params, final, 1, 1);
}

View File

@ -47,7 +47,7 @@ protected:
/** From here we will get the completed blocks after the aggregation. */
std::unique_ptr<IBlockInputStream> impl;
Logger * log = &Logger::get("AggregatingBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("AggregatingBlockInputStream");
};
}

View File

@ -13,6 +13,7 @@ limitations under the License. */
#include <DataStreams/IBlockInputStream.h>
#include <Processors/Sources/SourceWithProgress.h>
#include <Processors/Transforms/AggregatingTransform.h>
namespace DB
@ -38,7 +39,12 @@ protected:
Block res = *it;
++it;
return Chunk(res.getColumns(), res.rows());
auto info = std::make_shared<AggregatedChunkInfo>();
info->bucket_num = res.info.bucket_num;
info->is_overflows = res.info.is_overflows;
return Chunk(res.getColumns(), res.rows(), std::move(info));
}
private:

View File

@ -168,7 +168,7 @@ private:
const SortDescription description;
String sign_column_name;
Logger * log = &Logger::get("CollapsingFinalBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("CollapsingFinalBlockInputStream");
bool first = true;

View File

@ -21,7 +21,7 @@ ColumnGathererStream::ColumnGathererStream(
const String & column_name_, const BlockInputStreams & source_streams, ReadBuffer & row_sources_buf_,
size_t block_preferred_size_)
: column_name(column_name_), sources(source_streams.size()), row_sources_buf(row_sources_buf_)
, block_preferred_size(block_preferred_size_), log(&Logger::get("ColumnGathererStream"))
, block_preferred_size(block_preferred_size_), log(&Poco::Logger::get("ColumnGathererStream"))
{
if (source_streams.empty())
throw Exception("There are no streams to gather", ErrorCodes::EMPTY_DATA_PASSED);
@ -105,7 +105,7 @@ void ColumnGathererStream::readSuffixImpl()
else
LOG_DEBUG(log, "Gathered column {} ({} bytes/elem.) in {} sec., {} rows/sec., {}/sec.",
column_name, static_cast<double>(profile_info.bytes) / profile_info.rows, seconds,
profile_info.rows / seconds, formatReadableSizeWithBinarySuffix(profile_info.bytes / seconds));
profile_info.rows / seconds, ReadableSize(profile_info.bytes / seconds));
}
}

View File

@ -44,7 +44,7 @@ private:
size_t bytes_to_transfer = 0;
using Logger = Poco::Logger;
Logger * log = &Logger::get("CreatingSetsBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("CreatingSetsBlockInputStream");
void createAll();
void createOne(SubqueryForSet & subquery);

View File

@ -30,7 +30,8 @@ static void limitProgressingSpeed(size_t total_progress_size, size_t max_speed_i
{
UInt64 sleep_microseconds = desired_microseconds - total_elapsed_microseconds;
/// Never sleep more than one second (it should be enough to limit speed for a reasonable amount, and otherwise it's too easy to make query hang).
/// Never sleep more than one second (it should be enough to limit speed for a reasonable amount,
/// and otherwise it's too easy to make query hang).
sleep_microseconds = std::min(UInt64(1000000), sleep_microseconds);
sleepForMicroseconds(sleep_microseconds);
@ -45,14 +46,14 @@ void ExecutionSpeedLimits::throttle(
{
if ((min_execution_rps != 0 || max_execution_rps != 0
|| min_execution_bps != 0 || max_execution_bps != 0
|| (total_rows_to_read != 0 && timeout_before_checking_execution_speed != 0)) &&
(static_cast<Int64>(total_elapsed_microseconds) > timeout_before_checking_execution_speed.totalMicroseconds()))
|| (total_rows_to_read != 0 && timeout_before_checking_execution_speed != 0))
&& (static_cast<Int64>(total_elapsed_microseconds) > timeout_before_checking_execution_speed.totalMicroseconds()))
{
/// Do not count sleeps in throttlers
UInt64 throttler_sleep_microseconds = CurrentThread::getProfileEvents()[ProfileEvents::ThrottlerSleepMicroseconds];
double elapsed_seconds = 0;
if (throttler_sleep_microseconds > total_elapsed_microseconds)
if (total_elapsed_microseconds > throttler_sleep_microseconds)
elapsed_seconds = static_cast<double>(total_elapsed_microseconds - throttler_sleep_microseconds) / 1000000.0;
if (elapsed_seconds > 0)
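For reference, the throttling arithmetic being touched here is simple: compute how long the work should have taken at the configured maximum speed, sleep for the shortfall, and cap the sleep at one second. A standalone sketch of that logic under a simplified signature (limitSpeed and its parameters are illustrative, not the actual ExecutionSpeedLimits interface):
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <thread>
/// If `rows` rows were processed in `elapsed_us` microseconds but only
/// `max_rows_per_second` are allowed, sleep for the difference, never more than one second.
static void limitSpeed(uint64_t rows, uint64_t max_rows_per_second, uint64_t elapsed_us)
{
    if (max_rows_per_second == 0)
        return;
    const uint64_t desired_us = rows * 1000000 / max_rows_per_second;
    if (desired_us > elapsed_us)
    {
        const uint64_t sleep_us = std::min<uint64_t>(1000000, desired_us - elapsed_us);
        std::this_thread::sleep_for(std::chrono::microseconds(sleep_us));
    }
}
int main()
{
    /// 100000 rows at a limit of 50000 rows/sec should take 2 sec; after 1.9 sec of work
    /// this sleeps for the remaining 0.1 sec (100000 microseconds).
    limitSpeed(100000, 50000, 1900000);
}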

View File

@ -58,7 +58,7 @@ InputStreamFromASTInsertQuery::InputStreamFromASTInsertQuery(
if (context.getSettingsRef().input_format_defaults_for_omitted_fields && ast_insert_query->table_id && !input_function)
{
StoragePtr storage = DatabaseCatalog::instance().getTable(ast_insert_query->table_id);
StoragePtr storage = DatabaseCatalog::instance().getTable(ast_insert_query->table_id, context);
auto column_defaults = storage->getColumns().getDefaults();
if (!column_defaults.empty())
res_stream = std::make_shared<AddingDefaultsBlockInputStream>(res_stream, column_defaults, context);

View File

@ -264,7 +264,7 @@ void MergeSortingBlockInputStream::remerge()
}
merger.readSuffix();
LOG_DEBUG(log, "Memory usage is lowered from {} to {}", formatReadableSizeWithBinarySuffix(sum_bytes_in_blocks), formatReadableSizeWithBinarySuffix(new_sum_bytes_in_blocks));
LOG_DEBUG(log, "Memory usage is lowered from {} to {}", ReadableSize(sum_bytes_in_blocks), ReadableSize(new_sum_bytes_in_blocks));
/// If the memory consumption was not lowered enough - we will not perform remerge anymore. 2 is a guess.
if (new_sum_bytes_in_blocks * 2 > sum_bytes_in_blocks)

View File

@ -104,7 +104,7 @@ private:
String codec;
size_t min_free_disk_space;
Logger * log = &Logger::get("MergeSortingBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("MergeSortingBlockInputStream");
Blocks blocks;
size_t sum_rows_in_blocks = 0;

View File

@ -555,7 +555,7 @@ MergingAggregatedMemoryEfficientBlockInputStream::BlocksToMerge MergingAggregate
/// Not yet partitioned (split into buckets) block. Will partition it and place the result into 'splitted_blocks'.
if (input.block.info.bucket_num == -1 && input.block && input.splitted_blocks.empty())
{
LOG_TRACE(&Logger::get("MergingAggregatedMemoryEfficient"), "Having block without bucket: will split.");
LOG_TRACE(&Poco::Logger::get("MergingAggregatedMemoryEfficient"), "Having block without bucket: will split.");
input.splitted_blocks = aggregator.convertBlockToTwoLevel(input.block);
input.block = Block();

View File

@ -96,7 +96,7 @@ private:
std::atomic<bool> has_overflows {false};
int current_bucket_num = -1;
Logger * log = &Logger::get("MergingAggregatedMemoryEfficientBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("MergingAggregatedMemoryEfficientBlockInputStream");
struct Input

View File

@ -23,7 +23,7 @@ MergingSortedBlockInputStream::MergingSortedBlockInputStream(
: description(std::move(description_)), max_block_size(max_block_size_), limit(limit_), quiet(quiet_)
, source_blocks(inputs_.size())
, cursors(inputs_.size()), out_row_sources_buf(out_row_sources_buf_)
, log(&Logger::get("MergingSortedBlockInputStream"))
, log(&Poco::Logger::get("MergingSortedBlockInputStream"))
{
children.insert(children.end(), inputs_.begin(), inputs_.end());
header = children.at(0)->getHeader();
@ -269,7 +269,7 @@ void MergingSortedBlockInputStream::readSuffixImpl()
LOG_DEBUG(log, "Merge sorted {} blocks, {} rows in {} sec., {} rows/sec., {}/sec",
profile_info.blocks, profile_info.rows, seconds,
profile_info.rows / seconds,
formatReadableSizeWithBinarySuffix(profile_info.bytes / seconds));
ReadableSize(profile_info.bytes / seconds));
}
}

View File

@ -82,7 +82,7 @@ Block ParallelAggregatingBlockInputStream::readImpl()
input_streams.emplace_back(temporary_inputs.back()->block_in);
}
LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), formatReadableSizeWithBinarySuffix(files.sum_size_compressed), formatReadableSizeWithBinarySuffix(files.sum_size_uncompressed));
LOG_TRACE(log, "Will merge {} temporary files of size {} compressed, {} uncompressed.", files.files.size(), ReadableSize(files.sum_size_compressed), ReadableSize(files.sum_size_uncompressed));
impl = std::make_unique<MergingAggregatedMemoryEfficientBlockInputStream>(
input_streams, params, final, temporary_data_merge_threads, temporary_data_merge_threads);
@ -178,16 +178,16 @@ void ParallelAggregatingBlockInputStream::execute()
{
size_t rows = many_data[i]->size();
LOG_TRACE(log, "Aggregated. {} to {} rows (from {}) in {} sec. ({} rows/sec., {}/sec.)",
threads_data[i].src_rows, rows, formatReadableSizeWithBinarySuffix(threads_data[i].src_bytes),
threads_data[i].src_rows, rows, ReadableSize(threads_data[i].src_bytes),
elapsed_seconds, threads_data[i].src_rows / elapsed_seconds,
formatReadableSizeWithBinarySuffix(threads_data[i].src_bytes / elapsed_seconds));
ReadableSize(threads_data[i].src_bytes / elapsed_seconds));
total_src_rows += threads_data[i].src_rows;
total_src_bytes += threads_data[i].src_bytes;
}
LOG_TRACE(log, "Total aggregated. {} rows (from {}) in {} sec. ({} rows/sec., {}/sec.)",
total_src_rows, formatReadableSizeWithBinarySuffix(total_src_bytes), elapsed_seconds,
total_src_rows / elapsed_seconds, formatReadableSizeWithBinarySuffix(total_src_bytes / elapsed_seconds));
total_src_rows, ReadableSize(total_src_bytes), elapsed_seconds,
total_src_rows / elapsed_seconds, ReadableSize(total_src_bytes / elapsed_seconds));
/// If there was no data, and we aggregate without keys, we must return single row with the result of empty aggregation.
/// To do this, we pass a block with zero rows to aggregate.

View File

@ -60,7 +60,7 @@ private:
std::atomic<bool> executed {false};
std::vector<std::unique_ptr<TemporaryFileStream>> temporary_inputs;
Logger * log = &Logger::get("ParallelAggregatingBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("ParallelAggregatingBlockInputStream");
ManyAggregatedDataVariants many_data;

View File

@ -359,7 +359,7 @@ private:
/// Wait for the completion of all threads.
std::atomic<bool> joined_threads { false };
Logger * log = &Logger::get("ParallelInputsProcessor");
Poco::Logger * log = &Poco::Logger::get("ParallelInputsProcessor");
};

View File

@ -59,7 +59,7 @@ PushingToViewsBlockOutputStream::PushingToViewsBlockOutputStream(
for (const auto & database_table : dependencies)
{
auto dependent_table = DatabaseCatalog::instance().getTable(database_table);
auto dependent_table = DatabaseCatalog::instance().getTable(database_table, context);
ASTPtr query;
BlockOutputStreamPtr out;
@ -274,7 +274,7 @@ void PushingToViewsBlockOutputStream::process(const Block & block, size_t view_n
StorageValues::create(
storage->getStorageID(), storage->getColumns(), block, storage->getVirtuals()));
select.emplace(view.query, local_context, SelectQueryOptions());
in = std::make_shared<MaterializingBlockInputStream>(select->execute().in);
in = std::make_shared<MaterializingBlockInputStream>(select->execute().getInputStream());
/// Squashing is needed here because the materialized view query can generate a lot of blocks
/// even when only one block is inserted into the parent table (e.g. if the query is a GROUP BY

View File

@ -151,7 +151,7 @@ private:
PoolMode pool_mode = PoolMode::GET_MANY;
StorageID main_table = StorageID::createEmpty();
Logger * log = &Logger::get("RemoteBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("RemoteBlockInputStream");
};
}

View File

@ -16,8 +16,8 @@ bool SizeLimits::check(UInt64 rows, UInt64 bytes, const char * what, int too_man
+ ", current rows: " + formatReadableQuantity(rows), too_many_rows_exception_code);
if (max_bytes && bytes > max_bytes)
throw Exception("Limit for " + std::string(what) + " exceeded, max bytes: " + formatReadableSizeWithBinarySuffix(max_bytes)
+ ", current bytes: " + formatReadableSizeWithBinarySuffix(bytes), too_many_bytes_exception_code);
throw Exception(fmt::format("Limit for {} exceeded, max bytes: {}, current bytes: {}",
std::string(what), ReadableSize(max_bytes), ReadableSize(bytes)), too_many_bytes_exception_code);
return true;
}

View File

@ -5,7 +5,7 @@
#include <Interpreters/ExpressionAnalyzer.h>
#include <Columns/ColumnConst.h>
#include <Interpreters/addTypeConversionToAST.h>
#include <Storages/MergeTree/TTLMode.h>
#include <Storages/TTLMode.h>
#include <Interpreters/Context.h>
namespace DB
@ -28,7 +28,7 @@ TTLBlockInputStream::TTLBlockInputStream(
, current_time(current_time_)
, force(force_)
, old_ttl_infos(data_part->ttl_infos)
, log(&Logger::get(storage.getLogName() + " (TTLBlockInputStream)"))
, log(&Poco::Logger::get(storage.getLogName() + " (TTLBlockInputStream)"))
, date_lut(DateLUT::instance())
{
children.push_back(input_);
@ -38,7 +38,7 @@ TTLBlockInputStream::TTLBlockInputStream(
const auto & column_defaults = storage_columns.getDefaults();
ASTPtr default_expr_list = std::make_shared<ASTExpressionList>();
for (const auto & [name, _] : storage.column_ttl_entries_by_name)
for (const auto & [name, _] : storage.getColumnTTLs())
{
auto it = column_defaults.find(name);
if (it != column_defaults.end())
@ -70,21 +70,21 @@ TTLBlockInputStream::TTLBlockInputStream(
defaults_expression = ExpressionAnalyzer{default_expr_list, syntax_result, storage.global_context}.getActions(true);
}
if (storage.hasRowsTTL() && storage.rows_ttl_entry.mode == TTLMode::GROUP_BY)
if (storage.hasRowsTTL() && storage.getRowsTTL().mode == TTLMode::GROUP_BY)
{
current_key_value.resize(storage.rows_ttl_entry.group_by_keys.size());
current_key_value.resize(storage.getRowsTTL().group_by_keys.size());
ColumnNumbers keys;
for (const auto & key : storage.rows_ttl_entry.group_by_keys)
for (const auto & key : storage.getRowsTTL().group_by_keys)
keys.push_back(header.getPositionByName(key));
agg_key_columns.resize(storage.rows_ttl_entry.group_by_keys.size());
agg_key_columns.resize(storage.getRowsTTL().group_by_keys.size());
AggregateDescriptions aggregates = storage.rows_ttl_entry.aggregate_descriptions;
AggregateDescriptions aggregates = storage.getRowsTTL().aggregate_descriptions;
for (auto & descr : aggregates)
if (descr.arguments.empty())
for (const auto & name : descr.argument_names)
descr.arguments.push_back(header.getPositionByName(name));
agg_aggregate_columns.resize(storage.rows_ttl_entry.aggregate_descriptions.size());
agg_aggregate_columns.resize(storage.getRowsTTL().aggregate_descriptions.size());
const Settings & settings = storage.global_context.getSettingsRef();
@ -105,8 +105,8 @@ bool TTLBlockInputStream::isTTLExpired(time_t ttl) const
Block TTLBlockInputStream::readImpl()
{
/// Skip all data if table ttl is expired for part
if (storage.hasRowsTTL() && !storage.rows_ttl_entry.where_expression &&
storage.rows_ttl_entry.mode != TTLMode::GROUP_BY && isTTLExpired(old_ttl_infos.table_ttl.max))
if (storage.hasRowsTTL() && !storage.getRowsTTL().where_expression &&
storage.getRowsTTL().mode != TTLMode::GROUP_BY && isTTLExpired(old_ttl_infos.table_ttl.max))
{
rows_removed = data_part->rows_count;
return {};
@ -151,15 +151,17 @@ void TTLBlockInputStream::readSuffixImpl()
void TTLBlockInputStream::removeRowsWithExpiredTableTTL(Block & block)
{
storage.rows_ttl_entry.expression->execute(block);
if (storage.rows_ttl_entry.where_expression)
storage.rows_ttl_entry.where_expression->execute(block);
const auto & rows_ttl = storage.getRowsTTL();
rows_ttl.expression->execute(block);
if (rows_ttl.where_expression)
rows_ttl.where_expression->execute(block);
const IColumn * ttl_column =
block.getByName(storage.rows_ttl_entry.result_column).column.get();
block.getByName(rows_ttl.result_column).column.get();
const IColumn * where_result_column = storage.rows_ttl_entry.where_expression ?
block.getByName(storage.rows_ttl_entry.where_result_column).column.get() : nullptr;
const IColumn * where_result_column = storage.getRowsTTL().where_expression ?
block.getByName(storage.getRowsTTL().where_result_column).column.get() : nullptr;
const auto & column_names = header.getNames();
@ -204,9 +206,9 @@ void TTLBlockInputStream::removeRowsWithExpiredTableTTL(Block & block)
bool ttl_expired = isTTLExpired(cur_ttl) && where_filter_passed;
bool same_as_current = true;
for (size_t j = 0; j < storage.rows_ttl_entry.group_by_keys.size(); ++j)
for (size_t j = 0; j < storage.getRowsTTL().group_by_keys.size(); ++j)
{
const String & key_column = storage.rows_ttl_entry.group_by_keys[j];
const String & key_column = storage.getRowsTTL().group_by_keys[j];
const IColumn * values_column = block.getByName(key_column).column.get();
if (!same_as_current || (*values_column)[i] != current_key_value[j])
{
@ -275,18 +277,18 @@ void TTLBlockInputStream::finalizeAggregates(MutableColumns & result_columns)
auto aggregated_res = aggregator->convertToBlocks(agg_result, true, 1);
for (auto & agg_block : aggregated_res)
{
for (const auto & it : storage.rows_ttl_entry.group_by_aggregations)
std::get<2>(it)->execute(agg_block);
for (const auto & name : storage.rows_ttl_entry.group_by_keys)
for (const auto & it : storage.getRowsTTL().set_parts)
it.expression->execute(agg_block);
for (const auto & name : storage.getRowsTTL().group_by_keys)
{
const IColumn * values_column = agg_block.getByName(name).column.get();
auto & result_column = result_columns[header.getPositionByName(name)];
result_column->insertRangeFrom(*values_column, 0, agg_block.rows());
}
for (const auto & it : storage.rows_ttl_entry.group_by_aggregations)
for (const auto & it : storage.getRowsTTL().set_parts)
{
const IColumn * values_column = agg_block.getByName(get<1>(it)).column.get();
auto & result_column = result_columns[header.getPositionByName(std::get<0>(it))];
const IColumn * values_column = agg_block.getByName(it.expression_result_column_name).column.get();
auto & result_column = result_columns[header.getPositionByName(it.column_name)];
result_column->insertRangeFrom(*values_column, 0, agg_block.rows());
}
}
@ -304,7 +306,7 @@ void TTLBlockInputStream::removeValuesWithExpiredColumnTTL(Block & block)
}
std::vector<String> columns_to_remove;
for (const auto & [name, ttl_entry] : storage.column_ttl_entries_by_name)
for (const auto & [name, ttl_entry] : storage.getColumnTTLs())
{
/// If we do not read all table columns, e.g. during a mutation.
if (!block.has(name))
@ -365,7 +367,7 @@ void TTLBlockInputStream::removeValuesWithExpiredColumnTTL(Block & block)
void TTLBlockInputStream::updateMovesTTL(Block & block)
{
std::vector<String> columns_to_remove;
for (const auto & ttl_entry : storage.move_ttl_entries)
for (const auto & ttl_entry : storage.getMoveTTLs())
{
auto & new_ttl_info = new_ttl_infos.moves_ttl[ttl_entry.result_column];

View File

@ -52,7 +52,7 @@ private:
NameSet empty_columns;
size_t rows_removed = 0;
Logger * log;
Poco::Logger * log;
const DateLUTImpl & date_lut;
/// TODO rewrite defaults logic to evaluteMissingDefaults

View File

@ -253,7 +253,7 @@ private:
bool started = false;
bool all_read = false;
Logger * log = &Logger::get("UnionBlockInputStream");
Poco::Logger * log = &Poco::Logger::get("UnionBlockInputStream");
};
}

View File

@ -20,8 +20,8 @@ try
using namespace DB;
Poco::AutoPtr<Poco::ConsoleChannel> channel = new Poco::ConsoleChannel(std::cerr);
Logger::root().setChannel(channel);
Logger::root().setLevel("trace");
Poco::Logger::root().setChannel(channel);
Poco::Logger::root().setLevel("trace");
Block block1;

View File

@ -35,7 +35,7 @@ try
Names column_names;
column_names.push_back("WatchID");
StoragePtr table = DatabaseCatalog::instance().getTable({"default", "hits6"});
StoragePtr table = DatabaseCatalog::instance().getTable({"default", "hits6"}, context);
QueryProcessingStage::Enum stage = table->getQueryProcessingStage(context);
auto pipes = table->read(column_names, {}, context, stage, settings.max_block_size, settings.max_threads);

View File

@ -0,0 +1,131 @@
#include <Columns/ColumnsNumber.h>
#include <Columns/ColumnTuple.h>
#include <DataTypes/DataTypeArray.h>
#include <DataTypes/DataTypeCustom.h>
#include <DataTypes/DataTypeCustomSimpleTextSerialization.h>
#include <DataTypes/DataTypeFactory.h>
#include <DataTypes/DataTypeTuple.h>
#include <DataTypes/DataTypesNumber.h>
namespace DB
{
namespace
{
class DataTypeCustomPointSerialization : public DataTypeCustomSimpleTextSerialization
{
public:
void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
{
nestedDataType()->serializeAsText(column, row_num, ostr, settings);
}
void deserializeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
{
nestedDataType()->deserializeAsWholeText(column, istr, settings);
}
static DataTypePtr nestedDataType()
{
static auto data_type = DataTypePtr(std::make_unique<DataTypeTuple>(
DataTypes({std::make_unique<DataTypeFloat64>(), std::make_unique<DataTypeFloat64>()})));
return data_type;
}
};
class DataTypeCustomRingSerialization : public DataTypeCustomSimpleTextSerialization
{
public:
void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
{
nestedDataType()->serializeAsText(column, row_num, ostr, settings);
}
void deserializeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
{
nestedDataType()->deserializeAsWholeText(column, istr, settings);
}
static DataTypePtr nestedDataType()
{
static auto data_type = DataTypePtr(std::make_unique<DataTypeArray>(DataTypeCustomPointSerialization::nestedDataType()));
return data_type;
}
};
class DataTypeCustomPolygonSerialization : public DataTypeCustomSimpleTextSerialization
{
public:
void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
{
nestedDataType()->serializeAsText(column, row_num, ostr, settings);
}
void deserializeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
{
nestedDataType()->deserializeAsWholeText(column, istr, settings);
}
static DataTypePtr nestedDataType()
{
static auto data_type = DataTypePtr(std::make_unique<DataTypeArray>(DataTypeCustomRingSerialization::nestedDataType()));
return data_type;
}
};
class DataTypeCustomMultiPolygonSerialization : public DataTypeCustomSimpleTextSerialization
{
public:
void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const override
{
nestedDataType()->serializeAsText(column, row_num, ostr, settings);
}
void deserializeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override
{
nestedDataType()->deserializeAsWholeText(column, istr, settings);
}
static DataTypePtr nestedDataType()
{
static auto data_type = DataTypePtr(std::make_unique<DataTypeArray>(DataTypeCustomPolygonSerialization::nestedDataType()));
return data_type;
}
};
}
void registerDataTypeDomainGeo(DataTypeFactory & factory)
{
// Custom type for point represented as its coordinates stored as Tuple(Float64, Float64)
factory.registerSimpleDataTypeCustom("Point", []
{
return std::make_pair(DataTypeFactory::instance().get("Tuple(Float64, Float64)"),
std::make_unique<DataTypeCustomDesc>(std::make_unique<DataTypeCustomFixedName>("Point"), std::make_unique<DataTypeCustomPointSerialization>()));
});
// Custom type for simple polygon without holes stored as Array(Point)
factory.registerSimpleDataTypeCustom("Ring", []
{
return std::make_pair(DataTypeFactory::instance().get("Array(Point)"),
std::make_unique<DataTypeCustomDesc>(std::make_unique<DataTypeCustomFixedName>("Ring"), std::make_unique<DataTypeCustomRingSerialization>()));
});
// Custom type for polygon with holes stored as Array(Ring)
// The first element of the outer array is the outer shape of the polygon; all the following ones are holes
factory.registerSimpleDataTypeCustom("Polygon", []
{
return std::make_pair(DataTypeFactory::instance().get("Array(Ring)"),
std::make_unique<DataTypeCustomDesc>(std::make_unique<DataTypeCustomFixedName>("Polygon"), std::make_unique<DataTypeCustomPolygonSerialization>()));
});
// Custom type for multiple polygons with holes stored as Array(Polygon)
factory.registerSimpleDataTypeCustom("MultiPolygon", []
{
return std::make_pair(DataTypeFactory::instance().get("Array(Polygon)"),
std::make_unique<DataTypeCustomDesc>(std::make_unique<DataTypeCustomFixedName>("MultiPolygon"), std::make_unique<DataTypeCustomMultiPolygonSerialization>()));
});
}
}

View File

@ -180,6 +180,7 @@ DataTypeFactory::DataTypeFactory()
registerDataTypeLowCardinality(*this);
registerDataTypeDomainIPv4AndIPv6(*this);
registerDataTypeDomainSimpleAggregateFunction(*this);
registerDataTypeDomainGeo(*this);
}
DataTypeFactory & DataTypeFactory::instance()

View File

@ -83,5 +83,6 @@ void registerDataTypeLowCardinality(DataTypeFactory & factory);
void registerDataTypeDomainIPv4AndIPv6(DataTypeFactory & factory);
void registerDataTypeDomainSimpleAggregateFunction(DataTypeFactory & factory);
void registerDataTypeDateTime64(DataTypeFactory & factory);
void registerDataTypeDomainGeo(DataTypeFactory & factory);
}

View File

@ -1,3 +1,4 @@
# This file is generated automatically, do not edit. See 'ya.make.in' and use 'utils/generate-ya-make' to regenerate it.
LIBRARY()
PEERDIR(
@ -9,12 +10,13 @@ SRCS(
convertMySQLDataType.cpp
DataTypeAggregateFunction.cpp
DataTypeArray.cpp
DataTypeCustomGeo.cpp
DataTypeCustomIPv4AndIPv6.cpp
DataTypeCustomSimpleAggregateFunction.cpp
DataTypeCustomSimpleTextSerialization.cpp
DataTypeDate.cpp
DataTypeDateTime.cpp
DataTypeDateTime64.cpp
DataTypeDateTime.cpp
DataTypeDecimalBase.cpp
DataTypeEnum.cpp
DataTypeFactory.cpp
@ -36,6 +38,7 @@ SRCS(
getMostSubtype.cpp
IDataType.cpp
NestedUtils.cpp
)
END()

src/DataTypes/ya.make.in Normal file
View File

@ -0,0 +1,12 @@
LIBRARY()
PEERDIR(
clickhouse/src/Common
clickhouse/src/Formats
)
SRCS(
<? find . -name '*.cpp' | grep -v -F tests | sed 's/^\.\// /' | sort ?>
)
END()

View File

@ -288,15 +288,15 @@ void DatabaseAtomic::assertCanBeDetached(bool cleenup)
"because some tables are still in use. Retry later.", ErrorCodes::DATABASE_NOT_EMPTY);
}
DatabaseTablesIteratorPtr DatabaseAtomic::getTablesIterator(const IDatabase::FilterByNameFunction & filter_by_table_name)
DatabaseTablesIteratorPtr DatabaseAtomic::getTablesIterator(const Context & context, const IDatabase::FilterByNameFunction & filter_by_table_name)
{
auto base_iter = DatabaseWithOwnTablesBase::getTablesIterator(filter_by_table_name);
auto base_iter = DatabaseWithOwnTablesBase::getTablesIterator(context, filter_by_table_name);
return std::make_unique<AtomicDatabaseTablesSnapshotIterator>(std::move(typeid_cast<DatabaseTablesSnapshotIterator &>(*base_iter)));
}
UUID DatabaseAtomic::tryGetTableUUID(const String & table_name) const
{
if (auto table = tryGetTable(table_name))
if (auto table = tryGetTable(table_name, global_context))
return table->getStorageID().uuid;
return UUIDHelpers::Nil;
}

View File

@ -42,7 +42,7 @@ public:
void drop(const Context & /*context*/) override;
DatabaseTablesIteratorPtr getTablesIterator(const FilterByNameFunction & filter_by_table_name) override;
DatabaseTablesIteratorPtr getTablesIterator(const Context & context, const FilterByNameFunction & filter_by_table_name) override;
void loadStoredObjects(Context & context, bool has_force_restore_data_flag) override;

Some files were not shown because too many files have changed in this diff.