Merge remote-tracking branch 'upstream/master' into improvement/diff-types-in-avg-weighted

Commit 862c8a428c

CHANGELOG.md (277 lines changed)
@@ -1,3 +1,12 @@

## ClickHouse release 20.11

### ClickHouse release v20.11.3.3-stable, 2020-11-13

#### Bug Fix

* Fix rare silent crashes when the query profiler is on and ClickHouse is installed on an OS with a glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).

### ClickHouse release v20.11.2.1, 2020-11-11

#### Backward Incompatible Change
@@ -119,6 +128,24 @@

## ClickHouse release 20.10

### ClickHouse release v20.10.4.1-stable, 2020-11-13

#### Bug Fix

* Fix rare silent crashes when the query profiler is on and ClickHouse is installed on an OS with a glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the `IN` operator over several columns and tuples with the `transform_null_in` setting enabled (see the sketch after this list). Fixes [#15310](https://github.com/ClickHouse/ClickHouse/issues/15310). [#16722](https://github.com/ClickHouse/ClickHouse/pull/16722) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `optimize_read_in_order`/`optimize_aggregation_in_order` with `max_threads > 0` and an expression in `ORDER BY`. [#16637](https://github.com/ClickHouse/ClickHouse/pull/16637) ([Azat Khuzhin](https://github.com/azat)).
* Now, when parsing AVRO input, `LowCardinality` is removed from the type. Fixes [#16188](https://github.com/ClickHouse/ClickHouse/issues/16188). [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Fix rapid growth of metadata when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine with `slave_parallel_worker` enabled on the MySQL Slave, by properly shrinking GTID sets. This fixes [#15951](https://github.com/ClickHouse/ClickHouse/issues/15951). [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Fix `DROP TABLE` for `Distributed` tables (racy with `INSERT`). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in the replication queue. Very large entries may appear in `ALTER` queries if the table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a bug with the `MySQL` database engine: when the MySQL server used as a database engine is down, some queries raise an exception because they needlessly try to fetch tables from the unavailable server. For example, the query `SELECT ... FROM system.parts` should work only with MergeTree tables and not touch the MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).
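A minimal sketch of the `transform_null_in` fix referenced above; the literal values are illustrative assumptions, not taken from the PR:

```sql
-- With transform_null_in = 1, NULL inside IN compares like an ordinary value,
-- including for multi-column tuples (the case fixed here).
SELECT (1, NULL) IN ((1, NULL), (2, 3)) AS matched
SETTINGS transform_null_in = 1;
-- expected result: matched = 1
```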
#### Improvement

* Workaround for using S3 with an nginx server as a proxy. Nginx currently does not accept URLs with an empty path like http://domain.com?delete, but vanilla aws-sdk-cpp produces this kind of URL. This commit uses a patched aws-sdk-cpp version, which puts "/" as the path in such cases, like http://domain.com/?delete. [#16813](https://github.com/ClickHouse/ClickHouse/pull/16813) ([ianton-ru](https://github.com/ianton-ru)).

### ClickHouse release v20.10.3.30, 2020-10-28

#### Backward Incompatible Change
@@ -331,6 +358,84 @@

## ClickHouse release 20.9

### ClickHouse release v20.9.5.5-stable, 2020-11-13

#### Bug Fix

* Fix rare silent crashes when the query profiler is on and ClickHouse is installed on an OS with a glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now, when parsing AVRO input, `LowCardinality` is removed from the type. Fixes [#16188](https://github.com/ClickHouse/ClickHouse/issues/16188). [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Fix rapid growth of metadata when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine with `slave_parallel_worker` enabled on the MySQL Slave, by properly shrinking GTID sets. This fixes [#15951](https://github.com/ClickHouse/ClickHouse/issues/15951). [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Fix `DROP TABLE` for `Distributed` tables (racy with `INSERT`). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in the replication queue. Very large entries may appear in `ALTER` queries if the table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed inconsistent behaviour where part of the returned data could be dropped because the set for its filtering wasn't created. [#16308](https://github.com/ClickHouse/ClickHouse/pull/16308) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix a bug with the `MySQL` database engine: when the MySQL server used as a database engine is down, some queries raise an exception because they needlessly try to fetch tables from the unavailable server. For example, the query `SELECT ... FROM system.parts` should work only with MergeTree tables and not touch the MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).
### ClickHouse release v20.9.4.76-stable (2020-10-29)

#### Bug Fix

* Fix a double free in case of an exception in the function `dictGet`. It could have happened if the dictionary was loaded with an error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `GROUP BY` with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async `Distributed` INSERT with `prefer_localhost_replica=0` and `internal_replication`. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix very wrong code in the `TwoLevelStringHashTable` implementation, which might lead to a memory leak. I'm surprised how this bug could lurk for so long... [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix the case when memory can be overallocated regardless of the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* Fix the collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow using the direct layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hangs for 5-10 minutes when a replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from a MaterializedView and concurrently dropping the target table (for the Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint (see the sketch after this list). This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with a `Table ... doesn't exist` error when the table is concurrently renamed (for the Atomic database engine). Fixed a rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix an incorrect empty result for a query from a `Distributed` table if the query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible deadlocks in RBAC. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix the exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after an `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* Fix `select count()` inaccuracy for MaterializeMySQL. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
* Fix some cases of queries in which only virtual columns are selected. Previously the exception `Not found column _nothing in block` might be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fixed a too low default value of the `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve the lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from the lost replica; detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix the error `Cannot add simple transform to empty Pipe`, which happened while reading from a `Buffer` table which has a different structure than its destination table. It was possible if the destination table returned an empty result for the query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a bug with globs in the S3 table function: the region from the URL was not applied to the S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes [#15598](https://github.com/ClickHouse/ClickHouse/issues/15598). [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Throw an error when a single parameter is passed to `ReplicatedMergeTree` instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).
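A sketch of the disambiguation described in the settings-profile fix above; the user names and values are illustrative assumptions:

```sql
-- Parsed as "assign the settings profile named readonly" after the fix:
CREATE USER u1 SETTINGS PROFILE 'readonly';
-- A plain setting with a readonly constraint is still written explicitly:
CREATE USER u2 SETTINGS max_memory_usage = 10000000 READONLY;
```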
#### Improvement

* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in the cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `ReplicatedMergeTree` arguments on table creation (see the sketch below). [#16160](https://github.com/ClickHouse/ClickHouse/pull/16160) ([tavplubix](https://github.com/tavplubix)).
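A sketch of the macro unfolding mentioned above; the ZooKeeper path layout and table names are illustrative assumptions (a running ZooKeeper is required):

```sql
CREATE TABLE db.events (x UInt8)
ENGINE = ReplicatedMergeTree('/clickhouse/{database}/{table}', '{replica}')
ORDER BY x;
-- {database} and {table} are now substituted at creation time,
-- so the stored path becomes '/clickhouse/db/events'.
```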
### ClickHouse release v20.9.3.45-stable (2020-10-09)

#### Bug Fix

* Fix the error `Cannot find column`, which may happen at insertion into a `MATERIALIZED VIEW` in case the query for the `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a race condition in AMQP-CPP. [#15667](https://github.com/ClickHouse/ClickHouse/pull/15667) ([alesapin](https://github.com/alesapin)).
* Fix the order of destruction for resources in the `ReadFromStorage` step of the query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the `Element ... is not a constant expression` error when using a `JSON*` function result in `VALUES`, `LIMIT` or the right side of the `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Prevent the possibility of the error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Significantly reduce memory usage in `AggregatingInOrderTransform`/`optimize_aggregation_in_order`. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* A mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix a bug where the `ILIKE` operator stopped being case-insensitive if `LIKE` with the same pattern was executed (see the sketch after this list). [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
* Fix `Missing columns` errors when selecting columns which are absent in the data but depend on other columns which are also absent in the data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Fix a bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report a proper error when the second argument of the `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fix a bug where queries like `SELECT toStartOfDay(today())` failed, complaining about an empty `time_zone` argument. [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix a race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix a rare race condition on server startup when `system.logs` are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix an MSan report in QueryLog. Uninitialized memory could be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix an instance crash when using `joinGet` with LowCardinality types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix a bug in the `Buffer` table engine which didn't allow inserting data of a new structure into the `Buffer` after an `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust the decimals field size in the MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* Fixed the `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in an Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Fix predicate push-down so it works when a subquery contains the `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
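A minimal sketch of the `ILIKE` fix above; the literals are illustrative assumptions, and the "same pattern" interaction presumably went through a shared compiled-pattern cache:

```sql
SELECT 'Hello' LIKE 'hello'  AS case_sensitive,
       'Hello' ILIKE 'hello' AS case_insensitive;
-- Before the fix, running the LIKE first could make a subsequent ILIKE with
-- the same pattern case-sensitive; the expected result here is 0, 1.
```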
#### Improvement

* Now it's possible to change the type of the version column for `VersionedCollapsingMergeTree` with an `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).

### ClickHouse release v20.9.2.20, 2020-09-22

#### New Feature

@@ -405,6 +510,110 @@
## ClickHouse release 20.8

### ClickHouse release v20.8.6.6-lts, 2020-11-13

#### Bug Fix

* Fix rare silent crashes when the query profiler is on and ClickHouse is installed on an OS with a glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now, when parsing AVRO input, `LowCardinality` is removed from the type. Fixes [#16188](https://github.com/ClickHouse/ClickHouse/issues/16188). [#16521](https://github.com/ClickHouse/ClickHouse/pull/16521) ([Mike](https://github.com/myrrc)).
* Fix rapid growth of metadata when using MySQL Master -> MySQL Slave -> ClickHouse MaterializeMySQL Engine with `slave_parallel_worker` enabled on the MySQL Slave, by properly shrinking GTID sets. This fixes [#15951](https://github.com/ClickHouse/ClickHouse/issues/15951). [#16504](https://github.com/ClickHouse/ClickHouse/pull/16504) ([TCeason](https://github.com/TCeason)).
* Fix `DROP TABLE` for `Distributed` tables (racy with `INSERT`). [#16409](https://github.com/ClickHouse/ClickHouse/pull/16409) ([Azat Khuzhin](https://github.com/azat)).
* Fix processing of very large entries in the replication queue. Very large entries may appear in `ALTER` queries if the table structure is extremely large (near 1 MB). This fixes [#16307](https://github.com/ClickHouse/ClickHouse/issues/16307). [#16332](https://github.com/ClickHouse/ClickHouse/pull/16332) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed inconsistent behaviour where part of the returned data could be dropped because the set for its filtering wasn't created. [#16308](https://github.com/ClickHouse/ClickHouse/pull/16308) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix a bug with the `MySQL` database engine: when the MySQL server used as a database engine is down, some queries raise an exception because they needlessly try to fetch tables from the unavailable server. For example, the query `SELECT ... FROM system.parts` should work only with MergeTree tables and not touch the MySQL database at all. [#16032](https://github.com/ClickHouse/ClickHouse/pull/16032) ([Kruglov Pavel](https://github.com/Avogar)).
### ClickHouse release v20.8.5.45-lts, 2020-10-29

#### Bug Fix

* Fix a double free in case of an exception in the function `dictGet`. It could have happened if the dictionary was loaded with an error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix `GROUP BY` with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async `Distributed` INSERT with `prefer_localhost_replica=0` and `internal_replication`. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix a possible memory leak during `GROUP BY` with string keys, caused by an error in the `TwoLevelStringHashTable` implementation. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix the case when memory can be overallocated regardless of the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* Fix the collate name & charset name parser and support `length = 0` for string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow using the direct layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hangs for 5-10 minutes when a replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from a MaterializedView and concurrently dropping the target table (for the Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix a crash when database creation fails. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with a `Table ... doesn't exist` error when the table is concurrently renamed (for the Atomic database engine). Fixed a rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix an incorrect empty result for a query from a `Distributed` table if the query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix possible deadlocks in RBAC. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix the exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after an `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* Fix some cases of queries in which only virtual columns are selected. Previously the exception `Not found column _nothing in block` might be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fix the error `Cannot find column`, which may happen at insertion into a `MATERIALIZED VIEW` in case the query for the `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a too low default value of the `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve the lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from the lost replica; detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix the error `Cannot add simple transform to empty Pipe`, which happened while reading from a `Buffer` table which has a different structure than its destination table. It was possible if the destination table returned an empty result for the query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a bug with globs in the S3 table function: the region from the URL was not applied to the S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Decrement the `ReadonlyReplica` metric when detaching read-only tables. This fixes [#15598](https://github.com/ClickHouse/ClickHouse/issues/15598). [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Throw an error when a single parameter is passed to `ReplicatedMergeTree` instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).
#### Improvement

* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in the cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `ReplicatedMergeTree` arguments on table creation. [#16159](https://github.com/ClickHouse/ClickHouse/pull/16159) ([tavplubix](https://github.com/tavplubix)).
### ClickHouse release v20.8.4.11-lts, 2020-10-09

#### Bug Fix

* Fix the order of destruction for resources in the `ReadFromStorage` step of the query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed the `Element ... is not a constant expression` error when using a `JSON*` function result in `VALUES`, `LIMIT` or the right side of the `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Prevent the possibility of the error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Significantly reduce memory usage in `AggregatingInOrderTransform`/`optimize_aggregation_in_order`. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* A mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix a bug where the `ILIKE` operator stopped being case-insensitive if `LIKE` with the same pattern was executed. [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
* Fix `Missing columns` errors when selecting columns which are absent in the data but depend on other columns which are also absent in the data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Fix a bug with event subscription in DDLWorker which rarely may lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report a proper error when the second argument of the `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fix a race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix a rare race condition on server startup when `system.logs` are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix an MSan report in QueryLog. Uninitialized memory could be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix an instance crash when using `joinGet` with LowCardinality types. This fixes [#15214](https://github.com/ClickHouse/ClickHouse/issues/15214). [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix a bug in the `Buffer` table engine which didn't allow inserting data of a new structure into the `Buffer` after an `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust the decimals field size in the MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison, which corrects the usage of FixedString as primary keys (see the sketch after this list). This fixes [#14908](https://github.com/ClickHouse/ClickHouse/issues/14908). [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If the function `bar` was called with specifically crafted arguments, a buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in an Atomic database when running clickhouse-server in docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Now the settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
* Fix predicate push-down so it works when a subquery contains the `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes [#14923](https://github.com/ClickHouse/ClickHouse/issues/14923). [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fixed a `.metadata.tmp File exists` error when using the `MaterializeMySQL` database engine. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix a problem where the server may get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix wrong monotonicity detection for shrinking `Int -> Int` casts of signed types. It might lead to an incorrect query result. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
* Fixed the incorrect sorting order of `Nullable` columns. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
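A sketch of the padded comparison mentioned in the FixedString fix above; the values are illustrative assumptions:

```sql
-- FixedString values are zero-padded; padded comparison treats 'a\0\0'
-- and 'a' as equal, which is what makes FixedString primary keys usable.
SELECT toFixedString('a', 3) = 'a' AS equal;
-- expected after the fix: equal = 1
```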
#### Improvement

* Now it's possible to change the type of the version column for `VersionedCollapsingMergeTree` with an `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).
### ClickHouse release v20.8.3.18-stable, 2020-09-18

#### Bug Fix

* Fix an issue where some invocations of the `extractAllGroups` function could trigger a "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix SIGSEGV on an attempt to INSERT into StorageFile(fd). [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
* Fix a rare error in `SELECT` queries when the queried column has a `DEFAULT` expression which depends on another column which also has a `DEFAULT` and is not present in the select query and does not exist on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fixed a missed default database name in the metadata of a materialized view when executing `ALTER ... MODIFY QUERY`. [#14664](https://github.com/ClickHouse/ClickHouse/pull/14664) ([tavplubix](https://github.com/tavplubix)).
* Fix a bug where an `ALTER UPDATE` mutation with a Nullable column in the assignment expression and a constant value (like `UPDATE x = 42`) led to an incorrect value in the column or a segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix a wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
* Added the checker, as neither calling `lc->isNullable()` nor calling `ls->getDictionaryPtr()->isNullable()` would return the correct result. [#14591](https://github.com/ClickHouse/ClickHouse/pull/14591) ([myrrc](https://github.com/myrrc)).
* Clean up the data directory after ZooKeeper exceptions during CreateQuery for the StorageReplicatedMergeTree engine. [#14563](https://github.com/ClickHouse/ClickHouse/pull/14563) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix rare segfaults in functions with the `-Resample` combinator, which could appear as a result of overflow with very large parameters. [#14562](https://github.com/ClickHouse/ClickHouse/pull/14562) ([Anton Popov](https://github.com/CurtizJ)).

#### Improvement

* Speed up the server shutdown process if there are ongoing S3 requests. [#14858](https://github.com/ClickHouse/ClickHouse/pull/14858) ([Pavel Kovalenko](https://github.com/Jokser)).
* Allow using a multi-volume storage configuration in the `Distributed` storage. [#14839](https://github.com/ClickHouse/ClickHouse/pull/14839) ([Pavel Kovalenko](https://github.com/Jokser)).
* Speed up the server shutdown process if there are ongoing S3 requests. [#14496](https://github.com/ClickHouse/ClickHouse/pull/14496) ([Pavel Kovalenko](https://github.com/Jokser)).
* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).
### ClickHouse release v20.8.2.3-stable, 2020-09-08

#### Backward Incompatible Change

@@ -1755,6 +1964,74 @@ No changes compared to v20.4.3.16-stable.
## ClickHouse release v20.3

### ClickHouse release v20.3.21.2-lts, 2020-11-02

#### Bug Fix

* Fix `dictGet` in sharding_key (and similar places, i.e. when the function context is stored permanently). [#16205](https://github.com/ClickHouse/ClickHouse/pull/16205) ([Azat Khuzhin](https://github.com/azat)).
* Fix an incorrect empty result for a query from a `Distributed` table if the query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix missing or excessive headers in `TSV/CSVWithNames` formats (see the sketch after this list). This fixes [#12504](https://github.com/ClickHouse/ClickHouse/issues/12504). [#13343](https://github.com/ClickHouse/ClickHouse/pull/13343) ([Azat Khuzhin](https://github.com/azat)).
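A minimal sketch of the expected `CSVWithNames` output after the header fix; the column names and values are illustrative assumptions:

```sql
SELECT 1 AS x, 2 AS y
FORMAT CSVWithNames;
-- exactly one header row is emitted, followed by the data:
-- "x","y"
-- 1,2
```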
### ClickHouse release v20.3.20.6-lts, 2020-10-09

#### Bug Fix

* A mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15724](https://github.com/ClickHouse/ClickHouse/pull/15724), [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix hangs of queries with a lot of subqueries to the same table of `MySQL` engine. Previously, if there were more than 16 subqueries to the same `MySQL` table in a query, it hung forever. [#15299](https://github.com/ClickHouse/ClickHouse/pull/15299) ([Anton Popov](https://github.com/CurtizJ)).
* Fix 'Unknown identifier' in GROUP BY when the query has a JOIN over a Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix predicate push-down so it works when a subquery contains the `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Concurrent `ALTER ... REPLACE/MOVE PARTITION ...` queries might cause a deadlock. It's fixed. [#13626](https://github.com/ClickHouse/ClickHouse/pull/13626) ([tavplubix](https://github.com/tavplubix)).

### ClickHouse release v20.3.19.4-lts, 2020-09-18

#### Bug Fix

* Fix a rare error in `SELECT` queries when the queried column has a `DEFAULT` expression which depends on another column which also has a `DEFAULT` and is not present in the select query and does not exist on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fix a bug where an `ALTER UPDATE` mutation with a Nullable column in the assignment expression and a constant value (like `UPDATE x = 42`) led to an incorrect value in the column or a segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix a wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).

#### Improvement

* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).
### ClickHouse release v20.3.18.10-lts, 2020-09-08

#### Bug Fix

* Stop query execution if an exception happened in `PipelineExecutor` itself. This could prevent rare possible query hangs. Continuation of [#14334](https://github.com/ClickHouse/ClickHouse/issues/14334). [#14402](https://github.com/ClickHouse/ClickHouse/pull/14402) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed behaviour where the cache dictionary sometimes returned the default value instead of the present value from the source. [#13624](https://github.com/ClickHouse/ClickHouse/pull/13624) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes [#5779](https://github.com/ClickHouse/ClickHouse/issues/5779), [#12527](https://github.com/ClickHouse/ClickHouse/issues/12527). [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix `CAST(Nullable(String), Enum())`. [#12745](https://github.com/ClickHouse/ClickHouse/pull/12745) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a data race in `text_log`. It does not correspond to any real bug. [#9726](https://github.com/ClickHouse/ClickHouse/pull/9726) ([alexey-milovidov](https://github.com/alexey-milovidov)).

#### Improvement

* Fix a wrong error for long queries. It was possible to get a syntax error other than `Max query size exceeded` for a correct query. [#13928](https://github.com/ClickHouse/ClickHouse/pull/13928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Return NULL/zero when a value is not parsed completely in the `parseDateTimeBestEffortOrNull`/`Zero` functions (see the sketch after this list). This fixes [#7876](https://github.com/ClickHouse/ClickHouse/issues/7876). [#11653](https://github.com/ClickHouse/ClickHouse/pull/11653) ([alexey-milovidov](https://github.com/alexey-milovidov)).
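A sketch of the `parseDateTimeBestEffortOrNull` change above; the input string is an illustrative assumption:

```sql
-- The trailing garbage means the value is not parsed completely,
-- so the OrNull variant now returns NULL instead of a partial result.
SELECT parseDateTimeBestEffortOrNull('2020-01-01 garbage') AS d;
-- expected after the fix: d = NULL
```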
#### Performance Improvement

* Slightly optimize very short queries with `LowCardinality`. [#14129](https://github.com/ClickHouse/ClickHouse/pull/14129) ([Anton Popov](https://github.com/CurtizJ)).

#### Build/Testing/Packaging Improvement

* Fix a UBSan report (adding zero to nullptr) in HashTable that appeared after the migration to clang-10. [#10638](https://github.com/ClickHouse/ClickHouse/pull/10638) ([alexey-milovidov](https://github.com/alexey-milovidov)).

### ClickHouse release v20.3.17.173-lts, 2020-08-15

#### Bug Fix

* Fix a crash in JOIN with StorageMerge and `set enable_optimize_predicate_expression=1`. [#13679](https://github.com/ClickHouse/ClickHouse/pull/13679) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix an invalid return type for comparison of tuples with `NULL` elements. Fixes [#12461](https://github.com/ClickHouse/ClickHouse/issues/12461). [#13420](https://github.com/ClickHouse/ClickHouse/pull/13420) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix queries with constant columns and an `ORDER BY` prefix of the primary key. [#13396](https://github.com/ClickHouse/ClickHouse/pull/13396) ([Anton Popov](https://github.com/CurtizJ)).
* Return the passed number for numbers with MSB set in `roundUpToPowerOfTwoOrZero()`. [#13234](https://github.com/ClickHouse/ClickHouse/pull/13234) ([Azat Khuzhin](https://github.com/azat)).

### ClickHouse release v20.3.16.165-lts, 2020-08-10

#### Bug Fix
@@ -1157,6 +1157,7 @@ SELECT arrayCumSum([1, 1, 1, 1]) AS res
┌─res──────────┐
│ [1, 2, 3, 4] │
└──────────────┘
```

## arrayAUC {#arrayauc}
@@ -21,7 +21,7 @@ mkdocs-htmlproofer-plugin==0.0.3
 mkdocs-macros-plugin==0.4.20
 nltk==3.5
 nose==1.3.7
-protobuf==3.13.0
+protobuf==3.14.0
 numpy==1.19.2
 Pygments==2.5.2
 pymdown-extensions==8.0
@@ -329,14 +329,20 @@ int mainEntryClickHouseInstall(int argc, char ** argv)
 
     bool has_password_for_default_user = false;
 
-    if (!fs::exists(main_config_file))
+    if (!fs::exists(config_d))
+    {
+        fmt::print("Creating config directory {} that is used for tweaks of main server configuration.\n", config_d.string());
+        fs::create_directory(config_d);
+    }
+
+    if (!fs::exists(users_d))
+    {
+        fmt::print("Creating config directory {} that is used for tweaks of users configuration.\n", users_d.string());
+        fs::create_directory(users_d);
+    }
+
+    if (!fs::exists(main_config_file))
     {
         std::string_view main_config_content = getResource("config.xml");
         if (main_config_content.empty())
         {
@@ -349,7 +355,30 @@ int mainEntryClickHouseInstall(int argc, char ** argv)
             out.sync();
             out.finalize();
         }
     }
+    else
+    {
+        fmt::print("Config file {} already exists, will keep it and extract path info from it.\n", main_config_file.string());
+
+        ConfigProcessor processor(main_config_file.string(), /* throw_on_bad_incl = */ false, /* log_to_console = */ false);
+        ConfigurationPtr configuration(new Poco::Util::XMLConfiguration(processor.processConfig()));
+
+        if (configuration->has("path"))
+        {
+            data_path = configuration->getString("path");
+            fmt::print("{} has {} as data path.\n", main_config_file.string(), data_path);
+        }
+
+        if (configuration->has("logger.log"))
+        {
+            log_path = fs::path(configuration->getString("logger.log")).remove_filename();
+            fmt::print("{} has {} as log path.\n", main_config_file.string(), log_path);
+        }
+    }
 
     if (!fs::exists(users_config_file))
     {
         std::string_view users_config_content = getResource("users.xml");
         if (users_config_content.empty())
         {
@@ -365,38 +394,17 @@ int mainEntryClickHouseInstall(int argc, char ** argv)
         }
     }
     else
     {
-        {
-            fmt::print("Config file {} already exists, will keep it and extract path info from it.\n", main_config_file.string());
-
-            ConfigProcessor processor(main_config_file.string(), /* throw_on_bad_incl = */ false, /* log_to_console = */ false);
-            ConfigurationPtr configuration(new Poco::Util::XMLConfiguration(processor.processConfig()));
-
-            if (configuration->has("path"))
-            {
-                data_path = configuration->getString("path");
-                fmt::print("{} has {} as data path.\n", main_config_file.string(), data_path);
-            }
-
-            if (configuration->has("logger.log"))
-            {
-                log_path = fs::path(configuration->getString("logger.log")).remove_filename();
-                fmt::print("{} has {} as log path.\n", main_config_file.string(), log_path);
-            }
-        }
+        fmt::print("Users config file {} already exists, will keep it and extract users info from it.\n", users_config_file.string());
 
-        if (fs::exists(users_config_file))
+        /// Check if password for default user already specified.
+        ConfigProcessor processor(users_config_file.string(), /* throw_on_bad_incl = */ false, /* log_to_console = */ false);
+        ConfigurationPtr configuration(new Poco::Util::XMLConfiguration(processor.processConfig()));
+
+        if (!configuration->getString("users.default.password", "").empty()
+            || configuration->getString("users.default.password_sha256_hex", "").empty()
+            || configuration->getString("users.default.password_double_sha1_hex", "").empty())
         {
-            ConfigProcessor processor(users_config_file.string(), /* throw_on_bad_incl = */ false, /* log_to_console = */ false);
-            ConfigurationPtr configuration(new Poco::Util::XMLConfiguration(processor.processConfig()));
-
-            if (!configuration->getString("users.default.password", "").empty()
-                || configuration->getString("users.default.password_sha256_hex", "").empty()
-                || configuration->getString("users.default.password_double_sha1_hex", "").empty())
-            {
-                has_password_for_default_user = true;
-            }
+            has_password_for_default_user = true;
         }
     }
@@ -8,7 +8,7 @@ namespace DB
 {
 
 AggregateFunctionPtr AggregateFunctionCount::getOwnNullAdapter(
-    const AggregateFunctionPtr &, const DataTypes & types, const Array & params) const
+    const AggregateFunctionPtr &, const DataTypes & types, const Array & params, const AggregateFunctionProperties & /*properties*/) const
 {
     return std::make_shared<AggregateFunctionCountNotNullUnary>(types[0], params);
 }
@@ -69,7 +69,7 @@ public:
     }
 
     AggregateFunctionPtr getOwnNullAdapter(
-        const AggregateFunctionPtr &, const DataTypes & types, const Array & params) const override;
+        const AggregateFunctionPtr &, const DataTypes & types, const Array & params, const AggregateFunctionProperties & /*properties*/) const override;
 };
@@ -1,6 +1,7 @@
 #include <AggregateFunctions/AggregateFunctionIf.h>
 #include <AggregateFunctions/AggregateFunctionCombinatorFactory.h>
 #include "registerAggregateFunctions.h"
+#include "AggregateFunctionNull.h"
 
 
 namespace DB
@@ -8,6 +9,7 @@ namespace DB
 
 namespace ErrorCodes
 {
+    extern const int LOGICAL_ERROR;
     extern const int ILLEGAL_TYPE_OF_ARGUMENT;
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
 }
@ -40,6 +42,164 @@ public:
|
||||
}
|
||||
};
|
||||
|
||||
/** There are two cases: for single argument and variadic.
|
||||
* Code for single argument is much more efficient.
|
||||
*/
|
||||
template <bool result_is_nullable, bool serialize_flag>
|
||||
class AggregateFunctionIfNullUnary final
|
||||
: public AggregateFunctionNullBase<result_is_nullable, serialize_flag,
|
||||
AggregateFunctionIfNullUnary<result_is_nullable, serialize_flag>>
|
||||
{
|
||||
private:
|
||||
size_t num_arguments;
|
||||
|
||||
    using Base = AggregateFunctionNullBase<result_is_nullable, serialize_flag,
        AggregateFunctionIfNullUnary<result_is_nullable, serialize_flag>>;

public:
    String getName() const override
    {
        return Base::getName();
    }

    AggregateFunctionIfNullUnary(AggregateFunctionPtr nested_function_, const DataTypes & arguments, const Array & params)
        : Base(std::move(nested_function_), arguments, params), num_arguments(arguments.size())
    {
        if (num_arguments == 0)
            throw Exception("Aggregate function " + getName() + " requires at least one argument",
                ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
    }

    static inline bool singleFilter(const IColumn ** columns, size_t row_num, size_t num_arguments)
    {
        const IColumn * filter_column = columns[num_arguments - 1];
        if (const ColumnNullable * nullable_column = typeid_cast<const ColumnNullable *>(filter_column))
            filter_column = nullable_column->getNestedColumnPtr().get();

        return assert_cast<const ColumnUInt8 &>(*filter_column).getData()[row_num];
    }

    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
    {
        const ColumnNullable * column = assert_cast<const ColumnNullable *>(columns[0]);
        const IColumn * nested_column = &column->getNestedColumn();
        if (!column->isNullAt(row_num) && singleFilter(columns, row_num, num_arguments))
        {
            this->setFlag(place);
            this->nested_function->add(this->nestedPlace(place), &nested_column, row_num, arena);
        }
    }
};


template <bool result_is_nullable, bool serialize_flag, bool null_is_skipped>
class AggregateFunctionIfNullVariadic final
    : public AggregateFunctionNullBase<result_is_nullable, serialize_flag,
        AggregateFunctionIfNullVariadic<result_is_nullable, serialize_flag, null_is_skipped>>
{
public:
    String getName() const override
    {
        return Base::getName();
    }

    AggregateFunctionIfNullVariadic(AggregateFunctionPtr nested_function_, const DataTypes & arguments, const Array & params)
        : Base(std::move(nested_function_), arguments, params), number_of_arguments(arguments.size())
    {
        if (number_of_arguments == 1)
            throw Exception("Logical error: single argument is passed to AggregateFunctionIfNullVariadic", ErrorCodes::LOGICAL_ERROR);

        if (number_of_arguments > MAX_ARGS)
            throw Exception("Maximum number of arguments for aggregate function with Nullable types is " + toString(size_t(MAX_ARGS)),
                ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

        for (size_t i = 0; i < number_of_arguments; ++i)
            is_nullable[i] = arguments[i]->isNullable();
    }

    static inline bool singleFilter(const IColumn ** columns, size_t row_num, size_t num_arguments)
    {
        return assert_cast<const ColumnUInt8 &>(*columns[num_arguments - 1]).getData()[row_num];
    }

    void add(AggregateDataPtr place, const IColumn ** columns, size_t row_num, Arena * arena) const override
    {
        /// This container stores the columns we really pass to the nested function.
        const IColumn * nested_columns[number_of_arguments];

        for (size_t i = 0; i < number_of_arguments; ++i)
        {
            if (is_nullable[i])
            {
                const ColumnNullable & nullable_col = assert_cast<const ColumnNullable &>(*columns[i]);
                if (null_is_skipped && nullable_col.isNullAt(row_num))
                {
                    /// If at least one column has a null value in the current row,
                    /// we don't process this row.
                    return;
                }
                nested_columns[i] = &nullable_col.getNestedColumn();
            }
            else
                nested_columns[i] = columns[i];
        }

        if (singleFilter(nested_columns, row_num, number_of_arguments))
        {
            this->setFlag(place);
            this->nested_function->add(this->nestedPlace(place), nested_columns, row_num, arena);
        }
    }

private:
    using Base = AggregateFunctionNullBase<result_is_nullable, serialize_flag,
        AggregateFunctionIfNullVariadic<result_is_nullable, serialize_flag, null_is_skipped>>;

    enum { MAX_ARGS = 8 };
    size_t number_of_arguments = 0;
    std::array<char, MAX_ARGS> is_nullable;    /// A plain array is better than std::vector due to one less indirection.
};


AggregateFunctionPtr AggregateFunctionIf::getOwnNullAdapter(
    const AggregateFunctionPtr & nested_function, const DataTypes & arguments,
    const Array & params, const AggregateFunctionProperties & properties) const
{
    bool return_type_is_nullable = !properties.returns_default_when_only_null && getReturnType()->canBeInsideNullable();
    size_t nullable_size = std::count_if(arguments.begin(), arguments.end(), [](const auto & element) { return element->isNullable(); });
    return_type_is_nullable &= nullable_size != 1 || !arguments.back()->isNullable();   /// If only the condition is nullable, we should use a non-nullable return type.
    bool serialize_flag = return_type_is_nullable || properties.returns_default_when_only_null;

    if (arguments.size() <= 2 && arguments.front()->isNullable())
    {
        if (return_type_is_nullable)
        {
            return std::make_shared<AggregateFunctionIfNullUnary<true, true>>(nested_func, arguments, params);
        }
        else
        {
            if (serialize_flag)
                return std::make_shared<AggregateFunctionIfNullUnary<false, true>>(nested_func, arguments, params);
            else
                return std::make_shared<AggregateFunctionIfNullUnary<false, false>>(nested_func, arguments, params);
        }
    }
    else
    {
        if (return_type_is_nullable)
        {
            return std::make_shared<AggregateFunctionIfNullVariadic<true, true, true>>(nested_function, arguments, params);
        }
        else
        {
            if (serialize_flag)
                return std::make_shared<AggregateFunctionIfNullVariadic<false, true, true>>(nested_function, arguments, params);
            else
                return std::make_shared<AggregateFunctionIfNullVariadic<false, false, true>>(nested_function, arguments, params);
        }
    }
}
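To make the branch above concrete, here is a minimal SQL sketch mirroring the tests added by this commit: when only the condition of an `-If` combinator is Nullable, the adapter keeps the nested function's non-nullable return type, while a Nullable value keeps a Nullable result.

```sql
-- Value is Nullable: the result type stays Nullable(UInt8).
SELECT anyIf(CAST(number, 'Nullable(UInt8)'), number = 3) AS a, toTypeName(a) FROM numbers(2);

-- Only the condition is Nullable: the result type is plain UInt8.
SELECT anyIf(CAST(number, 'UInt8'), number = 3) AS a, toTypeName(a)
FROM (SELECT CAST(number, 'Nullable(UInt8)') AS number FROM numbers(2));
```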

void registerAggregateFunctionCombinatorIf(AggregateFunctionCombinatorFactory & factory)
{
    factory.registerCombinator(std::make_shared<AggregateFunctionCombinatorIf>());
@@ -109,6 +109,10 @@ public:
     {
         return nested_func->isState();
     }
 
+    AggregateFunctionPtr getOwnNullAdapter(
+        const AggregateFunctionPtr & nested_function, const DataTypes & arguments,
+        const Array & params, const AggregateFunctionProperties & properties) const override;
 };
 
 }
@@ -72,7 +72,7 @@ public:
 
         assert(nested_function);
 
-        if (auto adapter = nested_function->getOwnNullAdapter(nested_function, arguments, params))
+        if (auto adapter = nested_function->getOwnNullAdapter(nested_function, arguments, params, properties))
             return adapter;
 
         /// If applied to an aggregate function with the -State combinator, we apply the -Null combinator to its nested_function instead of itself.
@@ -239,7 +239,8 @@ public:
     }
 
     AggregateFunctionPtr getOwnNullAdapter(
-        const AggregateFunctionPtr & nested_function, const DataTypes & arguments, const Array & params) const override
+        const AggregateFunctionPtr & nested_function, const DataTypes & arguments, const Array & params,
+        const AggregateFunctionProperties & /*properties*/) const override
     {
         return std::make_shared<AggregateFunctionNullVariadic<false, false, false>>(nested_function, arguments, params);
     }
@@ -33,6 +33,7 @@ using ConstAggregateDataPtr = const char *;
 
 class IAggregateFunction;
 using AggregateFunctionPtr = std::shared_ptr<IAggregateFunction>;
+struct AggregateFunctionProperties;
 
 /** Aggregate functions interface.
   * Instances of classes with this interface do not contain the data itself for aggregation,
@@ -185,7 +186,8 @@ public:
       * arguments and params are for nested_function.
       */
     virtual AggregateFunctionPtr getOwnNullAdapter(
-        const AggregateFunctionPtr & /*nested_function*/, const DataTypes & /*arguments*/, const Array & /*params*/) const
+        const AggregateFunctionPtr & /*nested_function*/, const DataTypes & /*arguments*/,
+        const Array & /*params*/, const AggregateFunctionProperties & /*properties*/) const
     {
         return nullptr;
     }
@@ -73,6 +73,11 @@ void Connection::connect(const ConnectionTimeouts & timeouts)
 {
 #if USE_SSL
             socket = std::make_unique<Poco::Net::SecureStreamSocket>();
+
+            /// We resolve the IP when we open the SecureStreamSocket, so to make Server Name Indication (SNI)
+            /// work we need to pass the host name separately. It will be sent in the TLS Hello packet to let
+            /// the server know which host we want to talk to (a single IP can serve multiple hosts using SNI).
+            static_cast<Poco::Net::SecureStreamSocket*>(socket.get())->setPeerHostName(host);
 #else
             throw Exception{"tcp_secure protocol is disabled because poco library was built without NetSSL support.", ErrorCodes::SUPPORT_IS_DISABLED};
 #endif
@@ -519,9 +519,11 @@
     M(550, CONDITIONAL_TREE_PARENT_NOT_FOUND) \
     M(551, ILLEGAL_PROJECTION_MANIPULATOR) \
     M(552, UNRECOGNIZED_ARGUMENTS) \
-    M(553, ROCKSDB_ERROR) \
+    M(553, LZMA_STREAM_ENCODER_FAILED) \
+    M(554, LZMA_STREAM_DECODER_FAILED) \
+    M(555, ROCKSDB_ERROR) \
+    M(556, SYNC_MYSQL_USER_ACCESS_ERROR)\
     \
     M(999, KEEPER_EXCEPTION) \
     M(1000, POCO_EXCEPTION) \
     M(1001, STD_EXCEPTION) \
@@ -11,6 +11,7 @@
 #include <cstdint>
+#include <cassert>
 #include <type_traits>
 #include <memory>
 
 #include <ext/bit_cast.h>
 #include <common/extended_types.h>
@@ -70,7 +70,7 @@
 /// Minimum revision supporting OpenTelemetry
 #define DBMS_MIN_REVISION_WITH_OPENTELEMETRY 54442
 
-/// Mininum revision supporting interserver secret.
+/// Minimum revision supporting interserver secret.
 #define DBMS_MIN_REVISION_WITH_INTERSERVER_SECRET 54441
 
 /// Version of ClickHouse TCP protocol. Increment it manually when you change the protocol.
@@ -10,6 +10,7 @@
 #include <Common/CurrentThread.h>
 #include <Common/setThreadName.h>
 #include <Common/ThreadPool.h>
+#include <Common/checkStackSize.h>
 #include <Storages/MergeTree/ReplicatedMergeTreeBlockOutputStream.h>
 #include <Storages/StorageValues.h>
 #include <Storages/LiveView/StorageLiveView.h>
@@ -29,6 +30,8 @@ PushingToViewsBlockOutputStream::PushingToViewsBlockOutputStream(
     , context(context_)
     , query_ptr(query_ptr_)
 {
+    checkStackSize();
+
     /** TODO This is a very important line. At any insertion into the table one of streams should own lock.
       * Although now any insertion into the table is done via PushingToViewsBlockOutputStream,
       * but it's clear that here is not the best place for this functionality.
@@ -12,6 +12,7 @@
 #include <Common/quoteString.h>
 #include <IO/ReadHelpers.h>
 #include <IO/WriteHelpers.h>
+#include <IO/Operators.h>
 
 namespace DB
 {
@@ -19,6 +20,7 @@ namespace DB
 namespace ErrorCodes
 {
     extern const int LOGICAL_ERROR;
+    extern const int SYNC_MYSQL_USER_ACCESS_ERROR;
 }
 
 static std::unordered_map<String, String> fetchTablesCreateQuery(
@@ -64,6 +66,7 @@ static std::vector<String> fetchTablesInDB(const mysqlxx::PoolWithFailover::Entry & connection, const std::string & database)
 
     return tables_in_db;
 }
 
 void MaterializeMetadata::fetchMasterStatus(mysqlxx::PoolWithFailover::Entry & connection)
 {
     Block header{
@@ -105,6 +108,49 @@ static Block getShowMasterLogHeader(const String & mysql_version)
     };
 }
 
+static bool checkSyncUserPrivImpl(mysqlxx::PoolWithFailover::Entry & connection, WriteBuffer & out)
+{
+    Block sync_user_privs_header
+    {
+        {std::make_shared<DataTypeString>(), "current_user_grants"}
+    };
+
+    String grants_query, sub_privs;
+    MySQLBlockInputStream input(connection, "SHOW GRANTS FOR CURRENT_USER();", sync_user_privs_header, DEFAULT_BLOCK_SIZE);
+    while (Block block = input.read())
+    {
+        for (size_t index = 0; index < block.rows(); ++index)
+        {
+            grants_query = (*block.getByPosition(0).column)[index].safeGet<String>();
+            out << grants_query << "; ";
+            sub_privs = grants_query.substr(0, grants_query.find(" ON "));
+            if (sub_privs.find("ALL PRIVILEGES") == std::string::npos)
+            {
+                if ((sub_privs.find("RELOAD") != std::string::npos and
+                    sub_privs.find("REPLICATION SLAVE") != std::string::npos and
+                    sub_privs.find("REPLICATION CLIENT") != std::string::npos))
+                    return true;
+            }
+            else
+            {
+                return true;
+            }
+        }
+    }
+    return false;
+}
+
+static void checkSyncUserPriv(mysqlxx::PoolWithFailover::Entry & connection)
+{
+    WriteBufferFromOwnString out;
+
+    if (!checkSyncUserPrivImpl(connection, out))
+        throw Exception("MySQL SYNC USER ACCESS ERR: mysql sync user needs "
+                        "at least GLOBAL PRIVILEGES: 'RELOAD, REPLICATION SLAVE, REPLICATION CLIENT' "
+                        "and SELECT PRIVILEGE on MySQL Database. "
+                        "But the SYNC USER grant query is: " + out.str(), ErrorCodes::SYNC_MYSQL_USER_ACCESS_ERROR);
+}
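As a rough sketch of what satisfies this check, the statements below grant exactly the privileges the error message demands; the user and database names are placeholders, not anything defined by this commit.

```sql
-- Executed on the MySQL side; 'sync_user' and 'source_db' are hypothetical names.
CREATE USER 'sync_user'@'%' IDENTIFIED BY '***';
GRANT RELOAD, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'sync_user'@'%';
GRANT SELECT ON source_db.* TO 'sync_user'@'%';
```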
 
 bool MaterializeMetadata::checkBinlogFileExists(mysqlxx::PoolWithFailover::Entry & connection, const String & mysql_version) const
 {
     MySQLBlockInputStream input(connection, "SHOW MASTER LOGS", getShowMasterLogHeader(mysql_version), DEFAULT_BLOCK_SIZE);
@@ -167,6 +213,8 @@ MaterializeMetadata::MaterializeMetadata(
     const String & database, bool & opened_transaction, const String & mysql_version)
     : persistent_path(path_)
 {
+    checkSyncUserPriv(connection);
+
     if (Poco::File(persistent_path).exists())
     {
         ReadBufferFromFile in(persistent_path, DBMS_DEFAULT_BUFFER_SIZE);
@@ -5,7 +5,6 @@
 #if USE_MYSQL
 
 #include <Databases/MySQL/MaterializeMySQLSyncThread.h>
-
 #    include <cstdlib>
 #    include <random>
 #    include <Columns/ColumnTuple.h>
@@ -34,6 +33,8 @@ namespace ErrorCodes
     extern const int LOGICAL_ERROR;
     extern const int NOT_IMPLEMENTED;
     extern const int ILLEGAL_MYSQL_VARIABLE;
+    extern const int SYNC_MYSQL_USER_ACCESS_ERROR;
+    extern const int UNKNOWN_DATABASE;
 }
 
 static constexpr auto MYSQL_BACKGROUND_THREAD_NAME = "MySQLDBSync";
@@ -214,10 +215,33 @@ void MaterializeMySQLSyncThread::stopSynchronization()
 
 void MaterializeMySQLSyncThread::startSynchronization()
 {
-    const auto & mysql_server_version = checkVariableAndGetVersion(pool.get());
+    try
+    {
+        const auto & mysql_server_version = checkVariableAndGetVersion(pool.get());
 
-    background_thread_pool = std::make_unique<ThreadFromGlobalPool>(
-        [this, mysql_server_version = mysql_server_version]() { synchronization(mysql_server_version); });
+        background_thread_pool = std::make_unique<ThreadFromGlobalPool>(
+            [this, mysql_server_version = mysql_server_version]() { synchronization(mysql_server_version); });
+    }
+    catch (...)
+    {
+        try
+        {
+            throw;
+        }
+        catch (mysqlxx::ConnectionFailed & e)
+        {
+            if (e.errnum() == ER_ACCESS_DENIED_ERROR
+                || e.errnum() == ER_DBACCESS_DENIED_ERROR)
+                throw Exception("MySQL SYNC USER ACCESS ERR: mysql sync user needs "
+                                "at least GLOBAL PRIVILEGES: 'RELOAD, REPLICATION SLAVE, REPLICATION CLIENT' "
+                                "and SELECT PRIVILEGE on Database " + mysql_database_name
+                    , ErrorCodes::SYNC_MYSQL_USER_ACCESS_ERROR);
+            else if (e.errnum() == ER_BAD_DB_ERROR)
+                throw Exception("Unknown database '" + mysql_database_name + "' on MySQL", ErrorCodes::UNKNOWN_DATABASE);
+            else
+                throw;
+        }
+    }
 }
 
 static inline void cleanOutdatedTables(const String & database_name, const Context & context)
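For context, a sketch of the statement that reaches this code path; the host, database, and credentials are placeholders, and a working MySQL replication setup is assumed.

```sql
-- Access problems on the MySQL side now surface as SYNC_MYSQL_USER_ACCESS_ERROR,
-- and a missing source database as UNKNOWN_DATABASE.
CREATE DATABASE mysql_replica
ENGINE = MaterializeMySQL('mysql-host:3306', 'source_db', 'sync_user', '***');
```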
@@ -20,6 +20,7 @@
 #    include <mysqlxx/Pool.h>
 #    include <mysqlxx/PoolWithFailover.h>
 
 
 namespace DB
 {
@@ -63,6 +64,12 @@ private:
     MaterializeMySQLSettings * settings;
     String query_prefix;
 
+    // MySQL server error codes, see:
+    // https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html
+    const int ER_ACCESS_DENIED_ERROR = 1045;
+    const int ER_DBACCESS_DENIED_ERROR = 1044;
+    const int ER_BAD_DB_ERROR = 1049;
+
     struct Buffers
     {
         String database;
@@ -1052,7 +1052,7 @@ SetPtr ActionsMatcher::makeSet(const ASTFunction & node, Data & data, bool no_subqueries)
       * - this function shows the expression IN_data1.
       *
       * In case that we have HAVING with IN subquery, we have to force creating set for it.
-      * Also it doesn't make sence if it is GLOBAL IN or ordinary IN.
+      * Also it doesn't make sense if it is GLOBAL IN or ordinary IN.
       */
     if (!subquery_for_set.source && data.create_source_for_in)
     {
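A minimal query of the shape the comment above describes, included purely as an illustration:

```sql
-- HAVING with an IN subquery: the set for the subquery must be created up front.
SELECT number, count()
FROM numbers(10)
GROUP BY number
HAVING number IN (SELECT number FROM numbers(5));
```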
@@ -104,7 +104,7 @@ public:
         KeepAggregateFunctionVisitor(keep_data).visit(function_node->arguments);
 
         /// Place the argument of an aggregate function instead of the function itself
-        if (!keep_aggregator)
+        if (!keep_aggregator && !function_node->arguments->children.empty())
         {
             String alias = function_node->alias;
             ast = (function_node->arguments->children[0])->clone();
@@ -167,7 +167,7 @@ void DatabaseCatalog::shutdownImpl()
     std::lock_guard lock(databases_mutex);
     assert(std::find_if(uuid_map.begin(), uuid_map.end(), [](const auto & elem)
     {
-        /// Ensure that all UUID mappings are emtpy (i.e. all mappings contain nullptr instead of a pointer to storage)
+        /// Ensure that all UUID mappings are empty (i.e. all mappings contain nullptr instead of a pointer to storage)
         const auto & not_empty_mapping = [] (const auto & mapping)
         {
             auto & table = mapping.second.second;
@@ -131,8 +131,13 @@ public:
         data.reject();
     }
 
-    static bool needChildVisit(const ASTPtr &, const ASTPtr &)
+    static bool needChildVisit(const ASTPtr & parent, const ASTPtr &)
     {
+        /// Currently we check monotonicity only for single-argument functions,
+        /// although multi-argument functions with all but one argument constant can also be monotonic.
+        if (const auto * func = typeid_cast<const ASTFunction *>(parent.get()))
+            return func->arguments->children.size() < 2;
+
         return true;
     }
 };
@@ -259,7 +259,7 @@ static void onExceptionBeforeStart(const String & query_for_logging, Context & context)
         span.finish_time_us = current_time_us;
         span.duration_ns = 0;
 
-        // keep values synchonized to type enum in QueryLogElement::createBlock
+        /// Keep values synchronized to type enum in QueryLogElement::createBlock.
         span.attribute_names.push_back("clickhouse.query_status");
         span.attribute_values.push_back("ExceptionBeforeStart");
@@ -697,7 +697,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl(
             span.finish_time_us = time_in_microseconds(finish_time);
             span.duration_ns = elapsed_seconds * 1000000000;
 
-            // keep values synchonized to type enum in QueryLogElement::createBlock
+            /// Keep values synchronized to type enum in QueryLogElement::createBlock.
             span.attribute_names.push_back("clickhouse.query_status");
             span.attribute_values.push_back("QueryFinish");
@@ -644,11 +644,21 @@ private:
         request.setHost(url.getHost());
 
         auto session = makePooledHTTPSession(url, timeouts, 1);
-        session->sendRequest(request);
-
-        Poco::Net::HTTPResponse response;
-        auto * response_body = receiveResponse(*session, request, response, false);
+        std::istream * response_body{};
+        try
+        {
+            session->sendRequest(request);
+
+            Poco::Net::HTTPResponse response;
+            response_body = receiveResponse(*session, request, response, false);
+        }
+        catch (const Poco::Exception & e)
+        {
+            /// We store the exception text in the session data storage;
+            /// depending on it we can decide whether to reconnect the session or re-resolve the session host.
+            session->attachSessionData(e.message());
+            throw;
+        }
         Poco::JSON::Parser parser;
         auto json_body = parser.parse(*response_body).extract<Poco::JSON::Object::Ptr>();
         auto schema = json_body->getValue<std::string>("schema");
@@ -115,7 +115,7 @@ try
         }
     });
     /// We've scheduled a task in the background pool, and when it finishes we will be triggered again. But this task can be
-    /// extremely long and we may have a lot of other small tasks to do, so we schedule ourselfs here.
+    /// extremely long and we may have a lot of other small tasks to do, so we schedule ourselves here.
     scheduleTask(true);
 }
 catch (...)
@@ -22,7 +22,7 @@ struct BackgroundTaskSchedulingSettings
 
     double task_sleep_seconds_when_no_work_random_part = 1.0;
 
-    /// deprected settings, don't affect background execution
+    /// Deprecated settings, don't affect background execution
     double thread_sleep_seconds = 10;
     double task_sleep_seconds_when_no_work_min = 10;
 };
@@ -1217,6 +1217,9 @@ void MergeTreeData::clearOldWriteAheadLogs()
 
 void MergeTreeData::clearEmptyParts()
 {
+    if (!getSettings()->remove_empty_parts)
+        return;
+
     auto parts = getDataPartsVector();
     for (const auto & part : parts)
     {
@@ -2662,7 +2665,7 @@ void MergeTreeData::checkPartCanBeDropped(const ASTPtr & part_ast)
     String part_name = part_ast->as<ASTLiteral &>().value.safeGet<String>();
     auto part = getPartIfExists(part_name, {MergeTreeDataPartState::Committed});
     if (!part)
-        throw Exception(ErrorCodes::NO_SUCH_DATA_PART, "No part {} in commited state", part_name);
+        throw Exception(ErrorCodes::NO_SUCH_DATA_PART, "No part {} in committed state", part_name);
 
     auto table_id = getStorageID();
     global_context.checkPartitionCanBeDropped(table_id.database_name, table_id.table_name, part->getBytesOnDisk());
@@ -345,7 +345,7 @@ static bool indexOfCanUseBloomFilter(const ASTPtr & parent)
     if (function->arguments->children.size() != 2)
         return false;
 
-    /// We don't allow constant expressions like `indexOf(arr, x) = 1 + 0` but it's neglible.
+    /// We don't allow constant expressions like `indexOf(arr, x) = 1 + 0` but it's negligible.
 
     /// We should return true when the corresponding expression implies that the array contains the element.
     /// Example: when `indexOf(arr, x)` > 10 is written, it means that arr definitely should contain the element
@@ -635,30 +635,34 @@ MergeTreeRangeReader::ReadResult MergeTreeRangeReader::read(size_t max_rows, MarkRanges & ranges)
             merge_tree_reader->fillMissingColumns(columns, should_evaluate_missing_defaults, num_rows);
         }
 
-        if (!columns.empty() && should_evaluate_missing_defaults)
-        {
-            auto block = prev_reader->sample_block.cloneWithColumns(read_result.columns);
-            auto block_before_prewhere = read_result.block_before_prewhere;
-            for (auto & ctn : block)
-            {
-                if (block_before_prewhere.has(ctn.name))
-                    block_before_prewhere.erase(ctn.name);
-            }
-
-            if (block_before_prewhere)
-            {
-                if (read_result.need_filter)
-                {
-                    auto old_columns = block_before_prewhere.getColumns();
-                    filterColumns(old_columns, read_result.getFilterOriginal()->getData());
-                    block_before_prewhere.setColumns(std::move(old_columns));
-                }
-
-                for (auto && ctn : block_before_prewhere)
-                    block.insert(std::move(ctn));
-            }
-
-            merge_tree_reader->evaluateMissingDefaults(block, columns);
-        }
+        if (!columns.empty())
+        {
+            /// If some columns are absent in the part, then evaluate default values
+            if (should_evaluate_missing_defaults)
+            {
+                auto block = prev_reader->sample_block.cloneWithColumns(read_result.columns);
+                auto block_before_prewhere = read_result.block_before_prewhere;
+                for (auto & ctn : block)
+                {
+                    if (block_before_prewhere.has(ctn.name))
+                        block_before_prewhere.erase(ctn.name);
+                }
+
+                if (block_before_prewhere)
+                {
+                    if (read_result.need_filter)
+                    {
+                        auto old_columns = block_before_prewhere.getColumns();
+                        filterColumns(old_columns, read_result.getFilterOriginal()->getData());
+                        block_before_prewhere.setColumns(std::move(old_columns));
+                    }
+
+                    for (auto && ctn : block_before_prewhere)
+                        block.insert(std::move(ctn));
+                }
+                merge_tree_reader->evaluateMissingDefaults(block, columns);
+            }
+            /// If columns are not empty, then apply on-the-fly alter conversions if any are required
+            merge_tree_reader->performRequiredConversions(columns);
+        }
@@ -677,9 +681,11 @@ MergeTreeRangeReader::ReadResult MergeTreeRangeReader::read(size_t max_rows, MarkRanges & ranges)
             merge_tree_reader->fillMissingColumns(read_result.columns, should_evaluate_missing_defaults,
                                                   read_result.num_rows);
 
+            /// If some columns are absent in the part, then evaluate default values
             if (should_evaluate_missing_defaults)
                 merge_tree_reader->evaluateMissingDefaults({}, read_result.columns);
 
+            /// If the result is not empty, then apply on-the-fly alter conversions if any are required
             merge_tree_reader->performRequiredConversions(read_result.columns);
         }
         else
@@ -105,6 +105,7 @@ struct Settings;
     M(UInt64, concurrent_part_removal_threshold, 100, "Activate concurrent part removal (see 'max_part_removal_threads') only if the number of inactive data parts is at least this.", 0) \
     M(String, storage_policy, "default", "Name of storage disk policy", 0) \
     M(Bool, allow_nullable_key, false, "Allow Nullable types as primary keys.", 0) \
+    M(Bool, remove_empty_parts, true, "Remove empty parts after they were pruned by TTL, mutation, or collapsing merge algorithm", 0) \
     \
     /** Settings for testing purposes */ \
    M(Bool, randomize_part_type, false, "For testing purposes only. Randomizes part type between wide and compact", 0) \
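The regression tests further below pin the old behaviour by disabling this new setting per table; sketched here with a placeholder table name:

```sql
-- Empty parts pruned by TTL are kept, as before this change.
CREATE TABLE ttl_example (d Date, a Int)
ENGINE = MergeTree ORDER BY a
TTL d + INTERVAL 1 DAY
SETTINGS remove_empty_parts = 0;
```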
@@ -21,6 +21,7 @@
 #include <Storages/SelectQueryDescription.h>
 
 #include <Common/typeid_cast.h>
+#include <Common/checkStackSize.h>
 #include <Processors/Sources/SourceFromInputStream.h>
 #include <Processors/QueryPlan/SettingQuotaAndLimitsStep.h>
@@ -30,6 +31,7 @@ namespace DB
 
 namespace ErrorCodes
 {
+    extern const int BAD_ARGUMENTS;
     extern const int NOT_IMPLEMENTED;
     extern const int INCORRECT_QUERY;
     extern const int QUERY_IS_NOT_SUPPORTED_IN_MATERIALIZED_VIEW;
@@ -72,7 +74,11 @@ StorageMaterializedView::StorageMaterializedView(
     setInMemoryMetadata(storage_metadata);
 
     if (!has_inner_table)
+    {
+        if (query.to_table_id.database_name == table_id_.database_name && query.to_table_id.table_name == table_id_.table_name)
+            throw Exception(ErrorCodes::BAD_ARGUMENTS, "Materialized view {} cannot point to itself", table_id_.getFullTableName());
         target_table_id = query.to_table_id;
+    }
     else if (attach_)
     {
         /// If there is an ATTACH request, then the internal table must already be created.
@@ -351,11 +357,13 @@ void StorageMaterializedView::shutdown()
 
 StoragePtr StorageMaterializedView::getTargetTable() const
 {
+    checkStackSize();
     return DatabaseCatalog::instance().getTable(target_table_id, global_context);
 }
 
 StoragePtr StorageMaterializedView::tryGetTargetTable() const
 {
+    checkStackSize();
     return DatabaseCatalog::instance().tryGetTable(target_table_id, global_context);
 }
@@ -2,8 +2,10 @@ import time
 
 import pymysql.cursors
 
+import pytest
+from helpers.client import QueryRuntimeException
 
-def check_query(clickhouse_node, query, result_set, retry_count=3, interval_seconds=3):
+def check_query(clickhouse_node, query, result_set, retry_count=60, interval_seconds=3):
     lastest_result = ''
     for index in range(retry_count):
         lastest_result = clickhouse_node.query(query)
@@ -18,6 +20,8 @@ def check_query(clickhouse_node, query, result_set, retry_count=3, interval_seconds=3):
 
 
 def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
     # existed before the mapping was created
@@ -100,6 +104,8 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
 
 
 def materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
     mysql_node.query("CREATE TABLE test_database.test_table_1 (`key` INT NOT NULL PRIMARY KEY, _datetime DateTime(6), _timestamp TIMESTAMP(3), _decimal DECIMAL(65, 30)) ENGINE = InnoDB;")
     mysql_node.query("INSERT INTO test_database.test_table_1 VALUES(1, '2020-01-01 01:02:03.999999', '2020-01-01 01:02:03.999', " + ('9' * 35) + "." + ('9' * 30) + ")")
@@ -121,6 +127,7 @@ def materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, mysql_node, service_name):
     mysql_node.query("INSERT INTO test_database.test_table_2 VALUES(2, '2020-01-01 01:02:03.000000', '2020-01-01 01:02:03.000', ." + ('0' * 29) + "1)")
     mysql_node.query("INSERT INTO test_database.test_table_2 VALUES(3, '2020-01-01 01:02:03.9999', '2020-01-01 01:02:03.99', -" + ('9' * 35) + "." + ('9' * 30) + ")")
     mysql_node.query("INSERT INTO test_database.test_table_2 VALUES(4, '2020-01-01 01:02:03.9999', '2020-01-01 01:02:03.9999', -." + ('0' * 29) + "1)")
+    check_query(clickhouse_node, "SHOW TABLES FROM test_database FORMAT TSV", "test_table_1\ntest_table_2\n")
     check_query(clickhouse_node, "SELECT * FROM test_database.test_table_2 ORDER BY key FORMAT TSV",
                 "1\t2020-01-01 01:02:03.999999\t2020-01-01 01:02:03.999\t" + ('9' * 35) + "." + ('9' * 30) + "\n"
                 "2\t2020-01-01 01:02:03.000000\t2020-01-01 01:02:03.000\t0." + ('0' * 29) + "1\n"
@@ -132,6 +139,8 @@ def materialize_mysql_database_with_datetime_and_decimal(clickhouse_node, mysql_node, service_name):
 
 
 def drop_table_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
     mysql_node.query("CREATE TABLE test_database.test_table_1 (id INT NOT NULL PRIMARY KEY) ENGINE = InnoDB;")
@@ -164,8 +173,9 @@ def drop_table_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     clickhouse_node.query("DROP DATABASE test_database")
     mysql_node.query("DROP DATABASE test_database")
 
 
 def create_table_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
     # existed before the mapping was created
     mysql_node.query("CREATE TABLE test_database.test_table_1 (id INT NOT NULL PRIMARY KEY) ENGINE = InnoDB;")
@@ -194,6 +204,8 @@ def create_table_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
 
 
 def rename_table_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
     mysql_node.query("CREATE TABLE test_database.test_table_1 (id INT NOT NULL PRIMARY KEY) ENGINE = InnoDB;")
@@ -214,6 +226,8 @@ def rename_table_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
 
 
 def alter_add_column_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
     mysql_node.query("CREATE TABLE test_database.test_table_1 (id INT NOT NULL PRIMARY KEY) ENGINE = InnoDB;")
@@ -255,6 +269,8 @@ def alter_add_column_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
 
 
 def alter_drop_column_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
     mysql_node.query(
         "CREATE TABLE test_database.test_table_1 (id INT NOT NULL PRIMARY KEY, drop_column INT) ENGINE = InnoDB;")
@@ -287,6 +303,8 @@ def alter_drop_column_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
 
 
 def alter_rename_column_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
 
     # maybe should test rename primary key?
@@ -322,6 +340,8 @@ def alter_rename_column_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
 
 
 def alter_modify_column_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
 
     # maybe should test rename primary key?
@@ -366,6 +386,8 @@ def alter_modify_column_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     # pass
 
 def alter_rename_table_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database DEFAULT CHARACTER SET 'utf8'")
     mysql_node.query(
         "CREATE TABLE test_database.test_table_1 (id INT NOT NULL PRIMARY KEY, drop_column INT) ENGINE = InnoDB;")
@@ -401,6 +423,8 @@ def alter_rename_table_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
 
 
 def query_event_with_empty_transaction(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS test_database")
+    clickhouse_node.query("DROP DATABASE IF EXISTS test_database")
     mysql_node.query("CREATE DATABASE test_database")
 
     mysql_node.query("RESET MASTER")
@@ -433,6 +457,8 @@ def query_event_with_empty_transaction(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE test_database")
 
 def select_without_columns(clickhouse_node, mysql_node, service_name):
     mysql_node.query("DROP DATABASE IF EXISTS db")
+    clickhouse_node.query("DROP DATABASE IF EXISTS db")
     mysql_node.query("CREATE DATABASE db")
     mysql_node.query("CREATE TABLE db.t (a INT PRIMARY KEY, b INT)")
     clickhouse_node.query(
@@ -461,3 +487,51 @@ def select_without_columns(clickhouse_node, mysql_node, service_name):
     clickhouse_node.query("DROP VIEW v")
     clickhouse_node.query("DROP DATABASE db")
     mysql_node.query("DROP DATABASE db")
+
+
+def err_sync_user_privs_with_materialize_mysql_database(clickhouse_node, mysql_node, service_name):
+    clickhouse_node.query("DROP DATABASE IF EXISTS priv_err_db")
+    mysql_node.query("DROP DATABASE IF EXISTS priv_err_db")
+    mysql_node.query("CREATE DATABASE priv_err_db DEFAULT CHARACTER SET 'utf8'")
+    mysql_node.query("CREATE TABLE priv_err_db.test_table_1 (id INT NOT NULL PRIMARY KEY) ENGINE = InnoDB;")
+    mysql_node.query("INSERT INTO priv_err_db.test_table_1 VALUES(1);")
+
+    mysql_node.result("SHOW GRANTS FOR 'test'@'%';")
+
+    clickhouse_node.query(
+        "CREATE DATABASE priv_err_db ENGINE = MaterializeMySQL('{}:3306', 'priv_err_db', 'test', '123')".format(
+            service_name))
+    # wait until MaterializeMySQL has read the binlog events
+    check_query(clickhouse_node, "SHOW TABLES FROM priv_err_db FORMAT TSV;", "test_table_1\n")
+    check_query(clickhouse_node, "SELECT count() FROM priv_err_db.test_table_1 FORMAT TSV", "1\n", 30, 5)
+    mysql_node.query("INSERT INTO priv_err_db.test_table_1 VALUES(2);")
+    check_query(clickhouse_node, "SELECT count() FROM priv_err_db.test_table_1 FORMAT TSV", "2\n")
+    clickhouse_node.query("DROP DATABASE priv_err_db;")
+
+    mysql_node.query("REVOKE REPLICATION SLAVE ON *.* FROM 'test'@'%'")
+    clickhouse_node.query(
+        "CREATE DATABASE priv_err_db ENGINE = MaterializeMySQL('{}:3306', 'priv_err_db', 'test', '123')".format(
+            service_name))
+    assert "priv_err_db" in clickhouse_node.query("SHOW DATABASES")
+    assert "test_table_1" not in clickhouse_node.query("SHOW TABLES FROM priv_err_db")
+    clickhouse_node.query("DROP DATABASE priv_err_db")
+
+    mysql_node.query("REVOKE REPLICATION CLIENT, RELOAD ON *.* FROM 'test'@'%'")
+    clickhouse_node.query(
+        "CREATE DATABASE priv_err_db ENGINE = MaterializeMySQL('{}:3306', 'priv_err_db', 'test', '123')".format(
+            service_name))
+    assert "priv_err_db" in clickhouse_node.query("SHOW DATABASES")
+    assert "test_table_1" not in clickhouse_node.query("SHOW TABLES FROM priv_err_db")
+    clickhouse_node.query("DETACH DATABASE priv_err_db")
+
+    mysql_node.query("REVOKE SELECT ON priv_err_db.* FROM 'test'@'%'")
+    time.sleep(3)
+
+    with pytest.raises(QueryRuntimeException) as exception:
+        clickhouse_node.query("ATTACH DATABASE priv_err_db")
+
+    assert 'MySQL SYNC USER ACCESS ERR:' in str(exception.value)
+    assert "priv_err_db" not in clickhouse_node.query("SHOW DATABASES")
+
+    mysql_node.query("DROP DATABASE priv_err_db;")
+    mysql_node.grant_min_priv_for_user("test")
@@ -41,6 +41,20 @@ class MySQLNodeInstance:
         with self.alloc_connection().cursor() as cursor:
             cursor.execute(execution_query)
 
+    def create_min_priv_user(self, user, password):
+        self.query("CREATE USER '" + user + "'@'%' IDENTIFIED BY '" + password + "'")
+        self.grant_min_priv_for_user(user)
+
+    def grant_min_priv_for_user(self, user, db='priv_err_db'):
+        self.query("GRANT REPLICATION SLAVE, REPLICATION CLIENT, RELOAD ON *.* TO '" + user + "'@'%'")
+        self.query("GRANT SELECT ON " + db + ".* TO '" + user + "'@'%'")
+
+    def result(self, execution_query):
+        with self.alloc_connection().cursor() as cursor:
+            result = cursor.execute(execution_query)
+            if result is not None:
+                print(cursor.fetchall())
+
     def close(self):
         if self.mysql_connection is not None:
             self.mysql_connection.close()
@@ -51,6 +65,8 @@ class MySQLNodeInstance:
         try:
             self.alloc_connection()
             print("Mysql Started")
+            self.create_min_priv_user("test", "123")
+            print("min priv user created")
             return
         except Exception as ex:
             print("Can't connect to MySQL " + str(ex))
@@ -153,3 +169,22 @@ def test_select_without_columns_5_7(started_cluster, started_mysql_5_7):
 
 def test_select_without_columns_8_0(started_cluster, started_mysql_8_0):
     materialize_with_ddl.select_without_columns(clickhouse_node, started_mysql_8_0, "mysql8_0")
+
+
+def test_materialize_database_err_sync_user_privs_5_7(started_cluster, started_mysql_5_7):
+    try:
+        materialize_with_ddl.err_sync_user_privs_with_materialize_mysql_database(clickhouse_node, started_mysql_5_7, "mysql1")
+    except:
+        print((clickhouse_node.query(
+            "select '\n', thread_id, query_id, arrayStringConcat(arrayMap(x -> concat(demangle(addressToSymbol(x)), '\n    ', addressToLine(x)), trace), '\n') AS sym from system.stack_trace format TSVRaw")))
+        raise
+
+
+def test_materialize_database_err_sync_user_privs_8_0(started_cluster, started_mysql_8_0):
+    try:
+        materialize_with_ddl.err_sync_user_privs_with_materialize_mysql_database(clickhouse_node, started_mysql_8_0, "mysql8_0")
+    except:
+        print((clickhouse_node.query(
+            "select '\n', thread_id, query_id, arrayStringConcat(arrayMap(x -> concat(demangle(addressToSymbol(x)), '\n    ', addressToLine(x)), trace), '\n') AS sym from system.stack_trace format TSVRaw")))
+        raise
@@ -1,4 +1,4 @@
-83
+84
 1
 46
 1
@@ -23,9 +23,9 @@ OPTIMIZE TABLE default_codec_synthetic FINAL;
 SELECT
     floor(big_size / small_size) AS ratio
 FROM
-    (SELECT 1 AS key, sum(bytes_on_disk) AS small_size FROM system.parts WHERE database == currentDatabase() and table == 'delta_codec_synthetic')
+    (SELECT 1 AS key, sum(bytes_on_disk) AS small_size FROM system.parts WHERE database == currentDatabase() and table == 'delta_codec_synthetic' and active)
 INNER JOIN
-    (SELECT 1 AS key, sum(bytes_on_disk) as big_size FROM system.parts WHERE database == currentDatabase() and table == 'default_codec_synthetic')
+    (SELECT 1 AS key, sum(bytes_on_disk) as big_size FROM system.parts WHERE database == currentDatabase() and table == 'default_codec_synthetic' and active)
 USING(key);
 
 SELECT
@@ -61,9 +61,9 @@ OPTIMIZE TABLE default_codec_float FINAL;
 SELECT
     floor(big_size / small_size) as ratio
 FROM
-    (SELECT 1 AS key, sum(bytes_on_disk) AS small_size FROM system.parts WHERE database = currentDatabase() and table = 'delta_codec_float')
+    (SELECT 1 AS key, sum(bytes_on_disk) AS small_size FROM system.parts WHERE database = currentDatabase() and table = 'delta_codec_float' and active)
 INNER JOIN
-    (SELECT 1 AS key, sum(bytes_on_disk) as big_size FROM system.parts WHERE database = currentDatabase() and table = 'default_codec_float') USING(key);
+    (SELECT 1 AS key, sum(bytes_on_disk) as big_size FROM system.parts WHERE database = currentDatabase() and table = 'default_codec_float' and active) USING(key);
 
 SELECT
     small_hash == big_hash
@@ -99,9 +99,9 @@ OPTIMIZE TABLE default_codec_string FINAL;
 SELECT
     floor(big_size / small_size) as ratio
 FROM
-    (SELECT 1 AS key, sum(bytes_on_disk) AS small_size FROM system.parts WHERE database = currentDatabase() and table = 'delta_codec_string')
+    (SELECT 1 AS key, sum(bytes_on_disk) AS small_size FROM system.parts WHERE database = currentDatabase() and table = 'delta_codec_string' and active)
 INNER JOIN
-    (SELECT 1 AS key, sum(bytes_on_disk) as big_size FROM system.parts WHERE database = currentDatabase() and table = 'default_codec_string') USING(key);
+    (SELECT 1 AS key, sum(bytes_on_disk) as big_size FROM system.parts WHERE database = currentDatabase() and table = 'default_codec_string' and active) USING(key);
 
 SELECT
     small_hash == big_hash
@@ -1,4 +1,4 @@
-CREATE TABLE default.ttl\n(\n    `d` Date,\n    `a` Int32\n)\nENGINE = MergeTree\nPARTITION BY toDayOfMonth(d)\nORDER BY a\nTTL d + toIntervalDay(1)\nSETTINGS index_granularity = 8192
+CREATE TABLE default.ttl\n(\n    `d` Date,\n    `a` Int32\n)\nENGINE = MergeTree\nPARTITION BY toDayOfMonth(d)\nORDER BY a\nTTL d + toIntervalDay(1)\nSETTINGS remove_empty_parts = 0, index_granularity = 8192
 2100-10-10 3
 2100-10-10 4
 d Date
@@ -2,14 +2,13 @@ set send_logs_level = 'fatal';
 
 drop table if exists ttl;
 
-create table ttl (d Date, a Int) engine = MergeTree order by a partition by toDayOfMonth(d);
+create table ttl (d Date, a Int) engine = MergeTree order by a partition by toDayOfMonth(d) settings remove_empty_parts = 0;
 alter table ttl modify ttl d + interval 1 day;
 show create table ttl;
 insert into ttl values (toDateTime('2000-10-10 00:00:00'), 1);
 insert into ttl values (toDateTime('2000-10-10 00:00:00'), 2);
 insert into ttl values (toDateTime('2100-10-10 00:00:00'), 3);
 insert into ttl values (toDateTime('2100-10-10 00:00:00'), 4);
-select sleep(1) format Null; -- wait in case a very fast merge happens
 optimize table ttl partition 10 final;
 
 select * from ttl order by d;
@@ -18,7 +17,7 @@ alter table ttl modify ttl a; -- { serverError 450 }
 
 drop table if exists ttl;
 
-create table ttl (d Date, a Int) engine = MergeTree order by tuple() partition by toDayOfMonth(d);
+create table ttl (d Date, a Int) engine = MergeTree order by tuple() partition by toDayOfMonth(d) settings remove_empty_parts = 0;
 alter table ttl modify column a Int ttl d + interval 1 day;
 desc table ttl;
 alter table ttl modify column d Int ttl d + interval 1 day; -- { serverError 43 }
@@ -11,7 +11,9 @@ select a, b from ttl_00933_1;
 
 drop table if exists ttl_00933_1;
 
-create table ttl_00933_1 (d DateTime, a Int, b Int) engine = MergeTree order by toDate(d) partition by tuple() ttl d + interval 1 second;
+create table ttl_00933_1 (d DateTime, a Int, b Int)
+    engine = MergeTree order by toDate(d) partition by tuple() ttl d + interval 1 second
+    settings remove_empty_parts = 0;
 insert into ttl_00933_1 values (now(), 1, 2);
 insert into ttl_00933_1 values (now(), 3, 4);
 insert into ttl_00933_1 values (now() + 1000, 5, 6);
@@ -30,7 +32,9 @@ select * from ttl_00933_1 order by d;
 
 drop table if exists ttl_00933_1;
 
-create table ttl_00933_1 (d DateTime, a Int) engine = MergeTree order by tuple() partition by tuple() ttl d + interval 1 day;
+create table ttl_00933_1 (d DateTime, a Int)
+    engine = MergeTree order by tuple() partition by tuple() ttl d + interval 1 day
+    settings remove_empty_parts = 0;
 insert into ttl_00933_1 values (toDateTime('2000-10-10 00:00:00'), 1);
 insert into ttl_00933_1 values (toDateTime('2000-10-10 00:00:00'), 2);
 insert into ttl_00933_1 values (toDateTime('2100-10-10 00:00:00'), 3);
@@ -39,7 +43,9 @@ select * from ttl_00933_1 order by d;
 
 drop table if exists ttl_00933_1;
 
-create table ttl_00933_1 (d Date, a Int) engine = MergeTree order by a partition by toDayOfMonth(d) ttl d + interval 1 day;
+create table ttl_00933_1 (d Date, a Int)
+    engine = MergeTree order by a partition by toDayOfMonth(d) ttl d + interval 1 day
+    settings remove_empty_parts = 0;
 insert into ttl_00933_1 values (toDate('2000-10-10'), 1);
 insert into ttl_00933_1 values (toDate('2100-10-10'), 2);
 optimize table ttl_00933_1 final;
@@ -1,6 +1,6 @@
 drop table if exists ttl;
 
-create table ttl (d Date, a Int) engine = MergeTree order by a partition by toDayOfMonth(d);
+create table ttl (d Date, a Int) engine = MergeTree order by a partition by toDayOfMonth(d) settings remove_empty_parts = 0;
 insert into ttl values (toDateTime('2000-10-10 00:00:00'), 1);
 insert into ttl values (toDateTime('2000-10-10 00:00:00'), 2);
 insert into ttl values (toDateTime('2100-10-10 00:00:00'), 3);
@@ -16,7 +16,7 @@ ${CLICKHOUSE_CLIENT} --query="CREATE TABLE table_with_empty_part
 ENGINE = MergeTree()
 ORDER BY id
 PARTITION BY id
-SETTINGS vertical_merge_algorithm_min_rows_to_activate=0, vertical_merge_algorithm_min_columns_to_activate=0
+SETTINGS vertical_merge_algorithm_min_rows_to_activate=0, vertical_merge_algorithm_min_columns_to_activate=0, remove_empty_parts = 0
 "
@@ -30,12 +30,9 @@ LAYOUT(FLAT())"
 
 $CLICKHOUSE_CLIENT --query "SELECT dictGetUInt8('dictdb.invalidate', 'two', toUInt64(122))"
 
 # No exception happened
 $CLICKHOUSE_CLIENT --query "SELECT last_exception FROM system.dictionaries WHERE database = 'dictdb' AND name = 'invalidate'"
 
-# Bad solution, but it's quite complicated to detect that invalidate_query stopped updates.
-# In the worst case we don't check anything, but fortunately it doesn't lead to false negatives.
-sleep 5
-
 $CLICKHOUSE_CLIENT --query "DROP TABLE dictdb.dict_invalidate"
 
 function check_exception_detected()
@@ -52,7 +49,7 @@ function check_exception_detected()
 
 export -f check_exception_detected;
-timeout 10 bash -c check_exception_detected 2> /dev/null
+timeout 30 bash -c check_exception_detected 2> /dev/null
 
 $CLICKHOUSE_CLIENT --query "SELECT last_exception FROM system.dictionaries WHERE database = 'dictdb' AND name = 'invalidate'" 2>&1 | grep -Eo "Table dictdb.dict_invalidate .* exist."
 
@@ -76,7 +73,8 @@ function check_exception_fixed()
 }
 
 export -f check_exception_fixed;
-timeout 10 bash -c check_exception_fixed 2> /dev/null
+# it may take a while until the dictionary reloads
+timeout 60 bash -c check_exception_fixed 2> /dev/null
 
 $CLICKHOUSE_CLIENT --query "SELECT last_exception FROM system.dictionaries WHERE database = 'dictdb' AND name = 'invalidate'" 2>&1
 $CLICKHOUSE_CLIENT --query "SELECT dictGetUInt8('dictdb.invalidate', 'two', toUInt64(133))"
@@ -1,6 +1,6 @@
 DROP TABLE IF EXISTS column_size_bug;
 
-CREATE TABLE column_size_bug (date_time DateTime, value SimpleAggregateFunction(sum,UInt64)) ENGINE = AggregatingMergeTree PARTITION BY toStartOfInterval(date_time, INTERVAL 1 DAY) ORDER BY (date_time);
+CREATE TABLE column_size_bug (date_time DateTime, value SimpleAggregateFunction(sum,UInt64)) ENGINE = AggregatingMergeTree PARTITION BY toStartOfInterval(date_time, INTERVAL 1 DAY) ORDER BY (date_time) SETTINGS remove_empty_parts = 0;
 
 INSERT INTO column_size_bug VALUES(now(),1);
 INSERT INTO column_size_bug VALUES(now(),1);
@@ -0,0 +1,3 @@
\N	Nullable(UInt8)
\N	Nullable(UInt8)
0	UInt8
@@ -0,0 +1,6 @@
-- Value nullable
SELECT anyIf(CAST(number, 'Nullable(UInt8)'), number = 3) AS a, toTypeName(a) FROM numbers(2);
-- Value and condition nullable
SELECT anyIf(number, number = 3) AS a, toTypeName(a) FROM (SELECT CAST(number, 'Nullable(UInt8)') AS number FROM numbers(2));
-- Condition nullable
SELECT anyIf(CAST(number, 'UInt8'), number = 3) AS a, toTypeName(a) FROM (SELECT CAST(number, 'Nullable(UInt8)') AS number FROM numbers(2));
@@ -0,0 +1,28 @@
DROP TABLE IF EXISTS t;
DROP TABLE IF EXISTS v;

CREATE TABLE t (c String) ENGINE = Memory;

CREATE MATERIALIZED VIEW v to v AS SELECT c FROM t; -- { serverError 36 }
CREATE MATERIALIZED VIEW v to t AS SELECT * FROM v; -- { serverError 60 }

DROP TABLE IF EXISTS t1;
DROP TABLE IF EXISTS t2;
DROP TABLE IF EXISTS v1;
DROP TABLE IF EXISTS v2;

CREATE TABLE t1 (c String) ENGINE = Memory;
CREATE TABLE t2 (c String) ENGINE = Memory;

CREATE MATERIALIZED VIEW v1 to t1 AS SELECT * FROM t2;
CREATE MATERIALIZED VIEW v2 to t2 AS SELECT * FROM t1;

INSERT INTO t1 VALUES ('Hello'); -- { serverError 306 }
INSERT INTO t2 VALUES ('World'); -- { serverError 306 }

DROP TABLE IF EXISTS t;
DROP TABLE IF EXISTS v;
DROP TABLE IF EXISTS t1;
DROP TABLE IF EXISTS t2;
DROP TABLE IF EXISTS v1;
DROP TABLE IF EXISTS v2;
@@ -1 +1 @@
-([1],[5]) 4 4
+([1],[4]) 4 4
@@ -0,0 +1 @@
1
@@ -0,0 +1,5 @@
-- make sure the system.query_log table is created
SELECT 1;
SYSTEM FLUSH LOGS;

SELECT any() as t, substring(query, 1, 70) AS query, avg(memory_usage) usage, count() count FROM system.query_log WHERE event_date >= toDate(1604295323) AND event_time >= toDateTime(1604295323) AND type in (1,2,3,4) and initial_user in ('') and('all' = 'all' or(positionCaseInsensitive(query, 'all') = 1)) GROUP BY query ORDER BY usage desc LIMIT 5; -- { serverError 42 }
@@ -0,0 +1,4 @@
2020-11-12
2020-11-13
2020-11-12
2020-11-13
@@ -0,0 +1,17 @@
WITH arrayJoin(range(2)) AS delta
SELECT
    toDate(time) + toIntervalDay(delta) AS dt
FROM
(
    SELECT toDateTime('2020.11.12 19:02:04') AS time
)
ORDER BY dt ASC;

WITH arrayJoin([0, 1]) AS delta
SELECT
    toDate(time) + toIntervalDay(delta) AS dt
FROM
(
    SELECT toDateTime('2020.11.12 19:02:04') AS time
)
ORDER BY dt ASC;
@@ -0,0 +1,6 @@
733 733
CREATE TABLE default.alter_table\n(\n    `key` UInt64,\n    `value` LowCardinality(String)\n)\nENGINE = MergeTree\nORDER BY key\nSETTINGS index_granularity = 8192
all_1_1_0
all_2_2_0
all_3_3_0
701 701
tests/queries/0_stateless/01576_alter_low_cardinality_and_select.sh (new executable file, 40 lines)
@@ -0,0 +1,40 @@
#!/usr/bin/env bash

CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
. "$CURDIR"/../shell_config.sh


${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS alter_table"

${CLICKHOUSE_CLIENT} --query "CREATE TABLE alter_table (key UInt64, value String) ENGINE MergeTree ORDER BY key"

# we don't need mutations and merges
${CLICKHOUSE_CLIENT} --query "SYSTEM STOP MERGES alter_table"

${CLICKHOUSE_CLIENT} --query "INSERT INTO alter_table SELECT number, toString(number) FROM numbers(10000)"
${CLICKHOUSE_CLIENT} --query "INSERT INTO alter_table SELECT number, toString(number) FROM numbers(10000, 10000)"
${CLICKHOUSE_CLIENT} --query "INSERT INTO alter_table SELECT number, toString(number) FROM numbers(20000, 10000)"

${CLICKHOUSE_CLIENT} --query "SELECT * FROM alter_table WHERE value == '733'"

${CLICKHOUSE_CLIENT} --query "ALTER TABLE alter_table MODIFY COLUMN value LowCardinality(String)" &

# waiting until the schema changes (but not the data)
show_query="SHOW CREATE TABLE alter_table"
create_query=""
while [[ "$create_query" != *"LowCardinality"* ]]
do
    sleep 0.1
    create_query=$($CLICKHOUSE_CLIENT --query "$show_query")
done

# checking that the type is LowCardinality
${CLICKHOUSE_CLIENT} --query "SHOW CREATE TABLE alter_table"

# checking that no mutations happened
${CLICKHOUSE_CLIENT} --query "SELECT name FROM system.parts where table='alter_table' and active and database='${CLICKHOUSE_DATABASE}' ORDER BY name"

# checking that on-the-fly conversions work
${CLICKHOUSE_CLIENT} --query "SELECT * FROM alter_table PREWHERE key > 700 WHERE value = '701'"

${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS alter_table"
@@ -169,3 +169,4 @@
 01557_max_parallel_replicas_no_sample.sql
 01525_select_with_offset_fetch_clause
 01560_timeseriesgroupsum_segfault
+00976_ttl_with_old_parts
@@ -108,8 +108,11 @@ def print_category(category):
         user = users[pr["user"]["id"]]
         user_name = user["name"] if user["name"] else user["login"]
 
-        # Substitute issue links
+        # Substitute issue links.
+        # 1) issue number w/o markdown link
         pr["entry"] = re.sub(r'([^[])#([0-9]{4,})', r'\1[#\2](https://github.com/ClickHouse/ClickHouse/issues/\2)', pr["entry"])
+        # 2) issue URL w/o markdown link
+        pr["entry"] = re.sub(r'([^(])https://github.com/ClickHouse/ClickHouse/issues/([0-9]{4,})', r'\1[#\2](https://github.com/ClickHouse/ClickHouse/issues/\2)', pr["entry"])
 
         print(f'* {pr["entry"]} [#{pr["number"]}]({pr["html_url"]}) ([{user_name}]({user["html_url"]})).')
@@ -70,7 +70,8 @@ Results for AWS Lightsail are from <b>Vamsi Krishna B.</b><br/>
 Results for Dell XPS laptop and Google Pixel phone are from <b>Alexander Kuzmenkov</b>.<br/>
 Results for Android phones for "cold cache" are done without cache flushing, so they are not "cold" and cannot be compared.<br/>
 Results for Digital Ocean are from <b>Zimin Aleksey</b>.<br/>
-Results for 2x EPYC 7642 w/ 512 GB RAM (192 Cores) + 12X 1TB SSD (RAID6) are from <b>Yiğit Konur</b> and <b>Metehan Çetinkaya</b> of seo.do.
+Results for 2x EPYC 7642 w/ 512 GB RAM (192 Cores) + 12X 1TB SSD (RAID6) are from <b>Yiğit Konur</b> and <b>Metehan Çetinkaya</b> of seo.do.<br/>
+Results for Raspberry Pi and Digital Ocean CPU-optimized are from <b>Fritz Wijaya</b>.
 </p>
 </div>
 </div>
website/benchmark/hardware/results/do_xeon_6140_4.json (new file, 56 lines)
@@ -0,0 +1,56 @@
[
    {
        "system": "DigitalOcean CPU-opt 4",
        "system_full": "DigitalOcean CPU-Optimized 4CPU/8GB RAM Intel(R) Xeon(R) Gold 6140",
        "cpu_vendor": "Intel",
        "cpu_model": "Xeon Gold 6140",
        "time": "2020-11-14 00:00:00",
        "kind": "cloud",
        "result":
        [
            [0.001, 0.002, 0.002],
            [0.039, 0.046, 0.025],
            [0.116, 0.087, 0.086],
            [0.250, 0.120, 0.109],
            [0.391, 0.313, 0.321],
            [1.035, 0.946, 0.960],
            [0.058, 0.047, 0.047],
            [0.030, 0.026, 0.026],
            [1.498, 1.368, 1.371],
            [1.708, 1.568, 1.568],
            [0.568, 0.480, 0.478],
            [0.652, 0.568, 0.566],
            [2.200, 1.968, 1.924],
            [2.739, 2.561, 2.531],
            [2.358, 2.208, 2.206],
            [2.544, 2.407, 2.405],
            [6.307, 5.914, 5.927],
            [3.838, 3.608, 3.589],
            [null, null, null],
            [0.251, 0.121, 0.120],
            [3.337, 2.447, 2.441],
            [3.785, 2.669, 2.602],
            [8.053, 6.082, 6.054],
            [6.301, 2.976, 2.931],
            [1.109, 0.816, 0.811],
            [0.791, 0.693, 0.681],
            [1.111, 0.821, 0.817],
            [3.162, 2.162, 2.090],
            [4.601, 3.854, 3.825],
            [3.590, 3.560, 3.582],
            [2.114, 1.847, 1.823],
            [3.559, 2.851, 2.797],
            [null, null, null],
            [null, null, null],
            [null, null, null],
            [3.620, 3.446, 3.397],
            [0.231, 0.196, 0.182],
            [0.079, 0.066, 0.066],
            [0.095, 0.059, 0.069],
            [0.447, 0.382, 0.386],
            [0.050, 0.034, 0.021],
            [0.042, 0.016, 0.015],
            [0.006, 0.008, 0.007]
        ]
    }
]
website/benchmark/hardware/results/raspberry_pi_b.json (new file, 54 lines)
@@ -0,0 +1,54 @@
[
    {
        "system": "Raspberry Pi 4",
        "system_full": "Raspberry Pi 4 Model B 8GB",
        "time": "2020-11-14 00:00:00",
        "kind": "desktop",
        "result":
        [
            [0.015, 0.005, 0.005],
            [0.214, 0.176, 0.171],
            [1.205, 0.535, 0.534],
            [5.138, 1.320, 1.342],
            [5.129, 1.489, 1.494],
            [8.349, 3.230, 3.290],
            [0.327, 0.234, 0.231],
            [0.228, 0.183, 0.175],
            [7.530, 5.440, 5.662],
            [9.073, 6.912, 6.881],
            [5.609, 2.995, 3.201],
            [5.817, 3.245, 3.239],
            [12.712, 11.231, 11.141],
            [18.517, 14.798, 14.698],
            [13.510, 11.171, 11.260],
            [12.944, 11.576, 11.706],
            [28.042, 23.958, 22.930],
            [18.430, 13.992, 14.173],
            [null, null, null],
            [5.193, 1.342, 1.311],
            [59.597, 19.483, 20.791],
            [68.012, 24.377, 24.159],
            [127.859, 49.266, 47.251],
            [133.812, 25.078, 24.812],
            [16.838, 5.128, 4.824],
            [8.195, 4.025, 4.066],
            [16.791, 4.911, 4.997],
            [59.740, 24.009, 23.916],
            [50.460, 25.922, 26.049],
            [23.961, 23.536, 23.835],
            [15.293, 8.960, 8.687],
            [36.904, 14.905, 14.755],
            [null, null, null],
            [74.268, 74.887, 74.103],
            [74.727, 59.369, 65.550],
            [15.400, 14.807, 15.437],
            [1.286, 0.836, 0.804],
            [0.501, 0.341, 0.320],
            [0.704, 0.299, 0.265],
            [2.539, 1.756, 1.710],
            [0.345, 0.085, 0.082],
            [0.219, 0.070, 0.072],
            [0.044, 0.021, 0.023]
        ]
    }
]