Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-24 16:42:05 +00:00

Merge remote-tracking branch 'upstream/master' into improvement/diff-types-in-avg-weighted

Commit e460248624

CHANGELOG.md (216 changed lines)

@@ -1,6 +1,218 @@

## ClickHouse release 20.10

### ClickHouse release v20.10.3.30, 2020-10-28

#### Backward Incompatible Change

* Make `multiple_joins_rewriter_version` obsolete. Remove the first version of the joins rewriter. [#15472](https://github.com/ClickHouse/ClickHouse/pull/15472) ([Artem Zuikov](https://github.com/4ertus2)).
* Change the default value of the `format_regexp_escaping_rule` setting (related to the `Regexp` format) to `Raw` (meaning: read the whole subpattern as a value) to make the behaviour closer to what users expect. [#15426](https://github.com/ClickHouse/ClickHouse/pull/15426) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add support for nested multiline comments `/* comment /* comment */ */` in SQL. This conforms to the SQL standard. [#14655](https://github.com/ClickHouse/ClickHouse/pull/14655) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added MergeTree settings (`max_replicated_merges_with_ttl_in_queue` and `max_number_of_merges_with_ttl_in_pool`) to control the number of merges with TTL in the background pool and replicated queue. This change breaks compatibility with older versions only if you use delete TTL; otherwise replication stays compatible. You can avoid incompatibility issues if you update all shard replicas at once, or execute `SYSTEM STOP TTL MERGES` until you finish updating all replicas. If you get an incompatible entry in the replication queue, first execute `SYSTEM STOP TTL MERGES`, then `ALTER TABLE ... DETACH PARTITION ...` for the partition where the incompatible TTL merge was assigned, and attach it back on a single replica (see the sketch after this list). [#14490](https://github.com/ClickHouse/ClickHouse/pull/14490) ([alesapin](https://github.com/alesapin)).
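
A minimal sketch of the safe-update procedure from the last entry above. The table `events` and the partition `'2020-10'` are hypothetical names used only for illustration:

```sql
-- Stop TTL merges until every replica is updated
-- (hypothetical table and partition names).
SYSTEM STOP TTL MERGES;

-- If an incompatible entry was already assigned, detach the affected
-- partition and attach it back on a single replica.
ALTER TABLE events DETACH PARTITION '2020-10';
ALTER TABLE events ATTACH PARTITION '2020-10';

-- Re-enable TTL merges after the update of all replicas is finished.
SYSTEM START TTL MERGES;
```
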

#### New Feature

* Background data recompression. Add the ability to specify `TTL ... RECOMPRESS codec_name` for the MergeTree table engines family (see the sketch after this list). [#14494](https://github.com/ClickHouse/ClickHouse/pull/14494) ([alesapin](https://github.com/alesapin)).
* Add parallel quorum inserts. This closes [#15601](https://github.com/ClickHouse/ClickHouse/issues/15601). [#15601](https://github.com/ClickHouse/ClickHouse/pull/15601) ([Latysheva Alexandra](https://github.com/alexelex)).
* Settings for additional enforcement of data durability. Useful for non-replicated setups. [#11948](https://github.com/ClickHouse/ClickHouse/pull/11948) ([Anton Popov](https://github.com/CurtizJ)).
* When a duplicate block is written to a replica where it does not exist locally (i.e. it has not been fetched from other replicas), don't ignore it; write it locally to achieve the same effect as if it had been successfully replicated. [#11684](https://github.com/ClickHouse/ClickHouse/pull/11684) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now we support `WITH <identifier> AS (subquery) ...` to introduce named subqueries in the query context (see the sketch after this list). This closes [#2416](https://github.com/ClickHouse/ClickHouse/issues/2416). This closes [#4967](https://github.com/ClickHouse/ClickHouse/issues/4967). [#14771](https://github.com/ClickHouse/ClickHouse/pull/14771) ([Amos Bird](https://github.com/amosbird)).
* Introduce the `enable_global_with_statement` setting, which propagates the first select's `WITH` statements to other select queries at the same level, and makes aliases in `WITH` statements visible to subqueries. [#15451](https://github.com/ClickHouse/ClickHouse/pull/15451) ([Amos Bird](https://github.com/amosbird)).
* Secure inter-cluster query execution (with `initial_user` as the current query user). [#13156](https://github.com/ClickHouse/ClickHouse/pull/13156) ([Azat Khuzhin](https://github.com/azat)). [#15551](https://github.com/ClickHouse/ClickHouse/pull/15551) ([Azat Khuzhin](https://github.com/azat)).
* Add the ability to remove column properties and table TTLs. Introduced queries `ALTER TABLE MODIFY COLUMN col_name REMOVE what_to_remove` and `ALTER TABLE REMOVE TTL`. Both operations are lightweight and executed at the metadata level (see the sketch after this list). [#14742](https://github.com/ClickHouse/ClickHouse/pull/14742) ([alesapin](https://github.com/alesapin)).
* Added format `RawBLOB`. It is intended for input or output of a single value without any escaping or delimiters. This closes [#15349](https://github.com/ClickHouse/ClickHouse/issues/15349). [#15364](https://github.com/ClickHouse/ClickHouse/pull/15364) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the `reinterpretAsUUID` function that allows converting a big-endian byte string to a UUID. [#15480](https://github.com/ClickHouse/ClickHouse/pull/15480) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Implement the `force_data_skipping_indices` setting. [#15642](https://github.com/ClickHouse/ClickHouse/pull/15642) ([Azat Khuzhin](https://github.com/azat)).
* Add a setting `output_format_pretty_row_numbers` to number the rows of the result in Pretty formats. This closes [#15350](https://github.com/ClickHouse/ClickHouse/issues/15350). [#15443](https://github.com/ClickHouse/ClickHouse/pull/15443) ([flynn](https://github.com/ucasFL)).
* Added a query obfuscation tool. It allows sharing more queries for better testing. This closes [#15268](https://github.com/ClickHouse/ClickHouse/issues/15268). [#15321](https://github.com/ClickHouse/ClickHouse/pull/15321) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add table function `null('structure')`. [#14797](https://github.com/ClickHouse/ClickHouse/pull/14797) ([vxider](https://github.com/Vxider)).
* Added the `formatReadableQuantity` function. It is useful for reading big numbers in a human-friendly form. [#14725](https://github.com/ClickHouse/ClickHouse/pull/14725) ([Artem Hnilov](https://github.com/BooBSD)).
* Add format `LineAsString` that accepts a sequence of lines separated by newlines; every line is parsed as a whole into a single String field. [#14703](https://github.com/ClickHouse/ClickHouse/pull/14703) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)), [#13846](https://github.com/ClickHouse/ClickHouse/pull/13846) ([hexiaoting](https://github.com/hexiaoting)).
* Add the `JSONStrings` format, which outputs data in arrays of strings. [#14333](https://github.com/ClickHouse/ClickHouse/pull/14333) ([hcz](https://github.com/hczhcz)).
* Add support for a "Raw" column format for the `Regexp` format. It allows simply extracting subpatterns as a whole without any escaping rules. [#15363](https://github.com/ClickHouse/ClickHouse/pull/15363) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow a configurable `NULL` representation for the `TSV` output format. It is controlled by the setting `output_format_tsv_null_representation`, which is `\N` by default. This closes [#9375](https://github.com/ClickHouse/ClickHouse/issues/9375). Note that the setting only controls the output format and `\N` is the only supported `NULL` representation for the `TSV` input format. [#14586](https://github.com/ClickHouse/ClickHouse/pull/14586) ([Kruglov Pavel](https://github.com/Avogar)).
* Support the Decimal data type for `MaterializedMySQL`. `MaterializedMySQL` is an experimental feature. [#14535](https://github.com/ClickHouse/ClickHouse/pull/14535) ([Winter Zhang](https://github.com/zhang2014)).
* Add new feature: `SHOW DATABASES LIKE 'xxx'`. [#14521](https://github.com/ClickHouse/ClickHouse/pull/14521) ([hexiaoting](https://github.com/hexiaoting)).
* Added a script to import an (arbitrary) git repository to ClickHouse as a sample dataset. [#14471](https://github.com/ClickHouse/ClickHouse/pull/14471) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now insert statements can have an asterisk (or variants) with column transformers in the column list. [#14453](https://github.com/ClickHouse/ClickHouse/pull/14453) ([Amos Bird](https://github.com/amosbird)).
* New query complexity limit settings `max_rows_to_read_leaf`, `max_bytes_to_read_leaf` for distributed queries to limit the max rows/bytes read on the leaf nodes. The limit is applied to local reads only, *excluding* the final merge stage on the root node. [#14221](https://github.com/ClickHouse/ClickHouse/pull/14221) ([Roman Khavronenko](https://github.com/hagen1778)).
* Allow the user to specify settings for `ReplicatedMergeTree*` storage in the `<replicated_merge_tree>` section of the config file. It works similarly to the `<merge_tree>` section. For `ReplicatedMergeTree*` storages, settings from `<merge_tree>` and `<replicated_merge_tree>` are applied together, but settings from `<replicated_merge_tree>` have higher priority. Added the `system.replicated_merge_tree_settings` table. [#13573](https://github.com/ClickHouse/ClickHouse/pull/13573) ([Amos Bird](https://github.com/amosbird)).
* Add the `mapPopulateSeries` function. [#13166](https://github.com/ClickHouse/ClickHouse/pull/13166) ([Ildus Kurbangaliev](https://github.com/ildus)).
* Support MySQL types: `decimal` (as ClickHouse `Decimal`) and `datetime` with sub-second precision (as `DateTime64`). [#11512](https://github.com/ClickHouse/ClickHouse/pull/11512) ([Vasily Nemkov](https://github.com/Enmk)).
* Introduce the `event_time_microseconds` field to the `system.text_log`, `system.trace_log`, `system.query_log` and `system.query_thread_log` tables. [#14760](https://github.com/ClickHouse/ClickHouse/pull/14760) ([Bharat Nallan](https://github.com/bharatnc)).
* Add `event_time_microseconds` to the `system.asynchronous_metric_log` and `system.metric_log` tables. [#14514](https://github.com/ClickHouse/ClickHouse/pull/14514) ([Bharat Nallan](https://github.com/bharatnc)).
* Add the `query_start_time_microseconds` field to the `system.query_log` and `system.query_thread_log` tables. [#14252](https://github.com/ClickHouse/ClickHouse/pull/14252) ([Bharat Nallan](https://github.com/bharatnc)).
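
A sketch of the background recompression feature from the first entry in this list. The table and column names are hypothetical; the `TTL ... RECOMPRESS` clauses follow the syntax the entry introduces:

```sql
-- Parts older than one month are recompressed in the background with ZSTD(17);
-- parts older than one year with LZ4HC(10).
CREATE TABLE hits
(
    event_date Date,
    user_id UInt64
)
ENGINE = MergeTree
ORDER BY user_id
TTL event_date + INTERVAL 1 MONTH RECOMPRESS CODEC(ZSTD(17)),
    event_date + INTERVAL 1 YEAR RECOMPRESS CODEC(LZ4HC(10));
```
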
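
A sketch of the named-subquery syntax (`WITH <identifier> AS (subquery)`) mentioned earlier in this list:

```sql
-- The named subquery `top` can be referenced like a table
-- in the enclosing query.
WITH top AS
(
    SELECT number
    FROM numbers(100)
    ORDER BY number DESC
    LIMIT 10
)
SELECT sum(number)
FROM top;
```
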
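
A sketch of the lightweight `REMOVE` operations described earlier in this list (hypothetical table and column names):

```sql
-- Remove a property (here TTL, then DEFAULT) from a single column.
ALTER TABLE hits MODIFY COLUMN user_agent REMOVE TTL;
ALTER TABLE hits MODIFY COLUMN user_agent REMOVE DEFAULT;

-- Remove the table-level TTL; only metadata is changed.
ALTER TABLE hits REMOVE TTL;
```
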

#### Bug Fix

* Fix the case when memory can be overallocated regardless of the limit. This closes [#14560](https://github.com/ClickHouse/ClickHouse/issues/14560). [#16206](https://github.com/ClickHouse/ClickHouse/pull/16206) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `executable` dictionary source hang. In previous versions, when using some formats (e.g. `JSONEachRow`), data was not fed to the child process before it output at least something. This closes [#1697](https://github.com/ClickHouse/ClickHouse/issues/1697). This closes [#2455](https://github.com/ClickHouse/ClickHouse/issues/2455). [#14525](https://github.com/ClickHouse/ClickHouse/pull/14525) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix double free in case of exception in function `dictGet`. It could have happened if the dictionary was loaded with an error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix GROUP BY with totals/rollup/cube modifiers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Fix async `Distributed` INSERT with `prefer_localhost_replica=0` and `internal_replication`. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Fix very wrong code in the TwoLevelStringHashTable implementation, which might lead to a memory leak. [#16264](https://github.com/ClickHouse/ClickHouse/pull/16264) ([Amos Bird](https://github.com/amosbird)).
* Fix segfault in some cases of wrong aggregation in lambdas. [#16082](https://github.com/ClickHouse/ClickHouse/pull/16082) ([Anton Popov](https://github.com/CurtizJ)).
* Fix `ALTER MODIFY ... ORDER BY` query hang for `ReplicatedVersionedCollapsingMergeTree`. This fixes [#15980](https://github.com/ClickHouse/ClickHouse/issues/15980). [#16011](https://github.com/ClickHouse/ClickHouse/pull/16011) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix the collate name and charset name parser, and support `length = 0` for the string type. [#16008](https://github.com/ClickHouse/ClickHouse/pull/16008) ([Winter Zhang](https://github.com/zhang2014)).
* Allow using the `direct` layout for dictionaries with complex keys. [#16007](https://github.com/ClickHouse/ClickHouse/pull/16007) ([Anton Popov](https://github.com/CurtizJ)).
* Prevent replica hang for 5-10 minutes when a replication error happens after a period of inactivity. [#15987](https://github.com/ClickHouse/ClickHouse/pull/15987) ([filimonov](https://github.com/filimonov)).
* Fix rare segfaults when inserting into or selecting from a MaterializedView while concurrently dropping the target table (for the Atomic database engine). [#15984](https://github.com/ClickHouse/ClickHouse/pull/15984) ([tavplubix](https://github.com/tavplubix)).
* Fix ambiguity in parsing of settings profiles: `CREATE USER ... SETTINGS profile readonly` is now considered as using a profile named `readonly`, not a setting named `profile` with the readonly constraint. This fixes [#15628](https://github.com/ClickHouse/ClickHouse/issues/15628). [#15982](https://github.com/ClickHouse/ClickHouse/pull/15982) ([Vitaly Baranov](https://github.com/vitlibar)).
* `MaterializedMySQL` (experimental feature): Fix crash on create database failure. [#15954](https://github.com/ClickHouse/ClickHouse/pull/15954) ([Winter Zhang](https://github.com/zhang2014)).
* Fixed `DROP TABLE IF EXISTS` failure with a `Table ... doesn't exist` error when the table is concurrently renamed (for the Atomic database engine). Fixed a rare deadlock when concurrently executing some DDL queries with multiple tables (like `DROP DATABASE` and `RENAME TABLE`). Fixed `DROP/DETACH DATABASE` failure with `Table ... doesn't exist` when concurrently executing `DROP/DETACH TABLE`. [#15934](https://github.com/ClickHouse/ClickHouse/pull/15934) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect empty result for a query from a `Distributed` table if the query has `WHERE`, `PREWHERE` and `GLOBAL IN`. Fixes [#15792](https://github.com/ClickHouse/ClickHouse/issues/15792). [#15933](https://github.com/ClickHouse/ClickHouse/pull/15933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixes [#12513](https://github.com/ClickHouse/ClickHouse/issues/12513): differing expressions with the same alias when the query is re-analyzed. [#15886](https://github.com/ClickHouse/ClickHouse/pull/15886) ([Winter Zhang](https://github.com/zhang2014)).
* Fix possible very rare deadlocks in the RBAC implementation. [#15875](https://github.com/ClickHouse/ClickHouse/pull/15875) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix exception `Block structure mismatch` in `SELECT ... ORDER BY DESC` queries which were executed after an `ALTER MODIFY COLUMN` query. Fixes [#15800](https://github.com/ClickHouse/ClickHouse/issues/15800). [#15852](https://github.com/ClickHouse/ClickHouse/pull/15852) ([alesapin](https://github.com/alesapin)).
* `MaterializedMySQL` (experimental feature): Fix `select count()` inaccuracy. [#15767](https://github.com/ClickHouse/ClickHouse/pull/15767) ([tavplubix](https://github.com/tavplubix)).
* Fix some cases of queries in which only virtual columns are selected. Previously, a `Not found column _nothing in block` exception could be thrown. Fixes [#12298](https://github.com/ClickHouse/ClickHouse/issues/12298). [#15756](https://github.com/ClickHouse/ClickHouse/pull/15756) ([Anton Popov](https://github.com/CurtizJ)).
* Fix drop of a materialized view with an inner table in an Atomic database (it hung all subsequent `DROP TABLE` operations because the worker thread hung on the recursive `DROP TABLE` for the inner table of the MV). [#15743](https://github.com/ClickHouse/ClickHouse/pull/15743) ([Azat Khuzhin](https://github.com/azat)).
* Possibility to move a part to another disk/volume if the first attempt failed. [#15723](https://github.com/ClickHouse/ClickHouse/pull/15723) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix error `Cannot find column` which could happen at insertion into a `MATERIALIZED VIEW` in case the query for the `MV` contains `ARRAY JOIN`. [#15717](https://github.com/ClickHouse/ClickHouse/pull/15717) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fixed a too low default value of the `max_replicated_logs_to_keep` setting, which might cause replicas to become lost too often. Improve the lost replica recovery process by choosing the most up-to-date replica to clone. Also do not remove old parts from a lost replica; detach them instead. [#15701](https://github.com/ClickHouse/ClickHouse/pull/15701) ([tavplubix](https://github.com/tavplubix)).
* Fix rare race condition in dictionaries and tables from MySQL. [#15686](https://github.com/ClickHouse/ClickHouse/pull/15686) ([alesapin](https://github.com/alesapin)).
* Fix (benign) race condition in AMQP-CPP. [#15667](https://github.com/ClickHouse/ClickHouse/pull/15667) ([alesapin](https://github.com/alesapin)).
* Fix error `Cannot add simple transform to empty Pipe` which happened while reading from a `Buffer` table which has a different structure than its destination table. It was possible if the destination table returned an empty result for the query. Fixes [#15529](https://github.com/ClickHouse/ClickHouse/issues/15529). [#15662](https://github.com/ClickHouse/ClickHouse/pull/15662) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Proper error handling during insert into MergeTree with S3. MergeTree over S3 is an experimental feature. [#15657](https://github.com/ClickHouse/ClickHouse/pull/15657) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fixed a bug with the S3 table function: the region from the URL was not applied to the S3 client configuration. [#15646](https://github.com/ClickHouse/ClickHouse/pull/15646) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix the order of destruction for resources in the `ReadFromStorage` step of the query plan. It might cause crashes in rare cases. Possibly connected with [#15610](https://github.com/ClickHouse/ClickHouse/issues/15610). [#15645](https://github.com/ClickHouse/ClickHouse/pull/15645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Subtract the `ReadonlyReplica` metric when detaching read-only tables. [#15592](https://github.com/ClickHouse/ClickHouse/pull/15592) ([sundyli](https://github.com/sundy-li)).
* Fixed the `Element ... is not a constant expression` error when using a `JSON*` function result in `VALUES`, `LIMIT` or the right side of the `IN` operator. [#15589](https://github.com/ClickHouse/ClickHouse/pull/15589) ([tavplubix](https://github.com/tavplubix)).
* Queries now finish faster in case of exception: execution on remote replicas is cancelled if an exception happens. [#15578](https://github.com/ClickHouse/ClickHouse/pull/15578) ([Azat Khuzhin](https://github.com/azat)).
* Prevent the possibility of the error message `Could not calculate available disk space (statvfs), errno: 4, strerror: Interrupted system call`. This fixes [#15541](https://github.com/ClickHouse/ClickHouse/issues/15541). [#15557](https://github.com/ClickHouse/ClickHouse/pull/15557) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `Database <db> doesn't exist.` in queries with `IN` and a `Distributed` table when there is no database on the initiator. [#15538](https://github.com/ClickHouse/ClickHouse/pull/15538) ([Artem Zuikov](https://github.com/4ertus2)).
* A mutation might hang waiting for some non-existent part after `MOVE` or `REPLACE PARTITION` or, in rare cases, after `DETACH` or `DROP PARTITION`. It's fixed. [#15537](https://github.com/ClickHouse/ClickHouse/pull/15537) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when the `ILIKE` operator stopped being case-insensitive if `LIKE` with the same pattern was executed (see the illustration after this list). [#15536](https://github.com/ClickHouse/ClickHouse/pull/15536) ([alesapin](https://github.com/alesapin)).
* Fix `Missing columns` errors when selecting columns which are absent in the data but depend on other columns which are also absent in the data. Fixes [#15530](https://github.com/ClickHouse/ClickHouse/issues/15530). [#15532](https://github.com/ClickHouse/ClickHouse/pull/15532) ([alesapin](https://github.com/alesapin)).
* Throw an error when a single parameter is passed to ReplicatedMergeTree instead of ignoring it. [#15516](https://github.com/ClickHouse/ClickHouse/pull/15516) ([nvartolomei](https://github.com/nvartolomei)).
* Fix a bug with event subscription in DDLWorker which could rarely lead to query hangs in `ON CLUSTER`. Introduced in [#13450](https://github.com/ClickHouse/ClickHouse/issues/13450). [#15477](https://github.com/ClickHouse/ClickHouse/pull/15477) ([alesapin](https://github.com/alesapin)).
* Report a proper error when the second argument of the `boundingRatio` aggregate function has a wrong type. [#15407](https://github.com/ClickHouse/ClickHouse/pull/15407) ([detailyang](https://github.com/detailyang)).
* Fixes [#15365](https://github.com/ClickHouse/ClickHouse/issues/15365): attaching a database with the MySQL engine threw an exception (no query context). [#15384](https://github.com/ClickHouse/ClickHouse/pull/15384) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the case of multiple occurrences of column transformers in a select query. [#15378](https://github.com/ClickHouse/ClickHouse/pull/15378) ([Amos Bird](https://github.com/amosbird)).
* Fixed compression in `S3` storage. [#15376](https://github.com/ClickHouse/ClickHouse/pull/15376) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Fix bug where queries like `SELECT toStartOfDay(today())` failed complaining about an empty time_zone argument (see the example after this list). [#15319](https://github.com/ClickHouse/ClickHouse/pull/15319) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix race condition during MergeTree table rename and background cleanup. [#15304](https://github.com/ClickHouse/ClickHouse/pull/15304) ([alesapin](https://github.com/alesapin)).
* Fix rare race condition on server startup when system logs are enabled. [#15300](https://github.com/ClickHouse/ClickHouse/pull/15300) ([alesapin](https://github.com/alesapin)).
* Fix hang of queries with a lot of subqueries to the same table of `MySQL` engine. Previously, if there were more than 16 subqueries to the same `MySQL` table in a query, it hung forever. [#15299](https://github.com/ClickHouse/ClickHouse/pull/15299) ([Anton Popov](https://github.com/CurtizJ)).
* Fix an MSan report in QueryLog: uninitialized memory could be used for the field `memory_usage`. [#15258](https://github.com/ClickHouse/ClickHouse/pull/15258) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix 'Unknown identifier' in GROUP BY when the query has a JOIN over a Merge table. [#15242](https://github.com/ClickHouse/ClickHouse/pull/15242) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix instance crash when using `joinGet` with `LowCardinality` types. This fixes https://github.com/ClickHouse/ClickHouse/issues/15214. [#15220](https://github.com/ClickHouse/ClickHouse/pull/15220) ([Amos Bird](https://github.com/amosbird)).
* Fix bug in table engine `Buffer` which didn't allow inserting data with a new structure into the `Buffer` table after an `ALTER` query. Fixes [#15117](https://github.com/ClickHouse/ClickHouse/issues/15117). [#15192](https://github.com/ClickHouse/ClickHouse/pull/15192) ([alesapin](https://github.com/alesapin)).
* Adjust the Decimal field size in the MySQL column definition packet. [#15152](https://github.com/ClickHouse/ClickHouse/pull/15152) ([maqroll](https://github.com/maqroll)).
* Fixes `Data compressed with different methods` in `join_algorithm='auto'`. Keep `LowCardinality` as the type for the left table join key in `join_algorithm='partial_merge'`. [#15088](https://github.com/ClickHouse/ClickHouse/pull/15088) ([Artem Zuikov](https://github.com/4ertus2)).
* Update `jemalloc` to fix `percpu_arena` with affinity mask. [#15035](https://github.com/ClickHouse/ClickHouse/pull/15035) ([Azat Khuzhin](https://github.com/azat)). [#14957](https://github.com/ClickHouse/ClickHouse/pull/14957) ([Azat Khuzhin](https://github.com/azat)).
* We already use padded comparison between String and FixedString (https://github.com/ClickHouse/ClickHouse/blob/master/src/Functions/FunctionsComparison.h#L333). This PR applies the same logic to field comparison, which corrects the usage of FixedString as primary keys. This fixes https://github.com/ClickHouse/ClickHouse/issues/14908. [#15033](https://github.com/ClickHouse/ClickHouse/pull/15033) ([Amos Bird](https://github.com/amosbird)).
* If the function `bar` was called with specifically crafted arguments, a buffer overflow was possible. This closes [#13926](https://github.com/ClickHouse/ClickHouse/issues/13926). [#15028](https://github.com/ClickHouse/ClickHouse/pull/15028) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the `Cannot rename ... errno: 22, strerror: Invalid argument` error on DDL query execution in an Atomic database when running clickhouse-server in Docker on Mac OS. [#15024](https://github.com/ClickHouse/ClickHouse/pull/15024) ([tavplubix](https://github.com/tavplubix)).
* Fix crash in RIGHT or FULL JOIN with `join_algorithm='auto'` when the memory limit was exceeded and HashJoin should be switched to MergeJoin. [#15002](https://github.com/ClickHouse/ClickHouse/pull/15002) ([Artem Zuikov](https://github.com/4ertus2)).
* Now the settings `number_of_free_entries_in_pool_to_execute_mutation` and `number_of_free_entries_in_pool_to_lower_max_size_of_merge` can be equal to `background_pool_size`. [#14975](https://github.com/ClickHouse/ClickHouse/pull/14975) ([alesapin](https://github.com/alesapin)).
* Fix predicate push-down so it works when the subquery contains the `finalizeAggregation` function. Fixes [#14847](https://github.com/ClickHouse/ClickHouse/issues/14847). [#14937](https://github.com/ClickHouse/ClickHouse/pull/14937) ([filimonov](https://github.com/filimonov)).
* Publish CPU frequencies per logical core in `system.asynchronous_metrics`. This fixes https://github.com/ClickHouse/ClickHouse/issues/14923. [#14924](https://github.com/ClickHouse/ClickHouse/pull/14924) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* `MaterializedMySQL` (experimental feature): Fixed the `.metadata.tmp File exists` error. [#14898](https://github.com/ClickHouse/ClickHouse/pull/14898) ([Winter Zhang](https://github.com/zhang2014)).
* Fix the issue when some invocations of the `extractAllGroups` function could trigger a "Memory limit exceeded" error. This fixes [#13383](https://github.com/ClickHouse/ClickHouse/issues/13383). [#14889](https://github.com/ClickHouse/ClickHouse/pull/14889) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix SIGSEGV on an attempt to INSERT into StorageFile with a file descriptor. [#14887](https://github.com/ClickHouse/ClickHouse/pull/14887) ([Azat Khuzhin](https://github.com/azat)).
* Fixed segfault in the `cache` dictionary [#14837](https://github.com/ClickHouse/ClickHouse/issues/14837). [#14879](https://github.com/ClickHouse/ClickHouse/pull/14879) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* `MaterializedMySQL` (experimental feature): Fixed a bug in parsing MySQL binlog events, which caused `Attempt to read after eof` and `Packet payload is not fully read` in the `MaterializeMySQL` database engine. [#14852](https://github.com/ClickHouse/ClickHouse/pull/14852) ([Winter Zhang](https://github.com/zhang2014)).
* Fix rare error in `SELECT` queries when the queried column has a `DEFAULT` expression which depends on another column which also has `DEFAULT` and is not present in the select query and does not exist on disk. Partially fixes [#14531](https://github.com/ClickHouse/ClickHouse/issues/14531). [#14845](https://github.com/ClickHouse/ClickHouse/pull/14845) ([alesapin](https://github.com/alesapin)).
* Fix a problem where the server could get stuck on startup while talking to ZooKeeper, if the configuration files have to be fetched from ZK (using the `from_zk` include option). This fixes [#14814](https://github.com/ClickHouse/ClickHouse/issues/14814). [#14843](https://github.com/ClickHouse/ClickHouse/pull/14843) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix wrong monotonicity detection for shrunk `Int -> Int` casts of signed types. It might lead to incorrect query results. This bug is unveiled in [#14513](https://github.com/ClickHouse/ClickHouse/issues/14513). [#14783](https://github.com/ClickHouse/ClickHouse/pull/14783) ([Amos Bird](https://github.com/amosbird)).
* The `Replace` column transformer should replace identifiers with cloned ASTs. This fixes https://github.com/ClickHouse/ClickHouse/issues/14695. [#14734](https://github.com/ClickHouse/ClickHouse/pull/14734) ([Amos Bird](https://github.com/amosbird)).
* Fixed a missed default database name in the metadata of a materialized view when executing `ALTER ... MODIFY QUERY`. [#14664](https://github.com/ClickHouse/ClickHouse/pull/14664) ([tavplubix](https://github.com/tavplubix)).
* Fix bug when an `ALTER UPDATE` mutation with a `Nullable` column in the assignment expression and a constant value (like `UPDATE x = 42`) led to an incorrect value in the column or a segfault. Fixes [#13634](https://github.com/ClickHouse/ClickHouse/issues/13634), [#14045](https://github.com/ClickHouse/ClickHouse/issues/14045). [#14646](https://github.com/ClickHouse/ClickHouse/pull/14646) ([alesapin](https://github.com/alesapin)).
* Fix wrong Decimal multiplication result that caused a wrong decimal scale of the result column. [#14603](https://github.com/ClickHouse/ClickHouse/pull/14603) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix function `has` with `LowCardinality` of `Nullable`. [#14591](https://github.com/ClickHouse/ClickHouse/pull/14591) ([Mike](https://github.com/myrrc)).
* Clean up the data directory after ZooKeeper exceptions during a CREATE query for tables with the ReplicatedMergeTree engine. [#14563](https://github.com/ClickHouse/ClickHouse/pull/14563) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix rare segfaults in functions with the `-Resample` combinator, which could appear as a result of overflow with very large parameters. [#14562](https://github.com/ClickHouse/ClickHouse/pull/14562) ([Anton Popov](https://github.com/CurtizJ)).
* Fix a bug when converting `Nullable(String)` to Enum. Introduced by https://github.com/ClickHouse/ClickHouse/pull/12745. This fixes https://github.com/ClickHouse/ClickHouse/issues/14435. [#14530](https://github.com/ClickHouse/ClickHouse/pull/14530) ([Amos Bird](https://github.com/amosbird)).
* Fixed the incorrect sorting order of `Nullable` columns. This fixes [#14344](https://github.com/ClickHouse/ClickHouse/issues/14344). [#14495](https://github.com/ClickHouse/ClickHouse/pull/14495) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix: the `currentDatabase()` function couldn't be used in `ON CLUSTER` DDL queries. [#14211](https://github.com/ClickHouse/ClickHouse/pull/14211) ([Winter Zhang](https://github.com/zhang2014)).
* `MaterializedMySQL` (experimental feature): Fixed the `Packet payload is not fully read` error in the `MaterializeMySQL` database engine. [#14696](https://github.com/ClickHouse/ClickHouse/pull/14696) ([BohuTANG](https://github.com/BohuTANG)).
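
An illustration of the `ILIKE` fix mentioned earlier in this list: after the fix, the two operators stay independent even when they share the same pattern.

```sql
SELECT 'Hello' LIKE 'hello%';   -- 0: LIKE is case-sensitive.
SELECT 'Hello' ILIKE 'hello%';  -- 1: ILIKE stays case-insensitive,
                                --    also after the LIKE above was executed.
```
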
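
The time-zone fix mentioned earlier in this list can be checked with a one-line query:

```sql
-- Failed with an "empty time_zone argument" error before the fix.
SELECT toStartOfDay(today());
```
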

#### Improvement

* Enable the `Atomic` database engine by default for newly created databases. [#15003](https://github.com/ClickHouse/ClickHouse/pull/15003) ([tavplubix](https://github.com/tavplubix)).
* Add the ability to specify specialized codecs like `Delta`, `T64`, etc. for columns with subtypes. Implements [#12551](https://github.com/ClickHouse/ClickHouse/issues/12551), fixes [#11397](https://github.com/ClickHouse/ClickHouse/issues/11397), fixes [#4609](https://github.com/ClickHouse/ClickHouse/issues/4609). [#15089](https://github.com/ClickHouse/ClickHouse/pull/15089) ([alesapin](https://github.com/alesapin)).
* Dynamic reload of the ZooKeeper config. [#14678](https://github.com/ClickHouse/ClickHouse/pull/14678) ([sundyli](https://github.com/sundy-li)).
* Now it's allowed to execute `ALTER ... ON CLUSTER` queries regardless of the `<internal_replication>` setting in the cluster config. [#16075](https://github.com/ClickHouse/ClickHouse/pull/16075) ([alesapin](https://github.com/alesapin)).
* Now `joinGet` supports multi-key lookup. Continuation of [#12418](https://github.com/ClickHouse/ClickHouse/issues/12418). [#13015](https://github.com/ClickHouse/ClickHouse/pull/13015) ([Amos Bird](https://github.com/amosbird)).
* Wait for `DROP/DETACH TABLE` to actually finish if `NO DELAY` or `SYNC` is specified for the `Atomic` database (see the sketch after this list). [#15448](https://github.com/ClickHouse/ClickHouse/pull/15448) ([tavplubix](https://github.com/tavplubix)).
* Now it's possible to change the type of the version column for `VersionedCollapsingMergeTree` with an `ALTER` query. [#15442](https://github.com/ClickHouse/ClickHouse/pull/15442) ([alesapin](https://github.com/alesapin)).
* Unfold `{database}`, `{table}` and `{uuid}` macros in `zookeeper_path` on replicated table creation. Do not allow `RENAME TABLE` if it may break `zookeeper_path` after a server restart. Fixes [#6917](https://github.com/ClickHouse/ClickHouse/issues/6917). [#15348](https://github.com/ClickHouse/ClickHouse/pull/15348) ([tavplubix](https://github.com/tavplubix)).
* The function `now` allows an argument with a timezone. This closes [#15264](https://github.com/ClickHouse/ClickHouse/issues/15264). [#15285](https://github.com/ClickHouse/ClickHouse/pull/15285) ([flynn](https://github.com/ucasFL)).
* Do not allow connections to the ClickHouse server until all scripts in `/docker-entrypoint-initdb.d/` are executed. [#15244](https://github.com/ClickHouse/ClickHouse/pull/15244) ([Aleksei Kozharin](https://github.com/alekseik1)).
* Added an `optimize` setting to the `EXPLAIN PLAN` query. If enabled, query-plan-level optimisations are applied. Enabled by default (see the sketch after this list). [#15201](https://github.com/ClickHouse/ClickHouse/pull/15201) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Proper exception message for a wrong number of arguments of `CAST`. This closes [#13992](https://github.com/ClickHouse/ClickHouse/issues/13992). [#15029](https://github.com/ClickHouse/ClickHouse/pull/15029) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add an option to disable TTL moves on data part insert. [#15000](https://github.com/ClickHouse/ClickHouse/pull/15000) ([Pavel Kovalenko](https://github.com/Jokser)).
* Ignore key constraints when doing mutations. Without this pull request, it's not possible to do mutations when `force_index_by_date = 1` or `force_primary_key = 1`. [#14973](https://github.com/ClickHouse/ClickHouse/pull/14973) ([Amos Bird](https://github.com/amosbird)).
* Allow dropping a Replicated table if the previous drop attempt failed due to ZooKeeper session expiration. This fixes [#11891](https://github.com/ClickHouse/ClickHouse/issues/11891). [#14926](https://github.com/ClickHouse/ClickHouse/pull/14926) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed excessive settings constraint violation when running SELECT with SETTINGS from a distributed table. [#14876](https://github.com/ClickHouse/ClickHouse/pull/14876) ([Amos Bird](https://github.com/amosbird)).
* Provide a `load_balancing_first_offset` query setting to explicitly state what the first replica is. It's used together with the `FIRST_OR_RANDOM` load balancing strategy, which allows controlling the replicas' workload. [#14867](https://github.com/ClickHouse/ClickHouse/pull/14867) ([Amos Bird](https://github.com/amosbird)).
* Show subqueries for `SET` and `JOIN` in `EXPLAIN` results. [#14856](https://github.com/ClickHouse/ClickHouse/pull/14856) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Allow using a multi-volume storage configuration in storage `Distributed`. [#14839](https://github.com/ClickHouse/ClickHouse/pull/14839) ([Pavel Kovalenko](https://github.com/Jokser)).
* Construct `query_start_time` and `query_start_time_microseconds` from the same timespec. [#14831](https://github.com/ClickHouse/ClickHouse/pull/14831) ([Bharat Nallan](https://github.com/bharatnc)).
* Support for disabling persistency for `StorageJoin` and `StorageSet`; this feature is controlled by the setting `disable_set_and_join_persistency`. This solves issue [#6318](https://github.com/ClickHouse/ClickHouse/issues/6318). [#14776](https://github.com/ClickHouse/ClickHouse/pull/14776) ([vxider](https://github.com/Vxider)).
* Now `COLUMNS` can be used to wrap over a list of columns and apply column transformers afterwards (see the sketch after this list). [#14775](https://github.com/ClickHouse/ClickHouse/pull/14775) ([Amos Bird](https://github.com/amosbird)).
* Add `merge_algorithm` to the `system.merges` table to improve merging inspections. [#14705](https://github.com/ClickHouse/ClickHouse/pull/14705) ([Amos Bird](https://github.com/amosbird)).
* Fix a potential memory leak caused by the ZooKeeper `exists` watch. [#14693](https://github.com/ClickHouse/ClickHouse/pull/14693) ([hustnn](https://github.com/hustnn)).
* Allow parallel execution of distributed DDL. [#14684](https://github.com/ClickHouse/ClickHouse/pull/14684) ([Azat Khuzhin](https://github.com/azat)).
* Add a `QueryMemoryLimitExceeded` event counter. This closes [#14589](https://github.com/ClickHouse/ClickHouse/issues/14589). [#14647](https://github.com/ClickHouse/ClickHouse/pull/14647) ([fastio](https://github.com/fastio)).
* Fix some trailing whitespace in query formatting. [#14595](https://github.com/ClickHouse/ClickHouse/pull/14595) ([Azat Khuzhin](https://github.com/azat)).
* ClickHouse treats partition expressions and primary key expressions differently: a partition expression is used to construct a minmax index containing the related columns, while a primary key expression is stored as an expression. Sometimes a user might partition a table at a coarser level, such as `partition by i / 1000`. However, binary operators were not considered monotonic, and this PR fixes that. It might also benefit other use cases. [#14513](https://github.com/ClickHouse/ClickHouse/pull/14513) ([Amos Bird](https://github.com/amosbird)).
* Add an option to skip access checks for `DiskS3`. The `s3` disk is an experimental feature. [#14497](https://github.com/ClickHouse/ClickHouse/pull/14497) ([Pavel Kovalenko](https://github.com/Jokser)).
* Speed up the server shutdown process if there are ongoing S3 requests. [#14496](https://github.com/ClickHouse/ClickHouse/pull/14496) ([Pavel Kovalenko](https://github.com/Jokser)).
* `SYSTEM RELOAD CONFIG` now throws an exception if it fails to reload, and continues using the previous users.xml. The background periodic reloading also continues using the previous users.xml if reloading fails. [#14492](https://github.com/ClickHouse/ClickHouse/pull/14492) ([Vitaly Baranov](https://github.com/vitlibar)).
* For INSERTs with inline data in the VALUES format in the script mode of `clickhouse-client`, support a semicolon as the data terminator, in addition to the newline. Closes https://github.com/ClickHouse/ClickHouse/issues/12288. [#13192](https://github.com/ClickHouse/ClickHouse/pull/13192) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Support custom codecs in compact parts. [#12183](https://github.com/ClickHouse/ClickHouse/pull/12183) ([Anton Popov](https://github.com/CurtizJ)).
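
A sketch of the synchronous drop described earlier in this list (hypothetical table name):

```sql
-- Returns only after the table data is actually removed, instead of
-- scheduling a delayed drop in the Atomic database.
DROP TABLE hits SYNC;

-- Equivalent spelling:
DROP TABLE hits NO DELAY;
```
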
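
A sketch of the `optimize` setting for `EXPLAIN PLAN` mentioned earlier in this list:

```sql
-- optimize = 0 shows the query plan before plan-level optimisations
-- are applied; the default (optimize = 1) shows the optimised plan.
EXPLAIN PLAN optimize = 0
SELECT number
FROM numbers(10)
WHERE number > 5;
```
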
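
A sketch of wrapping a list of columns with `COLUMNS` and applying column transformers afterwards, as described earlier in this list. The table and column names are hypothetical:

```sql
-- Take all columns whose names start with `metric_`, drop one of them,
-- and apply sum() to the rest.
SELECT COLUMNS('^metric_') EXCEPT (metric_ignored) APPLY (sum)
FROM metrics;
```
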

#### Performance Improvement

* Enable compact parts by default for small parts. This allows processing frequent inserts slightly more efficiently (4..100 times). [#11913](https://github.com/ClickHouse/ClickHouse/pull/11913) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improve `quantileTDigest` performance. This fixes [#2668](https://github.com/ClickHouse/ClickHouse/issues/2668) (a usage example follows this list). [#15542](https://github.com/ClickHouse/ClickHouse/pull/15542) ([Kruglov Pavel](https://github.com/Avogar)).
* Significantly reduce memory usage in AggregatingInOrderTransform/optimize_aggregation_in_order. [#15543](https://github.com/ClickHouse/ClickHouse/pull/15543) ([Azat Khuzhin](https://github.com/azat)).
* Faster 256-bit multiplication. [#15418](https://github.com/ClickHouse/ClickHouse/pull/15418) ([Artem Zuikov](https://github.com/4ertus2)).
* Improve performance of 256-bit types by using (u)int64_t as the base type for wide integers. The original wide integers used 8-bit types as the base. [#14859](https://github.com/ClickHouse/ClickHouse/pull/14859) ([Artem Zuikov](https://github.com/4ertus2)).
* Explicitly use a temporary disk to store vertical merge temporary data. [#15639](https://github.com/ClickHouse/ClickHouse/pull/15639) ([Grigory Pervakov](https://github.com/GrigoryPervakov)).
* Use one S3 DeleteObjects request instead of multiple DeleteObject requests in a loop. No functional changes, so this is covered by existing tests like integration/test_log_family_s3. [#15238](https://github.com/ClickHouse/ClickHouse/pull/15238) ([ianton-ru](https://github.com/ianton-ru)).
* Fix `DateTime <op> DateTime` mistakenly choosing the slow generic implementation. This fixes https://github.com/ClickHouse/ClickHouse/issues/15153. [#15178](https://github.com/ClickHouse/ClickHouse/pull/15178) ([Amos Bird](https://github.com/amosbird)).
* Improve performance of GROUP BY keys of type `FixedString`. [#15034](https://github.com/ClickHouse/ClickHouse/pull/15034) ([Amos Bird](https://github.com/amosbird)).
* Only `mlock` the code segment when starting clickhouse-server. In previous versions, all mapped regions were locked in memory, including debug info. Debug info is usually split into a separate file, but when it isn't, this led to +2..3 GiB memory usage. [#14929](https://github.com/ClickHouse/ClickHouse/pull/14929) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* The ClickHouse binary became smaller due to link-time optimization.
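
The `quantileTDigest` entry above is a pure performance change; usage is unchanged. A minimal call for reference:

```sql
-- Approximate 0.9-quantile computed with the t-digest algorithm.
SELECT quantileTDigest(0.9)(number)
FROM numbers(1000);
```
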

#### Build/Testing/Packaging Improvement

* Now we use clang-11 for production ClickHouse builds. [#15239](https://github.com/ClickHouse/ClickHouse/pull/15239) ([alesapin](https://github.com/alesapin)).
* Now we use clang-11 to build ClickHouse in CI. [#14846](https://github.com/ClickHouse/ClickHouse/pull/14846) ([alesapin](https://github.com/alesapin)).
* Switch binary builds (Linux, Darwin, AArch64, FreeBSD) to clang-11. [#15622](https://github.com/ClickHouse/ClickHouse/pull/15622) ([Ilya Yatsishin](https://github.com/qoega)).
* Now all test images use `llvm-symbolizer-11`. [#15069](https://github.com/ClickHouse/ClickHouse/pull/15069) ([alesapin](https://github.com/alesapin)).
* Allow building with llvm-11. [#15366](https://github.com/ClickHouse/ClickHouse/pull/15366) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Switch from `clang-tidy-10` to `clang-tidy-11`. [#14922](https://github.com/ClickHouse/ClickHouse/pull/14922) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Use LLVM's experimental pass manager by default. [#15608](https://github.com/ClickHouse/ClickHouse/pull/15608) ([Danila Kutenin](https://github.com/danlark1)).
* Don't allow any C++ translation unit to build for more than 10 minutes or to use more than 10 GB of memory. This fixes [#14925](https://github.com/ClickHouse/ClickHouse/issues/14925). [#15060](https://github.com/ClickHouse/ClickHouse/pull/15060) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Make performance tests more stable and representative by splitting test runs and profile runs. [#15027](https://github.com/ClickHouse/ClickHouse/pull/15027) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Attempt to make performance tests more reliable. It is done by remapping the executable memory of the process on the fly with `madvise` to use transparent huge pages; this can lower the number of iTLB misses, which is the main source of instability in performance tests. [#14685](https://github.com/ClickHouse/ClickHouse/pull/14685) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Convert to python3. This closes [#14886](https://github.com/ClickHouse/ClickHouse/issues/14886). [#15007](https://github.com/ClickHouse/ClickHouse/pull/15007) ([Azat Khuzhin](https://github.com/azat)).
* Fail early in functional tests if the server fails to respond. This closes [#15262](https://github.com/ClickHouse/ClickHouse/issues/15262). [#15267](https://github.com/ClickHouse/ClickHouse/pull/15267) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow running the AArch64 version of clickhouse-server without configs. This facilitates [#15174](https://github.com/ClickHouse/ClickHouse/issues/15174). [#15266](https://github.com/ClickHouse/ClickHouse/pull/15266) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Improvements in CI docker images: get rid of ZooKeeper and use a single script for test configs installation. [#15215](https://github.com/ClickHouse/ClickHouse/pull/15215) ([alesapin](https://github.com/alesapin)).
* Fix CMake options forwarding in the fast test script. Fixes an error in [#14711](https://github.com/ClickHouse/ClickHouse/issues/14711). [#15155](https://github.com/ClickHouse/ClickHouse/pull/15155) ([alesapin](https://github.com/alesapin)).
* Added a script to perform a hardware benchmark in a single command. [#15115](https://github.com/ClickHouse/ClickHouse/pull/15115) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Split the huge test `test_dictionaries_all_layouts_and_sources` into smaller ones. [#15110](https://github.com/ClickHouse/ClickHouse/pull/15110) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Maybe fix the MSan report in base64 (on servers with AVX-512). This fixes [#14006](https://github.com/ClickHouse/ClickHouse/issues/14006). [#15030](https://github.com/ClickHouse/ClickHouse/pull/15030) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Reformat and clean up code in all integration test `*.py` files. [#14864](https://github.com/ClickHouse/ClickHouse/pull/14864) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix the unstable MaterializeMySQL empty-transaction test case found in CI. [#14854](https://github.com/ClickHouse/ClickHouse/pull/14854) ([Winter Zhang](https://github.com/zhang2014)).
* Attempt to speed up the build a little. [#14808](https://github.com/ClickHouse/ClickHouse/pull/14808) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Speed up the build a little by removing unused headers. [#14714](https://github.com/ClickHouse/ClickHouse/pull/14714) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix build failure on OS X. [#14761](https://github.com/ClickHouse/ClickHouse/pull/14761) ([Winter Zhang](https://github.com/zhang2014)).
* Enable ccache by default in CMake if it's found in the OS. [#14575](https://github.com/ClickHouse/ClickHouse/pull/14575) ([alesapin](https://github.com/alesapin)).
* Control CI builds configuration from the ClickHouse repository. [#14547](https://github.com/ClickHouse/ClickHouse/pull/14547) ([alesapin](https://github.com/alesapin)).
* In CMake files: moved some options' descriptions' parts to comments above; replaced 0 -> `OFF`, 1 -> `ON` in `option`s' default values; added some descriptions and links to docs for the options; removed the `FUZZER` option (there is another option `ENABLE_FUZZING` which also enables the same functionality); removed the `ENABLE_GTEST_LIBRARY` option as there is `ENABLE_TESTS`. See the full description in the PR: [#14711](https://github.com/ClickHouse/ClickHouse/pull/14711) ([Mike](https://github.com/myrrc)).
* Make the binary a bit smaller (~50 Mb for the debug version). [#14555](https://github.com/ClickHouse/ClickHouse/pull/14555) ([Artem Zuikov](https://github.com/4ertus2)).
* Use std::filesystem::path in ConfigProcessor for concatenating file paths. [#14558](https://github.com/ClickHouse/ClickHouse/pull/14558) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix debug assertion in `bitShiftLeft()` when called with a negative big integer. [#14697](https://github.com/ClickHouse/ClickHouse/pull/14697) ([Artem Zuikov](https://github.com/4ertus2)).

## ClickHouse release 20.9

-### ClickHouse release v20.9.2.20-stable, 2020-09-22
+### ClickHouse release v20.9.2.20, 2020-09-22

#### New Feature

@@ -84,7 +296,6 @@

#### New Feature

-* ClickHouse can work as MySQL replica - it is implemented by `MaterializeMySQL` database engine. Implements [#4006](https://github.com/ClickHouse/ClickHouse/issues/4006). [#10851](https://github.com/ClickHouse/ClickHouse/pull/10851) ([Winter Zhang](https://github.com/zhang2014)).
* Add the ability to specify `Default` compression codec for columns that correspond to settings specified in `config.xml`. Implements: [#9074](https://github.com/ClickHouse/ClickHouse/issues/9074). [#14049](https://github.com/ClickHouse/ClickHouse/pull/14049) ([alesapin](https://github.com/alesapin)).
* Support Kerberos authentication in Kafka, using `krb5` and `cyrus-sasl` libraries. [#12771](https://github.com/ClickHouse/ClickHouse/pull/12771) ([Ilya Golshtein](https://github.com/ilejn)).
* Add function `normalizeQuery` that replaces literals, sequences of literals and complex aliases with placeholders. Add function `normalizedQueryHash` that returns identical 64bit hash values for similar queries. It helps to analyze the query log. This closes [#11271](https://github.com/ClickHouse/ClickHouse/issues/11271). [#13816](https://github.com/ClickHouse/ClickHouse/pull/13816) ([alexey-milovidov](https://github.com/alexey-milovidov)).

@@ -184,6 +395,7 @@

#### Experimental Feature

+* ClickHouse can work as MySQL replica - it is implemented by `MaterializeMySQL` database engine. Implements [#4006](https://github.com/ClickHouse/ClickHouse/issues/4006). [#10851](https://github.com/ClickHouse/ClickHouse/pull/10851) ([Winter Zhang](https://github.com/zhang2014)).
* Add types `Int128`, `Int256`, `UInt256` and related functions for them. Extend Decimals with Decimal256 (precision up to 76 digits). New types are under the setting `allow_experimental_bigint_types`. It is working extremely slow and bad. The implementation is incomplete. Please don't use this feature. [#13097](https://github.com/ClickHouse/ClickHouse/pull/13097) ([Artem Zuikov](https://github.com/4ertus2)).

#### Build/Testing/Packaging Improvement

@@ -35,25 +35,25 @@ PEERDIR(

CFLAGS(-g0)

SRCS(
-    argsToConfig.cpp
-    coverage.cpp
-    DateLUT.cpp
-    DateLUTImpl.cpp
-    JSON.cpp
-    LineReader.cpp
-    StringRef.cpp
+    argsToConfig.cpp
+    coverage.cpp
+    demangle.cpp
+    errnoToString.cpp
+    getFQDNOrHostName.cpp
+    getMemoryAmount.cpp
+    getResource.cpp
+    getThreadId.cpp
+    JSON.cpp
+    LineReader.cpp
+    mremap.cpp
+    phdr_cache.cpp
+    preciseExp10.cpp
+    setTerminalEcho.cpp
+    shift10.cpp
+    sleep.cpp
+    StringRef.cpp
+    terminalColors.cpp
)

@@ -1,9 +1,9 @@

# This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54442)
+SET(VERSION_REVISION 54443)
SET(VERSION_MAJOR 20)
-SET(VERSION_MINOR 11)
+SET(VERSION_MINOR 12)
SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 76a04fb4b4f6cd27ad999baf6dc9a25e88851c42)
-SET(VERSION_DESCRIBE v20.11.1.1-prestable)
-SET(VERSION_STRING 20.11.1.1)
+SET(VERSION_GITHASH c53725fb1f846fda074347607ab582fbb9c6f7a1)
+SET(VERSION_DESCRIBE v20.12.1.1-prestable)
+SET(VERSION_STRING 20.12.1.1)
# end of autochange

contrib/CMakeLists.txt (vendored, 8 changed lines)

@@ -14,6 +14,11 @@ unset (_current_dir_name)

set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -w")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -w")

+if (SANITIZE STREQUAL "undefined")
+    # 3rd-party libraries usually not intended to work with UBSan.
+    add_compile_options(-fno-sanitize=undefined)
+endif()
+
set_property(DIRECTORY PROPERTY EXCLUDE_FROM_ALL 1)

add_subdirectory (boost-cmake)

@@ -157,9 +162,6 @@ if(USE_INTERNAL_SNAPPY_LIBRARY)

add_subdirectory(snappy)

set (SNAPPY_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/snappy")
-if(SANITIZE STREQUAL "undefined")
-    target_compile_options(${SNAPPY_LIBRARY} PRIVATE -fno-sanitize=undefined)
-endif()
endif()

if (USE_INTERNAL_PARQUET_LIBRARY)

contrib/aws (vendored, 2 changed lines)

@@ -1 +1 @@
-Subproject commit 17e10c0fc77f22afe890fa6d1b283760e5edaa56
+Subproject commit a220591e335923ce1c19bbf9eb925787f7ab6c13

contrib/poco (vendored, 2 changed lines)

@@ -1 +1 @@
-Subproject commit 757d947235b307675cff964f29b19d388140a9eb
+Subproject commit f49c6ab8d3aa71828bd1b411485c21722e8c9d82

debian/changelog (vendored, 4 changed lines)

@@ -1,5 +1,5 @@
-clickhouse (20.11.1.1) unstable; urgency=low
+clickhouse (20.12.1.1) unstable; urgency=low

* Modified source code

- -- clickhouse-release <clickhouse-release@yandex-team.ru> Sat, 10 Oct 2020 18:39:55 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru> Thu, 05 Nov 2020 21:52:47 +0300

@@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.12.1.*

RUN apt-get update \
    && apt-get install --yes --no-install-recommends \

@@ -1,7 +1,7 @@
FROM ubuntu:20.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.12.1.*
ARG gosu_ver=1.10

RUN apt-get update \

@@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.11.1.*
+ARG version=20.12.1.*

RUN apt-get update && \
    apt-get install -y apt-transport-https dirmngr && \

@@ -240,6 +240,10 @@ TESTS_TO_SKIP=(
    01354_order_by_tuple_collate_const
    01355_ilike
    01411_bayesian_ab_testing
+    01532_collate_in_low_cardinality
+    01533_collate_in_nullable
+    01542_collate_in_array
+    01543_collate_in_tuple
    _orc_
    arrow
    avro
@@ -1074,6 +1074,53 @@ wait
unset IFS
}

function upload_results
{
    if ! [ -v CHPC_DATABASE_URL ]
    then
        echo Database for test results is not specified, will not upload them.
        return 0
    fi

    # Surprisingly, clickhouse-client doesn't understand --host 127.0.0.1:9000
    # so I have to extract host and port with clickhouse-local. I tried to use
    # Poco URI parser to support this in the client, but it's broken and can't
    # parse host:port.
    set +x # Don't show password in the log
    clickhouse-client \
        $(clickhouse-local --query "with '${CHPC_DATABASE_URL}' as url select '--host ' || domain(url) || ' --port ' || toString(port(url)) format TSV") \
        --secure \
        --user "${CHPC_DATABASE_USER}" \
        --password "${CHPC_DATABASE_PASSWORD}" \
        --config "right/config/client_config.xml" \
        --database perftest \
        --date_time_input_format=best_effort \
        --query "
            insert into query_metrics_v2
            select
                toDate(event_time) event_date,
                toDateTime('$(cd right/ch && git show -s --format=%ci "$SHA_TO_TEST" | cut -d' ' -f-2)') event_time,
                $PR_TO_TEST pr_number,
                '$REF_SHA' old_sha,
                '$SHA_TO_TEST' new_sha,
                test,
                query_index,
                query_display_name,
                metric_name,
                old_value,
                new_value,
                diff,
                stat_threshold
            from input('metric_name text, old_value float, new_value float, diff float,
                    ratio_display_text text, stat_threshold float,
                    test text, query_index int, query_display_name text')
                settings date_time_input_format='best_effort'
            format TSV
            settings date_time_input_format='best_effort'
        " < report/all-query-metrics.tsv # Don't leave whitespace after INSERT: https://github.com/ClickHouse/ClickHouse/issues/16652
    set -x
}

# Check that local and client are in PATH
clickhouse-local --version > /dev/null
clickhouse-client --version > /dev/null
@@ -1145,6 +1192,9 @@ case "$stage" in
    time "$script_dir/report.py" --report=all-queries > all-queries.html 2> >(tee -a report/errors.log 1>&2) ||:
    time "$script_dir/report.py" > report.html
    ;&
+"upload_results")
+    time upload_results ||:
+    ;&
esac

# Print some final debug info to help debug Weirdness, of which there is plenty.
docker/test/performance-comparison/config/client_config.xml (new file, 17 lines):
@@ -0,0 +1,17 @@
<!--
    This config is used to upload test results to a public ClickHouse instance.
    It has bad certificates so we ignore them.
-->
<config>
    <openSSL>
        <client>
            <loadDefaultCAFile>true</loadDefaultCAFile>
            <cacheSessions>true</cacheSessions>
            <disableProtocols>sslv2,sslv3</disableProtocols>
            <preferServerCiphers>true</preferServerCiphers>
            <invalidCertificateHandler>
                <name>AcceptCertificateHandler</name> <!-- For tests only -->
            </invalidCertificateHandler>
        </client>
    </openSSL>
</config>
@@ -121,6 +121,9 @@ set +e
PATH="$(readlink -f right/)":"$PATH"
export PATH

+export REF_PR
+export REF_SHA
+
# Start the main comparison script.
{ \
    time ../download.sh "$REF_PR" "$REF_SHA" "$PR_TO_TEST" "$SHA_TO_TEST" && \
@@ -17,13 +17,6 @@ def get_skip_list_cmd(path):
    return ''


-def run_perf_test(cmd, xmls_path, output_folder):
-    output_path = os.path.join(output_folder, "perf_stress_run.txt")
-    f = open(output_path, 'w')
-    p = Popen("{} --skip-tags=long --recursive --input-files {}".format(cmd, xmls_path), shell=True, stdout=f, stderr=f)
-    return p
-
-
def get_options(i):
    options = ""
    if 0 < i:

@@ -75,8 +68,6 @@ if __name__ == "__main__":

    args = parser.parse_args()
    func_pipes = []
-    perf_process = None
-    perf_process = run_perf_test(args.perf_test_cmd, args.perf_test_xml_path, args.output_folder)
    func_pipes = run_func_test(args.test_cmd, args.output_folder, args.num_parallel, args.skip_func_tests, args.global_time_limit)

    logging.info("Will wait functests to finish")
@@ -35,7 +35,7 @@ RUN apt-get update \
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

-RUN pip3 install urllib3 testflows==1.6.59 docker-compose docker dicttoxml kazoo tzlocal
+RUN pip3 install urllib3 testflows==1.6.62 docker-compose docker dicttoxml kazoo tzlocal

ENV DOCKER_CHANNEL stable
ENV DOCKER_VERSION 17.09.1-ce
docs/en/development/adding_test_queries.md (new file, 141 lines):
@@ -0,0 +1,141 @@

# How to add test queries to ClickHouse CI

ClickHouse has hundreds (or even thousands) of features. Every commit gets checked by a complex set of tests containing many thousands of test cases.

The core functionality is very well tested, but some corner cases and combinations of features may remain uncovered by ClickHouse CI.

Most of the bugs/regressions we see happen in that 'grey area' where test coverage is poor.

And we are very interested in covering by tests most of the possible scenarios and feature combinations used in real life.

## Why add tests

Why/when you should add a test case into the ClickHouse code:
1) you use some complicated scenarios / feature combinations / you have some corner case which is probably not widely used
2) you see that certain behavior changes between versions without notice in the changelog
3) you just want to help improve ClickHouse quality and ensure the features you use will not be broken in future releases
4) once the test is added/accepted, you can be sure the corner case you check will never be accidentally broken
5) you will be a part of a great open-source community
6) your name will be visible in the `system.contributors` table!
7) you will make the world a bit better :)
### Steps to do

#### Prerequisite

I assume you run some Linux machine (you can use docker / virtual machines on other OSes), have any modern browser / internet connection, and have some basic Linux & SQL skills.

No highly specialized knowledge is needed (so you don't need to know C++ or anything about how ClickHouse CI works).

#### Preparation

1) [create a GitHub account](https://github.com/join) (if you don't have one yet)
2) [set up git](https://docs.github.com/en/free-pro-team@latest/github/getting-started-with-github/set-up-git)
```bash
# for Ubuntu
sudo apt-get update
sudo apt-get install git

git config --global user.name "John Doe" # fill with your name
git config --global user.email "email@example.com" # fill with your email
```
3) [fork the ClickHouse project](https://docs.github.com/en/free-pro-team@latest/github/getting-started-with-github/fork-a-repo) - just open [https://github.com/ClickHouse/ClickHouse](https://github.com/ClickHouse/ClickHouse) and press the fork button in the top right corner:
![fork repo](https://github-images.s3.amazonaws.com/help/bootcamp/Bootcamp-Fork.png)

4) clone your fork to some folder on your PC, for example, `~/workspace/ClickHouse`
```
mkdir ~/workspace && cd ~/workspace
git clone https://github.com/< your GitHub username>/ClickHouse
cd ClickHouse
git remote add upstream https://github.com/ClickHouse/ClickHouse
```
#### New branch for the test

1) create a new branch from the latest clickhouse master
```
cd ~/workspace/ClickHouse
git fetch upstream
git checkout -b name_for_a_branch_with_my_test upstream/master
```

#### Install & run clickhouse

1) install `clickhouse-server` (follow the [official docs](https://clickhouse.tech/docs/en/getting-started/install/))
2) install the test configurations (they use a Zookeeper mock implementation and adjust some settings)
```
cd ~/workspace/ClickHouse/tests/config
sudo ./install.sh
```
3) run clickhouse-server
```
sudo systemctl restart clickhouse-server
```
#### Creating the test file

1) find the number for your test - find the file with the biggest number in `tests/queries/0_stateless/`

```sh
$ cd ~/workspace/ClickHouse
$ ls tests/queries/0_stateless/[0-9]*.reference | tail -n 1
tests/queries/0_stateless/01520_client_print_query_id.reference
```
Currently, the last number for the test is `01520`, so my test will have the number `01521`

2) create an SQL file with the next number, named after the feature you test

```sh
touch tests/queries/0_stateless/01521_dummy_test.sql
```

3) edit the SQL file with your favorite editor (see the hints on creating tests below)
```sh
vim tests/queries/0_stateless/01521_dummy_test.sql
```
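For illustration only — a minimal hypothetical test body (the table name and values are placeholders, not part of this guide):
```sql
-- hypothetical minimal test: create, fill, check, clean up
DROP TABLE IF EXISTS dummy;
CREATE TABLE dummy (x UInt64) ENGINE = Memory;
INSERT INTO dummy VALUES (1), (2), (3);
SELECT sum(x) FROM dummy;
DROP TABLE dummy;
```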

4) run the test, and put its result into the reference file:
```
clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee tests/queries/0_stateless/01521_dummy_test.reference
```

5) ensure everything is correct; if the test output is incorrect (due to some bug, for example), adjust the reference file using a text editor.
#### How to create a good test

- the test should be
    - minimal - create only tables related to the tested functionality, remove unrelated columns and parts of the query
    - fast - should not take longer than a few seconds (better: subseconds)
    - correct - fails when the feature is not working
    - deterministic
    - isolated / stateless
        - don't rely on the environment
        - don't rely on timing when possible
- try to cover corner cases (zeros / Nulls / empty sets / throwing exceptions)
- to test that a query returns an error, you can put a special comment after the query: `-- { serverError 60 }` or `-- { clientError 20 }` (see the sketch after this list)
- don't switch databases (unless necessary)
- you can create several table replicas on the same node if needed
- you can use one of the test cluster definitions when needed (see system.clusters)
- use `numbers` / `numbers_mt` / `zeros` / `zeros_mt` and similar for queries / to initialize data when applicable
- clean up the created objects after the test and before the test (DROP IF EXISTS) - in case of some dirty state
- prefer the sync mode of operations (mutations, merges, etc.)
- use other SQL files in the `0_stateless` folder as an example
- ensure the feature / feature combination you want to test is not yet covered by existing tests
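As a hedged illustration of the error-comment convention mentioned in the list above (the table name is a placeholder; code 60 corresponds to `UNKNOWN_TABLE`):
```sql
-- the comment tells the test runner to expect server error 60 instead of a result
SELECT * FROM table_that_does_not_exist; -- { serverError 60 }
```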
#### Commit / push / create PR

1) commit & push your changes
```sh
cd ~/workspace/ClickHouse
git add tests/queries/0_stateless/01521_dummy_test.sql
git add tests/queries/0_stateless/01521_dummy_test.reference
git commit # use some nice commit message when possible
git push origin HEAD
```
2) use the link shown during the push to create a PR into the main repo
3) adjust the PR title and contents; in `Changelog category (leave one)` keep `Build/Testing/Packaging Improvement`, and fill in the rest of the fields if you want.
@@ -23,7 +23,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

-### Install GCC 9 {#install-gcc-9}
+### Install GCC 10 {#install-gcc-10}

There are several ways to do this.
@@ -32,7 +32,7 @@ There are several ways to do this.
On Ubuntu 19.10 or newer:

    $ sudo apt-get update
-    $ sudo apt-get install gcc-9 g++-9
+    $ sudo apt-get install gcc-10 g++-10

#### Install from a PPA Package {#install-from-a-ppa-package}
@@ -42,18 +42,18 @@ On older Ubuntu:

    $ sudo apt-get install software-properties-common
    $ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
    $ sudo apt-get update
-    $ sudo apt-get install gcc-9 g++-9
+    $ sudo apt-get install gcc-10 g++-10
```

#### Install from Sources {#install-from-sources}

See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-### Use GCC 9 for Builds {#use-gcc-9-for-builds}
+### Use GCC 10 for Builds {#use-gcc-10-for-builds}

``` bash
-$ export CC=gcc-9
-$ export CXX=g++-9
+$ export CC=gcc-10
+$ export CXX=g++-10
```

### Checkout ClickHouse Sources {#checkout-clickhouse-sources}
@@ -88,7 +88,7 @@ The build requires the following components:

- Git (is used only to checkout the sources, it's not needed for the build)
- CMake 3.10 or newer
- Ninja (recommended) or Make
-- C++ compiler: gcc 9 or clang 8 or newer
+- C++ compiler: gcc 10 or clang 8 or newer
- Linker: lld or gold (the classic GNU ld won't work)
- Python (is only used inside LLVM build and it is optional)
@@ -131,13 +131,13 @@ ClickHouse uses several external libraries for building. All of them do not need to be installed separately...

## C++ Compiler {#c-compiler}

-Compilers GCC starting from version 9 and Clang version 8 or above are supported for building ClickHouse.
+Compilers GCC starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

Official Yandex builds currently use GCC because it generates machine code of slightly better performance (yielding a difference of up to several percent according to our benchmarks). And Clang is usually more convenient for development. Our continuous integration (CI) platform runs checks for about a dozen build combinations.

To install GCC on Ubuntu run: `sudo apt install gcc g++`

-Check the version of gcc: `gcc --version`. If it is below 9, then follow the instructions here: https://clickhouse.tech/docs/en/development/build/#install-gcc-9.
+Check the version of gcc: `gcc --version`. If it is below 10, then follow the instructions here: https://clickhouse.tech/docs/en/development/build/#install-gcc-10.

Mac OS X build is supported only for Clang. Just run `brew install llvm`
@@ -152,11 +152,11 @@ Now that you are ready to build ClickHouse we recommend you to create a separate directory...

You can have several different directories (build_release, build_debug, etc.) for different types of build.

-While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler (version 9 gcc compiler in this example).
+While inside the `build` directory, configure your build by running CMake. Before the first run, you need to define environment variables that specify the compiler (version 10 gcc compiler in this example).

Linux:

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
    cmake ..

Mac OS X:
@@ -343,8 +343,8 @@ The `set` index can be used with all functions. Function subsets for other indexes are shown in the table below.
|------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------|
| [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
| [notEquals(!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
-| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✔ |
-| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✗ | ✗ | ✗ |
+| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✗ |
+| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗ |
| [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
| [endsWith](../../../sql-reference/functions/string-functions.md#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ |
| [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ |
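For orientation, a hedged sketch of how such a skipping index is declared; the table and index names are invented for illustration:
```sql
-- a String column with a set-type data-skipping index; queries using the
-- functions marked with a check mark above can prune granules via idx_s
CREATE TABLE skip_demo
(
    s String,
    INDEX idx_s s TYPE set(100) GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY tuple();
```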
@@ -98,6 +98,7 @@ When creating a table, the following settings are applied:
- [max_bytes_in_join](../../../operations/settings/query-complexity.md#settings-max_bytes_in_join)
- [join_overflow_mode](../../../operations/settings/query-complexity.md#settings-join_overflow_mode)
- [join_any_take_last_row](../../../operations/settings/settings.md#settings-join_any_take_last_row)
+- [persistent](../../../operations/settings/settings.md#persistent)

The `Join`-engine tables can't be used in `GLOBAL JOIN` operations.
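As a hedged illustration of such settings being fixed at creation time (the table layout and names here are placeholders):
```sql
-- a Join-engine table that picks up join_any_take_last_row when created
CREATE TABLE id_val_join (id UInt32, val UInt8)
ENGINE = Join(ANY, LEFT, id)
SETTINGS join_any_take_last_row = 1;
```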
@@ -14,4 +14,10 @@ Data is always located in RAM. For `INSERT`, the blocks of inserted data are also...

For a rough server restart, the block of data on the disk might be lost or damaged. In the latter case, you may need to manually delete the file with damaged data.

+### Limitations and Settings {#join-limitations-and-settings}
+
+When creating a table, the following settings are applied:
+
+- [persistent](../../../operations/settings/settings.md#persistent)
+
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/set/) <!--hide-->
@@ -123,6 +123,7 @@ You can pass parameters to `clickhouse-client` (all parameters have a default value)...
- `--stacktrace` – If specified, also print the stack trace if an exception occurs.
- `--config-file` – The name of the configuration file.
- `--secure` – If specified, will connect to the server over a secure connection.
+- `--history_file` — Path to a file containing command history.
- `--param_<name>` — Value for a [query with parameters](#cli-queries-with-parameters).

### Configuration Files {#configuration_files}
@@ -36,6 +36,7 @@ toc_title: Adopters
| <a href="https://www.criteo.com/" class="favicon">Criteo</a> | Retail | Main product | — | — | [Slides in English, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup18/3_storetail.pptx) |
| <a href="https://www.chinatelecomglobal.com/" class="favicon">Dataliance for China Telecom</a> | Telecom | Analytics | — | — | [Slides in Chinese, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/telecom.pdf) |
| <a href="https://db.com" class="favicon">Deutsche Bank</a> | Finance | BI Analytics | — | — | [Slides in English, October 2019](https://bigdatadays.ru/wp-content/uploads/2019/10/D2-H3-3_Yakunin-Goihburg.pdf) |
+| <a href="https://deeplay.io/eng/" class="favicon">Deeplay</a> | Gaming Analytics | — | — | — | [Job advertisement, 2020](https://career.habr.com/vacancies/1000062568) |
| <a href="https://www.diva-e.com" class="favicon">Diva-e</a> | Digital consulting | Main Product | — | — | [Slides in English, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup29/ClickHouse-MeetUp-Unusual-Applications-sd-2019-09-17.pdf) |
| <a href="https://www.ecwid.com/" class="favicon">Ecwid</a> | E-commerce SaaS | Metrics, Logging | — | — | [Slides in Russian, April 2019](https://nastachku.ru/var/files/1/presentation/backend/2_Backend_6.pdf) |
| <a href="https://www.ebay.com/" class="favicon">eBay</a> | E-commerce | Logs, Metrics and Events | — | — | [Official website, Sep 2020](https://tech.ebayinc.com/engineering/ou-online-analytical-processing/) |
@@ -45,6 +46,7 @@ toc_title: Adopters
| <a href="https://fun.co/rp" class="favicon">FunCorp</a> | Games | | — | — | [Article](https://www.altinity.com/blog/migrating-from-redshift-to-clickhouse) |
| <a href="https://geniee.co.jp" class="favicon">Geniee</a> | Ad network | Main product | — | — | [Blog post in Japanese, July 2017](https://tech.geniee.co.jp/entry/2017/07/20/160100) |
| <a href="https://www.huya.com/" class="favicon">HUYA</a> | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) |
+| <a href="https://www.the-ica.com/" class="favicon">ICA</a> | FinTech | Risk Management | — | — | [Blog Post in English, Sep 2020](https://altinity.com/blog/clickhouse-vs-redshift-performance-for-fintech-risk-management?utm_campaign=ClickHouse%20vs%20RedShift&utm_content=143520807&utm_medium=social&utm_source=twitter&hss_channel=tw-3894792263) |
| <a href="https://www.idealista.com" class="favicon">Idealista</a> | Real Estate | Analytics | — | — | [Blog Post in English, April 2019](https://clickhouse.tech/blog/en/clickhouse-meetup-in-madrid-on-april-2-2019) |
| <a href="https://www.infovista.com/" class="favicon">Infovista</a> | Networks | Analytics | — | — | [Slides in English, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup30/infovista.pdf) |
| <a href="https://www.innogames.com" class="favicon">InnoGames</a> | Games | Metrics, Logging | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/graphite_and_clickHouse.pdf) |
@@ -68,6 +70,7 @@ toc_title: Adopters
| <a href="https://www.nuna.com/" class="favicon">Nuna Inc.</a> | Health Data Analytics | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=170) |
| <a href="https://www.oneapm.com/" class="favicon">OneAPM</a> | Monitorings and Data Analysis | Main product | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/8.%20clickhouse在OneAPM的应用%20杜龙.pdf) |
| <a href="https://www.percent.cn/" class="favicon">Percent 百分点</a> | Analytics | Main Product | — | — | [Slides in Chinese, June 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup24/4.%20ClickHouse万亿数据双中心的设计与实践%20.pdf) |
| <a href="https://www.percona.com/" class="favicon">Percona</a> | Performance analysis | Percona Monitoring and Management | — | — | [Official website, Mar 2020](https://www.percona.com/blog/2020/03/30/advanced-query-analysis-in-percona-monitoring-and-management-with-direct-clickhouse-access/) |
| <a href="https://plausible.io/" class="favicon">Plausible</a> | Analytics | Main Product | — | — | [Blog post, June 2020](https://twitter.com/PlausibleHQ/status/1273889629087969280) |
+| <a href="https://posthog.com/" class="favicon">PostHog</a> | Product Analytics | Main Product | — | — | [Release Notes, Oct 2020](https://posthog.com/blog/the-posthog-array-1-15-0) |
| <a href="https://postmates.com/" class="favicon">Postmates</a> | Delivery | — | — | — | [Talk in English, July 2020](https://youtu.be/GMiXCMFDMow?t=188) |
@@ -571,7 +571,7 @@ For more information, see the MergeTreeSettings.h header file.

Fine tuning for tables in the [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/mergetree.md).

-This setting has higher priority.
+This setting has a higher priority.

For more information, see the MergeTreeSettings.h header file.
@@ -384,7 +384,7 @@ Possible values:

- `'basic'` — Use basic parser.

-    ClickHouse can parse only the basic `YYYY-MM-DD HH:MM:SS` format. For example, `'2019-08-20 10:18:56'`.
+    ClickHouse can parse only the basic `YYYY-MM-DD HH:MM:SS` or `YYYY-MM-DD` format. For example, `'2019-08-20 10:18:56'` or `'2019-08-20'`.

Default value: `'basic'`.
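A minimal sketch of toggling the parser for a session (the value names are exactly those documented above):
```sql
-- with 'basic', text input must look like '2019-08-20 10:18:56' or '2019-08-20';
-- other forms require the 'best_effort' parser
SET date_time_input_format = 'basic';
```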
@@ -1765,6 +1765,23 @@ Default value: `0`.

- [Distributed Table Engine](../../engines/table-engines/special/distributed.md#distributed)
- [Managing Distributed Tables](../../sql-reference/statements/system.md#query-language-system-distributed)

## use_compact_format_in_distributed_parts_names {#use_compact_format_in_distributed_parts_names}

Uses a compact format for storing blocks for async INSERT (see `insert_distributed_sync`) into tables with the `Distributed` engine.

Possible values:

- 0 — Uses the `user[:password]@host:port#default_database` directory format.
- 1 — Uses the `[shard{shard_index}[_replica{replica_index}]]` directory format.

Default value: `1`.

!!! note "Note"
    - with `use_compact_format_in_distributed_parts_names=0`, changes to the cluster definition are not applied to async INSERTs.
    - with `use_compact_format_in_distributed_parts_names=1`, changing the order of nodes in the cluster definition changes the `shard_index`/`replica_index`, so be aware.
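A minimal, hedged sketch of switching to the legacy naming for a session:
```sql
-- fall back to the user[:password]@host:port#default_database directory naming
SET use_compact_format_in_distributed_parts_names = 0;
```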

## background_buffer_flush_schedule_pool_size {#background_buffer_flush_schedule_pool_size}

Sets the number of threads performing background flush in [Buffer](../../engines/table-engines/special/buffer.md)-engine tables. This setting is applied at ClickHouse server start and can't be changed in a user session.
@@ -2203,4 +2220,17 @@ Possible values:

Default value: `0`.

## persistent {#persistent}

Disables persistency for the [Set](../../engines/table-engines/special/set.md#set) and [Join](../../engines/table-engines/special/join.md#join) table engines.

Reduces the I/O overhead. Suitable for scenarios that pursue performance and do not require persistence.

Possible values:

- 1 — Enabled.
- 0 — Disabled.

Default value: `1`.
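A short sketch may help; it assumes, as with other engine-level settings, that `persistent` can be passed in the `SETTINGS` clause of `CREATE TABLE` (the table name is a placeholder):
```sql
-- hypothetical: a Set-engine table that skips writing its state to disk
CREATE TABLE lookup_set (id UInt64)
ENGINE = Set
SETTINGS persistent = 0;
```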

[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/) <!-- hide -->
@@ -4,6 +4,6 @@ toc_priority: 140

# sumWithOverflow {#sumwithoverflowx}

-Computes the sum of the numbers, using the same data type for the result as for the input parameters. If the sum exceeds the maximum value for this data type, the function returns an error.
+Computes the sum of the numbers, using the same data type for the result as for the input parameters. If the sum exceeds the maximum value for this data type, it is calculated with overflow.

Only works for numbers.
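An illustrative sketch of the wrap-around behavior (values chosen so that a `UInt8` sum exceeds 255; the subquery is invented for the example):
```sql
-- 200 + 100 = 300 does not fit in UInt8 (max 255); the result wraps to 44
SELECT sumWithOverflow(x) FROM (SELECT arrayJoin([toUInt8(200), toUInt8(100)]) AS x);
```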
@@ -3,10 +3,45 @@ toc_priority: 47
toc_title: Date
---

-# Date {#date}
+# Date {#data_type-date}

A date. Stored in two bytes as the number of days since 1970-01-01 (unsigned). Allows storing values from just after the beginning of the Unix Epoch to the upper threshold defined by a constant at the compilation stage (currently, this is until the year 2106, but the final fully-supported year is 2105).

The date value is stored without the time zone.

## Examples {#examples}

**1.** Creating a table with a `Date`-type column and inserting data into it:

``` sql
CREATE TABLE dt
(
    `timestamp` Date,
    `event_id` UInt8
)
ENGINE = TinyLog;
```

``` sql
INSERT INTO dt Values (1546300800, 1), ('2019-01-01', 2);
```

``` sql
SELECT * FROM dt;
```

``` text
┌──timestamp─┬─event_id─┐
│ 2019-01-01 │        1 │
│ 2019-01-01 │        2 │
└────────────┴──────────┘
```

## See Also {#see-also}

- [Functions for working with dates and times](../../sql-reference/functions/date-time-functions.md)
- [Operators for working with dates and times](../../sql-reference/operators/index.md#operators-datetime)
- [`DateTime` data type](../../sql-reference/data-types/datetime.md)

[Original article](https://clickhouse.tech/docs/en/data_types/date/) <!--hide-->
(new file, 91 lines)
@@ -0,0 +1,91 @@
---
toc_priority: 46
toc_title: Polygon Dictionaries With Grids
---

# Polygon dictionaries {#polygon-dictionaries}

Polygon dictionaries allow you to efficiently search for the polygon containing specified points.
For example: defining a city area by geographical coordinates.

Example configuration:

``` xml
<dictionary>
    <structure>
        <key>
            <name>key</name>
            <type>Array(Array(Array(Array(Float64))))</type>
        </key>

        <attribute>
            <name>name</name>
            <type>String</type>
            <null_value></null_value>
        </attribute>

        <attribute>
            <name>value</name>
            <type>UInt64</type>
            <null_value>0</null_value>
        </attribute>

    </structure>

    <layout>
        <polygon />
    </layout>

</dictionary>
```

The corresponding [DDL-query](../../../sql-reference/statements/create/dictionary.md#create-dictionary-query):
``` sql
CREATE DICTIONARY polygon_dict_name (
    key Array(Array(Array(Array(Float64)))),
    name String,
    value UInt64
)
PRIMARY KEY key
LAYOUT(POLYGON())
...
```

When configuring the polygon dictionary, the key must have one of two types:
- A simple polygon. It is an array of points.
- MultiPolygon. It is an array of polygons. Each polygon is a two-dimensional array of points. The first element of this array is the outer boundary of the polygon, and subsequent elements specify areas to be excluded from it.

Points can be specified as an array or a tuple of their coordinates. In the current implementation, only two-dimensional points are supported.

The user can [upload their own data](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md) in all formats supported by ClickHouse.

There are 3 types of [in-memory storage](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md) available:

- POLYGON_SIMPLE. This is a naive implementation, where a linear pass through all polygons is made for each query, and membership is checked for each one without using additional indexes.

- POLYGON_INDEX_EACH. A separate index is built for each polygon, which in most cases allows a quick check of whether a point belongs to it (optimized for geographical regions).
Also, a grid is superimposed on the area under consideration, which significantly narrows the number of polygons under consideration.
The grid is created by recursively dividing the cell into 16 equal parts and is configured with two parameters.
The division stops when the recursion depth reaches MAX_DEPTH or when the cell crosses no more than MIN_INTERSECTIONS polygons.
To answer a query, the cell corresponding to the point is located, and the indexes of the polygons stored in it are accessed alternately.

- POLYGON_INDEX_CELL. This placement also creates the grid described above. The same options are available. For each sheet cell, an index is built on all pieces of polygons that fall into it, which allows a query to be answered quickly.

- POLYGON. Synonym to POLYGON_INDEX_CELL.

Dictionary queries are carried out using standard [functions](../../../sql-reference/functions/ext-dict-functions.md) for working with external dictionaries.
An important difference is that here the keys are the points for which you want to find the polygon containing them.

Example of working with the dictionary defined above:
``` sql
CREATE TABLE points (
    x Float64,
    y Float64
)
...
SELECT tuple(x, y) AS key, dictGet(dict_name, 'name', key), dictGet(dict_name, 'value', key) FROM points ORDER BY x, y;
```

As a result of executing the last command, for each point in the 'points' table a minimum-area polygon containing that point will be found, and the requested attributes will be output.
@@ -89,7 +89,7 @@ If the index falls outside of the bounds of an array, it returns some default value...

## has(arr, elem) {#hasarr-elem}

Checks whether the ‘arr’ array has the ‘elem’ element.
-Returns 0 if the the element is not in the array, or 1 if it is.
+Returns 0 if the element is not in the array, or 1 if it is.

`NULL` is processed as a value.
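A small illustrative pair of calls (values invented), including the documented treatment of `NULL` as a regular value:
```sql
-- both expressions return 1
SELECT has([1, 2, NULL], NULL), has(['a', 'b'], 'a');
```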

@@ -153,15 +153,18 @@ A fast, decent-quality non-cryptographic hash function for a string obtained from...

`URLHash(s, N)` – Calculates a hash from a string up to the N level in the URL hierarchy, without one of the trailing symbols `/`,`?` or `#` at the end, if present.
Levels are the same as in URLHierarchy. This function is specific to Yandex.Metrica.

+## farmFingerprint64 {#farmfingerprint64}
+
## farmHash64 {#farmhash64}

-Produces a 64-bit [FarmHash](https://github.com/google/farmhash) hash value.
+Produces a 64-bit [FarmHash](https://github.com/google/farmhash) or Fingerprint value. Prefer `farmFingerprint64` for a stable and portable value.

``` sql
+farmFingerprint64(par1, ...)
farmHash64(par1, ...)
```

-The function uses the `Hash64` method from all [available methods](https://github.com/google/farmhash/blob/master/src/farmhash.h).
+These functions use the `Fingerprint64` and `Hash64` methods respectively from all [available methods](https://github.com/google/farmhash/blob/master/src/farmhash.h).

**Parameters**

@@ -221,3 +221,85 @@ returns

│ 1970-03-12 │ 1970-01-08 │ original │
└────────────┴────────────┴──────────┘
```

## OFFSET FETCH Clause {#offset-fetch}

`OFFSET` and `FETCH` allow you to retrieve data by portions. They specify a row block which you want to get by a single query.

``` sql
OFFSET offset_row_count {ROW | ROWS}] [FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}]
```

The `offset_row_count` or `fetch_row_count` value can be a number or a literal constant. You can omit `fetch_row_count`; by default, it equals 1.

`OFFSET` specifies the number of rows to skip before starting to return rows from the query.

`FETCH` specifies the maximum number of rows that can be in the result of a query.

The `ONLY` option is used to return rows that immediately follow the rows omitted by the `OFFSET`. In this case, `FETCH` is an alternative to the [LIMIT](../../../sql-reference/statements/select/limit.md) clause. For example, the following query

``` sql
SELECT * FROM test_fetch ORDER BY a OFFSET 1 ROW FETCH FIRST 3 ROWS ONLY;
```

is identical to the query

``` sql
SELECT * FROM test_fetch ORDER BY a LIMIT 3 OFFSET 1;
```

The `WITH TIES` option is used to return any additional rows that tie for the last place in the result set according to the `ORDER BY` clause. For example, if `fetch_row_count` is set to 5 but two additional rows match the values of the `ORDER BY` columns in the fifth row, the result set will contain seven rows.

!!! note "Note"
    According to the standard, the `OFFSET` clause must come before the `FETCH` clause if both are present.

### Examples {#examples}

Input table:

``` text
┌─a─┬─b─┐
│ 1 │ 1 │
│ 2 │ 1 │
│ 3 │ 4 │
│ 1 │ 3 │
│ 5 │ 4 │
│ 0 │ 6 │
│ 5 │ 7 │
└───┴───┘
```

Usage of the `ONLY` option:

``` sql
SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY;
```

Result:

``` text
┌─a─┬─b─┐
│ 2 │ 1 │
│ 3 │ 4 │
│ 5 │ 4 │
└───┴───┘
```

Usage of the `WITH TIES` option:

``` sql
SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS WITH TIES;
```

Result:

``` text
┌─a─┬─b─┐
│ 2 │ 1 │
│ 3 │ 4 │
│ 5 │ 4 │
│ 5 │ 7 │
└───┴───┘
```

[Original article](https://clickhouse.tech/docs/en/sql-reference/statements/select/order-by/) <!--hide-->

(Spanish translation of the build docs; translated to English, anchors kept as in the source)
@@ -19,7 +19,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

-## Install GCC 9 {#install-gcc-9}
+## Install GCC 10 {#install-gcc-10}

There are several ways to do this.

@@ -29,18 +29,18 @@ There are several ways to do this.
    $ sudo apt-get install software-properties-common
    $ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
    $ sudo apt-get update
-    $ sudo apt-get install gcc-9 g++-9
+    $ sudo apt-get install gcc-10 g++-10
```

### Install from sources {#install-from-sources}

See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-## Use GCC 9 for builds {#use-gcc-9-for-builds}
+## Use GCC 10 for builds {#use-gcc-10-for-builds}

``` bash
-$ export CC=gcc-9
-$ export CXX=g++-9
+$ export CC=gcc-10
+$ export CXX=g++-10
```

## Checkout ClickHouse Sources {#checkout-clickhouse-sources}

@@ -76,7 +76,7 @@ The build requires the following components:
- Git (used only to check out the sources, not needed for the build)
- CMake 3.10 or newer
- Ninja (recommended) or Make
-- C++ compiler: gcc 9 or clang 8 or newer
+- C++ compiler: gcc 10 or clang 8 or newer
- Linker: lld or gold (the classic GNU ld won't work)
- Python (only used inside the LLVM build and optional)

@@ -135,13 +135,13 @@ ClickHouse uses several external libraries for building. All of them...

# C++ Compiler {#c-compiler}

-GCC compilers starting from version 9 and Clang version 8 or above are supported for building ClickHouse.
+GCC compilers starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

Official Yandex builds currently use GCC because it generates machine code with slightly better performance (up to several percent difference according to our benchmarks). Clang is usually more convenient for development. Our continuous integration (CI) platform runs checks for about a dozen build combinations.

To install GCC on Ubuntu, run: `sudo apt install gcc g++`

-Check the gcc version: `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/es/development/build/#install-gcc-9.
+Check the gcc version: `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/es/development/build/#install-gcc-10.

Mac OS X builds are supported only for Clang. Just run `brew install llvm`

@@ -156,11 +156,11 @@ Now that you are ready to build ClickHouse, we recommend creating a separate directory...

You can have several different directories (build_release, build_debug, etc.) for different build types.

-While inside the `build` directory, configure your build by running CMake. Before the first run, define environment variables that specify the compiler (gcc version 9 in this example).
+While inside the `build` directory, configure your build by running CMake. Before the first run, define environment variables that specify the compiler (gcc version 10 in this example).

Linux:

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
    cmake ..

Mac OS X:

(Farsi translation of the build docs; translated to English, anchors kept as in the source — note the heading text was not bumped to 10 in the source)
@@ -20,7 +20,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

-## Install GCC 9 {#install-gcc-9}
+## Install GCC 9 {#install-gcc-10}

There are several ways to do this.

@@ -30,18 +30,18 @@
    $ sudo apt-get install software-properties-common
    $ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
    $ sudo apt-get update
-    $ sudo apt-get install gcc-9 g++-9
+    $ sudo apt-get install gcc-10 g++-10
```

### Install from sources {#install-from-sources}

See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-## Use GCC 9 for builds {#use-gcc-9-for-builds}
+## Use GCC 10 for builds {#use-gcc-10-for-builds}

``` bash
-$ export CC=gcc-9
-$ export CXX=g++-9
+$ export CC=gcc-10
+$ export CXX=g++-10
```

## Checkout ClickHouse sources {#checkout-clickhouse-sources}

@@ -77,7 +77,7 @@ $ cd ..
- Git (used only to check out the sources needed for the build)
- CMake 3.10 or newer
- Ninja (recommended) or Make
-- C++ compiler: gcc 9 or clang 8 or newer
+- C++ compiler: gcc 10 or clang 8 or newer
- Linker: lld or gold (the classic GNU ld won't work)
- Python (only used inside the LLVM build and optional)

@@ -137,13 +137,13 @@ toc_title: ...

# C++ Compiler {#c-compiler}

-GCC compilers starting from version 9 and Clang version 8 or above are supported for building ClickHouse.
+GCC compilers starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

Official Yandex builds currently use GCC because it generates machine code with slightly better performance (up to several percent difference according to our benchmarks). Clang is usually more convenient for development. Our continuous integration (CI) platform runs checks for about a dozen build combinations.

To install GCC on Ubuntu, run: `sudo apt install gcc g++`

-Check the gcc version: `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/fa/development/build/#install-gcc-9.
+Check the gcc version: `gcc --version`. If it is below 10, follow the instructions here: https://clickhouse.tech/docs/fa/development/build/#install-gcc-10.

Mac OS X builds are supported only for Clang. Just run `brew install llvm`

@@ -158,11 +158,11 @@ toc_title: ...

You can have several different directories (build_release, build_debug, etc.) for different build types.

-While inside the `build` directory, configure your build by running CMake. Before the first run, define environment variables that specify the compiler (gcc version 9 in this example).
+While inside the `build` directory, configure your build by running CMake. Before the first run, define environment variables that specify the compiler (gcc version 10 in this example).

Linux:

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
    cmake ..

Mac OS X:

(French translation of the build docs; translated to English, anchors kept as in the source)
@@ -19,7 +19,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

-## Install GCC 9 {#install-gcc-9}
+## Install GCC 10 {#install-gcc-10}

There are several ways to do this.

@@ -29,18 +29,18 @@ There are several ways to do this.
    $ sudo apt-get install software-properties-common
    $ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
    $ sudo apt-get update
-    $ sudo apt-get install gcc-9 g++-9
+    $ sudo apt-get install gcc-10 g++-10
```

### Install from Sources {#install-from-sources}

See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-## Use GCC 9 for Builds {#use-gcc-9-for-builds}
+## Use GCC 10 for Builds {#use-gcc-10-for-builds}

``` bash
-$ export CC=gcc-9
-$ export CXX=g++-9
+$ export CC=gcc-10
+$ export CXX=g++-10
```

## Checkout ClickHouse Sources {#checkout-clickhouse-sources}

@@ -76,7 +76,7 @@ The build requires the following components:
- Git (used only to check out the sources, not needed for the build)
- CMake 3.10 or newer
- Ninja (recommended) or Make
-- C++ compiler: gcc 9 or clang 8 or newer
+- C++ compiler: gcc 10 or clang 8 or newer
- Linker: lld or gold (the classic GNU ld won't work)
- Python (only used inside the LLVM build and optional)

@@ -135,13 +135,13 @@ ClickHouse uses several external libraries for building. All of them...

# C++ Compiler {#c-compiler}

-GCC compilers starting from version 9 and Clang version 8 or above are supported for building ClickHouse.
+GCC compilers starting from version 10 and Clang version 8 or above are supported for building ClickHouse.

Official Yandex builds currently use GCC because it generates machine code with slightly better performance (up to several percent difference according to our benchmarks). Clang is usually more convenient for development. Our continuous integration (CI) platform runs checks for about a dozen build combinations.

To install GCC on Ubuntu, run: `sudo apt install gcc g++`

-Check the gcc version: `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/fr/development/build/#install-gcc-9.
+Check the gcc version: `gcc --version`. If it is below 10, follow the instructions here: https://clickhouse.tech/docs/fr/development/build/#install-gcc-10.

Mac OS X builds are supported only for Clang. Just run `brew install llvm`

@@ -156,11 +156,11 @@ Now that you are ready to build ClickHouse, we recommend creating a separate directory...

You can have several different directories (build_release, build_debug, etc.) for different build types.

-While inside the `build` directory, configure your build by running CMake. Before the first run, define environment variables that specify the compiler (gcc version 9 in this example).
+While inside the `build` directory, configure your build by running CMake. Before the first run, define environment variables that specify the compiler (gcc version 10 in this example).

Linux:

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
    cmake ..

Mac OS X:

(Japanese translation of the build docs; translated to English, anchors kept as in the source — note the heading text was not bumped to 10 in the source, and a mojibake heading is restored from its anchor)
@@ -19,7 +19,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

-## Install GCC 9 {#install-gcc-9}
+## Install GCC 9 {#install-gcc-10}

There are several ways to do this.

@@ -29,18 +29,18 @@
    $ sudo apt-get install software-properties-common
    $ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
    $ sudo apt-get update
-    $ sudo apt-get install gcc-9 g++-9
+    $ sudo apt-get install gcc-10 g++-10
```

### Install from sources {#install-from-sources}

See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-## Use GCC 9 for builds {#use-gcc-9-for-builds}
+## Use GCC 9 for builds {#use-gcc-10-for-builds}

``` bash
-$ export CC=gcc-9
-$ export CXX=g++-9
+$ export CC=gcc-10
+$ export CXX=g++-10
```

## Checkout ClickHouse Sources {#checkout-clickhouse-sources}

@@ -141,7 +141,7 @@ Building ClickHouse is supported with GCC from version 9 and Clang...

To install GCC on Ubuntu: `sudo apt install gcc g++`

-Check the gcc version: `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/ja/development/build/#install-gcc-9.
+Check the gcc version: `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/ja/development/build/#install-gcc-10.

Mac OS X builds are supported only with Clang. Just run `brew install llvm`

@@ -160,7 +160,7 @@ Now that you are ready to build ClickHouse, create a separate directory...

Linux:

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
    cmake ..

Mac OS X:

(Russian translation of the build docs; translated to English)
@@ -142,7 +142,7 @@ ClickHouse uses a number of external libraries for the build...

To install GCC on Ubuntu, run: `sudo apt install gcc g++`.

-Check the gcc version: `gcc --version`. If it is below 9, follow the instructions: https://clickhouse.tech/docs/ru/development/build/#install-gcc-9.
+Check the gcc version: `gcc --version`. If it is below 10, follow the instructions: https://clickhouse.tech/docs/ru/development/build/#install-gcc-10.

Building on Mac OS X is supported only with the Clang compiler. To install it, run `brew install llvm`

@@ -162,7 +162,7 @@ ClickHouse uses a number of external libraries for the build...

Linux:

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
    cmake ..

Mac OS X:

@@ -332,8 +332,8 @@ INDEX b (u64 * length(str), i32 + f64 * 100, date, str) TYPE set(100) GRANULARITY 4
|------------------------------------------------------------------------------------------------------------|-------------|--------|-------------|-------------|---------------|
| [equals (=, ==)](../../../sql-reference/functions/comparison-functions.md#function-equals) | ✔ | ✔ | ✔ | ✔ | ✔ |
| [notEquals(!=, \<\>)](../../../sql-reference/functions/comparison-functions.md#function-notequals) | ✔ | ✔ | ✔ | ✔ | ✔ |
-| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✗ | ✗ |
-| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✔ | ✗ | ✗ |
+| [like](../../../sql-reference/functions/string-search-functions.md#function-like) | ✔ | ✔ | ✔ | ✔ | ✗ |
+| [notLike](../../../sql-reference/functions/string-search-functions.md#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗ |
| [startsWith](../../../sql-reference/functions/string-functions.md#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ |
| [endsWith](../../../sql-reference/functions/string-functions.md#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ |
| [multiSearchAny](../../../sql-reference/functions/string-search-functions.md#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ |

(Russian docs for the Buffer engine; translated to English)
@@ -64,6 +64,6 @@ CREATE TABLE merge.hits_buffer AS merge.hits ENGINE = Buffer(merge, hits, 16, 10...

Buffer tables are used when too many INSERTs arrive from a large number of servers per unit of time, the data cannot be buffered before insertion, and so the INSERTs cannot keep up.

-Note that even for Buffer tables it does not make sense to insert data one row at a time: this achieves only a few thousand rows per second, while inserting larger blocks can achieve over a million rows per second (see the "Performance" section).
+Note that even for Buffer tables it does not make sense to insert data one row at a time: this achieves only a few thousand rows per second, while inserting larger blocks can achieve over a million rows per second (see the ["Performance"](../../../introduction/performance/) section).

[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/buffer/) <!--hide-->

(Russian docs for the Join engine; translated to English)
@@ -95,6 +95,7 @@ SELECT joinGet('id_val_join', 'val', toUInt32(1))
- [max_bytes_in_join](../../../operations/settings/query-complexity.md#settings-max_bytes_in_join)
- [join_overflow_mode](../../../operations/settings/query-complexity.md#settings-join_overflow_mode)
- [join_any_take_last_row](../../../operations/settings/settings.md#settings-join_any_take_last_row)
+- [persistent](../../../operations/settings/settings.md#persistent)

Tables with the `Join` engine cannot be used in `GLOBAL JOIN` operations.

(Russian docs for the Set engine; translated to English)
@@ -14,4 +14,10 @@ toc_title: Set

On a rough server restart, the block of data on disk may be lost or damaged. In the latter case, you may need to delete the file with the damaged data manually.

+### Limitations and Settings {#join-limitations-and-settings}
+
+When creating a table, the following settings are applied:
+
+- [persistent](../../../operations/settings/settings.md#persistent)
+
[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/set/) <!--hide-->

(Russian server-configuration docs; translated to English)
@@ -467,6 +467,26 @@ ClickHouse checks the `min_part_size` and `min_part_size_ratio` conditions...
<max_concurrent_queries>100</max_concurrent_queries>
```

## max_concurrent_queries_for_all_users {#max-concurrent-queries-for-all-users}

If the value of this setting is less than or equal to the current number of concurrently processed queries, an exception is thrown.

Example: `max_concurrent_queries_for_all_users` is set to 99 for all users. To keep running queries even when the server is overloaded, the database administrator sets it to 100 for themselves.

Changing the setting for one query or user does not affect other queries.

Default value: `0` — no limit.

**Example**

``` xml
<max_concurrent_queries_for_all_users>99</max_concurrent_queries_for_all_users>
```

**See Also**

- [max_concurrent_queries](#max-concurrent-queries)

## max_connections {#max-connections}

The maximum number of incoming connections.

@@ -535,6 +555,22 @@ ClickHouse checks the `min_part_size` and `min_part_size_ratio` conditions...
</merge_tree>
```

## replicated_merge_tree {#server_configuration_parameters-replicated_merge_tree}

Fine tuning for tables in [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/mergetree.md).

This setting has a higher priority.

For more information, see the MergeTreeSettings.h header file.

**Example**

``` xml
<replicated_merge_tree>
    <max_suspicious_broken_parts>5</max_suspicious_broken_parts>
</replicated_merge_tree>
```

## openSSL {#server_configuration_parameters-openssl}

SSL client/server settings.

(Russian settings docs; translated to English)
@@ -2082,4 +2082,17 @@ SELECT CAST(toNullable(toInt32(0)) AS Int32) as x, toTypeName(x);

- The [CAST](../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) function

## persistent {#persistent}

Disables persistency for the [Set](../../engines/table-engines/special/set.md#set) and [Join](../../engines/table-engines/special/join.md#join) table engines.

Reduces I/O overhead. Useful when high performance is required and persistence is not.

Possible values:

- 1 — enabled.
- 0 — disabled.

Default value: `1`.

[Original article](https://clickhouse.tech/docs/ru/operations/settings/settings/) <!--hide-->

(Russian docs for polygon dictionaries; translated to English)
@@ -1,4 +1,4 @@
-# Polygon dictionaries {#slovari-polygonov}
+# Polygon dictionaries {#polygon-dictionaries}

Polygon dictionaries allow you to efficiently search for the polygon containing given points among a set of polygons.
For example: determining a city district by geographical coordinates.
||||
|
@@ -152,21 +152,43 @@ FROM test.Orders;

- `QUARTER`
- `YEAR`

You can also use a string literal as the value of the `INTERVAL` operator. For example, the expression `INTERVAL 1 HOUR` is identical to `INTERVAL '1 hour'` or `INTERVAL '1' hour`.

!!! warning "Warning"
    Intervals of different types cannot be combined. You cannot use expressions like `INTERVAL 4 DAY 1 HOUR`. Instead, express the interval in units that are smaller than or equal to the smallest unit of the interval, for example `INTERVAL 25 HOUR`. You can also perform consecutive operations, as shown in the example below.

-Example:
+Examples:

``` sql
-SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR
+SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR;
```

``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐
-│ 2019-10-23 11:16:28 │ 2019-10-27 14:16:28 │
+│ 2020-11-03 22:09:50 │ 2020-11-08 01:09:50 │
└─────────────────────┴────────────────────────────────────────────────────────┘
```

``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL '4 day' + INTERVAL '3 hour';
```

``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐
│ 2020-11-03 22:12:10 │ 2020-11-08 01:12:10 │
└─────────────────────┴────────────────────────────────────────────────────────┘
```

``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL '4' day + INTERVAL '3' hour;
```

``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay('4')), toIntervalHour('3'))─┐
│ 2020-11-03 22:33:19 │ 2020-11-08 01:33:19 │
└─────────────────────┴────────────────────────────────────────────────────────────┘
```

**See Also**

- [Interval](../../sql-reference/operators/index.md) data type
@@ -214,3 +214,85 @@ ORDER BY

│ 1970-03-12 │ 1970-01-08 │ original │
└────────────┴────────────┴──────────┘
```

## OFFSET FETCH Clause {#offset-fetch}

`OFFSET` and `FETCH` allow you to retrieve data in portions. They specify which rows you want to get in the query result.

``` sql
[OFFSET offset_row_count {ROW | ROWS}] [FETCH {FIRST | NEXT} fetch_row_count {ROW | ROWS} {ONLY | WITH TIES}]
```

`offset_row_count` or `fetch_row_count` can be a number or a literal constant. If you omit `fetch_row_count`, it defaults to 1.

`OFFSET` specifies the number of rows to skip before starting to return rows from the query.

`FETCH` specifies the maximum number of rows that can be in the query result.

The `ONLY` option is used to return the rows that immediately follow the rows skipped by the `OFFSET` clause. In this case, `FETCH` is an alternative to [LIMIT](../../../sql-reference/statements/select/limit.md). For example, the following query

``` sql
SELECT * FROM test_fetch ORDER BY a OFFSET 1 ROW FETCH FIRST 3 ROWS ONLY;
```

is identical to the query

``` sql
SELECT * FROM test_fetch ORDER BY a LIMIT 3 OFFSET 1;
```

The `WITH TIES` option is used to also return any rows that tie with the last row of the result according to the `ORDER BY` columns. For example, if `fetch_row_count` is 5 and two more rows have the same values in the `ORDER BY` columns as the fifth row, the final result set will contain 7 rows.

!!! note "Note"
    The `OFFSET` clause must come before the `FETCH` clause if both are present.

### Examples {#examples}

Input table:

``` text
┌─a─┬─b─┐
│ 1 │ 1 │
│ 2 │ 1 │
│ 3 │ 4 │
│ 1 │ 3 │
│ 5 │ 4 │
│ 0 │ 6 │
│ 5 │ 7 │
└───┴───┘
```

Using the `ONLY` option:

``` sql
SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS ONLY;
```

Result:

``` text
┌─a─┬─b─┐
│ 2 │ 1 │
│ 3 │ 4 │
│ 5 │ 4 │
└───┴───┘
```

Using the `WITH TIES` option:

``` sql
SELECT * FROM test_fetch ORDER BY a OFFSET 3 ROW FETCH FIRST 3 ROWS WITH TIES;
```

Result:

``` text
┌─a─┬─b─┐
│ 2 │ 1 │
│ 3 │ 4 │
│ 5 │ 4 │
│ 5 │ 7 │
└───┴───┘
```

[Original article](https://clickhouse.tech/docs/ru/sql-reference/statements/select/order-by/) <!--hide-->
@@ -18,7 +18,7 @@ Markdown==3.3.2

MarkupSafe==1.1.1
mkdocs==1.1.2
mkdocs-htmlproofer-plugin==0.0.3
-mkdocs-macros-plugin==0.4.17
+mkdocs-macros-plugin==0.4.20
nltk==3.5
nose==1.3.7
protobuf==3.13.0
@@ -19,7 +19,7 @@ $ sudo apt-get install git cmake python ninja-build

Or cmake3 instead of cmake on older systems.

-## Install GCC 9 {#install-gcc-9}
+## Install GCC 10 {#install-gcc-10}

There are several ways to do this.

@@ -29,18 +29,18 @@ There are several ways to do this.

$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ubuntu-toolchain-r/test
$ sudo apt-get update
-$ sudo apt-get install gcc-9 g++-9
+$ sudo apt-get install gcc-10 g++-10
```

### Install from sources {#install-from-sources}

See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-## Use GCC 9 for builds {#use-gcc-9-for-builds}
+## Use GCC 10 for builds {#use-gcc-10-for-builds}

``` bash
-$ export CC=gcc-9
-$ export CXX=g++-9
+$ export CC=gcc-10
+$ export CXX=g++-10
```

## Checkout ClickHouse Sources {#checkout-clickhouse-sources}

@@ -76,7 +76,7 @@ The build requires the following components:

- Git (used only to check out the sources, not needed for the build)
- CMake 3.10 or newer
- Ninja (recommended) or Make
-- C++ compiler: gcc 9 or clang 8 or newer
+- C++ compiler: gcc 10 or clang 8 or newer
- Linker: lld or gold (the classic GNU ld won't work)
- Python (only used in the LLVM build and is optional)

@@ -141,7 +141,7 @@ Yandex officially uses GCC at the moment because it performs slightly better

To install GCC on Ubuntu, run: `sudo apt install gcc g++`

-Check the gcc version: `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/tr/development/build/#install-gcc-9.
+Check the gcc version: `gcc --version`. If it is below 10, follow the instructions here: https://clickhouse.tech/docs/tr/development/build/#install-gcc-10.

The Mac OS X build is only supported for Clang. Just run `brew install llvm`

@@ -160,7 +160,7 @@ You can have several different directories (build_release, build_debug, etc.) f

Linux:

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
    cmake ..

Mac OS X:
@@ -35,7 +35,7 @@ sudo apt-get install git cmake ninja-build

-Or cmake3 instead of cmake on older systems.
+On older systems, use cmake3 instead of cmake.

-## Install GCC 9 {#an-zhuang-gcc-9}
+## Install GCC 10 {#an-zhuang-gcc-10}

There are several ways to do this.

@@ -45,18 +45,18 @@ sudo apt-get install git cmake ninja-build

sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
-sudo apt-get install gcc-9 g++-9
+sudo apt-get install gcc-10 g++-10
```

### Install GCC from sources {#yuan-ma-an-zhuang-gcc}

See [utils/ci/build-gcc-from-sources.sh](https://github.com/ClickHouse/ClickHouse/blob/master/utils/ci/build-gcc-from-sources.sh)

-## Build with GCC 9 {#shi-yong-gcc-9-lai-bian-yi}
+## Build with GCC 10 {#shi-yong-gcc-10-lai-bian-yi}

``` bash
-export CC=gcc-9
-export CXX=g++-9
+export CC=gcc-10
+export CXX=g++-10
```

## Checkout ClickHouse Sources {#la-qu-clickhouse-yuan-ma-1}

@@ -129,7 +129,7 @@ Yandex officially builds ClickHouse with GCC because the machine code it generates performs better

To install GCC on Ubuntu, run: `sudo apt install gcc g++`

-Check the gcc version with `gcc --version`. If it is below 9, follow the instructions here: https://clickhouse.tech/docs/zh/development/build/#an-zhuang-gcc-9.
+Check the gcc version with `gcc --version`. If it is below 10, follow the instructions here: https://clickhouse.tech/docs/zh/development/build/#an-zhuang-gcc-10.

To install GCC on Mac OS X, run: `brew install gcc`

@@ -146,7 +146,7 @@ Yandex officially builds ClickHouse with GCC because the machine code it generates performs better

In the `build` directory, configure the build by running CMake. Before the first run, define the environment variables that select the compiler (gcc 10 in this example).

-    export CC=gcc-9 CXX=g++-9
+    export CC=gcc-10 CXX=g++-10
    cmake ..

The `CC` variable selects the compiler for C (short for C Compiler), and the `CXX` variable selects the C++ compiler to use for the build.
@@ -295,9 +295,6 @@ else ()

    install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-git-import DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
    list(APPEND CLICKHOUSE_BUNDLE clickhouse-git-import)
endif ()
-if(ENABLE_CLICKHOUSE_ODBC_BRIDGE)
-    list(APPEND CLICKHOUSE_BUNDLE clickhouse-odbc-bridge)
-endif()

install (TARGETS clickhouse RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
@@ -75,6 +75,7 @@
#include <Common/InterruptListener.h>
#include <Functions/registerFunctions.h>
#include <AggregateFunctions/registerAggregateFunctions.h>
+#include <Formats/registerFormats.h>
#include <Common/Config/configReadClient.h>
#include <Storages/ColumnsDescription.h>
#include <common/argsToConfig.h>

@@ -463,6 +464,7 @@ private:
    {
        UseSSL use_ssl;

+       registerFormats();
        registerFunctions();
        registerAggregateFunctions();

@@ -2329,6 +2331,7 @@ public:
        ("query-fuzzer-runs", po::value<int>()->default_value(0), "query fuzzer runs")
        ("opentelemetry-traceparent", po::value<std::string>(), "OpenTelemetry traceparent header as described by W3C Trace Context recommendation")
        ("opentelemetry-tracestate", po::value<std::string>(), "OpenTelemetry tracestate header as described by W3C Trace Context recommendation")
+       ("history_file", po::value<std::string>(), "path to history file")
        ;

        Settings cmd_settings;

@@ -2485,6 +2488,8 @@ public:
            config().setInt("suggestion_limit", options["suggestion_limit"].as<int>());
        if (options.count("highlight"))
            config().setBool("highlight", options["highlight"].as<bool>());
+       if (options.count("history_file"))
+           config().setString("history_file", options["history_file"].as<std::string>());

        if ((query_fuzzer_runs = options["query-fuzzer-runs"].as<int>()))
        {
@@ -1,6 +1,7 @@
#include "ClusterCopierApp.h"
#include <Common/StatusFile.h>
#include <Common/TerminalSize.h>
+#include <Formats/registerFormats.h>
#include <unistd.h>

@@ -122,6 +123,7 @@ void ClusterCopierApp::mainImpl()
    registerStorages();
    registerDictionaries();
    registerDisks();
+   registerFormats();

    static const std::string default_database = "_local";
    DatabaseCatalog::instance().attachDatabase(default_database, std::make_shared<DatabaseMemory>(default_database, *context));
@@ -33,6 +33,7 @@
#include <Storages/registerStorages.h>
#include <Dictionaries/registerDictionaries.h>
#include <Disks/registerDisks.h>
+#include <Formats/registerFormats.h>
#include <boost/program_options/options_description.hpp>
#include <boost/program_options.hpp>
#include <common/argsToConfig.h>

@@ -224,6 +225,7 @@ try
    registerStorages();
    registerDictionaries();
    registerDisks();
+   registerFormats();

    /// Maybe useless
    if (config().has("macros"))
@@ -23,6 +23,7 @@
#include <Common/HashTable/HashMap.h>
#include <Common/typeid_cast.h>
#include <Common/assert_cast.h>
+#include <Formats/registerFormats.h>
#include <Core/Block.h>
#include <common/StringRef.h>
#include <common/DateLUT.h>

@@ -1050,6 +1051,8 @@ try
    using namespace DB;
    namespace po = boost::program_options;

+   registerFormats();
+
    po::options_description description = createOptionsDescription("Options", getTerminalWidth());
    description.add_options()
        ("help", "produce help message")
@@ -10,19 +10,8 @@ set (CLICKHOUSE_ODBC_BRIDGE_SOURCES
    PingHandler.cpp
    SchemaAllowedHandler.cpp
    validateODBCConnectionString.cpp
+   odbc-bridge.cpp
)
-set (CLICKHOUSE_ODBC_BRIDGE_LINK
-    PRIVATE
-        clickhouse_parsers
-        clickhouse_aggregate_functions
-        daemon
-        dbms
-        Poco::Data
-    PUBLIC
-        Poco::Data::ODBC
-)
-
-clickhouse_program_add_library(odbc-bridge)

if (OS_LINUX)
    # clickhouse-odbc-bridge is always a separate binary.
@@ -30,10 +19,17 @@ if (OS_LINUX)
    set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--no-export-dynamic")
endif ()

-add_executable(clickhouse-odbc-bridge odbc-bridge.cpp)
-set_target_properties(clickhouse-odbc-bridge PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..)
+add_executable(clickhouse-odbc-bridge ${CLICKHOUSE_ODBC_BRIDGE_SOURCES})

-clickhouse_program_link_split_binary(odbc-bridge)
+target_link_libraries(clickhouse-odbc-bridge PRIVATE
+    daemon
+    dbms
+    clickhouse_parsers
+    Poco::Data
+    Poco::Data::ODBC
+)
+
+set_target_properties(clickhouse-odbc-bridge PROPERTIES RUNTIME_OUTPUT_DIRECTORY ..)

if (USE_GDB_ADD_INDEX)
    add_custom_command(TARGET clickhouse-odbc-bridge POST_BUILD COMMAND ${GDB_ADD_INDEX_EXE} ../clickhouse-odbc-bridge COMMENT "Adding .gdb-index to clickhouse-odbc-bridge" VERBATIM)
@@ -18,11 +18,13 @@
#include <Common/Exception.h>
#include <Common/StringUtils/StringUtils.h>
#include <Common/config.h>
+#include <Formats/registerFormats.h>
#include <common/logger_useful.h>
#include <ext/scope_guard.h>
#include <ext/range.h>
#include <Common/SensitiveDataMasker.h>


namespace DB
{
namespace ErrorCodes

@@ -160,6 +162,8 @@ int ODBCBridge::main(const std::vector<std::string> & /*args*/)
    if (is_help)
        return Application::EXIT_OK;

+   registerFormats();
+
    LOG_INFO(log, "Starting up");
    Poco::Net::ServerSocket socket;
    auto address = socketBindListen(socket, hostname, port, log);
@@ -1,3 +1,2 @@
-add_executable (validate-odbc-connection-string validate-odbc-connection-string.cpp)
-clickhouse_target_link_split_lib(validate-odbc-connection-string odbc-bridge)
+add_executable (validate-odbc-connection-string validate-odbc-connection-string.cpp ../validateODBCConnectionString.cpp)
target_link_libraries (validate-odbc-connection-string PRIVATE clickhouse_common_io)
@@ -51,6 +51,7 @@
#include <AggregateFunctions/registerAggregateFunctions.h>
#include <Functions/registerFunctions.h>
#include <TableFunctions/registerTableFunctions.h>
+#include <Formats/registerFormats.h>
#include <Storages/registerStorages.h>
#include <Dictionaries/registerDictionaries.h>
#include <Disks/registerDisks.h>

@@ -266,6 +267,7 @@ int Server::main(const std::vector<std::string> & /*args*/)
    registerStorages();
    registerDictionaries();
    registerDisks();
+   registerFormats();

    CurrentMetrics::set(CurrentMetrics::Revision, ClickHouseRevision::getVersionRevision());
    CurrentMetrics::set(CurrentMetrics::VersionInteger, ClickHouseRevision::getVersionInteger());
programs/server/config.d/test_cluster_with_incorrect_pw.xml (new symbolic link, 1 line)
@@ -0,0 +1 @@
+../../../tests/config/config.d/test_cluster_with_incorrect_pw.xml
@@ -198,6 +198,7 @@ namespace

    /// Serialize the list of ATTACH queries to a string.
    std::stringstream ss;
+   ss.exceptions(std::ios::failbit);
    for (const ASTPtr & query : queries)
        ss << *query << ";\n";
    String file_contents = std::move(ss).str();

@@ -353,6 +354,7 @@ String DiskAccessStorage::getStorageParamsJSON() const
    if (readonly)
        json.set("readonly", readonly.load());
    std::ostringstream oss;
+   oss.exceptions(std::ios::failbit);
    Poco::JSON::Stringifier::stringify(json, oss);
    return oss.str();
}
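The pattern repeated throughout these hunks — calling `exceptions(std::ios::failbit)` on a string stream right after constructing it — makes any subsequent stream failure throw `std::ios_base::failure` instead of silently leaving the stream in a bad state that each call site would have to check. A minimal self-contained sketch of the idea, using only the standard library (the `setstate` call is just a way to simulate a failed operation):

``` cpp
#include <iostream>
#include <sstream>

int main()
{
    std::ostringstream oss;
    // Any subsequent operation that sets failbit on this stream will now
    // throw std::ios_base::failure instead of failing silently.
    oss.exceptions(std::ios::failbit);

    try
    {
        oss << 42;                       // fine
        oss.setstate(std::ios::failbit); // simulate a failed insertion
    }
    catch (const std::ios_base::failure & e)
    {
        std::cerr << "stream failure surfaced as an exception: " << e.what() << '\n';
        return 1;
    }
    return 0;
}
```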
@@ -151,6 +151,7 @@ String LDAPAccessStorage::getStorageParamsJSON() const
    params_json.set("roles", default_role_names);

    std::ostringstream oss;
+   oss.exceptions(std::ios::failbit);
    Poco::JSON::Stringifier::stringify(params_json, oss);

    return oss.str();

@@ -461,6 +461,7 @@ String UsersConfigAccessStorage::getStorageParamsJSON() const
    if (!path.empty())
        json.set("path", path);
    std::ostringstream oss;
+   oss.exceptions(std::ios::failbit);
    Poco::JSON::Stringifier::stringify(json, oss);
    return oss.str();
}
@@ -27,14 +27,14 @@ SRCS(
    LDAPClient.cpp
    MemoryAccessStorage.cpp
    MultipleAccessStorage.cpp
-   QuotaCache.cpp
    Quota.cpp
+   QuotaCache.cpp
    QuotaUsage.cpp
-   RoleCache.cpp
    Role.cpp
+   RoleCache.cpp
    RolesOrUsersSet.cpp
-   RowPolicyCache.cpp
    RowPolicy.cpp
+   RowPolicyCache.cpp
    SettingsConstraints.cpp
    SettingsProfile.cpp
    SettingsProfileElement.cpp
@@ -245,6 +245,7 @@ public:
    {
        DB::writeIntBinary<size_t>(this->data(place).total_values, buf);
        std::ostringstream rng_stream;
+       rng_stream.exceptions(std::ios::failbit);
        rng_stream << this->data(place).rng;
        DB::writeStringBinary(rng_stream.str(), buf);
    }

@@ -275,6 +276,7 @@ public:
        std::string rng_string;
        DB::readStringBinary(rng_string, buf);
        std::istringstream rng_stream(rng_string);
+       rng_stream.exceptions(std::ios::failbit);
        rng_stream >> this->data(place).rng;
    }

@@ -564,6 +566,7 @@ public:
    {
        DB::writeIntBinary<size_t>(data(place).total_values, buf);
        std::ostringstream rng_stream;
+       rng_stream.exceptions(std::ios::failbit);
        rng_stream << data(place).rng;
        DB::writeStringBinary(rng_stream.str(), buf);
    }

@@ -598,6 +601,7 @@ public:
        std::string rng_string;
        DB::readStringBinary(rng_string, buf);
        std::istringstream rng_stream(rng_string);
+       rng_stream.exceptions(std::ios::failbit);
        rng_stream >> data(place).rng;
    }
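These hunks serialize the aggregate state's random generator by streaming it into a string and reading it back. Standard-library engines support exactly this textual round-trip via `operator<<`/`operator>>`, which is what makes the pattern work. A small sketch, assuming nothing beyond the standard library and `std::mt19937` as the engine:

``` cpp
#include <cassert>
#include <random>
#include <sstream>
#include <string>

int main()
{
    std::mt19937 rng(42);
    rng();  // advance the state a bit

    // Serialize: the full engine state is written as text.
    std::ostringstream out;
    out.exceptions(std::ios::failbit);
    out << rng;
    std::string state = out.str();

    // Deserialize into a fresh engine; both must now produce the same stream.
    std::mt19937 restored;
    std::istringstream in(state);
    in.exceptions(std::ios::failbit);
    in >> restored;

    assert(rng() == restored());
    return 0;
}
```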
@@ -56,7 +56,7 @@ struct AggregateFunctionMapData
  * minMap and maxMap share the same idea, but calculate min and max correspondingly.
  */

-template <typename T, typename Derived, typename Visitor, bool overflow, bool tuple_argument>
+template <typename T, typename Derived, typename Visitor, bool overflow, bool tuple_argument, bool compact>
class AggregateFunctionMapBase : public IAggregateFunctionDataHelper<
    AggregateFunctionMapData<NearestFieldType<T>>, Derived>
{

@@ -255,23 +255,28 @@ public:
    {
        // Final step does compaction of keys that have zero values, this mutates the state
        auto & merged_maps = this->data(place).merged_maps;
-       for (auto it = merged_maps.cbegin(); it != merged_maps.cend();)
-       {
-           // Key is not compacted if it has at least one non-zero value
-           bool erase = true;
-           for (size_t col = 0; col < values_types.size(); ++col)
-           {
-               if (it->second[col] != values_types[col]->getDefault())
-               {
-                   erase = false;
-                   break;
-               }
-           }
-
-           if (erase)
-               it = merged_maps.erase(it);
-           else
-               ++it;
+       // Remove keys which are zeros or empty. This should be enabled only for sumMap.
+       if constexpr (compact)
+       {
+           for (auto it = merged_maps.cbegin(); it != merged_maps.cend();)
+           {
+               // Key is not compacted if it has at least one non-zero value
+               bool erase = true;
+               for (size_t col = 0; col < values_types.size(); ++col)
+               {
+                   if (it->second[col] != values_types[col]->getDefault())
+                   {
+                       erase = false;
+                       break;
+                   }
+               }
+
+               if (erase)
+                   it = merged_maps.erase(it);
+               else
+                   ++it;
+           }
+       }

        size_t size = merged_maps.size();
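The new `compact` template parameter decides at compile time whether the zero-key compaction loop is even instantiated: `sumMap` passes `true`, while `minMap`/`maxMap` pass `false` (for them, a zero is a legitimate minimum or maximum). A stripped-down sketch of the same `if constexpr` dispatch, using a hypothetical `Accumulator` type rather than the ClickHouse class:

``` cpp
#include <iostream>
#include <map>
#include <string>

// Hypothetical stand-in for AggregateFunctionMapBase: `compact` selects, at
// compile time, whether zero-valued keys are erased when finalizing.
template <bool compact>
struct Accumulator
{
    std::map<std::string, int> merged;

    void finalize()
    {
        if constexpr (compact)
        {
            // This loop only exists in Accumulator<true>;
            // Accumulator<false> compiles without it entirely.
            for (auto it = merged.cbegin(); it != merged.cend();)
            {
                if (it->second == 0)
                    it = merged.erase(it);
                else
                    ++it;
            }
        }
    }
};

int main()
{
    Accumulator<true> summing{{{"a", 0}, {"b", 3}}};
    summing.finalize();
    std::cout << summing.merged.size() << '\n';  // 1: the zero key "a" is dropped

    Accumulator<false> minlike{{{"a", 0}, {"b", 3}}};
    minlike.finalize();
    std::cout << minlike.merged.size() << '\n';  // 2: zero is kept
    return 0;
}
```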
@@ -314,11 +319,11 @@ public:

template <typename T, bool overflow, bool tuple_argument>
class AggregateFunctionSumMap final :
-   public AggregateFunctionMapBase<T, AggregateFunctionSumMap<T, overflow, tuple_argument>, FieldVisitorSum, overflow, tuple_argument>
+   public AggregateFunctionMapBase<T, AggregateFunctionSumMap<T, overflow, tuple_argument>, FieldVisitorSum, overflow, tuple_argument, true>
{
private:
    using Self = AggregateFunctionSumMap<T, overflow, tuple_argument>;
-   using Base = AggregateFunctionMapBase<T, Self, FieldVisitorSum, overflow, tuple_argument>;
+   using Base = AggregateFunctionMapBase<T, Self, FieldVisitorSum, overflow, tuple_argument, true>;

public:
    AggregateFunctionSumMap(const DataTypePtr & keys_type_,

@@ -342,11 +347,12 @@ class AggregateFunctionSumMapFiltered final :
        AggregateFunctionSumMapFiltered<T, overflow, tuple_argument>,
        FieldVisitorSum,
        overflow,
-       tuple_argument>
+       tuple_argument,
+       true>
{
private:
    using Self = AggregateFunctionSumMapFiltered<T, overflow, tuple_argument>;
-   using Base = AggregateFunctionMapBase<T, Self, FieldVisitorSum, overflow, tuple_argument>;
+   using Base = AggregateFunctionMapBase<T, Self, FieldVisitorSum, overflow, tuple_argument, true>;

    /// ARCADIA_BUILD disallow unordered_set for big ints for some reason
    static constexpr const bool allow_hash = !OverBigInt<T>;

@@ -474,11 +480,11 @@ public:

template <typename T, bool tuple_argument>
class AggregateFunctionMinMap final :
-   public AggregateFunctionMapBase<T, AggregateFunctionMinMap<T, tuple_argument>, FieldVisitorMin, true, tuple_argument>
+   public AggregateFunctionMapBase<T, AggregateFunctionMinMap<T, tuple_argument>, FieldVisitorMin, true, tuple_argument, false>
{
private:
    using Self = AggregateFunctionMinMap<T, tuple_argument>;
-   using Base = AggregateFunctionMapBase<T, Self, FieldVisitorMin, true, tuple_argument>;
+   using Base = AggregateFunctionMapBase<T, Self, FieldVisitorMin, true, tuple_argument, false>;

public:
    AggregateFunctionMinMap(const DataTypePtr & keys_type_,

@@ -498,11 +504,11 @@ public:

template <typename T, bool tuple_argument>
class AggregateFunctionMaxMap final :
-   public AggregateFunctionMapBase<T, AggregateFunctionMaxMap<T, tuple_argument>, FieldVisitorMax, true, tuple_argument>
+   public AggregateFunctionMapBase<T, AggregateFunctionMaxMap<T, tuple_argument>, FieldVisitorMax, true, tuple_argument, false>
{
private:
    using Self = AggregateFunctionMaxMap<T, tuple_argument>;
-   using Base = AggregateFunctionMapBase<T, Self, FieldVisitorMax, true, tuple_argument>;
+   using Base = AggregateFunctionMapBase<T, Self, FieldVisitorMax, true, tuple_argument, false>;

public:
    AggregateFunctionMaxMap(const DataTypePtr & keys_type_,
@@ -191,6 +191,7 @@ public:
    std::string rng_string;
    DB::readStringBinary(rng_string, buf);
    std::istringstream rng_stream(rng_string);
+   rng_stream.exceptions(std::ios::failbit);
    rng_stream >> rng;

    for (size_t i = 0; i < samples.size(); ++i)

@@ -205,6 +206,7 @@ public:
    DB::writeIntBinary<size_t>(total_values, buf);

    std::ostringstream rng_stream;
+   rng_stream.exceptions(std::ios::failbit);
    rng_stream << rng;
    DB::writeStringBinary(rng_stream.str(), buf);
@@ -26,10 +26,10 @@ SRCS(
    AggregateFunctionGroupUniqArray.cpp
    AggregateFunctionHistogram.cpp
    AggregateFunctionIf.cpp
-   AggregateFunctionMLMethod.cpp
    AggregateFunctionMaxIntersections.cpp
    AggregateFunctionMerge.cpp
    AggregateFunctionMinMaxAny.cpp
+   AggregateFunctionMLMethod.cpp
    AggregateFunctionNull.cpp
    AggregateFunctionOrFill.cpp
    AggregateFunctionQuantile.cpp

@@ -45,14 +45,14 @@ SRCS(
    AggregateFunctionSumMap.cpp
    AggregateFunctionTimeSeriesGroupSum.cpp
    AggregateFunctionTopK.cpp
-   AggregateFunctionUniqCombined.cpp
    AggregateFunctionUniq.cpp
+   AggregateFunctionUniqCombined.cpp
    AggregateFunctionUniqUpTo.cpp
    AggregateFunctionWindowFunnel.cpp
-   parseAggregateFunctionParameters.cpp
-   registerAggregateFunctions.cpp
    UniqCombinedBiasData.cpp
    UniqVariadicHash.cpp
+   parseAggregateFunctionParameters.cpp
+   registerAggregateFunctions.cpp

)
@@ -223,6 +223,7 @@ std::string MultiplexedConnections::dumpAddressesUnlocked() const
{
    bool is_first = true;
    std::ostringstream os;
+   os.exceptions(std::ios::failbit);
    for (const ReplicaState & state : replica_states)
    {
        const Connection * connection = state.connection;
@@ -324,8 +324,7 @@ void ColumnArray::popBack(size_t n)
    offsets_data.resize_assume_reserved(offsets_data.size() - n);
}

-int ColumnArray::compareAt(size_t n, size_t m, const IColumn & rhs_, int nan_direction_hint) const
+int ColumnArray::compareAtImpl(size_t n, size_t m, const IColumn & rhs_, int nan_direction_hint, const Collator * collator) const
{
    const ColumnArray & rhs = assert_cast<const ColumnArray &>(rhs_);

@@ -334,8 +333,15 @@ int ColumnArray::compareAt(size_t n, size_t m, const IColumn & rhs_, int nan_dir
    size_t rhs_size = rhs.sizeAt(m);
    size_t min_size = std::min(lhs_size, rhs_size);
    for (size_t i = 0; i < min_size; ++i)
-       if (int res = getData().compareAt(offsetAt(n) + i, rhs.offsetAt(m) + i, *rhs.data.get(), nan_direction_hint))
+   {
+       int res;
+       if (collator)
+           res = getData().compareAtWithCollation(offsetAt(n) + i, rhs.offsetAt(m) + i, *rhs.data.get(), nan_direction_hint, *collator);
+       else
+           res = getData().compareAt(offsetAt(n) + i, rhs.offsetAt(m) + i, *rhs.data.get(), nan_direction_hint);
+       if (res)
            return res;
+   }

    return lhs_size < rhs_size
        ? -1
@@ -344,6 +350,16 @@ int ColumnArray::compareAt(size_t n, size_t m, const IColumn & rhs_, int nan_dir
        : 1);
}

+int ColumnArray::compareAt(size_t n, size_t m, const IColumn & rhs_, int nan_direction_hint) const
+{
+    return compareAtImpl(n, m, rhs_, nan_direction_hint);
+}
+
+int ColumnArray::compareAtWithCollation(size_t n, size_t m, const IColumn & rhs_, int nan_direction_hint, const Collator & collator) const
+{
+    return compareAtImpl(n, m, rhs_, nan_direction_hint, &collator);
+}
+
void ColumnArray::compareColumn(const IColumn & rhs, size_t rhs_row_num,
    PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
    int direction, int nan_direction_hint) const

@@ -352,27 +368,26 @@ void ColumnArray::compareColumn(const IColumn & rhs, size_t rhs_row_num,
        compare_results, direction, nan_direction_hint);
}

-namespace
+template <bool positive>
+struct ColumnArray::Cmp
{
-    template <bool positive>
-    struct Less
+   const ColumnArray & parent;
+   int nan_direction_hint;
+   const Collator * collator;
+
+   Cmp(const ColumnArray & parent_, int nan_direction_hint_, const Collator * collator_=nullptr)
+       : parent(parent_), nan_direction_hint(nan_direction_hint_), collator(collator_) {}
+
+   int operator()(size_t lhs, size_t rhs) const
    {
-        const ColumnArray & parent;
-        int nan_direction_hint;
-
-        Less(const ColumnArray & parent_, int nan_direction_hint_)
-            : parent(parent_), nan_direction_hint(nan_direction_hint_) {}
-
-        bool operator()(size_t lhs, size_t rhs) const
-        {
-            if (positive)
-                return parent.compareAt(lhs, rhs, parent, nan_direction_hint) < 0;
-            else
-                return parent.compareAt(lhs, rhs, parent, nan_direction_hint) > 0;
-        }
-    };
-}
+       int res;
+       if (collator)
+           res = parent.compareAtWithCollation(lhs, rhs, parent, nan_direction_hint, *collator);
+       else
+           res = parent.compareAt(lhs, rhs, parent, nan_direction_hint);
+       return positive ? res : -res;
+   }
+};

void ColumnArray::reserve(size_t n)
{

@@ -753,7 +768,8 @@ ColumnPtr ColumnArray::indexImpl(const PaddedPODArray<T> & indexes, size_t limit

INSTANTIATE_INDEX_IMPL(ColumnArray)

-void ColumnArray::getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const
+template <typename Comparator>
+void ColumnArray::getPermutationImpl(size_t limit, Permutation & res, Comparator cmp) const
{
    size_t s = size();
    if (limit >= s)
@@ -763,23 +779,16 @@ void ColumnArray::getPermutation(bool reverse, size_t limit, int nan_direction_h
    for (size_t i = 0; i < s; ++i)
        res[i] = i;

+   auto less = [&cmp](size_t lhs, size_t rhs){ return cmp(lhs, rhs) < 0; };
+
    if (limit)
-   {
-       if (reverse)
-           std::partial_sort(res.begin(), res.begin() + limit, res.end(), Less<false>(*this, nan_direction_hint));
-       else
-           std::partial_sort(res.begin(), res.begin() + limit, res.end(), Less<true>(*this, nan_direction_hint));
-   }
+       std::partial_sort(res.begin(), res.begin() + limit, res.end(), less);
    else
-   {
-       if (reverse)
-           std::sort(res.begin(), res.end(), Less<false>(*this, nan_direction_hint));
-       else
-           std::sort(res.begin(), res.end(), Less<true>(*this, nan_direction_hint));
-   }
+       std::sort(res.begin(), res.end(), less);
}

-void ColumnArray::updatePermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges & equal_range) const
+template <typename Comparator>
+void ColumnArray::updatePermutationImpl(size_t limit, Permutation & res, EqualRanges & equal_range, Comparator cmp) const
{
    if (equal_range.empty())
        return;
@@ -792,20 +801,19 @@ void ColumnArray::updatePermutation(bool reverse, size_t limit, int nan_directio
    if (limit)
        --number_of_ranges;

+   auto less = [&cmp](size_t lhs, size_t rhs){ return cmp(lhs, rhs) < 0; };
+
    EqualRanges new_ranges;
    for (size_t i = 0; i < number_of_ranges; ++i)
    {
        const auto & [first, last] = equal_range[i];

-       if (reverse)
-           std::sort(res.begin() + first, res.begin() + last, Less<false>(*this, nan_direction_hint));
-       else
-           std::sort(res.begin() + first, res.begin() + last, Less<true>(*this, nan_direction_hint));
+       std::sort(res.begin() + first, res.begin() + last, less);
        auto new_first = first;

        for (auto j = first + 1; j < last; ++j)
        {
-           if (compareAt(res[new_first], res[j], *this, nan_direction_hint) != 0)
+           if (cmp(res[new_first], res[j]) != 0)
            {
                if (j - new_first > 1)
                    new_ranges.emplace_back(new_first, j);
@@ -827,14 +835,11 @@ void ColumnArray::updatePermutation(bool reverse, size_t limit, int nan_directio

        /// Since then we are working inside the interval.

-       if (reverse)
-           std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, Less<false>(*this, nan_direction_hint));
-       else
-           std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, Less<true>(*this, nan_direction_hint));
+       std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, less);
        auto new_first = first;
        for (auto j = first + 1; j < limit; ++j)
        {
-           if (compareAt(res[new_first], res[j], *this, nan_direction_hint) != 0)
+           if (cmp(res[new_first], res[j]) != 0)
            {
                if (j - new_first > 1)
                    new_ranges.emplace_back(new_first, j);
@@ -845,7 +850,7 @@ void ColumnArray::updatePermutation(bool reverse, size_t limit, int nan_directio
        auto new_last = limit;
        for (auto j = limit; j < last; ++j)
        {
-           if (compareAt(res[new_first], res[j], *this, nan_direction_hint) == 0)
+           if (cmp(res[new_first], res[j]) == 0)
            {
                std::swap(res[new_last], res[j]);
                ++new_last;
@@ -859,6 +864,39 @@ void ColumnArray::updatePermutation(bool reverse, size_t limit, int nan_directio
    equal_range = std::move(new_ranges);
}

+void ColumnArray::getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const
+{
+    if (reverse)
+        getPermutationImpl(limit, res, Cmp<false>(*this, nan_direction_hint));
+    else
+        getPermutationImpl(limit, res, Cmp<true>(*this, nan_direction_hint));
+}
+
+void ColumnArray::updatePermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges & equal_range) const
+{
+    if (reverse)
+        updatePermutationImpl(limit, res, equal_range, Cmp<false>(*this, nan_direction_hint));
+    else
+        updatePermutationImpl(limit, res, equal_range, Cmp<true>(*this, nan_direction_hint));
+}
+
+void ColumnArray::getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const
+{
+    if (reverse)
+        getPermutationImpl(limit, res, Cmp<false>(*this, nan_direction_hint, &collator));
+    else
+        getPermutationImpl(limit, res, Cmp<true>(*this, nan_direction_hint, &collator));
+}
+
+void ColumnArray::updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges & equal_range) const
+{
+    if (reverse)
+        updatePermutationImpl(limit, res, equal_range, Cmp<false>(*this, nan_direction_hint, &collator));
+    else
+        updatePermutationImpl(limit, res, equal_range, Cmp<true>(*this, nan_direction_hint, &collator));
+}
+
ColumnPtr ColumnArray::replicate(const Offsets & replicate_offsets) const
{
    if (replicate_offsets.empty())
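The refactoring above replaces two boolean `Less<positive>` functors with a single three-way `Cmp` whose sign is flipped for descending order; every sort site then derives a boolean predicate with `cmp(lhs, rhs) < 0`. A minimal sketch of that pattern under simple assumptions (plain `int` data instead of a column, and a hypothetical free-standing `getPermutationImpl`):

``` cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Three-way comparator over row indices; `positive` selects ascending order,
// and descending order is just the negated result -- one functor, both orders.
template <bool positive>
struct Cmp
{
    const std::vector<int> & data;
    explicit Cmp(const std::vector<int> & data_) : data(data_) {}

    int operator()(size_t lhs, size_t rhs) const
    {
        int res = (data[lhs] > data[rhs]) - (data[lhs] < data[rhs]);
        return positive ? res : -res;
    }
};

// Mirrors getPermutationImpl: derive a boolean predicate from the three-way
// comparator and use partial_sort when only the first `limit` rows matter.
template <typename Comparator>
void getPermutationImpl(size_t limit, std::vector<size_t> & res, Comparator cmp)
{
    auto less = [&cmp](size_t lhs, size_t rhs) { return cmp(lhs, rhs) < 0; };
    if (limit && limit < res.size())
        std::partial_sort(res.begin(), res.begin() + limit, res.end(), less);
    else
        std::sort(res.begin(), res.end(), less);
}

int main()
{
    std::vector<int> data{3, 1, 2};
    std::vector<size_t> perm(data.size());
    std::iota(perm.begin(), perm.end(), 0);

    getPermutationImpl(0, perm, Cmp<false>(data));  // descending order
    for (size_t i : perm)
        std::printf("%d ", data[i]);                // prints: 3 2 1
    std::printf("\n");
    return 0;
}
```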
@@ -77,8 +77,11 @@ public:
    void compareColumn(const IColumn & rhs, size_t rhs_row_num,
        PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
        int direction, int nan_direction_hint) const override;
+   int compareAtWithCollation(size_t n, size_t m, const IColumn & rhs_, int nan_direction_hint, const Collator & collator) const override;
    void getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const override;
    void updatePermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges & equal_range) const override;
+   void getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const override;
+   void updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges& equal_range) const override;
    void reserve(size_t n) override;
    size_t byteSize() const override;
    size_t allocatedBytes() const override;

@@ -132,6 +135,8 @@ public:
        return false;
    }

+   bool isCollationSupported() const override { return getData().isCollationSupported(); }
+
private:
    WrappedPtr data;
    WrappedPtr offsets;

@@ -169,6 +174,17 @@ private:
    ColumnPtr filterTuple(const Filter & filt, ssize_t result_size_hint) const;
    ColumnPtr filterNullable(const Filter & filt, ssize_t result_size_hint) const;
    ColumnPtr filterGeneric(const Filter & filt, ssize_t result_size_hint) const;

+   int compareAtImpl(size_t n, size_t m, const IColumn & rhs_, int nan_direction_hint, const Collator * collator=nullptr) const;
+
+   template <typename Comparator>
+   void getPermutationImpl(size_t limit, Permutation & res, Comparator cmp) const;
+
+   template <typename Comparator>
+   void updatePermutationImpl(size_t limit, Permutation & res, EqualRanges & equal_range, Comparator cmp) const;
+
+   template <bool positive>
+   struct Cmp;
};
@@ -248,6 +248,8 @@ public:
    /// The constant value. It is valid even if the size of the column is 0.
    template <typename T>
    T getValue() const { return getField().safeGet<NearestFieldType<T>>(); }
+
+   bool isCollationSupported() const override { return data->isCollationSupported(); }
};

}
@@ -1,5 +1,6 @@
#include <Columns/ColumnLowCardinality.h>
#include <Columns/ColumnsNumber.h>
+#include <Columns/ColumnString.h>
#include <DataStreams/ColumnGathererStream.h>
#include <DataTypes/NumberTraits.h>
#include <Common/HashTable/HashMap.h>

@@ -278,14 +279,26 @@ MutableColumnPtr ColumnLowCardinality::cloneResized(size_t size) const
    return ColumnLowCardinality::create(IColumn::mutate(std::move(unique_ptr)), getIndexes().cloneResized(size));
}

-int ColumnLowCardinality::compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const
+int ColumnLowCardinality::compareAtImpl(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint, const Collator * collator) const
{
    const auto & low_cardinality_column = assert_cast<const ColumnLowCardinality &>(rhs);
    size_t n_index = getIndexes().getUInt(n);
    size_t m_index = low_cardinality_column.getIndexes().getUInt(m);
+   if (collator)
+       return getDictionary().getNestedColumn()->compareAtWithCollation(n_index, m_index, *low_cardinality_column.getDictionary().getNestedColumn(), nan_direction_hint, *collator);
    return getDictionary().compareAt(n_index, m_index, low_cardinality_column.getDictionary(), nan_direction_hint);
}

+int ColumnLowCardinality::compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const
+{
+    return compareAtImpl(n, m, rhs, nan_direction_hint);
+}
+
+int ColumnLowCardinality::compareAtWithCollation(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint, const Collator & collator) const
+{
+    return compareAtImpl(n, m, rhs, nan_direction_hint, &collator);
+}
+
void ColumnLowCardinality::compareColumn(const IColumn & rhs, size_t rhs_row_num,
    PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
    int direction, int nan_direction_hint) const

@@ -295,14 +308,17 @@ void ColumnLowCardinality::compareColumn(const IColumn & rhs, size_t rhs_row_num
        compare_results, direction, nan_direction_hint);
}

-void ColumnLowCardinality::getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const
+void ColumnLowCardinality::getPermutationImpl(bool reverse, size_t limit, int nan_direction_hint, Permutation & res, const Collator * collator) const
{
    if (limit == 0)
        limit = size();

    size_t unique_limit = getDictionary().size();
    Permutation unique_perm;
-   getDictionary().getNestedColumn()->getPermutation(reverse, unique_limit, nan_direction_hint, unique_perm);
+   if (collator)
+       getDictionary().getNestedColumn()->getPermutationWithCollation(*collator, reverse, unique_limit, nan_direction_hint, unique_perm);
+   else
+       getDictionary().getNestedColumn()->getPermutation(reverse, unique_limit, nan_direction_hint, unique_perm);

    /// TODO: optimize with sse.

@@ -330,7 +346,8 @@ void ColumnLowCardinality::getPermutation(bool reverse, size_t limit, int nan_di
    }
}

-void ColumnLowCardinality::updatePermutation(bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges) const
+template <typename Cmp>
+void ColumnLowCardinality::updatePermutationImpl(size_t limit, Permutation & res, EqualRanges & equal_ranges, Cmp comparator) const
{
    if (equal_ranges.empty())
        return;
@@ -345,20 +362,17 @@ void ColumnLowCardinality::updatePermutation(bool reverse, size_t limit, int nan
    EqualRanges new_ranges;
    SCOPE_EXIT({equal_ranges = std::move(new_ranges);});

+   auto less = [&comparator](size_t lhs, size_t rhs){ return comparator(lhs, rhs) < 0; };
+
    for (size_t i = 0; i < number_of_ranges; ++i)
    {
        const auto& [first, last] = equal_ranges[i];
-       if (reverse)
-           std::sort(res.begin() + first, res.begin() + last, [this, nan_direction_hint](size_t a, size_t b)
-                     {return getDictionary().compareAt(getIndexes().getUInt(a), getIndexes().getUInt(b), getDictionary(), nan_direction_hint) > 0; });
-       else
-           std::sort(res.begin() + first, res.begin() + last, [this, nan_direction_hint](size_t a, size_t b)
-                     {return getDictionary().compareAt(getIndexes().getUInt(a), getIndexes().getUInt(b), getDictionary(), nan_direction_hint) < 0; });
+       std::sort(res.begin() + first, res.begin() + last, less);

        auto new_first = first;
        for (auto j = first + 1; j < last; ++j)
        {
-           if (compareAt(res[new_first], res[j], *this, nan_direction_hint) != 0)
+           if (comparator(res[new_first], res[j]) != 0)
            {
                if (j - new_first > 1)
                    new_ranges.emplace_back(new_first, j);
@@ -379,17 +393,12 @@ void ColumnLowCardinality::updatePermutation(bool reverse, size_t limit, int nan

        /// Since then we are working inside the interval.

-       if (reverse)
-           std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, [this, nan_direction_hint](size_t a, size_t b)
-                             {return getDictionary().compareAt(getIndexes().getUInt(a), getIndexes().getUInt(b), getDictionary(), nan_direction_hint) > 0; });
-       else
-           std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, [this, nan_direction_hint](size_t a, size_t b)
-                             {return getDictionary().compareAt(getIndexes().getUInt(a), getIndexes().getUInt(b), getDictionary(), nan_direction_hint) < 0; });
+       std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, less);
        auto new_first = first;

        for (auto j = first + 1; j < limit; ++j)
        {
-           if (getDictionary().compareAt(getIndexes().getUInt(res[new_first]), getIndexes().getUInt(res[j]), getDictionary(), nan_direction_hint) != 0)
+           if (comparator(res[new_first], res[j]) != 0)
            {
                if (j - new_first > 1)
                    new_ranges.emplace_back(new_first, j);
@@ -401,7 +410,7 @@ void ColumnLowCardinality::updatePermutation(bool reverse, size_t limit, int nan
        auto new_last = limit;
        for (auto j = limit; j < last; ++j)
        {
-           if (getDictionary().compareAt(getIndexes().getUInt(res[new_first]), getIndexes().getUInt(res[j]), getDictionary(), nan_direction_hint) == 0)
+           if (comparator(res[new_first], res[j]) == 0)
            {
                std::swap(res[new_last], res[j]);
                ++new_last;
@@ -412,6 +421,38 @@ void ColumnLowCardinality::updatePermutation(bool reverse, size_t limit, int nan
    }
}

+void ColumnLowCardinality::getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const
+{
+    getPermutationImpl(reverse, limit, nan_direction_hint, res);
+}
+
+void ColumnLowCardinality::updatePermutation(bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges) const
+{
+    auto comparator = [this, nan_direction_hint, reverse](size_t lhs, size_t rhs)
+    {
+        int ret = getDictionary().compareAt(getIndexes().getUInt(lhs), getIndexes().getUInt(rhs), getDictionary(), nan_direction_hint);
+        return reverse ? -ret : ret;
+    };
+
+    updatePermutationImpl(limit, res, equal_ranges, comparator);
+}
+
+void ColumnLowCardinality::getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const
+{
+    getPermutationImpl(reverse, limit, nan_direction_hint, res, &collator);
+}
+
+void ColumnLowCardinality::updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges & equal_ranges) const
+{
+    auto comparator = [this, &collator, reverse, nan_direction_hint](size_t lhs, size_t rhs)
+    {
+        int ret = getDictionary().getNestedColumn()->compareAtWithCollation(getIndexes().getUInt(lhs), getIndexes().getUInt(rhs), *getDictionary().getNestedColumn(), nan_direction_hint, collator);
+        return reverse ? -ret : ret;
+    };
+
+    updatePermutationImpl(limit, res, equal_ranges, comparator);
+}
+
std::vector<MutableColumnPtr> ColumnLowCardinality::scatter(ColumnIndex num_columns, const Selector & selector) const
{
    auto columns = getIndexes().scatter(num_columns, selector);
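A low-cardinality column stores each row as an index into a small dictionary of unique values, so comparing two rows means mapping both row indices through the dictionary and comparing the dictionary entries — which is exactly what `compareAtImpl` above does. A toy model of that layout, assuming a plain vector as the dictionary (not the ClickHouse classes):

``` cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of a low-cardinality column: unique values live in `dictionary`,
// and each row is just an index into it.
struct LowCardinalityToy
{
    std::vector<std::string> dictionary;  // unique values
    std::vector<size_t> indexes;          // one entry per row

    // Three-way comparison of two rows: compare the dictionary entries the
    // row indexes point to, never the indexes themselves.
    int compareAt(size_t n, size_t m) const
    {
        const std::string & a = dictionary[indexes[n]];
        const std::string & b = dictionary[indexes[m]];
        return a.compare(b);
    }
};

int main()
{
    LowCardinalityToy col{{"apple", "banana"}, {1, 0, 1, 1}};
    assert(col.compareAt(0, 1) > 0);   // "banana" > "apple"
    assert(col.compareAt(0, 2) == 0);  // same dictionary entry
    return 0;
}
```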
@@ -125,10 +125,16 @@ public:
        PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
        int direction, int nan_direction_hint) const override;

+   int compareAtWithCollation(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint, const Collator &) const override;
+
    void getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const override;

    void updatePermutation(bool reverse, size_t limit, int, IColumn::Permutation & res, EqualRanges & equal_range) const override;

+   void getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const override;
+
+   void updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges& equal_range) const override;
+
    ColumnPtr replicate(const Offsets & offsets) const override
    {
        return ColumnLowCardinality::create(dictionary.getColumnUniquePtr(), getIndexes().replicate(offsets));

@@ -170,6 +176,7 @@ public:
    size_t sizeOfValueIfFixed() const override { return getDictionary().sizeOfValueIfFixed(); }
    bool isNumeric() const override { return getDictionary().isNumeric(); }
    bool lowCardinality() const override { return true; }
+   bool isCollationSupported() const override { return getDictionary().getNestedColumn()->isCollationSupported(); }

    /**
     * Checks if the dictionary column is Nullable(T).

@@ -309,6 +316,13 @@ private:

    void compactInplace();
    void compactIfSharedDictionary();

+   int compareAtImpl(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint, const Collator * collator=nullptr) const;
+
+   void getPermutationImpl(bool reverse, size_t limit, int nan_direction_hint, Permutation & res, const Collator * collator = nullptr) const;
+
+   template <typename Cmp>
+   void updatePermutationImpl(size_t limit, Permutation & res, EqualRanges & equal_ranges, Cmp comparator) const;
};
@@ -6,6 +6,7 @@
#include <Common/WeakHash.h>
#include <Columns/ColumnNullable.h>
#include <Columns/ColumnConst.h>
+#include <Columns/ColumnString.h>
#include <DataStreams/ColumnGathererStream.h>


@@ -223,7 +224,7 @@ ColumnPtr ColumnNullable::index(const IColumn & indexes, size_t limit) const
    return ColumnNullable::create(indexed_data, indexed_null_map);
}

-int ColumnNullable::compareAt(size_t n, size_t m, const IColumn & rhs_, int null_direction_hint) const
+int ColumnNullable::compareAtImpl(size_t n, size_t m, const IColumn & rhs_, int null_direction_hint, const Collator * collator) const
{
    /// NULL values share the properties of NaN values.
    /// Here the last parameter of compareAt is called null_direction_hint

@@ -245,9 +246,22 @@ int ColumnNullable::compareAt(size_t n, size_t m, const IColumn & rhs_, int null
    }

    const IColumn & nested_rhs = nullable_rhs.getNestedColumn();
+   if (collator)
+       return getNestedColumn().compareAtWithCollation(n, m, nested_rhs, null_direction_hint, *collator);
+
    return getNestedColumn().compareAt(n, m, nested_rhs, null_direction_hint);
}

+int ColumnNullable::compareAt(size_t n, size_t m, const IColumn & rhs_, int null_direction_hint) const
+{
+    return compareAtImpl(n, m, rhs_, null_direction_hint);
+}
+
+int ColumnNullable::compareAtWithCollation(size_t n, size_t m, const IColumn & rhs_, int null_direction_hint, const Collator & collator) const
+{
+    return compareAtImpl(n, m, rhs_, null_direction_hint, &collator);
+}
+
void ColumnNullable::compareColumn(const IColumn & rhs, size_t rhs_row_num,
    PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
    int direction, int nan_direction_hint) const

@@ -256,10 +270,14 @@ void ColumnNullable::compareColumn(const IColumn & rhs, size_t rhs_row_num,
        compare_results, direction, nan_direction_hint);
}

-void ColumnNullable::getPermutation(bool reverse, size_t limit, int null_direction_hint, Permutation & res) const
+void ColumnNullable::getPermutationImpl(bool reverse, size_t limit, int null_direction_hint, Permutation & res, const Collator * collator) const
{
    /// Cannot pass limit because of unknown amount of NULLs.
-   getNestedColumn().getPermutation(reverse, 0, null_direction_hint, res);
+
+   if (collator)
+       getNestedColumn().getPermutationWithCollation(*collator, reverse, 0, null_direction_hint, res);
+   else
+       getNestedColumn().getPermutation(reverse, 0, null_direction_hint, res);

    if ((null_direction_hint > 0) != reverse)
    {

@@ -329,7 +347,7 @@ void ColumnNullable::getPermutation(bool reverse, size_t limit, int null_directi
    }
}

-void ColumnNullable::updatePermutation(bool reverse, size_t limit, int null_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges) const
+void ColumnNullable::updatePermutationImpl(bool reverse, size_t limit, int null_direction_hint, Permutation & res, EqualRanges & equal_ranges, const Collator * collator) const
{
    if (equal_ranges.empty())
        return;

@@ -432,12 +450,35 @@ void ColumnNullable::updatePermutation(bool reverse, size_t limit, int null_dire
        }
    }

-   getNestedColumn().updatePermutation(reverse, limit, null_direction_hint, res, new_ranges);
+   if (collator)
+       getNestedColumn().updatePermutationWithCollation(*collator, reverse, limit, null_direction_hint, res, new_ranges);
+   else
+       getNestedColumn().updatePermutation(reverse, limit, null_direction_hint, res, new_ranges);

    equal_ranges = std::move(new_ranges);
    std::move(null_ranges.begin(), null_ranges.end(), std::back_inserter(equal_ranges));
}

+void ColumnNullable::getPermutation(bool reverse, size_t limit, int null_direction_hint, Permutation & res) const
+{
+    getPermutationImpl(reverse, limit, null_direction_hint, res);
+}
+
+void ColumnNullable::updatePermutation(bool reverse, size_t limit, int null_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges) const
+{
+    updatePermutationImpl(reverse, limit, null_direction_hint, res, equal_ranges);
+}
+
+void ColumnNullable::getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int null_direction_hint, Permutation & res) const
+{
+    getPermutationImpl(reverse, limit, null_direction_hint, res, &collator);
+}
+
+void ColumnNullable::updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int null_direction_hint, Permutation & res, EqualRanges & equal_range) const
+{
+    updatePermutationImpl(reverse, limit, null_direction_hint, res, equal_range, &collator);
+}
+
void ColumnNullable::gather(ColumnGathererStream & gatherer)
{
    gatherer.gather(*this);
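As the comment in the hunk above says, `getPermutationImpl` cannot forward `limit` to the nested column because it does not know in advance how many NULL rows will have to be moved to the front or back afterwards: the nested column is sorted in full first, then the NULL rows are repositioned according to the direction hint. A toy illustration of that two-step idea, assuming `std::optional<int>` rows in place of a real nullable column:

``` cpp
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <optional>
#include <vector>

int main()
{
    // A nullable column as optional values; nullopt plays the role of NULL.
    std::vector<std::optional<int>> col{{3}, std::nullopt, {1}, {2}, std::nullopt};

    std::vector<size_t> perm(col.size());
    std::iota(perm.begin(), perm.end(), 0);

    // Sort all rows by value first (the nested-column permutation)...
    std::sort(perm.begin(), perm.end(), [&](size_t a, size_t b)
    {
        return col[a].value_or(0) < col[b].value_or(0);
    });

    // ...then move the NULL rows to the end (nulls-last direction hint).
    // stable_partition keeps the already-sorted order of the non-NULL rows.
    std::stable_partition(perm.begin(), perm.end(), [&](size_t i)
    {
        return col[i].has_value();
    });

    for (size_t i : perm)
    {
        if (col[i])
            std::printf("%d ", *col[i]);
        else
            std::printf("NULL ");
    }
    std::printf("\n");  // prints: 1 2 3 NULL NULL
    return 0;
}
```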
@@ -6,6 +6,7 @@
#include <Common/typeid_cast.h>
#include <Common/assert_cast.h>

+class Collator;

namespace DB
{

@@ -92,8 +93,12 @@ public:
    void compareColumn(const IColumn & rhs, size_t rhs_row_num,
        PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
        int direction, int nan_direction_hint) const override;
+   int compareAtWithCollation(size_t n, size_t m, const IColumn & rhs, int null_direction_hint, const Collator &) const override;
    void getPermutation(bool reverse, size_t limit, int null_direction_hint, Permutation & res) const override;
-   void updatePermutation(bool reverse, size_t limit, int, Permutation & res, EqualRanges & equal_range) const override;
+   void updatePermutation(bool reverse, size_t limit, int null_direction_hint, Permutation & res, EqualRanges & equal_range) const override;
+   void getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int null_direction_hint, Permutation & res) const override;
+   void updatePermutationWithCollation(
+       const Collator & collator, bool reverse, size_t limit, int null_direction_hint, Permutation & res, EqualRanges& equal_range) const override;
    void reserve(size_t n) override;
    size_t byteSize() const override;
    size_t allocatedBytes() const override;

@@ -129,6 +134,7 @@ public:
    bool valuesHaveFixedSize() const override { return nested_column->valuesHaveFixedSize(); }
    size_t sizeOfValueIfFixed() const override { return null_map->sizeOfValueIfFixed() + nested_column->sizeOfValueIfFixed(); }
    bool onlyNull() const override { return nested_column->isDummy(); }
+   bool isCollationSupported() const override { return nested_column->isCollationSupported(); }


    /// Return the column that represents values.

@@ -164,6 +170,13 @@ private:

    template <bool negative>
    void applyNullMapImpl(const ColumnUInt8 & map);

+   int compareAtImpl(size_t n, size_t m, const IColumn & rhs_, int null_direction_hint, const Collator * collator=nullptr) const;
+
+   void getPermutationImpl(bool reverse, size_t limit, int null_direction_hint, Permutation & res, const Collator * collator = nullptr) const;
+
+   void updatePermutationImpl(
+       bool reverse, size_t limit, int null_direction_hint, Permutation & res, EqualRanges & equal_ranges, const Collator * collator = nullptr) const;
};

ColumnPtr makeNullable(const ColumnPtr & column);
src/Columns/ColumnString.cpp
@@ -285,21 +285,22 @@ void ColumnString::compareColumn(
 }

 template <bool positive>
-struct ColumnString::less
+struct ColumnString::Cmp
 {
     const ColumnString & parent;
-    explicit less(const ColumnString & parent_) : parent(parent_) {}
-    bool operator()(size_t lhs, size_t rhs) const
+    explicit Cmp(const ColumnString & parent_) : parent(parent_) {}
+    int operator()(size_t lhs, size_t rhs) const
     {
         int res = memcmpSmallAllowOverflow15(
             parent.chars.data() + parent.offsetAt(lhs), parent.sizeAt(lhs) - 1,
             parent.chars.data() + parent.offsetAt(rhs), parent.sizeAt(rhs) - 1);

-        return positive ? (res < 0) : (res > 0);
+        return positive ? res : -res;
     }
 };
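Cmp replaces the old boolean less functor with a three-way comparator: it returns the raw comparison result and negates it for descending order, so one functor can drive both sorting and equality checks. A standalone sketch of the idea (std::string stands in for the column's byte-level comparison):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Three-way comparator over indexes into a string array; 'positive' selects
    // ascending (+res) or descending (-res) order without a second functor body.
    template <bool positive>
    struct Cmp
    {
        const std::vector<std::string> & data;
        int operator()(size_t lhs, size_t rhs) const
        {
            int res = data[lhs].compare(data[rhs]);
            return positive ? res : -res;
        }
    };

    int main()
    {
        std::vector<std::string> data{"pear", "apple", "fig"};
        std::vector<size_t> perm{0, 1, 2};

        Cmp<false> cmp{data};  // descending order
        // A boolean 'less' is derived from the three-way result when sorting...
        std::sort(perm.begin(), perm.end(),
                  [&](size_t lhs, size_t rhs) { return cmp(lhs, rhs) < 0; });
        // ...and the same functor answers equality questions: cmp(a, b) == 0.
    }

Returning the int rather than a bool is what lets the permutation code below reuse a single comparator for both `cmp(...) < 0` and `cmp(...) == 0`.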
-void ColumnString::getPermutation(bool reverse, size_t limit, int /*nan_direction_hint*/, Permutation & res) const
+template <typename Comparator>
+void ColumnString::getPermutationImpl(size_t limit, Permutation & res, Comparator cmp) const
 {
     size_t s = offsets.size();
     res.resize(s);
@@ -309,23 +310,16 @@ void ColumnString::getPermutation(bool reverse, size_t limit, int /*nan_direction_hint*/, Permutation & res) const
     if (limit >= s)
         limit = 0;

+    auto less = [&cmp](size_t lhs, size_t rhs){ return cmp(lhs, rhs) < 0; };
+
     if (limit)
-    {
-        if (reverse)
-            std::partial_sort(res.begin(), res.begin() + limit, res.end(), less<false>(*this));
-        else
-            std::partial_sort(res.begin(), res.begin() + limit, res.end(), less<true>(*this));
-    }
+        std::partial_sort(res.begin(), res.begin() + limit, res.end(), less);
     else
-    {
-        if (reverse)
-            std::sort(res.begin(), res.end(), less<false>(*this));
-        else
-            std::sort(res.begin(), res.end(), less<true>(*this));
-    }
+        std::sort(res.begin(), res.end(), less);
 }
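getPermutationImpl treats limit == 0 (or limit >= size) as a full sort and otherwise orders only the first limit positions with std::partial_sort. A minimal sketch of that control flow under the same assumed convention:

    #include <algorithm>
    #include <numeric>
    #include <vector>

    // Order 'perm' so the first 'limit' entries index the smallest elements in
    // order; limit == 0 means a full sort, mirroring the convention above.
    void permute(const std::vector<int> & data, std::vector<size_t> & perm, size_t limit)
    {
        perm.resize(data.size());
        std::iota(perm.begin(), perm.end(), 0);

        if (limit >= perm.size())
            limit = 0;

        auto less = [&](size_t lhs, size_t rhs) { return data[lhs] < data[rhs]; };
        if (limit)
            std::partial_sort(perm.begin(), perm.begin() + limit, perm.end(), less);
        else
            std::sort(perm.begin(), perm.end(), less);
    }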
-void ColumnString::updatePermutation(bool reverse, size_t limit, int /*nan_direction_hint*/, Permutation & res, EqualRanges & equal_ranges) const
+template <typename Comparator>
+void ColumnString::updatePermutationImpl(size_t limit, Permutation & res, EqualRanges & equal_ranges, Comparator cmp) const
 {
     if (equal_ranges.empty())
         return;
@@ -340,21 +334,17 @@ void ColumnString::updatePermutation(bool reverse, size_t limit, int /*nan_direction_hint*/, Permutation & res, EqualRanges & equal_ranges) const
     if (limit)
         --number_of_ranges;

+    auto less = [&cmp](size_t lhs, size_t rhs){ return cmp(lhs, rhs) < 0; };
+
     for (size_t i = 0; i < number_of_ranges; ++i)
     {
         const auto & [first, last] = equal_ranges[i];

-        if (reverse)
-            std::sort(res.begin() + first, res.begin() + last, less<false>(*this));
-        else
-            std::sort(res.begin() + first, res.begin() + last, less<true>(*this));
+        std::sort(res.begin() + first, res.begin() + last, less);

         size_t new_first = first;
         for (size_t j = first + 1; j < last; ++j)
         {
-            if (memcmpSmallAllowOverflow15(
-                    chars.data() + offsetAt(res[j]), sizeAt(res[j]) - 1,
-                    chars.data() + offsetAt(res[new_first]), sizeAt(res[new_first]) - 1) != 0)
+            if (cmp(res[j], res[new_first]) != 0)
             {
                 if (j - new_first > 1)
                     new_ranges.emplace_back(new_first, j);
@@ -375,17 +365,12 @@ void ColumnString::updatePermutation(bool reverse, size_t limit, int /*nan_direction_hint*/, Permutation & res, EqualRanges & equal_ranges) const

         /// Since then we are working inside the interval.

-        if (reverse)
-            std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, less<false>(*this));
-        else
-            std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, less<true>(*this));
+        std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, less);

         size_t new_first = first;
         for (size_t j = first + 1; j < limit; ++j)
         {
-            if (memcmpSmallAllowOverflow15(
-                    chars.data() + offsetAt(res[j]), sizeAt(res[j]) - 1,
-                    chars.data() + offsetAt(res[new_first]), sizeAt(res[new_first]) - 1) != 0)
+            if (cmp(res[j], res[new_first]) != 0)
             {
                 if (j - new_first > 1)
                     new_ranges.emplace_back(new_first, j);
@@ -395,9 +380,7 @@ void ColumnString::updatePermutation(bool reverse, size_t limit, int /*nan_direction_hint*/, Permutation & res, EqualRanges & equal_ranges) const
         size_t new_last = limit;
         for (size_t j = limit; j < last; ++j)
         {
-            if (memcmpSmallAllowOverflow15(
-                    chars.data() + offsetAt(res[j]), sizeAt(res[j]) - 1,
-                    chars.data() + offsetAt(res[new_first]), sizeAt(res[new_first]) - 1) == 0)
+            if (cmp(res[j], res[new_first]) == 0)
             {
                 std::swap(res[j], res[new_last]);
                 ++new_last;
@@ -408,6 +391,56 @@ void ColumnString::updatePermutation(bool reverse, size_t limit, int /*nan_direction_hint*/, Permutation & res, EqualRanges & equal_ranges) const
     }
 }
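updatePermutationImpl re-sorts only inside each previously-equal range and then splits the range into narrower runs whose keys still compare equal, so the next sort column only has to refine those runs. A compact sketch of the splitting step (plain ints stand in for string keys):

    #include <algorithm>
    #include <utility>
    #include <vector>

    using EqualRanges = std::vector<std::pair<size_t, size_t>>;

    // Sort each [first, last) slice of 'perm' by 'key', then emit sub-ranges of
    // rows whose keys are still equal; only those need work from the next column.
    EqualRanges refine(const std::vector<int> & key, std::vector<size_t> & perm, const EqualRanges & ranges)
    {
        EqualRanges new_ranges;
        for (const auto & [first, last] : ranges)
        {
            std::sort(perm.begin() + first, perm.begin() + last,
                      [&](size_t lhs, size_t rhs) { return key[lhs] < key[rhs]; });

            size_t new_first = first;
            for (size_t j = first + 1; j < last; ++j)
            {
                if (key[perm[j]] != key[perm[new_first]])   // a run of equal keys ends here
                {
                    if (j - new_first > 1)
                        new_ranges.emplace_back(new_first, j);
                    new_first = j;
                }
            }
            if (last - new_first > 1)
                new_ranges.emplace_back(new_first, last);
        }
        return new_ranges;
    }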
+void ColumnString::getPermutation(bool reverse, size_t limit, int /*nan_direction_hint*/, Permutation & res) const
+{
+    if (reverse)
+        getPermutationImpl(limit, res, Cmp<false>(*this));
+    else
+        getPermutationImpl(limit, res, Cmp<true>(*this));
+}
+
+void ColumnString::updatePermutation(bool reverse, size_t limit, int /*nan_direction_hint*/, Permutation & res, EqualRanges & equal_ranges) const
+{
+    if (reverse)
+        updatePermutationImpl(limit, res, equal_ranges, Cmp<false>(*this));
+    else
+        updatePermutationImpl(limit, res, equal_ranges, Cmp<true>(*this));
+}
+
+template <bool positive>
+struct ColumnString::CmpWithCollation
+{
+    const ColumnString & parent;
+    const Collator & collator;
+
+    CmpWithCollation(const ColumnString & parent_, const Collator & collator_) : parent(parent_), collator(collator_) {}
+
+    int operator()(size_t lhs, size_t rhs) const
+    {
+        int res = collator.compare(
+            reinterpret_cast<const char *>(&parent.chars[parent.offsetAt(lhs)]), parent.sizeAt(lhs),
+            reinterpret_cast<const char *>(&parent.chars[parent.offsetAt(rhs)]), parent.sizeAt(rhs));

+        return positive ? res : -res;
+    }
+};
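CmpWithCollation has the same shape as Cmp but delegates the three-way result to a Collator (ICU-backed in ClickHouse), so locale-aware ordering plugs into the same permutation machinery. A sketch with a hypothetical case-insensitive collator standing in for the real one:

    #include <cctype>
    #include <string>
    #include <vector>

    // Stand-in for a collator: any object exposing a three-way compare over
    // character ranges can be plugged in the same way.
    struct CaseInsensitiveCollator
    {
        int compare(const char * a, size_t a_len, const char * b, size_t b_len) const
        {
            std::string la(a, a_len), lb(b, b_len);
            for (auto & c : la) c = std::tolower((unsigned char) c);
            for (auto & c : lb) c = std::tolower((unsigned char) c);
            return la.compare(lb);
        }
    };

    template <bool positive>
    struct CmpWithCollation
    {
        const std::vector<std::string> & data;
        const CaseInsensitiveCollator & collator;

        int operator()(size_t lhs, size_t rhs) const
        {
            int res = collator.compare(data[lhs].data(), data[lhs].size(),
                                       data[rhs].data(), data[rhs].size());
            return positive ? res : -res;
        }
    };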
+void ColumnString::getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int, Permutation & res) const
+{
+    if (reverse)
+        getPermutationImpl(limit, res, CmpWithCollation<false>(*this, collator));
+    else
+        getPermutationImpl(limit, res, CmpWithCollation<true>(*this, collator));
+}
+
+void ColumnString::updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int, Permutation & res, EqualRanges & equal_ranges) const
+{
+    if (reverse)
+        updatePermutationImpl(limit, res, equal_ranges, CmpWithCollation<false>(*this, collator));
+    else
+        updatePermutationImpl(limit, res, equal_ranges, CmpWithCollation<true>(*this, collator));
+}

 ColumnPtr ColumnString::replicate(const Offsets & replicate_offsets) const
 {
     size_t col_size = size();
@@ -476,13 +509,13 @@ void ColumnString::getExtremes(Field & min, Field & max) const
     size_t min_idx = 0;
     size_t max_idx = 0;

-    less<true> less_op(*this);
+    Cmp<true> cmp_op(*this);

     for (size_t i = 1; i < col_size; ++i)
     {
-        if (less_op(i, min_idx))
+        if (cmp_op(i, min_idx) < 0)
             min_idx = i;
-        else if (less_op(max_idx, i))
+        else if (cmp_op(max_idx, i) < 0)
             max_idx = i;
     }
@@ -491,7 +524,7 @@ void ColumnString::getExtremes(Field & min, Field & max) const
 }

-int ColumnString::compareAtWithCollation(size_t n, size_t m, const IColumn & rhs_, const Collator & collator) const
+int ColumnString::compareAtWithCollation(size_t n, size_t m, const IColumn & rhs_, int, const Collator & collator) const
 {
     const ColumnString & rhs = assert_cast<const ColumnString &>(rhs_);
@@ -500,134 +533,6 @@ int ColumnString::compareAtWithCollation(size_t n, size_t m, const IColumn & rhs_, int, const Collator & collator) const
         reinterpret_cast<const char *>(&rhs.chars[rhs.offsetAt(m)]), rhs.sizeAt(m));
 }

-
-template <bool positive>
-struct ColumnString::lessWithCollation
-{
-    const ColumnString & parent;
-    const Collator & collator;
-
-    lessWithCollation(const ColumnString & parent_, const Collator & collator_) : parent(parent_), collator(collator_) {}
-
-    bool operator()(size_t lhs, size_t rhs) const
-    {
-        int res = collator.compare(
-            reinterpret_cast<const char *>(&parent.chars[parent.offsetAt(lhs)]), parent.sizeAt(lhs),
-            reinterpret_cast<const char *>(&parent.chars[parent.offsetAt(rhs)]), parent.sizeAt(rhs));
-
-        return positive ? (res < 0) : (res > 0);
-    }
-};
-
-void ColumnString::getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, Permutation & res) const
-{
-    size_t s = offsets.size();
-    res.resize(s);
-    for (size_t i = 0; i < s; ++i)
-        res[i] = i;
-
-    if (limit >= s)
-        limit = 0;
-
-    if (limit)
-    {
-        if (reverse)
-            std::partial_sort(res.begin(), res.begin() + limit, res.end(), lessWithCollation<false>(*this, collator));
-        else
-            std::partial_sort(res.begin(), res.begin() + limit, res.end(), lessWithCollation<true>(*this, collator));
-    }
-    else
-    {
-        if (reverse)
-            std::sort(res.begin(), res.end(), lessWithCollation<false>(*this, collator));
-        else
-            std::sort(res.begin(), res.end(), lessWithCollation<true>(*this, collator));
-    }
-}
-
-void ColumnString::updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int, Permutation & res, EqualRanges & equal_ranges) const
-{
-    if (equal_ranges.empty())
-        return;
-
-    if (limit >= size() || limit >= equal_ranges.back().second)
-        limit = 0;
-
-    size_t number_of_ranges = equal_ranges.size();
-    if (limit)
-        --number_of_ranges;
-
-    EqualRanges new_ranges;
-    SCOPE_EXIT({equal_ranges = std::move(new_ranges);});
-
-    for (size_t i = 0; i < number_of_ranges; ++i)
-    {
-        const auto& [first, last] = equal_ranges[i];
-
-        if (reverse)
-            std::sort(res.begin() + first, res.begin() + last, lessWithCollation<false>(*this, collator));
-        else
-            std::sort(res.begin() + first, res.begin() + last, lessWithCollation<true>(*this, collator));
-        auto new_first = first;
-        for (auto j = first + 1; j < last; ++j)
-        {
-            if (collator.compare(
-                    reinterpret_cast<const char *>(&chars[offsetAt(res[new_first])]), sizeAt(res[new_first]),
-                    reinterpret_cast<const char *>(&chars[offsetAt(res[j])]), sizeAt(res[j])) != 0)
-            {
-                if (j - new_first > 1)
-                    new_ranges.emplace_back(new_first, j);
-
-                new_first = j;
-            }
-        }
-        if (last - new_first > 1)
-            new_ranges.emplace_back(new_first, last);
-    }
-
-    if (limit)
-    {
-        const auto & [first, last] = equal_ranges.back();
-
-        if (limit < first || limit > last)
-            return;
-
-        /// Since then we are working inside the interval.
-
-        if (reverse)
-            std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, lessWithCollation<false>(*this, collator));
-        else
-            std::partial_sort(res.begin() + first, res.begin() + limit, res.begin() + last, lessWithCollation<true>(*this, collator));
-
-        auto new_first = first;
-        for (auto j = first + 1; j < limit; ++j)
-        {
-            if (collator.compare(
-                    reinterpret_cast<const char *>(&chars[offsetAt(res[new_first])]), sizeAt(res[new_first]),
-                    reinterpret_cast<const char *>(&chars[offsetAt(res[j])]), sizeAt(res[j])) != 0)
-            {
-                if (j - new_first > 1)
-                    new_ranges.emplace_back(new_first, j);
-
-                new_first = j;
-            }
-        }
-        auto new_last = limit;
-        for (auto j = limit; j < last; ++j)
-        {
-            if (collator.compare(
-                    reinterpret_cast<const char *>(&chars[offsetAt(res[new_first])]), sizeAt(res[new_first]),
-                    reinterpret_cast<const char *>(&chars[offsetAt(res[j])]), sizeAt(res[j])) == 0)
-            {
-                std::swap(res[new_last], res[j]);
-                ++new_last;
-            }
-        }
-        if (new_last - new_first > 1)
-            new_ranges.emplace_back(new_first, new_last);
-    }
-}

 void ColumnString::protect()
 {
     getChars().protect();
src/Columns/ColumnString.h
@@ -43,14 +43,20 @@ private:
     size_t ALWAYS_INLINE sizeAt(ssize_t i) const { return offsets[i] - offsets[i - 1]; }

     template <bool positive>
-    struct less;
+    struct Cmp;

     template <bool positive>
-    struct lessWithCollation;
+    struct CmpWithCollation;

     ColumnString() = default;
     ColumnString(const ColumnString & src);

+    template <typename Comparator>
+    void getPermutationImpl(size_t limit, Permutation & res, Comparator cmp) const;
+
+    template <typename Comparator>
+    void updatePermutationImpl(size_t limit, Permutation & res, EqualRanges & equal_ranges, Comparator cmp) const;
+
 public:
     const char * getFamilyName() const override { return "String"; }
     TypeIndex getDataType() const override { return TypeIndex::String; }
@@ -229,16 +235,16 @@ public:
                        int direction, int nan_direction_hint) const override;

     /// Variant of compareAt for string comparison with respect of collation.
-    int compareAtWithCollation(size_t n, size_t m, const IColumn & rhs_, const Collator & collator) const;
+    int compareAtWithCollation(size_t n, size_t m, const IColumn & rhs_, int, const Collator & collator) const override;

     void getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const override;

-    void updatePermutation(bool reverse, size_t limit, int, Permutation & res, EqualRanges & equal_range) const override;
+    void updatePermutation(bool reverse, size_t limit, int, Permutation & res, EqualRanges & equal_ranges) const override;

     /// Sorting with respect of collation.
-    void getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, Permutation & res) const;
+    void getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int, Permutation & res) const override;

-    void updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int, Permutation & res, EqualRanges& equal_range) const;
+    void updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int, Permutation & res, EqualRanges & equal_ranges) const override;

     ColumnPtr replicate(const Offsets & replicate_offsets) const override;
@@ -270,6 +276,8 @@ public:

     // Throws an exception if offsets/chars are messed up
     void validate() const;
+
+    bool isCollationSupported() const override { return true; }
 };
src/Columns/ColumnTuple.cpp
@@ -275,16 +275,27 @@ MutableColumns ColumnTuple::scatter(ColumnIndex num_columns, const Selector & selector) const
     return res;
 }

-int ColumnTuple::compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const
+int ColumnTuple::compareAtImpl(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint, const Collator * collator) const
 {
     const size_t tuple_size = columns.size();
     for (size_t i = 0; i < tuple_size; ++i)
-        if (int res = columns[i]->compareAt(n, m, *assert_cast<const ColumnTuple &>(rhs).columns[i], nan_direction_hint))
+    {
+        int res;
+        if (collator && columns[i]->isCollationSupported())
+            res = columns[i]->compareAtWithCollation(n, m, *assert_cast<const ColumnTuple &>(rhs).columns[i], nan_direction_hint, *collator);
+        else
+            res = columns[i]->compareAt(n, m, *assert_cast<const ColumnTuple &>(rhs).columns[i], nan_direction_hint);
+        if (res)
             return res;
-
+    }
     return 0;
 }
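compareAtImpl is a lexicographic comparison: walk the element columns in order and let the first non-zero result decide, falling back to the plain comparison for elements without collation support. The same logic in miniature, with plain vectors standing in for element columns:

    #include <vector>

    // Lexicographic three-way compare of row n vs row m across element columns:
    // the first column that differs decides the result.
    int compareRows(const std::vector<std::vector<int>> & columns, size_t n, size_t m)
    {
        for (const auto & column : columns)
        {
            int res = (column[n] > column[m]) - (column[n] < column[m]);
            if (res)
                return res;   // first difference wins
        }
        return 0;             // all elements equal
    }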
+int ColumnTuple::compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const
+{
+    return compareAtImpl(n, m, rhs, nan_direction_hint);
+}
+
 void ColumnTuple::compareColumn(const IColumn & rhs, size_t rhs_row_num,
                                 PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
                                 int direction, int nan_direction_hint) const
@@ -293,14 +304,20 @@ void ColumnTuple::compareColumn(const IColumn & rhs, size_t rhs_row_num,
                           compare_results, direction, nan_direction_hint);
 }

+int ColumnTuple::compareAtWithCollation(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint, const Collator & collator) const
+{
+    return compareAtImpl(n, m, rhs, nan_direction_hint, &collator);
+}
+
 template <bool positive>
 struct ColumnTuple::Less
 {
     TupleColumns columns;
     int nan_direction_hint;
+    const Collator * collator;

-    Less(const TupleColumns & columns_, int nan_direction_hint_)
-        : columns(columns_), nan_direction_hint(nan_direction_hint_)
+    Less(const TupleColumns & columns_, int nan_direction_hint_, const Collator * collator_=nullptr)
+        : columns(columns_), nan_direction_hint(nan_direction_hint_), collator(collator_)
     {
     }
@@ -308,7 +325,11 @@ struct ColumnTuple::Less
     {
         for (const auto & column : columns)
        {
-            int res = column->compareAt(a, b, *column, nan_direction_hint);
+            int res;
+            if (collator && column->isCollationSupported())
+                res = column->compareAtWithCollation(a, b, *column, nan_direction_hint, *collator);
+            else
+                res = column->compareAt(a, b, *column, nan_direction_hint);
             if (res < 0)
                 return positive;
             else if (res > 0)
@@ -318,7 +339,8 @@ struct ColumnTuple::Less
     }
 };

-void ColumnTuple::getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const
+template <typename LessOperator>
+void ColumnTuple::getPermutationImpl(size_t limit, Permutation & res, LessOperator less) const
 {
     size_t rows = size();
     res.resize(rows);
@@ -330,28 +352,25 @@ void ColumnTuple::getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const

     if (limit)
     {
-        if (reverse)
-            std::partial_sort(res.begin(), res.begin() + limit, res.end(), Less<false>(columns, nan_direction_hint));
-        else
-            std::partial_sort(res.begin(), res.begin() + limit, res.end(), Less<true>(columns, nan_direction_hint));
+        std::partial_sort(res.begin(), res.begin() + limit, res.end(), less);
     }
     else
     {
-        if (reverse)
-            std::sort(res.begin(), res.end(), Less<false>(columns, nan_direction_hint));
-        else
-            std::sort(res.begin(), res.end(), Less<true>(columns, nan_direction_hint));
+        std::sort(res.begin(), res.end(), less);
     }
 }

-void ColumnTuple::updatePermutation(bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges) const
+void ColumnTuple::updatePermutationImpl(bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges, const Collator * collator) const
 {
     if (equal_ranges.empty())
         return;

     for (const auto & column : columns)
     {
-        column->updatePermutation(reverse, limit, nan_direction_hint, res, equal_ranges);
+        if (collator && column->isCollationSupported())
+            column->updatePermutationWithCollation(*collator, reverse, limit, nan_direction_hint, res, equal_ranges);
+        else
+            column->updatePermutation(reverse, limit, nan_direction_hint, res, equal_ranges);

         while (limit && !equal_ranges.empty() && limit <= equal_ranges.back().first)
             equal_ranges.pop_back();
@@ -361,6 +380,32 @@ void ColumnTuple::updatePermutation(bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges) const
     }
 }
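The loop above refines the permutation one element column at a time, choosing the collation-aware path only for columns that support it, and drops equal ranges that start at or beyond limit, since rows past the limit never need further refinement. The pruning step in isolation (assumed semantics, as read from the diff):

    #include <utility>
    #include <vector>

    using EqualRanges = std::vector<std::pair<size_t, size_t>>;

    // Ranges are ordered by position; once a range begins at or after 'limit',
    // nothing inside it can affect the first 'limit' rows, so it is discarded.
    void pruneBeyondLimit(EqualRanges & equal_ranges, size_t limit)
    {
        while (limit && !equal_ranges.empty() && limit <= equal_ranges.back().first)
            equal_ranges.pop_back();
    }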
+void ColumnTuple::getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const
+{
+    if (reverse)
+        getPermutationImpl(limit, res, Less<false>(columns, nan_direction_hint));
+    else
+        getPermutationImpl(limit, res, Less<true>(columns, nan_direction_hint));
+}
+
+void ColumnTuple::updatePermutation(bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges) const
+{
+    updatePermutationImpl(reverse, limit, nan_direction_hint, res, equal_ranges);
+}
+
+void ColumnTuple::getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const
+{
+    if (reverse)
+        getPermutationImpl(limit, res, Less<false>(columns, nan_direction_hint, &collator));
+    else
+        getPermutationImpl(limit, res, Less<true>(columns, nan_direction_hint, &collator));
+}
+
+void ColumnTuple::updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges & equal_ranges) const
+{
+    updatePermutationImpl(reverse, limit, nan_direction_hint, res, equal_ranges, &collator);
+}
+
 void ColumnTuple::gather(ColumnGathererStream & gatherer)
 {
     gatherer.gather(*this);
@@ -433,5 +478,15 @@ bool ColumnTuple::structureEquals(const IColumn & rhs) const
     return false;
 }

+bool ColumnTuple::isCollationSupported() const
+{
+    for (const auto& column : columns)
+    {
+        if (column->isCollationSupported())
+            return true;
+    }
+    return false;
+}
+

 }
src/Columns/ColumnTuple.h
@@ -75,15 +75,19 @@ public:
     void compareColumn(const IColumn & rhs, size_t rhs_row_num,
                        PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
                        int direction, int nan_direction_hint) const override;
+    int compareAtWithCollation(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint, const Collator & collator) const override;
     void getExtremes(Field & min, Field & max) const override;
     void getPermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const override;
-    void updatePermutation(bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_range) const override;
+    void updatePermutation(bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges) const override;
+    void getPermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res) const override;
+    void updatePermutationWithCollation(const Collator & collator, bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges& equal_ranges) const override;
     void reserve(size_t n) override;
     size_t byteSize() const override;
     size_t allocatedBytes() const override;
     void protect() override;
     void forEachSubcolumn(ColumnCallback callback) override;
     bool structureEquals(const IColumn & rhs) const override;
+    bool isCollationSupported() const override;

     size_t tupleSize() const { return columns.size(); }
@@ -94,6 +98,15 @@ public:
     Columns getColumnsCopy() const { return {columns.begin(), columns.end()}; }

     const ColumnPtr & getColumnPtr(size_t idx) const { return columns[idx]; }

+private:
+    int compareAtImpl(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint, const Collator * collator=nullptr) const;
+
+    template <typename LessOperator>
+    void getPermutationImpl(size_t limit, Permutation & res, LessOperator less) const;
+
+    void updatePermutationImpl(
+        bool reverse, size_t limit, int nan_direction_hint, IColumn::Permutation & res, EqualRanges & equal_ranges, const Collator * collator=nullptr) const;
 };
src/Columns/IColumn.h
@@ -9,7 +9,7 @@

 class SipHash;
-
+class Collator;

 namespace DB
 {
@@ -18,6 +18,7 @@ namespace ErrorCodes
 {
     extern const int CANNOT_GET_SIZE_OF_FIELD;
     extern const int NOT_IMPLEMENTED;
+    extern const int BAD_COLLATION;
 }

 class Arena;
@@ -250,6 +251,12 @@ public:
       */
     virtual int compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const = 0;

+    /// Equivalent to compareAt, but collator is used to compare values.
+    virtual int compareAtWithCollation(size_t, size_t, const IColumn &, int, const Collator &) const
+    {
+        throw Exception("Collations could be specified only for String, LowCardinality(String), Nullable(String) or for Array or Tuple, containing it.", ErrorCodes::BAD_COLLATION);
+    }
+
     /// Compare the whole column with single value from rhs column.
     /// If row_indexes is nullptr, it's ignored. Otherwise, it is a set of rows to compare.
     /// compare_results[i] will be equal to compareAt(row_indexes[i], rhs_row_num, rhs, nan_direction_hint) * direction
@@ -277,6 +284,18 @@ public:
       */
     virtual void updatePermutation(bool reverse, size_t limit, int nan_direction_hint, Permutation & res, EqualRanges & equal_ranges) const = 0;

+    /** Equivalent to getPermutation and updatePermutation but collator is used to compare values.
+      * Supported for String, LowCardinality(String), Nullable(String) and for Array and Tuple, containing them.
+      */
+    virtual void getPermutationWithCollation(const Collator &, bool, size_t, int, Permutation &) const
+    {
+        throw Exception("Collations could be specified only for String, LowCardinality(String), Nullable(String) or for Array or Tuple, containing them.", ErrorCodes::BAD_COLLATION);
+    }
+    virtual void updatePermutationWithCollation(const Collator &, bool, size_t, int, Permutation &, EqualRanges&) const
+    {
+        throw Exception("Collations could be specified only for String, LowCardinality(String), Nullable(String) or for Array or Tuple, containing them.", ErrorCodes::BAD_COLLATION);
+    }
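IColumn pairs a capability probe (isCollationSupported, defaulting to false, see below) with base-class virtuals that throw BAD_COLLATION, so callers either branch on the probe or fail loudly instead of silently mis-sorting. The pattern in miniature (a standalone sketch, not ClickHouse's actual hierarchy):

    #include <stdexcept>

    struct Collator;  // opaque here; only passed by reference

    class IColumn
    {
    public:
        virtual ~IColumn() = default;

        // Capability probe: collation-aware column types override this.
        virtual bool isCollationSupported() const { return false; }

        // Default implementation throws; only collation-aware columns override it.
        virtual int compareAtWithCollation(size_t, size_t, const IColumn &, int, const Collator &) const
        {
            throw std::logic_error("Collation is not supported by this column type");
        }
    };

    // Caller side: branch on the probe instead of relying on exceptions.
    int compareMaybeCollated(const IColumn & c, size_t n, size_t m, int hint, const Collator * collator)
    {
        if (collator && c.isCollationSupported())
            return c.compareAtWithCollation(n, m, c, hint, *collator);
        return 0;  // real code falls back to the plain compareAt path here
    }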
     /** Copies each element according offsets parameter.
      * (i-th element should be copied offsets[i] - offsets[i - 1] times.)
      * It is necessary in ARRAY JOIN operation.
@@ -402,6 +421,8 @@ public:

     virtual bool lowCardinality() const { return false; }

+    virtual bool isCollationSupported() const { return false; }
+
     virtual ~IColumn() = default;
     IColumn() = default;
     IColumn(const IColumn &) = default;
src/Columns/tests/gtest_weak_hash_32.cpp
@@ -71,7 +71,8 @@ void checkColumn(
     std::unordered_map<UInt32, T> map;
     size_t num_collisions = 0;

-    std::stringstream collitions_str;
+    std::stringstream collisions_str;
+    collisions_str.exceptions(std::ios::failbit);

     for (size_t i = 0; i < eq_class.size(); ++i)
     {
@@ -86,14 +87,14 @@ void checkColumn(

         if (num_collisions <= max_collisions_to_print)
         {
-            collitions_str << "Collision:\n";
-            collitions_str << print_for_row(it->second) << '\n';
-            collitions_str << print_for_row(i) << std::endl;
+            collisions_str << "Collision:\n";
+            collisions_str << print_for_row(it->second) << '\n';
+            collisions_str << print_for_row(i) << std::endl;
         }

         if (num_collisions > allowed_collisions)
         {
-            std::cerr << collitions_str.rdbuf();
+            std::cerr << collisions_str.rdbuf();
             break;
         }
     }
src/Columns/ya.make
@@ -13,7 +13,6 @@ PEERDIR(
     contrib/libs/pdqsort
 )

-
 SRCS(
     Collator.cpp
     ColumnAggregateFunction.cpp
@@ -24,13 +23,13 @@ SRCS(
     ColumnFunction.cpp
     ColumnLowCardinality.cpp
     ColumnNullable.cpp
-    ColumnsCommon.cpp
     ColumnString.cpp
     ColumnTuple.cpp
     ColumnVector.cpp
+    ColumnsCommon.cpp
     FilterDescription.cpp
-    getLeastSuperColumn.cpp
     IColumn.cpp
+    getLeastSuperColumn.cpp

 )
src/Columns/ya.make.in (new file, 19 lines)
@@ -0,0 +1,19 @@
+LIBRARY()
+
+ADDINCL(
+    contrib/libs/icu/common
+    contrib/libs/icu/i18n
+    contrib/libs/pdqsort
+)
+
+PEERDIR(
+    clickhouse/src/Common
+    contrib/libs/icu
+    contrib/libs/pdqsort
+)
+
+SRCS(
+    <? find . -name '*.cpp' | grep -v -F tests | sed 's/^\.\// /' | sort ?>
+)
+
+END()
src/Common/Config/ConfigProcessor.cpp
@@ -538,6 +538,7 @@ XMLDocumentPtr ConfigProcessor::processConfig(
     *has_zk_includes = !contributing_zk_paths.empty();

     std::stringstream comment;
+    comment.exceptions(std::ios::failbit);
     comment << " This file was generated automatically.\n";
     comment << " Do not edit it: it is likely to be discarded and generated again before it's read next time.\n";
     comment << " Files used to generate this file:";
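The recurring one-line change in this and the following hunks opts each std::stringstream into throwing on failure, so formatting errors surface as exceptions instead of silently corrupt output. A small demonstration of what the flag changes:

    #include <iostream>
    #include <sstream>

    int main()
    {
        std::stringstream ss;
        ss.exceptions(std::ios::failbit);  // any operation that sets failbit now throws

        try
        {
            int value;
            ss << "not a number";
            ss >> value;                   // extraction fails -> std::ios_base::failure
        }
        catch (const std::ios_base::failure & e)
        {
            std::cerr << "stream failure surfaced as an exception: " << e.what() << '\n';
        }
    }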
src/Common/Exception.cpp
@@ -246,6 +246,7 @@ static std::string getExtraExceptionInfo(const std::exception & e)
 std::string getCurrentExceptionMessage(bool with_stacktrace, bool check_embedded_stacktrace /*= false*/, bool with_extra_info /*= true*/)
 {
     std::stringstream stream;
+    stream.exceptions(std::ios::failbit);

     try
     {
@@ -365,6 +366,7 @@ void tryLogException(std::exception_ptr e, Poco::Logger * logger, const std::string & start_of_message)
 std::string getExceptionMessage(const Exception & e, bool with_stacktrace, bool check_embedded_stacktrace)
 {
     std::stringstream stream;
+    stream.exceptions(std::ios::failbit);

     try
     {
src/Common/FieldVisitors.h
@@ -3,6 +3,7 @@
 #include <Core/DecimalFunctions.h>
 #include <Core/Field.h>
 #include <common/demangle.h>
+#include <Common/NaNUtils.h>


 class SipHash;
@@ -142,6 +143,19 @@ public:

     T operator() (const Float64 & x) const
     {
+        if constexpr (!std::is_floating_point_v<T>)
+        {
+            if (!isFinite(x))
+            {
+                /// When converting to bool it's ok (non-zero converts to true, NaN including).
+                if (std::is_same_v<T, bool>)
+                    return true;
+
+                /// Conversion of infinite values to integer is undefined.
+                throw Exception("Cannot convert infinite value to integer type", ErrorCodes::CANNOT_CONVERT_TYPE);
+            }
+        }
+
         if constexpr (std::is_same_v<Decimal256, T>)
             return Int256(x);
         else
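The guard added above rejects NaN and ±Inf before a float-to-integer conversion, which is undefined behavior in C++; bool is the one exception, since any non-zero value (NaN included) converts to true. The same check as a standalone sketch, with std::isfinite standing in for ClickHouse's isFinite:

    #include <cmath>
    #include <stdexcept>
    #include <type_traits>

    template <typename T>
    T convertFromDouble(double x)
    {
        if constexpr (!std::is_floating_point_v<T>)
        {
            if (!std::isfinite(x))
            {
                // Non-zero (NaN included) converts to true, so bool stays well defined.
                if constexpr (std::is_same_v<T, bool>)
                    return true;
                // Float-to-integer conversion of Inf/NaN is undefined behavior.
                throw std::range_error("Cannot convert infinite or NaN value to integer type");
            }
        }
        return static_cast<T>(x);
    }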
src/Common/MemoryTracker.cpp
@@ -134,6 +134,7 @@ void MemoryTracker::alloc(Int64 size)

         ProfileEvents::increment(ProfileEvents::QueryMemoryLimitExceeded);
         std::stringstream message;
+        message.exceptions(std::ios::failbit);
         message << "Memory tracker";
         if (const auto * description = description_ptr.load(std::memory_order_relaxed))
             message << " " << description;
@@ -166,6 +167,7 @@ void MemoryTracker::alloc(Int64 size)

         ProfileEvents::increment(ProfileEvents::QueryMemoryLimitExceeded);
         std::stringstream message;
+        message.exceptions(std::ios::failbit);
         message << "Memory limit";
         if (const auto * description = description_ptr.load(std::memory_order_relaxed))
             message << " " << description;
src/Common/ShellCommand.cpp
@@ -74,6 +74,7 @@ ShellCommand::~ShellCommand()
 void ShellCommand::logCommand(const char * filename, char * const argv[])
 {
     std::stringstream args;
+    args.exceptions(std::ios::failbit);
     for (int i = 0; argv != nullptr && argv[i] != nullptr; ++i)
     {
         if (i > 0)
src/Common/SmallObjectPool.h
@@ -1,76 +1,44 @@
 #pragma once

 #include <Common/Arena.h>
-#include <ext/range.h>
-#include <ext/size.h>
-#include <ext/bit_cast.h>
-#include <cstdlib>
-#include <memory>
+#include <common/unaligned.h>


 namespace DB
 {

-
 /** Can allocate memory objects of fixed size with deletion support.
-  * For small `object_size`s allocated no less than getMinAllocationSize() bytes. */
+  * For small `object_size`s allocated no less than pointer size.
+  */
 class SmallObjectPool
 {
 private:
-    struct Block { Block * next; };
-    static constexpr auto getMinAllocationSize() { return sizeof(Block); }
-
     const size_t object_size;
     Arena pool;
-    Block * free_list{};
+    char * free_list = nullptr;

 public:
-    SmallObjectPool(
-        const size_t object_size_, const size_t initial_size = 4096, const size_t growth_factor = 2,
-        const size_t linear_growth_threshold = 128 * 1024 * 1024)
-        : object_size{std::max(object_size_, getMinAllocationSize())},
-          pool{initial_size, growth_factor, linear_growth_threshold}
+    SmallObjectPool(size_t object_size_)
+        : object_size{std::max(object_size_, sizeof(char *))}
     {
-        if (pool.size() < object_size)
-            return;
-
-        const auto num_objects = pool.size() / object_size;
-        auto head = free_list = ext::bit_cast<Block *>(pool.alloc(num_objects * object_size));
-
-        for (const auto i : ext::range(0, num_objects - 1))
-        {
-            (void) i;
-            head->next = ext::bit_cast<Block *>(ext::bit_cast<char *>(head) + object_size);
-            head = head->next;
-        }
-
-        head->next = nullptr;
     }

     char * alloc()
     {
         if (free_list)
         {
-            const auto res = reinterpret_cast<char *>(free_list);
-            free_list = free_list->next;
+            char * res = free_list;
+            free_list = unalignedLoad<char *>(free_list);
             return res;
         }

         return pool.alloc(object_size);
     }

-    void free(const void * ptr)
+    void free(char * ptr)
     {
-        union
-        {
-            const void * p_v;
-            Block * block;
-        };
-
-        p_v = ptr;
-        block->next = free_list;
-
-        free_list = block;
+        unalignedStore<char *>(ptr, free_list);
+        free_list = ptr;
     }

     /// The size of the allocated pool in bytes
@@ -81,5 +49,4 @@ public:

 };

-
 }
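The rewritten pool threads its free list through the freed chunks themselves: the first bytes of each free chunk hold the pointer to the next one, written without alignment assumptions since chunks need not be pointer-aligned. A standalone sketch of the same intrusive free list over malloc (illustrative; the real class carves chunks from its Arena, and cleanup of the backing memory is omitted here):

    #include <algorithm>
    #include <cstdlib>
    #include <cstring>

    class TinyPool
    {
        size_t object_size;
        char * free_list = nullptr;

    public:
        explicit TinyPool(size_t object_size_)
            : object_size(std::max(object_size_, sizeof(char *))) {}  // room for the link

        char * alloc()
        {
            if (free_list)
            {
                char * res = free_list;
                std::memcpy(&free_list, res, sizeof(char *));  // next link lives inside the chunk
                return res;
            }
            return static_cast<char *>(std::malloc(object_size));
        }

        void free(char * ptr)
        {
            std::memcpy(ptr, &free_list, sizeof(char *));      // store old head inside the chunk
            free_list = ptr;                                   // chunk becomes the new head
        }
    };

std::memcpy plays the role of unalignedLoad/unalignedStore here: it is the portable way to read and write a pointer at an arbitrarily aligned address.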
src/Common/StackTrace.cpp
@@ -24,6 +24,7 @@
 std::string signalToErrorMessage(int sig, const siginfo_t & info, const ucontext_t & context)
 {
     std::stringstream error;
+    error.exceptions(std::ios::failbit);
     switch (sig)
     {
         case SIGSEGV:
@@ -319,6 +320,7 @@ static void toStringEveryLineImpl(
     std::unordered_map<std::string, DB::Dwarf> dwarfs;

     std::stringstream out;
+    out.exceptions(std::ios::failbit);

     for (size_t i = offset; i < size; ++i)
     {
@@ -358,6 +360,7 @@ static void toStringEveryLineImpl(
     }
 #else
     std::stringstream out;
+    out.exceptions(std::ios::failbit);

     for (size_t i = offset; i < size; ++i)
     {
@@ -373,6 +376,7 @@ static void toStringEveryLineImpl(
 static std::string toStringImpl(const StackTrace::FramePointers & frame_pointers, size_t offset, size_t size)
 {
     std::stringstream out;
+    out.exceptions(std::ios::failbit);
     toStringEveryLineImpl(frame_pointers, offset, size, [&](const std::string & str) { out << str << '\n'; });
     return out.str();
 }
src/Common/StudentTTest.cpp
@@ -154,6 +154,8 @@ std::pair<bool, std::string> StudentTTest::compareAndReport(size_t confidence_level_index) const
     double mean_confidence_interval = table_value * t_statistic;

     std::stringstream ss;
+    ss.exceptions(std::ios::failbit);
+
     if (mean_difference > mean_confidence_interval && (mean_difference - mean_confidence_interval > 0.0001)) /// difference must be more than 0.0001, to take into account connection latency.
     {
         ss << "Difference at " << confidence_level[confidence_level_index] << "% confidence : ";
src/Common/ThreadPool.cpp
@@ -216,7 +216,7 @@ void ThreadPoolImpl<Thread>::worker(typename std::list<Thread>::iterator thread_it)

             if (!jobs.empty())
             {
-                job = jobs.top().job;
+                job = std::move(jobs.top().job);
                 jobs.pop();
             }
             else
src/Common/ThreadProfileEvents.cpp
@@ -398,8 +398,7 @@ bool PerfEventsCounters::processThreadLocalChanges(const std::string & needed_events_list)
     return true;
 }

-// Parse comma-separated list of event names. Empty means all available
-// events.
+// Parse comma-separated list of event names. Empty means all available events.
 std::vector<size_t> PerfEventsCounters::eventIndicesFromString(const std::string & events_list)
 {
     std::vector<size_t> result;
@@ -418,8 +417,7 @@ std::vector<size_t> PerfEventsCounters::eventIndicesFromString(const std::string & events_list)
     std::string event_name;
     while (std::getline(iss, event_name, ','))
     {
-        // Allow spaces at the beginning of the token, so that you can write
-        // 'a, b'.
+        // Allow spaces at the beginning of the token, so that you can write 'a, b'.
         event_name.erase(0, event_name.find_first_not_of(' '));

         auto entry = event_name_to_index.find(event_name);
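eventIndicesFromString tokenizes with std::getline on ',' and strips leading spaces so that 'a, b' parses cleanly. The same parsing loop in isolation:

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    std::vector<std::string> splitEventList(const std::string & events_list)
    {
        std::vector<std::string> result;
        std::istringstream iss(events_list);
        std::string event_name;
        while (std::getline(iss, event_name, ','))
        {
            // Allow spaces after the separator, so that 'a, b' works.
            event_name.erase(0, event_name.find_first_not_of(' '));
            result.push_back(event_name);
        }
        return result;
    }

    int main()
    {
        for (const auto & name : splitEventList("cycles, instructions,cache-misses"))
            std::cout << name << '\n';
    }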
src/Common/ThreadStatus.cpp
@@ -80,6 +80,7 @@ void ThreadStatus::assertState(const std::initializer_list<int> & permitted_states, const char * description) const
     }

     std::stringstream ss;
+    ss.exceptions(std::ios::failbit);
     ss << "Unexpected thread state " << getCurrentState();
     if (description)
         ss << ": " << description;
Some files were not shown because too many files have changed in this diff.