mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-30 11:32:03 +00:00

Commit a6fc9d379b: Merge branch 'master' of https://github.com/ClickHouse/ClickHouse

CHANGELOG.md: 142 lines changed
@@ -1,3 +1,145 @@
## ClickHouse release 20.8

### ClickHouse release v20.8.2.3-stable, 2020-09-08

#### Backward Incompatible Change

* Now the `OPTIMIZE FINAL` query doesn't recalculate TTL for parts that were added before the TTL was created. Use `ALTER TABLE ... MATERIALIZE TTL` once to calculate them; after that, `OPTIMIZE FINAL` will evaluate TTLs properly. This behavior never worked for replicated tables. [#14220](https://github.com/ClickHouse/ClickHouse/pull/14220) ([alesapin](https://github.com/alesapin)).
* Extend the `parallel_distributed_insert_select` setting, adding an option to run `INSERT` into the local table. The setting's type changes from `Bool` to `UInt64`, so the values `false` and `true` are no longer supported. If you have these values in the server configuration, the server will not start. Please replace them with `0` and `1`, respectively (see the sketch after this list). [#14060](https://github.com/ClickHouse/ClickHouse/pull/14060) ([Azat Khuzhin](https://github.com/azat)).
* Remove support for the `ODBCDriver` input/output format. This was a deprecated format once used for communication with the ClickHouse ODBC driver, now long superseded by the `ODBCDriver2` format. Resolves [#13629](https://github.com/ClickHouse/ClickHouse/issues/13629). [#13847](https://github.com/ClickHouse/ClickHouse/pull/13847) ([hexiaoting](https://github.com/hexiaoting)).
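
For illustration, a minimal sketch of migrating this setting (session-level form shown; the same numeric values apply in the server configuration):

```sql
SET parallel_distributed_insert_select = 0;  -- previously: false
SET parallel_distributed_insert_select = 1;  -- previously: true
```
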
#### New Feature

* ClickHouse can work as a MySQL replica; this is implemented by the `MaterializeMySQL` database engine. Implements [#4006](https://github.com/ClickHouse/ClickHouse/issues/4006). [#10851](https://github.com/ClickHouse/ClickHouse/pull/10851) ([Winter Zhang](https://github.com/zhang2014)).
* Add the ability to specify a `Default` compression codec for columns that correspond to settings specified in `config.xml`. Implements: [#9074](https://github.com/ClickHouse/ClickHouse/issues/9074). [#14049](https://github.com/ClickHouse/ClickHouse/pull/14049) ([alesapin](https://github.com/alesapin)).
* Support Kerberos authentication in Kafka, using the `krb5` and `cyrus-sasl` libraries. [#12771](https://github.com/ClickHouse/ClickHouse/pull/12771) ([Ilya Golshtein](https://github.com/ilejn)).
* Add the function `normalizeQuery` that replaces literals, sequences of literals and complex aliases with placeholders, and the function `normalizedQueryHash` that returns identical 64-bit hash values for similar queries. They help to analyze the query log (see the sketches after this list). This closes [#11271](https://github.com/ClickHouse/ClickHouse/issues/11271). [#13816](https://github.com/ClickHouse/ClickHouse/pull/13816) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the `time_zones` system table. [#13880](https://github.com/ClickHouse/ClickHouse/pull/13880) ([Bharat Nallan](https://github.com/bharatnc)).
* Add the function `defaultValueOfTypeName` that returns the default value for a given type. [#13877](https://github.com/ClickHouse/ClickHouse/pull/13877) ([hcz](https://github.com/hczhcz)).
* Add the `countDigits(x)` function that counts the number of decimal digits in an integer or decimal column, and the `isDecimalOverflow(d, [p])` function that checks whether the value in a Decimal column is out of its (or the specified) precision. [#14151](https://github.com/ClickHouse/ClickHouse/pull/14151) ([Artem Zuikov](https://github.com/4ertus2)).
* Add `quantileExactLow` and `quantileExactHigh` implementations with the respective aliases `medianExactLow` and `medianExactHigh`. [#13818](https://github.com/ClickHouse/ClickHouse/pull/13818) ([Bharat Nallan](https://github.com/bharatnc)).
* Added the `date_trunc` function that truncates a date/time value to a specified date/time part. [#13888](https://github.com/ClickHouse/ClickHouse/pull/13888) ([Vladimir Golovchenko](https://github.com/vladimir-golovchenko)).
* Add a new optional section `<user_directories>` to the main config. [#13425](https://github.com/ClickHouse/ClickHouse/pull/13425) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add the `ALTER SAMPLE BY` statement that allows changing the table sampling clause (see the sketch after this list). [#13280](https://github.com/ClickHouse/ClickHouse/pull/13280) ([Amos Bird](https://github.com/amosbird)).
* The function `position` now supports an optional `start_pos` argument. [#13237](https://github.com/ClickHouse/ClickHouse/pull/13237) ([vdimir](https://github.com/vdimir)).
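
A minimal sketch of the query-normalization functions described above (the `system.query_log` query assumes query logging is enabled):

```sql
SELECT normalizeQuery('SELECT 1 + 2');  -- literals become placeholders, e.g. 'SELECT ? + ?'

-- Group similar queries in the query log by their normalized form:
SELECT normalizeQuery(query) AS q, count() AS c
FROM system.query_log
GROUP BY q
ORDER BY c DESC
LIMIT 10;
```
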
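
A few of the new scalar functions in one sketch (the results in the comments follow from the descriptions above and are indicative):

```sql
SELECT defaultValueOfTypeName('Nullable(Int32)');               -- NULL
SELECT date_trunc('month', toDateTime('2020-09-08 10:30:00'));  -- 2020-09-01 00:00:00
SELECT countDigits(toDecimal32(12.34, 2));                      -- 4 (digits of 12.34 stored with scale 2)
SELECT isDecimalOverflow(toDecimal32(12.34, 2), 3);             -- 1 (4 digits do not fit into precision 3)
SELECT position('Hello, world!', 'o', 6);                       -- 9 (the first 'o' at or after position 6)
```
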
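
The `ALTER SAMPLE BY` statement as a sketch (the table `hits` and the expression are hypothetical; the new sampling expression must be part of the primary key):

```sql
ALTER TABLE hits MODIFY SAMPLE BY intHash32(UserID);
```
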
#### Bug Fix

* Fix visible data clobbering by the progress bar in the client in interactive mode. This fixes [#12562](https://github.com/ClickHouse/ClickHouse/issues/12562), [#13369](https://github.com/ClickHouse/ClickHouse/issues/13369), [#13584](https://github.com/ClickHouse/ClickHouse/issues/13584) and [#12964](https://github.com/ClickHouse/ClickHouse/issues/12964). [#13691](https://github.com/ClickHouse/ClickHouse/pull/13691) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect sorting order of `LowCardinality` columns when sorting by multiple columns. This fixes [#13958](https://github.com/ClickHouse/ClickHouse/issues/13958). [#14223](https://github.com/ClickHouse/ClickHouse/pull/14223) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Check for array size overflow in the `topK` aggregate function. Without this check the user could send a query with carefully crafted parameters that would lead to a server crash. This closes [#14452](https://github.com/ClickHouse/ClickHouse/issues/14452). [#14467](https://github.com/ClickHouse/ClickHouse/pull/14467) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a bug which could lead to wrong merge assignment if a table has partitions with a single part. [#14444](https://github.com/ClickHouse/ClickHouse/pull/14444) ([alesapin](https://github.com/alesapin)).
* Stop query execution if an exception happened in `PipelineExecutor` itself. This prevents a rare possible query hang. Continuation of [#14334](https://github.com/ClickHouse/ClickHouse/issues/14334). [#14402](https://github.com/ClickHouse/ClickHouse/pull/14402) [#14334](https://github.com/ClickHouse/ClickHouse/pull/14334) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a crash during an `ALTER` query for a table which was created `AS table_function`. Fixes [#14212](https://github.com/ClickHouse/ClickHouse/issues/14212). [#14326](https://github.com/ClickHouse/ClickHouse/pull/14326) ([alesapin](https://github.com/alesapin)).
* Fix an exception during an `ALTER LIVE VIEW` query with the `REFRESH` command (live view is an experimental feature). [#14320](https://github.com/ClickHouse/ClickHouse/pull/14320) ([Bharat Nallan](https://github.com/bharatnc)).
* Fix `QueryPlan` lifetime (for `EXPLAIN PIPELINE graph=1`) for queries with a nested interpreter. [#14315](https://github.com/ClickHouse/ClickHouse/pull/14315) ([Azat Khuzhin](https://github.com/azat)).
* Fix a segfault in `clickhouse-odbc-bridge` during schema fetch from some external sources. This PR fixes https://github.com/ClickHouse/ClickHouse/issues/13861. [#14267](https://github.com/ClickHouse/ClickHouse/pull/14267) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix a crash in mark inclusion search introduced in https://github.com/ClickHouse/ClickHouse/pull/12277. [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)).
* Fix creation of tables with named tuples. This fixes [#13027](https://github.com/ClickHouse/ClickHouse/issues/13027). [#14143](https://github.com/ClickHouse/ClickHouse/pull/14143) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix formatting of minimal negative decimal numbers. This fixes https://github.com/ClickHouse/ClickHouse/issues/14111. [#14119](https://github.com/ClickHouse/ClickHouse/pull/14119) ([Alexander Kuzmenkov](https://github.com/akuzm)).
* Fix the `DistributedFilesToInsert` metric (it was zeroed when it should not be). [#14095](https://github.com/ClickHouse/ClickHouse/pull/14095) ([Azat Khuzhin](https://github.com/azat)).
* Fix `pointInPolygon` with a const 2D array as the polygon. [#14079](https://github.com/ClickHouse/ClickHouse/pull/14079) ([Alexey Ilyukhov](https://github.com/livace)).
* Fixed the wrong mount point in extra info for `Poco::Exception: no space left on device`. [#14050](https://github.com/ClickHouse/ClickHouse/pull/14050) ([tavplubix](https://github.com/tavplubix)).
* Fix the `GRANT ALL` statement when executed on a non-global level. [#13987](https://github.com/ClickHouse/ClickHouse/pull/13987) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix the parser to reject `CREATE TABLE ... AS table_function()` with an engine clause. [#13940](https://github.com/ClickHouse/ClickHouse/pull/13940) ([hcz](https://github.com/hczhcz)).
* Fix wrong results in `SELECT` queries with the `DISTINCT` keyword and subqueries with `UNION ALL` in case the `optimize_duplicate_order_by_and_distinct` setting is enabled. [#13925](https://github.com/ClickHouse/ClickHouse/pull/13925) ([Artem Zuikov](https://github.com/4ertus2)).
* Fixed a potential deadlock when renaming a `Distributed` table. [#13922](https://github.com/ClickHouse/ClickHouse/pull/13922) ([tavplubix](https://github.com/tavplubix)).
* Fix incorrect sorting of `FixedString` columns when sorting by multiple columns. Fixes [#13182](https://github.com/ClickHouse/ClickHouse/issues/13182). [#13887](https://github.com/ClickHouse/ClickHouse/pull/13887) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a potentially imprecise result of `topK`/`topKWeighted` merge (with non-default parameters). [#13817](https://github.com/ClickHouse/ClickHouse/pull/13817) ([Azat Khuzhin](https://github.com/azat)).
* Fix reading from a MergeTree table with an `INDEX` of type `SET` failing when comparing against `NULL`. This fixes [#13686](https://github.com/ClickHouse/ClickHouse/issues/13686). [#13793](https://github.com/ClickHouse/ClickHouse/pull/13793) ([Amos Bird](https://github.com/amosbird)).
* Fix `arrayJoin` capturing in a lambda (which raised a `LOGICAL_ERROR`). [#13792](https://github.com/ClickHouse/ClickHouse/pull/13792) ([Azat Khuzhin](https://github.com/azat)).
* Add a step overflow check in the function `range`. [#13790](https://github.com/ClickHouse/ClickHouse/pull/13790) ([Azat Khuzhin](https://github.com/azat)).
* Fixed the `Directory not empty` error when concurrently executing `DROP DATABASE` and `CREATE TABLE`. [#13756](https://github.com/ClickHouse/ClickHouse/pull/13756) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add a range check for the `h3KRing` function. This fixes [#13633](https://github.com/ClickHouse/ClickHouse/issues/13633). [#13752](https://github.com/ClickHouse/ClickHouse/pull/13752) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a race condition between `DETACH` and background merges: parts could revive after detach. This is a continuation of [#8602](https://github.com/ClickHouse/ClickHouse/issues/8602) that did not fix the issue but introduced a test that started to fail in very rare cases, demonstrating the issue. [#13746](https://github.com/ClickHouse/ClickHouse/pull/13746) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix logging of `Settings.Names`/`Values` when `log_queries_min_type` is greater than `QUERY_START`. [#13737](https://github.com/ClickHouse/ClickHouse/pull/13737) ([Azat Khuzhin](https://github.com/azat)).
* Fix the `/replicas_status` endpoint response status code when `verbose=1`. [#13722](https://github.com/ClickHouse/ClickHouse/pull/13722) ([javi santana](https://github.com/javisantana)).
* Fix an incorrect message in `clickhouse-server.init` while checking user and group. [#13711](https://github.com/ClickHouse/ClickHouse/pull/13711) ([ylchou](https://github.com/ylchou)).
* Do not optimize `any(arrayJoin())` into `arrayJoin()` under the `optimize_move_functions_out_of_any` setting. [#13681](https://github.com/ClickHouse/ClickHouse/pull/13681) ([Azat Khuzhin](https://github.com/azat)).
* Fix a crash in `JOIN` with `StorageMerge` and `set enable_optimize_predicate_expression=1`. [#13679](https://github.com/ClickHouse/ClickHouse/pull/13679) ([Artem Zuikov](https://github.com/4ertus2)).
* Fix a typo in the error message about `The value of 'number_of_free_entries_in_pool_to_lower_max_size_of_merge' setting`. [#13678](https://github.com/ClickHouse/ClickHouse/pull/13678) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Concurrent `ALTER ... REPLACE/MOVE PARTITION ...` queries might cause a deadlock. It's fixed. [#13626](https://github.com/ClickHouse/ClickHouse/pull/13626) ([tavplubix](https://github.com/tavplubix)).
* Fixed the behaviour when a cache dictionary sometimes returned the default value instead of the value present in the source. [#13624](https://github.com/ClickHouse/ClickHouse/pull/13624) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix secondary index corruption in compact parts (compact parts are an experimental feature). [#13538](https://github.com/ClickHouse/ClickHouse/pull/13538) ([Anton Popov](https://github.com/CurtizJ)).
* Fix premature `ON CLUSTER` timeouts for queries that must be executed on a single replica. Fixes [#6704](https://github.com/ClickHouse/ClickHouse/issues/6704), [#7228](https://github.com/ClickHouse/ClickHouse/issues/7228), [#13361](https://github.com/ClickHouse/ClickHouse/issues/13361), [#11884](https://github.com/ClickHouse/ClickHouse/issues/11884). [#13450](https://github.com/ClickHouse/ClickHouse/pull/13450) ([alesapin](https://github.com/alesapin)).
* Fix wrong code in the function `netloc`. This fixes [#13335](https://github.com/ClickHouse/ClickHouse/issues/13335). [#13446](https://github.com/ClickHouse/ClickHouse/pull/13446) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a possible race in `StorageMemory`. [#13416](https://github.com/ClickHouse/ClickHouse/pull/13416) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix missing or excessive headers in the `TSV/CSVWithNames` formats in the HTTP protocol. This fixes [#12504](https://github.com/ClickHouse/ClickHouse/issues/12504). [#13343](https://github.com/ClickHouse/ClickHouse/pull/13343) ([Azat Khuzhin](https://github.com/azat)).
* Fix parsing of row policies from `users.xml` when names of databases or tables contain dots. This fixes https://github.com/ClickHouse/ClickHouse/issues/5779 and https://github.com/ClickHouse/ClickHouse/issues/12527. [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix access to a `redis` dictionary after the connection was dropped once. This could happen with the `cache` and `direct` dictionary layouts. [#13082](https://github.com/ClickHouse/ClickHouse/pull/13082) ([Anton Popov](https://github.com/CurtizJ)).
* Removed a wrong auth access check when using `ClickHouseDictionarySource` to query remote tables. [#12756](https://github.com/ClickHouse/ClickHouse/pull/12756) ([sundyli](https://github.com/sundy-li)).
* Properly distinguish subqueries in some cases for common subexpression elimination. https://github.com/ClickHouse/ClickHouse/issues/8333. [#8367](https://github.com/ClickHouse/ClickHouse/pull/8367) ([Amos Bird](https://github.com/amosbird)).

#### Improvement

* Disallow `CODEC` on `ALIAS` column types. Fixes [#13911](https://github.com/ClickHouse/ClickHouse/issues/13911). [#14263](https://github.com/ClickHouse/ClickHouse/pull/14263) ([Bharat Nallan](https://github.com/bharatnc)).
* When waiting for a dictionary update to complete, use the timeout specified by the `query_wait_timeout_milliseconds` setting instead of a hard-coded value. [#14105](https://github.com/ClickHouse/ClickHouse/pull/14105) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Add the setting `min_index_granularity_bytes` that protects against accidentally creating a table with a very low `index_granularity_bytes` setting. [#14139](https://github.com/ClickHouse/ClickHouse/pull/14139) ([Bharat Nallan](https://github.com/bharatnc)).
* Now it's possible to fetch partitions from clusters that use a different ZooKeeper: `ALTER TABLE table_name FETCH PARTITION partition_expr FROM 'zk-name:/path-in-zookeeper'`. It's useful for shipping data to new clusters (see the sketch after this list). [#14155](https://github.com/ClickHouse/ClickHouse/pull/14155) ([Amos Bird](https://github.com/amosbird)).
* Slightly better performance of the Memory table if it was constructed from a huge number of very small blocks (which is unlikely). Author of the idea: [Mark Papadakis](https://github.com/markpapadakis). Closes [#14043](https://github.com/ClickHouse/ClickHouse/issues/14043). [#14056](https://github.com/ClickHouse/ClickHouse/pull/14056) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Conditional aggregate functions (for example: `avgIf`, `sumIf`, `maxIf`) now return `NULL` when no rows match, if they are used with nullable arguments. [#13964](https://github.com/ClickHouse/ClickHouse/pull/13964) ([Winter Zhang](https://github.com/zhang2014)).
* Increase the limit in the `-Resample` combinator to 1M. [#13947](https://github.com/ClickHouse/ClickHouse/pull/13947) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Corrected an error in the `AvroConfluent` format that caused the Kafka table engine to stop processing messages when an abnormally small, malformed message was received. [#13941](https://github.com/ClickHouse/ClickHouse/pull/13941) ([Gervasio Varela](https://github.com/gervarela)).
* Fix a wrong error for long queries: it was possible to get a syntax error other than `Max query size exceeded` for a correct query. [#13928](https://github.com/ClickHouse/ClickHouse/pull/13928) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Better error message for a null value in the `TabSeparated` format. [#13906](https://github.com/ClickHouse/ClickHouse/pull/13906) ([jiang tao](https://github.com/tomjiang1987)).
* The function `arrayCompact` will compare NaNs bitwise if the type of the array elements is Float32/Float64. In previous versions NaNs were never equal if the element type was Float32/Float64 and were always equal if the type was more complex, like Nullable(Float64). This closes [#13857](https://github.com/ClickHouse/ClickHouse/issues/13857). [#13868](https://github.com/ClickHouse/ClickHouse/pull/13868) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix a data race in the `lgamma` function. This race was caught only by `tsan`; no side effects really happened. [#13842](https://github.com/ClickHouse/ClickHouse/pull/13842) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Avoid too-slow queries when arrays are manipulated as fields; throw an exception instead. [#13753](https://github.com/ClickHouse/ClickHouse/pull/13753) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added Redis `requirepass` authorization (for the Redis dictionary source). [#13688](https://github.com/ClickHouse/ClickHouse/pull/13688) ([Ivan Torgashov](https://github.com/it1804)).
* Add a MergeTree Write-Ahead-Log (WAL) dump tool (WAL is an experimental feature). [#13640](https://github.com/ClickHouse/ClickHouse/pull/13640) ([BohuTANG](https://github.com/BohuTANG)).
* In previous versions the `lcm` function could produce an assertion violation in debug builds if called with specifically crafted arguments. This fixes [#13368](https://github.com/ClickHouse/ClickHouse/issues/13368). [#13510](https://github.com/ClickHouse/ClickHouse/pull/13510) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Provide monotonicity for the `toDate`/`toDateTime` functions in more cases. Monotonicity information is used for index analysis (more complex queries will be able to use the index). Now the input arguments are saturated more naturally and provide better monotonicity. [#13497](https://github.com/ClickHouse/ClickHouse/pull/13497) ([Amos Bird](https://github.com/amosbird)).
* Support compound identifiers for custom settings. Custom settings are an integration point of the ClickHouse codebase with other codebases (there is no benefit for ClickHouse itself). [#13496](https://github.com/ClickHouse/ClickHouse/pull/13496) ([Vitaly Baranov](https://github.com/vitlibar)).
* Move parts from `DiskLocal` to `DiskS3` in parallel (`DiskS3` is an experimental feature). [#13459](https://github.com/ClickHouse/ClickHouse/pull/13459) ([Pavel Kovalenko](https://github.com/Jokser)).
* Enable mixed-granularity parts by default. [#13449](https://github.com/ClickHouse/ClickHouse/pull/13449) ([alesapin](https://github.com/alesapin)).
* Proper remote host checking in S3 redirects (security-related). [#13404](https://github.com/ClickHouse/ClickHouse/pull/13404) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Add `QueryTimeMicroseconds`, `SelectQueryTimeMicroseconds` and `InsertQueryTimeMicroseconds` to `system.events`. [#13336](https://github.com/ClickHouse/ClickHouse/pull/13336) ([ianton-ru](https://github.com/ianton-ru)).
* Fix a debug assertion when a Decimal has a too-large negative exponent. Fixes [#13188](https://github.com/ClickHouse/ClickHouse/issues/13188). [#13228](https://github.com/ClickHouse/ClickHouse/pull/13228) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added a cache layer for `DiskS3` (caches mark and index files on the local disk). `DiskS3` is an experimental feature. [#13076](https://github.com/ClickHouse/ClickHouse/pull/13076) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix readline so that it now dumps history to a file. [#13600](https://github.com/ClickHouse/ClickHouse/pull/13600) ([Amos Bird](https://github.com/amosbird)).
* Create the `system` database with the `Atomic` engine by default (a preparation to enable the `Atomic` database engine by default everywhere). [#13680](https://github.com/ClickHouse/ClickHouse/pull/13680) ([tavplubix](https://github.com/tavplubix)).
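
A sketch of fetching a partition from a foreign cluster, using the placeholders from the entry above (`zk-name` refers to an auxiliary ZooKeeper cluster configured in the server config):

```sql
ALTER TABLE table_name FETCH PARTITION partition_expr FROM 'zk-name:/path-in-zookeeper';
-- As with a regular FETCH PARTITION, attach the fetched data afterwards:
ALTER TABLE table_name ATTACH PARTITION partition_expr;
```
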
#### Performance Improvement

* Slightly optimize very short queries with `LowCardinality`. [#14129](https://github.com/ClickHouse/ClickHouse/pull/14129) ([Anton Popov](https://github.com/CurtizJ)).
* Enable parallel `INSERT`s for the table engines `Null`, `Memory`, `Distributed` and `Buffer` when the setting `max_insert_threads` is set (see the sketch after this list). [#14120](https://github.com/ClickHouse/ClickHouse/pull/14120) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fail fast if the `max_rows_to_read` limit is exceeded on a parts scan. The motivation behind this change is to skip range scans for all selected parts if it is clear that `max_rows_to_read` is already exceeded. The change is quite noticeable for queries over a big number of parts. [#13677](https://github.com/ClickHouse/ClickHouse/pull/13677) ([Roman Khavronenko](https://github.com/hagen1778)).
* Slightly improve performance of aggregation by `UInt8`/`UInt16` keys. [#13099](https://github.com/ClickHouse/ClickHouse/pull/13099) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Optimize the `has()`, `indexOf()` and `countEqual()` functions for `Array(LowCardinality(T))` and constant right arguments. [#12550](https://github.com/ClickHouse/ClickHouse/pull/12550) ([myrrc](https://github.com/myrrc)).
* When performing trivial `INSERT SELECT` queries, automatically set `max_threads` to 1 or `max_insert_threads`, and set `max_block_size` to `min_insert_block_size_rows`. Related to [#5907](https://github.com/ClickHouse/ClickHouse/issues/5907). [#12195](https://github.com/ClickHouse/ClickHouse/pull/12195) ([flynn](https://github.com/ucasFL)).
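
A sketch of a parallel `INSERT SELECT` (table names are hypothetical; parallelism applies to the engines listed above once `max_insert_threads` is set):

```sql
SET max_insert_threads = 8;
INSERT INTO buffer_table SELECT * FROM source_table;
```
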
#### Experimental Feature

* Add the types `Int128`, `Int256`, `UInt256` and related functions for them. Extend Decimals with `Decimal256` (precision up to 76 digits). The new types are gated behind the setting `allow_experimental_bigint_types` (a sketch follows this list). They currently work extremely slowly and badly, and the implementation is incomplete; please don't use this feature. [#13097](https://github.com/ClickHouse/ClickHouse/pull/13097) ([Artem Zuikov](https://github.com/4ertus2)).
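
Despite the warning above, a sketch of how the gate works, for experimentation only (the table is hypothetical):

```sql
SET allow_experimental_bigint_types = 1;

CREATE TABLE big_numbers
(
    i Int256,
    u UInt256,
    d Decimal256(40)
)
ENGINE = Memory;
```
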
#### Build/Testing/Packaging Improvement

* Added a `clickhouse install` script, which is useful if you only have a single binary. [#13528](https://github.com/ClickHouse/ClickHouse/pull/13528) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Allow running the `clickhouse` binary without configuration. [#13515](https://github.com/ClickHouse/ClickHouse/pull/13515) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Enable a check for typos in code with `codespell`. [#13513](https://github.com/ClickHouse/ClickHouse/pull/13513) [#13511](https://github.com/ClickHouse/ClickHouse/pull/13511) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Enable Shellcheck in CI as a linter of `.sh` tests. This closes [#13168](https://github.com/ClickHouse/ClickHouse/issues/13168). [#13530](https://github.com/ClickHouse/ClickHouse/pull/13530) [#13529](https://github.com/ClickHouse/ClickHouse/pull/13529) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add a CMake option to fail configuration instead of auto-reconfiguration; enabled by default. [#13687](https://github.com/ClickHouse/ClickHouse/pull/13687) ([Konstantin](https://github.com/podshumok)).
* Expose the version of the embedded tzdata via `TZDATA_VERSION` in `system.build_options`. [#13648](https://github.com/ClickHouse/ClickHouse/pull/13648) ([filimonov](https://github.com/filimonov)).
* Improve generation of the `system.time_zones` table during build. Closes [#14209](https://github.com/ClickHouse/ClickHouse/issues/14209). [#14215](https://github.com/ClickHouse/ClickHouse/pull/14215) ([filimonov](https://github.com/filimonov)).
* Build ClickHouse with the freshest tzdata from the package repository. [#13623](https://github.com/ClickHouse/ClickHouse/pull/13623) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Add the ability to write JS-style comments in `skip_list.json`. [#14159](https://github.com/ClickHouse/ClickHouse/pull/14159) ([alesapin](https://github.com/alesapin)).
* Ensure that there is no copy-pasted GPL code. [#13514](https://github.com/ClickHouse/ClickHouse/pull/13514) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Switch test Docker images to use the `test-base` parent. [#14167](https://github.com/ClickHouse/ClickHouse/pull/14167) ([Ilya Yatsishin](https://github.com/qoega)).
* Add retry logic when bringing up the docker-compose cluster; increase `COMPOSE_HTTP_TIMEOUT`. [#14112](https://github.com/ClickHouse/ClickHouse/pull/14112) ([vzakaznikov](https://github.com/vzakaznikov)).
* Enabled `system.text_log` in the stress test to find more bugs. [#13855](https://github.com/ClickHouse/ClickHouse/pull/13855) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Testflows LDAP module: add missing certificates and `dhparam.pem` for `openldap4`. [#13780](https://github.com/ClickHouse/ClickHouse/pull/13780) ([vzakaznikov](https://github.com/vzakaznikov)).
* ZooKeeper cannot work reliably in unit tests in the CI infrastructure. Using unit tests for ZooKeeper interaction with a real ZooKeeper was a bad idea from the start (unit tests are not supposed to verify complex distributed systems). We already use integration tests for this purpose, and they are better suited. [#13745](https://github.com/ClickHouse/ClickHouse/pull/13745) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Added a Docker image for style checks. Added a style check that all Docker and docker-compose files are located in the `docker` directory. [#13724](https://github.com/ClickHouse/ClickHouse/pull/13724) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix the Cassandra build on macOS. [#13708](https://github.com/ClickHouse/ClickHouse/pull/13708) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix a link error in the shared build. [#13700](https://github.com/ClickHouse/ClickHouse/pull/13700) ([Amos Bird](https://github.com/amosbird)).
* Update the LDAP user authentication suite to check that it works with RBAC. [#13656](https://github.com/ClickHouse/ClickHouse/pull/13656) ([vzakaznikov](https://github.com/vzakaznikov)).
* Removed `-DENABLE_CURL_CLIENT` for `contrib/aws`. [#13628](https://github.com/ClickHouse/ClickHouse/pull/13628) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Increase health-check timeouts for ClickHouse nodes and add support for dumping docker-compose logs if unhealthy containers are found. [#13612](https://github.com/ClickHouse/ClickHouse/pull/13612) ([vzakaznikov](https://github.com/vzakaznikov)).
* Make sure https://github.com/ClickHouse/ClickHouse/issues/10977 is invalid. [#13539](https://github.com/ClickHouse/ClickHouse/pull/13539) ([Amos Bird](https://github.com/amosbird)).
* Skip PRs from robot-clickhouse. [#13489](https://github.com/ClickHouse/ClickHouse/pull/13489) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Move Dockerfiles from integration tests to the `docker/test` directory. docker-compose files are available in the `runner` Docker container. Docker images are built in CI and not in integration tests. [#13448](https://github.com/ClickHouse/ClickHouse/pull/13448) ([Ilya Yatsishin](https://github.com/qoega)).

## ClickHouse release 20.7

### ClickHouse release v20.7.2.30-stable, 2020-08-31

@@ -15,6 +15,7 @@ ClickHouse is an open-source column-oriented database management system that all
* [Contacts](https://clickhouse.tech/#contacts) can help to get your questions answered if there are any.
* You can also [fill this form](https://clickhouse.tech/#meet) to meet Yandex ClickHouse team in person.

## Upcoming Events

* [ClickHouse talk at Ya.Subbotnik (in Russian)](https://ya.cc/t/cIBI-3yECj5JF) on September 12, 2020.
+* [eBay migrating from Druid](https://us02web.zoom.us/webinar/register/tZMkfu6rpjItHtaQ1DXcgPWcSOnmM73HLGKL) on September 23, 2020.
* [ClickHouse for Edge Analytics](https://ones2020.sched.com/event/bWPs) on September 29, 2020.

base/common/throwError.h (new file, +13)
@@ -0,0 +1,13 @@
#pragma once
#include <stdexcept>

/// Throw a DB::Exception-like exception before its definition.
/// DB::Exception is derived from Poco::Exception, which is derived from std::exception.
/// DB::Exception is generally caught as Poco::Exception. std::exception generally has other catch blocks and could lead to other outcomes.
/// DB::Exception is not defined yet. It'd be better to throw Poco::Exception, but we do not want to include any big header here, even <string>.
/// So we throw some std::exception instead, in the hope that its catch block is the same as DB::Exception's one.

template <typename T>
inline void throwError(const T & err)
{
    throw std::runtime_error(err);
}

@@ -23,8 +23,8 @@ using UInt64 = uint64_t;

using Int128 = __int128;

-using wInt256 = std::wide_integer<256, signed>;
-using wUInt256 = std::wide_integer<256, unsigned>;
+using wInt256 = wide::integer<256, signed>;
+using wUInt256 = wide::integer<256, unsigned>;

static_assert(sizeof(wInt256) == 32);
static_assert(sizeof(wUInt256) == 32);

@@ -119,12 +119,6 @@ template <> struct is_big_int<wUInt256> { static constexpr bool value = true; };
template <typename T>
inline constexpr bool is_big_int_v = is_big_int<T>::value;

-template <typename T>
-inline std::string bigintToString(const T & x)
-{
-    return to_string(x);
-}
-
template <typename To, typename From>
inline To bigint_cast(const From & x [[maybe_unused]])
{

@@ -22,79 +22,87 @@
 * without express or implied warranty.
 */

#include <climits> // CHAR_BIT
#include <cmath>
#include <cstdint>
#include <limits>
#include <type_traits>
#include <initializer_list>

+namespace wide
+{
+template <size_t Bits, typename Signed>
+class integer;
+}
+
namespace std
{
-template <size_t Bits, typename Signed>
-class wide_integer;

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-struct common_type<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>>;
+struct common_type<wide::integer<Bits, Signed>, wide::integer<Bits2, Signed2>>;

template <size_t Bits, typename Signed, typename Arithmetic>
-struct common_type<wide_integer<Bits, Signed>, Arithmetic>;
+struct common_type<wide::integer<Bits, Signed>, Arithmetic>;

template <typename Arithmetic, size_t Bits, typename Signed>
-struct common_type<Arithmetic, wide_integer<Bits, Signed>>;
+struct common_type<Arithmetic, wide::integer<Bits, Signed>>;
}

+namespace wide
+{
+
template <size_t Bits, typename Signed>
-class wide_integer
+class integer
{
public:
    using base_type = uint8_t;
    using signed_base_type = int8_t;

    // ctors
-    wide_integer() = default;
+    integer() = default;

    template <typename T>
-    constexpr wide_integer(T rhs) noexcept;
+    constexpr integer(T rhs) noexcept;
    template <typename T>
-    constexpr wide_integer(std::initializer_list<T> il) noexcept;
+    constexpr integer(std::initializer_list<T> il) noexcept;

    // assignment
    template <size_t Bits2, typename Signed2>
-    constexpr wide_integer<Bits, Signed> & operator=(const wide_integer<Bits2, Signed2> & rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator=(const integer<Bits2, Signed2> & rhs) noexcept;

    template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator=(Arithmetic rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator=(Arithmetic rhs) noexcept;

    template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator*=(const Arithmetic & rhs);
+    constexpr integer<Bits, Signed> & operator*=(const Arithmetic & rhs);

    template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator/=(const Arithmetic & rhs);
+    constexpr integer<Bits, Signed> & operator/=(const Arithmetic & rhs);

    template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator+=(const Arithmetic & rhs) noexcept(is_same<Signed, unsigned>::value);
+    constexpr integer<Bits, Signed> & operator+=(const Arithmetic & rhs) noexcept(std::is_same_v<Signed, unsigned>);

    template <typename Arithmetic>
-    constexpr wide_integer<Bits, Signed> & operator-=(const Arithmetic & rhs) noexcept(is_same<Signed, unsigned>::value);
+    constexpr integer<Bits, Signed> & operator-=(const Arithmetic & rhs) noexcept(std::is_same_v<Signed, unsigned>);

    template <typename Integral>
-    constexpr wide_integer<Bits, Signed> & operator%=(const Integral & rhs);
+    constexpr integer<Bits, Signed> & operator%=(const Integral & rhs);

    template <typename Integral>
-    constexpr wide_integer<Bits, Signed> & operator&=(const Integral & rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator&=(const Integral & rhs) noexcept;

    template <typename Integral>
-    constexpr wide_integer<Bits, Signed> & operator|=(const Integral & rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator|=(const Integral & rhs) noexcept;

    template <typename Integral>
-    constexpr wide_integer<Bits, Signed> & operator^=(const Integral & rhs) noexcept;
+    constexpr integer<Bits, Signed> & operator^=(const Integral & rhs) noexcept;

-    constexpr wide_integer<Bits, Signed> & operator<<=(int n);
-    constexpr wide_integer<Bits, Signed> & operator>>=(int n) noexcept;
+    constexpr integer<Bits, Signed> & operator<<=(int n) noexcept;
+    constexpr integer<Bits, Signed> & operator>>=(int n) noexcept;

-    constexpr wide_integer<Bits, Signed> & operator++() noexcept(is_same<Signed, unsigned>::value);
-    constexpr wide_integer<Bits, Signed> operator++(int) noexcept(is_same<Signed, unsigned>::value);
-    constexpr wide_integer<Bits, Signed> & operator--() noexcept(is_same<Signed, unsigned>::value);
-    constexpr wide_integer<Bits, Signed> operator--(int) noexcept(is_same<Signed, unsigned>::value);
+    constexpr integer<Bits, Signed> & operator++() noexcept(std::is_same_v<Signed, unsigned>);
+    constexpr integer<Bits, Signed> operator++(int) noexcept(std::is_same_v<Signed, unsigned>);
+    constexpr integer<Bits, Signed> & operator--() noexcept(std::is_same_v<Signed, unsigned>);
+    constexpr integer<Bits, Signed> operator--(int) noexcept(std::is_same_v<Signed, unsigned>);

    // observers

@@ -114,10 +122,10 @@ public:

private:
    template <size_t Bits2, typename Signed2>
-    friend class wide_integer;
+    friend class integer;

-    friend class numeric_limits<wide_integer<Bits, signed>>;
-    friend class numeric_limits<wide_integer<Bits, unsigned>>;
+    friend class std::numeric_limits<integer<Bits, signed>>;
+    friend class std::numeric_limits<integer<Bits, unsigned>>;

    base_type m_arr[_impl::arr_size];
};

@@ -134,115 +142,117 @@ using __only_integer = typename std::enable_if<IntegralConcept<T>() && IntegralC

// Unary operators
template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator~(const wide_integer<Bits, Signed> & lhs) noexcept;
+constexpr integer<Bits, Signed> operator~(const integer<Bits, Signed> & lhs) noexcept;

template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator-(const wide_integer<Bits, Signed> & lhs) noexcept(is_same<Signed, unsigned>::value);
+constexpr integer<Bits, Signed> operator-(const integer<Bits, Signed> & lhs) noexcept(std::is_same_v<Signed, unsigned>);

template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator+(const wide_integer<Bits, Signed> & lhs) noexcept(is_same<Signed, unsigned>::value);
+constexpr integer<Bits, Signed> operator+(const integer<Bits, Signed> & lhs) noexcept(std::is_same_v<Signed, unsigned>);

// Binary operators
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator*(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator*(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator*(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator/(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator/(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator/(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator+(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator+(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator+(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator-(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator-(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator-(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator%(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator%(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
std::common_type_t<Integral, Integral2> constexpr operator%(const Integral & rhs, const Integral2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator&(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator&(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
std::common_type_t<Integral, Integral2> constexpr operator&(const Integral & rhs, const Integral2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator|(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator|(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
std::common_type_t<Integral, Integral2> constexpr operator|(const Integral & rhs, const Integral2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-std::common_type_t<wide_integer<Bits, Signed>, wide_integer<Bits2, Signed2>> constexpr
-operator^(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
+operator^(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
std::common_type_t<Integral, Integral2> constexpr operator^(const Integral & rhs, const Integral2 & lhs);

// TODO: Integral
template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator<<(const wide_integer<Bits, Signed> & lhs, int n) noexcept;
+constexpr integer<Bits, Signed> operator<<(const integer<Bits, Signed> & lhs, int n) noexcept;
template <size_t Bits, typename Signed>
-constexpr wide_integer<Bits, Signed> operator>>(const wide_integer<Bits, Signed> & lhs, int n) noexcept;
+constexpr integer<Bits, Signed> operator>>(const integer<Bits, Signed> & lhs, int n) noexcept;

template <size_t Bits, typename Signed, typename Int, typename = std::enable_if_t<!std::is_same_v<Int, int>>>
-constexpr wide_integer<Bits, Signed> operator<<(const wide_integer<Bits, Signed> & lhs, Int n) noexcept
+constexpr integer<Bits, Signed> operator<<(const integer<Bits, Signed> & lhs, Int n) noexcept
{
    return lhs << int(n);
}
template <size_t Bits, typename Signed, typename Int, typename = std::enable_if_t<!std::is_same_v<Int, int>>>
-constexpr wide_integer<Bits, Signed> operator>>(const wide_integer<Bits, Signed> & lhs, Int n) noexcept
+constexpr integer<Bits, Signed> operator>>(const integer<Bits, Signed> & lhs, Int n) noexcept
{
    return lhs >> int(n);
}

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator<(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator<(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator<(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator>(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator>(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator>(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator<=(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator<=(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator<=(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator>=(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator>=(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator>=(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator==(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator==(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator==(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
-constexpr bool operator!=(const wide_integer<Bits, Signed> & lhs, const wide_integer<Bits2, Signed2> & rhs);
+constexpr bool operator!=(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator!=(const Arithmetic & rhs, const Arithmetic2 & lhs);

-template <size_t Bits, typename Signed>
-std::string to_string(const wide_integer<Bits, Signed> & n);
}

namespace std
{

template <size_t Bits, typename Signed>
-struct hash<wide_integer<Bits, Signed>>;
+struct hash<wide::integer<Bits, Signed>>;

}

(File diff suppressed because it is too large.)

base/common/wide_integer_to_string.h (new file, +35)
@@ -0,0 +1,35 @@
#pragma once

#include <string>

#include "wide_integer.h"

namespace wide
{

template <size_t Bits, typename Signed>
inline std::string to_string(const integer<Bits, Signed> & n)
{
    std::string res;
    if (integer<Bits, Signed>::_impl::operator_eq(n, 0U))
        return "0";

    integer<Bits, unsigned> t;
    bool is_neg = integer<Bits, Signed>::_impl::is_negative(n);
    if (is_neg)
        t = integer<Bits, Signed>::_impl::operator_unary_minus(n);
    else
        t = n;

    while (!integer<Bits, unsigned>::_impl::operator_eq(t, 0U))
    {
        res.insert(res.begin(), '0' + char(integer<Bits, unsigned>::_impl::operator_percent(t, 10U)));
        t = integer<Bits, unsigned>::_impl::operator_slash(t, 10U);
    }

    if (is_neg)
        res.insert(res.begin(), '-');
    return res;
}

}

@@ -36,7 +36,15 @@ if (SANITIZE)
    endif ()

elseif (SANITIZE STREQUAL "thread")
-    set (TSAN_FLAGS "-fsanitize=thread -fsanitize-blacklist=${CMAKE_SOURCE_DIR}/tests/tsan_suppressions.txt")
+    set (TSAN_FLAGS "-fsanitize=thread")
+    if (COMPILER_CLANG)
+        set (TSAN_FLAGS "${TSAN_FLAGS} -fsanitize-blacklist=${CMAKE_SOURCE_DIR}/tests/tsan_suppressions.txt")
+    else()
+        message (WARNING "TSAN suppressions was not passed to the compiler (since the compiler is not clang)")
+        message (WARNING "Use the following command to pass them manually:")
+        message (WARNING "    export TSAN_OPTIONS=\"$TSAN_OPTIONS suppressions=${CMAKE_SOURCE_DIR}/tests/tsan_suppressions.txt\"")
+    endif()

    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_FLAGS} ${TSAN_FLAGS}")
    set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_FLAGS} ${TSAN_FLAGS}")

@@ -23,7 +23,7 @@ option (WEVERYTHING "Enables -Weverything option with some exceptions. This is i
# Control maximum size of stack frames. It can be important if the code is run in fibers with small stack size.
# Only in release build because debug has too large stack frames.
if ((NOT CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG") AND (NOT SANITIZE))
-    add_warning(frame-larger-than=16384)
+    add_warning(frame-larger-than=32768)
endif ()

if (COMPILER_CLANG)

@@ -169,9 +169,16 @@ elseif (COMPILER_GCC)
    # Warn if vector operation is not implemented via SIMD capabilities of the architecture
    add_cxx_compile_options(-Wvector-operation-performance)

-    # XXX: gcc10 stuck with this option while compiling GatherUtils code
-    # (anyway there are builds with clang, that will warn)
    if (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 10)
+        # XXX: gcc10 stuck with this option while compiling GatherUtils code
+        # (anyway there are builds with clang, that will warn)
        add_cxx_compile_options(-Wno-sequence-point)
+        # XXX: gcc10 false positive with this warning in MergeTreePartition.cpp
+        #     inlined from 'void writeHexByteLowercase(UInt8, void*)' at ../src/Common/hex.h:39:11,
+        #     inlined from 'DB::String DB::MergeTreePartition::getID(const DB::Block&) const' at ../src/Storages/MergeTree/MergeTreePartition.cpp:85:30:
+        # ../contrib/libc-headers/x86_64-linux-gnu/bits/string_fortified.h:34:33: error: writing 2 bytes into a region of size 0 [-Werror=stringop-overflow=]
+        #       34 |       return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
+        # For some reason (bug in gcc?) macro 'GCC diagnostic ignored "-Wstringop-overflow"' doesn't help.
+        add_cxx_compile_options(-Wno-stringop-overflow)
    endif()
endif ()

contrib/llvm (vendored submodule)
@@ -1 +1 @@
-Subproject commit 3d6c7e916760b395908f28a1c885c8334d4fa98b
+Subproject commit 8f24d507c1cfeec66d27f48fe74518fd278e2d25

@@ -32,8 +32,6 @@ RUN apt-get update \
            curl \
-            gcc-9 \
-            g++-9 \
            gcc-10 \
            g++-10 \
            llvm-${LLVM_VERSION} \
            clang-${LLVM_VERSION} \
            lld-${LLVM_VERSION} \
@@ -93,5 +91,16 @@ RUN wget -nv "https://developer.arm.com/-/media/Files/downloads/gnu-a/8.3-2019.0
# Download toolchain for FreeBSD 11.3
RUN wget -nv https://clickhouse-datasets.s3.yandex.net/toolchains/toolchains/freebsd-11.3-toolchain.tar.xz

+# NOTE: For some reason we have an outdated version of gcc-10 in Ubuntu 20.04 stable.
+# The current workaround is to use the latest version from the proposed repo. Remove as soon as
+# gcc-10.2 appears in the stable repo.
+RUN echo 'deb http://archive.ubuntu.com/ubuntu/ focal-proposed restricted main multiverse universe' > /etc/apt/sources.list.d/proposed-repositories.list
+
+RUN apt-get update \
+    && apt-get install gcc-10 g++-10 --yes
+
+RUN rm /etc/apt/sources.list.d/proposed-repositories.list && apt-get update
+
+
COPY build.sh /
CMD ["/bin/bash", "/build.sh"]

@@ -42,8 +42,6 @@ RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
# Libraries from OS are only needed to test the "unbundled" build (this is not used in production).
RUN apt-get update \
    && apt-get install \
-            gcc-10 \
-            g++-10 \
            gcc-9 \
            g++-9 \
            clang-11 \
@@ -75,6 +73,16 @@ RUN apt-get update \
            pigz \
    --yes --no-install-recommends

+# NOTE: For some reason we have an outdated version of gcc-10 in Ubuntu 20.04 stable.
+# The current workaround is to use the latest version from the proposed repo. Remove as soon as
+# gcc-10.2 appears in the stable repo.
+RUN echo 'deb http://archive.ubuntu.com/ubuntu/ focal-proposed restricted main multiverse universe' > /etc/apt/sources.list.d/proposed-repositories.list
+
+RUN apt-get update \
+    && apt-get install gcc-10 g++-10 --yes --no-install-recommends
+
+RUN rm /etc/apt/sources.list.d/proposed-repositories.list && apt-get update
+
# This symlink is required by gcc to find the lld linker
RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld

@@ -93,7 +93,7 @@ def parse_env_variables(build_type, compiler, sanitizer, package_type, image_typ

    cxx = cc.replace('gcc', 'g++').replace('clang', 'clang++')

-    if image_type == "deb":
+    if image_type == "deb" or image_type == "unbundled":
        result.append("DEB_CC={}".format(cc))
        result.append("DEB_CXX={}".format(cxx))
    elif image_type == "binary":

@@ -394,12 +394,24 @@ create table query_run_metrics_denorm engine File(TSV, 'analyze/query-run-metric
    order by test, query_index, metric_names, version, query_id
;

+-- Filter out tests that don't have an even number of runs, to avoid breaking
+-- the further calculations. This may happen if there was an error during the
+-- test runs, e.g. the server died. It will be reported in test errors, so we
+-- don't have to report it again.
+create view broken_queries as
+    select test, query_index
+    from query_runs
+    group by test, query_index
+    having count(*) % 2 != 0
+;
+
-- This is for statistical processing with eqmed.sql
create table query_run_metrics_for_stats engine File(
    TSV, -- do not add header -- will parse with grep
    'analyze/query-run-metrics-for-stats.tsv')
    as select test, query_index, 0 run, version, metric_values
    from query_run_metric_arrays
+    where (test, query_index) not in broken_queries
    order by test, query_index, run, version
;
@ -915,13 +927,15 @@ done

function report_metrics
{
build_log_column_definitions

rm -rf metrics ||:
mkdir metrics

clickhouse-local --query "
create view right_async_metric_log as
    select * from file('right-async-metric-log.tsv', TSVWithNamesAndTypes,
        'event_date Date, event_time DateTime, name String, value Float64')
        '$(cat right-async-metric-log.tsv.columns)')
    ;

-- Use the right log as time reference because it may have higher precision.
@ -930,7 +944,7 @@ create table metrics engine File(TSV, 'metrics/metrics.tsv') as
    select name metric, r.event_time - min_time event_time, l.value as left, r.value as right
    from right_async_metric_log r
    asof join file('left-async-metric-log.tsv', TSVWithNamesAndTypes,
        'event_date Date, event_time DateTime, name String, value Float64') l
        '$(cat left-async-metric-log.tsv.columns)') l
    on l.name = r.name and r.event_time <= l.event_time
    order by metric, event_time
    ;
@ -8,7 +8,7 @@ select
from
(
    -- quantiles of randomization distributions
    select quantileExactForEach(0.999)(
    select quantileExactForEach(0.99)(
        arrayMap(x, y -> abs(x - y), metrics_by_label[1], metrics_by_label[2]) as d
        ) threshold
        ---- uncomment to see what the distribution is really like
@ -33,7 +33,7 @@ from
    -- strip the query away before the join -- it might be several kB long;
    (select metrics, run, version from table) no_query,
    -- duplicate input measurements into many virtual runs
    numbers(1, 100000) nn
    numbers(1, 10000) nn
    -- for each virtual run, randomly reorder measurements
    order by virtual_run, rand()
) virtual_runs
@ -20,7 +20,7 @@ parser = argparse.ArgumentParser(description='Run performance test.')
parser.add_argument('file', metavar='FILE', type=argparse.FileType('r', encoding='utf-8'), nargs=1, help='test description file')
parser.add_argument('--host', nargs='*', default=['localhost'], help="Server hostname(s). Corresponds to '--port' options.")
parser.add_argument('--port', nargs='*', default=[9000], help="Server port(s). Corresponds to '--host' options.")
parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 13)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
parser.add_argument('--runs', type=int, default=int(os.environ.get('CHPC_RUNS', 7)), help='Number of query runs per server. Defaults to CHPC_RUNS environment variable.')
parser.add_argument('--long', action='store_true', help='Do not skip the tests tagged as long.')
parser.add_argument('--print-queries', action='store_true', help='Print test queries and exit.')
parser.add_argument('--print-settings', action='store_true', help='Print test settings and exit.')
@ -372,7 +372,7 @@ if args.report == 'main':
        'New, s', # 1
        'Ratio of speedup (-) or slowdown (+)', # 2
        'Relative difference (new − old) / old', # 3
        'p < 0.001 threshold', # 4
        'p < 0.01 threshold', # 4
        # Failed # 5
        'Test', # 6
        '#', # 7
@ -416,7 +416,7 @@ if args.report == 'main':
        'Old, s', #0
        'New, s', #1
        'Relative difference (new - old)/old', #2
        'p < 0.001 threshold', #3
        'p < 0.01 threshold', #3
        # Failed #4
        'Test', #5
        '#', #6
@ -470,12 +470,13 @@ if args.report == 'main':
    text = tableStart('Test times')
    text += tableHeader(columns)

    nominal_runs = 13 # FIXME pass this as an argument
    nominal_runs = 7 # FIXME pass this as an argument
    total_runs = (nominal_runs + 1) * 2 # one prewarm run, two servers
    allowed_average_run_time = allowed_single_run_time + 60 / total_runs; # some allowance for fill/create queries
    attrs = ['' for c in columns]
    for r in rows:
        anchor = f'{currentTableAnchor()}.{r[0]}'
        if float(r[6]) > 1.5 * total_runs:
        if float(r[6]) > allowed_average_run_time * total_runs:
            # FIXME should be 15s max -- investigate parallel_insert
            slow_average_tests += 1
            attrs[6] = f'style="background: {color_bad}"'
@ -649,7 +650,7 @@ elif args.report == 'all-queries':
        'New, s', #3
        'Ratio of speedup (-) or slowdown (+)', #4
        'Relative difference (new − old) / old', #5
        'p < 0.001 threshold', #6
        'p < 0.01 threshold', #6
        'Test', #7
        '#', #8
        'Query', #9
@ -28,7 +28,7 @@ def get_options(i):
    options = ""
    if 0 < i:
        options += " --order=random"
    if i == 1:
    if i % 2 == 1:
        options += " --atomic-db-engine"
    return options
@ -10,12 +10,16 @@ Columns:
-   `progress` (Float64) — The percentage of completed work from 0 to 1.
-   `num_parts` (UInt64) — The number of pieces to be merged.
-   `result_part_name` (String) — The name of the part that will be formed as the result of merging.
-   `is_mutation` (UInt8) - 1 if this process is a part mutation.
-   `is_mutation` (UInt8) — 1 if this process is a part mutation.
-   `total_size_bytes_compressed` (UInt64) — The total size of the compressed data in the merged chunks.
-   `total_size_marks` (UInt64) — The total number of marks in the merged parts.
-   `bytes_read_uncompressed` (UInt64) — Number of bytes read, uncompressed.
-   `rows_read` (UInt64) — Number of rows read.
-   `bytes_written_uncompressed` (UInt64) — Number of bytes written, uncompressed.
-   `rows_written` (UInt64) — Number of rows written.
-   `memory_usage` (UInt64) — Memory consumption of the merge process.
-   `thread_id` (UInt64) — Thread ID of the merge process.
-   `merge_type` — The type of the current merge. Empty if it is a mutation.
-   `merge_algorithm` — The algorithm used in the current merge. Empty if it is a mutation.
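For a quick look at the table's contents, a query such as the following can be used (a generic sketch; the output depends on whatever merges happen to be running):

``` sql
SELECT database, table, round(progress, 2) AS progress, is_mutation, merge_type
FROM system.merges
LIMIT 5
FORMAT Vertical;
```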

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/merges) <!--hide-->
@ -38,7 +38,7 @@ clickhouse-benchmark [keys] < queries_file
-   `-d N`, `--delay=N` — Interval in seconds between intermediate reports (set 0 to disable reports). Default value: 1.
-   `-h WORD`, `--host=WORD` — Server host. Default value: `localhost`. For the [comparison mode](#clickhouse-benchmark-comparison-mode) you can use multiple `-h` keys.
-   `-p N`, `--port=N` — Server port. Default value: 9000. For the [comparison mode](#clickhouse-benchmark-comparison-mode) you can use multiple `-p` keys.
-   `-i N`, `--iterations=N` — Total number of queries. Default value: 0.
-   `-i N`, `--iterations=N` — Total number of queries. Default value: 0 (repeat forever).
-   `-r`, `--randomize` — Random order of query execution if there is more than one input query.
-   `-s`, `--secure` — Using a TLS connection.
-   `-t N`, `--timelimit=N` — Time limit in seconds. `clickhouse-benchmark` stops sending queries when the specified time limit is reached. Default value: 0 (time limit disabled).
@ -46,3 +46,25 @@ SELECT mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt3
│ ([1,2],[-1,0]) │ Tuple(Array(UInt8), Array(Int64)) │
└────────────────┴───────────────────────────────────┘
```

## mapPopulateSeries {#function-mappopulateseries}

Syntax: `mapPopulateSeries(keys : Array(<IntegerType>), values : Array(<IntegerType>)[, max : <IntegerType>])`

Generates a map where the keys are a series of numbers, from the minimum key to the maximum key (or to the `max` argument, if it is specified) taken from the `keys` array, with a step size of one, and the corresponding values are taken from the `values` array. If no value is specified for a key, the default value is used in the resulting map. For repeated keys, only the first value (in order of appearance) is associated with the key.

The number of elements in `keys` and `values` must be the same for each row.

Returns a tuple of two arrays: the keys in sorted order, and the values corresponding to those keys.

``` sql
select mapPopulateSeries([1,2,4], [11,22,44], 5) as res, toTypeName(res) as type;
```

``` text
┌─res──────────────────────────┬─type──────────────────────────────┐
│ ([1,2,3,4,5],[11,22,0,44,0]) │ Tuple(Array(UInt8), Array(UInt8)) │
└──────────────────────────────┴───────────────────────────────────┘
```
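When `max` is omitted, the series should extend only up to the largest key present in `keys`. For illustration, assuming the behavior described above:

``` sql
select mapPopulateSeries([1,2,4], [11,22,44]) as res, toTypeName(res) as type;
```

which should return `([1,2,3,4],[11,22,0,44])` with the same tuple type as in the example above.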
@ -1,20 +1,18 @@
---
toc_priority: 49
toc_title: Copia de seguridad de datos
---

# Data Backup {#data-backup}

While [replication](../engines/table-engines/mergetree-family/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes; for example, by default [you cannot just drop tables with a MergeTree-like engine containing more than 50 GB of data](https://github.com/ClickHouse/ClickHouse/blob/v18.14.18-stable/programs/server/config.xml#L322-L330). However, these safeguards do not cover all possible cases and can be circumvented.

In order to effectively mitigate possible human errors, you should carefully prepare a strategy for backing up and restoring your data **in advance**.

Each company has different resources available and different business requirements, so there is no universal solution for ClickHouse backups and restores that fits every situation. What works for one gigabyte of data likely will not work for tens of petabytes. There are a variety of possible approaches, each with its own pros and cons, which will be discussed below. It is a good idea to use several approaches instead of just one in order to compensate for their various shortcomings.

!!! note "Note"
    Keep in mind that if you backed something up and never tried to restore it, chances are that the restore will not work properly when you actually need it (or at least it will take longer than the business can tolerate). So whatever backup approach you choose, make sure to automate the restore process as well, and practice it on a spare ClickHouse cluster regularly.

## Duplicating Source Data Somewhere Else {#duplicating-source-data-somewhere-else}

@ -32,7 +30,7 @@ For smaller volumes of data, a simple `INSERT INTO ... SELECT ...`

## Manipulations with Parts {#manipulations-with-parts}

ClickHouse allows using the `ALTER TABLE ... FREEZE PARTITION ...` query to create a local copy of table partitions. This is implemented using hard links into the `/var/lib/clickhouse/shadow/` folder, so it usually does not consume extra disk space for old data. The created copies of files are not handled by the ClickHouse server, so you can just leave them there: you will have a simple backup that does not require any additional external system, but it will still be prone to hardware issues. For this reason, it is better to copy them remotely to another location and then remove the local copies. Distributed filesystems and object stores are still good options for this, but normal attached file servers with a large enough capacity might work as well (in this case the transfer will happen via the network filesystem or perhaps [rsync](https://en.wikipedia.org/wiki/Rsync)).
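A minimal sketch of the freeze step (the table name `hits` is illustrative; without a `PARTITION` clause the query freezes all partitions):

``` sql
-- Hard-linked copies of the data parts appear under /var/lib/clickhouse/shadow/.
ALTER TABLE hits FREEZE;
```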

For more information about queries related to partition manipulations, see the [ALTER documentation](../sql-reference/statements/alter.md#alter_manipulations-with-partitions).
@ -5,13 +5,15 @@ toc_title: Представление

# CREATE VIEW {#create-view}

Creates a view. Views come in two kinds: normal and materialized (MATERIALIZED).

## Normal Views {#normal}

``` sql
CREATE [OR REPLACE] VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] AS SELECT ...
```

Normal views don't store any data, they just perform a read from another table on each access. In other words, a normal view is nothing more than a saved query. When reading from a view, this saved query is used as a subquery in the [FROM](../../../sql-reference/statements/select/from.md) clause.

For example, suppose you have created a view:

@ -31,15 +33,24 @@ SELECT a, b, c FROM view
SELECT a, b, c FROM (SELECT ...)
```

## Materialized Views {#materialized}

``` sql
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] [TO[db.]name] [ENGINE = engine] [POPULATE] AS SELECT ...
```

Materialized (MATERIALIZED) views store data transformed by the corresponding [SELECT](../../../sql-reference/statements/select/index.md) query.

When creating a materialized view without `TO [db].[table]`, you must specify `ENGINE`, the table engine for storing the data.

When creating a materialized view with `TO [db].[table]`, you must not specify `POPULATE`.

A materialized view works as follows: when data is inserted into the table specified in its SELECT, the block of inserted data is transformed by that SELECT query, and the result is inserted into the view.

!!! important "Important"
    Materialized views in ClickHouse are more like `after insert` triggers. If the materialized view's query contains aggregation, it is applied only to the batch of freshly inserted rows. Any changes to existing data in the source table (such as an update, a delete, dropping a partition, etc.) do not change the materialized view.

If `POPULATE` is specified, the existing table data is inserted into the view when it is created, as if a `CREATE TABLE ... AS SELECT ...` query had been run. Otherwise, the view will contain only the data inserted into the table after the view is created. Using `POPULATE` is not recommended, since data inserted into the table during the view's creation will not end up in it.

The `SELECT` query can contain `DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`, and so on. Keep in mind that the corresponding transformations are performed independently on each block of inserted data. For example, if `GROUP BY` is set, data is aggregated during insertion, but only within a single block of inserted data, and the data will not be aggregated further. The exception is when using an ENGINE that aggregates data on its own, such as `SummingMergeTree`.
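A minimal sketch of this per-block behavior (the source table `hits` and its columns are hypothetical):

``` sql
CREATE TABLE hits (event_time DateTime, user_id UInt64)
ENGINE = MergeTree ORDER BY event_time;

-- Each inserted block is aggregated by the view's GROUP BY on its own;
-- SummingMergeTree() then merges rows with the same `day` in the background.
CREATE MATERIALIZED VIEW hits_per_day
ENGINE = SummingMergeTree() ORDER BY day
AS SELECT toDate(event_time) AS day, count() AS hits
FROM hits GROUP BY day;
```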

@ -50,4 +61,4 @@ SELECT a, b, c FROM (SELECT ...)

There is no separate query for deleting views. To delete a view, use `DROP TABLE`.

[Original article](https://clickhouse.tech/docs/ru/sql-reference/statements/create/view) <!--hide-->
@ -5,18 +5,35 @@ toc_title: DROP

# DROP {#drop}

Deletes an existing object. If `IF EXISTS` is specified, no error is returned if the object does not exist.

## DROP DATABASE {#drop-database}

``` sql
DROP DATABASE [IF EXISTS] db [ON CLUSTER cluster]
```

Deletes all tables inside the `db` database, then deletes the `db` database itself.

## DROP TABLE {#drop-table}

``` sql
DROP [TEMPORARY] TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
```

Deletes the table. If `IF EXISTS` is specified, no error is returned if the table does not exist or the database does not exist.

## DROP DICTIONARY {#drop-dictionary}

``` sql
DROP DICTIONARY [IF EXISTS] [db.]name
```

Deletes the dictionary.

## DROP USER {#drop-user-statement}

@ -41,6 +58,7 @@ DROP USER [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
DROP ROLE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```

## DROP ROW POLICY {#drop-row-policy-statement}

Deletes the row policy.
@ -80,5 +98,13 @@ DROP [SETTINGS] PROFILE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```

## DROP VIEW {#drop-view}

``` sql
DROP VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster]
```

Deletes a view. Views can also be deleted with the `DROP TABLE` command, but `DROP VIEW` additionally checks that `[db.]name` is in fact a view.
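For instance (the view name here is hypothetical):

``` sql
-- Errors if db.hits_per_day exists but is not a view;
-- with IF EXISTS, a missing object is not an error.
DROP VIEW IF EXISTS db.hits_per_day;
```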

[Original article](https://clickhouse.tech/docs/ru/sql-reference/statements/drop/) <!--hide-->
@ -1,13 +1,6 @@
---
toc_priority: 40
toc_title: "\u8FDC\u7A0B"
---

# remote, remoteSecure {#remote-remotesecure}

Allows you to access remote servers without creating a `Distributed` table.

Signatures:

@ -18,10 +11,10 @@ remoteSecure('addresses_expr', db, table[, 'user'[, 'password']])
remoteSecure('addresses_expr', db.table[, 'user'[, 'password']])
```

`addresses_expr` – An expression that generates the addresses of remote servers. This may be just one server address. A server address is `host:port`, or just `host`. The host can be specified as a server name, or as an IPv4 or IPv6 address. An IPv6 address is specified in square brackets. The port is the TCP port on the remote server. If the port is omitted, `tcp_port` from the server's config file is used (9000 by default).

!!! important "Important"
    The port is required for an IPv6 address.

Example:

@ -34,7 +27,7 @@ localhost
[2a02:6b8:0:1111::11]:9000
```

Multiple addresses can be comma-separated. In this case, ClickHouse will use distributed processing, so it will send the query to all specified addresses (like shards with different data).

Example:

@ -56,7 +49,7 @@ example01-{01..02}-1

If you have multiple pairs of curly brackets, it generates the direct product of the corresponding sets.

Addresses and parts of addresses in curly brackets can be separated by the pipe symbol (\|). In this case, the corresponding sets of addresses are interpreted as replicas, and the query will be sent to the first healthy replica. However, the replicas are iterated in the order currently set by the [load\_balancing](../../operations/settings/settings.md) setting.

Example:

@ -66,20 +59,20 @@ example01-{01..02}-{1|2}

This example specifies two shards, each of which has two replicas.

The number of generated addresses is limited by a constant. Currently this is 1000 addresses.

Using the `remote` table function is less optimal than creating a `Distributed` table, because in this case the server connection is re-established for every request. Also, if host names are set, the names are resolved, and errors are not counted when working with various replicas. When processing a large number of queries, always create the `Distributed` table ahead of time, and do not use the `remote` table function.

The `remote` table function can be useful in the following cases (a small query sketch follows this list):

-   Accessing a specific server for data comparison, debugging, and testing.
-   Queries between various ClickHouse clusters for research purposes.
-   Infrequent distributed requests that are made manually.
-   Distributed requests where the set of servers is re-defined each time.
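For instance, a one-off query against a single remote server (the address here is illustrative; any reachable server works):

``` sql
SELECT * FROM remote('127.0.0.1:9000', system.one);
```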

If the user is not specified, `default` is used. If the password is not specified, an empty password is used.

`remoteSecure` - same as `remote`, but with a secured connection. Default port: [tcp\_port\_secure](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port_secure) from the config, or 9440.

[Original article](https://clickhouse.tech/docs/en/query_language/table_functions/remote/) <!--hide-->
@ -16,6 +16,7 @@ option (ENABLE_CLICKHOUSE_COMPRESSOR "Enable clickhouse-compressor" ${ENABLE_CLI
option (ENABLE_CLICKHOUSE_COPIER "Enable clickhouse-copier" ${ENABLE_CLICKHOUSE_ALL})
option (ENABLE_CLICKHOUSE_FORMAT "Enable clickhouse-format" ${ENABLE_CLICKHOUSE_ALL})
option (ENABLE_CLICKHOUSE_OBFUSCATOR "Enable clickhouse-obfuscator" ${ENABLE_CLICKHOUSE_ALL})
option (ENABLE_CLICKHOUSE_GIT_IMPORT "Enable clickhouse-git-import" ${ENABLE_CLICKHOUSE_ALL})
option (ENABLE_CLICKHOUSE_ODBC_BRIDGE "Enable clickhouse-odbc-bridge" ${ENABLE_CLICKHOUSE_ALL})

if (CLICKHOUSE_SPLIT_BINARY)
@ -91,21 +92,22 @@ add_subdirectory (copier)
add_subdirectory (format)
add_subdirectory (obfuscator)
add_subdirectory (install)
add_subdirectory (git-import)

if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
    add_subdirectory (odbc-bridge)
endif ()

if (CLICKHOUSE_ONE_SHARED)
    add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES})
    target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK})
    target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE})
    add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_GIT_IMPORT_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES})
    target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_GIT_IMPORT_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK})
    target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_GIT_IMPORT_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE})
    set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse DEBUG_POSTFIX "")
    install (TARGETS clickhouse-lib LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} COMPONENT clickhouse)
endif()

if (CLICKHOUSE_SPLIT_BINARY)
    set (CLICKHOUSE_ALL_TARGETS clickhouse-server clickhouse-client clickhouse-local clickhouse-benchmark clickhouse-extract-from-config clickhouse-compressor clickhouse-format clickhouse-obfuscator clickhouse-copier)
    set (CLICKHOUSE_ALL_TARGETS clickhouse-server clickhouse-client clickhouse-local clickhouse-benchmark clickhouse-extract-from-config clickhouse-compressor clickhouse-format clickhouse-obfuscator clickhouse-git-import clickhouse-copier)

    if (ENABLE_CLICKHOUSE_ODBC_BRIDGE)
        list (APPEND CLICKHOUSE_ALL_TARGETS clickhouse-odbc-bridge)
@ -149,6 +151,9 @@ else ()
    if (ENABLE_CLICKHOUSE_OBFUSCATOR)
        clickhouse_target_link_split_lib(clickhouse obfuscator)
    endif ()
    if (ENABLE_CLICKHOUSE_GIT_IMPORT)
        clickhouse_target_link_split_lib(clickhouse git-import)
    endif ()
    if (ENABLE_CLICKHOUSE_INSTALL)
        clickhouse_target_link_split_lib(clickhouse install)
    endif ()
@ -199,6 +204,11 @@ else ()
        install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-obfuscator DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
        list(APPEND CLICKHOUSE_BUNDLE clickhouse-obfuscator)
    endif ()
    if (ENABLE_CLICKHOUSE_GIT_IMPORT)
        add_custom_target (clickhouse-git-import ALL COMMAND ${CMAKE_COMMAND} -E create_symlink clickhouse clickhouse-git-import DEPENDS clickhouse)
        install (FILES ${CMAKE_CURRENT_BINARY_DIR}/clickhouse-git-import DESTINATION ${CMAKE_INSTALL_BINDIR} COMPONENT clickhouse)
        list(APPEND CLICKHOUSE_BUNDLE clickhouse-git-import)
    endif ()
    if(ENABLE_CLICKHOUSE_ODBC_BRIDGE)
        list(APPEND CLICKHOUSE_BUNDLE clickhouse-odbc-bridge)
    endif()
|
||||
|
||||
// Try to parse the query.
|
||||
const char * this_query_end = this_query_begin;
|
||||
parsed_query = parseQuery(this_query_end, all_queries_end, true);
|
||||
try
|
||||
{
|
||||
parsed_query = parseQuery(this_query_end, all_queries_end, true);
|
||||
}
|
||||
catch (Exception & e)
|
||||
{
|
||||
if (!test_mode)
|
||||
throw;
|
||||
|
||||
/// Try find test hint for syntax error
|
||||
const char * end_of_line = find_first_symbols<'\n'>(this_query_begin,all_queries_end);
|
||||
TestHint hint(true, String(this_query_end, end_of_line - this_query_end));
|
||||
if (hint.serverError()) /// Syntax errors are considered as client errors
|
||||
throw;
|
||||
if (hint.clientError() != e.code())
|
||||
{
|
||||
if (hint.clientError())
|
||||
e.addMessage("\nExpected clinet error: " + std::to_string(hint.clientError()));
|
||||
throw;
|
||||
}
|
||||
|
||||
/// It's expected syntax error, skip the line
|
||||
this_query_begin = end_of_line;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (!parsed_query)
|
||||
{
|
||||
|
@ -12,5 +12,6 @@
|
||||
#cmakedefine01 ENABLE_CLICKHOUSE_COMPRESSOR
|
||||
#cmakedefine01 ENABLE_CLICKHOUSE_FORMAT
|
||||
#cmakedefine01 ENABLE_CLICKHOUSE_OBFUSCATOR
|
||||
#cmakedefine01 ENABLE_CLICKHOUSE_GIT_IMPORT
|
||||
#cmakedefine01 ENABLE_CLICKHOUSE_INSTALL
|
||||
#cmakedefine01 ENABLE_CLICKHOUSE_ODBC_BRIDGE
|
||||
|
programs/git-import/CMakeLists.txt (new file, 10 lines):
@ -0,0 +1,10 @@
set (CLICKHOUSE_GIT_IMPORT_SOURCES git-import.cpp)

set (CLICKHOUSE_GIT_IMPORT_LINK
    PRIVATE
        boost::program_options
        dbms
)

clickhouse_program_add(git-import)
programs/git-import/clickhouse-git-import.cpp (new file, 2 lines):
@ -0,0 +1,2 @@
int mainEntryClickHouseGitImport(int argc, char ** argv);
int main(int argc_, char ** argv_) { return mainEntryClickHouseGitImport(argc_, argv_); }
programs/git-import/git-import.cpp (new file, 1235 lines)
File diff suppressed because it is too large
@ -205,6 +205,7 @@ int mainEntryClickHouseInstall(int argc, char ** argv)
        "clickhouse-benchmark",
        "clickhouse-copier",
        "clickhouse-obfuscator",
        "clickhouse-git-import",
        "clickhouse-compressor",
        "clickhouse-format",
        "clickhouse-extract-from-config"
@ -46,6 +46,9 @@ int mainEntryClickHouseClusterCopier(int argc, char ** argv);
#if ENABLE_CLICKHOUSE_OBFUSCATOR
int mainEntryClickHouseObfuscator(int argc, char ** argv);
#endif
#if ENABLE_CLICKHOUSE_GIT_IMPORT
int mainEntryClickHouseGitImport(int argc, char ** argv);
#endif
#if ENABLE_CLICKHOUSE_INSTALL
int mainEntryClickHouseInstall(int argc, char ** argv);
int mainEntryClickHouseStart(int argc, char ** argv);
@ -91,6 +94,9 @@ std::pair<const char *, MainFunc> clickhouse_applications[] =
#if ENABLE_CLICKHOUSE_OBFUSCATOR
    {"obfuscator", mainEntryClickHouseObfuscator},
#endif
#if ENABLE_CLICKHOUSE_GIT_IMPORT
    {"git-import", mainEntryClickHouseGitImport},
#endif
#if ENABLE_CLICKHOUSE_INSTALL
    {"install", mainEntryClickHouseInstall},
    {"start", mainEntryClickHouseStart},
programs/server/config.d/access_control.xml (new file, 13 lines):
@ -0,0 +1,13 @@
<yandex>
    <!-- Sources to read users, roles, access rights, profiles of settings, quotas. -->
    <user_directories replace="replace">
        <users_xml>
            <!-- Path to configuration file with predefined users. -->
            <path>users.xml</path>
        </users_xml>
        <local_directory>
            <!-- Path to folder where users created by SQL commands are stored. -->
            <path>access/</path>
        </local_directory>
    </user_directories>
</yandex>
@ -212,8 +212,17 @@
    <!-- Directory with user provided files that are accessible by 'file' table function. -->
    <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>

    <!-- Path to folder where users and roles created by SQL commands are stored. -->
    <access_control_path>/var/lib/clickhouse/access/</access_control_path>
    <!-- Sources to read users, roles, access rights, profiles of settings, quotas. -->
    <user_directories>
        <users_xml>
            <!-- Path to configuration file with predefined users. -->
            <path>users.xml</path>
        </users_xml>
        <local_directory>
            <!-- Path to folder where users created by SQL commands are stored. -->
            <path>/var/lib/clickhouse/access/</path>
        </local_directory>
    </user_directories>

    <!-- External user directories (LDAP). -->
    <ldap_servers>
@ -256,9 +265,6 @@
        -->
    </ldap_servers>

    <!-- Path to configuration file with users, access rights, profiles of settings, quotas. -->
    <users_config>users.xml</users_config>

    <!-- Default profile of settings. -->
    <default_profile>default</default_profile>
@ -181,6 +181,15 @@ void AccessControlManager::addUsersConfigStorage(
    const String & preprocessed_dir_,
    const zkutil::GetZooKeeper & get_zookeeper_function_)
{
    auto storages = getStoragesPtr();
    for (const auto & storage : *storages)
    {
        if (auto users_config_storage = typeid_cast<std::shared_ptr<UsersConfigAccessStorage>>(storage))
        {
            if (users_config_storage->getStoragePath() == users_config_path_)
                return;
        }
    }
    auto check_setting_name_function = [this](const std::string_view & setting_name) { checkSettingNameIsAllowed(setting_name); };
    auto new_storage = std::make_shared<UsersConfigAccessStorage>(storage_name_, check_setting_name_function);
    new_storage->load(users_config_path_, include_from_path_, preprocessed_dir_, get_zookeeper_function_);
@ -210,17 +219,36 @@ void AccessControlManager::startPeriodicReloadingUsersConfigs()

void AccessControlManager::addDiskStorage(const String & directory_, bool readonly_)
{
    addStorage(std::make_shared<DiskAccessStorage>(directory_, readonly_));
    addDiskStorage(DiskAccessStorage::STORAGE_TYPE, directory_, readonly_);
}

void AccessControlManager::addDiskStorage(const String & storage_name_, const String & directory_, bool readonly_)
{
    auto storages = getStoragesPtr();
    for (const auto & storage : *storages)
    {
        if (auto disk_storage = typeid_cast<std::shared_ptr<DiskAccessStorage>>(storage))
        {
            if (disk_storage->isStoragePathEqual(directory_))
            {
                if (readonly_)
                    disk_storage->setReadOnly(readonly_);
                return;
            }
        }
    }
    addStorage(std::make_shared<DiskAccessStorage>(storage_name_, directory_, readonly_));
}


void AccessControlManager::addMemoryStorage(const String & storage_name_)
{
    auto storages = getStoragesPtr();
    for (const auto & storage : *storages)
    {
        if (auto memory_storage = typeid_cast<std::shared_ptr<MemoryAccessStorage>>(storage))
            return;
    }
    addStorage(std::make_shared<MemoryAccessStorage>(storage_name_));
}
@ -218,6 +218,16 @@ namespace
    }


    /// Converts a path to an absolute path and appends a separator to it.
    String makeDirectoryPathCanonical(const String & directory_path)
    {
        auto canonical_directory_path = std::filesystem::weakly_canonical(directory_path);
        if (canonical_directory_path.has_filename())
            canonical_directory_path += std::filesystem::path::preferred_separator;
        return canonical_directory_path;
    }


    /// Calculates the path to a file named <id>.sql for saving an access entity.
    String getEntityFilePath(const String & directory_path, const UUID & id)
    {
@ -298,22 +308,17 @@ DiskAccessStorage::DiskAccessStorage(const String & directory_path_, bool readon
{
}

DiskAccessStorage::DiskAccessStorage(const String & storage_name_, const String & directory_path_, bool readonly_)
    : IAccessStorage(storage_name_)
{
    auto canonical_directory_path = std::filesystem::weakly_canonical(directory_path_);
    if (canonical_directory_path.has_filename())
        canonical_directory_path += std::filesystem::path::preferred_separator;
    directory_path = makeDirectoryPathCanonical(directory_path_);
    readonly = readonly_;

    std::error_code create_dir_error_code;
    std::filesystem::create_directories(canonical_directory_path, create_dir_error_code);
    std::filesystem::create_directories(directory_path, create_dir_error_code);

    if (!std::filesystem::exists(canonical_directory_path) || !std::filesystem::is_directory(canonical_directory_path) || create_dir_error_code)
        throw Exception("Couldn't create directory " + canonical_directory_path.string() + " reason: '" + create_dir_error_code.message() + "'", ErrorCodes::DIRECTORY_DOESNT_EXIST);

    directory_path = canonical_directory_path;
    readonly = readonly_;
    if (!std::filesystem::exists(directory_path) || !std::filesystem::is_directory(directory_path) || create_dir_error_code)
        throw Exception("Couldn't create directory " + directory_path + " reason: '" + create_dir_error_code.message() + "'", ErrorCodes::DIRECTORY_DOESNT_EXIST);

    bool should_rebuild_lists = std::filesystem::exists(getNeedRebuildListsMarkFilePath(directory_path));
    if (!should_rebuild_lists)
@ -337,6 +342,12 @@ DiskAccessStorage::~DiskAccessStorage()
}


bool DiskAccessStorage::isStoragePathEqual(const String & directory_path_) const
{
    return getStoragePath() == makeDirectoryPathCanonical(directory_path_);
}


void DiskAccessStorage::clear()
{
    entries_by_id.clear();
@ -426,33 +437,41 @@ bool DiskAccessStorage::writeLists()
void DiskAccessStorage::scheduleWriteLists(EntityType type)
{
    if (failed_to_write_lists)
        return;
        return; /// We don't try to write list files after the first fail.
                /// The next restart of the server will invoke rebuilding of the list files.

    bool already_scheduled = !types_of_lists_to_write.empty();
    types_of_lists_to_write.insert(type);

    if (already_scheduled)
        return;
    if (lists_writing_thread_is_waiting)
        return; /// If the lists' writing thread is still waiting we can update `types_of_lists_to_write` easily,
                /// without restarting that thread.

    if (lists_writing_thread.joinable())
        lists_writing_thread.join();

    /// Create the 'need_rebuild_lists.mark' file.
    /// This file will be used later to find out if writing lists is successful or not.
    std::ofstream{getNeedRebuildListsMarkFilePath(directory_path)};

    startListsWritingThread();
    lists_writing_thread = ThreadFromGlobalPool{&DiskAccessStorage::listsWritingThreadFunc, this};
    lists_writing_thread_is_waiting = true;
}


void DiskAccessStorage::startListsWritingThread()
void DiskAccessStorage::listsWritingThreadFunc()
{
    if (lists_writing_thread.joinable())
    std::unique_lock lock{mutex};

    {
        if (!lists_writing_thread_exited)
            return;
        lists_writing_thread.detach();
        /// It's better not to write the lists files too often, that's why we need
        /// the following timeout.
        const auto timeout = std::chrono::minutes(1);
        SCOPE_EXIT({ lists_writing_thread_is_waiting = false; });
        if (lists_writing_thread_should_exit.wait_for(lock, timeout) != std::cv_status::timeout)
            return; /// The destructor requires us to exit.
    }

    lists_writing_thread_exited = false;
    lists_writing_thread = ThreadFromGlobalPool{&DiskAccessStorage::listsWritingThreadFunc, this};
    writeLists();
}


@ -466,21 +485,6 @@ void DiskAccessStorage::stopListsWritingThread()
}


void DiskAccessStorage::listsWritingThreadFunc()
{
    std::unique_lock lock{mutex};
    SCOPE_EXIT({ lists_writing_thread_exited = true; });

    /// It's better not to write the lists files too often, that's why we need
    /// the following timeout.
    const auto timeout = std::chrono::minutes(1);
    if (lists_writing_thread_should_exit.wait_for(lock, timeout) != std::cv_status::timeout)
        return; /// The destructor requires us to exit.

    writeLists();
}


/// Reads and parses all the "<id>.sql" files from a specified directory
/// and then saves the files "users.list", "roles.list", etc. to the same directory.
bool DiskAccessStorage::rebuildLists()
@ -18,7 +18,11 @@ public:
    ~DiskAccessStorage() override;

    const char * getStorageType() const override { return STORAGE_TYPE; }

    String getStoragePath() const override { return directory_path; }
    bool isStoragePathEqual(const String & directory_path_) const;

    void setReadOnly(bool readonly_) { readonly = readonly_; }
    bool isStorageReadOnly() const override { return readonly; }

private:
@ -42,9 +46,8 @@ private:
    void scheduleWriteLists(EntityType type);
    bool rebuildLists();

    void startListsWritingThread();
    void stopListsWritingThread();
    void listsWritingThreadFunc();
    void stopListsWritingThread();

    void insertNoLock(const UUID & id, const AccessEntityPtr & new_entity, bool replace_if_exists, Notifications & notifications);
    void removeNoLock(const UUID & id, Notifications & notifications);
@ -67,14 +70,14 @@ private:
    void prepareNotifications(const UUID & id, const Entry & entry, bool remove, Notifications & notifications) const;

    String directory_path;
    bool readonly;
    std::atomic<bool> readonly;
    std::unordered_map<UUID, Entry> entries_by_id;
    std::unordered_map<std::string_view, Entry *> entries_by_name_and_type[static_cast<size_t>(EntityType::MAX)];
    boost::container::flat_set<EntityType> types_of_lists_to_write;
    bool failed_to_write_lists = false; /// Whether writing of the list files has failed since the recent restart of the server.
    ThreadFromGlobalPool lists_writing_thread; /// List files are written in a separate thread.
    std::condition_variable lists_writing_thread_should_exit; /// Signals `lists_writing_thread` to exit.
    std::atomic<bool> lists_writing_thread_exited = false;
    bool lists_writing_thread_is_waiting = false;
    mutable std::list<OnChangedHandler> handlers_by_type[static_cast<size_t>(EntityType::MAX)];
    mutable std::mutex mutex;
};
@ -7,6 +7,7 @@
#include <common/unaligned.h>
#include <Core/Field.h>
#include <Core/BigInt.h>
#include <Common/assert_cast.h>


namespace DB
@ -130,7 +131,7 @@ public:

    void insertFrom(const IColumn & src, size_t n) override
    {
        data.push_back(static_cast<const Self &>(src).getData()[n]);
        data.push_back(assert_cast<const Self &>(src).getData()[n]);
    }

    void insertData(const char * pos, size_t) override
@ -205,14 +206,14 @@ public:
    /// This method is implemented in the header because it could possibly be devirtualized.
    int compareAt(size_t n, size_t m, const IColumn & rhs_, int nan_direction_hint) const override
    {
        return CompareHelper<T>::compare(data[n], static_cast<const Self &>(rhs_).data[m], nan_direction_hint);
        return CompareHelper<T>::compare(data[n], assert_cast<const Self &>(rhs_).data[m], nan_direction_hint);
    }

    void compareColumn(const IColumn & rhs, size_t rhs_row_num,
                       PaddedPODArray<UInt64> * row_indexes, PaddedPODArray<Int8> & compare_results,
                       int direction, int nan_direction_hint) const override
    {
        return this->template doCompareColumn<Self>(static_cast<const Self &>(rhs), rhs_row_num, row_indexes,
        return this->template doCompareColumn<Self>(assert_cast<const Self &>(rhs), rhs_row_num, row_indexes,
                                                    compare_results, direction, nan_direction_hint);
    }
src/Common/FileSyncGuard.h (new file, 41 lines):
@ -0,0 +1,41 @@
#pragma once

#include <Disks/IDisk.h>

namespace DB
{

/// Helper class that receives a file descriptor and does fsync for it in the destructor.
/// It's used to keep the descriptor open while doing some operations with it, and to do fsync at the end.
/// Guarantees of the sequence 'close-reopen-fsync' may depend on the kernel version.
/// Source: linux-fsdevel mailing-list https://marc.info/?l=linux-fsdevel&m=152535409207496
class FileSyncGuard
{
public:
    /// NOTE: If you have already opened the descriptor, it's preferred to use
    /// this constructor instead of the constructor with a path.
    FileSyncGuard(const DiskPtr & disk_, int fd_) : disk(disk_), fd(fd_) {}

    FileSyncGuard(const DiskPtr & disk_, const String & path)
        : disk(disk_), fd(disk_->open(path, O_RDWR)) {}

    ~FileSyncGuard()
    {
        try
        {
            disk->sync(fd);
            disk->close(fd);
        }
        catch (...)
        {
            tryLogCurrentException(__PRETTY_FUNCTION__);
        }
    }

private:
    DiskPtr disk;
    int fd = -1;
};

}
@ -57,7 +57,16 @@ ShellCommand::~ShellCommand()
            LOG_WARNING(getLogger(), "Cannot kill shell command pid {} errno '{}'", pid, errnoToString(retcode));
    }
    else if (!wait_called)
        tryWait();
    {
        try
        {
            tryWait();
        }
        catch (...)
        {
            tryLogCurrentException(getLogger());
        }
    }
}

void ShellCommand::logCommand(const char * filename, char * const argv[])
@ -74,7 +83,8 @@ void ShellCommand::logCommand(const char * filename, char * const argv[])
    LOG_TRACE(ShellCommand::getLogger(), "Will start shell command '{}' with arguments {}", filename, args.str());
}

std::unique_ptr<ShellCommand> ShellCommand::executeImpl(const char * filename, char * const argv[], bool pipe_stdin_only, bool terminate_in_destructor)
std::unique_ptr<ShellCommand> ShellCommand::executeImpl(
    const char * filename, char * const argv[], bool pipe_stdin_only, bool terminate_in_destructor)
{
    logCommand(filename, argv);

@ -130,7 +140,8 @@ std::unique_ptr<ShellCommand> ShellCommand::executeImpl(const char * filename, c
        _exit(int(ReturnCodes::CANNOT_EXEC));
    }

    std::unique_ptr<ShellCommand> res(new ShellCommand(pid, pipe_stdin.fds_rw[1], pipe_stdout.fds_rw[0], pipe_stderr.fds_rw[0], terminate_in_destructor));
    std::unique_ptr<ShellCommand> res(new ShellCommand(
        pid, pipe_stdin.fds_rw[1], pipe_stdout.fds_rw[0], pipe_stderr.fds_rw[0], terminate_in_destructor));

    LOG_TRACE(getLogger(), "Started shell command '{}' with pid {}", filename, pid);

@ -143,7 +154,8 @@ std::unique_ptr<ShellCommand> ShellCommand::executeImpl(const char * filename, c
}


std::unique_ptr<ShellCommand> ShellCommand::execute(const std::string & command, bool pipe_stdin_only, bool terminate_in_destructor)
std::unique_ptr<ShellCommand> ShellCommand::execute(
    const std::string & command, bool pipe_stdin_only, bool terminate_in_destructor)
{
    /// Arguments in non-constant chunks of memory (as required for `execv`).
    /// Moreover, their copying must be done before calling `vfork`, so after `vfork` do a minimum of things.
@ -157,7 +169,8 @@ std::unique_ptr<ShellCommand> ShellCommand::execute(const std::string & command,
}


std::unique_ptr<ShellCommand> ShellCommand::executeDirect(const std::string & path, const std::vector<std::string> & arguments, bool terminate_in_destructor)
std::unique_ptr<ShellCommand> ShellCommand::executeDirect(
    const std::string & path, const std::vector<std::string> & arguments, bool terminate_in_destructor)
{
    size_t argv_sum_size = path.size() + 1;
    for (const auto & arg : arguments)
@ -186,6 +199,10 @@ int ShellCommand::tryWait()
{
    wait_called = true;

    in.close();
    out.close();
    err.close();

    LOG_TRACE(getLogger(), "Will wait for shell command pid {}", pid);

    int status = 0;
@ -83,13 +83,13 @@ struct MultiEnum
    template <typename ValueType, typename = std::enable_if_t<std::is_convertible_v<ValueType, StorageType>>>
    friend bool operator==(ValueType left, MultiEnum right)
    {
        return right == left;
        return right.operator==(left);
    }

    template <typename L>
    friend bool operator!=(L left, MultiEnum right)
    {
        return !(right == left);
        return !(right.operator==(left));
    }

private:
@ -165,4 +165,19 @@ void DiskDecorator::truncateFile(const String & path, size_t size)
    delegate->truncateFile(path, size);
}

int DiskDecorator::open(const String & path, mode_t mode) const
{
    return delegate->open(path, mode);
}

void DiskDecorator::close(int fd) const
{
    delegate->close(fd);
}

void DiskDecorator::sync(int fd) const
{
    delegate->sync(fd);
}

}

@ -42,6 +42,9 @@ public:
    void setReadOnly(const String & path) override;
    void createHardLink(const String & src_path, const String & dst_path) override;
    void truncateFile(const String & path, size_t size) override;
    int open(const String & path, mode_t mode) const override;
    void close(int fd) const override;
    void sync(int fd) const override;
    const String getType() const override { return delegate->getType(); }

protected:
@ -8,7 +8,7 @@

#include <IO/createReadBufferFromFileBase.h>
#include <IO/createWriteBufferFromFileBase.h>

#include <unistd.h>

namespace DB
{
@ -19,6 +19,10 @@ namespace ErrorCodes
    extern const int EXCESSIVE_ELEMENT_IN_CONFIG;
    extern const int PATH_ACCESS_DENIED;
    extern const int INCORRECT_DISK_INDEX;
    extern const int FILE_DOESNT_EXIST;
    extern const int CANNOT_OPEN_FILE;
    extern const int CANNOT_FSYNC;
    extern const int CANNOT_CLOSE_FILE;
    extern const int CANNOT_TRUNCATE_FILE;
}

@ -292,6 +296,28 @@ void DiskLocal::copy(const String & from_path, const std::shared_ptr<IDisk> & to
        IDisk::copy(from_path, to_disk, to_path); /// Copy files through buffers.
}

int DiskLocal::open(const String & path, mode_t mode) const
{
    String full_path = disk_path + path;
    int fd = ::open(full_path.c_str(), mode);
    if (-1 == fd)
        throwFromErrnoWithPath("Cannot open file " + full_path, full_path,
                               errno == ENOENT ? ErrorCodes::FILE_DOESNT_EXIST : ErrorCodes::CANNOT_OPEN_FILE);
    return fd;
}

void DiskLocal::close(int fd) const
{
    if (-1 == ::close(fd))
        throw Exception("Cannot close file", ErrorCodes::CANNOT_CLOSE_FILE);
}

void DiskLocal::sync(int fd) const
{
    if (-1 == ::fsync(fd))
        throw Exception("Cannot fsync", ErrorCodes::CANNOT_FSYNC);
}

DiskPtr DiskLocalReservation::getDisk(size_t i) const
{
    if (i != 0)

@ -99,6 +99,10 @@ public:

    void createHardLink(const String & src_path, const String & dst_path) override;

    int open(const String & path, mode_t mode) const override;
    void close(int fd) const override;
    void sync(int fd) const override;

    void truncateFile(const String & path, size_t size) override;

    const String getType() const override { return "local"; }
@ -408,6 +408,21 @@ void DiskMemory::setReadOnly(const String &)
    throw Exception("Method setReadOnly is not implemented for memory disks", ErrorCodes::NOT_IMPLEMENTED);
}

int DiskMemory::open(const String & /*path*/, mode_t /*mode*/) const
{
    throw Exception("Method open is not implemented for memory disks", ErrorCodes::NOT_IMPLEMENTED);
}

void DiskMemory::close(int /*fd*/) const
{
    throw Exception("Method close is not implemented for memory disks", ErrorCodes::NOT_IMPLEMENTED);
}

void DiskMemory::sync(int /*fd*/) const
{
    throw Exception("Method sync is not implemented for memory disks", ErrorCodes::NOT_IMPLEMENTED);
}

void DiskMemory::truncateFile(const String & path, size_t size)
{
    std::lock_guard lock(mutex);

@ -90,6 +90,10 @@ public:

    void createHardLink(const String & src_path, const String & dst_path) override;

    int open(const String & path, mode_t mode) const override;
    void close(int fd) const override;
    void sync(int fd) const override;

    void truncateFile(const String & path, size_t size) override;

    const String getType() const override { return "memory"; }
@ -177,6 +177,15 @@ public:
    /// Create hardlink from `src_path` to `dst_path`.
    virtual void createHardLink(const String & src_path, const String & dst_path) = 0;

    /// Wrapper for POSIX open
    virtual int open(const String & path, mode_t mode) const = 0;

    /// Wrapper for POSIX close
    virtual void close(int fd) const = 0;

    /// Wrapper for POSIX fsync
    virtual void sync(int fd) const = 0;

    /// Truncate file to specified size.
    virtual void truncateFile(const String & path, size_t size);
@@ -33,6 +33,7 @@ namespace ErrorCodes
extern const int CANNOT_SEEK_THROUGH_FILE;
extern const int UNKNOWN_FORMAT;
extern const int INCORRECT_DISK_INDEX;
extern const int NOT_IMPLEMENTED;
}

namespace

@@ -746,6 +747,21 @@ void DiskS3::setReadOnly(const String & path)
Poco::File(metadata_path + path).setReadOnly(true);
}

int DiskS3::open(const String & /*path*/, mode_t /*mode*/) const
{
throw Exception("Method open is not implemented for S3 disks", ErrorCodes::NOT_IMPLEMENTED);
}

void DiskS3::close(int /*fd*/) const
{
throw Exception("Method close is not implemented for S3 disks", ErrorCodes::NOT_IMPLEMENTED);
}

void DiskS3::sync(int /*fd*/) const
{
throw Exception("Method sync is not implemented for S3 disks", ErrorCodes::NOT_IMPLEMENTED);
}

void DiskS3::shutdown()
{
/// This call stops any next retry attempts for ongoing S3 requests.
@@ -100,6 +100,10 @@ public:
void setReadOnly(const String & path) override;

int open(const String & path, mode_t mode) const override;
void close(int fd) const override;
void sync(int fd) const override;

const String getType() const override { return "s3"; }

void shutdown() override;
@@ -28,6 +28,9 @@
#include "FunctionFactory.h"
#include <Common/typeid_cast.h>
#include <Common/assert_cast.h>
#include <Common/FieldVisitors.h>
#include <Common/FieldVisitorsAccurateComparison.h>
#include <ext/map.h>

#if !defined(ARCADIA_BUILD)
# include <Common/config.h>

@@ -51,6 +54,7 @@ namespace ErrorCodes
extern const int LOGICAL_ERROR;
extern const int DECIMAL_OVERFLOW;
extern const int CANNOT_ADD_DIFFERENT_AGGREGATE_STATES;
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
}
@@ -561,6 +565,8 @@ public:
template <template <typename, typename> class Op, typename Name, bool valid_on_default_arguments = true>
class FunctionBinaryArithmetic : public IFunction
{
static constexpr const bool is_plus = IsOperation<Op>::plus;
static constexpr const bool is_minus = IsOperation<Op>::minus;
static constexpr const bool is_multiply = IsOperation<Op>::multiply;
static constexpr const bool is_division = IsOperation<Op>::division;

@@ -600,7 +606,8 @@ class FunctionBinaryArithmetic : public IFunction
return castType(left, [&](const auto & left_) { return castType(right, [&](const auto & right_) { return f(left_, right_); }); });
}

FunctionOverloadResolverPtr getFunctionForIntervalArithmetic(const DataTypePtr & type0, const DataTypePtr & type1) const
static FunctionOverloadResolverPtr
getFunctionForIntervalArithmetic(const DataTypePtr & type0, const DataTypePtr & type1, const Context & context)
{
bool first_is_date_or_datetime = isDateOrDateTime(type0);
bool second_is_date_or_datetime = isDateOrDateTime(type1);

@@ -612,9 +619,7 @@ class FunctionBinaryArithmetic : public IFunction
/// Special case when the function is plus or minus, one of arguments is Date/DateTime and another is Interval.
/// We construct another function (example: addMonths) and call it.

static constexpr bool function_is_plus = IsOperation<Op>::plus;
static constexpr bool function_is_minus = IsOperation<Op>::minus;
if constexpr (!function_is_plus && !function_is_minus)
if constexpr (!is_plus && !is_minus)
return {};

const DataTypePtr & type_time = first_is_date_or_datetime ? type0 : type1;

@@ -631,29 +636,29 @@ class FunctionBinaryArithmetic : public IFunction
return {};
}

if (second_is_date_or_datetime && function_is_minus)
throw Exception("Wrong order of arguments for function " + getName() + ": argument of type Interval cannot be first.",
if (second_is_date_or_datetime && is_minus)
throw Exception("Wrong order of arguments for function " + String(name) + ": argument of type Interval cannot be first.",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);

std::string function_name;
if (interval_data_type)
{
function_name = String(function_is_plus ? "add" : "subtract") + interval_data_type->getKind().toString() + 's';
function_name = String(is_plus ? "add" : "subtract") + interval_data_type->getKind().toString() + 's';
}
else
{
if (isDate(type_time))
function_name = function_is_plus ? "addDays" : "subtractDays";
function_name = is_plus ? "addDays" : "subtractDays";
else
function_name = function_is_plus ? "addSeconds" : "subtractSeconds";
function_name = is_plus ? "addSeconds" : "subtractSeconds";
}

return FunctionFactory::instance().get(function_name, context);
}

bool isAggregateMultiply(const DataTypePtr & type0, const DataTypePtr & type1) const
static bool isAggregateMultiply(const DataTypePtr & type0, const DataTypePtr & type1)
{
if constexpr (!IsOperation<Op>::multiply)
if constexpr (!is_multiply)
return false;

WhichDataType which0(type0);

@@ -663,9 +668,9 @@ class FunctionBinaryArithmetic : public IFunction
|| (which0.isNativeUInt() && which1.isAggregateFunction());
}

bool isAggregateAddition(const DataTypePtr & type0, const DataTypePtr & type1) const
static bool isAggregateAddition(const DataTypePtr & type0, const DataTypePtr & type1)
{
if constexpr (!IsOperation<Op>::plus)
if constexpr (!is_plus)
return false;

WhichDataType which0(type0);

@@ -812,6 +817,11 @@ public:
size_t getNumberOfArguments() const override { return 2; }

DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
{
return getReturnTypeImplStatic(arguments, context);
}

static DataTypePtr getReturnTypeImplStatic(const DataTypes & arguments, const Context & context)
{
/// Special case when multiply aggregate function state
if (isAggregateMultiply(arguments[0], arguments[1]))

@@ -832,7 +842,7 @@ public:
}

/// Special case when the function is plus or minus, one of arguments is Date/DateTime and another is Interval.
if (auto function_builder = getFunctionForIntervalArithmetic(arguments[0], arguments[1]))
if (auto function_builder = getFunctionForIntervalArithmetic(arguments[0], arguments[1], context))
{
ColumnsWithTypeAndName new_arguments(2);
@@ -903,7 +913,7 @@ public:
return false;
});
if (!valid)
throw Exception("Illegal types " + arguments[0]->getName() + " and " + arguments[1]->getName() + " of arguments of function " + getName(),
throw Exception("Illegal types " + arguments[0]->getName() + " and " + arguments[1]->getName() + " of arguments of function " + String(name),
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
return type_res;
}
@@ -994,8 +1004,6 @@ public:
if constexpr (!std::is_same_v<ResultDataType, InvalidType>)
{
constexpr bool result_is_decimal = IsDataTypeDecimal<LeftDataType> || IsDataTypeDecimal<RightDataType>;

using T0 = typename LeftDataType::FieldType;
using T1 = typename RightDataType::FieldType;
using ResultType = typename ResultDataType::FieldType;
@@ -1003,112 +1011,91 @@ public:
using ColVecT1 = std::conditional_t<IsDecimalNumber<T1>, ColumnDecimal<T1>, ColumnVector<T1>>;
using ColVecResult = std::conditional_t<IsDecimalNumber<ResultType>, ColumnDecimal<ResultType>, ColumnVector<ResultType>>;

/// Decimal operations need scale. Operations are on result type.
using OpImpl = std::conditional_t<IsDataTypeDecimal<ResultDataType>,
DecimalBinaryOperation<T0, T1, Op, ResultType>,
BinaryOperationImpl<T0, T1, Op<T0, T1>, ResultType>>;

auto col_left_raw = block.getByPosition(arguments[0]).column.get();
auto col_right_raw = block.getByPosition(arguments[1]).column.get();
if (auto col_left = checkAndGetColumnConst<ColVecT0>(col_left_raw))
{
if (auto col_right = checkAndGetColumnConst<ColVecT1>(col_right_raw))
{
/// the only case with a non-vector result
if constexpr (result_is_decimal)
{
ResultDataType type = decimalResultType<is_multiply, is_division>(left, right);
typename ResultDataType::FieldType scale_a = type.scaleFactorFor(left, is_multiply);
typename ResultDataType::FieldType scale_b = type.scaleFactorFor(right, is_multiply || is_division);
if constexpr (IsDataTypeDecimal<RightDataType> && is_division)
scale_a = right.getScaleMultiplier();

auto res = OpImpl::constantConstant(col_left->template getValue<T0>(), col_right->template getValue<T1>(),
scale_a, scale_b, check_decimal_overflow);
block.getByPosition(result).column =
ResultDataType(type.getPrecision(), type.getScale()).createColumnConst(
col_left->size(), toField(res, type.getScale()));

}
else
{
auto res = OpImpl::constantConstant(col_left->template getValue<T0>(), col_right->template getValue<T1>());
block.getByPosition(result).column = ResultDataType().createColumnConst(col_left->size(), toField(res));
}
return true;
}
}
auto col_left_const = checkAndGetColumnConst<ColVecT0>(col_left_raw);
auto col_right_const = checkAndGetColumnConst<ColVecT1>(col_right_raw);

typename ColVecResult::MutablePtr col_res = nullptr;
if constexpr (result_is_decimal)

auto col_left = checkAndGetColumn<ColVecT0>(col_left_raw);
auto col_right = checkAndGetColumn<ColVecT1>(col_right_raw);

if constexpr (IsDataTypeDecimal<LeftDataType> || IsDataTypeDecimal<RightDataType>)
{
using OpImpl = DecimalBinaryOperation<T0, T1, Op, ResultType>;

ResultDataType type = decimalResultType<is_multiply, is_division>(left, right);
col_res = ColVecResult::create(0, type.getScale());
}
else
col_res = ColVecResult::create();

auto & vec_res = col_res->getData();
vec_res.resize(block.rows());
typename ResultDataType::FieldType scale_a = type.scaleFactorFor(left, is_multiply);
typename ResultDataType::FieldType scale_b = type.scaleFactorFor(right, is_multiply || is_division);
if constexpr (IsDataTypeDecimal<RightDataType> && is_division)
scale_a = right.getScaleMultiplier();

if (auto col_left_const = checkAndGetColumnConst<ColVecT0>(col_left_raw))
{
if (auto col_right = checkAndGetColumn<ColVecT1>(col_right_raw))
/// non-vector result
if (col_left_const && col_right_const)
{
if constexpr (result_is_decimal)
{
ResultDataType type = decimalResultType<is_multiply, is_division>(left, right);
auto res = OpImpl::constantConstant(col_left_const->template getValue<T0>(), col_right_const->template getValue<T1>(),
scale_a, scale_b, check_decimal_overflow);

typename ResultDataType::FieldType scale_a = type.scaleFactorFor(left, is_multiply);
typename ResultDataType::FieldType scale_b = type.scaleFactorFor(right, is_multiply || is_division);
if constexpr (IsDataTypeDecimal<RightDataType> && is_division)
scale_a = right.getScaleMultiplier();
block.getByPosition(result).column = ResultDataType(type.getPrecision(), type.getScale()).createColumnConst(
col_left_const->size(), toField(res, type.getScale()));
return true;
}

OpImpl::constantVector(col_left_const->template getValue<T0>(), col_right->getData(), vec_res,
scale_a, scale_b, check_decimal_overflow);
}
else
OpImpl::constantVector(col_left_const->template getValue<T0>(), col_right->getData().data(), vec_res.data(), vec_res.size());
col_res = ColVecResult::create(0, type.getScale());
auto & vec_res = col_res->getData();
vec_res.resize(block.rows());

if (col_left && col_right)
{
OpImpl::vectorVector(col_left->getData(), col_right->getData(), vec_res, scale_a, scale_b, check_decimal_overflow);
}
else if (col_left_const && col_right)
{
OpImpl::constantVector(col_left_const->template getValue<T0>(), col_right->getData(), vec_res,
scale_a, scale_b, check_decimal_overflow);
}
else if (col_left && col_right_const)
{
OpImpl::vectorConstant(col_left->getData(), col_right_const->template getValue<T1>(), vec_res,
scale_a, scale_b, check_decimal_overflow);
}
else
return false;
}
else if (auto col_left = checkAndGetColumn<ColVecT0>(col_left_raw))
else
{
if constexpr (result_is_decimal)
using OpImpl = BinaryOperationImpl<T0, T1, Op<T0, T1>, ResultType>;

/// non-vector result
if (col_left_const && col_right_const)
{
ResultDataType type = decimalResultType<is_multiply, is_division>(left, right);
auto res = OpImpl::constantConstant(col_left_const->template getValue<T0>(), col_right_const->template getValue<T1>());
block.getByPosition(result).column = ResultDataType().createColumnConst(col_left_const->size(), toField(res));
return true;
}

typename ResultDataType::FieldType scale_a = type.scaleFactorFor(left, is_multiply);
typename ResultDataType::FieldType scale_b = type.scaleFactorFor(right, is_multiply || is_division);
if constexpr (IsDataTypeDecimal<RightDataType> && is_division)
scale_a = right.getScaleMultiplier();
col_res = ColVecResult::create();
auto & vec_res = col_res->getData();
vec_res.resize(block.rows());

if (auto col_right = checkAndGetColumn<ColVecT1>(col_right_raw))
{
OpImpl::vectorVector(col_left->getData(), col_right->getData(), vec_res, scale_a, scale_b,
check_decimal_overflow);
}
else if (auto col_right_const = checkAndGetColumnConst<ColVecT1>(col_right_raw))
{
OpImpl::vectorConstant(col_left->getData(), col_right_const->template getValue<T1>(), vec_res,
scale_a, scale_b, check_decimal_overflow);
}
else
return false;
if (col_left && col_right)
{
OpImpl::vectorVector(col_left->getData().data(), col_right->getData().data(), vec_res.data(), vec_res.size());
}
else if (col_left_const && col_right)
{
OpImpl::constantVector(col_left_const->template getValue<T0>(), col_right->getData().data(), vec_res.data(), vec_res.size());
}
else if (col_left && col_right_const)
{
OpImpl::vectorConstant(col_left->getData().data(), col_right_const->template getValue<T1>(), vec_res.data(), vec_res.size());
}
else
{
if (auto col_right = checkAndGetColumn<ColVecT1>(col_right_raw))
OpImpl::vectorVector(col_left->getData().data(), col_right->getData().data(), vec_res.data(), vec_res.size());
else if (auto col_right_const = checkAndGetColumnConst<ColVecT1>(col_right_raw))
OpImpl::vectorConstant(col_left->getData().data(), col_right_const->template getValue<T1>(), vec_res.data(), vec_res.size());
else
return false;
}
return false;
}
else
return false;

block.getByPosition(result).column = std::move(col_res);
return true;
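The rewritten block above folds the previously duplicated decimal and non-decimal paths into a single constant/vector dispatch: each operand is either a constant or a full column, and all combinations share one result buffer. A toy model of that dispatch shape (plain std::vector instead of ClickHouse columns; hypothetical names):

#include <cstddef>
#include <optional>
#include <vector>

// Four-way dispatch over constant/vector operands, as in the code above.
// std::nullopt mirrors the `return false` fallthrough for unsupported input.
std::optional<std::vector<int>> binary_op(
    const int * left_const, const std::vector<int> * left_vec,
    const int * right_const, const std::vector<int> * right_vec)
{
    if ((!left_const && !left_vec) || (!right_const && !right_vec))
        return std::nullopt;
    if (left_const && right_const)
        return std::vector<int>{*left_const + *right_const}; // constantConstant

    size_t rows = left_vec ? left_vec->size() : right_vec->size();
    std::vector<int> res(rows);
    for (size_t i = 0; i < rows; ++i)
    {
        int l = left_const ? *left_const : (*left_vec)[i];    // constantVector
        int r = right_const ? *right_const : (*right_vec)[i]; // vectorConstant / vectorVector
        res[i] = l + r;
    }
    return res;
}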
@@ -1133,7 +1120,8 @@ public:
}

/// Special case when the function is plus or minus, one of arguments is Date/DateTime and another is Interval.
if (auto function_builder = getFunctionForIntervalArithmetic(block.getByPosition(arguments[0]).type, block.getByPosition(arguments[1]).type))
if (auto function_builder
= getFunctionForIntervalArithmetic(block.getByPosition(arguments[0]).type, block.getByPosition(arguments[1]).type, context))
{
executeDateTimeIntervalPlusMinus(block, arguments, result, input_rows_count, function_builder);
return;
@@ -1223,4 +1211,228 @@ public:
bool canBeExecutedOnDefaultArguments() const override { return valid_on_default_arguments; }
};


template <template <typename, typename> class Op, typename Name, bool valid_on_default_arguments = true>
class FunctionBinaryArithmeticWithConstants : public FunctionBinaryArithmetic<Op, Name, valid_on_default_arguments>
{
public:
using Base = FunctionBinaryArithmetic<Op, Name, valid_on_default_arguments>;
using Monotonicity = typename Base::Monotonicity;

static FunctionPtr create(
const ColumnWithTypeAndName & left_,
const ColumnWithTypeAndName & right_,
const DataTypePtr & return_type_,
const Context & context)
{
return std::make_shared<FunctionBinaryArithmeticWithConstants>(left_, right_, return_type_, context);
}

FunctionBinaryArithmeticWithConstants(
const ColumnWithTypeAndName & left_,
const ColumnWithTypeAndName & right_,
const DataTypePtr & return_type_,
const Context & context_)
: Base(context_), left(left_), right(right_), return_type(return_type_)
{
}

void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) const override
{
if (left.column && isColumnConst(*left.column) && arguments.size() == 1)
{
Block block_with_constant
= {{left.column->cloneResized(input_rows_count), left.type, left.name},
block.getByPosition(arguments[0]),
block.getByPosition(result)};
Base::executeImpl(block_with_constant, {0, 1}, 2, input_rows_count);
block.getByPosition(result) = block_with_constant.getByPosition(2);
}
else if (right.column && isColumnConst(*right.column) && arguments.size() == 1)
{
Block block_with_constant
= {block.getByPosition(arguments[0]),
{right.column->cloneResized(input_rows_count), right.type, right.name},
block.getByPosition(result)};
Base::executeImpl(block_with_constant, {0, 1}, 2, input_rows_count);
block.getByPosition(result) = block_with_constant.getByPosition(2);
}
else
Base::executeImpl(block, arguments, result, input_rows_count);
}

bool hasInformationAboutMonotonicity() const override
{
std::string_view name_ = Name::name;
if (name_ == "minus" || name_ == "plus" || name_ == "divide" || name_ == "intDiv")
{
return true;
}
return false;
}

Monotonicity getMonotonicityForRange(const IDataType &, const Field & left_point, const Field & right_point) const override
{
// For simplicity, we treat null values as monotonicity breakers.
if (left_point.isNull() || right_point.isNull())
return {false, true, false};

// For simplicity, we treat every single-value interval as positively monotonic.
if (applyVisitor(FieldVisitorAccurateEquals(), left_point, right_point))
return {true, true, false};

std::string_view name_ = Name::name;
if (name_ == "minus" || name_ == "plus")
{
// const +|- variable
if (left.column && isColumnConst(*left.column))
{
auto transform = [&](const Field & point)
{
Block block_with_constant
= {{left.column->cloneResized(1), left.type, left.name},
{right.type->createColumnConst(1, point), right.type, right.name},
{nullptr, return_type, ""}};
Base::executeImpl(block_with_constant, {0, 1}, 2, 1);
Field point_transformed;
block_with_constant.getByPosition(2).column->get(0, point_transformed);
return point_transformed;
};
transform(left_point);
transform(right_point);
if (name_ == "plus")
{
// Check if there is an overflow
if (applyVisitor(FieldVisitorAccurateLess(), left_point, right_point)
== applyVisitor(FieldVisitorAccurateLess(), transform(left_point), transform(right_point)))
return {true, true, false};
else
return {false, true, false};
}
else
{
// Check if there is an overflow
if (applyVisitor(FieldVisitorAccurateLess(), left_point, right_point)
!= applyVisitor(FieldVisitorAccurateLess(), transform(left_point), transform(right_point)))
return {true, false, false};
else
return {false, false, false};
}
}
// variable +|- constant
else if (right.column && isColumnConst(*right.column))
{
auto transform = [&](const Field & point)
{
Block block_with_constant
= {{left.type->createColumnConst(1, point), left.type, left.name},
{right.column->cloneResized(1), right.type, right.name},
{nullptr, return_type, ""}};
Base::executeImpl(block_with_constant, {0, 1}, 2, 1);
Field point_transformed;
block_with_constant.getByPosition(2).column->get(0, point_transformed);
return point_transformed;
};

// Check if there is an overflow
if (applyVisitor(FieldVisitorAccurateLess(), left_point, right_point)
== applyVisitor(FieldVisitorAccurateLess(), transform(left_point), transform(right_point)))
return {true, true, false};
else
return {false, true, false};
}
}
if (name_ == "divide" || name_ == "intDiv")
{
// const / variable
if (left.column && isColumnConst(*left.column))
{
auto constant = (*left.column)[0];
if (applyVisitor(FieldVisitorAccurateEquals(), constant, Field(0)))
return {true, true, false}; // 0 / 0 is undefined, thus it's not always monotonic

bool is_constant_positive = applyVisitor(FieldVisitorAccurateLess(), Field(0), constant);
if (applyVisitor(FieldVisitorAccurateLess(), left_point, Field(0)) &&
applyVisitor(FieldVisitorAccurateLess(), right_point, Field(0)))
{
return {true, is_constant_positive, false};
}
else
if (applyVisitor(FieldVisitorAccurateLess(), Field(0), left_point) &&
applyVisitor(FieldVisitorAccurateLess(), Field(0), right_point))
{
return {true, !is_constant_positive, false};
}
}
// variable / constant
else if (right.column && isColumnConst(*right.column))
{
auto constant = (*right.column)[0];
if (applyVisitor(FieldVisitorAccurateEquals(), constant, Field(0)))
return {false, true, false}; // variable / 0 is undefined, let's treat it as non-monotonic

bool is_constant_positive = applyVisitor(FieldVisitorAccurateLess(), Field(0), constant);
// division is saturated to `inf`, thus it doesn't have overflow issues.
return {true, is_constant_positive, false};
}
}
return {false, true, false};
}

private:
ColumnWithTypeAndName left;
ColumnWithTypeAndName right;
DataTypePtr return_type;
};

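getMonotonicityForRange above probes monotonicity empirically: it applies the function itself to the two interval endpoints and checks whether their order survives, which doubles as an overflow test for plus and minus. A self-contained sketch of that endpoint test (hypothetical helper; wrap-around int64 arithmetic stands in for the Field computation):

#include <cstdint>
#include <iostream>

// `constant + x` is monotonic on [left, right] iff the transform keeps the
// endpoints in the same order; a flipped order means the sum wrapped around
// somewhere inside the interval.
bool plus_with_constant_is_monotonic(int64_t constant, int64_t left, int64_t right)
{
    auto transform = [&](int64_t x)
    {
        // Unsigned addition gives well-defined wrap-around semantics.
        return static_cast<int64_t>(static_cast<uint64_t>(constant) + static_cast<uint64_t>(x));
    };
    return (left < right) == (transform(left) < transform(right));
}

int main()
{
    std::cout << plus_with_constant_is_monotonic(10, 0, 100) << '\n';                  // 1: no overflow
    std::cout << plus_with_constant_is_monotonic(1, INT64_MAX - 5, INT64_MAX) << '\n'; // 0: wraps around
}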
template <template <typename, typename> class Op, typename Name, bool valid_on_default_arguments = true>
class BinaryArithmeticOverloadResolver : public IFunctionOverloadResolverImpl
{
public:
static constexpr auto name = Name::name;
static FunctionOverloadResolverImplPtr create(const Context & context)
{
return std::make_unique<BinaryArithmeticOverloadResolver>(context);
}

explicit BinaryArithmeticOverloadResolver(const Context & context_) : context(context_) {}

String getName() const override { return name; }
size_t getNumberOfArguments() const override { return 2; }
bool isVariadic() const override { return false; }

FunctionBaseImplPtr build(const ColumnsWithTypeAndName & arguments, const DataTypePtr & return_type) const override
{
/// More efficient specialization for two numeric arguments.
if (arguments.size() == 2
&& ((arguments[0].column && isColumnConst(*arguments[0].column))
|| (arguments[1].column && isColumnConst(*arguments[1].column))))
{
return std::make_unique<DefaultFunction>(
FunctionBinaryArithmeticWithConstants<Op, Name, valid_on_default_arguments>::create(
arguments[0], arguments[1], return_type, context),
ext::map<DataTypes>(arguments, [](const auto & elem) { return elem.type; }),
return_type);
}

return std::make_unique<DefaultFunction>(
FunctionBinaryArithmetic<Op, Name, valid_on_default_arguments>::create(context),
ext::map<DataTypes>(arguments, [](const auto & elem) { return elem.type; }),
return_type);
}

DataTypePtr getReturnType(const DataTypes & arguments) const override
{
if (arguments.size() != 2)
throw Exception(
"Number of arguments for function " + getName() + " doesn't match: passed " + toString(arguments.size()) + ", should be 2",
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
return FunctionBinaryArithmetic<Op, Name, valid_on_default_arguments>::getReturnTypeImplStatic(arguments, context);
}

private:
const Context & context;
};

}

@@ -1,10 +1,10 @@
#include <Functions/FunctionJoinGet.h>

#include <Columns/ColumnString.h>
#include <Functions/FunctionFactory.h>
#include <Functions/FunctionHelpers.h>
#include <Interpreters/Context.h>
#include <Interpreters/HashJoin.h>
#include <Columns/ColumnString.h>
#include <Storages/StorageJoin.h>

@@ -16,19 +16,35 @@ namespace ErrorCodes
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
}

template <bool or_null>
void ExecutableFunctionJoinGet<or_null>::execute(Block & block, const ColumnNumbers & arguments, size_t result, size_t)
{
Block keys;
for (size_t i = 2; i < arguments.size(); ++i)
{
auto key = block.getByPosition(arguments[i]);
keys.insert(std::move(key));
}
block.getByPosition(result) = join->joinGet(keys, result_block);
}

template <bool or_null>
ExecutableFunctionImplPtr FunctionJoinGet<or_null>::prepare(const Block &, const ColumnNumbers &, size_t) const
{
return std::make_unique<ExecutableFunctionJoinGet<or_null>>(join, Block{{return_type->createColumn(), return_type, attr_name}});
}

static auto getJoin(const ColumnsWithTypeAndName & arguments, const Context & context)
{
if (arguments.size() != 3)
throw Exception{"Function joinGet takes 3 arguments", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH};

String join_name;
if (const auto * name_col = checkAndGetColumnConst<ColumnString>(arguments[0].column.get()))
{
join_name = name_col->getValue<String>();
}
else
throw Exception{"Illegal type " + arguments[0].type->getName() + " of first argument of function joinGet, expected a const string.",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT};
throw Exception(
"Illegal type " + arguments[0].type->getName() + " of first argument of function joinGet, expected a const string.",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);

size_t dot = join_name.find('.');
String database_name;
@@ -43,10 +59,12 @@ static auto getJoin(const ColumnsWithTypeAndName & arguments, const Context & co
++dot;
}
String table_name = join_name.substr(dot);
if (table_name.empty())
throw Exception("joinGet does not allow empty table name", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
auto table = DatabaseCatalog::instance().getTable({database_name, table_name}, context);
auto storage_join = std::dynamic_pointer_cast<StorageJoin>(table);
if (!storage_join)
throw Exception{"Table " + join_name + " should have engine StorageJoin", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT};
throw Exception("Table " + join_name + " should have engine StorageJoin", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);

String attr_name;
if (const auto * name_col = checkAndGetColumnConst<ColumnString>(arguments[1].column.get()))
@@ -54,57 +72,30 @@ static auto getJoin(const ColumnsWithTypeAndName & arguments, const Context & co
attr_name = name_col->getValue<String>();
}
else
throw Exception{"Illegal type " + arguments[1].type->getName()
+ " of second argument of function joinGet, expected a const string.",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT};
throw Exception(
"Illegal type " + arguments[1].type->getName() + " of second argument of function joinGet, expected a const string.",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
return std::make_pair(storage_join, attr_name);
}

template <bool or_null>
FunctionBaseImplPtr JoinGetOverloadResolver<or_null>::build(const ColumnsWithTypeAndName & arguments, const DataTypePtr &) const
{
if (arguments.size() < 3)
throw Exception(
"Number of arguments for function " + getName() + " doesn't match: passed " + toString(arguments.size())
+ ", should be greater or equal to 3",
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
auto [storage_join, attr_name] = getJoin(arguments, context);
auto join = storage_join->getJoin();
DataTypes data_types(arguments.size());

DataTypes data_types(arguments.size() - 2);
for (size_t i = 2; i < arguments.size(); ++i)
data_types[i - 2] = arguments[i].type;
auto return_type = join->joinGetCheckAndGetReturnType(data_types, attr_name, or_null);
auto table_lock = storage_join->lockForShare(context.getInitialQueryId(), context.getSettingsRef().lock_acquire_timeout);
for (size_t i = 0; i < arguments.size(); ++i)
data_types[i] = arguments[i].type;

auto return_type = join->joinGetReturnType(attr_name, or_null);
return std::make_unique<FunctionJoinGet<or_null>>(table_lock, storage_join, join, attr_name, data_types, return_type);
}

template <bool or_null>
DataTypePtr JoinGetOverloadResolver<or_null>::getReturnType(const ColumnsWithTypeAndName & arguments) const
{
auto [storage_join, attr_name] = getJoin(arguments, context);
auto join = storage_join->getJoin();
return join->joinGetReturnType(attr_name, or_null);
}

template <bool or_null>
void ExecutableFunctionJoinGet<or_null>::execute(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count)
{
auto ctn = block.getByPosition(arguments[2]);
if (isColumnConst(*ctn.column))
ctn.column = ctn.column->cloneResized(1);
ctn.name = ""; // make sure the key name never collides with the join columns
Block key_block = {ctn};
join->joinGet(key_block, attr_name, or_null);
auto & result_ctn = key_block.getByPosition(1);
if (isColumnConst(*ctn.column))
result_ctn.column = ColumnConst::create(result_ctn.column, input_rows_count);
block.getByPosition(result) = result_ctn;
}

template <bool or_null>
ExecutableFunctionImplPtr FunctionJoinGet<or_null>::prepare(const Block &, const ColumnNumbers &, size_t) const
{
return std::make_unique<ExecutableFunctionJoinGet<or_null>>(join, attr_name);
}

void registerFunctionJoinGet(FunctionFactory & factory)
{
// joinGet
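The new execute above packs every key column (arguments 2..N) into one Block, which is what makes composite-key joinGet possible. A toy model of the resulting lookup, unrelated to the actual HashJoin internals (std::map with a tuple key; hypothetical names):

#include <iostream>
#include <map>
#include <string>
#include <tuple>

using Key = std::tuple<int, std::string>; // analog of several key columns

int main()
{
    std::map<Key, std::string> join_table{{{1, "a"}, "row1"}, {{2, "b"}, "row2"}};

    // All key parts participate in the lookup; a miss falls back to a
    // default, like joinGet returning the attribute's default value.
    auto it = join_table.find(Key{2, "b"});
    std::cout << (it != join_table.end() ? it->second : std::string("<default>")) << '\n'; // row2
}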
@@ -13,14 +13,14 @@ template <bool or_null>
class ExecutableFunctionJoinGet final : public IExecutableFunctionImpl
{
public:
ExecutableFunctionJoinGet(HashJoinPtr join_, String attr_name_)
: join(std::move(join_)), attr_name(std::move(attr_name_)) {}
ExecutableFunctionJoinGet(HashJoinPtr join_, const Block & result_block_)
: join(std::move(join_)), result_block(result_block_) {}

static constexpr auto name = or_null ? "joinGetOrNull" : "joinGet";

bool useDefaultImplementationForNulls() const override { return false; }
bool useDefaultImplementationForConstants() const override { return true; }
bool useDefaultImplementationForLowCardinalityColumns() const override { return true; }
bool useDefaultImplementationForConstants() const override { return true; }

void execute(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) override;

@@ -28,7 +28,7 @@ public:

private:
HashJoinPtr join;
const String attr_name;
Block result_block;
};

template <bool or_null>
@@ -77,13 +77,14 @@ public:
String getName() const override { return name; }

FunctionBaseImplPtr build(const ColumnsWithTypeAndName & arguments, const DataTypePtr &) const override;
DataTypePtr getReturnType(const ColumnsWithTypeAndName & arguments) const override;
DataTypePtr getReturnType(const ColumnsWithTypeAndName &) const override { return {}; } // Not used

bool useDefaultImplementationForNulls() const override { return false; }
bool useDefaultImplementationForLowCardinalityColumns() const override { return true; }

bool isVariadic() const override { return true; }
size_t getNumberOfArguments() const override { return 0; }
ColumnNumbers getArgumentsThatAreAlwaysConstant() const override { return {0, 1}; }

private:
const Context & context;
@@ -1570,25 +1570,15 @@ struct ToNumberMonotonicity
if (left.isNull() || right.isNull())
return {};

if (from_is_unsigned == to_is_unsigned)
{
/// All bits except those that fit must be the same.
if (divideByRangeOfType(left.get<UInt64>()) == divideByRangeOfType(right.get<UInt64>()))
return {true};

/// The function cannot be monotonic when left and right are not in the same range.
if (divideByRangeOfType(left.get<UInt64>()) != divideByRangeOfType(right.get<UInt64>()))
return {};
}

if (to_is_unsigned)
return {true};
else
{
/// When signedness is changed, it's also required for arguments to be from the same half.
/// And they must be in the same half after converting to the result type.
if (left_in_first_half == right_in_first_half
&& (T(left.get<Int64>()) >= 0) == (T(right.get<Int64>()) >= 0)
&& divideByRangeOfType(left.get<UInt64>()) == divideByRangeOfType(right.get<UInt64>()))
return {true};

return {};
}
// If To is signed, it's possible that the signedness differs after conversion. So we check it explicitly.
return {(T(left.get<UInt64>()) >= 0) == (T(right.get<UInt64>()) >= 0)};
}

__builtin_unreachable();
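The simplified return above keeps only the essential condition: after a signedness-changing conversion, monotonicity on [left, right] holds exactly when both endpoints land on the same side of zero in the destination type. A small self-contained sketch of that check (hypothetical helper, Int8 as the destination type):

#include <cstdint>
#include <iostream>

// Both endpoints must keep the same sign after the narrowing cast for the
// conversion to stay monotonic on the interval.
bool same_sign_after_cast_to_int8(uint64_t left, uint64_t right)
{
    return (static_cast<int8_t>(left) >= 0) == (static_cast<int8_t>(right) >= 0);
}

int main()
{
    std::cout << same_sign_after_cast_to_int8(10, 100) << '\n';  // 1: both stay non-negative
    std::cout << same_sign_after_cast_to_int8(100, 200) << '\n'; // 0: 200 becomes -56 as Int8
}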
@@ -3,6 +3,17 @@ add_headers_and_sources(clickhouse_functions_gatherutils .)
add_library(clickhouse_functions_gatherutils ${clickhouse_functions_gatherutils_sources} ${clickhouse_functions_gatherutils_headers})
target_link_libraries(clickhouse_functions_gatherutils PRIVATE dbms)

check_cxx_compiler_flag("-Wsuggest-override" HAS_SUGGEST_OVERRIDE)
check_cxx_compiler_flag("-Wsuggest-destructor-override" HAS_SUGGEST_DESTRUCTOR_OVERRIDE)

if (HAS_SUGGEST_OVERRIDE)
target_compile_definitions(clickhouse_functions_gatherutils PUBLIC HAS_SUGGEST_OVERRIDE)
endif()

if (HAS_SUGGEST_DESTRUCTOR_OVERRIDE)
target_compile_definitions(clickhouse_functions_gatherutils PUBLIC HAS_SUGGEST_DESTRUCTOR_OVERRIDE)
endif()

if (STRIP_DEBUG_SYMBOLS_FUNCTIONS)
target_compile_options(clickhouse_functions_gatherutils PRIVATE "-g0")
endif()
@@ -129,9 +129,13 @@ struct NumericArraySource : public ArraySourceImpl<NumericArraySource<T>>
#pragma GCC diagnostic ignored "-Wsuggest-override"
#elif __clang_major__ >= 11
#pragma GCC diagnostic push
#ifdef HAS_SUGGEST_OVERRIDE
#pragma GCC diagnostic ignored "-Wsuggest-override"
#endif
#ifdef HAS_SUGGEST_DESTRUCTOR_OVERRIDE
#pragma GCC diagnostic ignored "-Wsuggest-destructor-override"
#endif
#endif

template <typename Base>
struct ConstSource : public Base
312 src/Functions/array/mapPopulateSeries.cpp (new file)
@@ -0,0 +1,312 @@
#include <Columns/ColumnArray.h>
#include <Columns/ColumnTuple.h>
#include <Columns/ColumnVector.h>
#include <DataTypes/DataTypeArray.h>
#include <DataTypes/DataTypeTuple.h>
#include <Functions/FunctionFactory.h>
#include <Functions/FunctionHelpers.h>
#include <Functions/IFunction.h>
#include "Core/ColumnWithTypeAndName.h"
#include "DataTypes/IDataType.h"

namespace DB
{
namespace ErrorCodes
{
extern const int ILLEGAL_COLUMN;
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
}

class FunctionMapPopulateSeries : public IFunction
{
public:
static constexpr auto name = "mapPopulateSeries";
static FunctionPtr create(const Context &) { return std::make_shared<FunctionMapPopulateSeries>(); }

private:
String getName() const override { return name; }

size_t getNumberOfArguments() const override { return 0; }
bool isVariadic() const override { return true; }
bool useDefaultImplementationForConstants() const override { return true; }

DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
{
if (arguments.size() < 2)
throw Exception{getName() + " accepts at least two arrays for key and value", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH};

if (arguments.size() > 3)
throw Exception{"too many arguments in " + getName() + " call", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH};

const DataTypeArray * key_array_type = checkAndGetDataType<DataTypeArray>(arguments[0].get());
const DataTypeArray * val_array_type = checkAndGetDataType<DataTypeArray>(arguments[1].get());

if (!key_array_type || !val_array_type)
throw Exception{getName() + " accepts two arrays for key and value", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT};

DataTypePtr keys_type = key_array_type->getNestedType();
WhichDataType which_key(keys_type);
if (!(which_key.isNativeInt() || which_key.isNativeUInt()))
{
throw Exception(
"Keys for " + getName() + " should be of native integer type (signed or unsigned)", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}

if (arguments.size() == 3)
{
DataTypePtr max_key_type = arguments[2];
WhichDataType which_max_key(max_key_type);

if (which_max_key.isNullable())
throw Exception(
"Max key argument in arguments of function " + getName() + " can not be Nullable",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);

if (keys_type->getTypeId() != max_key_type->getTypeId())
throw Exception("Max key type in " + getName() + " should be same as keys type", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}

return std::make_shared<DataTypeTuple>(DataTypes{arguments[0], arguments[1]});
}

template <typename KeyType, typename ValType>
void execute2(
Block & block, size_t result, ColumnPtr key_column, ColumnPtr val_column, ColumnPtr max_key_column, const DataTypeTuple & res_type)
const
{
MutableColumnPtr res_tuple = res_type.createColumn();

auto * to_tuple = assert_cast<ColumnTuple *>(res_tuple.get());
auto & to_keys_arr = assert_cast<ColumnArray &>(to_tuple->getColumn(0));
auto & to_keys_data = to_keys_arr.getData();
auto & to_keys_offsets = to_keys_arr.getOffsets();

auto & to_vals_arr = assert_cast<ColumnArray &>(to_tuple->getColumn(1));
auto & to_values_data = to_vals_arr.getData();

bool max_key_is_const = false, key_is_const = false, val_is_const = false;

const auto * keys_array = checkAndGetColumn<ColumnArray>(key_column.get());
if (!keys_array)
{
const ColumnConst * const_array = checkAndGetColumnConst<ColumnArray>(key_column.get());
if (!const_array)
throw Exception("Expected array column, found " + key_column->getName(), ErrorCodes::ILLEGAL_COLUMN);

keys_array = checkAndGetColumn<ColumnArray>(const_array->getDataColumnPtr().get());
key_is_const = true;
}

const auto * values_array = checkAndGetColumn<ColumnArray>(val_column.get());
if (!values_array)
{
const ColumnConst * const_array = checkAndGetColumnConst<ColumnArray>(val_column.get());
if (!const_array)
throw Exception("Expected array column, found " + val_column->getName(), ErrorCodes::ILLEGAL_COLUMN);

values_array = checkAndGetColumn<ColumnArray>(const_array->getDataColumnPtr().get());
val_is_const = true;
}

if (!keys_array || !values_array)
/* something went wrong */
throw Exception{"Illegal columns in arguments of function " + getName(), ErrorCodes::ILLEGAL_COLUMN};

KeyType max_key_const{0};

if (max_key_column && isColumnConst(*max_key_column))
{
const auto * column_const = static_cast<const ColumnConst *>(&*max_key_column);
max_key_const = column_const->template getValue<KeyType>();
max_key_is_const = true;
}

auto & keys_data = assert_cast<const ColumnVector<KeyType> &>(keys_array->getData()).getData();
auto & values_data = assert_cast<const ColumnVector<ValType> &>(values_array->getData()).getData();

// Original offsets
const IColumn::Offsets & key_offsets = keys_array->getOffsets();
const IColumn::Offsets & val_offsets = values_array->getOffsets();

IColumn::Offset offset{0};
size_t row_count = key_is_const ? values_array->size() : keys_array->size();

std::map<KeyType, ValType> res_map;

// Iterate through the two arrays and fill the result values.
for (size_t row = 0; row < row_count; ++row)
{
size_t key_offset = 0, val_offset = 0, array_size = key_offsets[0], val_array_size = val_offsets[0];

res_map.clear();

if (!key_is_const)
{
key_offset = row > 0 ? key_offsets[row - 1] : 0;
array_size = key_offsets[row] - key_offset;
}

if (!val_is_const)
{
val_offset = row > 0 ? val_offsets[row - 1] : 0;
val_array_size = val_offsets[row] - val_offset;
}

if (array_size != val_array_size)
throw Exception("Key and value array should have same amount of elements", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

if (array_size == 0)
{
to_keys_offsets.push_back(offset);
continue;
}

for (size_t i = 0; i < array_size; ++i)
{
res_map.insert({keys_data[key_offset + i], values_data[val_offset + i]});
}

auto min_key = res_map.begin()->first;
auto max_key = res_map.rbegin()->first;

if (max_key_column)
{
/* update the current max key if it's not constant */
if (max_key_is_const)
{
max_key = max_key_const;
}
else
{
max_key = (static_cast<const ColumnVector<KeyType> *>(max_key_column.get()))->getData()[row];
}

/* no need to add anything, max key is less than the first key */
if (max_key < min_key)
{
to_keys_offsets.push_back(offset);
continue;
}
}

/* fill the result arrays */
KeyType key;
for (key = min_key; key <= max_key; ++key)
{
to_keys_data.insert(key);

auto it = res_map.find(key);
if (it != res_map.end())
{
to_values_data.insert(it->second);
}
else
{
to_values_data.insertDefault();
}

++offset;
}

to_keys_offsets.push_back(offset);
}

to_vals_arr.getOffsets().insert(to_keys_offsets.begin(), to_keys_offsets.end());
block.getByPosition(result).column = std::move(res_tuple);
}

template <typename KeyType>
void execute1(
Block & block, size_t result, ColumnPtr key_column, ColumnPtr val_column, ColumnPtr max_key_column, const DataTypeTuple & res_type)
const
{
const auto & val_type = (assert_cast<const DataTypeArray *>(res_type.getElements()[1].get()))->getNestedType();
switch (val_type->getTypeId())
{
case TypeIndex::Int8:
execute2<KeyType, Int8>(block, result, key_column, val_column, max_key_column, res_type);
break;
case TypeIndex::Int16:
execute2<KeyType, Int16>(block, result, key_column, val_column, max_key_column, res_type);
break;
case TypeIndex::Int32:
execute2<KeyType, Int32>(block, result, key_column, val_column, max_key_column, res_type);
break;
case TypeIndex::Int64:
execute2<KeyType, Int64>(block, result, key_column, val_column, max_key_column, res_type);
break;
case TypeIndex::UInt8:
execute2<KeyType, UInt8>(block, result, key_column, val_column, max_key_column, res_type);
break;
case TypeIndex::UInt16:
execute2<KeyType, UInt16>(block, result, key_column, val_column, max_key_column, res_type);
break;
case TypeIndex::UInt32:
execute2<KeyType, UInt32>(block, result, key_column, val_column, max_key_column, res_type);
break;
case TypeIndex::UInt64:
execute2<KeyType, UInt64>(block, result, key_column, val_column, max_key_column, res_type);
break;
default:
throw Exception{"Illegal columns in arguments of function " + getName(), ErrorCodes::ILLEGAL_COLUMN};
}
}

void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t) const override
{
auto col1 = block.safeGetByPosition(arguments[0]), col2 = block.safeGetByPosition(arguments[1]);

const auto * k = assert_cast<const DataTypeArray *>(col1.type.get());
const auto * v = assert_cast<const DataTypeArray *>(col2.type.get());

/* determine output type */
const DataTypeTuple & res_type = DataTypeTuple(
DataTypes{std::make_shared<DataTypeArray>(k->getNestedType()), std::make_shared<DataTypeArray>(v->getNestedType())});

ColumnPtr max_key_column = nullptr;

if (arguments.size() == 3)
{
/* max key provided */
max_key_column = block.safeGetByPosition(arguments[2]).column;
}

switch (k->getNestedType()->getTypeId())
{
case TypeIndex::Int8:
execute1<Int8>(block, result, col1.column, col2.column, max_key_column, res_type);
break;
case TypeIndex::Int16:
execute1<Int16>(block, result, col1.column, col2.column, max_key_column, res_type);
break;
case TypeIndex::Int32:
execute1<Int32>(block, result, col1.column, col2.column, max_key_column, res_type);
break;
case TypeIndex::Int64:
execute1<Int64>(block, result, col1.column, col2.column, max_key_column, res_type);
break;
case TypeIndex::UInt8:
execute1<UInt8>(block, result, col1.column, col2.column, max_key_column, res_type);
break;
case TypeIndex::UInt16:
execute1<UInt16>(block, result, col1.column, col2.column, max_key_column, res_type);
break;
case TypeIndex::UInt32:
execute1<UInt32>(block, result, col1.column, col2.column, max_key_column, res_type);
break;
case TypeIndex::UInt64:
execute1<UInt64>(block, result, col1.column, col2.column, max_key_column, res_type);
break;
default:
throw Exception{"Illegal columns in arguments of function " + getName(), ErrorCodes::ILLEGAL_COLUMN};
}
}
};

void registerFunctionMapPopulateSeries(FunctionFactory & factory)
{
factory.registerFunction<FunctionMapPopulateSeries>();
}

}
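Per row, the function above sorts the key/value pairs into an ordered map and then emits a dense key range, inserting default values for the missing keys. A condensed, self-contained model of that per-row loop (plain std::vector in place of ClickHouse columns; assumes non-empty input and no explicit max key):

#include <cstddef>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

std::pair<std::vector<int>, std::vector<int>>
populate_series(const std::vector<int> & keys, const std::vector<int> & values)
{
    std::map<int, int> m; // ordered, so begin()/rbegin() give min/max key
    for (size_t i = 0; i < keys.size(); ++i)
        m.insert({keys[i], values[i]});

    std::vector<int> out_keys, out_values;
    for (int k = m.begin()->first; k <= m.rbegin()->first; ++k)
    {
        out_keys.push_back(k);
        auto it = m.find(k);
        out_values.push_back(it != m.end() ? it->second : 0); // insertDefault analog
    }
    return {out_keys, out_values};
}

int main()
{
    auto [k, v] = populate_series({1, 2, 4}, {11, 22, 44});
    for (size_t i = 0; i < k.size(); ++i)
        std::cout << k[i] << " -> " << v[i] << '\n'; // key 3 is filled with default 0
}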
@@ -36,6 +36,7 @@ void registerFunctionArrayZip(FunctionFactory &);
void registerFunctionArrayAUC(FunctionFactory &);
void registerFunctionArrayReduceInRanges(FunctionFactory &);
void registerFunctionMapOp(FunctionFactory &);
void registerFunctionMapPopulateSeries(FunctionFactory &);

void registerFunctionsArray(FunctionFactory & factory)
{

@@ -73,6 +74,7 @@ void registerFunctionsArray(FunctionFactory & factory)
registerFunctionArrayZip(factory);
registerFunctionArrayAUC(factory);
registerFunctionMapOp(factory);
registerFunctionMapPopulateSeries(factory);
}

}
@@ -37,7 +37,7 @@ struct BitAndImpl
};

struct NameBitAnd { static constexpr auto name = "bitAnd"; };
using FunctionBitAnd = FunctionBinaryArithmetic<BitAndImpl, NameBitAnd, true>;
using FunctionBitAnd = BinaryArithmeticOverloadResolver<BitAndImpl, NameBitAnd, true>;

}

@@ -42,7 +42,7 @@ struct BitBoolMaskAndImpl
};

struct NameBitBoolMaskAnd { static constexpr auto name = "__bitBoolMaskAnd"; };
using FunctionBitBoolMaskAnd = FunctionBinaryArithmetic<BitBoolMaskAndImpl, NameBitBoolMaskAnd>;
using FunctionBitBoolMaskAnd = BinaryArithmeticOverloadResolver<BitBoolMaskAndImpl, NameBitBoolMaskAnd>;

}

@@ -42,7 +42,7 @@ struct BitBoolMaskOrImpl
};

struct NameBitBoolMaskOr { static constexpr auto name = "__bitBoolMaskOr"; };
using FunctionBitBoolMaskOr = FunctionBinaryArithmetic<BitBoolMaskOrImpl, NameBitBoolMaskOr>;
using FunctionBitBoolMaskOr = BinaryArithmeticOverloadResolver<BitBoolMaskOrImpl, NameBitBoolMaskOr>;

}

@@ -36,7 +36,7 @@ struct BitOrImpl
};

struct NameBitOr { static constexpr auto name = "bitOr"; };
using FunctionBitOr = FunctionBinaryArithmetic<BitOrImpl, NameBitOr, true>;
using FunctionBitOr = BinaryArithmeticOverloadResolver<BitOrImpl, NameBitOr, true>;

}
@@ -43,7 +43,7 @@ struct BitRotateLeftImpl
};

struct NameBitRotateLeft { static constexpr auto name = "bitRotateLeft"; };
using FunctionBitRotateLeft = FunctionBinaryArithmetic<BitRotateLeftImpl, NameBitRotateLeft>;
using FunctionBitRotateLeft = BinaryArithmeticOverloadResolver<BitRotateLeftImpl, NameBitRotateLeft>;

}

@@ -42,7 +42,7 @@ struct BitRotateRightImpl
};

struct NameBitRotateRight { static constexpr auto name = "bitRotateRight"; };
using FunctionBitRotateRight = FunctionBinaryArithmetic<BitRotateRightImpl, NameBitRotateRight>;
using FunctionBitRotateRight = BinaryArithmeticOverloadResolver<BitRotateRightImpl, NameBitRotateRight>;

}

@@ -42,7 +42,7 @@ struct BitShiftLeftImpl
};

struct NameBitShiftLeft { static constexpr auto name = "bitShiftLeft"; };
using FunctionBitShiftLeft = FunctionBinaryArithmetic<BitShiftLeftImpl, NameBitShiftLeft>;
using FunctionBitShiftLeft = BinaryArithmeticOverloadResolver<BitShiftLeftImpl, NameBitShiftLeft>;

}

@@ -42,7 +42,7 @@ struct BitShiftRightImpl
};

struct NameBitShiftRight { static constexpr auto name = "bitShiftRight"; };
using FunctionBitShiftRight = FunctionBinaryArithmetic<BitShiftRightImpl, NameBitShiftRight>;
using FunctionBitShiftRight = BinaryArithmeticOverloadResolver<BitShiftRightImpl, NameBitShiftRight>;

}
@@ -34,7 +34,7 @@ struct BitTestImpl
};

struct NameBitTest { static constexpr auto name = "bitTest"; };
using FunctionBitTest = FunctionBinaryArithmetic<BitTestImpl, NameBitTest>;
using FunctionBitTest = BinaryArithmeticOverloadResolver<BitTestImpl, NameBitTest>;

}

@@ -36,7 +36,7 @@ struct BitXorImpl
};

struct NameBitXor { static constexpr auto name = "bitXor"; };
using FunctionBitXor = FunctionBinaryArithmetic<BitXorImpl, NameBitXor, true>;
using FunctionBitXor = BinaryArithmeticOverloadResolver<BitXorImpl, NameBitXor, true>;

}
@@ -13,7 +13,6 @@ template <typename A, typename B>
struct DivideFloatingImpl
{
using ResultType = typename NumberTraits::ResultOfFloatingPointDivision<A, B>::Type;
static const constexpr bool allow_decimal = true;
static const constexpr bool allow_fixed_string = false;

template <typename Result = ResultType>

@@ -38,7 +37,7 @@ struct DivideFloatingImpl
};

struct NameDivide { static constexpr auto name = "divide"; };
using FunctionDivide = FunctionBinaryArithmetic<DivideFloatingImpl, NameDivide>;
using FunctionDivide = BinaryArithmeticOverloadResolver<DivideFloatingImpl, NameDivide>;

void registerFunctionDivide(FunctionFactory & factory)
{
@@ -40,7 +40,7 @@ struct GCDImpl
};

struct NameGCD { static constexpr auto name = "gcd"; };
using FunctionGCD = FunctionBinaryArithmetic<GCDImpl, NameGCD, false>;
using FunctionGCD = BinaryArithmeticOverloadResolver<GCDImpl, NameGCD, false>;

}
@@ -604,7 +604,6 @@ private:
const ColumnUInt8 * cond_col, Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count)
{
/// Convert both columns to the common type (if needed).

const ColumnWithTypeAndName & arg1 = block.getByPosition(arguments[1]);
const ColumnWithTypeAndName & arg2 = block.getByPosition(arguments[2]);

@@ -765,10 +764,22 @@ private:
return ColumnNullable::create(materialized, ColumnUInt8::create(column->size(), 0));
}

static ColumnPtr getNestedColumn(const ColumnPtr & column)
/// Return the nested column, recursively removing Nullable. Examples:
/// Nullable(size = 1, Int32(size = 1), UInt8(size = 1)) -> Int32(size = 1)
/// Const(size = 0, Nullable(size = 1, Int32(size = 1), UInt8(size = 1))) ->
/// Const(size = 0, Int32(size = 1))
static ColumnPtr recursiveGetNestedColumnWithoutNullable(const ColumnPtr & column)
{
if (const auto * nullable = checkAndGetColumn<ColumnNullable>(*column))
{
/// Nullable cannot contain Nullable
return nullable->getNestedColumnPtr();
}
else if (const auto * column_const = checkAndGetColumn<ColumnConst>(*column))
{
/// Keep the Const wrapper, but remove the Nullable inside it.
return ColumnConst::create(recursiveGetNestedColumnWithoutNullable(column_const->getDataColumnPtr()), column->size());
}

return column;
}

@@ -826,12 +837,12 @@ private:
{
arg_cond,
{
getNestedColumn(arg_then.column),
recursiveGetNestedColumnWithoutNullable(arg_then.column),
removeNullable(arg_then.type),
""
},
{
getNestedColumn(arg_else.column),
recursiveGetNestedColumnWithoutNullable(arg_else.column),
removeNullable(arg_else.type),
""
},
@ -110,7 +110,7 @@ template <> struct BinaryOperationImpl<Int32, Int64, DivideIntegralImpl<Int32, I
|
||||
|
||||
|
||||
struct NameIntDiv { static constexpr auto name = "intDiv"; };
|
||||
using FunctionIntDiv = FunctionBinaryArithmetic<DivideIntegralImpl, NameIntDiv, false>;
|
||||
using FunctionIntDiv = BinaryArithmeticOverloadResolver<DivideIntegralImpl, NameIntDiv, false>;
|
||||
|
||||
void registerFunctionIntDiv(FunctionFactory & factory)
|
||||
{
|
||||
|
@ -26,7 +26,7 @@ struct DivideIntegralOrZeroImpl
|
||||
};
|
||||
|
||||
struct NameIntDivOrZero { static constexpr auto name = "intDivOrZero"; };
|
||||
using FunctionIntDivOrZero = FunctionBinaryArithmetic<DivideIntegralOrZeroImpl, NameIntDivOrZero>;
|
||||
using FunctionIntDivOrZero = BinaryArithmeticOverloadResolver<DivideIntegralOrZeroImpl, NameIntDivOrZero>;
|
||||
|
||||
void registerFunctionIntDivOrZero(FunctionFactory & factory)
|
||||
{
|
||||
|
@ -78,7 +78,7 @@ struct LCMImpl
|
||||
};
|
||||
|
||||
struct NameLCM { static constexpr auto name = "lcm"; };
|
||||
using FunctionLCM = FunctionBinaryArithmetic<LCMImpl, NameLCM, false>;
|
||||
using FunctionLCM = BinaryArithmeticOverloadResolver<LCMImpl, NameLCM, false>;
|
||||
|
||||
}
|
||||
|
||||
|
@ -9,7 +9,6 @@ template <typename A, typename B>
|
||||
struct MinusImpl
|
||||
{
|
||||
using ResultType = typename NumberTraits::ResultOfSubtraction<A, B>::Type;
|
||||
static const constexpr bool allow_decimal = true;
|
||||
static const constexpr bool allow_fixed_string = false;
|
||||
|
||||
template <typename Result = ResultType>
|
||||
@ -44,7 +43,7 @@ struct MinusImpl
|
||||
};
|
||||
|
||||
struct NameMinus { static constexpr auto name = "minus"; };
|
||||
using FunctionMinus = FunctionBinaryArithmetic<MinusImpl, NameMinus>;
|
||||
using FunctionMinus = BinaryArithmeticOverloadResolver<MinusImpl, NameMinus>;
|
||||
|
||||
void registerFunctionMinus(FunctionFactory & factory)
|
||||
{
|
||||
|
@ -101,7 +101,7 @@ template <> struct BinaryOperationImpl<Int32, Int64, ModuloImpl<Int32, Int64>> :
|
||||
|
||||
|
||||
struct NameModulo { static constexpr auto name = "modulo"; };
|
||||
using FunctionModulo = FunctionBinaryArithmetic<ModuloImpl, NameModulo, false>;
|
||||
using FunctionModulo = BinaryArithmeticOverloadResolver<ModuloImpl, NameModulo, false>;
|
||||
|
||||
void registerFunctionModulo(FunctionFactory & factory)
|
||||
{
|
||||
|
@ -36,7 +36,7 @@ struct ModuloOrZeroImpl
|
||||
};
|
||||
|
||||
struct NameModuloOrZero { static constexpr auto name = "moduloOrZero"; };
|
||||
using FunctionModuloOrZero = FunctionBinaryArithmetic<ModuloOrZeroImpl, NameModuloOrZero>;
|
||||
using FunctionModuloOrZero = BinaryArithmeticOverloadResolver<ModuloOrZeroImpl, NameModuloOrZero>;
|
||||
|
||||
}
|
||||
|
||||
|
@ -9,7 +9,6 @@ template <typename A, typename B>
|
||||
struct MultiplyImpl
|
||||
{
|
||||
using ResultType = typename NumberTraits::ResultOfAdditionMultiplication<A, B>::Type;
|
||||
static const constexpr bool allow_decimal = true;
|
||||
static const constexpr bool allow_fixed_string = false;
|
||||
|
||||
template <typename Result = ResultType>
|
||||
@ -44,7 +43,7 @@ struct MultiplyImpl
|
||||
};
|
||||
|
||||
struct NameMultiply { static constexpr auto name = "multiply"; };
|
||||
using FunctionMultiply = FunctionBinaryArithmetic<MultiplyImpl, NameMultiply>;
|
||||
using FunctionMultiply = BinaryArithmeticOverloadResolver<MultiplyImpl, NameMultiply>;
|
||||
|
||||
void registerFunctionMultiply(FunctionFactory & factory)
|
||||
{
|
||||
|
@ -13,7 +13,14 @@ struct NegateImpl
|
||||
|
||||
static inline NO_SANITIZE_UNDEFINED ResultType apply(A a)
|
||||
{
|
||||
#if defined (__GNUC__) && __GNUC__ >= 10
|
||||
#pragma GCC diagnostic push
|
||||
#pragma GCC diagnostic ignored "-Wvector-operation-performance"
|
||||
#endif
|
||||
return -static_cast<ResultType>(a);
|
||||
#if defined (__GNUC__) && __GNUC__ >= 10
|
||||
#pragma GCC diagnostic pop
|
||||
#endif
|
||||
}
|
||||
|
||||
#if USE_EMBEDDED_COMPILER
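A standalone sketch of the guard pattern used in the hunk above, with a hypothetical `negate_as` helper (not the ClickHouse code): GCC 10 introduced `-Wvector-operation-performance`, which can fire on the cast-and-negate, so the pragmas are compiled in only for GCC 10 and newer.

```cpp
#include <iostream>

// Hypothetical helper reproducing the pattern: suppress
// -Wvector-operation-performance (a GCC 10 diagnostic) around the
// cast-and-negate, and only when building with GCC >= 10.
// Other compilers (or older GCC) never see the pragmas.
template <typename Result, typename A>
Result negate_as(A a)
{
#if defined(__GNUC__) && __GNUC__ >= 10
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wvector-operation-performance"
#endif
    return -static_cast<Result>(a);
#if defined(__GNUC__) && __GNUC__ >= 10
#pragma GCC diagnostic pop
#endif
}

int main()
{
    std::cout << negate_as<long long>(42) << '\n'; // prints -42
}
```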
@@ -9,8 +9,8 @@ template <typename A, typename B>
struct PlusImpl
{
    using ResultType = typename NumberTraits::ResultOfAdditionMultiplication<A, B>::Type;
-    static const constexpr bool allow_decimal = true;
    static const constexpr bool allow_fixed_string = false;
+    static const constexpr bool is_commutative = true;

    template <typename Result = ResultType>
    static inline NO_SANITIZE_UNDEFINED Result apply(A a, B b)

@@ -45,7 +45,7 @@ struct PlusImpl
};

struct NamePlus { static constexpr auto name = "plus"; };
-using FunctionPlus = FunctionBinaryArithmetic<PlusImpl, NamePlus>;
+using FunctionPlus = BinaryArithmeticOverloadResolver<PlusImpl, NamePlus>;

void registerFunctionPlus(FunctionFactory & factory)
{

@@ -99,6 +99,7 @@ SRCS(
    array/indexOf.cpp
    array/length.cpp
+    array/mapOp.cpp
    array/mapPopulateSeries.cpp
    array/range.cpp
    array/registerFunctionsArray.cpp
    asin.cpp

@@ -77,6 +77,9 @@ ReadBufferFromFile::~ReadBufferFromFile()

void ReadBufferFromFile::close()
{
+    if (fd < 0)
+        return;
+
    if (0 != ::close(fd))
        throw Exception("Cannot close file", ErrorCodes::CANNOT_CLOSE_FILE);

@@ -92,6 +92,9 @@ WriteBufferFromFile::~WriteBufferFromFile()

/// Close file before destruction of object.
void WriteBufferFromFile::close()
{
+    if (fd < 0)
+        return;
+
    next();

    if (0 != ::close(fd))
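Both `close()` methods above follow the same discipline: become a no-op on an already-closed descriptor, flush pending data before releasing it, and treat a failing `::close` as a real error. A minimal stand-in sketch (hypothetical `FileSink` class, not the ClickHouse buffer types):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <stdexcept>

// Stand-in for the pattern above: close() is safe to call twice,
// flushes buffered data first, and reports ::close failures.
class FileSink
{
    int fd = -1;
public:
    explicit FileSink(const char * path) : fd(::open(path, O_WRONLY | O_CREAT, 0644)) {}
    ~FileSink() { try { close(); } catch (...) {} } // destructor must not throw

    void flush() { /* write out any buffered bytes; stands in for next() */ }

    void close()
    {
        if (fd < 0)
            return;          // second call is harmless
        flush();             // flush before giving up the descriptor
        int res = ::close(fd);
        fd = -1;             // mark closed even on failure
        if (res != 0)
            throw std::runtime_error("Cannot close file");
    }
};

int main()
{
    FileSink sink("/tmp/filesink_example");
    sink.close();
    sink.close(); // no-op
}
```

Throwing from an explicit `close()` rather than the destructor lets callers observe lost-write errors, while the destructor still closes as a best-effort fallback.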
@@ -11,6 +11,7 @@
#include <common/LocalDateTime.h>
#include <common/find_symbols.h>
#include <common/StringRef.h>
+#include <common/wide_integer_to_string.h>

#include <Core/DecimalFunctions.h>
#include <Core/Types.h>

@@ -42,6 +43,12 @@ namespace ErrorCodes
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

+template <typename T>
+inline std::string bigintToString(const T & x)
+{
+    return to_string(x);
+}
+
/// Helper functions for formatted and binary output.

inline void writeChar(char x, WriteBuffer & buf)

@@ -23,6 +23,7 @@
#include <Storages/MergeTree/MergeTreeSettings.h>
#include <Storages/CompressionCodecSelector.h>
#include <Storages/StorageS3Settings.h>
+#include <Storages/LiveView/TemporaryLiveViewCleaner.h>
#include <Disks/DiskLocal.h>
#include <TableFunctions/TableFunctionFactory.h>
#include <Interpreters/ActionLocksManager.h>

@@ -430,6 +431,7 @@ struct ContextShared
        if (system_logs)
            system_logs->shutdown();

+        TemporaryLiveViewCleaner::shutdown();
        DatabaseCatalog::shutdown();

        /// Preemptive destruction is important, because these objects may have a refcount to ContextShared (cyclic reference).

@@ -487,6 +489,12 @@ Context Context::createGlobal(ContextShared * shared)
    return res;
}

+void Context::initGlobal()
+{
+    DatabaseCatalog::init(this);
+    TemporaryLiveViewCleaner::init(*this);
+}
+
SharedContextHolder Context::createShared()
{
    return SharedContextHolder(std::make_unique<ContextShared>());

@@ -445,11 +445,7 @@ public:

    void makeQueryContext() { query_context = this; }
    void makeSessionContext() { session_context = this; }
-    void makeGlobalContext()
-    {
-        global_context = this;
-        DatabaseCatalog::init(this);
-    }
+    void makeGlobalContext() { initGlobal(); global_context = this; }

    const Settings & getSettingsRef() const { return settings; }

@@ -625,6 +621,8 @@ public:
private:
    std::unique_lock<std::recursive_mutex> getLock() const;

+    void initGlobal();
+
    /// Compute and set actual user settings, client_info.current_user should be set
    void calculateAccessRights();

@@ -657,7 +657,10 @@ void DatabaseCatalog::enqueueDroppedTableCleanup(StorageID table_id, StoragePtr
    /// Table was removed from database. Enqueue removal of its data from disk.
    time_t drop_time;
    if (table)
+    {
        drop_time = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
+        table->is_dropped = true;
+    }
    else
    {
        /// Try load table from metadata to drop it correctly (e.g. remove metadata from zk or remove data from all volumes)

@@ -674,6 +677,7 @@ void DatabaseCatalog::enqueueDroppedTableCleanup(StorageID table_id, StoragePtr
        try
        {
            table = createTableFromAST(*create, table_id.getDatabaseName(), data_path, *global_context, false).second;
+            table->is_dropped = true;
        }
        catch (...)
        {

@@ -763,7 +767,6 @@ void DatabaseCatalog::dropTableFinally(const TableMarkedAsDropped & table) const
    if (table.table)
    {
        table.table->drop();
-        table.table->is_dropped = true;
    }

    /// Even if table is not loaded, try remove its data from disk.

@@ -893,6 +893,8 @@ private:
            cancelLoading(info);
        }

+        putBackFinishedThreadsToPool();
+
        /// All loadings have unique loading IDs.
        size_t loading_id = next_id_counter++;
        info.loading_id = loading_id;

@@ -914,6 +916,21 @@ private:
        }
    }

+    void putBackFinishedThreadsToPool()
+    {
+        for (auto loading_id : recently_finished_loadings)
+        {
+            auto it = loading_threads.find(loading_id);
+            if (it != loading_threads.end())
+            {
+                auto thread = std::move(it->second);
+                loading_threads.erase(it);
+                thread.join(); /// It's very likely that `thread` has already finished.
+            }
+        }
+        recently_finished_loadings.clear();
+    }
+
    static void cancelLoading(Info & info)
    {
        if (!info.isLoading())

@@ -1095,12 +1112,11 @@ private:
        }
        min_id_to_finish_loading_dependencies.erase(std::this_thread::get_id());

-        auto it = loading_threads.find(loading_id);
-        if (it != loading_threads.end())
-        {
-            it->second.detach();
-            loading_threads.erase(it);
-        }
+        /// Add `loading_id` to the list of recently finished loadings.
+        /// This list is used to later put the threads which finished loading back to the thread pool.
+        /// (We can't put the loading thread back to the thread pool immediately here because at this point
+        /// the loading thread is about to finish but it's not finished yet right now.)
+        recently_finished_loadings.push_back(loading_id);
    }

    /// Calculate next update time for loaded_object. Can be called without mutex locking,

@@ -1158,6 +1174,7 @@ private:
    bool always_load_everything = false;
    std::atomic<bool> enable_async_loading = false;
    std::unordered_map<size_t, ThreadFromGlobalPool> loading_threads;
+    std::vector<size_t> recently_finished_loadings;
    std::unordered_map<std::thread::id, size_t> min_id_to_finish_loading_dependencies;
    size_t next_id_counter = 1; /// should always be > 0
    mutable pcg64 rnd_engine{randomSeed()};
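The comment in the hunk above states the core constraint: a loading thread cannot join itself, so on completion it only records its id, and whoever starts the next loading joins the finished threads under the same mutex. A reduced sketch of that bookkeeping, using plain `std::thread` in place of `ThreadFromGlobalPool` (hypothetical `Loader` type, not the ClickHouse class):

```cpp
#include <mutex>
#include <thread>
#include <unordered_map>
#include <vector>

// Deferred-join bookkeeping: workers report completion by id only;
// a later caller (holding the mutex) joins and erases them.
struct Loader
{
    std::mutex mutex;
    std::unordered_map<size_t, std::thread> loading_threads;
    std::vector<size_t> recently_finished_loadings;
    size_t next_id = 1;

    void startLoading()
    {
        std::lock_guard lock(mutex);
        putBackFinishedThreadsToPool(); // reap earlier workers first
        size_t id = next_id++;
        loading_threads.emplace(id, std::thread([this, id] { onFinished(id); }));
    }

    void onFinished(size_t id) // runs on the worker thread itself
    {
        std::lock_guard lock(mutex);
        recently_finished_loadings.push_back(id); // can't join ourselves here
    }

    void putBackFinishedThreadsToPool() // call with mutex held
    {
        for (size_t id : recently_finished_loadings)
        {
            auto it = loading_threads.find(id);
            if (it != loading_threads.end())
            {
                std::thread thread = std::move(it->second);
                loading_threads.erase(it);
                thread.join(); // very likely already finished
            }
        }
        recently_finished_loadings.clear();
    }

    ~Loader()
    {
        // Join leftovers without holding the mutex, since a worker may
        // still need it inside onFinished().
        std::unordered_map<size_t, std::thread> remaining;
        {
            std::lock_guard lock(mutex);
            remaining.swap(loading_threads);
        }
        for (auto & entry : remaining)
            entry.second.join();
    }
};

int main()
{
    Loader loader;
    loader.startLoading();
    loader.startLoading(); // joins the first worker if it already finished
}
```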
@@ -41,6 +41,7 @@ namespace ErrorCodes
    extern const int SYNTAX_ERROR;
    extern const int SET_SIZE_LIMIT_EXCEEDED;
    extern const int TYPE_MISMATCH;
+    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
}

namespace

@@ -1109,27 +1110,34 @@ void HashJoin::joinBlockImplCross(Block & block, ExtraBlockPtr & not_processed)
    block = block.cloneWithColumns(std::move(dst_columns));
}

-static void checkTypeOfKey(const Block & block_left, const Block & block_right)
-{
-    const auto & [c1, left_type_origin, left_name] = block_left.safeGetByPosition(0);
-    const auto & [c2, right_type_origin, right_name] = block_right.safeGetByPosition(0);
-    auto left_type = removeNullable(left_type_origin);
-    auto right_type = removeNullable(right_type_origin);
-
-    if (!left_type->equals(*right_type))
-        throw Exception("Type mismatch of columns to joinGet by: "
-            + left_name + " " + left_type->getName() + " at left, "
-            + right_name + " " + right_type->getName() + " at right",
-            ErrorCodes::TYPE_MISMATCH);
-}
-
-
-DataTypePtr HashJoin::joinGetReturnType(const String & column_name, bool or_null) const
+DataTypePtr HashJoin::joinGetCheckAndGetReturnType(const DataTypes & data_types, const String & column_name, bool or_null) const
{
    std::shared_lock lock(data->rwlock);

+    size_t num_keys = data_types.size();
+    if (right_table_keys.columns() != num_keys)
+        throw Exception(
+            "Number of arguments for function joinGet" + toString(or_null ? "OrNull" : "")
+                + " doesn't match: passed, should be equal to " + toString(num_keys),
+            ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
+
+    for (size_t i = 0; i < num_keys; ++i)
+    {
+        const auto & left_type_origin = data_types[i];
+        const auto & [c2, right_type_origin, right_name] = right_table_keys.safeGetByPosition(i);
+        auto left_type = removeNullable(left_type_origin);
+        auto right_type = removeNullable(right_type_origin);
+        if (!left_type->equals(*right_type))
+            throw Exception(
+                "Type mismatch in joinGet key " + toString(i) + ": found type " + left_type->getName() + ", while the needed type is "
+                    + right_type->getName(),
+                ErrorCodes::TYPE_MISMATCH);
+    }
+
    if (!sample_block_with_columns_to_add.has(column_name))
        throw Exception("StorageJoin doesn't contain column " + column_name, ErrorCodes::NO_SUCH_COLUMN_IN_TABLE);

    auto elem = sample_block_with_columns_to_add.getByName(column_name);
    if (or_null)
        elem.type = makeNullable(elem.type);

@@ -1138,34 +1146,33 @@ DataTypePtr HashJoin::joinGetReturnType(const String & column_name, bool or_null


template <typename Maps>
-void HashJoin::joinGetImpl(Block & block, const Block & block_with_columns_to_add, const Maps & maps_) const
+ColumnWithTypeAndName HashJoin::joinGetImpl(const Block & block, const Block & block_with_columns_to_add, const Maps & maps_) const
{
-    joinBlockImpl<ASTTableJoin::Kind::Left, ASTTableJoin::Strictness::RightAny>(
-        block, {block.getByPosition(0).name}, block_with_columns_to_add, maps_);
+    // Assemble the key block with correct names.
+    Block keys;
+    for (size_t i = 0; i < block.columns(); ++i)
+    {
+        auto key = block.getByPosition(i);
+        key.name = key_names_right[i];
+        keys.insert(std::move(key));
+    }
+
+    joinBlockImpl<ASTTableJoin::Kind::Left, ASTTableJoin::Strictness::Any>(
+        keys, key_names_right, block_with_columns_to_add, maps_);
+    return keys.getByPosition(keys.columns() - 1);
}


-// TODO: support composite key
// TODO: return multiple columns as named tuple
// TODO: return array of values when strictness == ASTTableJoin::Strictness::All
-void HashJoin::joinGet(Block & block, const String & column_name, bool or_null) const
+ColumnWithTypeAndName HashJoin::joinGet(const Block & block, const Block & block_with_columns_to_add) const
{
    std::shared_lock lock(data->rwlock);

-    if (key_names_right.size() != 1)
-        throw Exception("joinGet only supports StorageJoin containing exactly one key", ErrorCodes::UNSUPPORTED_JOIN_KEYS);
-
-    checkTypeOfKey(block, right_table_keys);
-
-    auto elem = sample_block_with_columns_to_add.getByName(column_name);
-    if (or_null)
-        elem.type = makeNullable(elem.type);
-    elem.column = elem.type->createColumn();
-
    if ((strictness == ASTTableJoin::Strictness::Any || strictness == ASTTableJoin::Strictness::RightAny) &&
        kind == ASTTableJoin::Kind::Left)
    {
-        joinGetImpl(block, {elem}, std::get<MapsOne>(data->maps));
+        return joinGetImpl(block, block_with_columns_to_add, std::get<MapsOne>(data->maps));
    }
    else
        throw Exception("joinGet only supports StorageJoin of type Left Any", ErrorCodes::INCOMPATIBLE_TYPE_OF_JOIN);

@@ -162,11 +162,11 @@ public:
      */
    void joinBlock(Block & block, ExtraBlockPtr & not_processed) override;

-    /// Infer the return type for joinGet function
-    DataTypePtr joinGetReturnType(const String & column_name, bool or_null) const;
+    /// Check joinGet arguments and infer the return type.
+    DataTypePtr joinGetCheckAndGetReturnType(const DataTypes & data_types, const String & column_name, bool or_null) const;

-    /// Used by joinGet function that turns StorageJoin into a dictionary
-    void joinGet(Block & block, const String & column_name, bool or_null) const;
+    /// Used by joinGet function that turns StorageJoin into a dictionary.
+    ColumnWithTypeAndName joinGet(const Block & block, const Block & block_with_columns_to_add) const;

    /** Keep "totals" (separate part of dataset, see WITH TOTALS) to use later.
      */

@@ -389,7 +389,7 @@ private:
    void joinBlockImplCross(Block & block, ExtraBlockPtr & not_processed) const;

    template <typename Maps>
-    void joinGetImpl(Block & block, const Block & block_with_columns_to_add, const Maps & maps_) const;
+    ColumnWithTypeAndName joinGetImpl(const Block & block, const Block & block_with_columns_to_add, const Maps & maps_) const;

    static Type chooseMethod(const ColumnRawPtrs & key_columns, Sizes & key_sizes);
};

@@ -673,6 +673,7 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)
        create.attach_short_syntax = true;
        create.if_not_exists = if_not_exists;
    }
+    /// TODO maybe assert table structure if create.attach_short_syntax is false?

    if (!create.temporary && create.database.empty())
        create.database = current_database;

@@ -10,6 +10,7 @@ namespace DB
{
class Context;
+using DatabaseAndTable = std::pair<DatabasePtr, StoragePtr>;
class AccessRightsElements;

/** Allow to either drop table with all its data (DROP),
  * or remove information about table (just forget) from server (DETACH),

@@ -17,6 +17,7 @@
#include <Interpreters/InterpreterWatchQuery.h>
+#include <Interpreters/JoinedTables.h>
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTInsertQuery.h>
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTSelectWithUnionQuery.h>

@@ -29,6 +30,8 @@
#include <Storages/StorageDistributed.h>
#include <TableFunctions/TableFunctionFactory.h>
#include <Common/checkStackSize.h>
+#include <Interpreters/TranslateQualifiedNamesVisitor.h>
+#include <Interpreters/getTableExpressions.h>

namespace
{

@@ -90,9 +93,32 @@ Block InterpreterInsertQuery::getSampleBlock(
    }

    Block table_sample = metadata_snapshot->getSampleBlock();

+    /// Process column transformers (e.g. * EXCEPT(a)), asterisks and qualified columns.
+    const auto & columns = metadata_snapshot->getColumns();
+    auto names_and_types = columns.getOrdinary();
+    removeDuplicateColumns(names_and_types);
+    auto table_expr = std::make_shared<ASTTableExpression>();
+    table_expr->database_and_table_name = createTableIdentifier(table->getStorageID());
+    table_expr->children.push_back(table_expr->database_and_table_name);
+    TablesWithColumns tables_with_columns;
+    tables_with_columns.emplace_back(DatabaseAndTableWithAlias(*table_expr, context.getCurrentDatabase()), names_and_types);
+
+    tables_with_columns[0].addHiddenColumns(columns.getMaterialized());
+    tables_with_columns[0].addHiddenColumns(columns.getAliases());
+    tables_with_columns[0].addHiddenColumns(table->getVirtuals());
+
+    NameSet source_columns_set;
+    for (const auto & identifier : query.columns->children)
+        source_columns_set.insert(identifier->getColumnName());
+    TranslateQualifiedNamesVisitor::Data visitor_data(source_columns_set, tables_with_columns);
+    TranslateQualifiedNamesVisitor visitor(visitor_data);
+    auto columns_ast = query.columns->clone();
+    visitor.visit(columns_ast);
+
    /// Form the block based on the column names from the query
    Block res;
-    for (const auto & identifier : query.columns->children)
+    for (const auto & identifier : columns_ast->children)
    {
        std::string current_name = identifier->getColumnName();
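A toy model of what the new block above achieves (stand-in function, not the ClickHouse visitor machinery): the INSERT column list may now contain asterisks and transformers such as `* EXCEPT(a)`, which are expanded against the table's known columns before the sample block is formed.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical expansion of an INSERT column list: a "*" entry expands to
// every ordinary table column (minus an EXCEPT set), while plain
// identifiers pass through unchanged.
std::vector<std::string> expandInsertColumns(
    const std::vector<std::string> & requested,     // e.g. {"*"} or {"a", "c"}
    const std::vector<std::string> & table_columns, // ordinary columns of the table
    const std::vector<std::string> & except = {})   // names stripped from "*"
{
    std::vector<std::string> result;
    for (const auto & item : requested)
    {
        if (item == "*")
        {
            for (const auto & col : table_columns)
                if (std::find(except.begin(), except.end(), col) == except.end())
                    result.push_back(col);
        }
        else
            result.push_back(item); // a plain identifier passes through
    }
    return result;
}

int main()
{
    for (const auto & name : expandInsertColumns({"*"}, {"a", "b", "c"}, {"b"}))
        std::cout << name << '\n'; // prints a, then c
}
```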
@@ -28,7 +28,7 @@ inline bool functionIsLikeOperator(const std::string & name)

inline bool functionIsJoinGet(const std::string & name)
{
-    return name == "joinGet" || startsWith(name, "dictGet");
+    return startsWith(name, "joinGet");
}

inline bool functionIsDictGet(const std::string & name)
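The corrected predicate matches the whole joinGet family (`joinGet`, `joinGetOrNull`) by prefix and no longer claims `dictGet*` by accident. A self-contained illustration, with a local `startsWith` stand-in for the helper used in the tree:

```cpp
#include <iostream>
#include <string>

// Local stand-in for the startsWith helper.
static bool startsWith(const std::string & s, const std::string & prefix)
{
    return s.size() >= prefix.size() && 0 == s.compare(0, prefix.size(), prefix);
}

static bool functionIsJoinGet(const std::string & name)
{
    return startsWith(name, "joinGet");
}

int main()
{
    for (const std::string & name : {"joinGet", "joinGetOrNull", "dictGetString"})
        std::cout << name << ": " << functionIsJoinGet(name) << '\n'; // 1, 1, 0
}
```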
@@ -110,7 +110,7 @@ void ASTColumnsReplaceTransformer::replaceChildren(ASTPtr & node, const ASTPtr &
    if (const auto * id = child->as<ASTIdentifier>())
    {
        if (id->shortName() == name)
-            child = replacement;
+            child = replacement->clone();
    }
    else
        replaceChildren(child, replacement, name);
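The `clone()` matters because the same replacement node may be spliced under several parents; without copies, a later mutation of one occurrence would silently change all of them. A minimal demonstration with a stand-in `Node` type (not the ClickHouse AST):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Stand-in tree node with a deep-copy clone(), mimicking IAST::clone().
struct Node
{
    std::string text;
    std::shared_ptr<Node> clone() const { return std::make_shared<Node>(*this); }
};

int main()
{
    auto replacement = std::make_shared<Node>(Node{"x + 1"});

    // Buggy sharing: both "children" alias one node.
    auto child_a = replacement;
    auto child_b = replacement;
    child_a->text = "rewritten";
    assert(child_b->text == "rewritten"); // surprise: b changed too

    // With clone(): each occurrence is independent.
    auto child_c = replacement->clone();
    auto child_d = replacement->clone();
    child_c->text = "rewritten again";
    assert(child_d->text == "rewritten"); // d keeps its own copy
}
```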
@@ -36,7 +36,7 @@ bool ParserInsertQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
    ParserToken s_lparen(TokenType::OpeningRoundBracket);
    ParserToken s_rparen(TokenType::ClosingRoundBracket);
    ParserIdentifier name_p;
-    ParserList columns_p(std::make_unique<ParserCompoundIdentifier>(), std::make_unique<ParserToken>(TokenType::Comma), false);
+    ParserList columns_p(std::make_unique<ParserInsertElement>(), std::make_unique<ParserToken>(TokenType::Comma), false);
    ParserFunction table_function_p{false};

    ASTPtr database;

@@ -189,5 +189,12 @@ bool ParserInsertQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
    return true;
}

+bool ParserInsertElement::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
+{
+    return ParserColumnsMatcher().parse(pos, node, expected)
+        || ParserQualifiedAsterisk().parse(pos, node, expected)
+        || ParserAsterisk().parse(pos, node, expected)
+        || ParserCompoundIdentifier().parse(pos, node, expected);
+}
+
}

@@ -33,4 +33,13 @@ public:
    ParserInsertQuery(const char * end_) : end(end_) {}
};

+/** Insert accepts an identifier and an asterisk with variants.
+  */
+class ParserInsertElement : public IParserBase
+{
+protected:
+    const char * getName() const override { return "insert element"; }
+    bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override;
+};
+
}
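A toy sketch of the alternation `ParserInsertElement` performs, with hypothetical stand-in parsers in place of `ParserColumnsMatcher`, `ParserQualifiedAsterisk`, `ParserAsterisk` and `ParserCompoundIdentifier`: each alternative is tried in order and the first match wins, so ordering matters (the bare identifier is the catch-all).

```cpp
#include <iostream>
#include <optional>
#include <string>

// Each stand-in "parser" accepts a whole token string or rejects it.
using Parser = std::optional<std::string> (*)(const std::string &);

std::optional<std::string> parseColumnsMatcher(const std::string & s)
{
    return s.rfind("COLUMNS(", 0) == 0 ? std::optional(s) : std::nullopt;
}
std::optional<std::string> parseQualifiedAsterisk(const std::string & s)
{
    return s.size() > 2 && s.compare(s.size() - 2, 2, ".*") == 0 ? std::optional(s) : std::nullopt;
}
std::optional<std::string> parseAsterisk(const std::string & s)
{
    return s == "*" ? std::optional(s) : std::nullopt;
}
std::optional<std::string> parseIdentifier(const std::string & s)
{
    return !s.empty() && s != "*" ? std::optional(s) : std::nullopt;
}

std::optional<std::string> parseInsertElement(const std::string & s)
{
    // Same shape as parseImpl above: a short-circuiting chain of alternatives.
    for (Parser p : {&parseColumnsMatcher, &parseQualifiedAsterisk, &parseAsterisk, &parseIdentifier})
        if (auto res = p(s))
            return res;
    return std::nullopt;
}

int main()
{
    for (const std::string & s : {"COLUMNS('^a')", "db.t.*", "*", "x"})
        std::cout << *parseInsertElement(s) << '\n';
}
```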
@@ -16,27 +16,17 @@ class LiveViewBlockInputStream : public IBlockInputStream
    using NonBlockingResult = std::pair<Block, bool>;

public:
-    ~LiveViewBlockInputStream() override
-    {
-        /// Start storage no users thread
-        /// if we are the last active user
-        if (!storage->is_dropped && blocks_ptr.use_count() < 3)
-            storage->startNoUsersThread(temporary_live_view_timeout_sec);
-    }
-
    LiveViewBlockInputStream(std::shared_ptr<StorageLiveView> storage_,
        std::shared_ptr<BlocksPtr> blocks_ptr_,
        std::shared_ptr<BlocksMetadataPtr> blocks_metadata_ptr_,
        std::shared_ptr<bool> active_ptr_,
        const bool has_limit_, const UInt64 limit_,
-        const UInt64 heartbeat_interval_sec_,
-        const UInt64 temporary_live_view_timeout_sec_)
+        const UInt64 heartbeat_interval_sec_)
        : storage(std::move(storage_)), blocks_ptr(std::move(blocks_ptr_)),
          blocks_metadata_ptr(std::move(blocks_metadata_ptr_)),
          active_ptr(std::move(active_ptr_)),
          has_limit(has_limit_), limit(limit_),
-          heartbeat_interval_usec(heartbeat_interval_sec_ * 1000000),
-          temporary_live_view_timeout_sec(temporary_live_view_timeout_sec_)
+          heartbeat_interval_usec(heartbeat_interval_sec_ * 1000000)
    {
        /// grab active pointer
        active = active_ptr.lock();

@@ -205,7 +195,6 @@ private:
    Int64 num_updates = -1;
    bool end_of_blocks = false;
    UInt64 heartbeat_interval_usec;
-    UInt64 temporary_live_view_timeout_sec;
    UInt64 last_event_timestamp_usec = 0;
};

@@ -34,13 +34,6 @@ class LiveViewEventsBlockInputStream : public IBlockInputStream
    using NonBlockingResult = std::pair<Block, bool>;

public:
-    ~LiveViewEventsBlockInputStream() override
-    {
-        /// Start storage no users thread
-        /// if we are the last active user
-        if (!storage->is_dropped && blocks_ptr.use_count() < 3)
-            storage->startNoUsersThread(temporary_live_view_timeout_sec);
-    }
    /// length default -2 because we want LIMIT to specify number of updates so that LIMIT 1 waits for 1 update
    /// and LIMIT 0 just returns data without waiting for any updates
    LiveViewEventsBlockInputStream(std::shared_ptr<StorageLiveView> storage_,

@@ -48,14 +41,12 @@ public:
        std::shared_ptr<BlocksMetadataPtr> blocks_metadata_ptr_,
        std::shared_ptr<bool> active_ptr_,
        const bool has_limit_, const UInt64 limit_,
-        const UInt64 heartbeat_interval_sec_,
-        const UInt64 temporary_live_view_timeout_sec_)
+        const UInt64 heartbeat_interval_sec_)
        : storage(std::move(storage_)), blocks_ptr(std::move(blocks_ptr_)),
          blocks_metadata_ptr(std::move(blocks_metadata_ptr_)),
          active_ptr(std::move(active_ptr_)), has_limit(has_limit_),
          limit(limit_),
-          heartbeat_interval_usec(heartbeat_interval_sec_ * 1000000),
-          temporary_live_view_timeout_sec(temporary_live_view_timeout_sec_)
+          heartbeat_interval_usec(heartbeat_interval_sec_ * 1000000)
    {
        /// grab active pointer
        active = active_ptr.lock();

@@ -236,7 +227,6 @@ private:
    Int64 num_updates = -1;
    bool end_of_blocks = false;
    UInt64 heartbeat_interval_usec;
-    UInt64 temporary_live_view_timeout_sec;
    UInt64 last_event_timestamp_usec = 0;
    Poco::Timestamp timestamp;
};
Some files were not shown because too many files have changed in this diff.