mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-10-14 12:30:49 +00:00

Commit fc2b3454bd: Merge branch 'master' of github.com:yandex/ClickHouse

.gitignore (vendored): 2 changes
@@ -9,8 +9,6 @@
 # auto generated files
 *.logrt
-dbms/src/Storages/System/StorageSystemContributors.generated.cpp
 
 /build
 /build_*
 /docs/build
CHANGELOG.md: 148 changes

@@ -1,3 +1,151 @@
## ClickHouse release 18.14.11, 2018-10-29

### Bug fixes:

* Fixed the error `Block structure mismatch in UNION stream: different number of columns` in LIMIT queries. [#2156](https://github.com/yandex/ClickHouse/issues/2156)
* Fixed errors when merging data in tables containing arrays inside Nested structures. [#3397](https://github.com/yandex/ClickHouse/pull/3397)
* Fixed incorrect query results if the `merge_tree_uniform_read_distribution` setting is disabled (it is enabled by default). [#3429](https://github.com/yandex/ClickHouse/pull/3429)
* Fixed an error on inserts to a Distributed table in Native format. [#3411](https://github.com/yandex/ClickHouse/issues/3411)
## ClickHouse release 18.14.10, 2018-10-23

* The `compile_expressions` setting (JIT compilation of expressions) is disabled by default. [#3410](https://github.com/yandex/ClickHouse/pull/3410)
* The `enable_optimize_predicate_expression` setting is disabled by default.
## ClickHouse release 18.14.9, 2018-10-16

### New features:

* The `WITH CUBE` modifier for `GROUP BY` (the alternative syntax `GROUP BY CUBE(...)` is also available). [#3172](https://github.com/yandex/ClickHouse/pull/3172)
* Added the `formatDateTime` function. [Alexandr Krasheninnikov](https://github.com/yandex/ClickHouse/pull/2770)
* Added the `JDBC` table engine and `jdbc` table function (requires installing clickhouse-jdbc-bridge). [Alexandr Krasheninnikov](https://github.com/yandex/ClickHouse/pull/3210)
* Added functions for working with the ISO week number: `toISOWeek`, `toISOYear`, `toStartOfISOYear`, and `toDayOfYear`. [#3146](https://github.com/yandex/ClickHouse/pull/3146)
* Now you can use `Nullable` columns for `MySQL` and `ODBC` tables. [#3362](https://github.com/yandex/ClickHouse/pull/3362)
* Nested data structures can be read as nested objects in `JSONEachRow` format. Added the `input_format_import_nested_json` setting. [Veloman Yunkan](https://github.com/yandex/ClickHouse/pull/3144)
* Parallel processing is available for many `MATERIALIZED VIEW`s when inserting data. See the `parallel_view_processing` setting. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3208)
* Added the `SYSTEM FLUSH LOGS` query (forces log flushes to system tables such as `query_log`). [#3321](https://github.com/yandex/ClickHouse/pull/3321)
* Now you can use predefined `database` and `table` macros when declaring `Replicated` tables. [#3251](https://github.com/yandex/ClickHouse/pull/3251)
* Added the ability to read `Decimal` type values in engineering notation (indicating powers of ten). [#3153](https://github.com/yandex/ClickHouse/pull/3153)
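The semantics behind the new ISO week functions (`toISOWeek`, `toISOYear`, `toStartOfISOYear`) follow the ISO 8601 week-numbering calendar, in which the ISO year can differ from the calendar year. As an illustrative analogue using Python's standard library (not ClickHouse code):

```python
from datetime import date

# ISO weeks start on Monday, and week 1 is the week containing the year's
# first Thursday, so the last days of December can belong to the next ISO year.
iso_year, iso_week, iso_weekday = date(2018, 12, 31).isocalendar()
print(iso_year, iso_week, iso_weekday)  # 2019 1 1: Dec 31, 2018 is already in ISO year 2019
```

By the same rules, `toISOYear(toDate('2018-12-31'))` would be expected to return 2019 rather than 2018.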
### Experimental features:

* Optimization of the `GROUP BY` clause for `LowCardinality` data types. [#3138](https://github.com/yandex/ClickHouse/pull/3138)
* Optimized calculation of expressions for `LowCardinality` data types. [#3200](https://github.com/yandex/ClickHouse/pull/3200)
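The idea behind `LowCardinality` is dictionary encoding: each distinct value is stored once, and the column itself holds small integer indexes into that dictionary. A minimal Python sketch of the encoding (illustrative only; the real implementation lives in `ColumnLowCardinality`/`ColumnUnique`):

```python
def dictionary_encode(values):
    """Split a column into (dictionary, indexes) so that values[i] == dictionary[indexes[i]]."""
    dictionary, index_of, indexes = [], {}, []
    for v in values:
        if v not in index_of:
            index_of[v] = len(dictionary)
            dictionary.append(v)
        indexes.append(index_of[v])
    return dictionary, indexes

dictionary, indexes = dictionary_encode(["ru", "en", "ru", "ru", "en"])
# dictionary holds 2 distinct strings; indexes is a compact integer column
```

Aggregating or comparing over the short dictionary instead of the full column is what makes the `GROUP BY` and expression optimizations above possible.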
### Improvements:

* Significantly reduced memory consumption for queries with `ORDER BY` and `LIMIT`. See the `max_bytes_before_remerge_sort` setting. [#3205](https://github.com/yandex/ClickHouse/pull/3205)
* If no `JOIN` type is specified (`LEFT`, `INNER`, ...), `INNER JOIN` is assumed. [#3147](https://github.com/yandex/ClickHouse/pull/3147)
* Qualified asterisks work correctly in queries with `JOIN`. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3202)
* The `ODBC` table engine correctly chooses the method for quoting identifiers in the SQL dialect of the remote database. [Alexandr Krasheninnikov](https://github.com/yandex/ClickHouse/pull/3210)
* The `compile_expressions` setting (JIT compilation of expressions) is enabled by default.
* Fixed behavior for simultaneous `DROP DATABASE/TABLE IF EXISTS` and `CREATE DATABASE/TABLE IF NOT EXISTS`. Previously, a `CREATE DATABASE ... IF NOT EXISTS` query could return the error message "File ... already exists", and the `CREATE TABLE ... IF NOT EXISTS` and `DROP TABLE IF EXISTS` queries could return `Table ... is creating or attaching right now`. [#3101](https://github.com/yandex/ClickHouse/pull/3101)
* `LIKE` and `IN` expressions with a constant right half are passed to the remote server when querying from MySQL or ODBC tables. [#3182](https://github.com/yandex/ClickHouse/pull/3182)
* Comparisons with constant expressions in a `WHERE` clause are passed to the remote server when querying from MySQL and ODBC tables. Previously, only comparisons with constants were passed. [#3182](https://github.com/yandex/ClickHouse/pull/3182)
* Correct calculation of row width in the terminal for `Pretty` formats, including strings with hieroglyphs. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3257)
* `ON CLUSTER` can be specified for `ALTER UPDATE` queries.
* Improved performance for reading data in `JSONEachRow` format. [#3332](https://github.com/yandex/ClickHouse/pull/3332)
* Added synonyms for the `LENGTH` and `CHARACTER_LENGTH` functions for compatibility. The `CONCAT` function is no longer case-sensitive. [#3306](https://github.com/yandex/ClickHouse/pull/3306)
* Added the `TIMESTAMP` synonym for the `DateTime` type. [#3390](https://github.com/yandex/ClickHouse/pull/3390)
* There is always space reserved for `query_id` in the server logs, even if the log line is not related to a query. This makes it easier to parse server text logs with third-party tools.
* Memory consumption by a query is logged when it exceeds the next level of an integer number of gigabytes. [#3205](https://github.com/yandex/ClickHouse/pull/3205)
* Added a compatibility mode for the case when a client library that uses the Native protocol mistakenly sends fewer columns than the server expects for an INSERT query. This scenario was possible when using the clickhouse-cpp library; previously, it caused the server to crash. [#3171](https://github.com/yandex/ClickHouse/pull/3171)
* In a user-defined `WHERE` expression in `clickhouse-copier`, you can now use a `partition_key` alias (for additional filtering by source table partition). This is useful if the partitioning scheme changes during copying but only changes slightly. [#3166](https://github.com/yandex/ClickHouse/pull/3166)
* The workflow of the `Kafka` engine has been moved to a background thread pool in order to automatically reduce the speed of data reading at high loads. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3215)
* Support for reading `Tuple` and `Nested` values of `struct`-like structures in the `Cap'n Proto` format. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3216)
* The list of top-level domains for the `firstSignificantSubdomain` function now includes the domain `biz`. [decaseal](https://github.com/yandex/ClickHouse/pull/3219)
* In the configuration of external dictionaries, `null_value` is interpreted as the default value of the data type. [#3330](https://github.com/yandex/ClickHouse/pull/3330)
* Support for the `intDiv` and `intDivOrZero` functions for `Decimal`. [b48402e8](https://github.com/yandex/ClickHouse/commit/b48402e8712e2b9b151e0eef8193811d433a1264)
* Support for the `Date`, `DateTime`, `UUID`, and `Decimal` types as a key for the `sumMap` aggregate function. [#3281](https://github.com/yandex/ClickHouse/pull/3281)
* Support for the `Decimal` data type in external dictionaries. [#3324](https://github.com/yandex/ClickHouse/pull/3324)
* Support for the `Decimal` data type in `SummingMergeTree` tables. [#3348](https://github.com/yandex/ClickHouse/pull/3348)
* Added specializations for `UUID` in `if`. [#3366](https://github.com/yandex/ClickHouse/pull/3366)
* Reduced the number of `open` and `close` system calls when reading from a `MergeTree` table. [#3283](https://github.com/yandex/ClickHouse/pull/3283)
* A `TRUNCATE TABLE` query can be executed on any replica (the query is passed to the leader replica). [Kirill Shvakov](https://github.com/yandex/ClickHouse/pull/3375)
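One item above logs a query's memory consumption only when it crosses the next whole number of gigabytes, which keeps the log quiet while memory grows smoothly. A hypothetical sketch of that thresholding idea (the class and message text here are invented for illustration, not the actual server code):

```python
GiB = 1 << 30

class MemoryTracker:
    """Record a log message only when usage crosses the next integer number of GiB."""

    def __init__(self):
        self.used = 0
        self.logged_gib = 0  # highest whole-GiB level already reported
        self.messages = []

    def allocate(self, nbytes):
        self.used += nbytes
        level = self.used // GiB
        if level > self.logged_gib:
            self.logged_gib = level
            self.messages.append(f"Query memory usage exceeded {level} GiB")

t = MemoryTracker()
t.allocate(int(0.7 * GiB))  # below 1 GiB: nothing logged
t.allocate(int(0.7 * GiB))  # crosses 1 GiB: one message
t.allocate(3 * GiB)         # jumps past 4 GiB: one more message
```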
### Bug fixes:

* Fixed an issue with `Dictionary` tables for `range_hashed` dictionaries. This error occurred in version 18.12.17. [#1702](https://github.com/yandex/ClickHouse/pull/1702)
* Fixed an error when loading `range_hashed` dictionaries (the message `Unsupported type Nullable (...)`). This error occurred in version 18.12.17. [#3362](https://github.com/yandex/ClickHouse/pull/3362)
* Fixed errors in the `pointInPolygon` function due to the accumulation of inaccurate calculations for polygons with a large number of vertices located close to each other. [#3331](https://github.com/yandex/ClickHouse/pull/3331) [#3341](https://github.com/yandex/ClickHouse/pull/3341)
* If, after merging data parts, the checksum for the resulting part differs from the result of the same merge on another replica, the result of the merge is deleted and the data part is downloaded from the other replica (this is the correct behavior). But after downloading the data part, it couldn't be added to the working set because of an error stating that the part already exists (because the data part was deleted with some delay after the merge). This led to cyclical attempts to download the same data. [#3194](https://github.com/yandex/ClickHouse/pull/3194)
* Fixed incorrect calculation of total memory consumption by queries (because of this, the `max_memory_usage_for_all_queries` setting worked incorrectly and the `MemoryTracking` metric had an incorrect value). This error occurred in version 18.12.13. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3344)
* Fixed the functionality of `CREATE TABLE ... ON CLUSTER ... AS SELECT ...`. This error occurred in version 18.12.13. [#3247](https://github.com/yandex/ClickHouse/pull/3247)
* Fixed unnecessary preparation of data structures for `JOIN`s on the server that initiates the query if the `JOIN` is only performed on remote servers. [#3340](https://github.com/yandex/ClickHouse/pull/3340)
* Fixed bugs in the `Kafka` engine: deadlocks after exceptions when starting to read data, and locks upon completion. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3215)
* For `Kafka` tables, the optional `schema` parameter was not passed (the schema of the `Cap'n Proto` format). [Vojtech Splichal](https://github.com/yandex/ClickHouse/pull/3150)
* If the ensemble of ZooKeeper servers has servers that accept the connection but then immediately close it instead of responding to the handshake, ClickHouse chooses to connect to another server. Previously, this produced the error `Cannot read all data. Bytes read: 0. Bytes expected: 4.` and the server couldn't start. [8218cf3a](https://github.com/yandex/ClickHouse/commit/8218cf3a5f39a43401953769d6d12a0bb8d29da9)
* If the ensemble of ZooKeeper servers contains servers for which the DNS query returns an error, these servers are ignored. [17b8e209](https://github.com/yandex/ClickHouse/commit/17b8e209221061325ad7ba0539f03c6e65f87f29)
* Fixed type conversion between `Date` and `DateTime` when inserting data in the `VALUES` format (if `input_format_values_interpret_expressions = 1`). Previously, the conversion was performed between the numerical value of the number of days since the Unix epoch and the Unix timestamp, which led to unexpected results. [#3229](https://github.com/yandex/ClickHouse/pull/3229)
* Corrected type conversion between `Decimal` and integer numbers. [#3211](https://github.com/yandex/ClickHouse/pull/3211)
* Fixed errors in the `enable_optimize_predicate_expression` setting. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3231)
* Fixed a parsing error in CSV format with floating-point numbers if a non-default CSV separator such as `;` is used. [#3155](https://github.com/yandex/ClickHouse/pull/3155)
* Fixed the `arrayCumSumNonNegative` function (it does not accumulate negative values if the accumulator is less than zero). [Aleksey Studnev](https://github.com/yandex/ClickHouse/pull/3163)
* Fixed how `Merge` tables work on top of `Distributed` tables when using `PREWHERE`. [#3165](https://github.com/yandex/ClickHouse/pull/3165)
* Bug fixes in the `ALTER UPDATE` query.
* Fixed bugs in the `odbc` table function that appeared in version 18.12. [#3197](https://github.com/yandex/ClickHouse/pull/3197)
* Fixed the operation of aggregate functions with `StateArray` combinators. [#3188](https://github.com/yandex/ClickHouse/pull/3188)
* Fixed a crash when dividing a `Decimal` value by zero. [69dd6609](https://github.com/yandex/ClickHouse/commit/69dd6609193beb4e7acd3e6ad216eca0ccfb8179)
* Fixed output of types for operations using `Decimal` and integer arguments. [#3224](https://github.com/yandex/ClickHouse/pull/3224)
* Fixed a segfault during `GROUP BY` on `Decimal128`. [3359ba06](https://github.com/yandex/ClickHouse/commit/3359ba06c39fcd05bfdb87d6c64154819621e13a)
* The `log_query_threads` setting (logging information about each thread of query execution) now takes effect only if the `log_queries` option (logging information about queries) is set to 1. Since the `log_query_threads` option is enabled by default, information about threads was previously logged even if query logging was disabled. [#3241](https://github.com/yandex/ClickHouse/pull/3241)
* Fixed an error in the distributed operation of the `quantiles` aggregate function (the error message `Not found column quantile...`). [292a8855](https://github.com/yandex/ClickHouse/commit/292a885533b8e3b41ce8993867069d14cbd5a664)
* Fixed a compatibility problem when working on a cluster of version 18.12.17 servers and older servers at the same time. For distributed queries with `GROUP BY` keys of both fixed and non-fixed length, if there was a large amount of data to aggregate, the returned data was not always fully aggregated (two different rows contained the same aggregation keys). [#3254](https://github.com/yandex/ClickHouse/pull/3254)
* Fixed handling of substitutions in `clickhouse-performance-test` if the query contains only part of the substitutions declared in the test. [#3263](https://github.com/yandex/ClickHouse/pull/3263)
* Fixed an error when using `FINAL` with `PREWHERE`. [#3298](https://github.com/yandex/ClickHouse/pull/3298)
* Fixed an error when using `PREWHERE` over columns that were added during `ALTER`. [#3298](https://github.com/yandex/ClickHouse/pull/3298)
* Added a check for the absence of `arrayJoin` in `DEFAULT` and `MATERIALIZED` expressions. Previously, `arrayJoin` led to an error when inserting data. [#3337](https://github.com/yandex/ClickHouse/pull/3337)
* Added a check for the absence of `arrayJoin` in a `PREWHERE` clause. Previously, this led to messages like `Size ... doesn't match` or `Unknown compression method` when executing queries. [#3357](https://github.com/yandex/ClickHouse/pull/3357)
* Fixed a segfault that could occur in rare cases after an optimization that replaced AND chains from equality evaluations with the corresponding IN expression. [liuyimin-bytedance](https://github.com/yandex/ClickHouse/pull/3339)
* Minor corrections to `clickhouse-benchmark`: previously, client information was not sent to the server; now the number of queries executed is calculated more accurately when shutting down and for limiting the number of iterations. [#3351](https://github.com/yandex/ClickHouse/pull/3351) [#3352](https://github.com/yandex/ClickHouse/pull/3352)
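The `Date`/`DateTime` conversion bug above comes down to units: a `Date` is stored as the number of days since the Unix epoch, while a `DateTime` is seconds since the epoch. Treating one count as the other lands you minutes after 1970 instead of in 2018, as this illustrative Python sketch shows (not ClickHouse code):

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
d = datetime(2018, 10, 16, tzinfo=timezone.utc)

days = (d - epoch).days                      # Date-style representation
seconds = int((d - epoch).total_seconds())   # DateTime-style representation

# Misinterpreting the day count as a second count gives a wildly wrong result:
wrong = epoch + timedelta(seconds=days)      # still in 1970, a few hours after the epoch
print(days, seconds, wrong.isoformat())
```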
### Backward incompatible changes:

* Removed the `allow_experimental_decimal_type` option. The `Decimal` data type is available for use by default. [#3329](https://github.com/yandex/ClickHouse/pull/3329)
## ClickHouse release 18.12.17, 2018-09-16

### New features:

* `invalidate_query` (the ability to specify a query to check whether an external dictionary needs to be updated) is implemented for the `clickhouse` source. [#3126](https://github.com/yandex/ClickHouse/pull/3126)
* Added the ability to use `UInt*`, `Int*`, and `DateTime` data types (along with the `Date` type) as a `range_hashed` external dictionary key that defines the boundaries of ranges. Now `NULL` can be used to designate an open range. [Vasily Nemkov](https://github.com/yandex/ClickHouse/pull/3123)
* The `Decimal` type now supports `var*` and `stddev*` aggregate functions. [#3129](https://github.com/yandex/ClickHouse/pull/3129)
* The `Decimal` type now supports mathematical functions (`exp`, `sin`, and so on). [#3129](https://github.com/yandex/ClickHouse/pull/3129)
* The `system.part_log` table now has the `partition_id` column. [#3089](https://github.com/yandex/ClickHouse/pull/3089)
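Open-ended ranges of the kind `range_hashed` dictionaries now accept (with `NULL` as a boundary) can be sketched in Python, using `None` to mean "unbounded on this side" (the function and data names here are illustrative, not the ClickHouse API):

```python
def in_range(key, lower, upper):
    """Inclusive range check where None means an unbounded side."""
    return (lower is None or lower <= key) and (upper is None or key <= upper)

# (lower, upper, attribute value); open bounds model "up to" and "from ... onwards".
ranges = [
    (None, 10, "low"),
    (11, 20, "mid"),
    (21, None, "high"),
]

def lookup(key):
    for lower, upper, value in ranges:
        if in_range(key, lower, upper):
            return value
    return None
```

A lookup walks the ranges and returns the attribute of the first range containing the key, e.g. `lookup(5)` falls into the open-bottomed first range.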
### Bug fixes:

* `Merge` now works correctly on `Distributed` tables. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3159)
* Fixed an incompatibility (an unnecessary dependency on the `glibc` version) that made it impossible to run ClickHouse on `Ubuntu Precise` and older versions. The incompatibility arose in version 18.12.13. [#3130](https://github.com/yandex/ClickHouse/pull/3130)
* Fixed errors in the `enable_optimize_predicate_expression` setting. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3107)
* Fixed a minor issue with backwards compatibility that appeared when working with a cluster of replicas on versions earlier than 18.12.13 and simultaneously creating a new replica of a table on a server with a newer version (shown in the message `Can not clone replica, because the ... updated to new ClickHouse version`, which is logical, but shouldn't happen). [#3122](https://github.com/yandex/ClickHouse/pull/3122)
### Backward incompatible changes:

* The `enable_optimize_predicate_expression` option is enabled by default (which is rather optimistic). If query analysis errors occur that are related to searching for column names, set `enable_optimize_predicate_expression` to 0. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3107)
## ClickHouse release 18.12.14, 2018-09-13

### New features:

* Added support for `ALTER UPDATE` queries. [#3035](https://github.com/yandex/ClickHouse/pull/3035)
* Added the `allow_ddl` option, which restricts the user's access to DDL queries. [#3104](https://github.com/yandex/ClickHouse/pull/3104)
* Added the `min_merge_bytes_to_use_direct_io` option for `MergeTree` engines, which allows you to set a threshold for the total size of the merge (when above the threshold, data part files will be handled using `O_DIRECT`). [#3117](https://github.com/yandex/ClickHouse/pull/3117)
* The `system.merges` system table now contains the `partition_id` column. [#3099](https://github.com/yandex/ClickHouse/pull/3099)

### Improvements:

* If a data part remains unchanged during mutation, it isn't downloaded by replicas. [#3103](https://github.com/yandex/ClickHouse/pull/3103)
* Autocomplete is available for names of settings when working with `clickhouse-client`. [#3106](https://github.com/yandex/ClickHouse/pull/3106)
### Bug fixes:

* Added a check for the sizes of arrays that are elements of `Nested` type fields when inserting. [#3118](https://github.com/yandex/ClickHouse/pull/3118)
* Fixed an error updating external dictionaries with the `ODBC` source and `hashed` storage. This error occurred in version 18.12.13.
* Fixed a crash when creating a temporary table from a query with an `IN` condition. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3098)
* Fixed an error in aggregate functions for arrays that can have `NULL` elements. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3097)

## ClickHouse release 18.12.13, 2018-09-10

### New features:
@@ -1,3 +1,11 @@
## ClickHouse release 18.14.12, 2018-11-02

### Bug fixes:

* Fixed an error when joining two unnamed subqueries. [#3505](https://github.com/yandex/ClickHouse/pull/3505)
* Fixed generation of an empty `WHERE` part in queries to external databases. [hotid](https://github.com/yandex/ClickHouse/pull/3477)
* Fixed use of an incorrect timeout setting in ODBC dictionaries. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3511)

## ClickHouse release 18.14.11, 2018-10-29

### Bug fixes:
@@ -2,14 +2,11 @@
 ClickHouse is an open-source column-oriented database management system that allows generating analytical data reports in real time.
 
+🎤🥂 **ClickHouse Meetup in [Amsterdam on November 15](https://events.yandex.com/events/meetings/15-11-2018/)** 🍰🔥🐻
+
 ## Useful Links
 
 * [Official website](https://clickhouse.yandex/) has quick high-level overview of ClickHouse on main page.
 * [Tutorial](https://clickhouse.yandex/tutorial.html) shows how to set up and query small ClickHouse cluster.
 * [Documentation](https://clickhouse.yandex/docs/en/) provides more in-depth information.
 * [Contacts](https://clickhouse.yandex/#contacts) can help to get your questions answered if there are any.
-
-## Upcoming Meetups
-
-* [Beijing on October 28](http://www.clickhouse.com.cn/topic/5ba0e3f99d28dfde2ddc62a1)
-* [Amsterdam on November 15](https://events.yandex.com/events/meetings/15-11-2018/)
@@ -265,6 +265,10 @@ if (NOT USE_INTERNAL_ZSTD_LIBRARY)
     target_include_directories (dbms SYSTEM BEFORE PRIVATE ${ZSTD_INCLUDE_DIR})
 endif ()
 
+if (USE_JEMALLOC)
+    target_include_directories (dbms SYSTEM BEFORE PRIVATE ${JEMALLOC_INCLUDE_DIR}) # used in Interpreters/AsynchronousMetrics.cpp
+endif ()
+
 target_include_directories (dbms PUBLIC ${DBMS_INCLUDE_DIR})
 target_include_directories (clickhouse_common_io PUBLIC ${DBMS_INCLUDE_DIR})
 target_include_directories (clickhouse_common_io SYSTEM PUBLIC ${PCG_RANDOM_INCLUDE_DIR})
@@ -121,6 +121,9 @@ void TCPHandler::runImpl()
 
     while (1)
     {
+        /// Restore context of request.
+        query_context = connection_context;
+
         /// We are waiting for a packet from the client. Thus, every `POLL_INTERVAL` seconds check whether we need to shut down.
         while (!static_cast<ReadBufferFromPocoSocket &>(*in).poll(global_settings.poll_interval * 1000000) && !server.isCancelled())
             ;
@@ -145,9 +148,6 @@ void TCPHandler::runImpl()
 
         try
         {
-            /// Restore context of request.
-            query_context = connection_context;
-
             /// If a user passed query-local timeouts, reset socket to initial state at the end of the query
             SCOPE_EXIT({state.timeout_setter.reset();});
 
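The `SCOPE_EXIT` macro in the hunk above registers a cleanup action that runs when the enclosing scope ends, whichever way it is left (normal return or exception). Python's `contextlib.ExitStack` gives a comparable guarantee; this is only an analogue for illustration, not the Poco/ClickHouse macro:

```python
from contextlib import ExitStack

log = []

def reset_timeouts():
    log.append("timeouts reset")

def handle_query():
    with ExitStack() as stack:
        stack.callback(reset_timeouts)  # runs on normal exit AND during exception unwinding
        log.append("query executed")
        raise RuntimeError("query failed")

try:
    handle_query()
except RuntimeError:
    pass
# The cleanup ran even though the query raised: log == ["query executed", "timeouts reset"]
```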
@@ -237,13 +237,13 @@ public:
 
         for (size_t i = 0; i < size; ++i)
         {
-            to_lower.insert((i == 0) ? lower_bound : (points[i].mean + points[i - 1].mean) / 2);
-            to_upper.insert((i + 1 == size) ? upper_bound : (points[i].mean + points[i + 1].mean) / 2);
+            to_lower.insertValue((i == 0) ? lower_bound : (points[i].mean + points[i - 1].mean) / 2);
+            to_upper.insertValue((i + 1 == size) ? upper_bound : (points[i].mean + points[i + 1].mean) / 2);
 
             // linear density approximation
             Weight lower_weight = (i == 0) ? points[i].weight : ((points[i - 1].weight) + points[i].weight * 3) / 4;
             Weight upper_weight = (i + 1 == size) ? points[i].weight : (points[i + 1].weight + points[i].weight * 3) / 4;
-            to_weights.insert((lower_weight + upper_weight) / 2);
+            to_weights.insertValue((lower_weight + upper_weight) / 2);
         }
     }
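The boundary arithmetic in the hunk above gives each histogram point a bucket whose edges are the midpoints to the neighboring means, with the global bounds used at the ends. A small Python sketch mirroring those formulas (illustrative only):

```python
def bucket_bounds(means, lower_bound, upper_bound):
    """Each point's lower/upper edge is the midpoint to the neighboring mean."""
    size = len(means)
    lowers = [lower_bound if i == 0 else (means[i] + means[i - 1]) / 2
              for i in range(size)]
    uppers = [upper_bound if i + 1 == size else (means[i] + means[i + 1]) / 2
              for i in range(size)]
    return lowers, uppers

lowers, uppers = bucket_bounds([1.0, 3.0, 7.0], 0.0, 10.0)
# Buckets tile the range: each upper edge equals the next point's lower edge.
```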
@@ -1,6 +1,5 @@
 #include <Parsers/ASTSelectQuery.h>
 #include <Parsers/ASTTablesInSelectQuery.h>
-#include <Parsers/ASTIdentifier.h>
 #include <Parsers/ASTFunction.h>
 #include <TableFunctions/ITableFunction.h>
 #include <TableFunctions/TableFunctionFactory.h>
@@ -527,7 +527,7 @@ void ColumnLowCardinality::Index::insertPosition(UInt64 position)
     while (position > getMaxPositionForCurrentType())
         expandType();
 
-    positions->assumeMutableRef().insert(UInt64(position));
+    positions->assumeMutableRef().insert(position);
     checkSizeOfType();
 }
@@ -117,7 +117,7 @@ public:
     void getExtremes(Field & min, Field & max) const override
     {
-        return getDictionary().index(getIndexes(), 0)->getExtremes(min, max); /// TODO: optimize
+        return dictionary.getColumnUnique().getNestedColumn()->index(getIndexes(), 0)->getExtremes(min, max); /// TODO: optimize
     }
 
     void reserve(size_t n) override { idx.reserve(n); }
@@ -353,8 +353,8 @@ void getExtremesFromNullableContent(const ColumnVector<T> & col, const NullMap &
     if (has_not_null)
     {
-        min = typename NearestFieldType<T>::Type(cur_min);
-        max = typename NearestFieldType<T>::Type(cur_max);
+        min = cur_min;
+        max = cur_max;
     }
 }
@@ -62,20 +62,13 @@ public:
     UInt64 getUInt(size_t n) const override { return getNestedColumn()->getUInt(n); }
     Int64 getInt(size_t n) const override { return getNestedColumn()->getInt(n); }
     bool isNullAt(size_t n) const override { return is_nullable && n == getNullValueIndex(); }
-    StringRef serializeValueIntoArena(size_t n, Arena & arena, char const *& begin) const override
-    {
-        return column_holder->serializeValueIntoArena(n, arena, begin);
-    }
+    StringRef serializeValueIntoArena(size_t n, Arena & arena, char const *& begin) const override;
     void updateHashWithValue(size_t n, SipHash & hash) const override
     {
         return getNestedColumn()->updateHashWithValue(n, hash);
     }
 
-    int compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const override
-    {
-        auto & column_unique = static_cast<const IColumnUnique &>(rhs);
-        return getNestedColumn()->compareAt(n, m, *column_unique.getNestedColumn(), nan_direction_hint);
-    }
+    int compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const override;
 
     void getExtremes(Field & min, Field & max) const override { column_holder->getExtremes(min, max); }
     bool valuesHaveFixedSize() const override { return column_holder->valuesHaveFixedSize(); }
@@ -298,9 +291,44 @@ size_t ColumnUnique<ColumnType>::uniqueInsertDataWithTerminatingZero(const char
     return static_cast<size_t>(position);
 }
 
+template <typename ColumnType>
+StringRef ColumnUnique<ColumnType>::serializeValueIntoArena(size_t n, Arena & arena, char const *& begin) const
+{
+    if (is_nullable)
+    {
+        const UInt8 null_flag = 1;
+        const UInt8 not_null_flag = 0;
+
+        auto pos = arena.allocContinue(sizeof(null_flag), begin);
+        auto & flag = (n == getNullValueIndex() ? null_flag : not_null_flag);
+        memcpy(pos, &flag, sizeof(flag));
+
+        size_t nested_size = 0;
+
+        /// Only a non-null value has a nested payload after the flag byte
+        /// (the deserializer below returns immediately when the null flag is set).
+        if (n != getNullValueIndex())
+            nested_size = column_holder->serializeValueIntoArena(n, arena, begin).size;
+
+        return StringRef(pos, sizeof(null_flag) + nested_size);
+    }
+
+    return column_holder->serializeValueIntoArena(n, arena, begin);
+}
+
 template <typename ColumnType>
 size_t ColumnUnique<ColumnType>::uniqueDeserializeAndInsertFromArena(const char * pos, const char *& new_pos)
 {
+    if (is_nullable)
+    {
+        UInt8 val = *reinterpret_cast<const UInt8 *>(pos);
+        pos += sizeof(val);
+
+        if (val)
+        {
+            new_pos = pos;
+            return getNullValueIndex();
+        }
+    }
+
     auto column = getRawColumnPtr();
     size_t prev_size = column->size();
     new_pos = column->deserializeAndInsertFromArena(pos);
||||||
@@ -318,6 +346,28 @@ size_t ColumnUnique<ColumnType>::uniqueDeserializeAndInsertFromArena(const char
     return static_cast<size_t>(index_pos);
 }
 
+template <typename ColumnType>
+int ColumnUnique<ColumnType>::compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const
+{
+    if (is_nullable)
+    {
+        /// See ColumnNullable::compareAt
+        bool lval_is_null = n == getNullValueIndex();
+        bool rval_is_null = m == getNullValueIndex();
+
+        if (unlikely(lval_is_null || rval_is_null))
+        {
+            if (lval_is_null && rval_is_null)
+                return 0;
+            else
+                return lval_is_null ? nan_direction_hint : -nan_direction_hint;
+        }
+    }
+
+    auto & column_unique = static_cast<const IColumnUnique &>(rhs);
+    return getNestedColumn()->compareAt(n, m, *column_unique.getNestedColumn(), nan_direction_hint);
+}
+
 template <typename IndexType>
 static void checkIndexes(const ColumnVector<IndexType> & indexes, size_t max_dictionary_size)
 {
@@ -279,8 +279,8 @@ void ColumnVector<T>::getExtremes(Field & min, Field & max) const
 
     if (size == 0)
     {
-        min = typename NearestFieldType<T>::Type(0);
-        max = typename NearestFieldType<T>::Type(0);
+        min = T(0);
+        max = T(0);
         return;
     }
 
@@ -193,7 +193,7 @@ public:
         return data.allocated_bytes();
     }
 
-    void insert(const T value)
+    void insertValue(const T value)
     {
         data.push_back(value);
     }
@@ -217,7 +217,7 @@ public:
 
     Field operator[](size_t n) const override
     {
-        return typename NearestFieldType<T>::Type(data[n]);
+        return data[n];
     }
 
     void get(size_t n, Field & res) const override
@@ -11,6 +11,11 @@
 #include <Poco/Logger.h>
 
 
+#if defined(ARCADIA_ROOT)
+#   include <util/thread/singleton.h>
+#endif
+
+
 namespace DB
 {
 
@@ -21,10 +26,25 @@ namespace ErrorCodes
 
 SimpleObjectPool<TaskStatsInfoGetter> task_stats_info_getter_pool;
 
+// Smoker's implementation to avoid thread_local usage: error: undefined symbol: __cxa_thread_atexit
+#if defined(ARCADIA_ROOT)
+struct ThreadStatusPtrHolder : ThreadStatusPtr
+{
+    ThreadStatusPtrHolder() { ThreadStatusPtr::operator=(ThreadStatus::create()); }
+};
+struct ThreadScopePtrHolder : CurrentThread::ThreadScopePtr
+{
+    ThreadScopePtrHolder() { CurrentThread::ThreadScopePtr::operator=(std::make_shared<CurrentThread::ThreadScope>()); }
+};
+#   define current_thread (*FastTlsSingleton<ThreadStatusPtrHolder>())
+#   define current_thread_scope (*FastTlsSingleton<ThreadScopePtrHolder>())
+#else
 /// Order of current_thread and current_thread_scope matters
-thread_local ThreadStatusPtr current_thread = ThreadStatus::create();
-thread_local CurrentThread::ThreadScopePtr current_thread_scope = std::make_shared<CurrentThread::ThreadScope>();
+thread_local ThreadStatusPtr _current_thread = ThreadStatus::create();
+thread_local CurrentThread::ThreadScopePtr _current_thread_scope = std::make_shared<CurrentThread::ThreadScope>();
+
+#   define current_thread _current_thread
+#   define current_thread_scope _current_thread_scope
+#endif
 
 void CurrentThread::updatePerformanceCounters()
 {
@@ -420,7 +420,7 @@ protected:
     void destroyElements()
     {
         if (!std::is_trivially_destructible_v<Cell>)
-            for (iterator it = begin(); it != end(); ++it)
+            for (iterator it = begin(), it_end = end(); it != it_end; ++it)
                 it.ptr->~Cell();
     }
 
@@ -445,12 +445,15 @@ protected:
 
         Derived & operator++()
         {
+            /// If iterator was pointed to ZeroValueStorage, move it to the beginning of the main buffer.
             if (unlikely(ptr->isZero(*container)))
                 ptr = container->buf;
             else
                 ++ptr;
 
-            while (ptr < container->buf + container->grower.bufSize() && ptr->isZero(*container))
+            /// Skip empty cells in the main buffer.
+            auto buf_end = container->buf + container->grower.bufSize();
+            while (ptr < buf_end && ptr->isZero(*container))
                 ++ptr;
 
             return static_cast<Derived &>(*this);
@@ -569,12 +572,15 @@ public:
            return iteratorToZero();
 
         const Cell * ptr = buf;
-        while (ptr < buf + grower.bufSize() && ptr->isZero(*this))
+        auto buf_end = buf + grower.bufSize();
+        while (ptr < buf_end && ptr->isZero(*this))
             ++ptr;
 
         return const_iterator(this, ptr);
     }
 
+    const_iterator cbegin() const { return begin(); }
+
     iterator begin()
     {
         if (!buf)
@@ -584,13 +590,15 @@ public:
            return iteratorToZero();
 
         Cell * ptr = buf;
-        while (ptr < buf + grower.bufSize() && ptr->isZero(*this))
+        auto buf_end = buf + grower.bufSize();
+        while (ptr < buf_end && ptr->isZero(*this))
             ++ptr;
 
         return iterator(this, ptr);
     }
 
     const_iterator end() const { return const_iterator(this, buf + grower.bufSize()); }
+    const_iterator cend() const { return end(); }
     iterator end() { return iterator(this, buf + grower.bufSize()); }
 
 
@@ -811,9 +819,9 @@ public:
         if (this->hasZero())
             this->zeroValue()->write(wb);
 
-        for (size_t i = 0; i < grower.bufSize(); ++i)
-            if (!buf[i].isZero(*this))
-                buf[i].write(wb);
+        for (auto ptr = buf, buf_end = buf + grower.bufSize(); ptr < buf_end; ++ptr)
+            if (!ptr->isZero(*this))
+                ptr->write(wb);
     }
 
     void writeText(DB::WriteBuffer & wb) const
@@ -827,12 +835,12 @@ public:
             this->zeroValue()->writeText(wb);
         }
 
-        for (size_t i = 0; i < grower.bufSize(); ++i)
+        for (auto ptr = buf, buf_end = buf + grower.bufSize(); ptr < buf_end; ++ptr)
         {
-            if (!buf[i].isZero(*this))
+            if (!ptr->isZero(*this))
             {
                 DB::writeChar(',', wb);
-                buf[i].writeText(wb);
+                ptr->writeText(wb);
             }
         }
     }
@@ -395,38 +395,17 @@ void ZooKeeper::read(T & x)
 }
 
 
-struct ZooKeeperResponse;
-using ZooKeeperResponsePtr = std::shared_ptr<ZooKeeperResponse>;
-
-
-struct ZooKeeperRequest : virtual Request
+void ZooKeeperRequest::write(WriteBuffer & out) const
 {
-    ZooKeeper::XID xid = 0;
-    bool has_watch = false;
-    /// If the request was not send and the error happens, we definitely sure, that is has not been processed by the server.
-    /// If the request was sent and we didn't get the response and the error happens, then we cannot be sure was it processed or not.
-    bool probably_sent = false;
-
-    virtual ~ZooKeeperRequest() {}
-
-    virtual ZooKeeper::OpNum getOpNum() const = 0;
-
-    /// Writes length, xid, op_num, then the rest.
-    void write(WriteBuffer & out) const
-    {
-        /// Excessive copy to calculate length.
-        WriteBufferFromOwnString buf;
-        Coordination::write(xid, buf);
-        Coordination::write(getOpNum(), buf);
-        writeImpl(buf);
-        Coordination::write(buf.str(), out);
-        out.next();
-    }
-
-    virtual void writeImpl(WriteBuffer &) const = 0;
-
-    virtual ZooKeeperResponsePtr makeResponse() const = 0;
-};
-
+    /// Excessive copy to calculate length.
+    WriteBufferFromOwnString buf;
+    Coordination::write(xid, buf);
+    Coordination::write(getOpNum(), buf);
+    writeImpl(buf);
+    Coordination::write(buf.str(), out);
+    out.next();
+}
 
 
 struct ZooKeeperResponse : virtual Response
 {
@@ -240,4 +240,29 @@ private:
     CurrentMetrics::Increment active_session_metric_increment{CurrentMetrics::ZooKeeperSession};
 };
 
+struct ZooKeeperResponse;
+using ZooKeeperResponsePtr = std::shared_ptr<ZooKeeperResponse>;
+
+/// Exposed in header file for Yandex.Metrica code.
+struct ZooKeeperRequest : virtual Request
+{
+    ZooKeeper::XID xid = 0;
+    bool has_watch = false;
+    /// If the request was not sent and an error happens, we are sure that it has not been processed by the server.
+    /// If the request was sent and we didn't get the response and an error happens, we cannot be sure whether it was processed or not.
+    bool probably_sent = false;
+
+    virtual ~ZooKeeperRequest() {}
+
+    virtual ZooKeeper::OpNum getOpNum() const = 0;
+
+    /// Writes length, xid, op_num, then the rest.
+    void write(WriteBuffer & out) const;
+
+    virtual void writeImpl(WriteBuffer &) const = 0;
+
+    virtual ZooKeeperResponsePtr makeResponse() const = 0;
+};
+
+
 }
@@ -9,6 +9,8 @@
 #include <Common/UInt128.h>
 #include <Core/Types.h>
 #include <Core/Defines.h>
+#include <Core/UUID.h>
+#include <common/DayNum.h>
 #include <common/strong_typedef.h>
 
 
@@ -181,10 +183,7 @@ public:
     }
 
     template <typename T>
-    Field(T && rhs, std::integral_constant<int, Field::TypeToEnum<std::decay_t<T>>::value> * = nullptr)
-    {
-        createConcrete(std::forward<T>(rhs));
-    }
+    Field(T && rhs, std::enable_if_t<!std::is_same_v<std::decay_t<T>, Field>, void *> = nullptr);
 
     /// Create a string inplace.
     Field(const char * data, size_t size)
@@ -242,18 +241,7 @@ public:
 
     template <typename T>
     std::enable_if_t<!std::is_same_v<std::decay_t<T>, Field>, Field &>
-    operator= (T && rhs)
-    {
-        if (which != TypeToEnum<std::decay_t<T>>::value)
-        {
-            destroy();
-            createConcrete(std::forward<T>(rhs));
-        }
-        else
-            assignConcrete(std::forward<T>(rhs));
-
-        return *this;
-    }
+    operator= (T && rhs);
 
     ~Field()
     {
@@ -596,7 +584,13 @@ template <> struct NearestFieldType<UInt8> { using Type = UInt64; };
 template <> struct NearestFieldType<UInt16> { using Type = UInt64; };
 template <> struct NearestFieldType<UInt32> { using Type = UInt64; };
 template <> struct NearestFieldType<UInt64> { using Type = UInt64; };
+#ifdef __APPLE__
+template <> struct NearestFieldType<time_t> { using Type = UInt64; };
+template <> struct NearestFieldType<size_t> { using Type = UInt64; };
+#endif
+template <> struct NearestFieldType<DayNum> { using Type = UInt64; };
 template <> struct NearestFieldType<UInt128> { using Type = UInt128; };
+template <> struct NearestFieldType<UUID> { using Type = UInt128; };
 template <> struct NearestFieldType<Int8> { using Type = Int64; };
 template <> struct NearestFieldType<Int16> { using Type = Int64; };
 template <> struct NearestFieldType<Int32> { using Type = Int64; };
@@ -605,19 +599,57 @@ template <> struct NearestFieldType<Int128> { using Type = Int128; };
 template <> struct NearestFieldType<Decimal32> { using Type = DecimalField<Decimal32>; };
 template <> struct NearestFieldType<Decimal64> { using Type = DecimalField<Decimal64>; };
 template <> struct NearestFieldType<Decimal128> { using Type = DecimalField<Decimal128>; };
+template <> struct NearestFieldType<DecimalField<Decimal32>> { using Type = DecimalField<Decimal32>; };
+template <> struct NearestFieldType<DecimalField<Decimal64>> { using Type = DecimalField<Decimal64>; };
+template <> struct NearestFieldType<DecimalField<Decimal128>> { using Type = DecimalField<Decimal128>; };
 template <> struct NearestFieldType<Float32> { using Type = Float64; };
 template <> struct NearestFieldType<Float64> { using Type = Float64; };
+template <> struct NearestFieldType<const char*> { using Type = String; };
 template <> struct NearestFieldType<String> { using Type = String; };
 template <> struct NearestFieldType<Array> { using Type = Array; };
 template <> struct NearestFieldType<Tuple> { using Type = Tuple; };
 template <> struct NearestFieldType<bool> { using Type = UInt64; };
 template <> struct NearestFieldType<Null> { using Type = Null; };
 
+template <typename T>
+decltype(auto) nearestFieldType(T && x)
+{
+    using U = typename NearestFieldType<std::decay_t<T>>::Type;
+    if constexpr (std::is_same_v<std::decay_t<T>, U>)
+        return std::forward<T>(x);
+    else
+        return U(x);
+}
+
+/// This (rather tricky) code is to avoid ambiguity in expressions like
+///     Field f = 1;
+/// instead of
+///     Field f = Int64(1);
+/// Things to note:
+/// 1. float <--> int needs explicit cast
+/// 2. customized types needs explicit cast
+template <typename T>
+Field::Field(T && rhs, std::enable_if_t<!std::is_same_v<std::decay_t<T>, Field>, void *>)
+{
+    auto && val = nearestFieldType(std::forward<T>(rhs));
+    createConcrete(std::forward<decltype(val)>(val));
+}
+
 template <typename T>
-typename NearestFieldType<T>::Type nearestFieldType(const T & x)
+std::enable_if_t<!std::is_same_v<std::decay_t<T>, Field>, Field &>
+Field::operator= (T && rhs)
 {
-    return typename NearestFieldType<T>::Type(x);
+    auto && val = nearestFieldType(std::forward<T>(rhs));
+    using U = decltype(val);
+    if (which != TypeToEnum<std::decay_t<U>>::value)
+    {
+        destroy();
+        createConcrete(std::forward<U>(val));
+    }
+    else
+        assignConcrete(std::forward<U>(val));
+
+    return *this;
+}
 
 
@@ -39,7 +39,7 @@ FilterBlockInputStream::FilterBlockInputStream(const BlockInputStreamPtr & input
     {
         /// Replace the filter column to a constant with value 1.
         FilterDescription filter_description_check(*column_elem.column);
-        column_elem.column = column_elem.type->createColumnConst(header.rows(), UInt64(1));
+        column_elem.column = column_elem.type->createColumnConst(header.rows(), 1u);
     }
 
     if (remove_filter)
@@ -144,7 +144,7 @@ Block FilterBlockInputStream::readImpl()
         if (filtered_rows == filter_and_holder.data->size())
         {
             /// Replace the column with the filter by a constant.
-            res.safeGetByPosition(filter_column).column = res.safeGetByPosition(filter_column).type->createColumnConst(filtered_rows, UInt64(1));
+            res.safeGetByPosition(filter_column).column = res.safeGetByPosition(filter_column).type->createColumnConst(filtered_rows, 1u);
             /// No need to touch the rest of the columns.
             return removeFilterIfNeed(std::move(res));
         }
@@ -161,7 +161,7 @@ Block FilterBlockInputStream::readImpl()
             /// Example:
             ///     SELECT materialize(100) AS x WHERE x
             /// will work incorrectly.
-            current_column.column = current_column.type->createColumnConst(filtered_rows, UInt64(1));
+            current_column.column = current_column.type->createColumnConst(filtered_rows, 1u);
             continue;
         }
 
@@ -251,7 +251,7 @@ void GraphiteRollupSortedBlockInputStream::startNextGroup(MutableColumns & merge
 void GraphiteRollupSortedBlockInputStream::finishCurrentGroup(MutableColumns & merged_columns)
 {
     /// Insert calculated values of the columns `time`, `value`, `version`.
-    merged_columns[time_column_num]->insert(UInt64(current_time_rounded));
+    merged_columns[time_column_num]->insert(current_time_rounded);
     merged_columns[version_column_num]->insertFrom(
         *(*current_subgroup_newest_row.columns)[version_column_num], current_subgroup_newest_row.row_num);
 
@@ -225,7 +225,7 @@ void DataTypeEnum<Type>::deserializeBinaryBulk(
 template <typename Type>
 Field DataTypeEnum<Type>::getDefault() const
 {
-    return typename NearestFieldType<FieldType>::Type(values.front().second);
+    return values.front().second;
 }
 
 template <typename Type>
|
|||||||
{
|
{
|
||||||
if (value_or_name.getType() == Field::Types::String)
|
if (value_or_name.getType() == Field::Types::String)
|
||||||
{
|
{
|
||||||
return static_cast<Int64>(getValue(value_or_name.get<String>()));
|
return getValue(value_or_name.get<String>());
|
||||||
}
|
}
|
||||||
else if (value_or_name.getType() == Field::Types::Int64
|
else if (value_or_name.getType() == Field::Types::Int64
|
||||||
|| value_or_name.getType() == Field::Types::UInt64)
|
|| value_or_name.getType() == Field::Types::UInt64)
|
||||||
|
@@ -464,7 +464,7 @@ ColumnPtr DictionaryBlockInputStream<DictionaryType, Key>::getColumnFromIds(cons
     auto column_vector = ColumnVector<UInt64>::create();
     column_vector->getData().reserve(ids_to_fill.size());
     for (UInt64 id : ids_to_fill)
-        column_vector->insert(id);
+        column_vector->insertValue(id);
     return column_vector;
 }
 
@@ -156,7 +156,7 @@ DictionarySourcePtr DictionarySourceFactory::create(
     {
 #if USE_POCO_SQLODBC || USE_POCO_DATAODBC
         const auto & global_config = context.getConfigRef();
-        BridgeHelperPtr bridge = std::make_shared<XDBCBridgeHelper<ODBCBridgeMixin>>(global_config, context.getSettings().http_connection_timeout, config.getString(config_prefix + ".odbc.connection_string"));
+        BridgeHelperPtr bridge = std::make_shared<XDBCBridgeHelper<ODBCBridgeMixin>>(global_config, context.getSettings().http_receive_timeout, config.getString(config_prefix + ".odbc.connection_string"));
         return std::make_unique<XDBCDictionarySource>(dict_struct, config, config_prefix + ".odbc", sample_block, context, bridge);
 #else
         throw Exception{"Dictionary source of type `odbc` is disabled because poco library was built without ODBC support.",
@@ -167,7 +167,7 @@ DictionarySourcePtr DictionarySourceFactory::create(
     {
         throw Exception{"Dictionary source of type `jdbc` is disabled until consistent support for nullable fields.",
             ErrorCodes::SUPPORT_IS_DISABLED};
-        // BridgeHelperPtr bridge = std::make_shared<XDBCBridgeHelper<JDBCBridgeMixin>>(config, context.getSettings().http_connection_timeout, config.getString(config_prefix + ".connection_string"));
+        // BridgeHelperPtr bridge = std::make_shared<XDBCBridgeHelper<JDBCBridgeMixin>>(config, context.getSettings().http_receive_timeout, config.getString(config_prefix + ".connection_string"));
         // return std::make_unique<XDBCDictionarySource>(dict_struct, config, config_prefix + ".jdbc", sample_block, context, bridge);
     }
     else if ("executable" == source_type)
@@ -42,19 +42,19 @@ namespace
     {
         switch (type)
         {
-            case ValueType::UInt8: static_cast<ColumnUInt8 &>(column).insert(value.getUInt()); break;
-            case ValueType::UInt16: static_cast<ColumnUInt16 &>(column).insert(value.getUInt()); break;
-            case ValueType::UInt32: static_cast<ColumnUInt32 &>(column).insert(value.getUInt()); break;
-            case ValueType::UInt64: static_cast<ColumnUInt64 &>(column).insert(value.getUInt()); break;
-            case ValueType::Int8: static_cast<ColumnInt8 &>(column).insert(value.getInt()); break;
-            case ValueType::Int16: static_cast<ColumnInt16 &>(column).insert(value.getInt()); break;
-            case ValueType::Int32: static_cast<ColumnInt32 &>(column).insert(value.getInt()); break;
-            case ValueType::Int64: static_cast<ColumnInt64 &>(column).insert(value.getInt()); break;
-            case ValueType::Float32: static_cast<ColumnFloat32 &>(column).insert(value.getDouble()); break;
-            case ValueType::Float64: static_cast<ColumnFloat64 &>(column).insert(value.getDouble()); break;
+            case ValueType::UInt8: static_cast<ColumnUInt8 &>(column).insertValue(value.getUInt()); break;
+            case ValueType::UInt16: static_cast<ColumnUInt16 &>(column).insertValue(value.getUInt()); break;
+            case ValueType::UInt32: static_cast<ColumnUInt32 &>(column).insertValue(value.getUInt()); break;
+            case ValueType::UInt64: static_cast<ColumnUInt64 &>(column).insertValue(value.getUInt()); break;
+            case ValueType::Int8: static_cast<ColumnInt8 &>(column).insertValue(value.getInt()); break;
+            case ValueType::Int16: static_cast<ColumnInt16 &>(column).insertValue(value.getInt()); break;
+            case ValueType::Int32: static_cast<ColumnInt32 &>(column).insertValue(value.getInt()); break;
+            case ValueType::Int64: static_cast<ColumnInt64 &>(column).insertValue(value.getInt()); break;
+            case ValueType::Float32: static_cast<ColumnFloat32 &>(column).insertValue(value.getDouble()); break;
+            case ValueType::Float64: static_cast<ColumnFloat64 &>(column).insertValue(value.getDouble()); break;
             case ValueType::String: static_cast<ColumnString &>(column).insertData(value.data(), value.size()); break;
-            case ValueType::Date: static_cast<ColumnUInt16 &>(column).insert(UInt16{value.getDate().getDayNum()}); break;
-            case ValueType::DateTime: static_cast<ColumnUInt32 &>(column).insert(time_t{value.getDateTime()}); break;
+            case ValueType::Date: static_cast<ColumnUInt16 &>(column).insertValue(UInt16(value.getDate().getDayNum())); break;
+            case ValueType::DateTime: static_cast<ColumnUInt32 &>(column).insertValue(UInt32(value.getDateTime())); break;
             case ValueType::UUID: static_cast<ColumnUInt128 &>(column).insert(parse<UUID>(value.data(), value.size())); break;
         }
     }
@@ -48,19 +48,19 @@ namespace
     {
         switch (type)
         {
-            case ValueType::UInt8: static_cast<ColumnUInt8 &>(column).insert(value.convert<UInt64>()); break;
-            case ValueType::UInt16: static_cast<ColumnUInt16 &>(column).insert(value.convert<UInt64>()); break;
-            case ValueType::UInt32: static_cast<ColumnUInt32 &>(column).insert(value.convert<UInt64>()); break;
-            case ValueType::UInt64: static_cast<ColumnUInt64 &>(column).insert(value.convert<UInt64>()); break;
-            case ValueType::Int8: static_cast<ColumnInt8 &>(column).insert(value.convert<Int64>()); break;
-            case ValueType::Int16: static_cast<ColumnInt16 &>(column).insert(value.convert<Int64>()); break;
-            case ValueType::Int32: static_cast<ColumnInt32 &>(column).insert(value.convert<Int64>()); break;
-            case ValueType::Int64: static_cast<ColumnInt64 &>(column).insert(value.convert<Int64>()); break;
-            case ValueType::Float32: static_cast<ColumnFloat32 &>(column).insert(value.convert<Float64>()); break;
-            case ValueType::Float64: static_cast<ColumnFloat64 &>(column).insert(value.convert<Float64>()); break;
+            case ValueType::UInt8: static_cast<ColumnUInt8 &>(column).insertValue(value.convert<UInt64>()); break;
+            case ValueType::UInt16: static_cast<ColumnUInt16 &>(column).insertValue(value.convert<UInt64>()); break;
+            case ValueType::UInt32: static_cast<ColumnUInt32 &>(column).insertValue(value.convert<UInt64>()); break;
+            case ValueType::UInt64: static_cast<ColumnUInt64 &>(column).insertValue(value.convert<UInt64>()); break;
+            case ValueType::Int8: static_cast<ColumnInt8 &>(column).insertValue(value.convert<Int64>()); break;
+            case ValueType::Int16: static_cast<ColumnInt16 &>(column).insertValue(value.convert<Int64>()); break;
+            case ValueType::Int32: static_cast<ColumnInt32 &>(column).insertValue(value.convert<Int64>()); break;
+            case ValueType::Int64: static_cast<ColumnInt64 &>(column).insertValue(value.convert<Int64>()); break;
+            case ValueType::Float32: static_cast<ColumnFloat32 &>(column).insertValue(value.convert<Float64>()); break;
+            case ValueType::Float64: static_cast<ColumnFloat64 &>(column).insertValue(value.convert<Float64>()); break;
             case ValueType::String: static_cast<ColumnString &>(column).insert(value.convert<String>()); break;
-            case ValueType::Date: static_cast<ColumnUInt16 &>(column).insert(UInt16{LocalDate{value.convert<String>()}.getDayNum()}); break;
-            case ValueType::DateTime: static_cast<ColumnUInt32 &>(column).insert(time_t{LocalDateTime{value.convert<String>()}}); break;
+            case ValueType::Date: static_cast<ColumnUInt16 &>(column).insertValue(UInt16{LocalDate{value.convert<String>()}.getDayNum()}); break;
+            case ValueType::DateTime: static_cast<ColumnUInt32 &>(column).insertValue(time_t{LocalDateTime{value.convert<String>()}}); break;
             case ValueType::UUID: static_cast<ColumnUInt128 &>(column).insert(parse<UUID>(value.convert<std::string>())); break;
 
         }
     }
@@ -141,7 +141,7 @@ ColumnPtr RangeDictionaryBlockInputStream<DictionaryType, RangeType, Key>::getCo
     auto column_vector = ColumnVector<T>::create();
     column_vector->getData().reserve(array.size());
     for (T value : array)
-        column_vector->insert(value);
+        column_vector->insertValue(value);
     return column_vector;
 }
 
@@ -619,12 +619,12 @@ Columns TrieDictionary::getKeyColumns() const
 #if defined(__SIZEOF_INT128__)
     auto getter = [& ip_column, & mask_column](__uint128_t ip, size_t mask)
     {
-        UInt64 * ip_array = reinterpret_cast<UInt64 *>(&ip);
+        Poco::UInt64 * ip_array = reinterpret_cast<Poco::UInt64 *>(&ip); // Poco:: for old poco + macos
        ip_array[0] = Poco::ByteOrder::fromNetwork(ip_array[0]);
        ip_array[1] = Poco::ByteOrder::fromNetwork(ip_array[1]);
        std::swap(ip_array[0], ip_array[1]);
        ip_column->insertData(reinterpret_cast<const char *>(ip_array), IPV6_BINARY_LENGTH);
-        mask_column->insert(static_cast<UInt8>(mask));
+        mask_column->insertValue(static_cast<UInt8>(mask));
     };
 
     trieTraverse<decltype(getter), __uint128_t>(trie, std::move(getter));
@@ -46,13 +46,13 @@ Field convertNodeToField(capnp::DynamicValue::Reader value)
         case capnp::DynamicValue::VOID:
             return Field();
         case capnp::DynamicValue::BOOL:
-            return UInt64(value.as<bool>() ? 1 : 0);
+            return value.as<bool>() ? 1u : 0u;
         case capnp::DynamicValue::INT:
-            return Int64((value.as<int64_t>()));
+            return value.as<int64_t>();
         case capnp::DynamicValue::UINT:
-            return UInt64(value.as<uint64_t>());
+            return value.as<uint64_t>();
         case capnp::DynamicValue::FLOAT:
-            return Float64(value.as<double>());
+            return value.as<double>();
         case capnp::DynamicValue::TEXT:
         {
             auto arr = value.as<capnp::Text>();
@@ -73,7 +73,7 @@ Field convertNodeToField(capnp::DynamicValue::Reader value)
             return res;
         }
         case capnp::DynamicValue::ENUM:
-            return UInt64(value.as<capnp::DynamicEnum>().getRaw());
+            return value.as<capnp::DynamicEnum>().getRaw();
         case capnp::DynamicValue::STRUCT:
         {
             auto structValue = value.as<capnp::DynamicStruct>();
@@ -2,7 +2,6 @@
 #include <Interpreters/evaluateConstantExpression.h>
 #include <Interpreters/Context.h>
 #include <Interpreters/convertFieldToType.h>
-#include <DataTypes/DataTypeArray.h>
 #include <Parsers/TokenIterator.h>
 #include <Parsers/ExpressionListParsers.h>
 #include <Formats/ValuesRowInputStream.h>
@@ -30,20 +29,6 @@ namespace ErrorCodes
 }
 
 
-bool is_array_type_compatible(const DataTypeArray & type, const Field & value)
-{
-    if (type.getNestedType()->isNullable())
-        return true;
-
-    const Array & array = DB::get<const Array &>(value);
-    size_t size = array.size();
-    for (size_t i = 0; i < size; ++i)
-        if (array[i].isNull())
-            return false;
-
-    return true;
-}
-
 ValuesRowInputStream::ValuesRowInputStream(ReadBuffer & istr_, const Block & header_, const Context & context_, const FormatSettings & format_settings)
     : istr(istr_), header(header_), context(std::make_unique<Context>(context_)), format_settings(format_settings)
 {
@@ -131,15 +116,14 @@ bool ValuesRowInputStream::read(MutableColumns & columns)
                 std::pair<Field, DataTypePtr> value_raw = evaluateConstantExpression(ast, *context);
                 Field value = convertFieldToType(value_raw.first, type, value_raw.second.get());
 
-                const auto * array_type = typeid_cast<const DataTypeArray *>(&type);
-
                 /// Check that we are indeed allowed to insert a NULL.
-                if ((value.isNull() && !type.isNullable()) || (array_type && !is_array_type_compatible(*array_type, value)))
+                if (value.isNull())
                 {
-                    throw Exception{"Expression returns value " + applyVisitor(FieldVisitorToString(), value)
-                        + ", that is out of range of type " + type.getName()
-                        + ", at: " + String(prev_istr_position, std::min(SHOW_CHARS_ON_SYNTAX_ERROR, istr.buffer().end() - prev_istr_position)),
-                        ErrorCodes::VALUE_IS_OUT_OF_RANGE_OF_DATA_TYPE};
+                    if (!type.isNullable())
+                        throw Exception{"Expression returns value " + applyVisitor(FieldVisitorToString(), value)
+                            + ", that is out of range of type " + type.getName()
+                            + ", at: " + String(prev_istr_position, std::min(SHOW_CHARS_ON_SYNTAX_ERROR, istr.buffer().end() - prev_istr_position)),
+                            ErrorCodes::VALUE_IS_OUT_OF_RANGE_OF_DATA_TYPE};
                 }
 
                 columns[i]->insert(value);
@@ -835,7 +835,7 @@ private:
                 if (!in.eof())
                     throw Exception("String is too long for Date: " + string_value.toString());
 
-                ColumnPtr parsed_const_date_holder = DataTypeDate().createColumnConst(input_rows_count, UInt64(date));
+                ColumnPtr parsed_const_date_holder = DataTypeDate().createColumnConst(input_rows_count, date);
                 const ColumnConst * parsed_const_date = static_cast<const ColumnConst *>(parsed_const_date_holder.get());
                 executeNumLeftType<DataTypeDate::FieldType>(block, result,
                     left_is_num ? col_left_untyped : parsed_const_date,
@@ -863,7 +863,7 @@ private:
                 if (!in.eof())
                     throw Exception("String is too long for UUID: " + string_value.toString());
 
-                ColumnPtr parsed_const_uuid_holder = DataTypeUUID().createColumnConst(input_rows_count, UInt128(uuid));
+                ColumnPtr parsed_const_uuid_holder = DataTypeUUID().createColumnConst(input_rows_count, uuid);
                 const ColumnConst * parsed_const_uuid = static_cast<const ColumnConst *>(parsed_const_uuid_holder.get());
                 executeNumLeftType<DataTypeUUID::FieldType>(block, result,
                     left_is_num ? col_left_untyped : parsed_const_uuid,
@@ -1445,7 +1445,7 @@ private:
         UInt8 res = 0;
 
         dictionary->isInConstantConstant(child_id, ancestor_id, res);
-        block.getByPosition(result).column = DataTypeUInt8().createColumnConst(child_id_col->size(), UInt64(res));
+        block.getByPosition(result).column = DataTypeUInt8().createColumnConst(child_id_col->size(), res);
     }
     else
         throw Exception{"Illegal column " + ancestor_id_col_untyped->getName()
@@ -293,7 +293,7 @@ private:
             const auto col_const_y = static_cast<const ColumnConst *> (col_y);
             size_t start_index = 0;
             UInt8 res = isPointInEllipses(col_const_x->getValue<Float64>(), col_const_y->getValue<Float64>(), ellipses, ellipses_count, start_index);
-            block.getByPosition(result).column = DataTypeUInt8().createColumnConst(size, UInt64(res));
+            block.getByPosition(result).column = DataTypeUInt8().createColumnConst(size, res);
         }
         else
         {
@@ -54,8 +54,10 @@ namespace ErrorCodes
   * Fast non-cryptographic hash function for strings:
   * cityHash64: String -> UInt64
   *
-  * A non-cryptographic hash from a tuple of values of any types (uses cityHash64 for strings and intHash64 for numbers):
+  * A non-cryptographic hashes from a tuple of values of any types (uses respective function for strings and intHash64 for numbers):
   * cityHash64: any* -> UInt64
+  * sipHash64: any* -> UInt64
+  * halfMD5: any* -> UInt64
   *
   * Fast non-cryptographic hash function from any integer:
   * intHash32: number -> UInt32
@@ -63,8 +65,31 @@ namespace ErrorCodes
   *
   */
 
+struct IntHash32Impl
+{
+    using ReturnType = UInt32;
+
+    static UInt32 apply(UInt64 x)
+    {
+        /// seed is taken from /dev/urandom. It allows you to avoid undesirable dependencies with hashes in different data structures.
+        return intHash32<0x75D9543DE018BF45ULL>(x);
+    }
+};
+
+struct IntHash64Impl
+{
+    using ReturnType = UInt64;
+
+    static UInt64 apply(UInt64 x)
+    {
+        return intHash64(x ^ 0x4CF2D2BAAE6DA887ULL);
+    }
+};
+
+
 struct HalfMD5Impl
 {
+    static constexpr auto name = "halfMD5";
     using ReturnType = UInt64;
 
     static UInt64 apply(const char * begin, size_t size)
@@ -80,8 +105,18 @@ struct HalfMD5Impl
         MD5_Update(&ctx, reinterpret_cast<const unsigned char *>(begin), size);
         MD5_Final(buf.char_data, &ctx);
 
-        return Poco::ByteOrder::flipBytes(buf.uint64_data); /// Compatibility with existing code.
+        return Poco::ByteOrder::flipBytes(static_cast<Poco::UInt64>(buf.uint64_data)); /// Compatibility with existing code. Cast need for old poco AND macos where UInt64 != uint64_t
     }
 
+    static UInt64 combineHashes(UInt64 h1, UInt64 h2)
+    {
+        UInt64 hashes[] = {h1, h2};
+        return apply(reinterpret_cast<const char *>(hashes), 16);
+    }
+
+    /// If true, it will use intHash32 or intHash64 to hash POD types. This behaviour is intended for better performance of some functions.
+    /// Otherwise it will hash bytes in memory as a string using corresponding hash function.
+    static constexpr bool use_int_hash_for_pods = false;
 };
 
 struct MD5Impl
@@ -142,14 +177,22 @@ struct SHA256Impl
 
 struct SipHash64Impl
 {
+    static constexpr auto name = "sipHash64";
     using ReturnType = UInt64;
 
     static UInt64 apply(const char * begin, size_t size)
     {
         return sipHash64(begin, size);
     }
+
+    static UInt64 combineHashes(UInt64 h1, UInt64 h2)
+    {
+        UInt64 hashes[] = {h1, h2};
+        return apply(reinterpret_cast<const char *>(hashes), 16);
+    }
+
+    static constexpr bool use_int_hash_for_pods = false;
 };
 
 struct SipHash128Impl
 {
|
|||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
struct IntHash32Impl
|
|
||||||
|
/** Why we need MurmurHash2?
|
||||||
|
* MurmurHash2 is an outdated hash function, superseded by MurmurHash3 and subsequently by CityHash, xxHash, HighwayHash.
|
||||||
|
* Usually there is no reason to use MurmurHash.
|
||||||
|
* It is needed for the cases when you already have MurmurHash in some applications and you want to reproduce it
|
||||||
|
* in ClickHouse as is. For example, it is needed to reproduce the behaviour
|
||||||
|
* for NGINX a/b testing module: https://nginx.ru/en/docs/http/ngx_http_split_clients_module.html
|
||||||
|
*/
|
||||||
|
struct MurmurHash2Impl32
|
||||||
{
|
{
|
||||||
|
static constexpr auto name = "murmurHash2_32";
|
||||||
|
|
||||||
using ReturnType = UInt32;
|
using ReturnType = UInt32;
|
||||||
|
|
||||||
static UInt32 apply(UInt64 x)
|
static UInt32 apply(const char * data, const size_t size)
|
||||||
{
|
{
|
||||||
/// seed is taken from /dev/urandom. It allows you to avoid undesirable dependencies with hashes in different data structures.
|
return MurmurHash2(data, size, 0);
|
||||||
return intHash32<0x75D9543DE018BF45ULL>(x);
|
}
|
||||||
|
|
||||||
|
static UInt32 combineHashes(UInt32 h1, UInt32 h2)
|
||||||
|
{
|
||||||
|
return IntHash32Impl::apply(h1) ^ h2;
|
||||||
|
}
|
||||||
|
|
||||||
|
static constexpr bool use_int_hash_for_pods = false;
|
||||||
|
};
|
||||||
|
|
||||||
|
struct MurmurHash2Impl64
|
||||||
|
{
|
||||||
|
static constexpr auto name = "murmurHash2_64";
|
||||||
|
using ReturnType = UInt64;
|
||||||
|
|
||||||
|
static UInt64 apply(const char * data, const size_t size)
|
||||||
|
{
|
||||||
|
return MurmurHash64A(data, size, 0);
|
||||||
|
}
|
||||||
|
|
||||||
|
static UInt64 combineHashes(UInt64 h1, UInt64 h2)
|
||||||
|
{
|
||||||
|
return IntHash64Impl::apply(h1) ^ h2;
|
||||||
|
}
|
||||||
|
|
||||||
|
static constexpr bool use_int_hash_for_pods = false;
|
||||||
|
};
|
||||||
|
|
||||||
|
struct MurmurHash3Impl32
|
||||||
|
{
|
||||||
|
static constexpr auto name = "murmurHash3_32";
|
||||||
|
using ReturnType = UInt32;
|
||||||
|
|
||||||
|
static UInt32 apply(const char * data, const size_t size)
|
||||||
|
{
|
||||||
|
union
|
||||||
|
{
|
||||||
|
UInt32 h;
|
||||||
|
char bytes[sizeof(h)];
|
||||||
|
};
|
||||||
|
MurmurHash3_x86_32(data, size, 0, bytes);
|
||||||
|
return h;
|
||||||
|
}
|
||||||
|
|
||||||
|
static UInt32 combineHashes(UInt32 h1, UInt32 h2)
|
||||||
|
{
|
||||||
|
return IntHash32Impl::apply(h1) ^ h2;
|
||||||
|
}
|
||||||
|
|
||||||
|
static constexpr bool use_int_hash_for_pods = false;
|
||||||
|
};
|
||||||
|
|
||||||
|
struct MurmurHash3Impl64
|
||||||
|
{
|
||||||
|
static constexpr auto name = "murmurHash3_64";
|
||||||
|
using ReturnType = UInt64;
|
||||||
|
|
||||||
|
static UInt64 apply(const char * data, const size_t size)
|
||||||
|
{
|
||||||
|
union
|
||||||
|
{
|
||||||
|
UInt64 h[2];
|
||||||
|
char bytes[16];
|
||||||
|
};
|
||||||
|
MurmurHash3_x64_128(data, size, 0, bytes);
|
||||||
|
return h[0] ^ h[1];
|
||||||
|
}
|
||||||
|
|
||||||
|
static UInt64 combineHashes(UInt64 h1, UInt64 h2)
|
||||||
|
{
|
||||||
|
return IntHash64Impl::apply(h1) ^ h2;
|
||||||
|
}
|
||||||
|
|
||||||
|
static constexpr bool use_int_hash_for_pods = false;
|
||||||
|
};
|
||||||
|
|
||||||
|
struct MurmurHash3Impl128
|
||||||
|
{
|
||||||
|
static constexpr auto name = "murmurHash3_128";
|
||||||
|
enum { length = 16 };
|
||||||
|
|
||||||
|
static void apply(const char * begin, const size_t size, unsigned char * out_char_data)
|
||||||
|
{
|
||||||
|
MurmurHash3_x64_128(begin, size, 0, out_char_data);
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
|
|
||||||
struct IntHash64Impl
|
struct ImplCityHash64
|
||||||
{
|
{
|
||||||
|
static constexpr auto name = "cityHash64";
|
||||||
using ReturnType = UInt64;
|
using ReturnType = UInt64;
|
||||||
|
using uint128_t = CityHash_v1_0_2::uint128;
|
||||||
|
|
||||||
static UInt64 apply(UInt64 x)
|
static auto combineHashes(UInt64 h1, UInt64 h2) { return CityHash_v1_0_2::Hash128to64(uint128_t(h1, h2)); }
|
||||||
|
static auto apply(const char * s, const size_t len) { return CityHash_v1_0_2::CityHash64(s, len); }
|
||||||
|
static constexpr bool use_int_hash_for_pods = true;
|
||||||
|
};
|
||||||
|
|
||||||
|
// see farmhash.h for definition of NAMESPACE_FOR_HASH_FUNCTIONS
|
||||||
|
struct ImplFarmHash64
|
||||||
|
{
|
||||||
|
static constexpr auto name = "farmHash64";
|
||||||
|
using ReturnType = UInt64;
|
||||||
|
using uint128_t = NAMESPACE_FOR_HASH_FUNCTIONS::uint128_t;
|
||||||
|
|
||||||
|
static auto combineHashes(UInt64 h1, UInt64 h2) { return NAMESPACE_FOR_HASH_FUNCTIONS::Hash128to64(uint128_t(h1, h2)); }
|
||||||
|
static auto apply(const char * s, const size_t len) { return NAMESPACE_FOR_HASH_FUNCTIONS::Hash64(s, len); }
|
||||||
|
static constexpr bool use_int_hash_for_pods = true;
|
||||||
|
};
|
||||||
|
|
||||||
|
struct ImplMetroHash64
|
||||||
|
{
|
||||||
|
static constexpr auto name = "metroHash64";
|
||||||
|
using ReturnType = UInt64;
|
||||||
|
using uint128_t = CityHash_v1_0_2::uint128;
|
||||||
|
|
||||||
|
static auto combineHashes(UInt64 h1, UInt64 h2) { return CityHash_v1_0_2::Hash128to64(uint128_t(h1, h2)); }
|
||||||
|
static auto apply(const char * s, const size_t len)
|
||||||
{
|
{
|
||||||
return intHash64(x ^ 0x4CF2D2BAAE6DA887ULL);
|
union
|
||||||
|
{
|
||||||
|
UInt64 u64;
|
||||||
|
UInt8 u8[sizeof(u64)];
|
||||||
|
};
|
||||||
|
|
||||||
|
metrohash64_1(reinterpret_cast<const UInt8 *>(s), len, 0, u8);
|
||||||
|
|
||||||
|
return u64;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static constexpr bool use_int_hash_for_pods = true;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
||||||
@ -242,12 +414,6 @@ public:
|
|||||||
};
|
};
|
||||||
|
|
||||||
|
|
||||||
inline bool allowIntHash(const IDataType * data_type)
|
|
||||||
{
|
|
||||||
return data_type->isValueRepresentedByNumber();
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
template <typename Impl, typename Name>
|
template <typename Impl, typename Name>
|
||||||
class FunctionIntHash : public IFunction
|
class FunctionIntHash : public IFunction
|
||||||
{
|
{
|
||||||
@ -291,7 +457,7 @@ public:
|
|||||||
|
|
||||||
DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
|
DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
|
||||||
{
|
{
|
||||||
if (!allowIntHash(arguments[0].get()))
|
if (!arguments[0]->isValueRepresentedByNumber())
|
||||||
throw Exception("Illegal type " + arguments[0]->getName() + " of argument of function " + getName(),
|
throw Exception("Illegal type " + arguments[0]->getName() + " of argument of function " + getName(),
|
||||||
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
|
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
|
||||||
|
|
||||||
@ -322,19 +488,18 @@ public:
|
|||||||
};
|
};
|
||||||
|
|
||||||
|
|
||||||
/** We use hash functions called CityHash, FarmHash, MetroHash.
|
|
||||||
* In this regard, this template is named with the words `NeighborhoodHash`.
|
|
||||||
*/
|
|
||||||
template <typename Impl>
|
template <typename Impl>
|
||||||
class FunctionNeighbourhoodHash64 : public IFunction
|
class FunctionAnyHash : public IFunction
|
||||||
{
|
{
|
||||||
public:
|
public:
|
||||||
static constexpr auto name = Impl::name;
|
static constexpr auto name = Impl::name;
|
||||||
static FunctionPtr create(const Context &) { return std::make_shared<FunctionNeighbourhoodHash64>(); }
|
static FunctionPtr create(const Context &) { return std::make_shared<FunctionAnyHash>(); }
|
||||||
|
|
||||||
private:
|
private:
|
||||||
|
using ToType = typename Impl::ReturnType;
|
||||||
|
|
||||||
template <typename FromType, bool first>
|
template <typename FromType, bool first>
|
||||||
void executeIntType(const IColumn * column, ColumnUInt64::Container & vec_to)
|
void executeIntType(const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
|
||||||
{
|
{
|
||||||
if (const ColumnVector<FromType> * col_from = checkAndGetColumn<ColumnVector<FromType>>(column))
|
if (const ColumnVector<FromType> * col_from = checkAndGetColumn<ColumnVector<FromType>>(column))
|
||||||
{
|
{
|
||||||
@ -342,16 +507,35 @@ private:
|
|||||||
size_t size = vec_from.size();
|
size_t size = vec_from.size();
|
||||||
for (size_t i = 0; i < size; ++i)
|
for (size_t i = 0; i < size; ++i)
|
||||||
{
|
{
|
||||||
UInt64 h = IntHash64Impl::apply(ext::bit_cast<UInt64>(vec_from[i]));
|
ToType h;
|
||||||
|
|
||||||
|
if constexpr (Impl::use_int_hash_for_pods)
|
||||||
|
{
|
||||||
|
if constexpr (std::is_same_v<ToType, UInt64>)
|
||||||
|
h = IntHash64Impl::apply(ext::bit_cast<UInt64>(vec_from[i]));
|
||||||
|
else
|
||||||
|
h = IntHash32Impl::apply(ext::bit_cast<UInt32>(vec_from[i]));
|
||||||
|
}
|
||||||
|
else
|
||||||
|
{
|
||||||
|
h = Impl::apply(reinterpret_cast<const char *>(&vec_from[i]), sizeof(vec_from[i]));
|
||||||
|
}
|
||||||
|
|
||||||
if (first)
|
if (first)
|
||||||
vec_to[i] = h;
|
vec_to[i] = h;
|
||||||
else
|
else
|
||||||
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], h));
|
vec_to[i] = Impl::combineHashes(vec_to[i], h);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
else if (auto col_from = checkAndGetColumnConst<ColumnVector<FromType>>(column))
|
else if (auto col_from = checkAndGetColumnConst<ColumnVector<FromType>>(column))
|
||||||
{
|
{
|
||||||
const UInt64 hash = IntHash64Impl::apply(ext::bit_cast<UInt64>(col_from->template getValue<FromType>()));
|
auto value = col_from->template getValue<FromType>();
|
||||||
|
ToType hash;
|
||||||
|
if constexpr (std::is_same_v<ToType, UInt64>)
|
||||||
|
hash = IntHash64Impl::apply(ext::bit_cast<UInt64>(value));
|
||||||
|
else
|
||||||
|
hash = IntHash32Impl::apply(ext::bit_cast<UInt32>(value));
|
||||||
|
|
||||||
size_t size = vec_to.size();
|
size_t size = vec_to.size();
|
||||||
if (first)
|
if (first)
|
||||||
{
|
{
|
||||||
@ -360,7 +544,7 @@ private:
|
|||||||
else
|
else
|
||||||
{
|
{
|
||||||
for (size_t i = 0; i < size; ++i)
|
for (size_t i = 0; i < size; ++i)
|
||||||
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], hash));
|
vec_to[i] = Impl::combineHashes(vec_to[i], hash);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
else
|
else
|
||||||
@ -370,7 +554,7 @@ private:
|
|||||||
}
|
}
|
||||||
|
|
||||||
template <bool first>
|
template <bool first>
|
||||||
void executeString(const IColumn * column, ColumnUInt64::Container & vec_to)
|
void executeString(const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
|
||||||
{
|
{
|
||||||
if (const ColumnString * col_from = checkAndGetColumn<ColumnString>(column))
|
if (const ColumnString * col_from = checkAndGetColumn<ColumnString>(column))
|
||||||
{
|
{
|
||||||
@ -381,14 +565,14 @@ private:
|
|||||||
ColumnString::Offset current_offset = 0;
|
ColumnString::Offset current_offset = 0;
|
||||||
for (size_t i = 0; i < size; ++i)
|
for (size_t i = 0; i < size; ++i)
|
||||||
{
|
{
|
||||||
const UInt64 h = Impl::Hash64(
|
const ToType h = Impl::apply(
|
||||||
reinterpret_cast<const char *>(&data[current_offset]),
|
reinterpret_cast<const char *>(&data[current_offset]),
|
||||||
offsets[i] - current_offset - 1);
|
offsets[i] - current_offset - 1);
|
||||||
|
|
||||||
if (first)
|
if (first)
|
||||||
vec_to[i] = h;
|
vec_to[i] = h;
|
||||||
else
|
else
|
||||||
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], h));
|
vec_to[i] = Impl::combineHashes(vec_to[i], h);
|
||||||
|
|
||||||
current_offset = offsets[i];
|
current_offset = offsets[i];
|
||||||
}
|
}
|
||||||
@ -401,17 +585,17 @@ private:
|
|||||||
|
|
||||||
for (size_t i = 0; i < size; ++i)
|
for (size_t i = 0; i < size; ++i)
|
||||||
{
|
{
|
||||||
const UInt64 h = Impl::Hash64(reinterpret_cast<const char *>(&data[i * n]), n);
|
const ToType h = Impl::apply(reinterpret_cast<const char *>(&data[i * n]), n);
|
||||||
if (first)
|
if (first)
|
||||||
vec_to[i] = h;
|
vec_to[i] = h;
|
||||||
else
|
else
|
||||||
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], h));
|
vec_to[i] = Impl::combineHashes(vec_to[i], h);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
else if (const ColumnConst * col_from = checkAndGetColumnConstStringOrFixedString(column))
|
else if (const ColumnConst * col_from = checkAndGetColumnConstStringOrFixedString(column))
|
||||||
{
|
{
|
||||||
String value = col_from->getValue<String>().data();
|
String value = col_from->getValue<String>().data();
|
||||||
const UInt64 hash = Impl::Hash64(value.data(), value.size());
|
const ToType hash = Impl::apply(value.data(), value.size());
|
||||||
const size_t size = vec_to.size();
|
const size_t size = vec_to.size();
|
||||||
|
|
||||||
if (first)
|
if (first)
|
||||||
@ -422,7 +606,7 @@ private:
|
|||||||
{
|
{
|
||||||
for (size_t i = 0; i < size; ++i)
|
for (size_t i = 0; i < size; ++i)
|
||||||
{
|
{
|
||||||
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], hash));
|
vec_to[i] = Impl::combineHashes(vec_to[i], hash);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -433,7 +617,7 @@ private:
|
|||||||
}
|
}
|
||||||
|
|
||||||
template <bool first>
|
template <bool first>
|
||||||
void executeArray(const IDataType * type, const IColumn * column, ColumnUInt64::Container & vec_to)
|
void executeArray(const IDataType * type, const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
|
||||||
{
|
{
|
||||||
const IDataType * nested_type = typeid_cast<const DataTypeArray *>(type)->getNestedType().get();
|
const IDataType * nested_type = typeid_cast<const DataTypeArray *>(type)->getNestedType().get();
|
||||||
|
|
||||||
@ -443,7 +627,7 @@ private:
|
|||||||
const ColumnArray::Offsets & offsets = col_from->getOffsets();
|
const ColumnArray::Offsets & offsets = col_from->getOffsets();
|
||||||
const size_t nested_size = nested_column->size();
|
const size_t nested_size = nested_column->size();
|
||||||
|
|
||||||
ColumnUInt64::Container vec_temp(nested_size);
|
typename ColumnVector<ToType>::Container vec_temp(nested_size);
|
||||||
executeAny<true>(nested_type, nested_column, vec_temp);
|
executeAny<true>(nested_type, nested_column, vec_temp);
|
||||||
|
|
||||||
const size_t size = offsets.size();
|
const size_t size = offsets.size();
|
||||||
@ -453,14 +637,19 @@ private:
|
|||||||
{
|
{
|
||||||
ColumnArray::Offset next_offset = offsets[i];
|
ColumnArray::Offset next_offset = offsets[i];
|
||||||
|
|
||||||
UInt64 h = IntHash64Impl::apply(next_offset - current_offset);
|
ToType h;
|
||||||
|
if constexpr (std::is_same_v<ToType, UInt64>)
|
||||||
|
h = IntHash64Impl::apply(next_offset - current_offset);
|
||||||
|
else
|
||||||
|
h = IntHash32Impl::apply(next_offset - current_offset);
|
||||||
|
|
||||||
if (first)
|
if (first)
|
||||||
vec_to[i] = h;
|
vec_to[i] = h;
|
||||||
else
|
else
|
||||||
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], h));
|
vec_to[i] = Impl::combineHashes(vec_to[i], h);
|
||||||
|
|
||||||
for (size_t j = current_offset; j < next_offset; ++j)
|
for (size_t j = current_offset; j < next_offset; ++j)
|
||||||
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], vec_temp[j]));
|
vec_to[i] = Impl::combineHashes(vec_to[i], vec_temp[j]);
|
||||||
|
|
||||||
current_offset = offsets[i];
|
current_offset = offsets[i];
|
||||||
}
|
}
|
||||||
@ -478,7 +667,7 @@ private:
|
|||||||
}
|
}
|
||||||
|
|
||||||
template <bool first>
|
template <bool first>
|
||||||
void executeAny(const IDataType * from_type, const IColumn * icolumn, ColumnUInt64::Container & vec_to)
|
void executeAny(const IDataType * from_type, const IColumn * icolumn, typename ColumnVector<ToType>::Container & vec_to)
|
||||||
{
|
{
|
||||||
WhichDataType which(from_type);
|
WhichDataType which(from_type);
|
||||||
|
|
||||||
@ -504,7 +693,7 @@ private:
|
|||||||
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
|
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
|
||||||
}
|
}
|
||||||
|
|
||||||
void executeForArgument(const IDataType * type, const IColumn * column, ColumnUInt64::Container & vec_to, bool & is_first)
|
void executeForArgument(const IDataType * type, const IColumn * column, typename ColumnVector<ToType>::Container & vec_to, bool & is_first)
|
||||||
{
|
{
|
||||||
/// Flattening of tuples.
|
/// Flattening of tuples.
|
||||||
if (const ColumnTuple * tuple = typeid_cast<const ColumnTuple *>(column))
|
if (const ColumnTuple * tuple = typeid_cast<const ColumnTuple *>(column))
|
||||||
@ -549,20 +738,20 @@ public:
|
|||||||
|
|
||||||
DataTypePtr getReturnTypeImpl(const DataTypes & /*arguments*/) const override
|
DataTypePtr getReturnTypeImpl(const DataTypes & /*arguments*/) const override
|
||||||
{
|
{
|
||||||
return std::make_shared<DataTypeUInt64>();
|
return std::make_shared<DataTypeNumber<ToType>>();
|
||||||
}
|
}
|
||||||
|
|
||||||
void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) override
|
void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) override
|
||||||
{
|
{
|
||||||
size_t rows = input_rows_count;
|
size_t rows = input_rows_count;
|
||||||
auto col_to = ColumnUInt64::create(rows);
|
auto col_to = ColumnVector<ToType>::create(rows);
|
||||||
|
|
||||||
ColumnUInt64::Container & vec_to = col_to->getData();
|
typename ColumnVector<ToType>::Container & vec_to = col_to->getData();
|
||||||
|
|
||||||
if (arguments.empty())
|
if (arguments.empty())
|
||||||
{
|
{
|
||||||
/// Constant random number from /dev/urandom is used as a hash value of empty list of arguments.
|
/// Constant random number from /dev/urandom is used as a hash value of empty list of arguments.
|
||||||
vec_to.assign(rows, static_cast<UInt64>(0xe28dbde7fe22e41c));
|
vec_to.assign(rows, static_cast<ToType>(0xe28dbde7fe22e41c));
|
||||||
}
|
}
|
||||||
|
|
||||||
/// The function supports arbitrary number of arguments of arbitrary types.
|
/// The function supports arbitrary number of arguments of arbitrary types.
|
||||||
@ -579,181 +768,6 @@ public:
|
|||||||
};
|
};
|
||||||
|
|
||||||
|
|
||||||
template <typename Impl, typename Name>
|
|
||||||
class FunctionStringHash : public IFunction
|
|
||||||
{
|
|
||||||
public:
|
|
||||||
static constexpr auto name = Name::name;
|
|
||||||
static FunctionPtr create(const Context &) { return std::make_shared<FunctionStringHash>(); }
|
|
||||||
|
|
||||||
String getName() const override { return name; }
|
|
||||||
|
|
||||||
bool isVariadic() const override { return false; }
|
|
||||||
|
|
||||||
size_t getNumberOfArguments() const override { return 1; }
|
|
||||||
|
|
||||||
DataTypePtr getReturnTypeImpl(const DataTypes & /*arguments */) const override
|
|
||||||
{ return std::make_shared<DataTypeNumber<ToType>>(); }
|
|
||||||
|
|
||||||
bool useDefaultImplementationForConstants() const override { return true; }
|
|
||||||
|
|
||||||
void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) override
|
|
||||||
{
|
|
||||||
auto col_to = ColumnVector<ToType>::create(input_rows_count);
|
|
||||||
typename ColumnVector<ToType>::Container & vec_to = col_to->getData();
|
|
||||||
|
|
||||||
const ColumnWithTypeAndName & col = block.getByPosition(arguments[0]);
|
|
||||||
const IDataType * from_type = col.type.get();
|
|
||||||
const IColumn * icolumn = col.column.get();
|
|
||||||
WhichDataType which(from_type);
|
|
||||||
|
|
||||||
if (which.isUInt8()) executeIntType<UInt8>(icolumn, vec_to);
|
|
||||||
else if (which.isUInt16()) executeIntType<UInt16>(icolumn, vec_to);
|
|
||||||
else if (which.isUInt32()) executeIntType<UInt32>(icolumn, vec_to);
|
|
||||||
else if (which.isUInt64()) executeIntType<UInt64>(icolumn, vec_to);
|
|
||||||
else if (which.isInt8()) executeIntType<Int8>(icolumn, vec_to);
|
|
||||||
else if (which.isInt16()) executeIntType<Int16>(icolumn, vec_to);
|
|
||||||
else if (which.isInt32()) executeIntType<Int32>(icolumn, vec_to);
|
|
||||||
else if (which.isInt64()) executeIntType<Int64>(icolumn, vec_to);
|
|
||||||
else if (which.isEnum8()) executeIntType<Int8>(icolumn, vec_to);
|
|
||||||
else if (which.isEnum16()) executeIntType<Int16>(icolumn, vec_to);
|
|
||||||
else if (which.isDate()) executeIntType<UInt16>(icolumn, vec_to);
|
|
||||||
else if (which.isDateTime()) executeIntType<UInt32>(icolumn, vec_to);
|
|
||||||
else if (which.isFloat32()) executeIntType<Float32>(icolumn, vec_to);
|
|
||||||
else if (which.isFloat64()) executeIntType<Float64>(icolumn, vec_to);
|
|
||||||
else if (which.isStringOrFixedString()) executeString(icolumn, vec_to);
|
|
||||||
else
|
|
||||||
throw Exception("Unexpected type " + from_type->getName() + " of argument of function " + getName(),
|
|
||||||
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
|
|
||||||
|
|
||||||
block.getByPosition(result).column = std::move(col_to);
|
|
||||||
}
|
|
||||||
private:
|
|
||||||
using ToType = typename Impl::ReturnType;
|
|
||||||
|
|
||||||
template <typename FromType>
|
|
||||||
void executeIntType(const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
|
|
||||||
{
|
|
||||||
if (const ColumnVector<FromType> * col_from = checkAndGetColumn<ColumnVector<FromType>>(column))
|
|
||||||
{
|
|
||||||
const typename ColumnVector<FromType>::Container & vec_from = col_from->getData();
|
|
||||||
size_t size = vec_from.size();
|
|
||||||
for (size_t i = 0; i < size; ++i)
|
|
||||||
{
|
|
||||||
vec_to[i] = Impl::apply(reinterpret_cast<const char *>(&vec_from[i]), sizeof(FromType));
|
|
||||||
}
|
|
||||||
}
|
|
||||||
else
|
|
||||||
throw Exception("Illegal column " + column->getName()
|
|
||||||
+ " of argument of function " + getName(),
|
|
||||||
ErrorCodes::ILLEGAL_COLUMN);
|
|
||||||
}
|
|
||||||
|
|
||||||
void executeString(const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
|
|
||||||
{
|
|
||||||
if (const ColumnString * col_from = checkAndGetColumn<ColumnString>(column))
|
|
||||||
{
|
|
||||||
const typename ColumnString::Chars_t & data = col_from->getChars();
|
|
||||||
const typename ColumnString::Offsets & offsets = col_from->getOffsets();
|
|
||||||
size_t size = offsets.size();
|
|
||||||
|
|
||||||
ColumnString::Offset current_offset = 0;
|
|
||||||
for (size_t i = 0; i < size; ++i)
|
|
||||||
{
|
|
||||||
vec_to[i] = Impl::apply(
|
|
||||||
reinterpret_cast<const char *>(&data[current_offset]),
|
|
||||||
offsets[i] - current_offset - 1);
|
|
||||||
|
|
||||||
current_offset = offsets[i];
|
|
||||||
}
|
|
||||||
}
|
|
||||||
else if (const ColumnFixedString * col_from = checkAndGetColumn<ColumnFixedString>(column))
|
|
||||||
{
|
|
||||||
const typename ColumnString::Chars_t & data = col_from->getChars();
|
|
||||||
size_t n = col_from->getN();
|
|
||||||
size_t size = data.size() / n;
|
|
||||||
for (size_t i = 0; i < size; ++i)
|
|
||||||
vec_to[i] = Impl::apply(reinterpret_cast<const char *>(&data[i * n]), n);
|
|
||||||
}
|
|
||||||
else
|
|
||||||
throw Exception("Illegal column " + column->getName()
|
|
||||||
+ " of first argument of function " + getName(),
|
|
||||||
ErrorCodes::ILLEGAL_COLUMN);
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
|
|
||||||
/** Why we need MurmurHash2?
|
|
||||||
* MurmurHash2 is an outdated hash function, superseded by MurmurHash3 and subsequently by CityHash, xxHash, HighwayHash.
|
|
||||||
* Usually there is no reason to use MurmurHash.
|
|
||||||
* It is needed for the cases when you already have MurmurHash in some applications and you want to reproduce it
|
|
||||||
* in ClickHouse as is. For example, it is needed to reproduce the behaviour
|
|
||||||
* for NGINX a/b testing module: https://nginx.ru/en/docs/http/ngx_http_split_clients_module.html
|
|
||||||
*/
|
|
||||||
struct MurmurHash2Impl32
|
|
||||||
{
|
|
||||||
using ReturnType = UInt32;
|
|
||||||
|
|
||||||
static UInt32 apply(const char * data, const size_t size)
|
|
||||||
{
|
|
||||||
return MurmurHash2(data, size, 0);
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
struct MurmurHash2Impl64
|
|
||||||
{
|
|
||||||
using ReturnType = UInt64;
|
|
||||||
|
|
||||||
static UInt64 apply(const char * data, const size_t size)
|
|
||||||
{
|
|
||||||
return MurmurHash64A(data, size, 0);
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
struct MurmurHash3Impl32
|
|
||||||
{
|
|
||||||
using ReturnType = UInt32;
|
|
||||||
|
|
||||||
static UInt32 apply(const char * data, const size_t size)
|
|
||||||
{
|
|
||||||
union
|
|
||||||
{
|
|
||||||
UInt32 h;
|
|
||||||
char bytes[sizeof(h)];
|
|
||||||
};
|
|
||||||
MurmurHash3_x86_32(data, size, 0, bytes);
|
|
||||||
return h;
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
struct MurmurHash3Impl64
|
|
||||||
{
|
|
||||||
using ReturnType = UInt64;
|
|
||||||
|
|
||||||
static UInt64 apply(const char * data, const size_t size)
|
|
||||||
{
|
|
||||||
union
|
|
||||||
{
|
|
||||||
UInt64 h[2];
|
|
||||||
char bytes[16];
|
|
||||||
};
|
|
||||||
MurmurHash3_x64_128(data, size, 0, bytes);
|
|
||||||
return h[0] ^ h[1];
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
struct MurmurHash3Impl128
|
|
||||||
{
|
|
||||||
static constexpr auto name = "murmurHash3_128";
|
|
||||||
enum { length = 16 };
|
|
||||||
|
|
||||||
static void apply(const char * begin, const size_t size, unsigned char * out_char_data)
|
|
||||||
{
|
|
||||||
MurmurHash3_x64_128(begin, size, 0, out_char_data);
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
|
|
||||||
struct URLHashImpl
|
struct URLHashImpl
|
||||||
{
|
{
|
||||||
static UInt64 apply(const char * data, const size_t size)
|
static UInt64 apply(const char * data, const size_t size)
|
||||||
@ -943,58 +957,12 @@ private:
|
|||||||
};
|
};
|
||||||
|
|
||||||
|
|
||||||
struct NameHalfMD5 { static constexpr auto name = "halfMD5"; };
|
|
||||||
struct NameSipHash64 { static constexpr auto name = "sipHash64"; };
|
|
||||||
struct NameIntHash32 { static constexpr auto name = "intHash32"; };
|
struct NameIntHash32 { static constexpr auto name = "intHash32"; };
|
||||||
struct NameIntHash64 { static constexpr auto name = "intHash64"; };
|
struct NameIntHash64 { static constexpr auto name = "intHash64"; };
|
||||||
struct NameMurmurHash2_32 { static constexpr auto name = "murmurHash2_32"; };
|
|
||||||
struct NameMurmurHash2_64 { static constexpr auto name = "murmurHash2_64"; };
|
|
||||||
struct NameMurmurHash3_32 { static constexpr auto name = "murmurHash3_32"; };
|
|
||||||
struct NameMurmurHash3_64 { static constexpr auto name = "murmurHash3_64"; };
|
|
||||||
struct NameMurmurHash3_128 { static constexpr auto name = "murmurHash3_128"; };
|
|
||||||
|
|
||||||
|
|
||||||
struct ImplCityHash64
|
using FunctionHalfMD5 = FunctionAnyHash<HalfMD5Impl>;
|
||||||
{
|
using FunctionSipHash64 = FunctionAnyHash<SipHash64Impl>;
|
||||||
static constexpr auto name = "cityHash64";
|
|
||||||
using uint128_t = CityHash_v1_0_2::uint128;
|
|
||||||
|
|
||||||
static auto Hash128to64(const uint128_t & x) { return CityHash_v1_0_2::Hash128to64(x); }
|
|
||||||
static auto Hash64(const char * s, const size_t len) { return CityHash_v1_0_2::CityHash64(s, len); }
|
|
||||||
};
|
|
||||||
|
|
||||||
// see farmhash.h for definition of NAMESPACE_FOR_HASH_FUNCTIONS
|
|
||||||
struct ImplFarmHash64
|
|
||||||
{
|
|
||||||
static constexpr auto name = "farmHash64";
|
|
||||||
using uint128_t = NAMESPACE_FOR_HASH_FUNCTIONS::uint128_t;
|
|
||||||
|
|
||||||
static auto Hash128to64(const uint128_t & x) { return NAMESPACE_FOR_HASH_FUNCTIONS::Hash128to64(x); }
|
|
||||||
static auto Hash64(const char * s, const size_t len) { return NAMESPACE_FOR_HASH_FUNCTIONS::Hash64(s, len); }
|
|
||||||
};
|
|
||||||
|
|
||||||
struct ImplMetroHash64
|
|
||||||
{
|
|
||||||
static constexpr auto name = "metroHash64";
|
|
||||||
using uint128_t = CityHash_v1_0_2::uint128;
|
|
||||||
|
|
||||||
static auto Hash128to64(const uint128_t & x) { return CityHash_v1_0_2::Hash128to64(x); }
|
|
||||||
static auto Hash64(const char * s, const size_t len)
|
|
||||||
{
|
|
||||||
union
|
|
||||||
{
|
|
||||||
UInt64 u64;
|
|
||||||
UInt8 u8[sizeof(u64)];
|
|
||||||
};
|
|
||||||
|
|
||||||
metrohash64_1(reinterpret_cast<const UInt8 *>(s), len, 0, u8);
|
|
||||||
|
|
||||||
return u64;
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
using FunctionHalfMD5 = FunctionStringHash<HalfMD5Impl, NameHalfMD5>;
|
|
||||||
using FunctionSipHash64 = FunctionStringHash<SipHash64Impl, NameSipHash64>;
|
|
||||||
using FunctionIntHash32 = FunctionIntHash<IntHash32Impl, NameIntHash32>;
|
using FunctionIntHash32 = FunctionIntHash<IntHash32Impl, NameIntHash32>;
|
||||||
using FunctionIntHash64 = FunctionIntHash<IntHash64Impl, NameIntHash64>;
|
using FunctionIntHash64 = FunctionIntHash<IntHash64Impl, NameIntHash64>;
|
||||||
using FunctionMD5 = FunctionStringHashFixedString<MD5Impl>;
|
using FunctionMD5 = FunctionStringHashFixedString<MD5Impl>;
|
||||||
@ -1002,12 +970,12 @@ using FunctionSHA1 = FunctionStringHashFixedString<SHA1Impl>;
|
|||||||
using FunctionSHA224 = FunctionStringHashFixedString<SHA224Impl>;
|
using FunctionSHA224 = FunctionStringHashFixedString<SHA224Impl>;
|
||||||
using FunctionSHA256 = FunctionStringHashFixedString<SHA256Impl>;
|
using FunctionSHA256 = FunctionStringHashFixedString<SHA256Impl>;
|
||||||
using FunctionSipHash128 = FunctionStringHashFixedString<SipHash128Impl>;
|
using FunctionSipHash128 = FunctionStringHashFixedString<SipHash128Impl>;
|
||||||
using FunctionCityHash64 = FunctionNeighbourhoodHash64<ImplCityHash64>;
|
using FunctionCityHash64 = FunctionAnyHash<ImplCityHash64>;
|
||||||
using FunctionFarmHash64 = FunctionNeighbourhoodHash64<ImplFarmHash64>;
|
using FunctionFarmHash64 = FunctionAnyHash<ImplFarmHash64>;
|
||||||
using FunctionMetroHash64 = FunctionNeighbourhoodHash64<ImplMetroHash64>;
|
using FunctionMetroHash64 = FunctionAnyHash<ImplMetroHash64>;
|
||||||
using FunctionMurmurHash2_32 = FunctionStringHash<MurmurHash2Impl32, NameMurmurHash2_32>;
|
using FunctionMurmurHash2_32 = FunctionAnyHash<MurmurHash2Impl32>;
|
||||||
using FunctionMurmurHash2_64 = FunctionStringHash<MurmurHash2Impl64, NameMurmurHash2_64>;
|
using FunctionMurmurHash2_64 = FunctionAnyHash<MurmurHash2Impl64>;
|
||||||
using FunctionMurmurHash3_32 = FunctionStringHash<MurmurHash3Impl32, NameMurmurHash3_32>;
|
using FunctionMurmurHash3_32 = FunctionAnyHash<MurmurHash3Impl32>;
|
||||||
using FunctionMurmurHash3_64 = FunctionStringHash<MurmurHash3Impl64, NameMurmurHash3_64>;
|
using FunctionMurmurHash3_64 = FunctionAnyHash<MurmurHash3Impl64>;
|
||||||
using FunctionMurmurHash3_128 = FunctionStringHashFixedString<MurmurHash3Impl128>;
|
using FunctionMurmurHash3_128 = FunctionStringHashFixedString<MurmurHash3Impl128>;
|
||||||
}
|
}
|
||||||
@@ -79,7 +79,7 @@ inline ALWAYS_INLINE void writeSlice(const NumericArraySlice<T> & slice, Generic
 {
     for (size_t i = 0; i < slice.size; ++i)
     {
-        Field field = static_cast<typename NearestFieldType<T>::Type>(slice.data[i]);
+        Field field = T(slice.data[i]);
         sink.elements.insert(field);
     }
     sink.current_offset += slice.size;
@@ -147,7 +147,7 @@ inline ALWAYS_INLINE void writeSlice(const GenericValueSlice & slice, NumericArr
 template <typename T>
 inline ALWAYS_INLINE void writeSlice(const NumericValueSlice<T> & slice, GenericArraySink & sink)
 {
-    Field field = static_cast<typename NearestFieldType<T>::Type>(slice.value);
+    Field field = T(slice.value);
     sink.elements.insert(field);
     ++sink.current_offset;
 }
@@ -33,7 +33,7 @@ struct ArrayAllImpl
                 throw Exception("Unexpected type of filter column", ErrorCodes::ILLEGAL_COLUMN);
 
             if (column_filter_const->getValue<UInt8>())
-                return DataTypeUInt8().createColumnConst(array.size(), UInt64(1));
+                return DataTypeUInt8().createColumnConst(array.size(), 1u);
             else
             {
                 const IColumn::Offsets & offsets = array.getOffsets();
@@ -48,7 +48,7 @@ struct ArrayCountImpl
                 return out_column;
             }
             else
                return DataTypeUInt32().createColumnConst(array.size(), UInt64(0));
+                return DataTypeUInt32().createColumnConst(array.size(), 0u);
         }
 
         const IColumn::Filter & filter = column_filter->getData();
@@ -856,7 +856,7 @@ void FunctionArrayElement::perform(Block & block, const ColumnNumbers & argument
         if (builder)
             builder.initSink(input_rows_count);
 
-        if (index == UInt64(0))
+        if (index == 0u)
             throw Exception("Array indices is 1-based", ErrorCodes::ZERO_ARRAY_OR_TUPLE_INDEX);
 
         if (!( executeNumberConst<UInt8>(block, arguments, result, index, builder)
@@ -48,7 +48,7 @@ struct ArrayExistsImpl
                 return out_column;
             }
             else
-                return DataTypeUInt8().createColumnConst(array.size(), UInt64(0));
+                return DataTypeUInt8().createColumnConst(array.size(), 0u);
         }
 
         const IColumn::Filter & filter = column_filter->getData();
@@ -45,7 +45,7 @@ struct ArrayFirstIndexImpl
                 return out_column;
             }
             else
-                return DataTypeUInt32().createColumnConst(array.size(), UInt64(0));
+                return DataTypeUInt32().createColumnConst(array.size(), 0u);
         }
 
         const auto & filter = column_filter->getData();
@@ -751,7 +751,7 @@ private:
 
         block.getByPosition(result).column = block.getByPosition(result).type->createColumnConst(
             item_arg->size(),
-            static_cast<typename NearestFieldType<typename IndexConv::ResultType>::Type>(current));
+            static_cast<typename IndexConv::ResultType>(current));
     }
     else
     {
@@ -429,7 +429,7 @@ ColumnPtr FunctionArrayIntersect::execute(const UnpackedArrays & arrays, Mutable
         {
             ++result_offset;
             if constexpr (is_numeric_column)
-                result_data.insert(pair.first);
+                result_data.insertValue(pair.first);
             else if constexpr (std::is_same<ColumnType, ColumnString>::value || std::is_same<ColumnType, ColumnFixedString>::value)
                 result_data.insertData(pair.first.data, pair.first.size);
             else
@@ -51,9 +51,9 @@ public:
     void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) override
     {
         if (auto type = checkAndGetDataType<DataTypeEnum8>(block.getByPosition(arguments[0]).type.get()))
-            block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, UInt64(type->getValues().size()));
+            block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, type->getValues().size());
         else if (auto type = checkAndGetDataType<DataTypeEnum16>(block.getByPosition(arguments[0]).type.get()))
-            block.getByPosition(result).column = DataTypeUInt16().createColumnConst(input_rows_count, UInt64(type->getValues().size()));
+            block.getByPosition(result).column = DataTypeUInt16().createColumnConst(input_rows_count, type->getValues().size());
         else
             throw Exception("The argument for function " + getName() + " must be Enum", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
     }
@@ -132,7 +132,7 @@ void FunctionHasColumnInTable::executeImpl(Block & block, const ColumnNumbers &
         has_column = remote_columns.hasPhysical(column_name);
     }
 
-    block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, UInt64(has_column));
+    block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, has_column);
 }
 
 
@@ -40,7 +40,7 @@ public:
 
     void executeImpl(Block & block, const ColumnNumbers &, size_t result, size_t input_rows_count) override
     {
-        block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, UInt64(0));
+        block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, 0u);
     }
 };
 
@@ -50,7 +50,7 @@ public:
 
     void executeImpl(Block & block, const ColumnNumbers &, size_t result, size_t input_rows_count) override
    {
-        block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, UInt64(1));
+        block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, 1u);
     }
 };
 
@@ -53,7 +53,7 @@ public:
         else
         {
             /// Since no element is nullable, return a constant one.
-            block.getByPosition(result).column = DataTypeUInt8().createColumnConst(elem.column->size(), UInt64(1));
+            block.getByPosition(result).column = DataTypeUInt8().createColumnConst(elem.column->size(), 1u);
         }
     }
 };
@@ -47,7 +47,7 @@ public:
         {
            /// Since no element is nullable, return a zero-constant column representing
            /// a zero-filled null map.
-            block.getByPosition(result).column = DataTypeUInt8().createColumnConst(elem.column->size(), UInt64(0));
+            block.getByPosition(result).column = DataTypeUInt8().createColumnConst(elem.column->size(), 0u);
         }
     }
 };
@@ -92,7 +92,7 @@ public:
         }
 
         /// convertToFullColumn needed, because otherwise (constant expression case) function will not get called on each block.
-        block.getByPosition(result).column = block.getByPosition(result).type->createColumnConst(size, UInt64(0))->convertToFullColumnIfConst();
+        block.getByPosition(result).column = block.getByPosition(result).type->createColumnConst(size, 0u)->convertToFullColumnIfConst();
     }
 };
 
@@ -103,7 +103,7 @@ struct TimeSlotsImpl
         Array & result)
     {
         for (UInt32 value = start / TIME_SLOT_SIZE; value <= (start + duration) / TIME_SLOT_SIZE; ++value)
-            result.push_back(static_cast<UInt64>(value * TIME_SLOT_SIZE));
+            result.push_back(value * TIME_SLOT_SIZE);
     }
 };
 
@@ -33,7 +33,7 @@ public:
     {
         block.getByPosition(result).column = DataTypeDate().createColumnConst(
             input_rows_count,
-            UInt64(DateLUT::instance().toDayNum(time(nullptr))));
+            DateLUT::instance().toDayNum(time(nullptr)));
     }
 };
 
@@ -35,7 +35,7 @@ public:
 
     void executeImpl(Block & block, const ColumnNumbers &, size_t result, size_t input_rows_count) override
     {
-        block.getByPosition(result).column = DataTypeString().createColumnConst(input_rows_count, String(VERSION_STRING));
+        block.getByPosition(result).column = DataTypeString().createColumnConst(input_rows_count, VERSION_STRING);
     }
 };
 
@@ -33,7 +33,7 @@ public:
     {
         block.getByPosition(result).column = DataTypeDate().createColumnConst(
             input_rows_count,
-            UInt64(DateLUT::instance().toDayNum(time(nullptr)) - 1));
+            DateLUT::instance().toDayNum(time(nullptr)) - 1);
     }
 };
 
@@ -115,7 +115,7 @@ bool ReadBufferAIO::nextImpl()
 
     /// If the end of the file is just reached, do nothing else.
     if (is_eof)
-        return true;
+        return bytes_read != 0;
 
     /// Create an asynchronous request.
     prepare();
@@ -568,8 +568,8 @@ void ActionsVisitor::makeSet(const ASTFunction * node, const Block & sample_bloc
     /// and the table has the type Set (a previously prepared set).
     if (identifier)
     {
-        auto database_table = getDatabaseAndTableNameFromIdentifier(*identifier);
-        StoragePtr table = context.tryGetTable(database_table.first, database_table.second);
+        DatabaseAndTableWithAlias database_table(*identifier);
+        StoragePtr table = context.tryGetTable(database_table.database, database_table.table);
 
         if (table)
         {
@ -1,46 +1,115 @@
|
|||||||
#pragma once
|
#pragma once
|
||||||
|
|
||||||
|
#include <Common/typeid_cast.h>
|
||||||
#include <Parsers/ASTQueryWithTableAndOutput.h>
|
#include <Parsers/ASTQueryWithTableAndOutput.h>
|
||||||
#include <Parsers/ASTRenameQuery.h>
|
#include <Parsers/ASTRenameQuery.h>
|
||||||
|
#include <Parsers/ASTIdentifier.h>
|
||||||
|
#include <Parsers/ASTSelectQuery.h>
|
||||||
|
#include <Parsers/ASTSubquery.h>
|
||||||
|
#include <Parsers/ASTSelectWithUnionQuery.h>
|
||||||
|
#include <Parsers/ASTTablesInSelectQuery.h>
|
||||||
|
#include <Parsers/DumpASTNode.h>
|
||||||
|
|
||||||
namespace DB
|
namespace DB
|
||||||
{
|
{
|
||||||
|
|
||||||
/// Visits AST nodes, add default database to DDLs if not set.
|
/// Visitors consist of functions with unified interface 'void visit(Casted & x, ASTPtr & y)', there x is y, successfully casted to Casted.
|
||||||
|
/// Both types and fuction could have const specifiers. The second argument is used by visitor to replaces AST node (y) if needed.
|
||||||
|
|
||||||
|
/// Visits AST nodes, add default database to tables if not set. There's different logic for DDLs and selects.
|
||||||
class AddDefaultDatabaseVisitor
|
class AddDefaultDatabaseVisitor
|
||||||
{
|
{
|
||||||
public:
|
public:
|
||||||
AddDefaultDatabaseVisitor(const String & default_database_)
|
AddDefaultDatabaseVisitor(const String & database_name_, std::ostream * ostr_ = nullptr)
|
||||||
: default_database(default_database_)
|
: database_name(database_name_),
|
||||||
|
visit_depth(0),
|
||||||
|
ostr(ostr_)
|
||||||
{}
|
{}
|
||||||
|
|
||||||
void visit(ASTPtr & ast) const
|
void visitDDL(ASTPtr & ast) const
|
||||||
{
|
{
|
||||||
visitChildren(ast);
|
visitDDLChildren(ast);
|
||||||
|
|
||||||
if (!tryVisit<ASTQueryWithTableAndOutput>(ast) &&
|
if (!tryVisitDynamicCast<ASTQueryWithTableAndOutput>(ast) &&
|
||||||
!tryVisit<ASTRenameQuery>(ast))
|
!tryVisitDynamicCast<ASTRenameQuery>(ast))
|
||||||
{}
|
{}
|
||||||
}
|
}
|
||||||
|
|
||||||
private:
|
void visit(ASTPtr & ast) const
|
||||||
const String default_database;
|
|
||||||
|
|
||||||
void visit(ASTQueryWithTableAndOutput * node, ASTPtr &) const
|
|
||||||
{
|
{
|
||||||
if (node->database.empty())
|
if (!tryVisit<ASTSelectQuery>(ast) &&
|
||||||
node->database = default_database;
|
!tryVisit<ASTSelectWithUnionQuery>(ast))
|
||||||
|
visitChildren(ast);
|
||||||
}
|
}
|
||||||
|
|
||||||
void visit(ASTRenameQuery * node, ASTPtr &) const
|
void visit(ASTSelectQuery & select) const
|
||||||
{
|
{
|
||||||
for (ASTRenameQuery::Element & elem : node->elements)
|
ASTPtr unused;
|
||||||
|
visit(select, unused);
|
||||||
|
}
|
||||||
|
|
||||||
|
void visit(ASTSelectWithUnionQuery & select) const
|
||||||
|
{
|
||||||
|
ASTPtr unused;
|
||||||
|
visit(select, unused);
|
||||||
|
}
|
||||||
|
|
||||||
|
private:
|
||||||
|
const String database_name;
|
||||||
|
mutable size_t visit_depth;
|
||||||
|
std::ostream * ostr;
|
||||||
|
|
||||||
|
void visit(ASTSelectWithUnionQuery & select, ASTPtr &) const
|
||||||
|
{
|
||||||
|
for (auto & child : select.list_of_selects->children)
|
||||||
|
tryVisit<ASTSelectQuery>(child);
|
||||||
|
}
|
||||||
|
|
||||||
|
void visit(ASTSelectQuery & select, ASTPtr &) const
|
||||||
|
{
|
||||||
|
if (select.tables)
|
||||||
|
tryVisit<ASTTablesInSelectQuery>(select.tables);
|
||||||
|
|
||||||
|
if (select.prewhere_expression)
|
||||||
|
visitChildren(select.prewhere_expression);
|
||||||
|
if (select.where_expression)
|
||||||
|
visitChildren(select.where_expression);
|
||||||
|
}
|
||||||
|
|
||||||
|
void visit(ASTTablesInSelectQuery & tables, ASTPtr &) const
|
||||||
|
{
|
||||||
|
for (auto & child : tables.children)
|
||||||
|
tryVisit<ASTTablesInSelectQueryElement>(child);
|
||||||
|
}
|
||||||
|
|
+    void visit(ASTTablesInSelectQueryElement & tables_element, ASTPtr &) const
+    {
+        if (tables_element.table_expression)
+            tryVisit<ASTTableExpression>(tables_element.table_expression);
+    }
+
+    void visit(ASTTableExpression & table_expression, ASTPtr &) const
+    {
+        if (table_expression.database_and_table_name)
         {
-            if (elem.from.database.empty())
-                elem.from.database = default_database;
+            tryVisit<ASTIdentifier>(table_expression.database_and_table_name);
 
-            if (elem.to.database.empty())
-                elem.to.database = default_database;
+            if (table_expression.database_and_table_name->children.size() != 2)
+                throw Exception("Logical error: more than two components in table expression", ErrorCodes::LOGICAL_ERROR);
         }
+        else if (table_expression.subquery)
+            tryVisit<ASTSubquery>(table_expression.subquery);
+    }
+
+    void visit(const ASTIdentifier & identifier, ASTPtr & ast) const
+    {
+        if (ast->children.empty())
+            ast = createDatabaseAndTableNode(database_name, identifier.name);
+    }
+
+    void visit(ASTSubquery & subquery, ASTPtr &) const
+    {
+        tryVisit<ASTSelectWithUnionQuery>(subquery.children[0]);
+    }
 
     void visitChildren(ASTPtr & ast) const
@@ -51,10 +120,46 @@ private:
     template <typename T>
     bool tryVisit(ASTPtr & ast) const
     {
         if (T * t = typeid_cast<T *>(ast.get()))
         {
             DumpASTNode dump(*ast, ostr, visit_depth, "addDefaultDatabaseName");
             visit(*t, ast);
             return true;
         }
         return false;
     }
 
+    void visitDDL(ASTQueryWithTableAndOutput & node, ASTPtr &) const
+    {
+        if (node.database.empty())
+            node.database = database_name;
+    }
+
+    void visitDDL(ASTRenameQuery & node, ASTPtr &) const
+    {
+        for (ASTRenameQuery::Element & elem : node.elements)
+        {
+            if (elem.from.database.empty())
+                elem.from.database = database_name;
+            if (elem.to.database.empty())
+                elem.to.database = database_name;
+        }
+    }
+
+    void visitDDLChildren(ASTPtr & ast) const
+    {
+        for (auto & child : ast->children)
+            visitDDL(child);
+    }
+
+    template <typename T>
+    bool tryVisitDynamicCast(ASTPtr & ast) const
     {
         if (T * t = dynamic_cast<T *>(ast.get()))
         {
-            visit(t, ast);
+            visitDDL(*t, ast);
             return true;
         }
         return false;
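The `tryVisit<T>` helper in the hunks above drives the whole visitor: each node type gets a `visit` overload, and dispatch happens through a runtime cast on the AST pointer. A minimal self-contained sketch of that shape (generic stand-in types and `dynamic_cast` here, not ClickHouse's actual `IAST` hierarchy or `typeid_cast`):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-ins for the AST node hierarchy.
struct IAST
{
    virtual ~IAST() = default;
    std::vector<std::shared_ptr<IAST>> children;
};
using ASTPtr = std::shared_ptr<IAST>;

struct ASTIdentifier : IAST
{
    std::string name;
    explicit ASTIdentifier(std::string n) : name(std::move(n)) {}
};

struct ASTSubquery : IAST {};

class AddDefaultDatabaseSketch
{
public:
    explicit AddDefaultDatabaseSketch(std::string db) : database_name(std::move(db)) {}

    // Public entry point: try each known node type, else recurse into children.
    void visit(ASTPtr & ast) const
    {
        if (!tryVisit<ASTIdentifier>(ast))
            for (auto & child : ast->children)
                visit(child);
    }

private:
    std::string database_name;

    // Qualify a bare table name with the default database.
    void visit(ASTIdentifier & identifier, ASTPtr &) const
    {
        if (identifier.name.find('.') == std::string::npos)
            identifier.name = database_name + "." + identifier.name;
    }

    // RTTI-based dispatch, mirroring the tryVisit<T> helper in the diff
    // (dynamic_cast stands in for ClickHouse's typeid_cast).
    template <typename T>
    bool tryVisit(ASTPtr & ast) const
    {
        if (T * t = dynamic_cast<T *>(ast.get()))
        {
            visit(*t, ast);
            return true;
        }
        return false;
    }
};
```

Calling `visit` on a tree qualifies every bare identifier in place and leaves already-qualified names untouched.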
@@ -1475,9 +1475,9 @@ void NO_INLINE Aggregator::mergeDataImpl(
     Table & table_src,
     Arena * arena) const
 {
-    for (auto it = table_src.begin(); it != table_src.end(); ++it)
+    for (auto it = table_src.begin(), end = table_src.end(); it != end; ++it)
     {
-        decltype(it) res_it;
+        typename Table::iterator res_it;
         bool inserted;
         table_dst.emplace(it->first, res_it, inserted, it.getHash());
 
@@ -1512,9 +1512,9 @@ void NO_INLINE Aggregator::mergeDataNoMoreKeysImpl(
     Table & table_src,
     Arena * arena) const
 {
-    for (auto it = table_src.begin(); it != table_src.end(); ++it)
+    for (auto it = table_src.begin(), end = table_src.end(); it != end; ++it)
     {
-        decltype(it) res_it = table_dst.find(it->first, it.getHash());
+        typename Table::iterator res_it = table_dst.find(it->first, it.getHash());
 
         AggregateDataPtr res_data = table_dst.end() == res_it
             ? overflows
@@ -817,7 +817,7 @@ struct AggregationMethodKeysFixed
             size_t bucket = i / 8;
             size_t offset = i % 8;
             UInt8 val = (reinterpret_cast<const UInt8 *>(&value.first)[bucket] >> offset) & 1;
-            null_map->insert(val);
+            null_map->insertValue(val);
             is_null = val == 1;
         }
         else
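The `mergeDataImpl` change above caches `end()` once and names the iterator type explicitly; the loop's overall shape is emplace-or-merge over the source table. A rough stand-in using `std::unordered_map` (the real code merges aggregate-function states allocated in an arena, not plain integers, and releases the source states afterwards):

```cpp
#include <cassert>
#include <unordered_map>

// Sketch of the merge-loop shape in Aggregator::mergeDataImpl: iterate the
// source table once, inserting missing keys into the destination and
// combining "states" (a running sum here) for keys already present.
void mergeData(std::unordered_map<int, long> & table_dst, std::unordered_map<int, long> & table_src)
{
    for (auto it = table_src.begin(), end = table_src.end(); it != end; ++it)
    {
        auto [res_it, inserted] = table_dst.emplace(it->first, it->second);
        if (!inserted)
            res_it->second += it->second;  // merge the two states
    }
    table_src.clear();  // the real implementation discards source states after the merge
}
```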
@@ -3,6 +3,9 @@
 namespace DB
 {
 
+/// Visitors consist of functions with unified interface 'void visit(Casted & x, ASTPtr & y)', there x is y, successfully casted to Casted.
+/// Both types and fuction could have const specifiers. The second argument is used by visitor to replaces AST node (y) if needed.
+
 /// Fills the array_join_result_to_source: on which columns-arrays to replicate, and how to call them after that.
 class ArrayJoinedColumnsVisitor
 {
@@ -27,48 +30,48 @@ private:
     NameToNameMap & array_join_alias_to_name;
     NameToNameMap & array_join_result_to_source;
 
-    void visit(const ASTTablesInSelectQuery *, ASTPtr &) const
+    void visit(const ASTTablesInSelectQuery &, ASTPtr &) const
     {}
 
-    void visit(const ASTIdentifier * node, ASTPtr &) const
+    void visit(const ASTIdentifier & node, ASTPtr &) const
     {
-        if (node->general())
-        {
-            auto splitted = Nested::splitName(node->name); /// ParsedParams, Key1
-
-            if (array_join_alias_to_name.count(node->name))
-            {
-                /// ARRAY JOIN was written with an array column. Example: SELECT K1 FROM ... ARRAY JOIN ParsedParams.Key1 AS K1
-                array_join_result_to_source[node->name] = array_join_alias_to_name[node->name]; /// K1 -> ParsedParams.Key1
-            }
-            else if (array_join_alias_to_name.count(splitted.first) && !splitted.second.empty())
-            {
-                /// ARRAY JOIN was written with a nested table. Example: SELECT PP.KEY1 FROM ... ARRAY JOIN ParsedParams AS PP
-                array_join_result_to_source[node->name] /// PP.Key1 -> ParsedParams.Key1
-                    = Nested::concatenateName(array_join_alias_to_name[splitted.first], splitted.second);
-            }
-            else if (array_join_name_to_alias.count(node->name))
-            {
-                /** Example: SELECT ParsedParams.Key1 FROM ... ARRAY JOIN ParsedParams.Key1 AS PP.Key1.
-                  * That is, the query uses the original array, replicated by itself.
-                  */
-                array_join_result_to_source[ /// PP.Key1 -> ParsedParams.Key1
-                    array_join_name_to_alias[node->name]] = node->name;
-            }
-            else if (array_join_name_to_alias.count(splitted.first) && !splitted.second.empty())
-            {
-                /** Example: SELECT ParsedParams.Key1 FROM ... ARRAY JOIN ParsedParams AS PP.
-                  */
-                array_join_result_to_source[ /// PP.Key1 -> ParsedParams.Key1
-                    Nested::concatenateName(array_join_name_to_alias[splitted.first], splitted.second)] = node->name;
-            }
-        }
+        if (!node.general())
+            return;
+
+        auto splitted = Nested::splitName(node.name); /// ParsedParams, Key1
+
+        if (array_join_alias_to_name.count(node.name))
+        {
+            /// ARRAY JOIN was written with an array column. Example: SELECT K1 FROM ... ARRAY JOIN ParsedParams.Key1 AS K1
+            array_join_result_to_source[node.name] = array_join_alias_to_name[node.name]; /// K1 -> ParsedParams.Key1
+        }
+        else if (array_join_alias_to_name.count(splitted.first) && !splitted.second.empty())
+        {
+            /// ARRAY JOIN was written with a nested table. Example: SELECT PP.KEY1 FROM ... ARRAY JOIN ParsedParams AS PP
+            array_join_result_to_source[node.name] /// PP.Key1 -> ParsedParams.Key1
+                = Nested::concatenateName(array_join_alias_to_name[splitted.first], splitted.second);
+        }
+        else if (array_join_name_to_alias.count(node.name))
+        {
+            /** Example: SELECT ParsedParams.Key1 FROM ... ARRAY JOIN ParsedParams.Key1 AS PP.Key1.
+              * That is, the query uses the original array, replicated by itself.
+              */
+            array_join_result_to_source[ /// PP.Key1 -> ParsedParams.Key1
+                array_join_name_to_alias[node.name]] = node.name;
+        }
+        else if (array_join_name_to_alias.count(splitted.first) && !splitted.second.empty())
+        {
+            /** Example: SELECT ParsedParams.Key1 FROM ... ARRAY JOIN ParsedParams AS PP.
+              */
+            array_join_result_to_source[ /// PP.Key1 -> ParsedParams.Key1
+                Nested::concatenateName(array_join_name_to_alias[splitted.first], splitted.second)] = node.name;
+        }
     }
 
-    void visit(const ASTSubquery *, ASTPtr &) const
+    void visit(const ASTSubquery &, ASTPtr &) const
     {}
 
-    void visit(const ASTSelectQuery *, ASTPtr &) const
+    void visit(const ASTSelectQuery &, ASTPtr &) const
     {}
 
     void visitChildren(ASTPtr & ast) const
@@ -84,7 +87,7 @@ private:
     {
         if (const T * t = typeid_cast<const T *>(ast.get()))
         {
-            visit(t, ast);
+            visit(*t, ast);
             return true;
         }
         return false;
@@ -1427,6 +1427,15 @@ UInt16 Context::getTCPPort() const
     return config.getInt("tcp_port");
 }
 
+std::optional<UInt16> Context::getTCPPortSecure() const
+{
+    auto lock = getLock();
+
+    auto & config = getConfigRef();
+    if (config.has("tcp_port_secure"))
+        return config.getInt("tcp_port_secure");
+    return {};
+}
+
 std::shared_ptr<Cluster> Context::getCluster(const std::string & cluster_name) const
 {
@@ -7,6 +7,7 @@
 #include <mutex>
 #include <thread>
 #include <atomic>
+#include <optional>
 
 #include <Common/config.h>
 #include <common/MultiVersion.h>
@@ -277,6 +278,8 @@ public:
     /// The port that the server listens for executing SQL queries.
     UInt16 getTCPPort() const;
 
+    std::optional<UInt16> getTCPPortSecure() const;
+
     /// Get query for the CREATE table.
     ASTPtr getCreateTableQuery(const String & database_name, const String & table_name) const;
     ASTPtr getCreateExternalTableQuery(const String & table_name) const;
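`getTCPPortSecure` returns `std::optional<UInt16>` so callers such as DDLWorker can prefer the secure port when it is configured and fall back to the plain TCP port otherwise. A condensed sketch of that decision (hypothetical helper names; `isLocalPort` stands in for the `HostID::isLocalAddress` check used in the diff):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Stand-in for the address comparison done by HostID::isLocalAddress.
bool isLocalPort(uint16_t port, uint16_t local_port)
{
    return port == local_port;
}

// Mirrors the DDLWorker logic introduced above: when a secure port is
// configured, check against it; otherwise check against the plain port.
bool hostUsesLocalPort(uint16_t host_port, std::optional<uint16_t> secure_port, uint16_t tcp_port)
{
    return secure_port ? isLocalPort(host_port, *secure_port)
                       : isLocalPort(host_port, tcp_port);
}
```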
@@ -300,7 +300,9 @@ bool DDLWorker::initAndCheckTask(const String & entry_name, String & out_reason)
     bool host_in_hostlist = false;
     for (const HostID & host : task->entry.hosts)
     {
-        if (!host.isLocalAddress(context.getTCPPort()))
+        auto maybe_secure_port = context.getTCPPortSecure();
+        bool is_local_port = maybe_secure_port ? host.isLocalAddress(*maybe_secure_port) : host.isLocalAddress(context.getTCPPort());
+        if (!is_local_port)
             continue;
 
         if (host_in_hostlist)
@@ -477,7 +479,8 @@ void DDLWorker::parseQueryAndResolveHost(DDLTask & task)
     {
         const Cluster::Address & address = shards[shard_num][replica_num];
 
-        if (isLocalAddress(address.getResolvedAddress(), context.getTCPPort()))
+        if (isLocalAddress(address.getResolvedAddress(), context.getTCPPort())
+            || (context.getTCPPortSecure() && isLocalAddress(address.getResolvedAddress(), *context.getTCPPortSecure())))
         {
             if (found_via_resolving)
             {
@@ -562,6 +565,7 @@ void DDLWorker::processTask(DDLTask & task)
     String finished_node_path = task.entry_path + "/finished/" + task.host_id_str;
 
     auto code = zookeeper->tryCreate(active_node_path, "", zkutil::CreateMode::Ephemeral, dummy);
+
     if (code == Coordination::ZOK || code == Coordination::ZNODEEXISTS)
     {
         // Ok
@@ -943,7 +947,7 @@ void DDLWorker::run()
         }
         catch (...)
         {
-            LOG_ERROR(log, "Unexpected error: " << getCurrentExceptionMessage(true) << ". Terminating.");
+            tryLogCurrentException(log, "Unexpected error, will terminate:");
             return;
         }
     }
@@ -1056,16 +1060,16 @@ public:
             Cluster::Address::fromString(host_id, host, port);
 
             if (status.code != 0 && first_exception == nullptr)
-                first_exception = std::make_unique<Exception>("There was an error on " + host + ": " + status.message, status.code);
+                first_exception = std::make_unique<Exception>("There was an error on [" + host + ":" + toString(port) + "]: " + status.message, status.code);
 
             ++num_hosts_finished;
 
             columns[0]->insert(host);
-            columns[1]->insert(static_cast<UInt64>(port));
-            columns[2]->insert(static_cast<Int64>(status.code));
+            columns[1]->insert(port);
+            columns[2]->insert(status.code);
             columns[3]->insert(status.message);
-            columns[4]->insert(static_cast<UInt64>(waiting_hosts.size() - num_hosts_finished));
-            columns[5]->insert(static_cast<UInt64>(current_active_hosts.size()));
+            columns[4]->insert(waiting_hosts.size() - num_hosts_finished);
+            columns[5]->insert(current_active_hosts.size());
         }
         res = sample.cloneWithColumns(std::move(columns));
     }
@@ -1208,7 +1212,7 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont
     if (use_local_default_db)
     {
         AddDefaultDatabaseVisitor visitor(current_database);
-        visitor.visit(query_ptr);
+        visitor.visitDDL(query_ptr);
     }
 
     DDLLogEntry entry;
@@ -1,4 +1,4 @@
-#include <Interpreters/evaluateQualified.h>
+#include <Interpreters/DatabaseAndTableWithAlias.h>
 #include <Interpreters/Context.h>
 #include <Common/typeid_cast.h>
 
@@ -6,6 +6,7 @@
 #include <Parsers/ASTIdentifier.h>
 #include <Parsers/ASTTablesInSelectQuery.h>
 #include <Parsers/ASTSelectQuery.h>
+#include <Parsers/ASTSubquery.h>
 
 namespace DB
 {
@@ -47,46 +48,6 @@ void stripIdentifier(DB::ASTPtr & ast, size_t num_qualifiers_to_strip)
     }
 }
 
-
-DatabaseAndTableWithAlias getTableNameWithAliasFromTableExpression(const ASTTableExpression & table_expression,
-                                                                   const String & current_database)
-{
-    DatabaseAndTableWithAlias database_and_table_with_alias;
-
-    if (table_expression.database_and_table_name)
-    {
-        const auto & identifier = static_cast<const ASTIdentifier &>(*table_expression.database_and_table_name);
-
-        database_and_table_with_alias.alias = identifier.tryGetAlias();
-
-        if (table_expression.database_and_table_name->children.empty())
-        {
-            database_and_table_with_alias.database = current_database;
-            database_and_table_with_alias.table = identifier.name;
-        }
-        else
-        {
-            if (table_expression.database_and_table_name->children.size() != 2)
-                throw Exception("Logical error: number of components in table expression not equal to two", ErrorCodes::LOGICAL_ERROR);
-
-            database_and_table_with_alias.database = static_cast<const ASTIdentifier &>(*identifier.children[0]).name;
-            database_and_table_with_alias.table = static_cast<const ASTIdentifier &>(*identifier.children[1]).name;
-        }
-    }
-    else if (table_expression.table_function)
-    {
-        database_and_table_with_alias.alias = table_expression.table_function->tryGetAlias();
-    }
-    else if (table_expression.subquery)
-    {
-        database_and_table_with_alias.alias = table_expression.subquery->tryGetAlias();
-    }
-    else
-        throw Exception("Logical error: no known elements in ASTTableExpression", ErrorCodes::LOGICAL_ERROR);
-
-    return database_and_table_with_alias;
-}
-
 /// Get the number of components of identifier which are correspond to 'alias.', 'table.' or 'databas.table.' from names.
 size_t getNumComponentsToStripInOrderToTranslateQualifiedName(const ASTIdentifier & identifier,
     const DatabaseAndTableWithAlias & names)
@@ -121,19 +82,44 @@ size_t getNumComponentsToStripInOrderToTranslateQualifiedName(const ASTIdentifie
     return num_qualifiers_to_strip;
 }
 
-std::pair<String, String> getDatabaseAndTableNameFromIdentifier(const ASTIdentifier & identifier)
+DatabaseAndTableWithAlias::DatabaseAndTableWithAlias(const ASTIdentifier & identifier, const String & current_database)
 {
-    std::pair<String, String> res;
-    res.second = identifier.name;
+    database = current_database;
+    table = identifier.name;
+    alias = identifier.tryGetAlias();
 
     if (!identifier.children.empty())
     {
         if (identifier.children.size() != 2)
-            throw Exception("Qualified table name could have only two components", ErrorCodes::LOGICAL_ERROR);
+            throw Exception("Logical error: number of components in table expression not equal to two", ErrorCodes::LOGICAL_ERROR);
 
-        res.first = typeid_cast<const ASTIdentifier &>(*identifier.children[0]).name;
-        res.second = typeid_cast<const ASTIdentifier &>(*identifier.children[1]).name;
+        const ASTIdentifier * db_identifier = typeid_cast<const ASTIdentifier *>(identifier.children[0].get());
+        const ASTIdentifier * table_identifier = typeid_cast<const ASTIdentifier *>(identifier.children[1].get());
+        if (!db_identifier || !table_identifier)
+            throw Exception("Logical error: identifiers expected", ErrorCodes::LOGICAL_ERROR);
+
+        database = db_identifier->name;
+        table = table_identifier->name;
     }
-    return res;
+}
+
+DatabaseAndTableWithAlias::DatabaseAndTableWithAlias(const ASTTableExpression & table_expression, const String & current_database)
+{
+    if (table_expression.database_and_table_name)
+    {
+        const auto * identifier = typeid_cast<const ASTIdentifier *>(table_expression.database_and_table_name.get());
+        if (!identifier)
+            throw Exception("Logical error: identifier expected", ErrorCodes::LOGICAL_ERROR);
+
+        *this = DatabaseAndTableWithAlias(*identifier, current_database);
+    }
+    else if (table_expression.table_function)
+        alias = table_expression.table_function->tryGetAlias();
+    else if (table_expression.subquery)
+        alias = table_expression.subquery->tryGetAlias();
+    else
+        throw Exception("Logical error: no known elements in ASTTableExpression", ErrorCodes::LOGICAL_ERROR);
 }
 
 String DatabaseAndTableWithAlias::getQualifiedNamePrefix() const
@@ -165,14 +151,14 @@ void DatabaseAndTableWithAlias::makeQualifiedName(const ASTPtr & ast) const
     }
 }
 
-std::vector<const ASTTableExpression *> getSelectTablesExpression(const ASTSelectQuery * select_query)
+std::vector<const ASTTableExpression *> getSelectTablesExpression(const ASTSelectQuery & select_query)
 {
-    if (!select_query->tables)
+    if (!select_query.tables)
         return {};
 
     std::vector<const ASTTableExpression *> tables_expression;
 
-    for (const auto & child : select_query->tables->children)
+    for (const auto & child : select_query.tables->children)
     {
         ASTTablesInSelectQueryElement * tables_element = static_cast<ASTTablesInSelectQueryElement *>(child.get());
 
@@ -183,7 +169,25 @@ std::vector<const ASTTableExpression *> getSelectTablesExpression(const ASTSelec
     return tables_expression;
 }
 
-std::vector<DatabaseAndTableWithAlias> getDatabaseAndTableWithAliases(const ASTSelectQuery * select_query, const String & current_database)
+static const ASTTableExpression * getTableExpression(const ASTSelectQuery & select, size_t table_number)
+{
+    if (!select.tables)
+        return {};
+
+    ASTTablesInSelectQuery & tables_in_select_query = static_cast<ASTTablesInSelectQuery &>(*select.tables);
+    if (tables_in_select_query.children.size() <= table_number)
+        return {};
+
+    ASTTablesInSelectQueryElement & tables_element =
+        static_cast<ASTTablesInSelectQueryElement &>(*tables_in_select_query.children[table_number]);
+
+    if (!tables_element.table_expression)
+        return {};
+
+    return static_cast<const ASTTableExpression *>(tables_element.table_expression.get());
+}
+
+std::vector<DatabaseAndTableWithAlias> getDatabaseAndTables(const ASTSelectQuery & select_query, const String & current_database)
 {
     std::vector<const ASTTableExpression *> tables_expression = getSelectTablesExpression(select_query);
 
@@ -191,9 +195,51 @@ std::vector<DatabaseAndTableWithAlias> getDatabaseAndTableWithAliases(const ASTS
     database_and_table_with_aliases.reserve(tables_expression.size());
 
     for (const auto & table_expression : tables_expression)
-        database_and_table_with_aliases.emplace_back(getTableNameWithAliasFromTableExpression(*table_expression, current_database));
+        database_and_table_with_aliases.emplace_back(DatabaseAndTableWithAlias(*table_expression, current_database));
 
     return database_and_table_with_aliases;
 }
 
+std::optional<DatabaseAndTableWithAlias> getDatabaseAndTable(const ASTSelectQuery & select, size_t table_number)
+{
+    const ASTTableExpression * table_expression = getTableExpression(select, table_number);
+    if (!table_expression)
+        return {};
+
+    ASTPtr database_and_table_name = table_expression->database_and_table_name;
+    if (!database_and_table_name)
+        return {};
+
+    const ASTIdentifier * identifier = typeid_cast<const ASTIdentifier *>(database_and_table_name.get());
+    if (!identifier)
+        return {};
+
+    return *identifier;
+}
+
+ASTPtr getTableFunctionOrSubquery(const ASTSelectQuery & select, size_t table_number)
+{
+    const ASTTableExpression * table_expression = getTableExpression(select, table_number);
+    if (table_expression)
+    {
+#if 1 /// TODO: It hides some logical error in InterpreterSelectQuery & distributed tables
+        if (table_expression->database_and_table_name)
+        {
+            if (table_expression->database_and_table_name->children.empty())
+                return table_expression->database_and_table_name;
+
+            if (table_expression->database_and_table_name->children.size() == 2)
+                return table_expression->database_and_table_name->children[1];
+        }
+#endif
+        if (table_expression->table_function)
+            return table_expression->table_function;
+
+        if (table_expression->subquery)
+            return static_cast<const ASTSubquery *>(table_expression->subquery.get())->children[0];
+    }
+
+    return nullptr;
+}
+
 }
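The new `DatabaseAndTableWithAlias` constructors split a possibly-qualified identifier into database and table, substituting the current database for unqualified names. A simplified string-based sketch (the real constructor walks the identifier's AST children with `typeid_cast` rather than splitting a dotted string):

```cpp
#include <cassert>
#include <string>

// Hypothetical, simplified analogue of DatabaseAndTableWithAlias:
// a bare identifier gets the current database; a "db.table" name is
// split into its two components.
struct DatabaseAndTable
{
    std::string database;
    std::string table;

    DatabaseAndTable(const std::string & identifier, const std::string & current_database)
    {
        auto pos = identifier.find('.');
        if (pos == std::string::npos)
        {
            database = current_database;
            table = identifier;
        }
        else
        {
            database = identifier.substr(0, pos);
            table = identifier.substr(pos + 1);
        }
    }
};
```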
@@ -1,6 +1,7 @@
 #pragma once
 
 #include <memory>
+#include <optional>
 #include <Core/Types.h>
 
 namespace DB
@@ -14,12 +15,16 @@ class ASTIdentifier;
 struct ASTTableExpression;
 
 
+/// Extracts database name (and/or alias) from table expression or identifier
 struct DatabaseAndTableWithAlias
 {
     String database;
     String table;
     String alias;
 
+    DatabaseAndTableWithAlias(const ASTIdentifier & identifier, const String & current_database = "");
+    DatabaseAndTableWithAlias(const ASTTableExpression & table_expression, const String & current_database);
+
     /// "alias." or "database.table." if alias is empty
     String getQualifiedNamePrefix() const;
 
@@ -29,15 +34,13 @@ struct DatabaseAndTableWithAlias
 
 void stripIdentifier(DB::ASTPtr & ast, size_t num_qualifiers_to_strip);
 
-DatabaseAndTableWithAlias getTableNameWithAliasFromTableExpression(const ASTTableExpression & table_expression,
-                                                                   const String & current_database);
-
 size_t getNumComponentsToStripInOrderToTranslateQualifiedName(const ASTIdentifier & identifier,
     const DatabaseAndTableWithAlias & names);
 
-std::pair<String, String> getDatabaseAndTableNameFromIdentifier(const ASTIdentifier & identifier);
+std::vector<DatabaseAndTableWithAlias> getDatabaseAndTables(const ASTSelectQuery & select_query, const String & current_database);
+std::optional<DatabaseAndTableWithAlias> getDatabaseAndTable(const ASTSelectQuery & select, size_t table_number);
 
-std::vector<const ASTTableExpression *> getSelectTablesExpression(const ASTSelectQuery * select_query);
-std::vector<DatabaseAndTableWithAlias> getDatabaseAndTableWithAliases(const ASTSelectQuery * select_query, const String & current_database);
+std::vector<const ASTTableExpression *> getSelectTablesExpression(const ASTSelectQuery & select_query);
+ASTPtr getTableFunctionOrSubquery(const ASTSelectQuery & select, size_t table_number);
 
 }
@ -35,7 +35,7 @@ static ASTPtr addTypeConversion(std::unique_ptr<ASTLiteral> && ast, const String
|
|||||||
return res;
|
return res;
|
||||||
}
|
}
|
||||||
|
|
||||||
void ExecuteScalarSubqueriesVisitor::visit(const ASTSubquery * subquery, ASTPtr & ast, const DumpASTNode &) const
|
void ExecuteScalarSubqueriesVisitor::visit(const ASTSubquery & subquery, ASTPtr & ast) const
|
||||||
{
|
{
|
||||||
Context subquery_context = context;
|
Context subquery_context = context;
|
||||||
Settings subquery_settings = context.getSettings();
|
Settings subquery_settings = context.getSettings();
|
||||||
@ -43,7 +43,7 @@ void ExecuteScalarSubqueriesVisitor::visit(const ASTSubquery * subquery, ASTPtr
|
|||||||
subquery_settings.extremes = 0;
|
subquery_settings.extremes = 0;
|
||||||
subquery_context.setSettings(subquery_settings);
|
subquery_context.setSettings(subquery_settings);
|
||||||
|
|
||||||
ASTPtr subquery_select = subquery->children.at(0);
|
ASTPtr subquery_select = subquery.children.at(0);
|
||||||
BlockIO res = InterpreterSelectWithUnionQuery(
|
BlockIO res = InterpreterSelectWithUnionQuery(
|
||||||
subquery_select, subquery_context, {}, QueryProcessingStage::Complete, subquery_depth + 1).execute();
|
subquery_select, subquery_context, {}, QueryProcessingStage::Complete, subquery_depth + 1).execute();
|
||||||
|
|
||||||
@ -76,14 +76,14 @@ void ExecuteScalarSubqueriesVisitor::visit(const ASTSubquery * subquery, ASTPtr
|
|||||||
if (columns == 1)
|
if (columns == 1)
|
||||||
{
|
{
|
||||||
auto lit = std::make_unique<ASTLiteral>((*block.safeGetByPosition(0).column)[0]);
|
auto lit = std::make_unique<ASTLiteral>((*block.safeGetByPosition(0).column)[0]);
|
||||||
lit->alias = subquery->alias;
|
lit->alias = subquery.alias;
|
||||||
lit->prefer_alias_to_column_name = subquery->prefer_alias_to_column_name;
|
lit->prefer_alias_to_column_name = subquery.prefer_alias_to_column_name;
|
||||||
ast = addTypeConversion(std::move(lit), block.safeGetByPosition(0).type->getName());
|
ast = addTypeConversion(std::move(lit), block.safeGetByPosition(0).type->getName());
|
||||||
}
|
}
|
||||||
else
|
else
|
||||||
{
|
{
|
||||||
auto tuple = std::make_shared<ASTFunction>();
|
auto tuple = std::make_shared<ASTFunction>();
|
||||||
tuple->alias = subquery->alias;
|
tuple->alias = subquery.alias;
     ast = tuple;
     tuple->name = "tuple";
     auto exp_list = std::make_shared<ASTExpressionList>();
@@ -101,26 +101,26 @@ void ExecuteScalarSubqueriesVisitor::visit(const ASTSubquery * subquery, ASTPtr
 }
 
 
-void ExecuteScalarSubqueriesVisitor::visit(const ASTTableExpression *, ASTPtr &, const DumpASTNode &) const
+void ExecuteScalarSubqueriesVisitor::visit(const ASTTableExpression &, ASTPtr &) const
 {
     /// Don't descend into subqueries in FROM section.
 }
 
-void ExecuteScalarSubqueriesVisitor::visit(const ASTFunction * func, ASTPtr & ast, const DumpASTNode &) const
+void ExecuteScalarSubqueriesVisitor::visit(const ASTFunction & func, ASTPtr & ast) const
 {
     /// Don't descend into subqueries in arguments of IN operator.
     /// But if an argument is not subquery, than deeper may be scalar subqueries and we need to descend in them.
 
-    if (functionIsInOrGlobalInOperator(func->name))
+    if (functionIsInOrGlobalInOperator(func.name))
     {
         for (auto & child : ast->children)
         {
-            if (child != func->arguments)
+            if (child != func.arguments)
                 visit(child);
             else
-                for (size_t i = 0, size = func->arguments->children.size(); i < size; ++i)
-                    if (i != 1 || !typeid_cast<ASTSubquery *>(func->arguments->children[i].get()))
-                        visit(func->arguments->children[i]);
+                for (size_t i = 0, size = func.arguments->children.size(); i < size; ++i)
+                    if (i != 1 || !typeid_cast<ASTSubquery *>(func.arguments->children[i].get()))
+                        visit(func.arguments->children[i]);
         }
     }
     else
@@ -11,6 +11,8 @@ class ASTSubquery;
 class ASTFunction;
 struct ASTTableExpression;
 
+/// Visitors consist of functions with unified interface 'void visit(Casted & x, ASTPtr & y)', there x is y, successfully casted to Casted.
+/// Both types and fuction could have const specifiers. The second argument is used by visitor to replaces AST node (y) if needed.
 
 /** Replace subqueries that return exactly one row
   * ("scalar" subqueries) to the corresponding constants.
@@ -40,11 +42,9 @@ public:
 
     void visit(ASTPtr & ast) const
     {
-        DumpASTNode dump(*ast, ostr, visit_depth, "executeScalarSubqueries");
-        if (!tryVisit<ASTSubquery>(ast, dump) &&
-            !tryVisit<ASTTableExpression>(ast, dump) &&
-            !tryVisit<ASTFunction>(ast, dump))
+        if (!tryVisit<ASTSubquery>(ast) &&
+            !tryVisit<ASTTableExpression>(ast) &&
+            !tryVisit<ASTFunction>(ast))
             visitChildren(ast);
     }
 
@@ -54,9 +54,9 @@ private:
     mutable size_t visit_depth;
     std::ostream * ostr;
 
-    void visit(const ASTSubquery * subquery, ASTPtr & ast, const DumpASTNode & dump) const;
-    void visit(const ASTFunction * func, ASTPtr & ast, const DumpASTNode &) const;
-    void visit(const ASTTableExpression *, ASTPtr &, const DumpASTNode &) const;
+    void visit(const ASTSubquery & subquery, ASTPtr & ast) const;
+    void visit(const ASTFunction & func, ASTPtr & ast) const;
+    void visit(const ASTTableExpression &, ASTPtr &) const;
 
     void visitChildren(ASTPtr & ast) const
     {
@@ -65,11 +65,12 @@ private:
     }
 
     template <typename T>
-    bool tryVisit(ASTPtr & ast, const DumpASTNode & dump) const
+    bool tryVisit(ASTPtr & ast) const
     {
         if (const T * t = typeid_cast<const T *>(ast.get()))
         {
-            visit(t, ast, dump);
+            DumpASTNode dump(*ast, ostr, visit_depth, "executeScalarSubqueries");
+            visit(*t, ast);
             return true;
         }
         return false;
@@ -271,8 +271,6 @@ void ExpressionAction::prepare(Block & sample_block, const Settings & settings)
                 const std::string & name = projection[i].first;
                 const std::string & alias = projection[i].second;
                 ColumnWithTypeAndName column = sample_block.getByName(name);
-                if (column.column)
-                    column.column = (*std::move(column.column)).mutate();
                 if (alias != "")
                     column.name = alias;
                 new_block.insert(std::move(column));
@@ -485,8 +483,6 @@ void ExpressionAction::execute(Block & block, std::unordered_map<std::string, si
                 const std::string & name = projection[i].first;
                 const std::string & alias = projection[i].second;
                 ColumnWithTypeAndName column = block.getByName(name);
-                if (column.column)
-                    column.column = (*std::move(column.column)).mutate();
                 if (alias != "")
                     column.name = alias;
                 new_block.insert(std::move(column));
@@ -59,7 +59,7 @@
 #include <Parsers/parseQuery.h>
 #include <Parsers/queryToString.h>
 #include <Interpreters/interpretSubquery.h>
-#include <Interpreters/evaluateQualified.h>
+#include <Interpreters/DatabaseAndTableWithAlias.h>
 #include <Interpreters/QueryNormalizer.h>
 
 #include <Interpreters/QueryAliasesVisitor.h>
@@ -172,36 +172,26 @@ ExpressionAnalyzer::ExpressionAnalyzer(
 
     if (!storage && select_query)
     {
-        auto select_database = select_query->database();
-        auto select_table = select_query->table();
-
-        if (select_table
-            && !typeid_cast<const ASTSelectWithUnionQuery *>(select_table.get())
-            && !typeid_cast<const ASTFunction *>(select_table.get()))
-        {
-            String database = select_database
-                ? typeid_cast<const ASTIdentifier &>(*select_database).name
-                : "";
-            const String & table = typeid_cast<const ASTIdentifier &>(*select_table).name;
-            storage = context.tryGetTable(database, table);
-        }
+        if (auto db_and_table = getDatabaseAndTable(*select_query, 0))
+            storage = context.tryGetTable(db_and_table->database, db_and_table->table);
     }
 
-    if (storage && source_columns.empty())
+    if (storage)
     {
         auto physical_columns = storage->getColumns().getAllPhysical();
         if (source_columns.empty())
             source_columns.swap(physical_columns);
         else
-        {
             source_columns.insert(source_columns.end(), physical_columns.begin(), physical_columns.end());
-            removeDuplicateColumns(source_columns);
+
+        if (select_query)
+        {
+            const auto & storage_aliases = storage->getColumns().aliases;
+            source_columns.insert(source_columns.end(), storage_aliases.begin(), storage_aliases.end());
         }
     }
-    else
-        removeDuplicateColumns(source_columns);
 
-    addAliasColumns();
+    removeDuplicateColumns(source_columns);
 
     translateQualifiedNames();
 
@@ -215,8 +205,8 @@ ExpressionAnalyzer::ExpressionAnalyzer(
     /// Creates a dictionary `aliases`: alias -> ASTPtr
     {
         LogAST log;
-        QueryAliasesVisitor query_aliases_visitor(log.stream());
-        query_aliases_visitor.visit(query, aliases);
+        QueryAliasesVisitor query_aliases_visitor(aliases, log.stream());
+        query_aliases_visitor.visit(query);
     }
 
     /// Common subexpression elimination. Rewrite rules.
@@ -280,7 +270,7 @@ void ExpressionAnalyzer::translateQualifiedNames()
     if (!select_query || !select_query->tables || select_query->tables->children.empty())
         return;
 
-    std::vector<DatabaseAndTableWithAlias> tables = getDatabaseAndTableWithAliases(select_query, context.getCurrentDatabase());
+    std::vector<DatabaseAndTableWithAlias> tables = getDatabaseAndTables(*select_query, context.getCurrentDatabase());
 
     LogAST log;
     TranslateQualifiedNamesVisitor visitor(source_columns, tables, log.stream());
@@ -533,8 +523,8 @@ static NamesAndTypesList getNamesAndTypeListFromTableExpression(const ASTTableEx
     else if (table_expression.database_and_table_name)
     {
         const auto & identifier = static_cast<const ASTIdentifier &>(*table_expression.database_and_table_name);
-        auto database_table = getDatabaseAndTableNameFromIdentifier(identifier);
-        const auto & table = context.getTable(database_table.first, database_table.second);
+        DatabaseAndTableWithAlias database_table(identifier);
+        const auto & table = context.getTable(database_table.database, database_table.table);
         names_and_type_list = table->getSampleBlockNonMaterialized().getNamesAndTypesList();
     }
 
@@ -561,12 +551,12 @@ void ExpressionAnalyzer::normalizeTree()
     TableNamesAndColumnNames table_names_and_column_names;
     if (select_query && select_query->tables && !select_query->tables->children.empty())
     {
-        std::vector<const ASTTableExpression *> tables_expression = getSelectTablesExpression(select_query);
+        std::vector<const ASTTableExpression *> tables_expression = getSelectTablesExpression(*select_query);
 
         bool first = true;
         for (const auto * table_expression : tables_expression)
         {
-            const auto table_name = getTableNameWithAliasFromTableExpression(*table_expression, context.getCurrentDatabase());
+            DatabaseAndTableWithAlias table_name(*table_expression, context.getCurrentDatabase());
             NamesAndTypesList names_and_types = getNamesAndTypeListFromTableExpression(*table_expression, context);
 
             if (!first)
@@ -587,19 +577,6 @@ void ExpressionAnalyzer::normalizeTree()
 }
 
 
-void ExpressionAnalyzer::addAliasColumns()
-{
-    if (!select_query)
-        return;
-
-    if (!storage)
-        return;
-
-    const auto & storage_aliases = storage->getColumns().aliases;
-    source_columns.insert(std::end(source_columns), std::begin(storage_aliases), std::end(storage_aliases));
-}
-
-
 void ExpressionAnalyzer::executeScalarSubqueries()
 {
     LogAST log;
@@ -1244,19 +1221,24 @@ const ExpressionAnalyzer::AnalyzedJoin::JoinedColumnsList & ExpressionAnalyzer::
     if (const ASTTablesInSelectQueryElement * node = select_query_with_join->join())
     {
         const auto & table_expression = static_cast<const ASTTableExpression &>(*node->table_expression);
-        auto table_name_with_alias = getTableNameWithAliasFromTableExpression(table_expression, context.getCurrentDatabase());
+        DatabaseAndTableWithAlias table_name_with_alias(table_expression, context.getCurrentDatabase());
 
         auto columns = getNamesAndTypeListFromTableExpression(table_expression, context);
 
         for (auto & column : columns)
         {
-            columns_from_joined_table.emplace_back(column, column.name);
+            JoinedColumn joined_column(column, column.name);
 
             if (source_columns.contains(column.name))
             {
                 auto qualified_name = table_name_with_alias.getQualifiedNamePrefix() + column.name;
-                columns_from_joined_table.back().name_and_type.name = qualified_name;
+                joined_column.name_and_type.name = qualified_name;
             }
 
+            /// We don't want to select duplicate columns from the joined subquery if they appear
+            if (std::find(columns_from_joined_table.begin(), columns_from_joined_table.end(), joined_column) == columns_from_joined_table.end())
+                columns_from_joined_table.push_back(joined_column);
         }
     }
 }
@@ -1302,8 +1284,8 @@ bool ExpressionAnalyzer::appendJoin(ExpressionActionsChain & chain, bool only_ty
     if (table_to_join.database_and_table_name)
    {
         const auto & identifier = static_cast<const ASTIdentifier &>(*table_to_join.database_and_table_name);
-        auto database_table = getDatabaseAndTableNameFromIdentifier(identifier);
-        StoragePtr table = context.tryGetTable(database_table.first, database_table.second);
+        DatabaseAndTableWithAlias database_table(identifier);
+        StoragePtr table = context.tryGetTable(database_table.database, database_table.table);
 
         if (table)
         {
@@ -1845,8 +1827,8 @@ void ExpressionAnalyzer::collectJoinedColumnsFromJoinOnExpr()
     const auto & left_table_expression = static_cast<const ASTTableExpression &>(*left_tables_element->table_expression);
     const auto & right_table_expression = static_cast<const ASTTableExpression &>(*right_tables_element->table_expression);
 
-    auto left_source_names = getTableNameWithAliasFromTableExpression(left_table_expression, context.getCurrentDatabase());
-    auto right_source_names = getTableNameWithAliasFromTableExpression(right_table_expression, context.getCurrentDatabase());
+    DatabaseAndTableWithAlias left_source_names(left_table_expression, context.getCurrentDatabase());
+    DatabaseAndTableWithAlias right_source_names(right_table_expression, context.getCurrentDatabase());
 
     /// Stores examples of columns which are only from one table.
     struct TableBelonging
@@ -1999,7 +1981,7 @@ void ExpressionAnalyzer::collectJoinedColumns(NameSet & joined_columns)
 
     const auto & table_join = static_cast<const ASTTableJoin &>(*node->table_join);
     const auto & table_expression = static_cast<const ASTTableExpression &>(*node->table_expression);
-    auto joined_table_name = getTableNameWithAliasFromTableExpression(table_expression, context.getCurrentDatabase());
+    DatabaseAndTableWithAlias joined_table_name(table_expression, context.getCurrentDatabase());
 
     auto add_name_to_join_keys = [&](Names & join_keys, ASTs & join_asts, const ASTPtr & ast, bool right_table)
     {
@@ -259,6 +259,11 @@ private:
 
         JoinedColumn(const NameAndTypePair & name_and_type_, const String & original_name_)
             : name_and_type(name_and_type_), original_name(original_name_) {}
+
+        bool operator==(const JoinedColumn & o) const
+        {
+            return name_and_type == o.name_and_type && original_name == o.original_name;
+        }
     };
 
     using JoinedColumnsList = std::list<JoinedColumn>;
@@ -322,9 +327,6 @@ private:
     void optimizeIfWithConstantConditionImpl(ASTPtr & current_ast);
     bool tryExtractConstValueFromCondition(const ASTPtr & condition, bool & value) const;
 
-    /// Adds a list of ALIAS columns from the table.
-    void addAliasColumns();
-
     /// Replacing scalar subqueries with constant values.
     void executeScalarSubqueries();
 
@@ -239,7 +239,7 @@ void ExternalLoader::reloadFromConfigFiles(const bool throw_on_error, const bool
             if (current_config.find(loadable.first) == std::end(current_config))
                 removed_loadable_objects.emplace_back(loadable.first);
         }
-        for(const auto & name : removed_loadable_objects)
+        for (const auto & name : removed_loadable_objects)
             loadable_objects.erase(name);
     }
 
@ -25,11 +25,11 @@ private:
|
|||||||
const Context & context;
|
const Context & context;
|
||||||
Tables & external_tables;
|
Tables & external_tables;
|
||||||
|
|
||||||
void visit(const ASTIdentifier * node, ASTPtr &) const
|
void visit(const ASTIdentifier & node, ASTPtr &) const
|
||||||
{
|
{
|
||||||
if (node->special())
|
if (node.special())
|
||||||
if (StoragePtr external_storage = context.tryGetExternalTable(node->name))
|
if (StoragePtr external_storage = context.tryGetExternalTable(node.name))
|
||||||
external_tables[node->name] = external_storage;
|
external_tables[node.name] = external_storage;
|
||||||
}
|
}
|
||||||
|
|
||||||
template <typename T>
|
template <typename T>
|
||||||
@ -37,7 +37,7 @@ private:
|
|||||||
{
|
{
|
||||||
if (const T * t = typeid_cast<const T *>(ast.get()))
|
if (const T * t = typeid_cast<const T *>(ast.get()))
|
||||||
{
|
{
|
||||||
visit(t, ast);
|
visit(*t, ast);
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
return false;
|
return false;
|
||||||
|
@ -3,6 +3,9 @@
|
|||||||
namespace DB
|
namespace DB
|
||||||
{
|
{
|
||||||
|
|
||||||
|
/// Visitors consist of functions with unified interface 'void visit(Casted & x, ASTPtr & y)', there x is y, successfully casted to Casted.
|
||||||
|
/// Both types and fuction could have const specifiers. The second argument is used by visitor to replaces AST node (y) if needed.
|
||||||
|
|
||||||
/// Converts GLOBAL subqueries to external tables; Puts them into the external_tables dictionary: name -> StoragePtr.
|
/// Converts GLOBAL subqueries to external tables; Puts them into the external_tables dictionary: name -> StoragePtr.
|
||||||
class GlobalSubqueriesVisitor
|
class GlobalSubqueriesVisitor
|
||||||
{
|
{
|
||||||
@ -41,22 +44,22 @@ private:
|
|||||||
bool & has_global_subqueries;
|
bool & has_global_subqueries;
|
||||||
|
|
||||||
/// GLOBAL IN
|
/// GLOBAL IN
|
||||||
void visit(ASTFunction * func, ASTPtr &) const
|
void visit(ASTFunction & func, ASTPtr &) const
|
||||||
{
|
{
|
||||||
if (func->name == "globalIn" || func->name == "globalNotIn")
|
if (func.name == "globalIn" || func.name == "globalNotIn")
|
||||||
{
|
{
|
||||||
addExternalStorage(func->arguments->children.at(1));
|
addExternalStorage(func.arguments->children.at(1));
|
||||||
has_global_subqueries = true;
|
has_global_subqueries = true;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
/// GLOBAL JOIN
|
/// GLOBAL JOIN
|
||||||
void visit(ASTTablesInSelectQueryElement * table_elem, ASTPtr &) const
|
void visit(ASTTablesInSelectQueryElement & table_elem, ASTPtr &) const
|
||||||
{
|
{
|
||||||
if (table_elem->table_join
|
if (table_elem.table_join
|
||||||
&& static_cast<const ASTTableJoin &>(*table_elem->table_join).locality == ASTTableJoin::Locality::Global)
|
&& static_cast<const ASTTableJoin &>(*table_elem.table_join).locality == ASTTableJoin::Locality::Global)
|
||||||
{
|
{
|
||||||
addExternalStorage(table_elem->table_expression);
|
addExternalStorage(table_elem.table_expression);
|
||||||
has_global_subqueries = true;
|
has_global_subqueries = true;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@ -66,7 +69,7 @@ private:
|
|||||||
{
|
{
|
||||||
if (T * t = typeid_cast<T *>(ast.get()))
|
if (T * t = typeid_cast<T *>(ast.get()))
|
||||||
{
|
{
|
||||||
visit(t, ast);
|
visit(*t, ast);
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
return false;
|
return false;
|
||||||
@ -139,7 +142,7 @@ private:
|
|||||||
* instead of doing a subquery, you just need to read it.
|
* instead of doing a subquery, you just need to read it.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
auto database_and_table_name = ASTIdentifier::createSpecial(external_table_name);
|
auto database_and_table_name = createDatabaseAndTableNode("", external_table_name);
|
||||||
|
|
||||||
if (auto ast_table_expr = typeid_cast<ASTTableExpression *>(subquery_or_table_name_or_table_expression.get()))
|
if (auto ast_table_expr = typeid_cast<ASTTableExpression *>(subquery_or_table_name_or_table_expression.get()))
|
||||||
{
|
{
|
||||||
|
@ -1,5 +1,6 @@
|
|||||||
#include <Interpreters/InJoinSubqueriesPreprocessor.h>
|
#include <Interpreters/InJoinSubqueriesPreprocessor.h>
|
||||||
#include <Interpreters/Context.h>
|
#include <Interpreters/Context.h>
|
||||||
|
#include <Interpreters/DatabaseAndTableWithAlias.h>
|
||||||
#include <Storages/StorageDistributed.h>
|
#include <Storages/StorageDistributed.h>
|
||||||
#include <Parsers/ASTSelectQuery.h>
|
#include <Parsers/ASTSelectQuery.h>
|
||||||
#include <Parsers/ASTTablesInSelectQuery.h>
|
#include <Parsers/ASTTablesInSelectQuery.h>
|
||||||
@ -81,40 +82,13 @@ void forEachTable(IAST * node, F && f)
|
|||||||
|
|
||||||
StoragePtr tryGetTable(const ASTPtr & database_and_table, const Context & context)
|
StoragePtr tryGetTable(const ASTPtr & database_and_table, const Context & context)
|
||||||
{
|
{
|
||||||
String database;
|
const ASTIdentifier * id = typeid_cast<const ASTIdentifier *>(database_and_table.get());
|
||||||
String table;
|
if (!id)
|
||||||
|
throw Exception("Logical error: identifier expected", ErrorCodes::LOGICAL_ERROR);
|
||||||
|
|
||||||
const ASTIdentifier * id = static_cast<const ASTIdentifier *>(database_and_table.get());
|
DatabaseAndTableWithAlias db_and_table(*id);
|
||||||
|
|
||||||
if (id->children.empty())
|
return context.tryGetTable(db_and_table.database, db_and_table.table);
|
||||||
table = id->name;
|
|
||||||
else if (id->children.size() == 2)
|
|
||||||
{
|
|
||||||
database = static_cast<const ASTIdentifier *>(id->children[0].get())->name;
|
|
||||||
table = static_cast<const ASTIdentifier *>(id->children[1].get())->name;
|
|
||||||
}
|
|
||||||
else
|
|
||||||
throw Exception("Logical error: unexpected number of components in table expression", ErrorCodes::LOGICAL_ERROR);
|
|
||||||
|
|
||||||
return context.tryGetTable(database, table);
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
void replaceDatabaseAndTable(ASTPtr & database_and_table, const String & database_name, const String & table_name)
|
|
||||||
{
|
|
||||||
ASTPtr table = ASTIdentifier::createSpecial(table_name);
|
|
||||||
|
|
||||||
if (!database_name.empty())
|
|
||||||
{
|
|
||||||
ASTPtr database = ASTIdentifier::createSpecial(database_name);
|
|
||||||
|
|
||||||
database_and_table = ASTIdentifier::createSpecial(database_name + "." + table_name);
|
|
||||||
database_and_table->children = {database, table};
|
|
||||||
}
|
|
||||||
else
|
|
||||||
{
|
|
||||||
database_and_table = ASTIdentifier::createSpecial(table_name);
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
@ -156,7 +130,7 @@ void InJoinSubqueriesPreprocessor::process(ASTSelectQuery * query) const
|
|||||||
|
|
||||||
forEachNonGlobalSubquery(query, [&] (IAST * subquery, IAST * function, IAST * table_join)
|
forEachNonGlobalSubquery(query, [&] (IAST * subquery, IAST * function, IAST * table_join)
|
||||||
{
|
{
|
||||||
forEachTable(subquery, [&] (ASTPtr & database_and_table)
|
forEachTable(subquery, [&] (ASTPtr & database_and_table)
|
||||||
{
|
{
|
||||||
StoragePtr storage = tryGetTable(database_and_table, context);
|
StoragePtr storage = tryGetTable(database_and_table, context);
|
||||||
|
|
||||||
@ -199,7 +173,8 @@ void InJoinSubqueriesPreprocessor::process(ASTSelectQuery * query) const
|
|||||||
std::string table;
|
std::string table;
|
||||||
std::tie(database, table) = getRemoteDatabaseAndTableName(*storage);
|
std::tie(database, table) = getRemoteDatabaseAndTableName(*storage);
|
||||||
|
|
||||||
replaceDatabaseAndTable(database_and_table, database, table);
|
/// TODO: find a way to avoid AST node replacing
|
||||||
|
database_and_table = createDatabaseAndTableNode(database, table);
|
||||||
}
|
}
|
||||||
else
|
else
|
||||||
throw Exception("InJoinSubqueriesPreprocessor: unexpected value of 'distributed_product_mode' setting", ErrorCodes::LOGICAL_ERROR);
|
throw Exception("InJoinSubqueriesPreprocessor: unexpected value of 'distributed_product_mode' setting", ErrorCodes::LOGICAL_ERROR);
|
||||||
@@ -26,7 +26,7 @@ BlockIO InterpreterCheckQuery::execute()
     StoragePtr table = context.getTable(database_name, table_name);
 
     auto column = ColumnUInt8::create();
-    column->insert(UInt64(table->checkData()));
+    column->insertValue(UInt64(table->checkData()));
     result = Block{{ std::move(column), std::make_shared<DataTypeUInt8>(), "result" }};
 
     BlockIO res;
@ -30,6 +30,7 @@
|
|||||||
#include <Interpreters/InterpreterSelectWithUnionQuery.h>
|
#include <Interpreters/InterpreterSelectWithUnionQuery.h>
|
||||||
#include <Interpreters/InterpreterInsertQuery.h>
|
#include <Interpreters/InterpreterInsertQuery.h>
|
||||||
#include <Interpreters/ExpressionActions.h>
|
#include <Interpreters/ExpressionActions.h>
|
||||||
|
#include <Interpreters/AddDefaultDatabaseVisitor.h>
|
||||||
|
|
||||||
#include <DataTypes/DataTypeFactory.h>
|
#include <DataTypes/DataTypeFactory.h>
|
||||||
#include <DataTypes/NestedUtils.h>
|
#include <DataTypes/NestedUtils.h>
|
||||||
@ -211,7 +212,7 @@ static ColumnsAndDefaults parseColumns(const ASTExpressionList & column_list_ast
|
|||||||
|
|
||||||
default_expr_list->children.emplace_back(setAlias(
|
default_expr_list->children.emplace_back(setAlias(
|
||||||
makeASTFunction("CAST", std::make_shared<ASTIdentifier>(tmp_column_name),
|
makeASTFunction("CAST", std::make_shared<ASTIdentifier>(tmp_column_name),
|
||||||
std::make_shared<ASTLiteral>(Field(data_type_ptr->getName()))), final_column_name));
|
std::make_shared<ASTLiteral>(data_type_ptr->getName())), final_column_name));
|
||||||
default_expr_list->children.emplace_back(setAlias(col_decl.default_expression->clone(), tmp_column_name));
|
default_expr_list->children.emplace_back(setAlias(col_decl.default_expression->clone(), tmp_column_name));
|
||||||
}
|
}
|
||||||
else
|
else
|
||||||
@ -511,7 +512,10 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)
|
|||||||
create.to_database = current_database;
|
create.to_database = current_database;
|
||||||
|
|
||||||
if (create.select && (create.is_view || create.is_materialized_view))
|
if (create.select && (create.is_view || create.is_materialized_view))
|
||||||
create.select->setDatabaseIfNeeded(current_database);
|
{
|
||||||
|
AddDefaultDatabaseVisitor visitor(current_database);
|
||||||
|
visitor.visit(*create.select);
|
||||||
|
}
|
||||||
|
|
||||||
Block as_select_sample;
|
Block as_select_sample;
|
||||||
if (create.select && (!create.attach || !create.columns))
|
if (create.select && (!create.attach || !create.columns))
|
||||||
|
@ -10,7 +10,6 @@
|
|||||||
#include <DataStreams/SquashingBlockOutputStream.h>
|
#include <DataStreams/SquashingBlockOutputStream.h>
|
||||||
#include <DataStreams/copyData.h>
|
#include <DataStreams/copyData.h>
|
||||||
|
|
||||||
#include <Parsers/ASTIdentifier.h>
|
|
||||||
#include <Parsers/ASTInsertQuery.h>
|
#include <Parsers/ASTInsertQuery.h>
|
||||||
#include <Parsers/ASTSelectWithUnionQuery.h>
|
#include <Parsers/ASTSelectWithUnionQuery.h>
|
||||||
|
|
||||||
@@ -23,6 +23,7 @@ namespace ErrorCodes
 {
     extern const int READONLY;
     extern const int LOGICAL_ERROR;
+    extern const int CANNOT_KILL;
 }
 
 
@@ -62,7 +63,7 @@ using QueryDescriptors = std::vector<QueryDescriptor>;
 
 static void insertResultRow(size_t n, CancellationCode code, const Block & source_processes, const Block & sample_block, MutableColumns & columns)
 {
-    columns[0]->insert(String(cancellationCodeToStatus(code)));
+    columns[0]->insert(cancellationCodeToStatus(code));
 
     for (size_t col_num = 1, size = columns.size(); col_num < size; ++col_num)
         columns[col_num]->insertFrom(*source_processes.getByName(sample_block.getByPosition(col_num).name).column, n);
@@ -138,13 +139,17 @@ public:
 
             auto code = process_list.sendCancelToQuery(curr_process.query_id, curr_process.user, true);
 
-            if (code != CancellationCode::QueryIsNotInitializedYet && code != CancellationCode::CancelSent)
+            /// Raise exception if this query is immortal, user have to know
+            /// This could happen only if query generate streams that don't implement IProfilingBlockInputStream
+            if (code == CancellationCode::CancelCannotBeSent)
+                throw Exception("Can't kill query '" + curr_process.query_id + "' it consits of unkillable stages", ErrorCodes::CANNOT_KILL);
+            else if (code != CancellationCode::QueryIsNotInitializedYet && code != CancellationCode::CancelSent)
             {
                 curr_process.processed = true;
                 insertResultRow(curr_process.source_num, code, processes_block, res_sample_block, columns);
                 ++num_processed_queries;
             }
-            /// Wait if QueryIsNotInitializedYet or CancelSent
+            /// Wait if CancelSent
         }
 
         /// KILL QUERY could be killed also
@@ -194,6 +199,12 @@ BlockIO InterpreterKillQueryQuery::execute()
         for (const auto & query_desc : queries_to_stop)
         {
             auto code = (query.test) ? CancellationCode::Unknown : process_list.sendCancelToQuery(query_desc.query_id, query_desc.user, true);
 
+            /// Raise exception if this query is immortal, user have to know
+            /// This could happen only if query generate streams that don't implement IProfilingBlockInputStream
+            if (code == CancellationCode::CancelCannotBeSent)
+                throw Exception("Can't kill query '" + query_desc.query_id + "' it consits of unkillable stages", ErrorCodes::CANNOT_KILL);
+
             insertResultRow(query_desc.source_num, code, processes_block, header, res_columns);
         }
 
@ -34,6 +34,7 @@
 #include <Interpreters/InterpreterSelectWithUnionQuery.h>
 #include <Interpreters/InterpreterSetQuery.h>
 #include <Interpreters/ExpressionAnalyzer.h>
+#include <Interpreters/DatabaseAndTableWithAlias.h>
 #include <Storages/MergeTree/MergeTreeWhereOptimizer.h>
 
 #include <Storages/IStorage.h>
@ -146,7 +147,7 @@ InterpreterSelectQuery::InterpreterSelectQuery(
 
     max_streams = settings.max_threads;
 
-    const auto & table_expression = query.table();
+    ASTPtr table_expression = getTableFunctionOrSubquery(query, 0);
 
     if (input)
     {
@ -205,7 +206,7 @@ InterpreterSelectQuery::InterpreterSelectQuery(
     if (query_analyzer->isRewriteSubqueriesPredicate())
     {
         /// remake interpreter_subquery when PredicateOptimizer is rewrite subqueries and main table is subquery
-        if (typeid_cast<ASTSelectWithUnionQuery *>(table_expression.get()))
+        if (table_expression && typeid_cast<ASTSelectWithUnionQuery *>(table_expression.get()))
             interpreter_subquery = std::make_unique<InterpreterSelectWithUnionQuery>(
                 table_expression, getSubqueryContext(context), required_columns, QueryProcessingStage::Complete, subquery_depth + 1,
                 only_analyze);
@ -236,29 +237,20 @@ InterpreterSelectQuery::InterpreterSelectQuery(
 
 void InterpreterSelectQuery::getDatabaseAndTableNames(String & database_name, String & table_name)
 {
-    auto query_database = query.database();
-    auto query_table = query.table();
-
-    /** If the table is not specified - use the table `system.one`.
-      * If the database is not specified - use the current database.
-      */
-    if (query_database)
-        database_name = typeid_cast<ASTIdentifier &>(*query_database).name;
-    if (query_table)
-        table_name = typeid_cast<ASTIdentifier &>(*query_table).name;
-
-    if (!query_table)
+    if (auto db_and_table = getDatabaseAndTable(query, 0))
+    {
+        table_name = db_and_table->table;
+        database_name = db_and_table->database;
+
+        /// If the database is not specified - use the current database.
+        if (database_name.empty() && !context.tryGetTable("", table_name))
+            database_name = context.getCurrentDatabase();
+    }
+    else /// If the table is not specified - use the table `system.one`.
     {
         database_name = "system";
         table_name = "one";
     }
-    else if (!query_database)
-    {
-        if (context.tryGetTable("", table_name))
-            database_name = "";
-        else
-            database_name = context.getCurrentDatabase();
-    }
 }
 
 
@ -331,6 +323,8 @@ InterpreterSelectQuery::AnalysisResult InterpreterSelectQuery::analyzeExpression
 
             res.prewhere_info->remove_columns_actions = std::move(actions);
         }
 
+        res.columns_to_remove_after_prewhere = std::move(columns_to_remove);
     }
     if (has_where)
         res.remove_where_filter = chain.steps.at(where_step_num).can_remove_required_output.at(0);
@ -503,7 +497,7 @@ void InterpreterSelectQuery::executeImpl(Pipeline & pipeline, const BlockInputSt
         throw Exception("Distributed on Distributed is not supported", ErrorCodes::NOT_IMPLEMENTED);
 
     /** Read the data from Storage. from_stage - to what stage the request was completed in Storage. */
-    executeFetchColumns(from_stage, pipeline, expressions.prewhere_info);
+    executeFetchColumns(from_stage, pipeline, expressions.prewhere_info, expressions.columns_to_remove_after_prewhere);
 
     LOG_TRACE(log, QueryProcessingStage::toString(from_stage) << " -> " << QueryProcessingStage::toString(to_stage));
 }
@ -694,7 +688,8 @@ static void getLimitLengthAndOffset(ASTSelectQuery & query, size_t & length, siz
 
 
 void InterpreterSelectQuery::executeFetchColumns(
-    QueryProcessingStage::Enum processing_stage, Pipeline & pipeline, const PrewhereInfoPtr & prewhere_info)
+    QueryProcessingStage::Enum processing_stage, Pipeline & pipeline,
+    const PrewhereInfoPtr & prewhere_info, const Names & columns_to_remove_after_prewhere)
 {
     const Settings & settings = context.getSettingsRef();
 
@ -759,11 +754,15 @@ void InterpreterSelectQuery::executeFetchColumns(
         /// Columns which we will get after prewhere execution.
         NamesAndTypesList additional_source_columns;
         /// Add columns which will be added by prewhere (otherwise we will remove them in project action).
+        NameSet columns_to_remove(columns_to_remove_after_prewhere.begin(), columns_to_remove_after_prewhere.end());
         for (const auto & column : prewhere_actions_result)
        {
             if (prewhere_info->remove_prewhere_column && column.name == prewhere_info->prewhere_column_name)
                 continue;
 
+            if (columns_to_remove.count(column.name))
+                continue;
+
             required_columns_expr_list->children.emplace_back(std::make_shared<ASTIdentifier>(column.name));
             additional_source_columns.emplace_back(column.name, column.type);
         }
@ -884,8 +883,12 @@ void InterpreterSelectQuery::executeFetchColumns(
         /// If we need less number of columns that subquery have - update the interpreter.
         if (required_columns.size() < source_header.columns())
         {
+            ASTPtr subquery = getTableFunctionOrSubquery(query, 0);
+            if (!subquery)
+                throw Exception("Subquery expected", ErrorCodes::LOGICAL_ERROR);
+
             interpreter_subquery = std::make_unique<InterpreterSelectWithUnionQuery>(
-                query.table(), getSubqueryContext(context), required_columns, QueryProcessingStage::Complete, subquery_depth + 1, only_analyze);
+                subquery, getSubqueryContext(context), required_columns, QueryProcessingStage::Complete, subquery_depth + 1, only_analyze);
 
             if (query_analyzer->hasAggregation())
                 interpreter_subquery->ignoreWithTotals();
@ -1240,6 +1243,8 @@ void InterpreterSelectQuery::executeMergeSorted(Pipeline & pipeline)
     /// If there are several streams, then we merge them into one
     if (pipeline.hasMoreThanOneStream())
     {
+        unifyStreams(pipeline);
+
        /** MergingSortedBlockInputStream reads the sources sequentially.
          * To make the data on the remote servers prepared in parallel, we wrap it in AsynchronousBlockInputStream.
          */
@ -1294,16 +1299,7 @@ void InterpreterSelectQuery::executeUnion(Pipeline & pipeline)
    /// If there are still several streams, then we combine them into one
    if (pipeline.hasMoreThanOneStream())
    {
-        /// Unify streams in case they have different headers.
-        auto first_header = pipeline.streams.at(0)->getHeader();
-
-        for (size_t i = 1; i < pipeline.streams.size(); ++i)
-        {
-            auto & stream = pipeline.streams[i];
-            auto header = stream->getHeader();
-            auto mode = ConvertingBlockInputStream::MatchColumnsMode::Name;
-            if (!blocksHaveEqualStructure(first_header, header))
-                stream = std::make_shared<ConvertingBlockInputStream>(context, stream, first_header, mode);
-        }
+        unifyStreams(pipeline);
 
         pipeline.firstStream() = std::make_shared<UnionBlockInputStream<>>(pipeline.streams, pipeline.stream_with_non_joined_data, max_streams);
         pipeline.stream_with_non_joined_data = nullptr;
@ -1362,11 +1358,9 @@ bool hasWithTotalsInAnySubqueryInFromClause(const ASTSelectQuery & query)
      * In other cases, totals will be computed on the initiating server of the query, and it is not necessary to read the data to the end.
      */
 
-    auto query_table = query.table();
-    if (query_table)
+    if (auto query_table = getTableFunctionOrSubquery(query, 0))
    {
-        auto ast_union = typeid_cast<const ASTSelectWithUnionQuery *>(query_table.get());
-        if (ast_union)
+        if (auto ast_union = typeid_cast<const ASTSelectWithUnionQuery *>(query_table.get()))
        {
            for (const auto & elem : ast_union->list_of_selects->children)
                if (hasWithTotalsInAnySubqueryInFromClause(typeid_cast<const ASTSelectQuery &>(*elem)))
@ -1435,6 +1429,23 @@ void InterpreterSelectQuery::executeSubqueriesInSetsAndJoins(Pipeline & pipeline
            SizeLimits(settings.max_rows_to_transfer, settings.max_bytes_to_transfer, settings.transfer_overflow_mode));
 }
 
+void InterpreterSelectQuery::unifyStreams(Pipeline & pipeline)
+{
+    if (pipeline.hasMoreThanOneStream())
+    {
+        /// Unify streams in case they have different headers.
+        auto first_header = pipeline.streams.at(0)->getHeader();
+
+        for (size_t i = 1; i < pipeline.streams.size(); ++i)
+        {
+            auto & stream = pipeline.streams[i];
+            auto header = stream->getHeader();
+            auto mode = ConvertingBlockInputStream::MatchColumnsMode::Name;
+            if (!blocksHaveEqualStructure(first_header, header))
+                stream = std::make_shared<ConvertingBlockInputStream>(context, stream, first_header, mode);
+        }
+    }
+}
+
 
 void InterpreterSelectQuery::ignoreWithTotals()
 {
@ -150,6 +150,9 @@ private:
     /// Columns from the SELECT list, before renaming them to aliases.
     Names selected_columns;
 
+    /// Columns will be removed after prewhere actions execution.
+    Names columns_to_remove_after_prewhere;
+
     /// Do I need to perform the first part of the pipeline - running on remote servers during distributed processing.
     bool first_stage = false;
     /// Do I need to execute the second part of the pipeline - running on the initiating server during distributed processing.
@ -171,7 +174,8 @@ private:
     /// dry_run - don't read from table, use empty header block instead.
     void executeWithMultipleStreamsImpl(Pipeline & pipeline, const BlockInputStreamPtr & input, bool dry_run);
 
-    void executeFetchColumns(QueryProcessingStage::Enum processing_stage, Pipeline & pipeline, const PrewhereInfoPtr & prewhere_info);
+    void executeFetchColumns(QueryProcessingStage::Enum processing_stage, Pipeline & pipeline,
+                             const PrewhereInfoPtr & prewhere_info, const Names & columns_to_remove_after_prewhere);
 
     void executeWhere(Pipeline & pipeline, const ExpressionActionsPtr & expression, bool remove_filter);
     void executeAggregation(Pipeline & pipeline, const ExpressionActionsPtr & expression, bool overflow_row, bool final);
@ -190,6 +194,9 @@ private:
     void executeExtremes(Pipeline & pipeline);
     void executeSubqueriesInSetsAndJoins(Pipeline & pipeline, std::unordered_map<String, SubqueryForSet> & subqueries_for_sets);
 
+    /// If pipeline has several streams with different headers, add ConvertingBlockInputStream to first header.
+    void unifyStreams(Pipeline & pipeline);
+
     enum class Modificator
     {
         ROLLUP = 0,
@ -5,7 +5,6 @@
 #include <Interpreters/InterpreterShowProcesslistQuery.h>
 
 #include <Parsers/ASTQueryWithOutput.h>
-#include <Parsers/ASTIdentifier.h>
 
 
 namespace DB
@ -1,6 +1,5 @@
 #include <IO/ReadBufferFromString.h>
 #include <Parsers/ASTShowTablesQuery.h>
-#include <Parsers/ASTIdentifier.h>
 #include <Interpreters/Context.h>
 #include <Interpreters/executeQuery.h>
 #include <Interpreters/InterpreterShowTablesQuery.h>
@ -15,6 +15,8 @@
 namespace DB
 {
 
+template <> struct NearestFieldType<PartLogElement::Type> { using Type = UInt64; };
+
 Block PartLogElement::createBlock()
 {
     auto event_type_datatype = std::make_shared<DataTypeEnum8>(
@ -60,18 +62,18 @@ void PartLogElement::appendToBlock(Block & block) const
 
     size_t i = 0;
 
-    columns[i++]->insert(Int64(event_type));
-    columns[i++]->insert(UInt64(DateLUT::instance().toDayNum(event_time)));
-    columns[i++]->insert(UInt64(event_time));
-    columns[i++]->insert(UInt64(duration_ms));
+    columns[i++]->insert(event_type);
+    columns[i++]->insert(DateLUT::instance().toDayNum(event_time));
+    columns[i++]->insert(event_time);
+    columns[i++]->insert(duration_ms);
 
     columns[i++]->insert(database_name);
     columns[i++]->insert(table_name);
     columns[i++]->insert(part_name);
     columns[i++]->insert(partition_id);
 
-    columns[i++]->insert(UInt64(rows));
-    columns[i++]->insert(UInt64(bytes_compressed_on_disk));
+    columns[i++]->insert(rows);
+    columns[i++]->insert(bytes_compressed_on_disk);
 
     Array source_part_names_array;
     source_part_names_array.reserve(source_part_names.size());
@ -80,11 +82,11 @@ void PartLogElement::appendToBlock(Block & block) const
 
     columns[i++]->insert(source_part_names_array);
 
-    columns[i++]->insert(UInt64(bytes_uncompressed));
-    columns[i++]->insert(UInt64(rows_read));
-    columns[i++]->insert(UInt64(bytes_read_uncompressed));
+    columns[i++]->insert(bytes_uncompressed);
+    columns[i++]->insert(rows_read);
+    columns[i++]->insert(bytes_read_uncompressed);
 
-    columns[i++]->insert(UInt64(error));
+    columns[i++]->insert(error);
     columns[i++]->insert(exception);
 
     block.setColumns(std::move(columns));
@ -45,7 +45,7 @@ bool PredicateExpressionsOptimizer::optimizeImpl(
     PredicateExpressions outer_predicate_expressions = splitConjunctionPredicate(outer_expression);
 
     std::vector<DatabaseAndTableWithAlias> database_and_table_with_aliases =
-        getDatabaseAndTableWithAliases(ast_select, context.getCurrentDatabase());
+        getDatabaseAndTables(*ast_select, context.getCurrentDatabase());
 
     bool is_rewrite_subquery = false;
     for (const auto & outer_predicate : outer_predicate_expressions)
@ -258,15 +258,14 @@ bool PredicateExpressionsOptimizer::optimizeExpression(const ASTPtr & outer_expr
 
 void PredicateExpressionsOptimizer::getAllSubqueryProjectionColumns(SubqueriesProjectionColumns & all_subquery_projection_columns)
 {
-    const auto tables_expression = getSelectTablesExpression(ast_select);
+    const auto tables_expression = getSelectTablesExpression(*ast_select);
 
     for (const auto & table_expression : tables_expression)
     {
         if (table_expression->subquery)
         {
             /// Use qualifiers to translate the columns of subqueries
-            const auto database_and_table_with_alias =
-                getTableNameWithAliasFromTableExpression(*table_expression, context.getCurrentDatabase());
+            DatabaseAndTableWithAlias database_and_table_with_alias(*table_expression, context.getCurrentDatabase());
             String qualified_name_prefix = database_and_table_with_alias.getQualifiedNamePrefix();
             getSubqueryProjectionColumns(all_subquery_projection_columns, qualified_name_prefix,
                 static_cast<const ASTSubquery *>(table_expression->subquery.get())->children[0]);
@ -303,8 +302,8 @@ ASTs PredicateExpressionsOptimizer::getSelectQueryProjectionColumns(ASTPtr & ast
 {
     /// first should normalize query tree.
     std::unordered_map<String, ASTPtr> aliases;
-    QueryAliasesVisitor query_aliases_visitor;
-    query_aliases_visitor.visit(ast, aliases, 0);
+    QueryAliasesVisitor query_aliases_visitor(aliases);
+    query_aliases_visitor.visit(ast);
     QueryNormalizer(ast, aliases, settings, {}, {}).perform();
 
     ASTs projection_columns;
@ -333,7 +332,7 @@ ASTs PredicateExpressionsOptimizer::evaluateAsterisk(ASTSelectQuery * select_que
     if (!select_query->tables || select_query->tables->children.empty())
         return {};
 
-    std::vector<const ASTTableExpression *> tables_expression = getSelectTablesExpression(select_query);
+    std::vector<const ASTTableExpression *> tables_expression = getSelectTablesExpression(*select_query);
 
     if (const auto qualified_asterisk = typeid_cast<ASTQualifiedAsterisk *>(asterisk.get()))
     {
@ -351,8 +350,7 @@ ASTs PredicateExpressionsOptimizer::evaluateAsterisk(ASTSelectQuery * select_que
         for (auto it = tables_expression.begin(); it != tables_expression.end(); ++it)
         {
             const ASTTableExpression * table_expression = *it;
-            const auto database_and_table_with_alias =
-                getTableNameWithAliasFromTableExpression(*table_expression, context.getCurrentDatabase());
+            DatabaseAndTableWithAlias database_and_table_with_alias(*table_expression, context.getCurrentDatabase());
             /// database.table.*
             if (num_components == 2 && !database_and_table_with_alias.database.empty()
                 && static_cast<const ASTIdentifier &>(*ident->children[0]).name == database_and_table_with_alias.database
@ -391,8 +389,8 @@ ASTs PredicateExpressionsOptimizer::evaluateAsterisk(ASTSelectQuery * select_que
             else if (table_expression->database_and_table_name)
             {
                 const auto database_and_table_ast = static_cast<ASTIdentifier*>(table_expression->database_and_table_name.get());
-                const auto database_and_table_name = getDatabaseAndTableNameFromIdentifier(*database_and_table_ast);
-                storage = context.getTable(database_and_table_name.first, database_and_table_name.second);
+                DatabaseAndTableWithAlias database_and_table_name(*database_and_table_ast);
+                storage = context.getTable(database_and_table_name.database, database_and_table_name.table);
             }
 
             const auto block = storage->getSampleBlock();
@ -9,7 +9,7 @@
 #include <Interpreters/ExpressionActions.h>
 #include <Parsers/ASTSubquery.h>
 #include <Parsers/ASTTablesInSelectQuery.h>
-#include <Interpreters/evaluateQualified.h>
+#include <Interpreters/DatabaseAndTableWithAlias.h>
 
 namespace DB
 {
@ -1,10 +1,10 @@
 #include <Interpreters/ProcessList.h>
 #include <Interpreters/Settings.h>
 #include <Interpreters/Context.h>
+#include <Interpreters/DatabaseAndTableWithAlias.h>
 #include <Parsers/ASTSelectWithUnionQuery.h>
 #include <Parsers/ASTSelectQuery.h>
 #include <Parsers/ASTKillQueryQuery.h>
-#include <Parsers/ASTIdentifier.h>
 #include <Common/typeid_cast.h>
 #include <Common/Exception.h>
 #include <Common/CurrentThread.h>
@ -51,28 +51,14 @@ static bool isUnlimitedQuery(const IAST * ast)
         if (!ast_selects->list_of_selects || ast_selects->list_of_selects->children.empty())
             return false;
 
-        auto ast_select = typeid_cast<ASTSelectQuery *>(ast_selects->list_of_selects->children[0].get());
+        auto ast_select = typeid_cast<const ASTSelectQuery *>(ast_selects->list_of_selects->children[0].get());
 
         if (!ast_select)
             return false;
 
-        auto ast_database = ast_select->database();
-        if (!ast_database)
-            return false;
-
-        auto ast_table = ast_select->table();
-        if (!ast_table)
-            return false;
-
-        auto ast_database_id = typeid_cast<const ASTIdentifier *>(ast_database.get());
-        if (!ast_database_id)
-            return false;
-
-        auto ast_table_id = typeid_cast<const ASTIdentifier *>(ast_table.get());
-        if (!ast_table_id)
-            return false;
-
-        return ast_database_id->name == "system" && ast_table_id->name == "processes";
+        if (auto database_and_table = getDatabaseAndTable(*ast_select, 0))
+            return database_and_table->database == "system" && database_and_table->table == "processes";
+
+        return false;
     }
 
     return false;
@ -396,8 +382,9 @@ ProcessList::CancellationCode ProcessList::sendCancelToQuery(const String & curr
         }
         return CancellationCode::CancelCannotBeSent;
     }
-    return CancellationCode::QueryIsNotInitializedYet;
+    /// Query is not even started
+    elem->is_killed.store(true);
+    return CancellationCode::CancelSent;
 }
 
 
@ -191,6 +191,8 @@ public:
 
     /// Get query in/out pointers from BlockIO
     bool tryGetQueryStreams(BlockInputStreamPtr & in, BlockOutputStreamPtr & out) const;
+
+    bool isKilled() const { return is_killed; }
 };
 
 
|
@ -6,7 +6,6 @@
|
|||||||
#include <Parsers/ASTSelectWithUnionQuery.h>
|
#include <Parsers/ASTSelectWithUnionQuery.h>
|
||||||
#include <Parsers/formatAST.h>
|
#include <Parsers/formatAST.h>
|
||||||
#include <Parsers/ASTSubquery.h>
|
#include <Parsers/ASTSubquery.h>
|
||||||
#include <Parsers/DumpASTNode.h>
|
|
||||||
#include <IO/WriteHelpers.h>
|
#include <IO/WriteHelpers.h>
|
||||||
|
|
||||||
namespace DB
|
namespace DB
|
||||||
@@ -17,76 +16,85 @@ namespace ErrorCodes
     extern const int MULTIPLE_EXPRESSIONS_FOR_ALIAS;
 }
 
-/// ignore_levels - aliases in how many upper levels of the subtree should be ignored.
-/// For example, with ignore_levels=1 ast can not be put in the dictionary, but its children can.
-void QueryAliasesVisitor::getQueryAliases(const ASTPtr & ast, Aliases & aliases, int ignore_levels) const
+void QueryAliasesVisitor::visit(const ASTPtr & ast) const
 {
-    DumpASTNode dump(*ast, ostr, visit_depth, "getQueryAliases");
-
     /// Bottom-up traversal. We do not go into subqueries.
-    for (auto & child : ast->children)
+    visitChildren(ast);
+
+    if (!tryVisit<ASTSubquery>(ast))
     {
-        int new_ignore_levels = std::max(0, ignore_levels - 1);
-
-        /// The top-level aliases in the ARRAY JOIN section have a special meaning, we will not add them
-        /// (skip the expression list itself and its children).
-        if (typeid_cast<ASTArrayJoin *>(ast.get()))
-            new_ignore_levels = 3;
-
-        /// Don't descent into table functions and subqueries.
-        if (!typeid_cast<ASTTableExpression *>(child.get())
-            && !typeid_cast<ASTSelectWithUnionQuery *>(child.get()))
-            getQueryAliases(child, aliases, new_ignore_levels);
+        DumpASTNode dump(*ast, ostr, visit_depth, "getQueryAliases");
+        visitOther(ast);
     }
-
-    if (ignore_levels > 0)
-        return;
-
-    getNodeAlias(ast, aliases, dump);
 }
 
-void QueryAliasesVisitor::getNodeAlias(const ASTPtr & ast, Aliases & aliases, const DumpASTNode & dump) const
+/// The top-level aliases in the ARRAY JOIN section have a special meaning, we will not add them
+/// (skip the expression list itself and its children).
+void QueryAliasesVisitor::visit(const ASTArrayJoin &, const ASTPtr & ast) const
+{
+    for (auto & child1 : ast->children)
+        for (auto & child2 : child1->children)
+            for (auto & child3 : child2->children)
+                visit(child3);
+}
+
+/// set unique aliases for all subqueries. this is needed, because:
+/// 1) content of subqueries could change after recursive analysis, and auto-generated column names could become incorrect
+/// 2) result of different scalar subqueries can be cached inside expressions compilation cache and must have different names
+void QueryAliasesVisitor::visit(ASTSubquery & subquery, const ASTPtr & ast) const
+{
+    static std::atomic_uint64_t subquery_index = 0;
+
+    if (subquery.alias.empty())
+    {
+        String alias;
+        do
+        {
+            alias = "_subquery" + std::to_string(++subquery_index);
+        }
+        while (aliases.count(alias));
+
+        subquery.setAlias(alias);
+        subquery.prefer_alias_to_column_name = true;
+        aliases[alias] = ast;
+    }
+    else
+        visitOther(ast);
+}
+
+void QueryAliasesVisitor::visitOther(const ASTPtr & ast) const
 {
     String alias = ast->tryGetAlias();
     if (!alias.empty())
     {
         if (aliases.count(alias) && ast->getTreeHash() != aliases[alias]->getTreeHash())
-        {
-            std::stringstream message;
-            message << "Different expressions with the same alias " << backQuoteIfNeed(alias) << ":\n";
-            formatAST(*ast, message, false, true);
-            message << "\nand\n";
-            formatAST(*aliases[alias], message, false, true);
-            message << "\n";
-
-            throw Exception(message.str(), ErrorCodes::MULTIPLE_EXPRESSIONS_FOR_ALIAS);
-        }
+            throw Exception(wrongAliasMessage(ast, alias), ErrorCodes::MULTIPLE_EXPRESSIONS_FOR_ALIAS);
 
         aliases[alias] = ast;
-        dump.print(visit_action, alias);
     }
-    else if (auto subquery = typeid_cast<ASTSubquery *>(ast.get()))
+}
+
+void QueryAliasesVisitor::visitChildren(const ASTPtr & ast) const
+{
+    for (auto & child : ast->children)
     {
-        /// Set unique aliases for all subqueries. This is needed, because:
-        /// 1) content of subqueries could change after recursive analysis, and auto-generated column names could become incorrect
-        /// 2) result of different scalar subqueries can be cached inside expressions compilation cache and must have different names
-        if (subquery->alias.empty())
-        {
-            static std::atomic_uint64_t subquery_index = 1;
-            while (true)
-            {
-                alias = "_subquery" + std::to_string(subquery_index++);
-                if (!aliases.count(alias))
-                    break;
-            }
-
-            subquery->setAlias(alias);
-            subquery->prefer_alias_to_column_name = true;
-            aliases[alias] = ast;
-            dump.print(visit_action, alias);
-        }
+        /// Don't descent into table functions and subqueries and special case for ArrayJoin.
+        if (!tryVisit<ASTTableExpression>(ast) &&
+            !tryVisit<ASTSelectWithUnionQuery>(ast) &&
+            !tryVisit<ASTArrayJoin>(ast))
+            visit(child);
     }
 }
+
+String QueryAliasesVisitor::wrongAliasMessage(const ASTPtr & ast, const String & alias) const
+{
+    std::stringstream message;
+    message << "Different expressions with the same alias " << backQuoteIfNeed(alias) << ":" << std::endl;
+    formatAST(*ast, message, false, true);
+    message << std::endl << "and" << std::endl;
+    formatAST(*aliases[alias], message, false, true);
+    message << std::endl;
+    return message.str();
+}
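The subquery branch of the hunk above assigns a process-wide unique alias (`_subquery1`, `_subquery2`, …) to every anonymous subquery, retrying while the generated name collides with an alias the query already uses. A minimal, self-contained sketch of that naming loop, with the alias map simplified to a plain `std::unordered_map` (the mapped value is an illustrative stand-in for the real `ASTPtr`):

```cpp
#include <atomic>
#include <cassert>
#include <string>
#include <unordered_map>

// Illustrative stand-in for ClickHouse's Aliases map (which maps to ASTPtr).
using Aliases = std::unordered_map<std::string, int>;

std::string makeUniqueSubqueryAlias(const Aliases & aliases)
{
    // Atomic so concurrent query analyses never hand out the same index.
    static std::atomic_uint64_t subquery_index{0};

    std::string alias;
    do
    {
        // Pre-increment, so the first alias ever generated is "_subquery1".
        alias = "_subquery" + std::to_string(++subquery_index);
    }
    while (aliases.count(alias));  // skip names the query already uses
    return alias;
}
```

The collision loop matters because a user may have written `(...) AS _subquery1` by hand; the counter simply keeps advancing past any taken name.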
@@ -6,29 +6,54 @@
 namespace DB
 {
 
+class ASTSelectWithUnionQuery;
+class ASTSubquery;
+struct ASTTableExpression;
+struct ASTArrayJoin;
+
 using Aliases = std::unordered_map<String, ASTPtr>;
 
+/// Visitors consist of functions with unified interface 'void visit(Casted & x, ASTPtr & y)', there x is y, successfully casted to Casted.
+/// Both types and fuction could have const specifiers. The second argument is used by visitor to replaces AST node (y) if needed.
+
 /// Visits AST nodes and collect their aliases in one map (with links to source nodes).
 class QueryAliasesVisitor
 {
 public:
-    QueryAliasesVisitor(std::ostream * ostr_ = nullptr)
-    :   visit_depth(0),
+    QueryAliasesVisitor(Aliases & aliases_, std::ostream * ostr_ = nullptr)
+    :   aliases(aliases_),
+        visit_depth(0),
         ostr(ostr_)
     {}
 
-    void visit(const ASTPtr & ast, Aliases & aliases, int ignore_levels = 0) const
-    {
-        getQueryAliases(ast, aliases, ignore_levels);
-    }
+    void visit(const ASTPtr & ast) const;
 
 private:
-    static constexpr const char * visit_action = "addAlias";
+    Aliases & aliases;
     mutable size_t visit_depth;
     std::ostream * ostr;
 
-    void getQueryAliases(const ASTPtr & ast, Aliases & aliases, int ignore_levels) const;
-    void getNodeAlias(const ASTPtr & ast, Aliases & aliases, const DumpASTNode & dump) const;
+    void visit(const ASTTableExpression &, const ASTPtr &) const {}
+    void visit(const ASTSelectWithUnionQuery &, const ASTPtr &) const {}
+
+    void visit(ASTSubquery & subquery, const ASTPtr & ast) const;
+    void visit(const ASTArrayJoin &, const ASTPtr & ast) const;
+    void visitOther(const ASTPtr & ast) const;
+    void visitChildren(const ASTPtr & ast) const;
+
+    template <typename T>
+    bool tryVisit(const ASTPtr & ast) const
+    {
+        if (T * t = typeid_cast<T *>(ast.get()))
+        {
+            DumpASTNode dump(*ast, ostr, visit_depth, "getQueryAliases");
+            visit(*t, ast);
+            return true;
+        }
+        return false;
+    }
+
+    String wrongAliasMessage(const ASTPtr & ast, const String & alias) const;
 };
 
 }
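The `tryVisit<T>` helper introduced in this header is the core of the visitor style this commit converges on: cast the generic node to a concrete AST type, and on success dispatch to the matching `visit` overload. A minimal sketch of the same pattern, using standard `dynamic_cast` in place of ClickHouse's faster internal `typeid_cast` (all class names here are illustrative):

```cpp
#include <cassert>
#include <memory>
#include <string>

struct IAST { virtual ~IAST() = default; };   // generic AST node
struct ASTSubquery : IAST {};
struct ASTIdentifier : IAST {};
using ASTPtr = std::shared_ptr<IAST>;

struct Visitor
{
    std::string last;  // records which overload handled the node

    void visit(ASTSubquery &)   { last = "subquery"; }
    void visit(ASTIdentifier &) { last = "identifier"; }

    template <typename T>
    bool tryVisit(const ASTPtr & ast)
    {
        // dynamic_cast stands in for ClickHouse's typeid_cast here.
        if (T * t = dynamic_cast<T *>(ast.get()))
        {
            visit(*t);  // overload resolution picks the handler for T
            return true;
        }
        return false;
    }

    void visitAny(const ASTPtr & ast)
    {
        if (!tryVisit<ASTSubquery>(ast) && !tryVisit<ASTIdentifier>(ast))
            last = "other";  // default branch, like visitChildren in the patch
    }
};
```

The chain of `tryVisit` calls in `visitAny` mirrors the `if (!tryVisit<...>(ast) && ...)` cascades added throughout this commit.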
@@ -19,7 +19,6 @@
 namespace DB
 {
 
-
 Block QueryLogElement::createBlock()
 {
     return
@@ -104,19 +103,19 @@ void QueryLogElement::appendToBlock(Block & block) const
     size_t i = 0;
 
     columns[i++]->insert(UInt64(type));
-    columns[i++]->insert(UInt64(DateLUT::instance().toDayNum(event_time)));
-    columns[i++]->insert(UInt64(event_time));
-    columns[i++]->insert(UInt64(query_start_time));
-    columns[i++]->insert(UInt64(query_duration_ms));
+    columns[i++]->insert(DateLUT::instance().toDayNum(event_time));
+    columns[i++]->insert(event_time);
+    columns[i++]->insert(query_start_time);
+    columns[i++]->insert(query_duration_ms);
 
-    columns[i++]->insert(UInt64(read_rows));
-    columns[i++]->insert(UInt64(read_bytes));
-    columns[i++]->insert(UInt64(written_rows));
-    columns[i++]->insert(UInt64(written_bytes));
-    columns[i++]->insert(UInt64(result_rows));
-    columns[i++]->insert(UInt64(result_bytes));
+    columns[i++]->insert(read_rows);
+    columns[i++]->insert(read_bytes);
+    columns[i++]->insert(written_rows);
+    columns[i++]->insert(written_bytes);
+    columns[i++]->insert(result_rows);
+    columns[i++]->insert(result_bytes);
 
-    columns[i++]->insert(UInt64(memory_usage));
+    columns[i++]->insert(memory_usage);
 
     columns[i++]->insertData(query.data(), query.size());
     columns[i++]->insertData(exception.data(), exception.size());
@@ -124,7 +123,7 @@ void QueryLogElement::appendToBlock(Block & block) const
 
     appendClientInfo(client_info, columns, i);
 
-    columns[i++]->insert(UInt64(ClickHouseRevision::get()));
+    columns[i++]->insert(ClickHouseRevision::get());
 
     {
         Array threads_array;
@@ -163,27 +162,27 @@ void QueryLogElement::appendToBlock(Block & block) const
 
 void QueryLogElement::appendClientInfo(const ClientInfo & client_info, MutableColumns & columns, size_t & i)
 {
-    columns[i++]->insert(UInt64(client_info.query_kind == ClientInfo::QueryKind::INITIAL_QUERY));
+    columns[i++]->insert(client_info.query_kind == ClientInfo::QueryKind::INITIAL_QUERY);
 
     columns[i++]->insert(client_info.current_user);
     columns[i++]->insert(client_info.current_query_id);
     columns[i++]->insertData(IPv6ToBinary(client_info.current_address.host()).data(), 16);
-    columns[i++]->insert(UInt64(client_info.current_address.port()));
+    columns[i++]->insert(client_info.current_address.port());
 
     columns[i++]->insert(client_info.initial_user);
     columns[i++]->insert(client_info.initial_query_id);
     columns[i++]->insertData(IPv6ToBinary(client_info.initial_address.host()).data(), 16);
-    columns[i++]->insert(UInt64(client_info.initial_address.port()));
+    columns[i++]->insert(client_info.initial_address.port());
 
     columns[i++]->insert(UInt64(client_info.interface));
 
     columns[i++]->insert(client_info.os_user);
     columns[i++]->insert(client_info.client_hostname);
     columns[i++]->insert(client_info.client_name);
-    columns[i++]->insert(UInt64(client_info.client_revision));
-    columns[i++]->insert(UInt64(client_info.client_version_major));
-    columns[i++]->insert(UInt64(client_info.client_version_minor));
-    columns[i++]->insert(UInt64(client_info.client_version_patch));
+    columns[i++]->insert(client_info.client_revision);
+    columns[i++]->insert(client_info.client_version_major);
+    columns[i++]->insert(client_info.client_version_minor);
+    columns[i++]->insert(client_info.client_version_patch);
 
     columns[i++]->insert(UInt64(client_info.http_method));
     columns[i++]->insert(client_info.http_user_agent);
@@ -9,7 +9,6 @@
 #include <Common/typeid_cast.h>
 #include <Poco/String.h>
 #include <Parsers/ASTQualifiedAsterisk.h>
-//#include <iostream>
 #include <IO/WriteHelpers.h>
 
 namespace DB
@@ -2,7 +2,7 @@
 
 #include <Core/Names.h>
 #include <Parsers/IAST.h>
-#include <Interpreters/evaluateQualified.h>
+#include <Interpreters/DatabaseAndTableWithAlias.h>
 
 namespace DB
 {
@@ -75,30 +75,30 @@ void QueryThreadLogElement::appendToBlock(Block & block) const
 
     size_t i = 0;
 
-    columns[i++]->insert(UInt64(DateLUT::instance().toDayNum(event_time)));
-    columns[i++]->insert(UInt64(event_time));
-    columns[i++]->insert(UInt64(query_start_time));
-    columns[i++]->insert(UInt64(query_duration_ms));
+    columns[i++]->insert(DateLUT::instance().toDayNum(event_time));
+    columns[i++]->insert(event_time);
+    columns[i++]->insert(query_start_time);
+    columns[i++]->insert(query_duration_ms);
 
-    columns[i++]->insert(UInt64(read_rows));
-    columns[i++]->insert(UInt64(read_bytes));
-    columns[i++]->insert(UInt64(written_rows));
-    columns[i++]->insert(UInt64(written_bytes));
+    columns[i++]->insert(read_rows);
+    columns[i++]->insert(read_bytes);
+    columns[i++]->insert(written_rows);
+    columns[i++]->insert(written_bytes);
 
-    columns[i++]->insert(Int64(memory_usage));
-    columns[i++]->insert(Int64(peak_memory_usage));
+    columns[i++]->insert(memory_usage);
+    columns[i++]->insert(peak_memory_usage);
 
     columns[i++]->insertData(thread_name.data(), thread_name.size());
-    columns[i++]->insert(UInt64(thread_number));
-    columns[i++]->insert(Int64(os_thread_id));
-    columns[i++]->insert(UInt64(master_thread_number));
-    columns[i++]->insert(Int64(master_os_thread_id));
+    columns[i++]->insert(thread_number);
+    columns[i++]->insert(os_thread_id);
+    columns[i++]->insert(master_thread_number);
+    columns[i++]->insert(master_os_thread_id);
 
     columns[i++]->insertData(query.data(), query.size());
 
     QueryLogElement::appendClientInfo(client_info, columns, i);
 
-    columns[i++]->insert(UInt64(ClickHouseRevision::get()));
+    columns[i++]->insert(ClickHouseRevision::get());
 
     if (profile_counters)
     {
@@ -8,6 +8,8 @@ namespace ErrorCodes
     extern const int TYPE_MISMATCH;
 }
 
+/// Visitors consist of functions with unified interface 'void visit(Casted & x, ASTPtr & y)', there x is y, successfully casted to Casted.
+/// Both types and fuction could have const specifiers. The second argument is used by visitor to replaces AST node (y) if needed.
 
 /** Get a set of necessary columns to read from the table.
   * In this case, the columns specified in ignored_names are considered unnecessary. And the ignored_names parameter can be modified.
@@ -48,28 +50,28 @@ private:
     const NameSet & available_joined_columns;
     NameSet & required_joined_columns;
 
-    void visit(const ASTIdentifier * node, const ASTPtr &) const
+    void visit(const ASTIdentifier & node, const ASTPtr &) const
     {
-        if (node->general()
-            && !ignored_names.count(node->name)
-            && !ignored_names.count(Nested::extractTableName(node->name)))
+        if (node.general()
+            && !ignored_names.count(node.name)
+            && !ignored_names.count(Nested::extractTableName(node.name)))
         {
-            if (!available_joined_columns.count(node->name)
-                || available_columns.count(node->name)) /// Read column from left table if has.
-                required_source_columns.insert(node->name);
+            if (!available_joined_columns.count(node.name)
+                || available_columns.count(node.name)) /// Read column from left table if has.
+                required_source_columns.insert(node.name);
             else
-                required_joined_columns.insert(node->name);
+                required_joined_columns.insert(node.name);
         }
     }
 
-    void visit(const ASTFunction * node, const ASTPtr & ast) const
+    void visit(const ASTFunction & node, const ASTPtr & ast) const
     {
-        if (node->name == "lambda")
+        if (node.name == "lambda")
         {
-            if (node->arguments->children.size() != 2)
+            if (node.arguments->children.size() != 2)
                 throw Exception("lambda requires two arguments", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
 
-            ASTFunction * lambda_args_tuple = typeid_cast<ASTFunction *>(node->arguments->children.at(0).get());
+            ASTFunction * lambda_args_tuple = typeid_cast<ASTFunction *>(node.arguments->children.at(0).get());
 
             if (!lambda_args_tuple || lambda_args_tuple->name != "tuple")
                 throw Exception("First argument of lambda must be a tuple", ErrorCodes::TYPE_MISMATCH);
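The `ASTIdentifier` handler in the hunk above routes each referenced column either to the left table's required set or to the joined table's: the left table wins whenever it can provide the column. A minimal sketch of that routing rule in isolation (struct and field names are illustrative stand-ins for the visitor's members):

```cpp
#include <cassert>
#include <set>
#include <string>

// Illustrative stand-in for the visitor's column-tracking members.
struct ColumnRouter
{
    std::set<std::string> available_columns;         // columns of the left table
    std::set<std::string> available_joined_columns;  // columns of the joined table
    std::set<std::string> required_source_columns;
    std::set<std::string> required_joined_columns;

    void require(const std::string & name)
    {
        // Read the column from the left table if it has it; only columns that
        // exist solely on the joined side go to required_joined_columns.
        if (!available_joined_columns.count(name) || available_columns.count(name))
            required_source_columns.insert(name);
        else
            required_joined_columns.insert(name);
    }
};
```

Note the tie-break: a column present on both sides (like `b` below) is still read from the left table, which is exactly what the `/// Read column from left table if has.` comment in the diff describes.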
@@ -90,7 +92,7 @@ private:
                 }
             }
 
-            visit(node->arguments->children.at(1));
+            visit(node.arguments->children.at(1));
 
             for (size_t i = 0; i < added_ignored.size(); ++i)
                 ignored_names.erase(added_ignored[i]);
@@ -100,7 +102,7 @@ private:
 
         /// A special function `indexHint`. Everything that is inside it is not calculated
         /// (and is used only for index analysis, see KeyCondition).
-        if (node->name == "indexHint")
+        if (node.name == "indexHint")
             return;
 
         visitChildren(ast);
@@ -126,7 +128,7 @@ private:
     {
         if (const T * t = typeid_cast<const T *>(ast.get()))
         {
-            visit(t, ast);
+            visit(*t, ast);
             return true;
         }
         return false;
@@ -176,7 +176,7 @@ struct Settings
     \
     M(SettingBool, join_use_nulls, 0, "Use NULLs for non-joined rows of outer JOINs. If false, use default value of corresponding columns data type.") \
     \
-    M(SettingJoinStrictness, join_default_strictness, JoinStrictness::Unspecified, "Set default strictness in JOIN query. Possible values: empty string, 'ANY', 'ALL'. If empty, query without strictness will throw exception.") \
+    M(SettingJoinStrictness, join_default_strictness, JoinStrictness::ALL, "Set default strictness in JOIN query. Possible values: empty string, 'ANY', 'ALL'. If empty, query without strictness will throw exception.") \
     \
     M(SettingUInt64, preferred_block_size_bytes, 1000000, "") \
     \
@@ -16,9 +16,9 @@ namespace ErrorCodes
     extern const int UNKNOWN_IDENTIFIER;
 }
 
-void TranslateQualifiedNamesVisitor::visit(ASTIdentifier * identifier, ASTPtr & ast, const DumpASTNode & dump) const
+void TranslateQualifiedNamesVisitor::visit(ASTIdentifier & identifier, ASTPtr & ast, const DumpASTNode & dump) const
 {
-    if (identifier->general())
+    if (identifier.general())
     {
         /// Select first table name with max number of qualifiers which can be stripped.
         size_t max_num_qualifiers_to_strip = 0;
@@ -27,7 +27,7 @@ void TranslateQualifiedNamesVisitor::visit(ASTIdentifier * identifier, ASTPtr &
         for (size_t table_pos = 0; table_pos < tables.size(); ++table_pos)
         {
             const auto & table = tables[table_pos];
-            auto num_qualifiers_to_strip = getNumComponentsToStripInOrderToTranslateQualifiedName(*identifier, table);
+            auto num_qualifiers_to_strip = getNumComponentsToStripInOrderToTranslateQualifiedName(identifier, table);
 
             if (num_qualifiers_to_strip > max_num_qualifiers_to_strip)
             {
@@ -38,7 +38,7 @@ void TranslateQualifiedNamesVisitor::visit(ASTIdentifier * identifier, ASTPtr &
 
         if (max_num_qualifiers_to_strip)
         {
-            dump.print(String("stripIdentifier ") + identifier->name, max_num_qualifiers_to_strip);
+            dump.print(String("stripIdentifier ") + identifier.name, max_num_qualifiers_to_strip);
             stripIdentifier(ast, max_num_qualifiers_to_strip);
         }
 
@@ -52,7 +52,7 @@ void TranslateQualifiedNamesVisitor::visit(ASTIdentifier * identifier, ASTPtr &
     }
 }
 
-void TranslateQualifiedNamesVisitor::visit(ASTQualifiedAsterisk *, ASTPtr & ast, const DumpASTNode &) const
+void TranslateQualifiedNamesVisitor::visit(ASTQualifiedAsterisk &, ASTPtr & ast, const DumpASTNode &) const
 {
     if (ast->children.size() != 1)
         throw Exception("Logical error: qualified asterisk must have exactly one child", ErrorCodes::LOGICAL_ERROR);
@@ -65,41 +65,46 @@ void TranslateQualifiedNamesVisitor::visit(ASTQualifiedAsterisk *, ASTPtr & ast,
     if (num_components > 2)
         throw Exception("Qualified asterisk cannot have more than two qualifiers", ErrorCodes::UNKNOWN_ELEMENT_IN_AST);
 
+    DatabaseAndTableWithAlias db_and_table(*ident);
+
     for (const auto & table_names : tables)
     {
         /// database.table.*, table.* or alias.*
-        if ((num_components == 2
-             && !table_names.database.empty()
-             && static_cast<const ASTIdentifier &>(*ident->children[0]).name == table_names.database
-             && static_cast<const ASTIdentifier &>(*ident->children[1]).name == table_names.table)
-            || (num_components == 0
-                && ((!table_names.table.empty() && ident->name == table_names.table)
-                    || (!table_names.alias.empty() && ident->name == table_names.alias))))
+        if (num_components == 2)
         {
-            return;
+            if (!table_names.database.empty() &&
+                db_and_table.database == table_names.database &&
+                db_and_table.table == table_names.table)
+                return;
+        }
+        else if (num_components == 0)
+        {
+            if ((!table_names.table.empty() && db_and_table.table == table_names.table) ||
+                (!table_names.alias.empty() && db_and_table.table == table_names.alias))
+                return;
         }
     }
 
     throw Exception("Unknown qualified identifier: " + ident->getAliasOrColumnName(), ErrorCodes::UNKNOWN_IDENTIFIER);
 }
 
-void TranslateQualifiedNamesVisitor::visit(ASTTableJoin * join, ASTPtr &, const DumpASTNode &) const
+void TranslateQualifiedNamesVisitor::visit(ASTTableJoin & join, ASTPtr &, const DumpASTNode &) const
 {
     /// Don't translate on_expression here in order to resolve equation parts later.
-    if (join->using_expression_list)
-        visit(join->using_expression_list);
+    if (join.using_expression_list)
+        visit(join.using_expression_list);
 }
 
-void TranslateQualifiedNamesVisitor::visit(ASTSelectQuery * select, ASTPtr & ast, const DumpASTNode &) const
+void TranslateQualifiedNamesVisitor::visit(ASTSelectQuery & select, ASTPtr & ast, const DumpASTNode &) const
 {
     /// If the WHERE clause or HAVING consists of a single quailified column, the reference must be translated not only in children,
     /// but also in where_expression and having_expression.
-    if (select->prewhere_expression)
-        visit(select->prewhere_expression);
-    if (select->where_expression)
-        visit(select->where_expression);
-    if (select->having_expression)
-        visit(select->having_expression);
+    if (select.prewhere_expression)
+        visit(select.prewhere_expression);
+    if (select.where_expression)
+        visit(select.where_expression);
+    if (select.having_expression)
+        visit(select.having_expression);
 
     visitChildren(ast);
 }
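The rewritten qualified-asterisk check above distinguishes two shapes: `database.table.*` (two qualifier components, both must match) and `table.*` / `alias.*` (a single identifier that must match either the table name or its alias). A minimal sketch of that matching predicate, with simple structs standing in for `DatabaseAndTableWithAlias` and the registered table names (all names here are illustrative):

```cpp
#include <cassert>
#include <string>

// Illustrative stand-ins for the visitor's table bookkeeping.
struct TableNames { std::string database, table, alias; };
struct DbAndTable { std::string database, table; };

// Returns true if the asterisk qualifier `ident` refers to table `t`.
bool qualifierMatches(size_t num_components, const DbAndTable & ident, const TableNames & t)
{
    if (num_components == 2)
        // database.table.* : both parts must match a known database-qualified table.
        return !t.database.empty() && ident.database == t.database && ident.table == t.table;
    if (num_components == 0)
        // table.* or alias.* : the single identifier may name either one.
        return (!t.table.empty() && ident.table == t.table)
            || (!t.alias.empty() && ident.table == t.alias);
    return false;  // num_components == 1 is rejected earlier in the real code
}
```

If no registered table matches, the real visitor throws `UNKNOWN_IDENTIFIER`, so this predicate is effectively the loop body of that search.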
@@ -5,7 +5,7 @@
 
 #include <Common/typeid_cast.h>
 #include <Parsers/DumpASTNode.h>
-#include <Interpreters/evaluateQualified.h>
+#include <Interpreters/DatabaseAndTableWithAlias.h>
 
 namespace DB
 {
@@ -17,8 +17,10 @@ struct ASTTableJoin;
 
 class NamesAndTypesList;
 
-/// It visits nodes, find identifiers and translate their names to needed form.
+/// Visitors consist of functions with unified interface 'void visit(Casted & x, ASTPtr & y)', there x is y, successfully casted to Casted.
+/// Both types and fuction could have const specifiers. The second argument is used by visitor to replaces AST node (y) if needed.
+
+/// It visits nodes, find columns (general identifiers and asterisks) and translate their names according to tables' names.
 class TranslateQualifiedNamesVisitor
 {
 public:
@@ -32,12 +34,10 @@ public:
 
     void visit(ASTPtr & ast) const
     {
-        DumpASTNode dump(*ast, ostr, visit_depth, "translateQualifiedNames");
-
-        if (!tryVisit<ASTIdentifier>(ast, dump) &&
-            !tryVisit<ASTQualifiedAsterisk>(ast, dump) &&
-            !tryVisit<ASTTableJoin>(ast, dump) &&
-            !tryVisit<ASTSelectQuery>(ast, dump))
+        if (!tryVisit<ASTIdentifier>(ast) &&
+            !tryVisit<ASTQualifiedAsterisk>(ast) &&
+            !tryVisit<ASTTableJoin>(ast) &&
+            !tryVisit<ASTSelectQuery>(ast))
             visitChildren(ast); /// default: do nothing, visit children
     }
 
@@ -47,19 +47,20 @@ private:
     mutable size_t visit_depth;
     std::ostream * ostr;
 
-    void visit(ASTIdentifier * node, ASTPtr & ast, const DumpASTNode & dump) const;
-    void visit(ASTQualifiedAsterisk * node, ASTPtr & ast, const DumpASTNode & dump) const;
-    void visit(ASTTableJoin * node, ASTPtr & ast, const DumpASTNode & dump) const;
-    void visit(ASTSelectQuery * ast, ASTPtr &, const DumpASTNode & dump) const;
+    void visit(ASTIdentifier & node, ASTPtr & ast, const DumpASTNode & dump) const;
+    void visit(ASTQualifiedAsterisk & node, ASTPtr & ast, const DumpASTNode & dump) const;
+    void visit(ASTTableJoin & node, ASTPtr & ast, const DumpASTNode & dump) const;
+    void visit(ASTSelectQuery & ast, ASTPtr &, const DumpASTNode & dump) const;
 
     void visitChildren(ASTPtr &) const;
 
     template <typename T>
-    bool tryVisit(ASTPtr & ast, const DumpASTNode & dump) const
+    bool tryVisit(ASTPtr & ast) const
     {
         if (T * t = typeid_cast<T *>(ast.get()))
         {
-            visit(t, ast, dump);
+            DumpASTNode dump(*ast, ostr, visit_depth, "translateQualifiedNames");
+            visit(*t, ast, dump);
             return true;
         }
         return false;
@@ -58,7 +58,7 @@ static Field convertNumericTypeImpl(const Field & from)
     if (!accurate::equalsOp(value, To(value)))
         return {};
 
-    return Field(typename NearestFieldType<To>::Type(value));
+    return To(value);
 }
 
 template <typename To>
@@ -86,7 +86,7 @@ static Field convertIntToDecimalType(const Field & from, const To & type)
         throw Exception("Number is too much to place in " + type.getName(), ErrorCodes::ARGUMENT_OUT_OF_BOUND);
 
     FieldType scaled_value = type.getScaleMultiplier() * value;
-    return Field(typename NearestFieldType<FieldType>::Type(scaled_value, type.getScale()));
+    return DecimalField<FieldType>(scaled_value, type.getScale());
 }
 
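The `convertIntToDecimalType` hunk above works in two steps: bounds-check the integer against what the decimal's precision can hold, then multiply by the scale multiplier (10^scale) to get the internal fixed-point representation. A minimal sketch of just that arithmetic, with `max_whole_value` standing in for the precision-derived bound the real code computes (names are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>

// Scale an integer into a decimal's internal representation:
// value * 10^scale, guarded so the whole part fits the decimal's precision.
// max_whole_value is an illustrative stand-in for the precision-based bound.
int64_t scaleToDecimal(int64_t value, int64_t scale_multiplier, int64_t max_whole_value)
{
    if (value > max_whole_value || value < -max_whole_value)
        throw std::runtime_error("Number is too big to place in the decimal type");
    return value * scale_multiplier;
}
```

For example, storing `123` into a decimal with scale 2 yields the internal value `12300` (i.e. `123.00`); the `DecimalField` in the patch carries that scaled value together with the scale.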
@@ -97,7 +97,7 @@ static Field convertStringToDecimalType(const Field & from, const DataTypeDecima
 
     const String & str_value = from.get<String>();
     T value = type.parseFromString(str_value);
-    return Field(typename NearestFieldType<FieldType>::Type(value, type.getScale()));
+    return DecimalField<FieldType>(value, type.getScale());
 }
 
 
@@ -150,11 +150,11 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
         /// Conversion between Date and DateTime and vice versa.
         if (which_type.isDate() && which_from_type.isDateTime())
         {
-            return UInt64(static_cast<const DataTypeDateTime &>(*from_type_hint).getTimeZone().toDayNum(src.get<UInt64>()));
+            return static_cast<const DataTypeDateTime &>(*from_type_hint).getTimeZone().toDayNum(src.get<UInt64>());
         }
         else if (which_type.isDateTime() && which_from_type.isDate())
         {
-            return UInt64(static_cast<const DataTypeDateTime &>(type).getTimeZone().fromDayNum(DayNum(src.get<UInt64>())));
+            return static_cast<const DataTypeDateTime &>(type).getTimeZone().fromDayNum(DayNum(src.get<UInt64>()));
         }
         else if (type.isValueRepresentedByNumber())
         {
@@ -184,7 +184,7 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
         if (which_type.isDate())
        {
             /// Convert 'YYYY-MM-DD' Strings to Date
-            return UInt64(stringToDate(src.get<const String &>()));
+            return stringToDate(src.get<const String &>());
         }
         else if (which_type.isDateTime())
         {
@@ -218,7 +218,12 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
 
         Array res(src_arr_size);
         for (size_t i = 0; i < src_arr_size; ++i)
+        {
             res[i] = convertFieldToType(src_arr[i], *nested_type);
+            if (res[i].isNull() && !type_array->getNestedType()->isNullable())
+                throw Exception("Type mismatch of array elements in IN or VALUES section. Expected: " + type_array->getNestedType()->getName()
+                    + ". Got NULL in position " + toString(i + 1), ErrorCodes::TYPE_MISMATCH);
+        }
 
         return res;
     }
|
@@ -69,7 +69,7 @@ ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, const Context &
 ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, const Context & context)
 {
     if (auto id = typeid_cast<const ASTIdentifier *>(node.get()))
-        return std::make_shared<ASTLiteral>(Field(id->name));
+        return std::make_shared<ASTLiteral>(id->name);
 
     return evaluateConstantExpressionAsLiteral(node, context);
 }
|
@@ -33,6 +33,7 @@ namespace ErrorCodes
     extern const int LOGICAL_ERROR;
     extern const int QUERY_IS_TOO_LARGE;
     extern const int INTO_OUTFILE_NOT_ALLOWED;
+    extern const int QUERY_WAS_CANCELLED;
 }
 
 
@@ -204,9 +205,15 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl(
         auto interpreter = InterpreterFactory::get(ast, context, stage);
         res = interpreter->execute();
 
-        /// Delayed initialization of query streams (required for KILL QUERY purposes)
         if (process_list_entry)
-            (*process_list_entry)->setQueryStreams(res);
+        {
+            /// Query was killed before execution
+            if ((*process_list_entry)->isKilled())
+                throw Exception("Query '" + (*process_list_entry)->getInfo().client_info.current_query_id + "' is killed in pending state",
+                    ErrorCodes::QUERY_WAS_CANCELLED);
+            else
+                (*process_list_entry)->setQueryStreams(res);
+        }
 
         /// Hold element of process list till end of query execution.
         res.process_list_entry = process_list_entry;
|
@@ -10,7 +10,7 @@
 #include <Parsers/ASTSubquery.h>
 
 #include <Interpreters/interpretSubquery.h>
-#include <Interpreters/evaluateQualified.h>
+#include <Interpreters/DatabaseAndTableWithAlias.h>
 
 namespace DB
 {
@@ -69,10 +69,10 @@ std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery(
         }
         else
         {
-            auto database_table = getDatabaseAndTableNameFromIdentifier(*table);
-            const auto & storage = context.getTable(database_table.first, database_table.second);
+            DatabaseAndTableWithAlias database_table(*table);
+            const auto & storage = context.getTable(database_table.database, database_table.table);
             columns = storage->getColumns().ordinary;
-            select_query->replaceDatabaseAndTable(database_table.first, database_table.second);
+            select_query->replaceDatabaseAndTable(database_table.database, database_table.table);
         }
 
         select_expression_list->children.reserve(columns.size());
|