Merge branch 'master' into CLICKHOUSE-4032

Sabyanin Maxim 2018-11-06 16:50:07 +03:00
commit dacd999d4f
212 changed files with 2290 additions and 1330 deletions

.gitignore vendored

@ -9,8 +9,6 @@
# auto generated files
*.logrt
dbms/src/Storages/System/StorageSystemContributors.generated.cpp
/build
/build_*
/docs/build
@ -246,3 +244,6 @@ website/presentations
website/package-lock.json
.DS_Store
*/.DS_Store
# Ignore files for locally disabled tests
/dbms/tests/queries/**/*.disabled


@ -1,3 +1,151 @@
## ClickHouse release 18.14.11, 2018-10-29
### Bug fixes:
* Fixed the error `Block structure mismatch in UNION stream: different number of columns` in LIMIT queries. [#2156](https://github.com/yandex/ClickHouse/issues/2156)
* Fixed errors when merging data in tables containing arrays inside Nested structures. [#3397](https://github.com/yandex/ClickHouse/pull/3397)
* Fixed incorrect query results if the `merge_tree_uniform_read_distribution` setting is disabled (it is enabled by default). [#3429](https://github.com/yandex/ClickHouse/pull/3429)
* Fixed an error on inserts to a Distributed table in Native format. [#3411](https://github.com/yandex/ClickHouse/issues/3411)
## ClickHouse release 18.14.10, 2018-10-23
* The `compile_expressions` setting (JIT compilation of expressions) is disabled by default. [#3410](https://github.com/yandex/ClickHouse/pull/3410)
* The `enable_optimize_predicate_expression` setting is disabled by default.
## ClickHouse release 18.14.9, 2018-10-16
### New features:
* The `WITH CUBE` modifier for `GROUP BY` (the alternative syntax `GROUP BY CUBE(...)` is also available; see the sketch after this list). [#3172](https://github.com/yandex/ClickHouse/pull/3172)
* Added the `formatDateTime` function. [Alexandr Krasheninnikov](https://github.com/yandex/ClickHouse/pull/2770)
* Added the `JDBC` table engine and `jdbc` table function (requires installing clickhouse-jdbc-bridge). [Alexandr Krasheninnikov](https://github.com/yandex/ClickHouse/pull/3210)
* Added functions for working with the ISO week number: `toISOWeek`, `toISOYear`, `toStartOfISOYear`, and `toDayOfYear`. [#3146](https://github.com/yandex/ClickHouse/pull/3146)
* Now you can use `Nullable` columns for `MySQL` and `ODBC` tables. [#3362](https://github.com/yandex/ClickHouse/pull/3362)
* Nested data structures can be read as nested objects in `JSONEachRow` format. Added the `input_format_import_nested_json` setting. [Veloman Yunkan](https://github.com/yandex/ClickHouse/pull/3144)
* Parallel processing is available for many `MATERIALIZED VIEW`s when inserting data. See the `parallel_view_processing` setting. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3208)
* Added the `SYSTEM FLUSH LOGS` query (forced log flushes to system tables such as `query_log`). [#3321](https://github.com/yandex/ClickHouse/pull/3321)
* Now you can use pre-defined `database` and `table` macros when declaring `Replicated` tables. [#3251](https://github.com/yandex/ClickHouse/pull/3251)
* Added the ability to read `Decimal` type values in engineering notation (indicating powers of ten). [#3153](https://github.com/yandex/ClickHouse/pull/3153)
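
For illustration, a minimal SQL sketch of two of the new features above (`WITH CUBE` and `formatDateTime`); the `visits` table and its columns are hypothetical:

```sql
-- GROUP BY ... WITH CUBE produces subtotals for every subset of the keys;
-- GROUP BY CUBE(site, page) is the equivalent alternative syntax.
SELECT site, page, count() AS hits
FROM visits
GROUP BY site, page WITH CUBE;

-- formatDateTime renders a DateTime using a MySQL-style format string.
SELECT formatDateTime(now(), '%Y-%m-%d %H:%M:%S');
```
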
### Experimental features:
* Optimization of the GROUP BY clause for `LowCardinality` data types. [#3138](https://github.com/yandex/ClickHouse/pull/3138)
* Optimized calculation of expressions for `LowCardinality` data types. [#3200](https://github.com/yandex/ClickHouse/pull/3200)
### Improvements:
* Significantly reduced memory consumption for requests with `ORDER BY` and `LIMIT`. See the `max_bytes_before_remerge_sort` setting. [#3205](https://github.com/yandex/ClickHouse/pull/3205)
* If the `JOIN` type (`LEFT`, `INNER`, ...) is not specified, `INNER JOIN` is assumed (see the example after this list). [#3147](https://github.com/yandex/ClickHouse/pull/3147)
* Qualified asterisks work correctly in queries with `JOIN`. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3202)
* The `ODBC` table engine correctly chooses the method for quoting identifiers in the SQL dialect of a remote database. [Alexandr Krasheninnikov](https://github.com/yandex/ClickHouse/pull/3210)
* The `compile_expressions` setting (JIT compilation of expressions) is enabled by default.
* Fixed behavior for simultaneous DROP DATABASE/TABLE IF EXISTS and CREATE DATABASE/TABLE IF NOT EXISTS. Previously, a `CREATE DATABASE ... IF NOT EXISTS` query could return the error message "File ... already exists", and the `CREATE TABLE ... IF NOT EXISTS` and `DROP TABLE IF EXISTS` queries could return `Table ... is creating or attaching right now`. [#3101](https://github.com/yandex/ClickHouse/pull/3101)
* LIKE and IN expressions with a constant right half are passed to the remote server when querying from MySQL or ODBC tables. [#3182](https://github.com/yandex/ClickHouse/pull/3182)
* Comparisons with constant expressions in a WHERE clause are passed to the remote server when querying from MySQL and ODBC tables. Previously, only comparisons with constants were passed. [#3182](https://github.com/yandex/ClickHouse/pull/3182)
* Correct calculation of row width in the terminal for `Pretty` formats, including strings with hieroglyphs. [Amos Bird](https://github.com/yandex/ClickHouse/pull/3257).
* `ON CLUSTER` can be specified for `ALTER UPDATE` queries.
* Improved performance for reading data in `JSONEachRow` format. [#3332](https://github.com/yandex/ClickHouse/pull/3332)
* Added synonyms for the `LENGTH` and `CHARACTER_LENGTH` functions for compatibility. The `CONCAT` function is no longer case-sensitive. [#3306](https://github.com/yandex/ClickHouse/pull/3306)
* Added the `TIMESTAMP` synonym for the `DateTime` type. [#3390](https://github.com/yandex/ClickHouse/pull/3390)
* There is always space reserved for `query_id` in the server logs, even if the log line is not related to a query. This makes it easier to parse server text logs with third-party tools.
* Memory consumption by a query is logged when it exceeds the next level of an integer number of gigabytes. [#3205](https://github.com/yandex/ClickHouse/pull/3205)
* Added compatibility mode for the case when the client library that uses the Native protocol sends fewer columns by mistake than the server expects for the INSERT query. This scenario was possible when using the clickhouse-cpp library. Previously, this scenario caused the server to crash. [#3171](https://github.com/yandex/ClickHouse/pull/3171)
* In a user-defined WHERE expression in `clickhouse-copier`, you can now use a `partition_key` alias (for additional filtering by source table partition). This is useful if the partitioning scheme changes during copying, but only changes slightly. [#3166](https://github.com/yandex/ClickHouse/pull/3166)
* The workflow of the `Kafka` engine has been moved to a background thread pool in order to automatically reduce the speed of data reading at high loads. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3215).
* Support for reading `Tuple` and `Nested` values of structures like `struct` in the `Cap'n'Proto` format. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3216)
* The list of top-level domains for the `firstSignificantSubdomain` function now includes the domain `biz`. [decaseal](https://github.com/yandex/ClickHouse/pull/3219)
* In the configuration of external dictionaries, `null_value` is interpreted as the value of the default data type. [#3330](https://github.com/yandex/ClickHouse/pull/3330)
* Support for the `intDiv` and `intDivOrZero` functions for `Decimal`. [b48402e8](https://github.com/yandex/ClickHouse/commit/b48402e8712e2b9b151e0eef8193811d433a1264)
* Support for the `Date`, `DateTime`, `UUID`, and `Decimal` types as a key for the `sumMap` aggregate function. [#3281](https://github.com/yandex/ClickHouse/pull/3281)
* Support for the `Decimal` data type in external dictionaries. [#3324](https://github.com/yandex/ClickHouse/pull/3324)
* Support for the `Decimal` data type in `SummingMergeTree` tables. [#3348](https://github.com/yandex/ClickHouse/pull/3348)
* Added specializations for `UUID` in `if`. [#3366](https://github.com/yandex/ClickHouse/pull/3366)
* Reduced the number of `open` and `close` system calls when reading from a `MergeTree` table. [#3283](https://github.com/yandex/ClickHouse/pull/3283)
* A `TRUNCATE TABLE` query can be executed on any replica (the query is passed to the leader replica). [Kirill Shvakov](https://github.com/yandex/ClickHouse/pull/3375)
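
As a small example of the implicit join type mentioned above (hypothetical tables `t1` and `t2`):

```sql
-- When no join type is given, INNER JOIN is assumed,
-- so these two queries are equivalent:
SELECT * FROM t1 JOIN t2 USING (id);
SELECT * FROM t1 INNER JOIN t2 USING (id);
```
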
### Bug fixes:
* Fixed an issue with `Dictionary` tables for `range_hashed` dictionaries. This error occurred in version 18.12.17. [#1702](https://github.com/yandex/ClickHouse/pull/1702)
* Fixed an error when loading `range_hashed` dictionaries (the message `Unsupported type Nullable (...)`). This error occurred in version 18.12.17. [#3362](https://github.com/yandex/ClickHouse/pull/3362)
* Fixed errors in the `pointInPolygon` function due to the accumulation of inaccurate calculations for polygons with a large number of vertices located close to each other. [#3331](https://github.com/yandex/ClickHouse/pull/3331) [#3341](https://github.com/yandex/ClickHouse/pull/3341)
* If after merging data parts, the checksum for the resulting part differs from the result of the same merge in another replica, the result of the merge is deleted and the data part is downloaded from the other replica (this is the correct behavior). But after downloading the data part, it couldn't be added to the working set because of an error that the part already exists (because the data part was deleted with some delay after the merge). This led to cyclical attempts to download the same data. [#3194](https://github.com/yandex/ClickHouse/pull/3194)
* Fixed incorrect calculation of total memory consumption by queries (because of incorrect calculation, the `max_memory_usage_for_all_queries` setting worked incorrectly and the `MemoryTracking` metric had an incorrect value). This error occurred in version 18.12.13. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3344)
* Fixed the functionality of `CREATE TABLE ... ON CLUSTER ... AS SELECT ...` This error occurred in version 18.12.13. [#3247](https://github.com/yandex/ClickHouse/pull/3247)
* Fixed unnecessary preparation of data structures for `JOIN`s on the server that initiates the request if the `JOIN` is only performed on remote servers. [#3340](https://github.com/yandex/ClickHouse/pull/3340)
* Fixed bugs in the `Kafka` engine: deadlocks after exceptions when starting to read data, and locks upon completion [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3215).
* For `Kafka` tables, the optional `schema` parameter was not passed (the schema of the `Cap'n'Proto` format). [Vojtech Splichal](https://github.com/yandex/ClickHouse/pull/3150)
* If the ensemble of ZooKeeper servers has servers that accept the connection but then immediately close it instead of responding to the handshake, ClickHouse chooses to connect to another server. Previously, this produced the error `Cannot read all data. Bytes read: 0. Bytes expected: 4.` and the server couldn't start. [8218cf3a](https://github.com/yandex/ClickHouse/commit/8218cf3a5f39a43401953769d6d12a0bb8d29da9)
* If the ensemble of ZooKeeper servers contains servers for which the DNS query returns an error, these servers are ignored. [17b8e209](https://github.com/yandex/ClickHouse/commit/17b8e209221061325ad7ba0539f03c6e65f87f29)
* Fixed type conversion between `Date` and `DateTime` when inserting data in the `VALUES` format (if `input_format_values_interpret_expressions = 1`). Previously, the conversion was performed between the numerical value of the number of days in Unix Epoch time and the Unix timestamp, which led to unexpected results (see the example after this list). [#3229](https://github.com/yandex/ClickHouse/pull/3229)
* Corrected type conversion between `Decimal` and integer numbers. [#3211](https://github.com/yandex/ClickHouse/pull/3211)
* Fixed errors in the `enable_optimize_predicate_expression` setting. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3231)
* Fixed a parsing error in CSV format with floating-point numbers if a non-default CSV separator is used, such as `;`. [#3155](https://github.com/yandex/ClickHouse/pull/3155)
* Fixed the `arrayCumSumNonNegative` function (it does not accumulate negative values if the accumulator is less than zero). [Aleksey Studnev](https://github.com/yandex/ClickHouse/pull/3163)
* Fixed how `Merge` tables work on top of `Distributed` tables when using `PREWHERE`. [#3165](https://github.com/yandex/ClickHouse/pull/3165)
* Bug fixes in the `ALTER UPDATE` query.
* Fixed bugs in the `odbc` table function that appeared in version 18.12. [#3197](https://github.com/yandex/ClickHouse/pull/3197)
* Fixed the operation of aggregate functions with `StateArray` combinators. [#3188](https://github.com/yandex/ClickHouse/pull/3188)
* Fixed a crash when dividing a `Decimal` value by zero. [69dd6609](https://github.com/yandex/ClickHouse/commit/69dd6609193beb4e7acd3e6ad216eca0ccfb8179)
* Fixed output of types for operations using `Decimal` and integer arguments. [#3224](https://github.com/yandex/ClickHouse/pull/3224)
* Fixed the segfault during `GROUP BY` on `Decimal128`. [3359ba06](https://github.com/yandex/ClickHouse/commit/3359ba06c39fcd05bfdb87d6c64154819621e13a)
* The `log_query_threads` setting (logging information about each thread of query execution) now takes effect only if the `log_queries` option (logging information about queries) is set to 1. Since the `log_query_threads` option is enabled by default, information about threads was previously logged even if query logging was disabled. [#3241](https://github.com/yandex/ClickHouse/pull/3241)
* Fixed an error in the distributed operation of the quantiles aggregate function (the error message `Not found column quantile...`). [292a8855](https://github.com/yandex/ClickHouse/commit/292a885533b8e3b41ce8993867069d14cbd5a664)
* Fixed the compatibility problem when working on a cluster of version 18.12.17 servers and older servers at the same time. For distributed queries with GROUP BY keys of both fixed and non-fixed length, if there was a large amount of data to aggregate, the returned data was not always fully aggregated (two different rows contained the same aggregation keys). [#3254](https://github.com/yandex/ClickHouse/pull/3254)
* Fixed handling of substitutions in `clickhouse-performance-test`, if the query contains only part of the substitutions declared in the test. [#3263](https://github.com/yandex/ClickHouse/pull/3263)
* Fixed an error when using `FINAL` with `PREWHERE`. [#3298](https://github.com/yandex/ClickHouse/pull/3298)
* Fixed an error when using `PREWHERE` over columns that were added during `ALTER`. [#3298](https://github.com/yandex/ClickHouse/pull/3298)
* Added a check for the absence of `arrayJoin` for `DEFAULT` and `MATERIALIZED` expressions. Previously, `arrayJoin` led to an error when inserting data. [#3337](https://github.com/yandex/ClickHouse/pull/3337)
* Added a check for the absence of `arrayJoin` in a `PREWHERE` clause. Previously, this led to messages like `Size ... doesn't match` or `Unknown compression method` when executing queries. [#3357](https://github.com/yandex/ClickHouse/pull/3357)
* Fixed segfault that could occur in rare cases after optimization that replaced AND chains from equality evaluations with the corresponding IN expression. [liuyimin-bytedance](https://github.com/yandex/ClickHouse/pull/3339)
* Minor corrections to `clickhouse-benchmark`: previously, client information was not sent to the server; now the number of queries executed is calculated more accurately when shutting down and for limiting the number of iterations. [#3351](https://github.com/yandex/ClickHouse/pull/3351) [#3352](https://github.com/yandex/ClickHouse/pull/3352)
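
To illustrate the `VALUES` conversion fix above, a minimal sketch (the table is hypothetical):

```sql
CREATE TABLE d (day Date) ENGINE = Memory;
SET input_format_values_interpret_expressions = 1;
-- Inserting a DateTime expression into a Date column now converts via the
-- calendar date instead of reinterpreting the Unix timestamp as a day number.
INSERT INTO d VALUES (now());
```
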
### Backward incompatible changes:
* Removed the `allow_experimental_decimal_type` option. The `Decimal` data type is available by default. [#3329](https://github.com/yandex/ClickHouse/pull/3329)
## ClickHouse release 18.12.17, 2018-09-16
### New features:
* `invalidate_query` (the ability to specify a query to check whether an external dictionary needs to be updated) is implemented for the `clickhouse` source. [#3126](https://github.com/yandex/ClickHouse/pull/3126)
* Added the ability to use `UInt*`, `Int*`, and `DateTime` data types (along with the `Date` type) as a `range_hashed` external dictionary key that defines the boundaries of ranges. Now `NULL` can be used to designate an open range. [Vasily Nemkov](https://github.com/yandex/ClickHouse/pull/3123)
* The `Decimal` type now supports `var*` and `stddev*` aggregate functions (see the sketch after this list). [#3129](https://github.com/yandex/ClickHouse/pull/3129)
* The `Decimal` type now supports mathematical functions (`exp`, `sin`, and so on). [#3129](https://github.com/yandex/ClickHouse/pull/3129)
* The `system.part_log` table now has the `partition_id` column. [#3089](https://github.com/yandex/ClickHouse/pull/3089)
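
A minimal sketch of the new `Decimal` support described above; the `orders` table and its `Decimal` column `price` are hypothetical:

```sql
-- var*/stddev* aggregate functions now accept Decimal arguments.
SELECT stddevPop(price), varSamp(price) FROM orders;

-- Mathematical functions accept Decimal arguments too (the result is Float64).
SELECT exp(toDecimal32(1.5, 2)), sin(toDecimal32(0.5, 2));
```
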
### Bug fixes:
* `Merge` now works correctly on `Distributed` tables. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3159)
* Fixed incompatibility (unnecessary dependency on the `glibc` version) that made it impossible to run ClickHouse on `Ubuntu Precise` and older versions. The incompatibility arose in version 18.12.13. [#3130](https://github.com/yandex/ClickHouse/pull/3130)
* Fixed errors in the `enable_optimize_predicate_expression` setting. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3107)
* Fixed a minor issue with backwards compatibility that appeared when working with a cluster of replicas on versions earlier than 18.12.13 and simultaneously creating a new replica of a table on a server with a newer version (shown in the message `Can not clone replica, because the ... updated to new ClickHouse version`, which is logical, but shouldn't happen). [#3122](https://github.com/yandex/ClickHouse/pull/3122)
### Backward incompatible changes:
* The `enable_optimize_predicate_expression` option is enabled by default (which is rather optimistic). If query analysis errors occur that are related to searching for the column names, set `enable_optimize_predicate_expression` to 0. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3107)
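
The workaround from the item above is a one-line session setting:

```sql
-- Disable predicate pushdown for the current session.
SET enable_optimize_predicate_expression = 0;
```
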
## ClickHouse release 18.12.14, 2018-09-13
### New features:
* Added support for `ALTER UPDATE` queries. [#3035](https://github.com/yandex/ClickHouse/pull/3035)
* Added the `allow_ddl` option, which restricts the user's access to DDL queries. [#3104](https://github.com/yandex/ClickHouse/pull/3104)
* Added the `min_merge_bytes_to_use_direct_io` option for `MergeTree` engines, which allows you to set a threshold for the total size of the merge (when above the threshold, data part files will be handled using O_DIRECT; see the sketch after this list). [#3117](https://github.com/yandex/ClickHouse/pull/3117)
* The `system.merges` system table now contains the `partition_id` column. [#3099](https://github.com/yandex/ClickHouse/pull/3099)
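
A sketch of setting the merge threshold above on a table (the table definition is hypothetical):

```sql
-- Merges whose total size exceeds 10 GiB will process part files using O_DIRECT.
CREATE TABLE metrics (dt Date, value UInt64)
ENGINE = MergeTree
ORDER BY dt
SETTINGS min_merge_bytes_to_use_direct_io = 10737418240;
```
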
### Improvements:
* If a data part remains unchanged during mutation, it isn't downloaded by replicas. [#3103](https://github.com/yandex/ClickHouse/pull/3103)
* Autocomplete is available for names of settings when working with `clickhouse-client`. [#3106](https://github.com/yandex/ClickHouse/pull/3106)
### Bug fixes:
* Added a check for the sizes of arrays that are elements of `Nested` type fields when inserting. [#3118](https://github.com/yandex/ClickHouse/pull/3118)
* Fixed an error updating external dictionaries with the `ODBC` source and `hashed` storage. This error occurred in version 18.12.13.
* Fixed a crash when creating a temporary table from a query with an `IN` condition. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3098)
* Fixed an error in aggregate functions for arrays that can have `NULL` elements. [Winter Zhang](https://github.com/yandex/ClickHouse/pull/3097)
## ClickHouse release 18.12.13, 2018-09-10
### New features:


@ -1,3 +1,20 @@
## ClickHouse release 18.14.12, 2018-11-02
### Bug fixes:
* Fixed an error when joining two unnamed subqueries. [#3505](https://github.com/yandex/ClickHouse/pull/3505)
* Fixed generation of an empty `WHERE` clause in queries to external databases. [hotid](https://github.com/yandex/ClickHouse/pull/3477)
* Fixed use of an incorrect timeout setting in ODBC dictionaries. [Marek Vavruša](https://github.com/yandex/ClickHouse/pull/3511)
## ClickHouse release 18.14.11, 2018-10-29
### Bug fixes:
* Fixed the error `Block structure mismatch in UNION stream: different number of columns` in queries with LIMIT. [#2156](https://github.com/yandex/ClickHouse/issues/2156)
* Fixed errors when merging data in tables containing arrays inside Nested structures. [#3397](https://github.com/yandex/ClickHouse/pull/3397)
* Fixed incorrect query results if the `merge_tree_uniform_read_distribution` setting is disabled (it is enabled by default). [#3429](https://github.com/yandex/ClickHouse/pull/3429)
* Fixed an error on inserts to a Distributed table in Native format. [#3411](https://github.com/yandex/ClickHouse/issues/3411)
## ClickHouse release 18.14.10, 2018-10-23
* The `compile_expressions` setting (JIT compilation of expressions) is disabled by default. [#3410](https://github.com/yandex/ClickHouse/pull/3410)


@ -2,14 +2,11 @@
ClickHouse is an open-source column-oriented database management system that allows generating analytical data reports in real time.
🎤🥂 **ClickHouse Meetup in [Amsterdam on November 15](https://events.yandex.com/events/meetings/15-11-2018/)** 🍰🔥🐻
## Useful Links
* [Official website](https://clickhouse.yandex/) has a quick high-level overview of ClickHouse on the main page.
* [Tutorial](https://clickhouse.yandex/tutorial.html) shows how to set up and query a small ClickHouse cluster.
* [Documentation](https://clickhouse.yandex/docs/en/) provides more in-depth information.
* [Contacts](https://clickhouse.yandex/#contacts) can help you get your questions answered if you have any.
## Upcoming Meetups
* [Beijing on October 28](http://www.clickhouse.com.cn/topic/5ba0e3f99d28dfde2ddc62a1)
* [Amsterdam on November 15](https://events.yandex.com/events/meetings/15-11-2018/)

contrib/ssl vendored

@ -1 +1 @@
Subproject commit de02224a42c69e3d8c9112c82018816f821878d0
Subproject commit 919f6f1331d500bfdd26f8bbbf88e92c0119879b


@ -265,6 +265,10 @@ if (NOT USE_INTERNAL_ZSTD_LIBRARY)
target_include_directories (dbms SYSTEM BEFORE PRIVATE ${ZSTD_INCLUDE_DIR})
endif ()
if (USE_JEMALLOC)
target_include_directories (dbms SYSTEM BEFORE PRIVATE ${JEMALLOC_INCLUDE_DIR}) # used in Interpreters/AsynchronousMetrics.cpp
endif ()
target_include_directories (dbms PUBLIC ${DBMS_INCLUDE_DIR})
target_include_directories (clickhouse_common_io PUBLIC ${DBMS_INCLUDE_DIR})
target_include_directories (clickhouse_common_io SYSTEM PUBLIC ${PCG_RANDOM_INCLUDE_DIR})


@ -63,6 +63,8 @@ namespace ErrorCodes
extern const int TOO_BIG_AST;
extern const int UNEXPECTED_AST_STRUCTURE;
extern const int SYNTAX_ERROR;
extern const int UNKNOWN_TABLE;
extern const int UNKNOWN_FUNCTION;
extern const int UNKNOWN_IDENTIFIER;
@ -109,6 +111,8 @@ static Poco::Net::HTTPResponse::HTTPStatus exceptionCodeToHTTPStatus(int excepti
exception_code == ErrorCodes::TOO_BIG_AST ||
exception_code == ErrorCodes::UNEXPECTED_AST_STRUCTURE)
return HTTPResponse::HTTP_BAD_REQUEST;
else if (exception_code == ErrorCodes::SYNTAX_ERROR)
return HTTPResponse::HTTP_BAD_REQUEST;
else if (exception_code == ErrorCodes::UNKNOWN_TABLE ||
exception_code == ErrorCodes::UNKNOWN_FUNCTION ||
exception_code == ErrorCodes::UNKNOWN_IDENTIFIER ||


@ -121,6 +121,9 @@ void TCPHandler::runImpl()
while (1)
{
/// Restore context of request.
query_context = connection_context;
/// We are waiting for a packet from the client. Thus, every `POLL_INTERVAL` seconds check whether we need to shut down.
while (!static_cast<ReadBufferFromPocoSocket &>(*in).poll(global_settings.poll_interval * 1000000) && !server.isCancelled())
;
@ -145,9 +148,6 @@ void TCPHandler::runImpl()
try
{
/// Restore context of request.
query_context = connection_context;
/// If a user passed query-local timeouts, reset socket to initial state at the end of the query
SCOPE_EXIT({state.timeout_setter.reset();});


@ -237,13 +237,13 @@ public:
for (size_t i = 0; i < size; ++i)
{
to_lower.insert((i == 0) ? lower_bound : (points[i].mean + points[i - 1].mean) / 2);
to_upper.insert((i + 1 == size) ? upper_bound : (points[i].mean + points[i + 1].mean) / 2);
to_lower.insertValue((i == 0) ? lower_bound : (points[i].mean + points[i - 1].mean) / 2);
to_upper.insertValue((i + 1 == size) ? upper_bound : (points[i].mean + points[i + 1].mean) / 2);
// linear density approximation
Weight lower_weight = (i == 0) ? points[i].weight : ((points[i - 1].weight) + points[i].weight * 3) / 4;
Weight upper_weight = (i + 1 == size) ? points[i].weight : (points[i + 1].weight + points[i].weight * 3) / 4;
to_weights.insert((lower_weight + upper_weight) / 2);
to_weights.insertValue((lower_weight + upper_weight) / 2);
}
}


@ -1,6 +1,5 @@
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTTablesInSelectQuery.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTFunction.h>
#include <TableFunctions/ITableFunction.h>
#include <TableFunctions/TableFunctionFactory.h>


@ -527,7 +527,7 @@ void ColumnLowCardinality::Index::insertPosition(UInt64 position)
while (position > getMaxPositionForCurrentType())
expandType();
positions->assumeMutableRef().insert(UInt64(position));
positions->assumeMutableRef().insert(position);
checkSizeOfType();
}


@ -117,7 +117,7 @@ public:
void getExtremes(Field & min, Field & max) const override
{
return getDictionary().index(getIndexes(), 0)->getExtremes(min, max); /// TODO: optimize
return dictionary.getColumnUnique().getNestedColumn()->index(getIndexes(), 0)->getExtremes(min, max); /// TODO: optimize
}
void reserve(size_t n) override { idx.reserve(n); }


@ -353,8 +353,8 @@ void getExtremesFromNullableContent(const ColumnVector<T> & col, const NullMap &
if (has_not_null)
{
min = typename NearestFieldType<T>::Type(cur_min);
max = typename NearestFieldType<T>::Type(cur_max);
min = cur_min;
max = cur_max;
}
}


@ -62,20 +62,13 @@ public:
UInt64 getUInt(size_t n) const override { return getNestedColumn()->getUInt(n); }
Int64 getInt(size_t n) const override { return getNestedColumn()->getInt(n); }
bool isNullAt(size_t n) const override { return is_nullable && n == getNullValueIndex(); }
StringRef serializeValueIntoArena(size_t n, Arena & arena, char const *& begin) const override
{
return column_holder->serializeValueIntoArena(n, arena, begin);
}
StringRef serializeValueIntoArena(size_t n, Arena & arena, char const *& begin) const override;
void updateHashWithValue(size_t n, SipHash & hash) const override
{
return getNestedColumn()->updateHashWithValue(n, hash);
}
int compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const override
{
auto & column_unique = static_cast<const IColumnUnique &>(rhs);
return getNestedColumn()->compareAt(n, m, *column_unique.getNestedColumn(), nan_direction_hint);
}
int compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const override;
void getExtremes(Field & min, Field & max) const override { column_holder->getExtremes(min, max); }
bool valuesHaveFixedSize() const override { return column_holder->valuesHaveFixedSize(); }
@ -298,9 +291,44 @@ size_t ColumnUnique<ColumnType>::uniqueInsertDataWithTerminatingZero(const char
return static_cast<size_t>(position);
}
template <typename ColumnType>
StringRef ColumnUnique<ColumnType>::serializeValueIntoArena(size_t n, Arena & arena, char const *& begin) const
{
if (is_nullable)
{
const UInt8 null_flag = 1;
const UInt8 not_null_flag = 0;
auto pos = arena.allocContinue(sizeof(null_flag), begin);
auto & flag = (n == getNullValueIndex() ? null_flag : not_null_flag);
memcpy(pos, &flag, sizeof(flag));
size_t nested_size = 0;
if (n != getNullValueIndex())
nested_size = column_holder->serializeValueIntoArena(n, arena, begin).size;
return StringRef(pos, sizeof(null_flag) + nested_size);
}
return column_holder->serializeValueIntoArena(n, arena, begin);
}
template <typename ColumnType>
size_t ColumnUnique<ColumnType>::uniqueDeserializeAndInsertFromArena(const char * pos, const char *& new_pos)
{
if (is_nullable)
{
UInt8 val = *reinterpret_cast<const UInt8 *>(pos);
pos += sizeof(val);
if (val)
{
new_pos = pos;
return getNullValueIndex();
}
}
auto column = getRawColumnPtr();
size_t prev_size = column->size();
new_pos = column->deserializeAndInsertFromArena(pos);
@ -318,6 +346,28 @@ size_t ColumnUnique<ColumnType>::uniqueDeserializeAndInsertFromArena(const char
return static_cast<size_t>(index_pos);
}
template <typename ColumnType>
int ColumnUnique<ColumnType>::compareAt(size_t n, size_t m, const IColumn & rhs, int nan_direction_hint) const
{
if (is_nullable)
{
/// See ColumnNullable::compareAt
bool lval_is_null = n == getNullValueIndex();
bool rval_is_null = m == getNullValueIndex();
if (unlikely(lval_is_null || rval_is_null))
{
if (lval_is_null && rval_is_null)
return 0;
else
return lval_is_null ? nan_direction_hint : -nan_direction_hint;
}
}
auto & column_unique = static_cast<const IColumnUnique &>(rhs);
return getNestedColumn()->compareAt(n, m, *column_unique.getNestedColumn(), nan_direction_hint);
}
template <typename IndexType>
static void checkIndexes(const ColumnVector<IndexType> & indexes, size_t max_dictionary_size)
{


@ -279,8 +279,8 @@ void ColumnVector<T>::getExtremes(Field & min, Field & max) const
if (size == 0)
{
min = typename NearestFieldType<T>::Type(0);
max = typename NearestFieldType<T>::Type(0);
min = T(0);
max = T(0);
return;
}


@ -193,7 +193,7 @@ public:
return data.allocated_bytes();
}
void insert(const T value)
void insertValue(const T value)
{
data.push_back(value);
}
@ -217,7 +217,7 @@ public:
Field operator[](size_t n) const override
{
return typename NearestFieldType<T>::Type(data[n]);
return data[n];
}
void get(size_t n, Field & res) const override


@ -11,6 +11,11 @@
#include <Poco/Logger.h>
#if defined(ARCADIA_ROOT)
# include <util/thread/singleton.h>
#endif
namespace DB
{
@ -21,10 +26,25 @@ namespace ErrorCodes
SimpleObjectPool<TaskStatsInfoGetter> task_stats_info_getter_pool;
// Smoker's implementation to avoid thread_local usage: error: undefined symbol: __cxa_thread_atexit
#if defined(ARCADIA_ROOT)
struct ThreadStatusPtrHolder : ThreadStatusPtr
{
ThreadStatusPtrHolder() { ThreadStatusPtr::operator=(ThreadStatus::create()); }
};
struct ThreadScopePtrHolder : CurrentThread::ThreadScopePtr
{
ThreadScopePtrHolder() { CurrentThread::ThreadScopePtr::operator=(std::make_shared<CurrentThread::ThreadScope>()); }
};
# define current_thread (*FastTlsSingleton<ThreadStatusPtrHolder>())
# define current_thread_scope (*FastTlsSingleton<ThreadScopePtrHolder>())
#else
/// Order of current_thread and current_thread_scope matters
thread_local ThreadStatusPtr current_thread = ThreadStatus::create();
thread_local CurrentThread::ThreadScopePtr current_thread_scope = std::make_shared<CurrentThread::ThreadScope>();
thread_local ThreadStatusPtr _current_thread = ThreadStatus::create();
thread_local CurrentThread::ThreadScopePtr _current_thread_scope = std::make_shared<CurrentThread::ThreadScope>();
# define current_thread _current_thread
# define current_thread_scope _current_thread_scope
#endif
void CurrentThread::updatePerformanceCounters()
{


@ -420,7 +420,7 @@ protected:
void destroyElements()
{
if (!std::is_trivially_destructible_v<Cell>)
for (iterator it = begin(); it != end(); ++it)
for (iterator it = begin(), it_end = end(); it != it_end; ++it)
it.ptr->~Cell();
}
@ -445,12 +445,15 @@ protected:
Derived & operator++()
{
/// If iterator was pointed to ZeroValueStorage, move it to the beginning of the main buffer.
if (unlikely(ptr->isZero(*container)))
ptr = container->buf;
else
++ptr;
while (ptr < container->buf + container->grower.bufSize() && ptr->isZero(*container))
/// Skip empty cells in the main buffer.
auto buf_end = container->buf + container->grower.bufSize();
while (ptr < buf_end && ptr->isZero(*container))
++ptr;
return static_cast<Derived &>(*this);
@ -569,12 +572,15 @@ public:
return iteratorToZero();
const Cell * ptr = buf;
while (ptr < buf + grower.bufSize() && ptr->isZero(*this))
auto buf_end = buf + grower.bufSize();
while (ptr < buf_end && ptr->isZero(*this))
++ptr;
return const_iterator(this, ptr);
}
const_iterator cbegin() const { return begin(); }
iterator begin()
{
if (!buf)
@ -584,13 +590,15 @@ public:
return iteratorToZero();
Cell * ptr = buf;
while (ptr < buf + grower.bufSize() && ptr->isZero(*this))
auto buf_end = buf + grower.bufSize();
while (ptr < buf_end && ptr->isZero(*this))
++ptr;
return iterator(this, ptr);
}
const_iterator end() const { return const_iterator(this, buf + grower.bufSize()); }
const_iterator cend() const { return end(); }
iterator end() { return iterator(this, buf + grower.bufSize()); }
@ -811,9 +819,9 @@ public:
if (this->hasZero())
this->zeroValue()->write(wb);
for (size_t i = 0; i < grower.bufSize(); ++i)
if (!buf[i].isZero(*this))
buf[i].write(wb);
for (auto ptr = buf, buf_end = buf + grower.bufSize(); ptr < buf_end; ++ptr)
if (!ptr->isZero(*this))
ptr->write(wb);
}
void writeText(DB::WriteBuffer & wb) const
@ -827,12 +835,12 @@ public:
this->zeroValue()->writeText(wb);
}
for (size_t i = 0; i < grower.bufSize(); ++i)
for (auto ptr = buf, buf_end = buf + grower.bufSize(); ptr < buf_end; ++ptr)
{
if (!buf[i].isZero(*this))
if (!ptr->isZero(*this))
{
DB::writeChar(',', wb);
buf[i].writeText(wb);
ptr->writeText(wb);
}
}
}


@ -395,24 +395,8 @@ void ZooKeeper::read(T & x)
}
struct ZooKeeperResponse;
using ZooKeeperResponsePtr = std::shared_ptr<ZooKeeperResponse>;
struct ZooKeeperRequest : virtual Request
{
ZooKeeper::XID xid = 0;
bool has_watch = false;
/// If the request was not sent and an error occurred, we are sure that the server has not processed it.
/// If the request was sent and an error occurred before we received the response, we cannot be sure whether it was processed or not.
bool probably_sent = false;
virtual ~ZooKeeperRequest() {}
virtual ZooKeeper::OpNum getOpNum() const = 0;
/// Writes length, xid, op_num, then the rest.
void write(WriteBuffer & out) const
void ZooKeeperRequest::write(WriteBuffer & out) const
{
/// Excessive copy to calculate length.
WriteBufferFromOwnString buf;
@ -423,10 +407,6 @@ struct ZooKeeperRequest : virtual Request
out.next();
}
virtual void writeImpl(WriteBuffer &) const = 0;
virtual ZooKeeperResponsePtr makeResponse() const = 0;
};
struct ZooKeeperResponse : virtual Response
{


@ -240,4 +240,29 @@ private:
CurrentMetrics::Increment active_session_metric_increment{CurrentMetrics::ZooKeeperSession};
};
struct ZooKeeperResponse;
using ZooKeeperResponsePtr = std::shared_ptr<ZooKeeperResponse>;
/// Exposed in header file for Yandex.Metrica code.
struct ZooKeeperRequest : virtual Request
{
ZooKeeper::XID xid = 0;
bool has_watch = false;
/// If the request was not sent and an error occurred, we are sure that the server has not processed it.
/// If the request was sent and an error occurred before we received the response, we cannot be sure whether it was processed or not.
bool probably_sent = false;
virtual ~ZooKeeperRequest() {}
virtual ZooKeeper::OpNum getOpNum() const = 0;
/// Writes length, xid, op_num, then the rest.
void write(WriteBuffer & out) const;
virtual void writeImpl(WriteBuffer &) const = 0;
virtual ZooKeeperResponsePtr makeResponse() const = 0;
};
}


@ -9,6 +9,8 @@
#include <Common/UInt128.h>
#include <Core/Types.h>
#include <Core/Defines.h>
#include <Core/UUID.h>
#include <common/DayNum.h>
#include <common/strong_typedef.h>
@ -181,10 +183,7 @@ public:
}
template <typename T>
Field(T && rhs, std::integral_constant<int, Field::TypeToEnum<std::decay_t<T>>::value> * = nullptr)
{
createConcrete(std::forward<T>(rhs));
}
Field(T && rhs, std::enable_if_t<!std::is_same_v<std::decay_t<T>, Field>, void *> = nullptr);
/// Create a string inplace.
Field(const char * data, size_t size)
@ -242,18 +241,7 @@ public:
template <typename T>
std::enable_if_t<!std::is_same_v<std::decay_t<T>, Field>, Field &>
operator= (T && rhs)
{
if (which != TypeToEnum<std::decay_t<T>>::value)
{
destroy();
createConcrete(std::forward<T>(rhs));
}
else
assignConcrete(std::forward<T>(rhs));
return *this;
}
operator= (T && rhs);
~Field()
{
@ -596,7 +584,9 @@ template <> struct NearestFieldType<UInt8> { using Type = UInt64; };
template <> struct NearestFieldType<UInt16> { using Type = UInt64; };
template <> struct NearestFieldType<UInt32> { using Type = UInt64; };
template <> struct NearestFieldType<UInt64> { using Type = UInt64; };
template <> struct NearestFieldType<DayNum> { using Type = UInt64; };
template <> struct NearestFieldType<UInt128> { using Type = UInt128; };
template <> struct NearestFieldType<UUID> { using Type = UInt128; };
template <> struct NearestFieldType<Int8> { using Type = Int64; };
template <> struct NearestFieldType<Int16> { using Type = Int64; };
template <> struct NearestFieldType<Int32> { using Type = Int64; };
@ -605,19 +595,57 @@ template <> struct NearestFieldType<Int128> { using Type = Int128; };
template <> struct NearestFieldType<Decimal32> { using Type = DecimalField<Decimal32>; };
template <> struct NearestFieldType<Decimal64> { using Type = DecimalField<Decimal64>; };
template <> struct NearestFieldType<Decimal128> { using Type = DecimalField<Decimal128>; };
template <> struct NearestFieldType<DecimalField<Decimal32>> { using Type = DecimalField<Decimal32>; };
template <> struct NearestFieldType<DecimalField<Decimal64>> { using Type = DecimalField<Decimal64>; };
template <> struct NearestFieldType<DecimalField<Decimal128>> { using Type = DecimalField<Decimal128>; };
template <> struct NearestFieldType<Float32> { using Type = Float64; };
template <> struct NearestFieldType<Float64> { using Type = Float64; };
template <> struct NearestFieldType<const char*> { using Type = String; };
template <> struct NearestFieldType<String> { using Type = String; };
template <> struct NearestFieldType<Array> { using Type = Array; };
template <> struct NearestFieldType<Tuple> { using Type = Tuple; };
template <> struct NearestFieldType<bool> { using Type = UInt64; };
template <> struct NearestFieldType<Null> { using Type = Null; };
template <typename T>
decltype(auto) nearestFieldType(T && x)
{
using U = typename NearestFieldType<std::decay_t<T>>::Type;
if constexpr (std::is_same_v<std::decay_t<T>, U>)
return std::forward<T>(x);
else
return U(x);
}
/// This (rather tricky) code is to avoid ambiguity in expressions like
/// Field f = 1;
/// instead of
/// Field f = Int64(1);
/// Things to note:
/// 1. float <--> int needs an explicit cast
/// 2. customized types need an explicit cast
template <typename T>
Field::Field(T && rhs, std::enable_if_t<!std::is_same_v<std::decay_t<T>, Field>, void *>)
{
auto && val = nearestFieldType(std::forward<T>(rhs));
createConcrete(std::forward<decltype(val)>(val));
}
template <typename T>
typename NearestFieldType<T>::Type nearestFieldType(const T & x)
std::enable_if_t<!std::is_same_v<std::decay_t<T>, Field>, Field &>
Field::operator= (T && rhs)
{
return typename NearestFieldType<T>::Type(x);
auto && val = nearestFieldType(std::forward<T>(rhs));
using U = decltype(val);
if (which != TypeToEnum<std::decay_t<U>>::value)
{
destroy();
createConcrete(std::forward<U>(val));
}
else
assignConcrete(std::forward<U>(val));
return *this;
}


@ -39,7 +39,7 @@ FilterBlockInputStream::FilterBlockInputStream(const BlockInputStreamPtr & input
{
/// Replace the filter column to a constant with value 1.
FilterDescription filter_description_check(*column_elem.column);
column_elem.column = column_elem.type->createColumnConst(header.rows(), UInt64(1));
column_elem.column = column_elem.type->createColumnConst(header.rows(), 1u);
}
if (remove_filter)
@ -144,7 +144,7 @@ Block FilterBlockInputStream::readImpl()
if (filtered_rows == filter_and_holder.data->size())
{
/// Replace the column with the filter by a constant.
res.safeGetByPosition(filter_column).column = res.safeGetByPosition(filter_column).type->createColumnConst(filtered_rows, UInt64(1));
res.safeGetByPosition(filter_column).column = res.safeGetByPosition(filter_column).type->createColumnConst(filtered_rows, 1u);
/// No need to touch the rest of the columns.
return removeFilterIfNeed(std::move(res));
}
@ -161,7 +161,7 @@ Block FilterBlockInputStream::readImpl()
/// Example:
/// SELECT materialize(100) AS x WHERE x
/// will work incorrectly.
current_column.column = current_column.type->createColumnConst(filtered_rows, UInt64(1));
current_column.column = current_column.type->createColumnConst(filtered_rows, 1u);
continue;
}


@ -251,7 +251,7 @@ void GraphiteRollupSortedBlockInputStream::startNextGroup(MutableColumns & merge
void GraphiteRollupSortedBlockInputStream::finishCurrentGroup(MutableColumns & merged_columns)
{
/// Insert calculated values of the columns `time`, `value`, `version`.
merged_columns[time_column_num]->insert(UInt64(current_time_rounded));
merged_columns[time_column_num]->insert(current_time_rounded);
merged_columns[version_column_num]->insertFrom(
*(*current_subgroup_newest_row.columns)[version_column_num], current_subgroup_newest_row.row_num);


@ -225,7 +225,7 @@ void DataTypeEnum<Type>::deserializeBinaryBulk(
template <typename Type>
Field DataTypeEnum<Type>::getDefault() const
{
return typename NearestFieldType<FieldType>::Type(values.front().second);
return values.front().second;
}
template <typename Type>
@ -293,7 +293,7 @@ Field DataTypeEnum<Type>::castToValue(const Field & value_or_name) const
{
if (value_or_name.getType() == Field::Types::String)
{
return static_cast<Int64>(getValue(value_or_name.get<String>()));
return getValue(value_or_name.get<String>());
}
else if (value_or_name.getType() == Field::Types::Int64
|| value_or_name.getType() == Field::Types::UInt64)


@ -464,7 +464,7 @@ ColumnPtr DictionaryBlockInputStream<DictionaryType, Key>::getColumnFromIds(cons
auto column_vector = ColumnVector<UInt64>::create();
column_vector->getData().reserve(ids_to_fill.size());
for (UInt64 id : ids_to_fill)
column_vector->insert(id);
column_vector->insertValue(id);
return column_vector;
}


@ -156,7 +156,7 @@ DictionarySourcePtr DictionarySourceFactory::create(
{
#if USE_POCO_SQLODBC || USE_POCO_DATAODBC
const auto & global_config = context.getConfigRef();
BridgeHelperPtr bridge = std::make_shared<XDBCBridgeHelper<ODBCBridgeMixin>>(global_config, context.getSettings().http_connection_timeout, config.getString(config_prefix + ".odbc.connection_string"));
BridgeHelperPtr bridge = std::make_shared<XDBCBridgeHelper<ODBCBridgeMixin>>(global_config, context.getSettings().http_receive_timeout, config.getString(config_prefix + ".odbc.connection_string"));
return std::make_unique<XDBCDictionarySource>(dict_struct, config, config_prefix + ".odbc", sample_block, context, bridge);
#else
throw Exception{"Dictionary source of type `odbc` is disabled because poco library was built without ODBC support.",
@ -167,7 +167,7 @@ DictionarySourcePtr DictionarySourceFactory::create(
{
throw Exception{"Dictionary source of type `jdbc` is disabled until consistent support for nullable fields.",
ErrorCodes::SUPPORT_IS_DISABLED};
// BridgeHelperPtr bridge = std::make_shared<XDBCBridgeHelper<JDBCBridgeMixin>>(config, context.getSettings().http_connection_timeout, config.getString(config_prefix + ".connection_string"));
// BridgeHelperPtr bridge = std::make_shared<XDBCBridgeHelper<JDBCBridgeMixin>>(config, context.getSettings().http_receive_timeout, config.getString(config_prefix + ".connection_string"));
// return std::make_unique<XDBCDictionarySource>(dict_struct, config, config_prefix + ".jdbc", sample_block, context, bridge);
}
else if ("executable" == source_type)


@ -42,19 +42,19 @@ namespace
{
switch (type)
{
case ValueType::UInt8: static_cast<ColumnUInt8 &>(column).insert(value.getUInt()); break;
case ValueType::UInt16: static_cast<ColumnUInt16 &>(column).insert(value.getUInt()); break;
case ValueType::UInt32: static_cast<ColumnUInt32 &>(column).insert(value.getUInt()); break;
case ValueType::UInt64: static_cast<ColumnUInt64 &>(column).insert(value.getUInt()); break;
case ValueType::Int8: static_cast<ColumnInt8 &>(column).insert(value.getInt()); break;
case ValueType::Int16: static_cast<ColumnInt16 &>(column).insert(value.getInt()); break;
case ValueType::Int32: static_cast<ColumnInt32 &>(column).insert(value.getInt()); break;
case ValueType::Int64: static_cast<ColumnInt64 &>(column).insert(value.getInt()); break;
case ValueType::Float32: static_cast<ColumnFloat32 &>(column).insert(value.getDouble()); break;
case ValueType::Float64: static_cast<ColumnFloat64 &>(column).insert(value.getDouble()); break;
case ValueType::UInt8: static_cast<ColumnUInt8 &>(column).insertValue(value.getUInt()); break;
case ValueType::UInt16: static_cast<ColumnUInt16 &>(column).insertValue(value.getUInt()); break;
case ValueType::UInt32: static_cast<ColumnUInt32 &>(column).insertValue(value.getUInt()); break;
case ValueType::UInt64: static_cast<ColumnUInt64 &>(column).insertValue(value.getUInt()); break;
case ValueType::Int8: static_cast<ColumnInt8 &>(column).insertValue(value.getInt()); break;
case ValueType::Int16: static_cast<ColumnInt16 &>(column).insertValue(value.getInt()); break;
case ValueType::Int32: static_cast<ColumnInt32 &>(column).insertValue(value.getInt()); break;
case ValueType::Int64: static_cast<ColumnInt64 &>(column).insertValue(value.getInt()); break;
case ValueType::Float32: static_cast<ColumnFloat32 &>(column).insertValue(value.getDouble()); break;
case ValueType::Float64: static_cast<ColumnFloat64 &>(column).insertValue(value.getDouble()); break;
case ValueType::String: static_cast<ColumnString &>(column).insertData(value.data(), value.size()); break;
case ValueType::Date: static_cast<ColumnUInt16 &>(column).insert(UInt16{value.getDate().getDayNum()}); break;
case ValueType::DateTime: static_cast<ColumnUInt32 &>(column).insert(time_t{value.getDateTime()}); break;
case ValueType::Date: static_cast<ColumnUInt16 &>(column).insertValue(UInt16(value.getDate().getDayNum())); break;
case ValueType::DateTime: static_cast<ColumnUInt32 &>(column).insertValue(UInt32(value.getDateTime())); break;
case ValueType::UUID: static_cast<ColumnUInt128 &>(column).insert(parse<UUID>(value.data(), value.size())); break;
}
}


@ -48,19 +48,19 @@ namespace
{
switch (type)
{
case ValueType::UInt8: static_cast<ColumnUInt8 &>(column).insert(value.convert<UInt64>()); break;
case ValueType::UInt16: static_cast<ColumnUInt16 &>(column).insert(value.convert<UInt64>()); break;
case ValueType::UInt32: static_cast<ColumnUInt32 &>(column).insert(value.convert<UInt64>()); break;
case ValueType::UInt64: static_cast<ColumnUInt64 &>(column).insert(value.convert<UInt64>()); break;
case ValueType::Int8: static_cast<ColumnInt8 &>(column).insert(value.convert<Int64>()); break;
case ValueType::Int16: static_cast<ColumnInt16 &>(column).insert(value.convert<Int64>()); break;
case ValueType::Int32: static_cast<ColumnInt32 &>(column).insert(value.convert<Int64>()); break;
case ValueType::Int64: static_cast<ColumnInt64 &>(column).insert(value.convert<Int64>()); break;
case ValueType::Float32: static_cast<ColumnFloat32 &>(column).insert(value.convert<Float64>()); break;
case ValueType::Float64: static_cast<ColumnFloat64 &>(column).insert(value.convert<Float64>()); break;
case ValueType::UInt8: static_cast<ColumnUInt8 &>(column).insertValue(value.convert<UInt64>()); break;
case ValueType::UInt16: static_cast<ColumnUInt16 &>(column).insertValue(value.convert<UInt64>()); break;
case ValueType::UInt32: static_cast<ColumnUInt32 &>(column).insertValue(value.convert<UInt64>()); break;
case ValueType::UInt64: static_cast<ColumnUInt64 &>(column).insertValue(value.convert<UInt64>()); break;
case ValueType::Int8: static_cast<ColumnInt8 &>(column).insertValue(value.convert<Int64>()); break;
case ValueType::Int16: static_cast<ColumnInt16 &>(column).insertValue(value.convert<Int64>()); break;
case ValueType::Int32: static_cast<ColumnInt32 &>(column).insertValue(value.convert<Int64>()); break;
case ValueType::Int64: static_cast<ColumnInt64 &>(column).insertValue(value.convert<Int64>()); break;
case ValueType::Float32: static_cast<ColumnFloat32 &>(column).insertValue(value.convert<Float64>()); break;
case ValueType::Float64: static_cast<ColumnFloat64 &>(column).insertValue(value.convert<Float64>()); break;
case ValueType::String: static_cast<ColumnString &>(column).insert(value.convert<String>()); break;
case ValueType::Date: static_cast<ColumnUInt16 &>(column).insert(UInt16{LocalDate{value.convert<String>()}.getDayNum()}); break;
case ValueType::DateTime: static_cast<ColumnUInt32 &>(column).insert(time_t{LocalDateTime{value.convert<String>()}}); break;
case ValueType::Date: static_cast<ColumnUInt16 &>(column).insertValue(UInt16{LocalDate{value.convert<String>()}.getDayNum()}); break;
case ValueType::DateTime: static_cast<ColumnUInt32 &>(column).insertValue(time_t{LocalDateTime{value.convert<String>()}}); break;
case ValueType::UUID: static_cast<ColumnUInt128 &>(column).insert(parse<UUID>(value.convert<std::string>())); break;
}


@ -141,7 +141,7 @@ ColumnPtr RangeDictionaryBlockInputStream<DictionaryType, RangeType, Key>::getCo
auto column_vector = ColumnVector<T>::create();
column_vector->getData().reserve(array.size());
for (T value : array)
column_vector->insert(value);
column_vector->insertValue(value);
return column_vector;
}


@ -619,12 +619,12 @@ Columns TrieDictionary::getKeyColumns() const
#if defined(__SIZEOF_INT128__)
auto getter = [& ip_column, & mask_column](__uint128_t ip, size_t mask)
{
UInt64 * ip_array = reinterpret_cast<UInt64 *>(&ip);
Poco::UInt64 * ip_array = reinterpret_cast<Poco::UInt64 *>(&ip); // Poco:: for old Poco + macOS
ip_array[0] = Poco::ByteOrder::fromNetwork(ip_array[0]);
ip_array[1] = Poco::ByteOrder::fromNetwork(ip_array[1]);
std::swap(ip_array[0], ip_array[1]);
ip_column->insertData(reinterpret_cast<const char *>(ip_array), IPV6_BINARY_LENGTH);
mask_column->insert(static_cast<UInt8>(mask));
mask_column->insertValue(static_cast<UInt8>(mask));
};
trieTraverse<decltype(getter), __uint128_t>(trie, std::move(getter));


@ -46,13 +46,13 @@ Field convertNodeToField(capnp::DynamicValue::Reader value)
case capnp::DynamicValue::VOID:
return Field();
case capnp::DynamicValue::BOOL:
return UInt64(value.as<bool>() ? 1 : 0);
return value.as<bool>() ? 1u : 0u;
case capnp::DynamicValue::INT:
return Int64((value.as<int64_t>()));
return value.as<int64_t>();
case capnp::DynamicValue::UINT:
return UInt64(value.as<uint64_t>());
return value.as<uint64_t>();
case capnp::DynamicValue::FLOAT:
return Float64(value.as<double>());
return value.as<double>();
case capnp::DynamicValue::TEXT:
{
auto arr = value.as<capnp::Text>();
@ -73,7 +73,7 @@ Field convertNodeToField(capnp::DynamicValue::Reader value)
return res;
}
case capnp::DynamicValue::ENUM:
return UInt64(value.as<capnp::DynamicEnum>().getRaw());
return value.as<capnp::DynamicEnum>().getRaw();
case capnp::DynamicValue::STRUCT:
{
auto structValue = value.as<capnp::DynamicStruct>();


@ -116,9 +116,9 @@ bool ValuesRowInputStream::read(MutableColumns & columns)
std::pair<Field, DataTypePtr> value_raw = evaluateConstantExpression(ast, *context);
Field value = convertFieldToType(value_raw.first, type, value_raw.second.get());
/// Check that we are indeed allowed to insert a NULL.
if (value.isNull())
{
/// Check that we are indeed allowed to insert a NULL.
if (!type.isNullable())
throw Exception{"Expression returns value " + applyVisitor(FieldVisitorToString(), value)
+ ", that is out of range of type " + type.getName()


@ -835,7 +835,7 @@ private:
if (!in.eof())
throw Exception("String is too long for Date: " + string_value.toString());
ColumnPtr parsed_const_date_holder = DataTypeDate().createColumnConst(input_rows_count, UInt64(date));
ColumnPtr parsed_const_date_holder = DataTypeDate().createColumnConst(input_rows_count, date);
const ColumnConst * parsed_const_date = static_cast<const ColumnConst *>(parsed_const_date_holder.get());
executeNumLeftType<DataTypeDate::FieldType>(block, result,
left_is_num ? col_left_untyped : parsed_const_date,
@ -863,7 +863,7 @@ private:
if (!in.eof())
throw Exception("String is too long for UUID: " + string_value.toString());
ColumnPtr parsed_const_uuid_holder = DataTypeUUID().createColumnConst(input_rows_count, UInt128(uuid));
ColumnPtr parsed_const_uuid_holder = DataTypeUUID().createColumnConst(input_rows_count, uuid);
const ColumnConst * parsed_const_uuid = static_cast<const ColumnConst *>(parsed_const_uuid_holder.get());
executeNumLeftType<DataTypeUUID::FieldType>(block, result,
left_is_num ? col_left_untyped : parsed_const_uuid,


@ -1445,7 +1445,7 @@ private:
UInt8 res = 0;
dictionary->isInConstantConstant(child_id, ancestor_id, res);
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(child_id_col->size(), UInt64(res));
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(child_id_col->size(), res);
}
else
throw Exception{"Illegal column " + ancestor_id_col_untyped->getName()


@ -293,7 +293,7 @@ private:
const auto col_const_y = static_cast<const ColumnConst *> (col_y);
size_t start_index = 0;
UInt8 res = isPointInEllipses(col_const_x->getValue<Float64>(), col_const_y->getValue<Float64>(), ellipses, ellipses_count, start_index);
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(size, UInt64(res));
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(size, res);
}
else
{


@ -54,8 +54,10 @@ namespace ErrorCodes
* Fast non-cryptographic hash function for strings:
* cityHash64: String -> UInt64
*
* A non-cryptographic hash from a tuple of values of any types (uses cityHash64 for strings and intHash64 for numbers):
* Non-cryptographic hashes from a tuple of values of any types (uses the respective function for strings and intHash64 for numbers):
* cityHash64: any* -> UInt64
* sipHash64: any* -> UInt64
* halfMD5: any* -> UInt64
*
* Fast non-cryptographic hash function from any integer:
* intHash32: number -> UInt32
@ -63,8 +65,31 @@ namespace ErrorCodes
*
*/
struct IntHash32Impl
{
using ReturnType = UInt32;
static UInt32 apply(UInt64 x)
{
/// seed is taken from /dev/urandom. It allows you to avoid undesirable dependencies with hashes in different data structures.
return intHash32<0x75D9543DE018BF45ULL>(x);
}
};
struct IntHash64Impl
{
using ReturnType = UInt64;
static UInt64 apply(UInt64 x)
{
return intHash64(x ^ 0x4CF2D2BAAE6DA887ULL);
}
};
struct HalfMD5Impl
{
static constexpr auto name = "halfMD5";
using ReturnType = UInt64;
static UInt64 apply(const char * begin, size_t size)
@ -80,8 +105,18 @@ struct HalfMD5Impl
MD5_Update(&ctx, reinterpret_cast<const unsigned char *>(begin), size);
MD5_Final(buf.char_data, &ctx);
return Poco::ByteOrder::flipBytes(buf.uint64_data); /// Compatibility with existing code.
return Poco::ByteOrder::flipBytes(static_cast<Poco::UInt64>(buf.uint64_data)); /// Compatibility with existing code. The cast is needed for old Poco and macOS, where UInt64 != uint64_t.
}
static UInt64 combineHashes(UInt64 h1, UInt64 h2)
{
UInt64 hashes[] = {h1, h2};
return apply(reinterpret_cast<const char *>(hashes), 16);
}
/// If true, it will use intHash32 or intHash64 to hash POD types. This behaviour is intended for better performance of some functions.
/// Otherwise it will hash bytes in memory as a string using corresponding hash function.
static constexpr bool use_int_hash_for_pods = false;
};
struct MD5Impl
@ -142,14 +177,22 @@ struct SHA256Impl
struct SipHash64Impl
{
static constexpr auto name = "sipHash64";
using ReturnType = UInt64;
static UInt64 apply(const char * begin, size_t size)
{
return sipHash64(begin, size);
}
};
static UInt64 combineHashes(UInt64 h1, UInt64 h2)
{
UInt64 hashes[] = {h1, h2};
return apply(reinterpret_cast<const char *>(hashes), 16);
}
static constexpr bool use_int_hash_for_pods = false;
};
struct SipHash128Impl
{
@ -162,25 +205,154 @@ struct SipHash128Impl
}
};
struct IntHash32Impl
/** Why do we need MurmurHash2?
 * MurmurHash2 is an outdated hash function, superseded by MurmurHash3 and subsequently by CityHash, xxHash, HighwayHash.
 * Usually there is no reason to use MurmurHash.
 * It is needed for the cases when you already have MurmurHash in some application and you want to reproduce it
 * in ClickHouse as is. For example, it is needed to reproduce the behaviour
 * of the NGINX A/B testing module: https://nginx.ru/en/docs/http/ngx_http_split_clients_module.html
 */
struct MurmurHash2Impl32
{
static constexpr auto name = "murmurHash2_32";
using ReturnType = UInt32;
static UInt32 apply(UInt64 x)
static UInt32 apply(const char * data, const size_t size)
{
/// seed is taken from /dev/urandom. It allows you to avoid undesirable dependencies with hashes in different data structures.
return intHash32<0x75D9543DE018BF45ULL>(x);
return MurmurHash2(data, size, 0);
}
static UInt32 combineHashes(UInt32 h1, UInt32 h2)
{
return IntHash32Impl::apply(h1) ^ h2;
}
static constexpr bool use_int_hash_for_pods = false;
};
struct MurmurHash2Impl64
{
static constexpr auto name = "murmurHash2_64";
using ReturnType = UInt64;
static UInt64 apply(const char * data, const size_t size)
{
return MurmurHash64A(data, size, 0);
}
static UInt64 combineHashes(UInt64 h1, UInt64 h2)
{
return IntHash64Impl::apply(h1) ^ h2;
}
static constexpr bool use_int_hash_for_pods = false;
};
struct MurmurHash3Impl32
{
static constexpr auto name = "murmurHash3_32";
using ReturnType = UInt32;
static UInt32 apply(const char * data, const size_t size)
{
union
{
UInt32 h;
char bytes[sizeof(h)];
};
MurmurHash3_x86_32(data, size, 0, bytes);
return h;
}
static UInt32 combineHashes(UInt32 h1, UInt32 h2)
{
return IntHash32Impl::apply(h1) ^ h2;
}
static constexpr bool use_int_hash_for_pods = false;
};
struct MurmurHash3Impl64
{
static constexpr auto name = "murmurHash3_64";
using ReturnType = UInt64;
static UInt64 apply(const char * data, const size_t size)
{
union
{
UInt64 h[2];
char bytes[16];
};
MurmurHash3_x64_128(data, size, 0, bytes);
return h[0] ^ h[1];
}
static UInt64 combineHashes(UInt64 h1, UInt64 h2)
{
return IntHash64Impl::apply(h1) ^ h2;
}
static constexpr bool use_int_hash_for_pods = false;
};
struct MurmurHash3Impl128
{
static constexpr auto name = "murmurHash3_128";
enum { length = 16 };
static void apply(const char * begin, const size_t size, unsigned char * out_char_data)
{
MurmurHash3_x64_128(begin, size, 0, out_char_data);
}
};
struct IntHash64Impl
struct ImplCityHash64
{
static constexpr auto name = "cityHash64";
using ReturnType = UInt64;
using uint128_t = CityHash_v1_0_2::uint128;
static UInt64 apply(UInt64 x)
static auto combineHashes(UInt64 h1, UInt64 h2) { return CityHash_v1_0_2::Hash128to64(uint128_t(h1, h2)); }
static auto apply(const char * s, const size_t len) { return CityHash_v1_0_2::CityHash64(s, len); }
static constexpr bool use_int_hash_for_pods = true;
};
// see farmhash.h for the definition of NAMESPACE_FOR_HASH_FUNCTIONS
struct ImplFarmHash64
{
static constexpr auto name = "farmHash64";
using ReturnType = UInt64;
using uint128_t = NAMESPACE_FOR_HASH_FUNCTIONS::uint128_t;
static auto combineHashes(UInt64 h1, UInt64 h2) { return NAMESPACE_FOR_HASH_FUNCTIONS::Hash128to64(uint128_t(h1, h2)); }
static auto apply(const char * s, const size_t len) { return NAMESPACE_FOR_HASH_FUNCTIONS::Hash64(s, len); }
static constexpr bool use_int_hash_for_pods = true;
};
struct ImplMetroHash64
{
static constexpr auto name = "metroHash64";
using ReturnType = UInt64;
using uint128_t = CityHash_v1_0_2::uint128;
static auto combineHashes(UInt64 h1, UInt64 h2) { return CityHash_v1_0_2::Hash128to64(uint128_t(h1, h2)); }
static auto apply(const char * s, const size_t len)
{
return intHash64(x ^ 0x4CF2D2BAAE6DA887ULL);
union
{
UInt64 u64;
UInt8 u8[sizeof(u64)];
};
metrohash64_1(reinterpret_cast<const UInt8 *>(s), len, 0, u8);
return u64;
}
static constexpr bool use_int_hash_for_pods = true;
};
@ -242,12 +414,6 @@ public:
};
inline bool allowIntHash(const IDataType * data_type)
{
return data_type->isValueRepresentedByNumber();
}
template <typename Impl, typename Name>
class FunctionIntHash : public IFunction
{
@ -291,7 +457,7 @@ public:
DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
{
if (!allowIntHash(arguments[0].get()))
if (!arguments[0]->isValueRepresentedByNumber())
throw Exception("Illegal type " + arguments[0]->getName() + " of argument of function " + getName(),
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
@ -322,19 +488,18 @@ public:
};
/** We use hash functions called CityHash, FarmHash, MetroHash.
* In this regard, this template is named with the words `NeighborhoodHash`.
*/
template <typename Impl>
class FunctionNeighbourhoodHash64 : public IFunction
class FunctionAnyHash : public IFunction
{
public:
static constexpr auto name = Impl::name;
static FunctionPtr create(const Context &) { return std::make_shared<FunctionNeighbourhoodHash64>(); }
static FunctionPtr create(const Context &) { return std::make_shared<FunctionAnyHash>(); }
private:
using ToType = typename Impl::ReturnType;
template <typename FromType, bool first>
void executeIntType(const IColumn * column, ColumnUInt64::Container & vec_to)
void executeIntType(const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
{
if (const ColumnVector<FromType> * col_from = checkAndGetColumn<ColumnVector<FromType>>(column))
{
@ -342,16 +507,35 @@ private:
size_t size = vec_from.size();
for (size_t i = 0; i < size; ++i)
{
UInt64 h = IntHash64Impl::apply(ext::bit_cast<UInt64>(vec_from[i]));
ToType h;
if constexpr (Impl::use_int_hash_for_pods)
{
if constexpr (std::is_same_v<ToType, UInt64>)
h = IntHash64Impl::apply(ext::bit_cast<UInt64>(vec_from[i]));
else
h = IntHash32Impl::apply(ext::bit_cast<UInt32>(vec_from[i]));
}
else
{
h = Impl::apply(reinterpret_cast<const char *>(&vec_from[i]), sizeof(vec_from[i]));
}
if (first)
vec_to[i] = h;
else
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], h));
vec_to[i] = Impl::combineHashes(vec_to[i], h);
}
}
else if (auto col_from = checkAndGetColumnConst<ColumnVector<FromType>>(column))
{
const UInt64 hash = IntHash64Impl::apply(ext::bit_cast<UInt64>(col_from->template getValue<FromType>()));
auto value = col_from->template getValue<FromType>();
ToType hash;
if constexpr (std::is_same_v<ToType, UInt64>)
hash = IntHash64Impl::apply(ext::bit_cast<UInt64>(value));
else
hash = IntHash32Impl::apply(ext::bit_cast<UInt32>(value));
size_t size = vec_to.size();
if (first)
{
@ -360,7 +544,7 @@ private:
else
{
for (size_t i = 0; i < size; ++i)
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], hash));
vec_to[i] = Impl::combineHashes(vec_to[i], hash);
}
}
else
@ -370,7 +554,7 @@ private:
}
template <bool first>
void executeString(const IColumn * column, ColumnUInt64::Container & vec_to)
void executeString(const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
{
if (const ColumnString * col_from = checkAndGetColumn<ColumnString>(column))
{
@ -381,14 +565,14 @@ private:
ColumnString::Offset current_offset = 0;
for (size_t i = 0; i < size; ++i)
{
const UInt64 h = Impl::Hash64(
const ToType h = Impl::apply(
reinterpret_cast<const char *>(&data[current_offset]),
offsets[i] - current_offset - 1);
if (first)
vec_to[i] = h;
else
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], h));
vec_to[i] = Impl::combineHashes(vec_to[i], h);
current_offset = offsets[i];
}
@ -401,17 +585,17 @@ private:
for (size_t i = 0; i < size; ++i)
{
const UInt64 h = Impl::Hash64(reinterpret_cast<const char *>(&data[i * n]), n);
const ToType h = Impl::apply(reinterpret_cast<const char *>(&data[i * n]), n);
if (first)
vec_to[i] = h;
else
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], h));
vec_to[i] = Impl::combineHashes(vec_to[i], h);
}
}
else if (const ColumnConst * col_from = checkAndGetColumnConstStringOrFixedString(column))
{
String value = col_from->getValue<String>().data();
const UInt64 hash = Impl::Hash64(value.data(), value.size());
const ToType hash = Impl::apply(value.data(), value.size());
const size_t size = vec_to.size();
if (first)
@ -422,7 +606,7 @@ private:
{
for (size_t i = 0; i < size; ++i)
{
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], hash));
vec_to[i] = Impl::combineHashes(vec_to[i], hash);
}
}
}
@ -433,7 +617,7 @@ private:
}
template <bool first>
void executeArray(const IDataType * type, const IColumn * column, ColumnUInt64::Container & vec_to)
void executeArray(const IDataType * type, const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
{
const IDataType * nested_type = typeid_cast<const DataTypeArray *>(type)->getNestedType().get();
@ -443,7 +627,7 @@ private:
const ColumnArray::Offsets & offsets = col_from->getOffsets();
const size_t nested_size = nested_column->size();
ColumnUInt64::Container vec_temp(nested_size);
typename ColumnVector<ToType>::Container vec_temp(nested_size);
executeAny<true>(nested_type, nested_column, vec_temp);
const size_t size = offsets.size();
@ -453,14 +637,19 @@ private:
{
ColumnArray::Offset next_offset = offsets[i];
UInt64 h = IntHash64Impl::apply(next_offset - current_offset);
ToType h;
if constexpr (std::is_same_v<ToType, UInt64>)
h = IntHash64Impl::apply(next_offset - current_offset);
else
h = IntHash32Impl::apply(next_offset - current_offset);
if (first)
vec_to[i] = h;
else
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], h));
vec_to[i] = Impl::combineHashes(vec_to[i], h);
for (size_t j = current_offset; j < next_offset; ++j)
vec_to[i] = Impl::Hash128to64(typename Impl::uint128_t(vec_to[i], vec_temp[j]));
vec_to[i] = Impl::combineHashes(vec_to[i], vec_temp[j]);
current_offset = offsets[i];
}
@ -478,7 +667,7 @@ private:
}
template <bool first>
void executeAny(const IDataType * from_type, const IColumn * icolumn, ColumnUInt64::Container & vec_to)
void executeAny(const IDataType * from_type, const IColumn * icolumn, typename ColumnVector<ToType>::Container & vec_to)
{
WhichDataType which(from_type);
@ -504,7 +693,7 @@ private:
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}
void executeForArgument(const IDataType * type, const IColumn * column, ColumnUInt64::Container & vec_to, bool & is_first)
void executeForArgument(const IDataType * type, const IColumn * column, typename ColumnVector<ToType>::Container & vec_to, bool & is_first)
{
/// Flattening of tuples.
if (const ColumnTuple * tuple = typeid_cast<const ColumnTuple *>(column))
@ -549,20 +738,20 @@ public:
DataTypePtr getReturnTypeImpl(const DataTypes & /*arguments*/) const override
{
return std::make_shared<DataTypeUInt64>();
return std::make_shared<DataTypeNumber<ToType>>();
}
void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) override
{
size_t rows = input_rows_count;
auto col_to = ColumnUInt64::create(rows);
auto col_to = ColumnVector<ToType>::create(rows);
ColumnUInt64::Container & vec_to = col_to->getData();
typename ColumnVector<ToType>::Container & vec_to = col_to->getData();
if (arguments.empty())
{
/// A constant random number from /dev/urandom is used as the hash value of an empty list of arguments.
vec_to.assign(rows, static_cast<UInt64>(0xe28dbde7fe22e41c));
vec_to.assign(rows, static_cast<ToType>(0xe28dbde7fe22e41c));
}
/// The function supports arbitrary number of arguments of arbitrary types.
@ -579,181 +768,6 @@ public:
};
template <typename Impl, typename Name>
class FunctionStringHash : public IFunction
{
public:
static constexpr auto name = Name::name;
static FunctionPtr create(const Context &) { return std::make_shared<FunctionStringHash>(); }
String getName() const override { return name; }
bool isVariadic() const override { return false; }
size_t getNumberOfArguments() const override { return 1; }
DataTypePtr getReturnTypeImpl(const DataTypes & /*arguments */) const override
{ return std::make_shared<DataTypeNumber<ToType>>(); }
bool useDefaultImplementationForConstants() const override { return true; }
void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) override
{
auto col_to = ColumnVector<ToType>::create(input_rows_count);
typename ColumnVector<ToType>::Container & vec_to = col_to->getData();
const ColumnWithTypeAndName & col = block.getByPosition(arguments[0]);
const IDataType * from_type = col.type.get();
const IColumn * icolumn = col.column.get();
WhichDataType which(from_type);
if (which.isUInt8()) executeIntType<UInt8>(icolumn, vec_to);
else if (which.isUInt16()) executeIntType<UInt16>(icolumn, vec_to);
else if (which.isUInt32()) executeIntType<UInt32>(icolumn, vec_to);
else if (which.isUInt64()) executeIntType<UInt64>(icolumn, vec_to);
else if (which.isInt8()) executeIntType<Int8>(icolumn, vec_to);
else if (which.isInt16()) executeIntType<Int16>(icolumn, vec_to);
else if (which.isInt32()) executeIntType<Int32>(icolumn, vec_to);
else if (which.isInt64()) executeIntType<Int64>(icolumn, vec_to);
else if (which.isEnum8()) executeIntType<Int8>(icolumn, vec_to);
else if (which.isEnum16()) executeIntType<Int16>(icolumn, vec_to);
else if (which.isDate()) executeIntType<UInt16>(icolumn, vec_to);
else if (which.isDateTime()) executeIntType<UInt32>(icolumn, vec_to);
else if (which.isFloat32()) executeIntType<Float32>(icolumn, vec_to);
else if (which.isFloat64()) executeIntType<Float64>(icolumn, vec_to);
else if (which.isStringOrFixedString()) executeString(icolumn, vec_to);
else
throw Exception("Unexpected type " + from_type->getName() + " of argument of function " + getName(),
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
block.getByPosition(result).column = std::move(col_to);
}
private:
using ToType = typename Impl::ReturnType;
template <typename FromType>
void executeIntType(const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
{
if (const ColumnVector<FromType> * col_from = checkAndGetColumn<ColumnVector<FromType>>(column))
{
const typename ColumnVector<FromType>::Container & vec_from = col_from->getData();
size_t size = vec_from.size();
for (size_t i = 0; i < size; ++i)
{
vec_to[i] = Impl::apply(reinterpret_cast<const char *>(&vec_from[i]), sizeof(FromType));
}
}
else
throw Exception("Illegal column " + column->getName()
+ " of argument of function " + getName(),
ErrorCodes::ILLEGAL_COLUMN);
}
void executeString(const IColumn * column, typename ColumnVector<ToType>::Container & vec_to)
{
if (const ColumnString * col_from = checkAndGetColumn<ColumnString>(column))
{
const typename ColumnString::Chars_t & data = col_from->getChars();
const typename ColumnString::Offsets & offsets = col_from->getOffsets();
size_t size = offsets.size();
ColumnString::Offset current_offset = 0;
for (size_t i = 0; i < size; ++i)
{
vec_to[i] = Impl::apply(
reinterpret_cast<const char *>(&data[current_offset]),
offsets[i] - current_offset - 1);
current_offset = offsets[i];
}
}
else if (const ColumnFixedString * col_from = checkAndGetColumn<ColumnFixedString>(column))
{
const typename ColumnString::Chars_t & data = col_from->getChars();
size_t n = col_from->getN();
size_t size = data.size() / n;
for (size_t i = 0; i < size; ++i)
vec_to[i] = Impl::apply(reinterpret_cast<const char *>(&data[i * n]), n);
}
else
throw Exception("Illegal column " + column->getName()
+ " of first argument of function " + getName(),
ErrorCodes::ILLEGAL_COLUMN);
}
};
/** Why do we need MurmurHash2?
* MurmurHash2 is an outdated hash function, superseded by MurmurHash3 and subsequently by CityHash, xxHash, HighwayHash.
* Usually there is no reason to use MurmurHash.
* It is needed for the cases when you already have MurmurHash in some application and you want to reproduce it
* in ClickHouse as is. For example, it is needed to reproduce the behaviour
* for the NGINX A/B testing module: https://nginx.ru/en/docs/http/ngx_http_split_clients_module.html
*/
struct MurmurHash2Impl32
{
using ReturnType = UInt32;
static UInt32 apply(const char * data, const size_t size)
{
return MurmurHash2(data, size, 0);
}
};
struct MurmurHash2Impl64
{
using ReturnType = UInt64;
static UInt64 apply(const char * data, const size_t size)
{
return MurmurHash64A(data, size, 0);
}
};
struct MurmurHash3Impl32
{
using ReturnType = UInt32;
static UInt32 apply(const char * data, const size_t size)
{
union
{
UInt32 h;
char bytes[sizeof(h)];
};
MurmurHash3_x86_32(data, size, 0, bytes);
return h;
}
};
struct MurmurHash3Impl64
{
using ReturnType = UInt64;
static UInt64 apply(const char * data, const size_t size)
{
union
{
UInt64 h[2];
char bytes[16];
};
MurmurHash3_x64_128(data, size, 0, bytes);
return h[0] ^ h[1];
}
};
struct MurmurHash3Impl128
{
static constexpr auto name = "murmurHash3_128";
enum { length = 16 };
static void apply(const char * begin, const size_t size, unsigned char * out_char_data)
{
MurmurHash3_x64_128(begin, size, 0, out_char_data);
}
};
struct URLHashImpl
{
static UInt64 apply(const char * data, const size_t size)
@ -943,58 +957,12 @@ private:
};
struct NameHalfMD5 { static constexpr auto name = "halfMD5"; };
struct NameSipHash64 { static constexpr auto name = "sipHash64"; };
struct NameIntHash32 { static constexpr auto name = "intHash32"; };
struct NameIntHash64 { static constexpr auto name = "intHash64"; };
struct NameMurmurHash2_32 { static constexpr auto name = "murmurHash2_32"; };
struct NameMurmurHash2_64 { static constexpr auto name = "murmurHash2_64"; };
struct NameMurmurHash3_32 { static constexpr auto name = "murmurHash3_32"; };
struct NameMurmurHash3_64 { static constexpr auto name = "murmurHash3_64"; };
struct NameMurmurHash3_128 { static constexpr auto name = "murmurHash3_128"; };
struct ImplCityHash64
{
static constexpr auto name = "cityHash64";
using uint128_t = CityHash_v1_0_2::uint128;
static auto Hash128to64(const uint128_t & x) { return CityHash_v1_0_2::Hash128to64(x); }
static auto Hash64(const char * s, const size_t len) { return CityHash_v1_0_2::CityHash64(s, len); }
};
// see farmhash.h for the definition of NAMESPACE_FOR_HASH_FUNCTIONS
struct ImplFarmHash64
{
static constexpr auto name = "farmHash64";
using uint128_t = NAMESPACE_FOR_HASH_FUNCTIONS::uint128_t;
static auto Hash128to64(const uint128_t & x) { return NAMESPACE_FOR_HASH_FUNCTIONS::Hash128to64(x); }
static auto Hash64(const char * s, const size_t len) { return NAMESPACE_FOR_HASH_FUNCTIONS::Hash64(s, len); }
};
struct ImplMetroHash64
{
static constexpr auto name = "metroHash64";
using uint128_t = CityHash_v1_0_2::uint128;
static auto Hash128to64(const uint128_t & x) { return CityHash_v1_0_2::Hash128to64(x); }
static auto Hash64(const char * s, const size_t len)
{
union
{
UInt64 u64;
UInt8 u8[sizeof(u64)];
};
metrohash64_1(reinterpret_cast<const UInt8 *>(s), len, 0, u8);
return u64;
}
};
using FunctionHalfMD5 = FunctionStringHash<HalfMD5Impl, NameHalfMD5>;
using FunctionSipHash64 = FunctionStringHash<SipHash64Impl, NameSipHash64>;
using FunctionHalfMD5 = FunctionAnyHash<HalfMD5Impl>;
using FunctionSipHash64 = FunctionAnyHash<SipHash64Impl>;
using FunctionIntHash32 = FunctionIntHash<IntHash32Impl, NameIntHash32>;
using FunctionIntHash64 = FunctionIntHash<IntHash64Impl, NameIntHash64>;
using FunctionMD5 = FunctionStringHashFixedString<MD5Impl>;
@ -1002,12 +970,12 @@ using FunctionSHA1 = FunctionStringHashFixedString<SHA1Impl>;
using FunctionSHA224 = FunctionStringHashFixedString<SHA224Impl>;
using FunctionSHA256 = FunctionStringHashFixedString<SHA256Impl>;
using FunctionSipHash128 = FunctionStringHashFixedString<SipHash128Impl>;
using FunctionCityHash64 = FunctionNeighbourhoodHash64<ImplCityHash64>;
using FunctionFarmHash64 = FunctionNeighbourhoodHash64<ImplFarmHash64>;
using FunctionMetroHash64 = FunctionNeighbourhoodHash64<ImplMetroHash64>;
using FunctionMurmurHash2_32 = FunctionStringHash<MurmurHash2Impl32, NameMurmurHash2_32>;
using FunctionMurmurHash2_64 = FunctionStringHash<MurmurHash2Impl64, NameMurmurHash2_64>;
using FunctionMurmurHash3_32 = FunctionStringHash<MurmurHash3Impl32, NameMurmurHash3_32>;
using FunctionMurmurHash3_64 = FunctionStringHash<MurmurHash3Impl64, NameMurmurHash3_64>;
using FunctionCityHash64 = FunctionAnyHash<ImplCityHash64>;
using FunctionFarmHash64 = FunctionAnyHash<ImplFarmHash64>;
using FunctionMetroHash64 = FunctionAnyHash<ImplMetroHash64>;
using FunctionMurmurHash2_32 = FunctionAnyHash<MurmurHash2Impl32>;
using FunctionMurmurHash2_64 = FunctionAnyHash<MurmurHash2Impl64>;
using FunctionMurmurHash3_32 = FunctionAnyHash<MurmurHash3Impl32>;
using FunctionMurmurHash3_64 = FunctionAnyHash<MurmurHash3Impl64>;
using FunctionMurmurHash3_128 = FunctionStringHashFixedString<MurmurHash3Impl128>;
}
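Taken together, these hunks reduce every multi-argument hash to one fold: each argument is hashed on its own (taking the intHash32/intHash64 fast path when use_int_hash_for_pods is set), and the per-argument hashes are chained through Impl::combineHashes. A minimal self-contained sketch of that contract, using an invented FNV-1a-based ToyHash64 and simplified integer mixers rather than ClickHouse's real classes:

#include <cstdint>
#include <cstring>
#include <iostream>
#include <type_traits>
#include <vector>

/// Stand-ins for intHash64/intHash32 (simplified mixers, not the real ones).
static uint64_t toy_int_hash64(uint64_t x) { x ^= x >> 33; x *= 0xff51afd7ed558ccdULL; x ^= x >> 33; return x; }
static uint32_t toy_int_hash32(uint64_t x) { return static_cast<uint32_t>(toy_int_hash64(x)); }

/// A toy Impl exposing the same interface the diff gives FunctionAnyHash:
/// apply() hashes bytes, combineHashes() chains two hashes, and
/// use_int_hash_for_pods selects the fast path for numeric columns.
struct ToyHash64
{
    using ReturnType = uint64_t;
    static constexpr bool use_int_hash_for_pods = true;

    static uint64_t apply(const char * data, size_t size)
    {
        uint64_t h = 1469598103934665603ULL; /// FNV-1a, purely for illustration
        for (size_t i = 0; i < size; ++i)
        {
            h ^= static_cast<unsigned char>(data[i]);
            h *= 1099511628211ULL;
        }
        return h;
    }

    static uint64_t combineHashes(uint64_t h1, uint64_t h2)
    {
        uint64_t hashes[2] = {h1, h2}; /// same trick as HalfMD5Impl/SipHash64Impl above
        return apply(reinterpret_cast<const char *>(hashes), 16);
    }
};

/// Hash one POD value, mirroring the if-constexpr dispatch in executeIntType.
template <typename Impl, typename T>
typename Impl::ReturnType hashOne(T value)
{
    if constexpr (Impl::use_int_hash_for_pods)
    {
        uint64_t bits = 0;
        std::memcpy(&bits, &value, sizeof(value)); /// ext::bit_cast equivalent
        if constexpr (std::is_same_v<typename Impl::ReturnType, uint64_t>)
            return toy_int_hash64(bits);
        else
            return toy_int_hash32(bits);
    }
    else
        return Impl::apply(reinterpret_cast<const char *>(&value), sizeof(value));
}

/// Fold several argument hashes into one, mirroring the `first` logic above.
template <typename Impl>
typename Impl::ReturnType hashAll(const std::vector<uint64_t> & values)
{
    typename Impl::ReturnType result = 0;
    for (size_t i = 0; i < values.size(); ++i)
    {
        auto h = hashOne<Impl>(values[i]);
        result = (i == 0) ? h : Impl::combineHashes(result, h);
    }
    return result;
}

int main()
{
    std::cout << hashAll<ToyHash64>({1, 2, 3}) << '\n';
}

The real executeForArgument additionally flattens tuples and broadcasts constant columns, but the folding order is the same, which is why vec_to can be filled argument by argument.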

View File

@ -79,7 +79,7 @@ inline ALWAYS_INLINE void writeSlice(const NumericArraySlice<T> & slice, Generic
{
for (size_t i = 0; i < slice.size; ++i)
{
Field field = static_cast<typename NearestFieldType<T>::Type>(slice.data[i]);
Field field = T(slice.data[i]);
sink.elements.insert(field);
}
sink.current_offset += slice.size;
@ -147,7 +147,7 @@ inline ALWAYS_INLINE void writeSlice(const GenericValueSlice & slice, NumericArr
template <typename T>
inline ALWAYS_INLINE void writeSlice(const NumericValueSlice<T> & slice, GenericArraySink & sink)
{
Field field = static_cast<typename NearestFieldType<T>::Type>(slice.value);
Field field = T(slice.value);
sink.elements.insert(field);
++sink.current_offset;
}
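This hunk and the many UInt64(...)-to-plain-literal changes in the files that follow both lean on Field being able to deduce the nearest field type from the argument on its own, so call sites no longer spell the widening cast by hand. A simplified model of that deduction; toField and this four-alternative Field are illustrative stand-ins, not the real DB::Field:

#include <cstdint>
#include <iostream>
#include <string>
#include <type_traits>
#include <utility>
#include <variant>

/// Simplified stand-in for DB::Field: a variant over the "nearest" wide types.
using Field = std::variant<uint64_t, int64_t, double, std::string>;

/// NearestFieldType-style deduction: narrow types widen to their nearest slot,
/// so callers can pass 1u or a raw T without spelling the cast themselves.
template <typename T>
Field toField(T value)
{
    if constexpr (std::is_unsigned_v<T>)
        return Field(static_cast<uint64_t>(value));
    else if constexpr (std::is_integral_v<T>)
        return Field(static_cast<int64_t>(value));
    else if constexpr (std::is_floating_point_v<T>)
        return Field(static_cast<double>(value));
    else
        return Field(std::move(value));
}

int main()
{
    Field a = toField(1u); /// what calls like createColumnConst(size, 1u) rely on
    Field b = toField(-1);
    std::cout << std::get<uint64_t>(a) << ' ' << std::get<int64_t>(b) << '\n';
}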

View File

@ -33,7 +33,7 @@ struct ArrayAllImpl
throw Exception("Unexpected type of filter column", ErrorCodes::ILLEGAL_COLUMN);
if (column_filter_const->getValue<UInt8>())
return DataTypeUInt8().createColumnConst(array.size(), UInt64(1));
return DataTypeUInt8().createColumnConst(array.size(), 1u);
else
{
const IColumn::Offsets & offsets = array.getOffsets();

View File

@ -48,7 +48,7 @@ struct ArrayCountImpl
return out_column;
}
else
return DataTypeUInt32().createColumnConst(array.size(), UInt64(0));
return DataTypeUInt32().createColumnConst(array.size(), 0u);
}
const IColumn::Filter & filter = column_filter->getData();

View File

@ -856,7 +856,7 @@ void FunctionArrayElement::perform(Block & block, const ColumnNumbers & argument
if (builder)
builder.initSink(input_rows_count);
if (index == UInt64(0))
if (index == 0u)
throw Exception("Array indices is 1-based", ErrorCodes::ZERO_ARRAY_OR_TUPLE_INDEX);
if (!( executeNumberConst<UInt8>(block, arguments, result, index, builder)

View File

@ -48,7 +48,7 @@ struct ArrayExistsImpl
return out_column;
}
else
return DataTypeUInt8().createColumnConst(array.size(), UInt64(0));
return DataTypeUInt8().createColumnConst(array.size(), 0u);
}
const IColumn::Filter & filter = column_filter->getData();

View File

@ -45,7 +45,7 @@ struct ArrayFirstIndexImpl
return out_column;
}
else
return DataTypeUInt32().createColumnConst(array.size(), UInt64(0));
return DataTypeUInt32().createColumnConst(array.size(), 0u);
}
const auto & filter = column_filter->getData();

View File

@ -751,7 +751,7 @@ private:
block.getByPosition(result).column = block.getByPosition(result).type->createColumnConst(
item_arg->size(),
static_cast<typename NearestFieldType<typename IndexConv::ResultType>::Type>(current));
static_cast<typename IndexConv::ResultType>(current));
}
else
{

View File

@ -429,7 +429,7 @@ ColumnPtr FunctionArrayIntersect::execute(const UnpackedArrays & arrays, Mutable
{
++result_offset;
if constexpr (is_numeric_column)
result_data.insert(pair.first);
result_data.insertValue(pair.first);
else if constexpr (std::is_same<ColumnType, ColumnString>::value || std::is_same<ColumnType, ColumnFixedString>::value)
result_data.insertData(pair.first.data, pair.first.size);
else

View File

@ -51,9 +51,9 @@ public:
void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) override
{
if (auto type = checkAndGetDataType<DataTypeEnum8>(block.getByPosition(arguments[0]).type.get()))
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, UInt64(type->getValues().size()));
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, type->getValues().size());
else if (auto type = checkAndGetDataType<DataTypeEnum16>(block.getByPosition(arguments[0]).type.get()))
block.getByPosition(result).column = DataTypeUInt16().createColumnConst(input_rows_count, UInt64(type->getValues().size()));
block.getByPosition(result).column = DataTypeUInt16().createColumnConst(input_rows_count, type->getValues().size());
else
throw Exception("The argument for function " + getName() + " must be Enum", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}

View File

@ -132,7 +132,7 @@ void FunctionHasColumnInTable::executeImpl(Block & block, const ColumnNumbers &
has_column = remote_columns.hasPhysical(column_name);
}
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, UInt64(has_column));
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, has_column);
}

View File

@ -40,7 +40,7 @@ public:
void executeImpl(Block & block, const ColumnNumbers &, size_t result, size_t input_rows_count) override
{
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, UInt64(0));
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, 0u);
}
};

View File

@ -50,7 +50,7 @@ public:
void executeImpl(Block & block, const ColumnNumbers &, size_t result, size_t input_rows_count) override
{
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, UInt64(1));
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(input_rows_count, 1u);
}
};

View File

@ -53,7 +53,7 @@ public:
else
{
/// Since no element is nullable, return a constant one.
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(elem.column->size(), UInt64(1));
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(elem.column->size(), 1u);
}
}
};

View File

@ -47,7 +47,7 @@ public:
{
/// Since no element is nullable, return a zero-constant column representing
/// a zero-filled null map.
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(elem.column->size(), UInt64(0));
block.getByPosition(result).column = DataTypeUInt8().createColumnConst(elem.column->size(), 0u);
}
}
};

View File

@ -92,7 +92,7 @@ public:
}
/// convertToFullColumn is needed because otherwise (in the constant expression case) the function will not get called on each block.
block.getByPosition(result).column = block.getByPosition(result).type->createColumnConst(size, UInt64(0))->convertToFullColumnIfConst();
block.getByPosition(result).column = block.getByPosition(result).type->createColumnConst(size, 0u)->convertToFullColumnIfConst();
}
};

View File

@ -103,7 +103,7 @@ struct TimeSlotsImpl
Array & result)
{
for (UInt32 value = start / TIME_SLOT_SIZE; value <= (start + duration) / TIME_SLOT_SIZE; ++value)
result.push_back(static_cast<UInt64>(value * TIME_SLOT_SIZE));
result.push_back(value * TIME_SLOT_SIZE);
}
};
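The loop above emits every slot boundary that the interval [start, start + duration] touches. A standalone rendition, assuming TIME_SLOT_SIZE = 1800 (the half-hour slot this function has historically used; treat the constant as an assumption here):

#include <cstdint>
#include <iostream>
#include <vector>

static const uint32_t TIME_SLOT_SIZE = 1800;

int main()
{
    uint32_t start = 3600, duration = 1900;
    std::vector<uint32_t> result;
    /// Emit every slot boundary covered by [start, start + duration].
    for (uint32_t value = start / TIME_SLOT_SIZE; value <= (start + duration) / TIME_SLOT_SIZE; ++value)
        result.push_back(value * TIME_SLOT_SIZE);
    for (uint32_t v : result)
        std::cout << v << ' '; /// prints: 3600 5400
    std::cout << '\n';
}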

View File

@ -33,7 +33,7 @@ public:
{
block.getByPosition(result).column = DataTypeDate().createColumnConst(
input_rows_count,
UInt64(DateLUT::instance().toDayNum(time(nullptr))));
DateLUT::instance().toDayNum(time(nullptr)));
}
};

View File

@ -35,7 +35,7 @@ public:
void executeImpl(Block & block, const ColumnNumbers &, size_t result, size_t input_rows_count) override
{
block.getByPosition(result).column = DataTypeString().createColumnConst(input_rows_count, String(VERSION_STRING));
block.getByPosition(result).column = DataTypeString().createColumnConst(input_rows_count, VERSION_STRING);
}
};

View File

@ -33,7 +33,7 @@ public:
{
block.getByPosition(result).column = DataTypeDate().createColumnConst(
input_rows_count,
UInt64(DateLUT::instance().toDayNum(time(nullptr)) - 1));
DateLUT::instance().toDayNum(time(nullptr)) - 1);
}
};

View File

@ -568,8 +568,8 @@ void ActionsVisitor::makeSet(const ASTFunction * node, const Block & sample_bloc
/// and the table has the type Set (a previously prepared set).
if (identifier)
{
auto database_table = getDatabaseAndTableNameFromIdentifier(*identifier);
StoragePtr table = context.tryGetTable(database_table.first, database_table.second);
DatabaseAndTableWithAlias database_table(*identifier);
StoragePtr table = context.tryGetTable(database_table.database, database_table.table);
if (table)
{

View File

@ -1475,9 +1475,9 @@ void NO_INLINE Aggregator::mergeDataImpl(
Table & table_src,
Arena * arena) const
{
for (auto it = table_src.begin(); it != table_src.end(); ++it)
for (auto it = table_src.begin(), end = table_src.end(); it != end; ++it)
{
decltype(it) res_it;
typename Table::iterator res_it;
bool inserted;
table_dst.emplace(it->first, res_it, inserted, it.getHash());
@ -1512,9 +1512,9 @@ void NO_INLINE Aggregator::mergeDataNoMoreKeysImpl(
Table & table_src,
Arena * arena) const
{
for (auto it = table_src.begin(); it != table_src.end(); ++it)
for (auto it = table_src.begin(), end = table_src.end(); it != end; ++it)
{
decltype(it) res_it = table_dst.find(it->first, it.getHash());
typename Table::iterator res_it = table_dst.find(it->first, it.getHash());
AggregateDataPtr res_data = table_dst.end() == res_it
? overflows
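Two things change in these merge loops: the end iterator is hoisted so it is fetched only once, and the result iterator is declared as typename Table::iterator instead of decltype(it). A generic illustration with std::unordered_map standing in for ClickHouse's HashTable; the real emplace fills res_it/inserted as out-parameters, which std::tie approximates here:

#include <iostream>
#include <tuple>
#include <unordered_map>

int main()
{
    std::unordered_map<int, int> src{{1, 10}, {2, 20}}, dst{{2, 5}};

    /// Caching end() mirrors the first half of the change: the source table is
    /// not mutated inside the loop, so its end iterator need not be re-fetched.
    for (auto it = src.begin(), end = src.end(); it != end; ++it)
    {
        /// Spelling out the iterator type (instead of decltype(it)) mirrors the
        /// second half; in the real code the type comes from the Table template.
        std::unordered_map<int, int>::iterator res_it;
        bool inserted;
        std::tie(res_it, inserted) = dst.emplace(it->first, it->second);
        if (!inserted)
            res_it->second += it->second; /// merge into the existing slot
    }
    std::cout << dst[1] << ' ' << dst[2] << '\n'; /// prints: 10 25
}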

View File

@ -817,7 +817,7 @@ struct AggregationMethodKeysFixed
size_t bucket = i / 8;
size_t offset = i % 8;
UInt8 val = (reinterpret_cast<const UInt8 *>(&value.first)[bucket] >> offset) & 1;
null_map->insert(val);
null_map->insertValue(val);
is_null = val == 1;
}
else

View File

@ -1061,11 +1061,11 @@ public:
++num_hosts_finished;
columns[0]->insert(host);
columns[1]->insert(static_cast<UInt64>(port));
columns[2]->insert(static_cast<Int64>(status.code));
columns[1]->insert(port);
columns[2]->insert(status.code);
columns[3]->insert(status.message);
columns[4]->insert(static_cast<UInt64>(waiting_hosts.size() - num_hosts_finished));
columns[5]->insert(static_cast<UInt64>(current_active_hosts.size()));
columns[4]->insert(waiting_hosts.size() - num_hosts_finished);
columns[5]->insert(current_active_hosts.size());
}
res = sample.cloneWithColumns(std::move(columns));
}

View File

@ -0,0 +1,245 @@
#include <Interpreters/DatabaseAndTableWithAlias.h>
#include <Interpreters/Context.h>
#include <Common/typeid_cast.h>
#include <Parsers/IAST.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTTablesInSelectQuery.h>
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTSubquery.h>
namespace DB
{
/// Checks that ast is an ASTIdentifier and removes num_qualifiers_to_strip components from the left.
/// Example: 'database.table.name' -> (num_qualifiers_to_strip = 2) -> 'name'.
void stripIdentifier(DB::ASTPtr & ast, size_t num_qualifiers_to_strip)
{
ASTIdentifier * identifier = typeid_cast<ASTIdentifier *>(ast.get());
if (!identifier)
throw DB::Exception("ASTIdentifier expected for stripIdentifier", DB::ErrorCodes::LOGICAL_ERROR);
if (num_qualifiers_to_strip)
{
size_t num_components = identifier->children.size();
/// plain column
if (num_components - num_qualifiers_to_strip == 1)
{
DB::String node_alias = identifier->tryGetAlias();
ast = identifier->children.back();
if (!node_alias.empty())
ast->setAlias(node_alias);
}
else
/// nested column
{
identifier->children.erase(identifier->children.begin(), identifier->children.begin() + num_qualifiers_to_strip);
DB::String new_name;
for (const auto & child : identifier->children)
{
if (!new_name.empty())
new_name += '.';
new_name += static_cast<const ASTIdentifier &>(*child.get()).name;
}
identifier->name = new_name;
}
}
}
/// Get the number of components of the identifier which correspond to 'alias.', 'table.' or 'database.table.' from names.
size_t getNumComponentsToStripInOrderToTranslateQualifiedName(const ASTIdentifier & identifier,
const DatabaseAndTableWithAlias & names)
{
size_t num_qualifiers_to_strip = 0;
auto get_identifier_name = [](const ASTPtr & ast) { return static_cast<const ASTIdentifier &>(*ast).name; };
/// It is a compound identifier
if (!identifier.children.empty())
{
size_t num_components = identifier.children.size();
/// database.table.column
if (num_components >= 3
&& !names.database.empty()
&& get_identifier_name(identifier.children[0]) == names.database
&& get_identifier_name(identifier.children[1]) == names.table)
{
num_qualifiers_to_strip = 2;
}
/// table.column or alias.column. If num_components > 2, it is like table.nested.column.
if (num_components >= 2
&& ((!names.table.empty() && get_identifier_name(identifier.children[0]) == names.table)
|| (!names.alias.empty() && get_identifier_name(identifier.children[0]) == names.alias)))
{
num_qualifiers_to_strip = 1;
}
}
return num_qualifiers_to_strip;
}
DatabaseAndTableWithAlias::DatabaseAndTableWithAlias(const ASTIdentifier & identifier, const String & current_database)
{
database = current_database;
table = identifier.name;
alias = identifier.tryGetAlias();
if (!identifier.children.empty())
{
if (identifier.children.size() != 2)
throw Exception("Logical error: number of components in table expression not equal to two", ErrorCodes::LOGICAL_ERROR);
const ASTIdentifier * db_identifier = typeid_cast<const ASTIdentifier *>(identifier.children[0].get());
const ASTIdentifier * table_identifier = typeid_cast<const ASTIdentifier *>(identifier.children[1].get());
if (!db_identifier || !table_identifier)
throw Exception("Logical error: identifiers expected", ErrorCodes::LOGICAL_ERROR);
database = db_identifier->name;
table = table_identifier->name;
}
}
DatabaseAndTableWithAlias::DatabaseAndTableWithAlias(const ASTTableExpression & table_expression, const String & current_database)
{
if (table_expression.database_and_table_name)
{
const auto * identifier = typeid_cast<const ASTIdentifier *>(table_expression.database_and_table_name.get());
if (!identifier)
throw Exception("Logical error: identifier expected", ErrorCodes::LOGICAL_ERROR);
*this = DatabaseAndTableWithAlias(*identifier, current_database);
}
else if (table_expression.table_function)
alias = table_expression.table_function->tryGetAlias();
else if (table_expression.subquery)
alias = table_expression.subquery->tryGetAlias();
else
throw Exception("Logical error: no known elements in ASTTableExpression", ErrorCodes::LOGICAL_ERROR);
}
String DatabaseAndTableWithAlias::getQualifiedNamePrefix() const
{
if (alias.empty() && table.empty())
return "";
return (!alias.empty() ? alias : (database + '.' + table)) + '.';
}
void DatabaseAndTableWithAlias::makeQualifiedName(const ASTPtr & ast) const
{
if (auto identifier = typeid_cast<ASTIdentifier *>(ast.get()))
{
String prefix = getQualifiedNamePrefix();
identifier->name.insert(identifier->name.begin(), prefix.begin(), prefix.end());
Names qualifiers;
if (!alias.empty())
qualifiers.push_back(alias);
else
{
qualifiers.push_back(database);
qualifiers.push_back(table);
}
for (const auto & qualifier : qualifiers)
identifier->children.emplace_back(std::make_shared<ASTIdentifier>(qualifier));
}
}
std::vector<const ASTTableExpression *> getSelectTablesExpression(const ASTSelectQuery & select_query)
{
if (!select_query.tables)
return {};
std::vector<const ASTTableExpression *> tables_expression;
for (const auto & child : select_query.tables->children)
{
ASTTablesInSelectQueryElement * tables_element = static_cast<ASTTablesInSelectQueryElement *>(child.get());
if (tables_element->table_expression)
tables_expression.emplace_back(static_cast<const ASTTableExpression *>(tables_element->table_expression.get()));
}
return tables_expression;
}
static const ASTTableExpression * getTableExpression(const ASTSelectQuery & select, size_t table_number)
{
if (!select.tables)
return {};
ASTTablesInSelectQuery & tables_in_select_query = static_cast<ASTTablesInSelectQuery &>(*select.tables);
if (tables_in_select_query.children.size() <= table_number)
return {};
ASTTablesInSelectQueryElement & tables_element =
static_cast<ASTTablesInSelectQueryElement &>(*tables_in_select_query.children[table_number]);
if (!tables_element.table_expression)
return {};
return static_cast<const ASTTableExpression *>(tables_element.table_expression.get());
}
std::vector<DatabaseAndTableWithAlias> getDatabaseAndTables(const ASTSelectQuery & select_query, const String & current_database)
{
std::vector<const ASTTableExpression *> tables_expression = getSelectTablesExpression(select_query);
std::vector<DatabaseAndTableWithAlias> database_and_table_with_aliases;
database_and_table_with_aliases.reserve(tables_expression.size());
for (const auto & table_expression : tables_expression)
database_and_table_with_aliases.emplace_back(DatabaseAndTableWithAlias(*table_expression, current_database));
return database_and_table_with_aliases;
}
std::optional<DatabaseAndTableWithAlias> getDatabaseAndTable(const ASTSelectQuery & select, size_t table_number)
{
const ASTTableExpression * table_expression = getTableExpression(select, table_number);
if (!table_expression)
return {};
ASTPtr database_and_table_name = table_expression->database_and_table_name;
if (!database_and_table_name)
return {};
const ASTIdentifier * identifier = typeid_cast<const ASTIdentifier *>(database_and_table_name.get());
if (!identifier)
return {};
return *identifier;
}
ASTPtr getTableFunctionOrSubquery(const ASTSelectQuery & select, size_t table_number)
{
const ASTTableExpression * table_expression = getTableExpression(select, table_number);
if (table_expression)
{
#if 1 /// TODO: It hides some logical error in InterpreterSelectQuery & distributed tables
if (table_expression->database_and_table_name)
{
if (table_expression->database_and_table_name->children.empty())
return table_expression->database_and_table_name;
if (table_expression->database_and_table_name->children.size() == 2)
return table_expression->database_and_table_name->children[1];
}
#endif
if (table_expression->table_function)
return table_expression->table_function;
if (table_expression->subquery)
return static_cast<const ASTSubquery *>(table_expression->subquery.get())->children[0];
}
return nullptr;
}
}
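The stripping rules above are easiest to follow on a concrete input. A self-contained model; componentsToStrip is an invented name, and the real code walks ASTIdentifier children rather than strings:

#include <iostream>
#include <string>
#include <vector>

struct Names { std::string database, table, alias; };

static size_t componentsToStrip(const std::vector<std::string> & parts, const Names & names)
{
    if (parts.size() >= 3 && !names.database.empty()
        && parts[0] == names.database && parts[1] == names.table)
        return 2; /// database.table.column
    if (parts.size() >= 2
        && ((!names.table.empty() && parts[0] == names.table)
            || (!names.alias.empty() && parts[0] == names.alias)))
        return 1; /// table.column or alias.column
    return 0;
}

int main()
{
    Names names{"db", "t", ""};
    std::vector<std::string> identifier{"db", "t", "col"};
    size_t n = componentsToStrip(identifier, names); /// 2: strip "db" and "t"

    std::string column;
    for (size_t i = n; i < identifier.size(); ++i)
        column += (column.empty() ? "" : ".") + identifier[i];
    std::cout << column << '\n'; /// prints: col
}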

View File

@ -1,6 +1,7 @@
#pragma once
#include <memory>
#include <optional>
#include <Core/Types.h>
namespace DB
@ -9,16 +10,21 @@ namespace DB
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
class ASTSelectQuery;
class ASTIdentifier;
struct ASTTableExpression;
/// Extracts the database and table name (and optional alias) from a table expression or identifier
struct DatabaseAndTableWithAlias
{
String database;
String table;
String alias;
DatabaseAndTableWithAlias(const ASTIdentifier & identifier, const String & current_database = "");
DatabaseAndTableWithAlias(const ASTTableExpression & table_expression, const String & current_database);
/// "alias." or "database.table." if alias is empty
String getQualifiedNamePrefix() const;
@ -28,12 +34,13 @@ struct DatabaseAndTableWithAlias
void stripIdentifier(DB::ASTPtr & ast, size_t num_qualifiers_to_strip);
DatabaseAndTableWithAlias getTableNameWithAliasFromTableExpression(const ASTTableExpression & table_expression,
const String & current_database);
size_t getNumComponentsToStripInOrderToTranslateQualifiedName(const ASTIdentifier & identifier,
const DatabaseAndTableWithAlias & names);
std::pair<String, String> getDatabaseAndTableNameFromIdentifier(const ASTIdentifier & identifier);
std::vector<DatabaseAndTableWithAlias> getDatabaseAndTables(const ASTSelectQuery & select_query, const String & current_database);
std::optional<DatabaseAndTableWithAlias> getDatabaseAndTable(const ASTSelectQuery & select, size_t table_number);
std::vector<const ASTTableExpression *> getSelectTablesExpression(const ASTSelectQuery & select_query);
ASTPtr getTableFunctionOrSubquery(const ASTSelectQuery & select, size_t table_number);
}
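getQualifiedNamePrefix() prefers the alias whenever one exists, and only unaliased tables fall back to the database.table. form. A minimal model of just that rule:

#include <iostream>
#include <string>

struct DatabaseAndTableWithAlias
{
    std::string database, table, alias;

    std::string getQualifiedNamePrefix() const
    {
        if (alias.empty() && table.empty())
            return "";
        return (!alias.empty() ? alias : database + '.' + table) + '.';
    }
};

int main()
{
    std::cout << DatabaseAndTableWithAlias{"db", "hits", ""}.getQualifiedNamePrefix() << '\n'; /// db.hits.
    std::cout << DatabaseAndTableWithAlias{"db", "hits", "h"}.getQualifiedNamePrefix() << '\n'; /// h.
}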

View File

@ -271,8 +271,6 @@ void ExpressionAction::prepare(Block & sample_block, const Settings & settings)
const std::string & name = projection[i].first;
const std::string & alias = projection[i].second;
ColumnWithTypeAndName column = sample_block.getByName(name);
if (column.column)
column.column = (*std::move(column.column)).mutate();
if (alias != "")
column.name = alias;
new_block.insert(std::move(column));
@ -485,8 +483,6 @@ void ExpressionAction::execute(Block & block, std::unordered_map<std::string, si
const std::string & name = projection[i].first;
const std::string & alias = projection[i].second;
ColumnWithTypeAndName column = block.getByName(name);
if (column.column)
column.column = (*std::move(column.column)).mutate();
if (alias != "")
column.name = alias;
new_block.insert(std::move(column));

View File

@ -59,7 +59,7 @@
#include <Parsers/parseQuery.h>
#include <Parsers/queryToString.h>
#include <Interpreters/interpretSubquery.h>
#include <Interpreters/evaluateQualified.h>
#include <Interpreters/DatabaseAndTableWithAlias.h>
#include <Interpreters/QueryNormalizer.h>
#include <Interpreters/QueryAliasesVisitor.h>
@ -172,36 +172,26 @@ ExpressionAnalyzer::ExpressionAnalyzer(
if (!storage && select_query)
{
auto select_database = select_query->database();
auto select_table = select_query->table();
if (select_table
&& !typeid_cast<const ASTSelectWithUnionQuery *>(select_table.get())
&& !typeid_cast<const ASTFunction *>(select_table.get()))
{
String database = select_database
? typeid_cast<const ASTIdentifier &>(*select_database).name
: "";
const String & table = typeid_cast<const ASTIdentifier &>(*select_table).name;
storage = context.tryGetTable(database, table);
}
if (auto db_and_table = getDatabaseAndTable(*select_query, 0))
storage = context.tryGetTable(db_and_table->database, db_and_table->table);
}
if (storage && source_columns.empty())
if (storage)
{
auto physical_columns = storage->getColumns().getAllPhysical();
if (source_columns.empty())
source_columns.swap(physical_columns);
else
{
source_columns.insert(source_columns.end(), physical_columns.begin(), physical_columns.end());
removeDuplicateColumns(source_columns);
if (select_query)
{
const auto & storage_aliases = storage->getColumns().aliases;
source_columns.insert(source_columns.end(), storage_aliases.begin(), storage_aliases.end());
}
}
else
removeDuplicateColumns(source_columns);
addAliasColumns();
removeDuplicateColumns(source_columns);
translateQualifiedNames();
@ -275,55 +265,14 @@ bool ExpressionAnalyzer::isRemoteStorage() const
}
static std::vector<ASTTableExpression> getTableExpressions(const ASTPtr & query)
{
ASTSelectQuery * select_query = typeid_cast<ASTSelectQuery *>(query.get());
std::vector<ASTTableExpression> table_expressions;
if (select_query && select_query->tables)
{
for (const auto & element : select_query->tables->children)
{
ASTTablesInSelectQueryElement & select_element = static_cast<ASTTablesInSelectQueryElement &>(*element);
if (select_element.table_expression)
table_expressions.emplace_back(static_cast<ASTTableExpression &>(*select_element.table_expression));
}
}
return table_expressions;
}
void ExpressionAnalyzer::translateQualifiedNames()
{
if (!select_query || !select_query->tables || select_query->tables->children.empty())
return;
std::vector<DatabaseAndTableWithAlias> tables;
std::vector<ASTTableExpression> tables_expression = getTableExpressions(query);
std::vector<DatabaseAndTableWithAlias> tables = getDatabaseAndTables(*select_query, context.getCurrentDatabase());
LogAST log;
for (const auto & table_expression : tables_expression)
{
auto table = getTableNameWithAliasFromTableExpression(table_expression, context.getCurrentDatabase());
{ /// debug print
size_t depth = 0;
DumpASTNode dump(table_expression, log.stream(), depth, "getTableNames");
if (table_expression.database_and_table_name)
DumpASTNode(*table_expression.database_and_table_name, log.stream(), depth);
if (table_expression.table_function)
DumpASTNode(*table_expression.table_function, log.stream(), depth);
if (table_expression.subquery)
DumpASTNode(*table_expression.subquery, log.stream(), depth);
dump.print("getTableNameWithAlias", table.database + '.' + table.table + ' ' + table.alias);
}
tables.emplace_back(table);
}
TranslateQualifiedNamesVisitor visitor(source_columns, tables, log.stream());
visitor.visit(query);
}
@ -574,8 +523,8 @@ static NamesAndTypesList getNamesAndTypeListFromTableExpression(const ASTTableEx
else if (table_expression.database_and_table_name)
{
const auto & identifier = static_cast<const ASTIdentifier &>(*table_expression.database_and_table_name);
auto database_table = getDatabaseAndTableNameFromIdentifier(identifier);
const auto & table = context.getTable(database_table.first, database_table.second);
DatabaseAndTableWithAlias database_table(identifier);
const auto & table = context.getTable(database_table.database, database_table.table);
names_and_type_list = table->getSampleBlockNonMaterialized().getNamesAndTypesList();
}
@ -602,13 +551,13 @@ void ExpressionAnalyzer::normalizeTree()
TableNamesAndColumnNames table_names_and_column_names;
if (select_query && select_query->tables && !select_query->tables->children.empty())
{
std::vector<ASTTableExpression> tables_expression = getTableExpressions(query);
std::vector<const ASTTableExpression *> tables_expression = getSelectTablesExpression(*select_query);
bool first = true;
for (const auto & table_expression : tables_expression)
for (const auto * table_expression : tables_expression)
{
const auto table_name = getTableNameWithAliasFromTableExpression(table_expression, context.getCurrentDatabase());
NamesAndTypesList names_and_types = getNamesAndTypeListFromTableExpression(table_expression, context);
DatabaseAndTableWithAlias table_name(*table_expression, context.getCurrentDatabase());
NamesAndTypesList names_and_types = getNamesAndTypeListFromTableExpression(*table_expression, context);
if (!first)
{
@ -628,19 +577,6 @@ void ExpressionAnalyzer::normalizeTree()
}
void ExpressionAnalyzer::addAliasColumns()
{
if (!select_query)
return;
if (!storage)
return;
const auto & storage_aliases = storage->getColumns().aliases;
source_columns.insert(std::end(source_columns), std::begin(storage_aliases), std::end(storage_aliases));
}
void ExpressionAnalyzer::executeScalarSubqueries()
{
LogAST log;
@ -1285,19 +1221,24 @@ const ExpressionAnalyzer::AnalyzedJoin::JoinedColumnsList & ExpressionAnalyzer::
if (const ASTTablesInSelectQueryElement * node = select_query_with_join->join())
{
const auto & table_expression = static_cast<const ASTTableExpression &>(*node->table_expression);
auto table_name_with_alias = getTableNameWithAliasFromTableExpression(table_expression, context.getCurrentDatabase());
DatabaseAndTableWithAlias table_name_with_alias(table_expression, context.getCurrentDatabase());
auto columns = getNamesAndTypeListFromTableExpression(table_expression, context);
for (auto & column : columns)
{
columns_from_joined_table.emplace_back(column, column.name);
JoinedColumn joined_column(column, column.name);
if (source_columns.contains(column.name))
{
auto qualified_name = table_name_with_alias.getQualifiedNamePrefix() + column.name;
columns_from_joined_table.back().name_and_type.name = qualified_name;
joined_column.name_and_type.name = qualified_name;
}
/// We don't want to select duplicate columns from the joined subquery if they are already present
if (std::find(columns_from_joined_table.begin(), columns_from_joined_table.end(), joined_column) == columns_from_joined_table.end())
columns_from_joined_table.push_back(joined_column);
}
}
}
@ -1343,8 +1284,8 @@ bool ExpressionAnalyzer::appendJoin(ExpressionActionsChain & chain, bool only_ty
if (table_to_join.database_and_table_name)
{
const auto & identifier = static_cast<const ASTIdentifier &>(*table_to_join.database_and_table_name);
auto database_table = getDatabaseAndTableNameFromIdentifier(identifier);
StoragePtr table = context.tryGetTable(database_table.first, database_table.second);
DatabaseAndTableWithAlias database_table(identifier);
StoragePtr table = context.tryGetTable(database_table.database, database_table.table);
if (table)
{
@ -1886,8 +1827,8 @@ void ExpressionAnalyzer::collectJoinedColumnsFromJoinOnExpr()
const auto & left_table_expression = static_cast<const ASTTableExpression &>(*left_tables_element->table_expression);
const auto & right_table_expression = static_cast<const ASTTableExpression &>(*right_tables_element->table_expression);
auto left_source_names = getTableNameWithAliasFromTableExpression(left_table_expression, context.getCurrentDatabase());
auto right_source_names = getTableNameWithAliasFromTableExpression(right_table_expression, context.getCurrentDatabase());
DatabaseAndTableWithAlias left_source_names(left_table_expression, context.getCurrentDatabase());
DatabaseAndTableWithAlias right_source_names(right_table_expression, context.getCurrentDatabase());
/// Stores examples of columns which are only from one table.
struct TableBelonging
@ -2040,7 +1981,7 @@ void ExpressionAnalyzer::collectJoinedColumns(NameSet & joined_columns)
const auto & table_join = static_cast<const ASTTableJoin &>(*node->table_join);
const auto & table_expression = static_cast<const ASTTableExpression &>(*node->table_expression);
auto joined_table_name = getTableNameWithAliasFromTableExpression(table_expression, context.getCurrentDatabase());
DatabaseAndTableWithAlias joined_table_name(table_expression, context.getCurrentDatabase());
auto add_name_to_join_keys = [&](Names & join_keys, ASTs & join_asts, const ASTPtr & ast, bool right_table)
{

View File

@ -259,6 +259,11 @@ private:
JoinedColumn(const NameAndTypePair & name_and_type_, const String & original_name_)
: name_and_type(name_and_type_), original_name(original_name_) {}
bool operator==(const JoinedColumn & o) const
{
return name_and_type == o.name_and_type && original_name == o.original_name;
}
};
using JoinedColumnsList = std::list<JoinedColumn>;
@ -322,9 +327,6 @@ private:
void optimizeIfWithConstantConditionImpl(ASTPtr & current_ast);
bool tryExtractConstValueFromCondition(const ASTPtr & condition, bool & value) const;
/// Adds a list of ALIAS columns from the table.
void addAliasColumns();
/// Replacing scalar subqueries with constant values.
void executeScalarSubqueries();

View File

@ -239,7 +239,7 @@ void ExternalLoader::reloadFromConfigFiles(const bool throw_on_error, const bool
if (current_config.find(loadable.first) == std::end(current_config))
removed_loadable_objects.emplace_back(loadable.first);
}
for(const auto & name : removed_loadable_objects)
for (const auto & name : removed_loadable_objects)
loadable_objects.erase(name);
}

View File

@ -139,7 +139,7 @@ private:
* instead of doing a subquery, you just need to read it.
*/
auto database_and_table_name = ASTIdentifier::createSpecial(external_table_name);
auto database_and_table_name = createDatabaseAndTableNode("", external_table_name);
if (auto ast_table_expr = typeid_cast<ASTTableExpression *>(subquery_or_table_name_or_table_expression.get()))
{

View File

@ -1,5 +1,6 @@
#include <Interpreters/InJoinSubqueriesPreprocessor.h>
#include <Interpreters/Context.h>
#include <Interpreters/DatabaseAndTableWithAlias.h>
#include <Storages/StorageDistributed.h>
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTTablesInSelectQuery.h>
@ -81,40 +82,13 @@ void forEachTable(IAST * node, F && f)
StoragePtr tryGetTable(const ASTPtr & database_and_table, const Context & context)
{
String database;
String table;
const ASTIdentifier * id = typeid_cast<const ASTIdentifier *>(database_and_table.get());
if (!id)
throw Exception("Logical error: identifier expected", ErrorCodes::LOGICAL_ERROR);
const ASTIdentifier * id = static_cast<const ASTIdentifier *>(database_and_table.get());
DatabaseAndTableWithAlias db_and_table(*id);
if (id->children.empty())
table = id->name;
else if (id->children.size() == 2)
{
database = static_cast<const ASTIdentifier *>(id->children[0].get())->name;
table = static_cast<const ASTIdentifier *>(id->children[1].get())->name;
}
else
throw Exception("Logical error: unexpected number of components in table expression", ErrorCodes::LOGICAL_ERROR);
return context.tryGetTable(database, table);
}
void replaceDatabaseAndTable(ASTPtr & database_and_table, const String & database_name, const String & table_name)
{
ASTPtr table = ASTIdentifier::createSpecial(table_name);
if (!database_name.empty())
{
ASTPtr database = ASTIdentifier::createSpecial(database_name);
database_and_table = ASTIdentifier::createSpecial(database_name + "." + table_name);
database_and_table->children = {database, table};
}
else
{
database_and_table = ASTIdentifier::createSpecial(table_name);
}
return context.tryGetTable(db_and_table.database, db_and_table.table);
}
}
@ -156,7 +130,7 @@ void InJoinSubqueriesPreprocessor::process(ASTSelectQuery * query) const
forEachNonGlobalSubquery(query, [&] (IAST * subquery, IAST * function, IAST * table_join)
{
forEachTable(subquery, [&] (ASTPtr & database_and_table)
forEachTable(subquery, [&] (ASTPtr & database_and_table)
{
StoragePtr storage = tryGetTable(database_and_table, context);
@ -199,7 +173,8 @@ void InJoinSubqueriesPreprocessor::process(ASTSelectQuery * query) const
std::string table;
std::tie(database, table) = getRemoteDatabaseAndTableName(*storage);
replaceDatabaseAndTable(database_and_table, database, table);
/// TODO: find a way to avoid replacing AST nodes
database_and_table = createDatabaseAndTableNode(database, table);
}
else
throw Exception("InJoinSubqueriesPreprocessor: unexpected value of 'distributed_product_mode' setting", ErrorCodes::LOGICAL_ERROR);

View File

@ -26,7 +26,7 @@ BlockIO InterpreterCheckQuery::execute()
StoragePtr table = context.getTable(database_name, table_name);
auto column = ColumnUInt8::create();
column->insert(UInt64(table->checkData()));
column->insertValue(UInt64(table->checkData()));
result = Block{{ std::move(column), std::make_shared<DataTypeUInt8>(), "result" }};
BlockIO res;
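insertValue is the typed fast path that several hunks in this commit switch to (here, in FunctionArrayIntersect, and in AggregationMethodKeysFixed above): the raw value goes straight into the column's data array instead of passing through a Field. A simplified model; this ColumnUInt8 is a stand-in, not the real IColumn:

#include <cstdint>
#include <iostream>
#include <variant>
#include <vector>

using Field = std::variant<uint64_t, int64_t, double>;

struct ColumnUInt8
{
    std::vector<uint8_t> data;

    /// Generic path: the value is boxed into a Field first, then unboxed.
    void insert(const Field & f) { data.push_back(static_cast<uint8_t>(std::get<uint64_t>(f))); }

    /// Typed fast path: the raw value goes straight into the data array.
    void insertValue(uint8_t v) { data.push_back(v); }
};

int main()
{
    ColumnUInt8 col;
    col.insert(Field(uint64_t(1))); /// works, but constructs a Field
    col.insertValue(1);             /// what these call sites switch to
    std::cout << col.data.size() << '\n'; /// prints: 2
}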

View File

@ -213,7 +213,7 @@ static ParsedColumns parseColumns(const ASTExpressionList & column_list_ast, con
default_expr_list->children.emplace_back(setAlias(
makeASTFunction("CAST", std::make_shared<ASTIdentifier>(tmp_column_name),
std::make_shared<ASTLiteral>(Field(data_type_ptr->getName()))), final_column_name));
std::make_shared<ASTLiteral>(data_type_ptr->getName())), final_column_name));
default_expr_list->children.emplace_back(setAlias(col_decl.default_expression->clone(), tmp_column_name));
}
else

View File

@ -10,7 +10,6 @@
#include <DataStreams/SquashingBlockOutputStream.h>
#include <DataStreams/copyData.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTInsertQuery.h>
#include <Parsers/ASTSelectWithUnionQuery.h>

View File

@ -23,6 +23,7 @@ namespace ErrorCodes
{
extern const int READONLY;
extern const int LOGICAL_ERROR;
extern const int CANNOT_KILL;
}
@ -62,7 +63,7 @@ using QueryDescriptors = std::vector<QueryDescriptor>;
static void insertResultRow(size_t n, CancellationCode code, const Block & source_processes, const Block & sample_block, MutableColumns & columns)
{
columns[0]->insert(String(cancellationCodeToStatus(code)));
columns[0]->insert(cancellationCodeToStatus(code));
for (size_t col_num = 1, size = columns.size(); col_num < size; ++col_num)
columns[col_num]->insertFrom(*source_processes.getByName(sample_block.getByPosition(col_num).name).column, n);
@ -138,13 +139,17 @@ public:
auto code = process_list.sendCancelToQuery(curr_process.query_id, curr_process.user, true);
if (code != CancellationCode::QueryIsNotInitializedYet && code != CancellationCode::CancelSent)
/// Raise an exception if this query is immortal; the user has to know.
/// This could happen only if the query generates streams that don't implement IProfilingBlockInputStream.
if (code == CancellationCode::CancelCannotBeSent)
throw Exception("Can't kill query '" + curr_process.query_id + "': it consists of unkillable stages", ErrorCodes::CANNOT_KILL);
else if (code != CancellationCode::QueryIsNotInitializedYet && code != CancellationCode::CancelSent)
{
curr_process.processed = true;
insertResultRow(curr_process.source_num, code, processes_block, res_sample_block, columns);
++num_processed_queries;
}
/// Wait if QueryIsNotInitializedYet or CancelSent
/// Wait if CancelSent
}
/// KILL QUERY could be killed also
@ -194,6 +199,12 @@ BlockIO InterpreterKillQueryQuery::execute()
for (const auto & query_desc : queries_to_stop)
{
auto code = (query.test) ? CancellationCode::Unknown : process_list.sendCancelToQuery(query_desc.query_id, query_desc.user, true);
/// Raise an exception if this query is immortal; the user has to know.
/// This could happen only if the query generates streams that don't implement IProfilingBlockInputStream.
if (code == CancellationCode::CancelCannotBeSent)
throw Exception("Can't kill query '" + query_desc.query_id + "': it consists of unkillable stages", ErrorCodes::CANNOT_KILL);
insertResultRow(query_desc.source_num, code, processes_block, header, res_columns);
}

View File

@ -34,6 +34,7 @@
#include <Interpreters/InterpreterSelectWithUnionQuery.h>
#include <Interpreters/InterpreterSetQuery.h>
#include <Interpreters/ExpressionAnalyzer.h>
#include <Interpreters/DatabaseAndTableWithAlias.h>
#include <Storages/MergeTree/MergeTreeWhereOptimizer.h>
#include <Storages/IStorage.h>
@ -146,7 +147,7 @@ InterpreterSelectQuery::InterpreterSelectQuery(
max_streams = settings.max_threads;
const auto & table_expression = query.table();
ASTPtr table_expression = getTableFunctionOrSubquery(query, 0);
if (input)
{
@ -205,7 +206,7 @@ InterpreterSelectQuery::InterpreterSelectQuery(
if (query_analyzer->isRewriteSubqueriesPredicate())
{
/// remake interpreter_subquery when PredicateOptimizer is rewrite subqueries and main table is subquery
if (typeid_cast<ASTSelectWithUnionQuery *>(table_expression.get()))
if (table_expression && typeid_cast<ASTSelectWithUnionQuery *>(table_expression.get()))
interpreter_subquery = std::make_unique<InterpreterSelectWithUnionQuery>(
table_expression, getSubqueryContext(context), required_columns, QueryProcessingStage::Complete, subquery_depth + 1,
only_analyze);
@ -236,29 +237,20 @@ InterpreterSelectQuery::InterpreterSelectQuery(
void InterpreterSelectQuery::getDatabaseAndTableNames(String & database_name, String & table_name)
{
auto query_database = query.database();
auto query_table = query.table();
if (auto db_and_table = getDatabaseAndTable(query, 0))
{
table_name = db_and_table->table;
database_name = db_and_table->database;
/** If the table is not specified - use the table `system.one`.
* If the database is not specified - use the current database.
*/
if (query_database)
database_name = typeid_cast<ASTIdentifier &>(*query_database).name;
if (query_table)
table_name = typeid_cast<ASTIdentifier &>(*query_table).name;
if (!query_table)
/// If the database is not specified - use the current database.
if (database_name.empty() && !context.tryGetTable("", table_name))
database_name = context.getCurrentDatabase();
}
else /// If the table is not specified - use the table `system.one`.
{
database_name = "system";
table_name = "one";
}
else if (!query_database)
{
if (context.tryGetTable("", table_name))
database_name = "";
else
database_name = context.getCurrentDatabase();
}
}
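getDatabaseAndTableNames() now resolves its three cases through getDatabaseAndTable(): an explicit table, the system.one fallback, and an empty database name that may denote a session temporary table. A compact sketch of the same resolution order, with a hypothetical Context reduced to the two lookups the function needs:

```cpp
#include <string>

/// Hypothetical context, reduced to the two lookups the function needs.
struct Context
{
    bool hasTemporaryTable(const std::string & /*name*/) const { return false; }
    std::string currentDatabase() const { return "default"; }
};

/// Mirrors the rewritten getDatabaseAndTableNames(): fall back to system.one
/// when no table is given; resolve an empty database name to a session
/// temporary table if one exists, otherwise to the current database.
void resolveDatabaseAndTable(const Context & ctx, bool has_table, std::string & database, std::string & table)
{
    if (!has_table)
    {
        database = "system";
        table = "one";
        return;
    }
    if (database.empty() && !ctx.hasTemporaryTable(table))
        database = ctx.currentDatabase();
}

int main()
{
    Context ctx;
    std::string db, table;
    resolveDatabaseAndTable(ctx, /*has_table=*/false, db, table); // db == "system", table == "one"
}
```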
@ -884,8 +876,12 @@ void InterpreterSelectQuery::executeFetchColumns(
/// If we need fewer columns than the subquery has, update the interpreter.
if (required_columns.size() < source_header.columns())
{
ASTPtr subquery = getTableFunctionOrSubquery(query, 0);
if (!subquery)
throw Exception("Subquery expected", ErrorCodes::LOGICAL_ERROR);
interpreter_subquery = std::make_unique<InterpreterSelectWithUnionQuery>(
query.table(), getSubqueryContext(context), required_columns, QueryProcessingStage::Complete, subquery_depth + 1, only_analyze);
subquery, getSubqueryContext(context), required_columns, QueryProcessingStage::Complete, subquery_depth + 1, only_analyze);
if (query_analyzer->hasAggregation())
interpreter_subquery->ignoreWithTotals();
@ -1240,6 +1236,8 @@ void InterpreterSelectQuery::executeMergeSorted(Pipeline & pipeline)
/// If there are several streams, then we merge them into one
if (pipeline.hasMoreThanOneStream())
{
unifyStreams(pipeline);
/** MergingSortedBlockInputStream reads the sources sequentially.
* To make the data on the remote servers prepared in parallel, we wrap it in AsynchronousBlockInputStream.
*/
@ -1294,16 +1292,7 @@ void InterpreterSelectQuery::executeUnion(Pipeline & pipeline)
/// If there are still several streams, then we combine them into one
if (pipeline.hasMoreThanOneStream())
{
/// Unify streams in case they have different headers.
auto first_header = pipeline.streams.at(0)->getHeader();
for (size_t i = 1; i < pipeline.streams.size(); ++i)
{
auto & stream = pipeline.streams[i];
auto header = stream->getHeader();
auto mode = ConvertingBlockInputStream::MatchColumnsMode::Name;
if (!blocksHaveEqualStructure(first_header, header))
stream = std::make_shared<ConvertingBlockInputStream>(context, stream, first_header, mode);
}
unifyStreams(pipeline);
pipeline.firstStream() = std::make_shared<UnionBlockInputStream<>>(pipeline.streams, pipeline.stream_with_non_joined_data, max_streams);
pipeline.stream_with_non_joined_data = nullptr;
@ -1362,11 +1351,9 @@ bool hasWithTotalsInAnySubqueryInFromClause(const ASTSelectQuery & query)
* In other cases, totals will be computed on the initiating server of the query, and it is not necessary to read the data to the end.
*/
auto query_table = query.table();
if (query_table)
if (auto query_table = getTableFunctionOrSubquery(query, 0))
{
auto ast_union = typeid_cast<const ASTSelectWithUnionQuery *>(query_table.get());
if (ast_union)
if (auto ast_union = typeid_cast<const ASTSelectWithUnionQuery *>(query_table.get()))
{
for (const auto & elem : ast_union->list_of_selects->children)
if (hasWithTotalsInAnySubqueryInFromClause(typeid_cast<const ASTSelectQuery &>(*elem)))
@ -1435,6 +1422,23 @@ void InterpreterSelectQuery::executeSubqueriesInSetsAndJoins(Pipeline & pipeline
SizeLimits(settings.max_rows_to_transfer, settings.max_bytes_to_transfer, settings.transfer_overflow_mode));
}
void InterpreterSelectQuery::unifyStreams(Pipeline & pipeline)
{
if (pipeline.hasMoreThanOneStream())
{
/// Unify streams in case they have different headers.
auto first_header = pipeline.streams.at(0)->getHeader();
for (size_t i = 1; i < pipeline.streams.size(); ++i)
{
auto & stream = pipeline.streams[i];
auto header = stream->getHeader();
auto mode = ConvertingBlockInputStream::MatchColumnsMode::Name;
if (!blocksHaveEqualStructure(first_header, header))
stream = std::make_shared<ConvertingBlockInputStream>(context, stream, first_header, mode);
}
}
}
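The header-unification loop that used to live inline in executeUnion() is now the reusable unifyStreams() above, also called from executeMergeSorted(). A self-contained sketch of the same idea, with trivial stand-ins for streams and headers (ConvertingStream here only re-labels the header; the real ConvertingBlockInputStream also converts columns by name):

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

/// A "header" is reduced to a list of column names for this sketch.
using Header = std::vector<std::string>;

struct IStream
{
    virtual ~IStream() = default;
    virtual Header getHeader() const = 0;
};
using StreamPtr = std::shared_ptr<IStream>;

struct FixedStream : IStream
{
    Header h;
    explicit FixedStream(Header h_) : h(std::move(h_)) {}
    Header getHeader() const override { return h; }
};

/// Plays the role of ConvertingBlockInputStream: re-exposes the wrapped stream
/// under the target header (real column conversion by name is omitted here).
struct ConvertingStream : IStream
{
    StreamPtr inner;
    Header target;
    ConvertingStream(StreamPtr s, Header t) : inner(std::move(s)), target(std::move(t)) {}
    Header getHeader() const override { return target; }
};

/// Same idea as the extracted unifyStreams(): make every stream agree with the
/// first stream's header before they are merged into one.
void unifyStreams(std::vector<StreamPtr> & streams)
{
    if (streams.size() <= 1)
        return;
    const Header first = streams[0]->getHeader();
    for (std::size_t i = 1; i < streams.size(); ++i)
        if (streams[i]->getHeader() != first)
            streams[i] = std::make_shared<ConvertingStream>(streams[i], first);
}

int main()
{
    std::vector<StreamPtr> streams{
        std::make_shared<FixedStream>(Header{"a", "b"}),
        std::make_shared<FixedStream>(Header{"b", "a"})};
    unifyStreams(streams); // the second stream is now wrapped in ConvertingStream
}
```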
void InterpreterSelectQuery::ignoreWithTotals()
{

View File

@ -190,6 +190,9 @@ private:
void executeExtremes(Pipeline & pipeline);
void executeSubqueriesInSetsAndJoins(Pipeline & pipeline, std::unordered_map<String, SubqueryForSet> & subqueries_for_sets);
/// If the pipeline has several streams with different headers, convert them all to the first stream's header using ConvertingBlockInputStream.
void unifyStreams(Pipeline & pipeline);
enum class Modificator
{
ROLLUP = 0,

View File

@ -5,7 +5,6 @@
#include <Interpreters/InterpreterShowProcesslistQuery.h>
#include <Parsers/ASTQueryWithOutput.h>
#include <Parsers/ASTIdentifier.h>
namespace DB

View File

@ -1,6 +1,5 @@
#include <IO/ReadBufferFromString.h>
#include <Parsers/ASTShowTablesQuery.h>
#include <Parsers/ASTIdentifier.h>
#include <Interpreters/Context.h>
#include <Interpreters/executeQuery.h>
#include <Interpreters/InterpreterShowTablesQuery.h>

View File

@ -15,6 +15,8 @@
namespace DB
{
template <> struct NearestFieldType<PartLogElement::Type> { using Type = UInt64; };
Block PartLogElement::createBlock()
{
auto event_type_datatype = std::make_shared<DataTypeEnum8>(
@ -60,18 +62,18 @@ void PartLogElement::appendToBlock(Block & block) const
size_t i = 0;
columns[i++]->insert(Int64(event_type));
columns[i++]->insert(UInt64(DateLUT::instance().toDayNum(event_time)));
columns[i++]->insert(UInt64(event_time));
columns[i++]->insert(UInt64(duration_ms));
columns[i++]->insert(event_type);
columns[i++]->insert(DateLUT::instance().toDayNum(event_time));
columns[i++]->insert(event_time);
columns[i++]->insert(duration_ms);
columns[i++]->insert(database_name);
columns[i++]->insert(table_name);
columns[i++]->insert(part_name);
columns[i++]->insert(partition_id);
columns[i++]->insert(UInt64(rows));
columns[i++]->insert(UInt64(bytes_compressed_on_disk));
columns[i++]->insert(rows);
columns[i++]->insert(bytes_compressed_on_disk);
Array source_part_names_array;
source_part_names_array.reserve(source_part_names.size());
@ -80,11 +82,11 @@ void PartLogElement::appendToBlock(Block & block) const
columns[i++]->insert(source_part_names_array);
columns[i++]->insert(UInt64(bytes_uncompressed));
columns[i++]->insert(UInt64(rows_read));
columns[i++]->insert(UInt64(bytes_read_uncompressed));
columns[i++]->insert(bytes_uncompressed);
columns[i++]->insert(rows_read);
columns[i++]->insert(bytes_read_uncompressed);
columns[i++]->insert(UInt64(error));
columns[i++]->insert(error);
columns[i++]->insert(exception);
block.setColumns(std::move(columns));
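This file (and the query-log changes below) drop the explicit UInt64(...)/Int64(...) wrappers around inserted values, which suggests IColumn::insert()/Field now accept arbitrary integral types directly. A toy sketch of how such dispatch can work via a NearestFieldType-style trait; the variant and trait here are simplified assumptions, not the real Field implementation:

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <type_traits>
#include <utility>
#include <variant>

/// Toy Field: 64-bit integers and strings only.
using Field = std::variant<uint64_t, int64_t, std::string>;

/// Toy NearestFieldType: map any integral type to the widest variant member of
/// the same signedness, so call sites no longer have to spell the cast.
template <typename T>
using NearestFieldType = std::conditional_t<std::is_signed_v<T>, int64_t, uint64_t>;

struct Column
{
    Field last;

    template <typename T>
    void insert(T value)
    {
        if constexpr (std::is_integral_v<T>)
            last = Field(static_cast<NearestFieldType<T>>(value)); // implicit widening
        else
            last = Field(std::move(value));
    }
};

int main()
{
    Column c;
    uint16_t duration_ms = 42;
    c.insert(duration_ms); // no UInt64(...) wrapper needed at the call site
    std::cout << std::get<uint64_t>(c.last) << '\n'; // prints 42
}
```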

View File

@ -44,11 +44,8 @@ bool PredicateExpressionsOptimizer::optimizeImpl(
/// split predicate with `and`
PredicateExpressions outer_predicate_expressions = splitConjunctionPredicate(outer_expression);
std::vector<ASTTableExpression *> tables_expression = getSelectTablesExpression(ast_select);
std::vector<DatabaseAndTableWithAlias> database_and_table_with_aliases;
for (const auto & table_expression : tables_expression)
database_and_table_with_aliases.emplace_back(
getTableNameWithAliasFromTableExpression(*table_expression, context.getCurrentDatabase()));
std::vector<DatabaseAndTableWithAlias> database_and_table_with_aliases =
getDatabaseAndTables(*ast_select, context.getCurrentDatabase());
bool is_rewrite_subquery = false;
for (const auto & outer_predicate : outer_predicate_expressions)
@ -261,15 +258,14 @@ bool PredicateExpressionsOptimizer::optimizeExpression(const ASTPtr & outer_expr
void PredicateExpressionsOptimizer::getAllSubqueryProjectionColumns(SubqueriesProjectionColumns & all_subquery_projection_columns)
{
const auto tables_expression = getSelectTablesExpression(ast_select);
const auto tables_expression = getSelectTablesExpression(*ast_select);
for (const auto & table_expression : tables_expression)
{
if (table_expression->subquery)
{
/// Use qualifiers to translate the columns of subqueries
const auto database_and_table_with_alias =
getTableNameWithAliasFromTableExpression(*table_expression, context.getCurrentDatabase());
DatabaseAndTableWithAlias database_and_table_with_alias(*table_expression, context.getCurrentDatabase());
String qualified_name_prefix = database_and_table_with_alias.getQualifiedNamePrefix();
getSubqueryProjectionColumns(all_subquery_projection_columns, qualified_name_prefix,
static_cast<const ASTSubquery *>(table_expression->subquery.get())->children[0]);
@ -336,7 +332,7 @@ ASTs PredicateExpressionsOptimizer::evaluateAsterisk(ASTSelectQuery * select_que
if (!select_query->tables || select_query->tables->children.empty())
return {};
std::vector<ASTTableExpression *> tables_expression = getSelectTablesExpression(select_query);
std::vector<const ASTTableExpression *> tables_expression = getSelectTablesExpression(*select_query);
if (const auto qualified_asterisk = typeid_cast<ASTQualifiedAsterisk *>(asterisk.get()))
{
@ -354,8 +350,7 @@ ASTs PredicateExpressionsOptimizer::evaluateAsterisk(ASTSelectQuery * select_que
for (auto it = tables_expression.begin(); it != tables_expression.end(); ++it)
{
const ASTTableExpression * table_expression = *it;
const auto database_and_table_with_alias =
getTableNameWithAliasFromTableExpression(*table_expression, context.getCurrentDatabase());
DatabaseAndTableWithAlias database_and_table_with_alias(*table_expression, context.getCurrentDatabase());
/// database.table.*
if (num_components == 2 && !database_and_table_with_alias.database.empty()
&& static_cast<const ASTIdentifier &>(*ident->children[0]).name == database_and_table_with_alias.database
@ -394,8 +389,8 @@ ASTs PredicateExpressionsOptimizer::evaluateAsterisk(ASTSelectQuery * select_que
else if (table_expression->database_and_table_name)
{
const auto database_and_table_ast = static_cast<ASTIdentifier*>(table_expression->database_and_table_name.get());
const auto database_and_table_name = getDatabaseAndTableNameFromIdentifier(*database_and_table_ast);
storage = context.getTable(database_and_table_name.first, database_and_table_name.second);
DatabaseAndTableWithAlias database_and_table_name(*database_and_table_ast);
storage = context.getTable(database_and_table_name.database, database_and_table_name.table);
}
const auto block = storage->getSampleBlock();
@ -406,25 +401,6 @@ ASTs PredicateExpressionsOptimizer::evaluateAsterisk(ASTSelectQuery * select_que
return projection_columns;
}
std::vector<ASTTableExpression *> PredicateExpressionsOptimizer::getSelectTablesExpression(ASTSelectQuery * select_query)
{
if (!select_query->tables)
return {};
std::vector<ASTTableExpression *> tables_expression;
const ASTTablesInSelectQuery & tables_in_select_query = static_cast<const ASTTablesInSelectQuery &>(*select_query->tables);
for (const auto & child : tables_in_select_query.children)
{
ASTTablesInSelectQueryElement * tables_element = static_cast<ASTTablesInSelectQueryElement *>(child.get());
if (tables_element->table_expression)
tables_expression.emplace_back(static_cast<ASTTableExpression *>(tables_element->table_expression.get()));
}
return tables_expression;
}
void PredicateExpressionsOptimizer::cleanExpressionAlias(ASTPtr & expression)
{
const auto my_alias = expression->tryGetAlias();

View File

@ -9,7 +9,7 @@
#include <Interpreters/ExpressionActions.h>
#include <Parsers/ASTSubquery.h>
#include <Parsers/ASTTablesInSelectQuery.h>
#include <Interpreters/evaluateQualified.h>
#include <Interpreters/DatabaseAndTableWithAlias.h>
namespace DB
{
@ -105,8 +105,6 @@ private:
ASTs getSelectQueryProjectionColumns(ASTPtr & ast);
std::vector<ASTTableExpression *> getSelectTablesExpression(ASTSelectQuery * select_query);
ASTs evaluateAsterisk(ASTSelectQuery * select_query, const ASTPtr & asterisk);
void cleanExpressionAlias(ASTPtr & expression);

View File

@ -1,10 +1,10 @@
#include <Interpreters/ProcessList.h>
#include <Interpreters/Settings.h>
#include <Interpreters/Context.h>
#include <Interpreters/DatabaseAndTableWithAlias.h>
#include <Parsers/ASTSelectWithUnionQuery.h>
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTKillQueryQuery.h>
#include <Parsers/ASTIdentifier.h>
#include <Common/typeid_cast.h>
#include <Common/Exception.h>
#include <Common/CurrentThread.h>
@ -51,28 +51,14 @@ static bool isUnlimitedQuery(const IAST * ast)
if (!ast_selects->list_of_selects || ast_selects->list_of_selects->children.empty())
return false;
auto ast_select = typeid_cast<ASTSelectQuery *>(ast_selects->list_of_selects->children[0].get());
auto ast_select = typeid_cast<const ASTSelectQuery *>(ast_selects->list_of_selects->children[0].get());
if (!ast_select)
return false;
auto ast_database = ast_select->database();
if (!ast_database)
return false;
if (auto database_and_table = getDatabaseAndTable(*ast_select, 0))
return database_and_table->database == "system" && database_and_table->table == "processes";
auto ast_table = ast_select->table();
if (!ast_table)
return false;
auto ast_database_id = typeid_cast<const ASTIdentifier *>(ast_database.get());
if (!ast_database_id)
return false;
auto ast_table_id = typeid_cast<const ASTIdentifier *>(ast_table.get());
if (!ast_table_id)
return false;
return ast_database_id->name == "system" && ast_table_id->name == "processes";
return false;
}
return false;
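The hand-rolled identifier walk is replaced by getDatabaseAndTable(); the predicate itself stays "is this a SELECT from system.processes", presumably so monitoring and KILL QUERY helper queries remain exempt from the usual query limits. A minimal sketch of the simplified check:

```cpp
#include <optional>
#include <string>

struct DatabaseAndTable { std::string database, table; };

/// Mirrors the simplified check: a SELECT is "unlimited" when it reads from
/// system.processes, so monitoring/KILL helper queries can always run.
bool targetsSystemProcesses(const std::optional<DatabaseAndTable> & db_and_table)
{
    return db_and_table
        && db_and_table->database == "system"
        && db_and_table->table == "processes";
}

int main()
{
    return targetsSystemProcesses(DatabaseAndTable{"system", "processes"}) ? 0 : 1;
}
```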
@ -396,8 +382,9 @@ ProcessList::CancellationCode ProcessList::sendCancelToQuery(const String & curr
}
return CancellationCode::CancelCannotBeSent;
}
return CancellationCode::QueryIsNotInitializedYet;
/// The query has not even started yet
elem->is_killed.store(true);
return CancellationCode::CancelSent;
}
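Previously a KILL against a query whose streams were not yet initialized returned QueryIsNotInitializedYet and had to be retried; now the process entry is flagged via is_killed and CancelSent is returned, and the executeQuery() hunk further down refuses to start a flagged query. A small sketch of that handshake (the types are stand-ins, not the real ProcessList):

```cpp
#include <atomic>
#include <iostream>
#include <stdexcept>
#include <string>

/// Stand-in for a ProcessList entry; not the real ClickHouse type.
struct QueryState
{
    std::string id;
    std::atomic<bool> is_killed{false};
    bool streams_initialized = false;
};

/// Corresponds to sendCancelToQuery(): if the query has no streams yet,
/// just mark it killed and report the cancel as sent.
void cancel(QueryState & q)
{
    if (!q.streams_initialized)
        q.is_killed.store(true);
}

/// Corresponds to the executeQuery() hunk further down: a query flagged as
/// killed refuses to initialize its streams and throws instead.
void startExecution(QueryState & q)
{
    if (q.is_killed.load())
        throw std::runtime_error("Query '" + q.id + "' was killed in pending state");
    q.streams_initialized = true;
}

int main()
{
    QueryState q{"q1"};
    cancel(q);
    try { startExecution(q); }
    catch (const std::exception & e) { std::cout << e.what() << '\n'; }
}
```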

View File

@ -191,6 +191,8 @@ public:
/// Get query in/out pointers from BlockIO
bool tryGetQueryStreams(BlockInputStreamPtr & in, BlockOutputStreamPtr & out) const;
bool isKilled() const { return is_killed; }
};

View File

@ -19,7 +19,6 @@
namespace DB
{
Block QueryLogElement::createBlock()
{
return
@ -104,19 +103,19 @@ void QueryLogElement::appendToBlock(Block & block) const
size_t i = 0;
columns[i++]->insert(UInt64(type));
columns[i++]->insert(UInt64(DateLUT::instance().toDayNum(event_time)));
columns[i++]->insert(UInt64(event_time));
columns[i++]->insert(UInt64(query_start_time));
columns[i++]->insert(UInt64(query_duration_ms));
columns[i++]->insert(DateLUT::instance().toDayNum(event_time));
columns[i++]->insert(event_time);
columns[i++]->insert(query_start_time);
columns[i++]->insert(query_duration_ms);
columns[i++]->insert(UInt64(read_rows));
columns[i++]->insert(UInt64(read_bytes));
columns[i++]->insert(UInt64(written_rows));
columns[i++]->insert(UInt64(written_bytes));
columns[i++]->insert(UInt64(result_rows));
columns[i++]->insert(UInt64(result_bytes));
columns[i++]->insert(read_rows);
columns[i++]->insert(read_bytes);
columns[i++]->insert(written_rows);
columns[i++]->insert(written_bytes);
columns[i++]->insert(result_rows);
columns[i++]->insert(result_bytes);
columns[i++]->insert(UInt64(memory_usage));
columns[i++]->insert(memory_usage);
columns[i++]->insertData(query.data(), query.size());
columns[i++]->insertData(exception.data(), exception.size());
@ -124,7 +123,7 @@ void QueryLogElement::appendToBlock(Block & block) const
appendClientInfo(client_info, columns, i);
columns[i++]->insert(UInt64(ClickHouseRevision::get()));
columns[i++]->insert(ClickHouseRevision::get());
{
Array threads_array;
@ -163,27 +162,27 @@ void QueryLogElement::appendToBlock(Block & block) const
void QueryLogElement::appendClientInfo(const ClientInfo & client_info, MutableColumns & columns, size_t & i)
{
columns[i++]->insert(UInt64(client_info.query_kind == ClientInfo::QueryKind::INITIAL_QUERY));
columns[i++]->insert(client_info.query_kind == ClientInfo::QueryKind::INITIAL_QUERY);
columns[i++]->insert(client_info.current_user);
columns[i++]->insert(client_info.current_query_id);
columns[i++]->insertData(IPv6ToBinary(client_info.current_address.host()).data(), 16);
columns[i++]->insert(UInt64(client_info.current_address.port()));
columns[i++]->insert(client_info.current_address.port());
columns[i++]->insert(client_info.initial_user);
columns[i++]->insert(client_info.initial_query_id);
columns[i++]->insertData(IPv6ToBinary(client_info.initial_address.host()).data(), 16);
columns[i++]->insert(UInt64(client_info.initial_address.port()));
columns[i++]->insert(client_info.initial_address.port());
columns[i++]->insert(UInt64(client_info.interface));
columns[i++]->insert(client_info.os_user);
columns[i++]->insert(client_info.client_hostname);
columns[i++]->insert(client_info.client_name);
columns[i++]->insert(UInt64(client_info.client_revision));
columns[i++]->insert(UInt64(client_info.client_version_major));
columns[i++]->insert(UInt64(client_info.client_version_minor));
columns[i++]->insert(UInt64(client_info.client_version_patch));
columns[i++]->insert(client_info.client_revision);
columns[i++]->insert(client_info.client_version_major);
columns[i++]->insert(client_info.client_version_minor);
columns[i++]->insert(client_info.client_version_patch);
columns[i++]->insert(UInt64(client_info.http_method));
columns[i++]->insert(client_info.http_user_agent);

View File

@ -9,7 +9,6 @@
#include <Common/typeid_cast.h>
#include <Poco/String.h>
#include <Parsers/ASTQualifiedAsterisk.h>
//#include <iostream>
#include <IO/WriteHelpers.h>
namespace DB

View File

@ -2,7 +2,7 @@
#include <Core/Names.h>
#include <Parsers/IAST.h>
#include <Interpreters/evaluateQualified.h>
#include <Interpreters/DatabaseAndTableWithAlias.h>
namespace DB
{

View File

@ -75,30 +75,30 @@ void QueryThreadLogElement::appendToBlock(Block & block) const
size_t i = 0;
columns[i++]->insert(UInt64(DateLUT::instance().toDayNum(event_time)));
columns[i++]->insert(UInt64(event_time));
columns[i++]->insert(UInt64(query_start_time));
columns[i++]->insert(UInt64(query_duration_ms));
columns[i++]->insert(DateLUT::instance().toDayNum(event_time));
columns[i++]->insert(event_time);
columns[i++]->insert(query_start_time);
columns[i++]->insert(query_duration_ms);
columns[i++]->insert(UInt64(read_rows));
columns[i++]->insert(UInt64(read_bytes));
columns[i++]->insert(UInt64(written_rows));
columns[i++]->insert(UInt64(written_bytes));
columns[i++]->insert(read_rows);
columns[i++]->insert(read_bytes);
columns[i++]->insert(written_rows);
columns[i++]->insert(written_bytes);
columns[i++]->insert(Int64(memory_usage));
columns[i++]->insert(Int64(peak_memory_usage));
columns[i++]->insert(memory_usage);
columns[i++]->insert(peak_memory_usage);
columns[i++]->insertData(thread_name.data(), thread_name.size());
columns[i++]->insert(UInt64(thread_number));
columns[i++]->insert(Int64(os_thread_id));
columns[i++]->insert(UInt64(master_thread_number));
columns[i++]->insert(Int64(master_os_thread_id));
columns[i++]->insert(thread_number);
columns[i++]->insert(os_thread_id);
columns[i++]->insert(master_thread_number);
columns[i++]->insert(master_os_thread_id);
columns[i++]->insertData(query.data(), query.size());
QueryLogElement::appendClientInfo(client_info, columns, i);
columns[i++]->insert(UInt64(ClickHouseRevision::get()));
columns[i++]->insert(ClickHouseRevision::get());
if (profile_counters)
{

View File

@ -176,7 +176,7 @@ struct Settings
\
M(SettingBool, join_use_nulls, 0, "Use NULLs for non-joined rows of outer JOINs. If false, use default value of corresponding columns data type.") \
\
M(SettingJoinStrictness, join_default_strictness, JoinStrictness::Unspecified, "Set the default strictness for JOIN queries. Possible values: empty string, 'ANY', 'ALL'. If empty, a query without explicit strictness will throw an exception.") \
M(SettingJoinStrictness, join_default_strictness, JoinStrictness::ALL, "Set the default strictness for JOIN queries. Possible values: empty string, 'ANY', 'ALL'. If empty, a query without explicit strictness will throw an exception.") \
\
M(SettingUInt64, preferred_block_size_bytes, 1000000, "") \
\

View File

@ -65,18 +65,23 @@ void TranslateQualifiedNamesVisitor::visit(ASTQualifiedAsterisk *, ASTPtr & ast,
if (num_components > 2)
throw Exception("Qualified asterisk cannot have more than two qualifiers", ErrorCodes::UNKNOWN_ELEMENT_IN_AST);
DatabaseAndTableWithAlias db_and_table(*ident);
for (const auto & table_names : tables)
{
/// database.table.*, table.* or alias.*
if ((num_components == 2
&& !table_names.database.empty()
&& static_cast<const ASTIdentifier &>(*ident->children[0]).name == table_names.database
&& static_cast<const ASTIdentifier &>(*ident->children[1]).name == table_names.table)
|| (num_components == 0
&& ((!table_names.table.empty() && ident->name == table_names.table)
|| (!table_names.alias.empty() && ident->name == table_names.alias))))
if (num_components == 2)
{
return;
if (!table_names.database.empty() &&
db_and_table.database == table_names.database &&
db_and_table.table == table_names.table)
return;
}
else if (num_components == 0)
{
if ((!table_names.table.empty() && db_and_table.table == table_names.table) ||
(!table_names.alias.empty() && db_and_table.table == table_names.alias))
return;
}
}
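The rewritten asterisk check separates the two-component case (database.table.*) from the bare case (table.* or alias.*), both expressed through DatabaseAndTableWithAlias. A standalone sketch of the matching rules:

```cpp
#include <cstddef>
#include <string>

struct DatabaseAndTableWithAlias { std::string database, table, alias; };

/// Mirrors the rewritten check: a two-component qualifier ("db.table.*") must
/// match database and table; a bare qualifier ("t.*") may match either the
/// table name or its alias.
bool asteriskMatchesTable(const DatabaseAndTableWithAlias & q, std::size_t num_components,
                          const DatabaseAndTableWithAlias & t)
{
    if (num_components == 2)
        return !t.database.empty() && q.database == t.database && q.table == t.table;
    if (num_components == 0)
        return (!t.table.empty() && q.table == t.table)
            || (!t.alias.empty() && q.table == t.alias);
    return false;
}

int main()
{
    DatabaseAndTableWithAlias t{"system", "processes", "p"};
    DatabaseAndTableWithAlias q{"", "p", ""}; // "p.*"
    return asteriskMatchesTable(q, 0, t) ? 0 : 1; // matches via the alias
}
```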

View File

@ -5,7 +5,7 @@
#include <Common/typeid_cast.h>
#include <Parsers/DumpASTNode.h>
#include <Interpreters/evaluateQualified.h>
#include <Interpreters/DatabaseAndTableWithAlias.h>
namespace DB
{
@ -18,7 +18,7 @@ struct ASTTableJoin;
class NamesAndTypesList;
/// It visits nodes, find identifiers and translate their names to needed form.
/// It visits nodes, finds columns (general identifiers and asterisks) and translates their names according to the tables' names.
class TranslateQualifiedNamesVisitor
{
public:

View File

@ -58,7 +58,7 @@ static Field convertNumericTypeImpl(const Field & from)
if (!accurate::equalsOp(value, To(value)))
return {};
return Field(typename NearestFieldType<To>::Type(value));
return To(value);
}
template <typename To>
@ -86,7 +86,7 @@ static Field convertIntToDecimalType(const Field & from, const To & type)
throw Exception("Number is too much to place in " + type.getName(), ErrorCodes::ARGUMENT_OUT_OF_BOUND);
FieldType scaled_value = type.getScaleMultiplier() * value;
return Field(typename NearestFieldType<FieldType>::Type(scaled_value, type.getScale()));
return DecimalField<FieldType>(scaled_value, type.getScale());
}
@ -97,7 +97,7 @@ static Field convertStringToDecimalType(const Field & from, const DataTypeDecima
const String & str_value = from.get<String>();
T value = type.parseFromString(str_value);
return Field(typename NearestFieldType<FieldType>::Type(value, type.getScale()));
return DecimalField<FieldType>(value, type.getScale());
}
@ -150,11 +150,11 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
/// Conversion between Date and DateTime and vice versa.
if (which_type.isDate() && which_from_type.isDateTime())
{
return UInt64(static_cast<const DataTypeDateTime &>(*from_type_hint).getTimeZone().toDayNum(src.get<UInt64>()));
return static_cast<const DataTypeDateTime &>(*from_type_hint).getTimeZone().toDayNum(src.get<UInt64>());
}
else if (which_type.isDateTime() && which_from_type.isDate())
{
return UInt64(static_cast<const DataTypeDateTime &>(type).getTimeZone().fromDayNum(DayNum(src.get<UInt64>())));
return static_cast<const DataTypeDateTime &>(type).getTimeZone().fromDayNum(DayNum(src.get<UInt64>()));
}
else if (type.isValueRepresentedByNumber())
{
@ -184,7 +184,7 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
if (which_type.isDate())
{
/// Convert 'YYYY-MM-DD' Strings to Date
return UInt64(stringToDate(src.get<const String &>()));
return stringToDate(src.get<const String &>());
}
else if (which_type.isDateTime())
{
@ -218,7 +218,12 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
Array res(src_arr_size);
for (size_t i = 0; i < src_arr_size; ++i)
{
res[i] = convertFieldToType(src_arr[i], *nested_type);
if (res[i].isNull() && !type_array->getNestedType()->isNullable())
throw Exception("Type mismatch of array elements in IN or VALUES section. Expected: " + type_array->getNestedType()->getName()
+ ". Got NULL in position " + toString(i + 1), ErrorCodes::TYPE_MISMATCH);
}
return res;
}
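The array branch now validates each converted element: a NULL result is only acceptable when the array's nested type is Nullable. A toy sketch of the loop, with std::optional standing in for a nullable Field:

```cpp
#include <cstddef>
#include <optional>
#include <stdexcept>
#include <string>
#include <vector>

/// Toy model: std::nullopt plays the role of a NULL Field.
using Field = std::optional<int>;

/// Mirrors the new loop: convert every element, then reject a NULL result
/// unless the array's nested type is Nullable.
std::vector<Field> convertArray(const std::vector<Field> & src, bool nested_is_nullable)
{
    std::vector<Field> res(src.size());
    for (std::size_t i = 0; i < src.size(); ++i)
    {
        res[i] = src[i]; // stands in for convertFieldToType(src[i], nested_type)
        if (!res[i].has_value() && !nested_is_nullable)
            throw std::runtime_error("Type mismatch of array elements in IN or VALUES section. "
                                     "Got NULL in position " + std::to_string(i + 1));
    }
    return res;
}

int main()
{
    convertArray({1, 2, 3}, false); // fine
    try { convertArray({1, std::nullopt}, false); }
    catch (const std::exception &) { /* NULL in a non-Nullable array is rejected */ }
}
```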

View File

@ -69,7 +69,7 @@ ASTPtr evaluateConstantExpressionAsLiteral(const ASTPtr & node, const Context &
ASTPtr evaluateConstantExpressionOrIdentifierAsLiteral(const ASTPtr & node, const Context & context)
{
if (auto id = typeid_cast<const ASTIdentifier *>(node.get()))
return std::make_shared<ASTLiteral>(Field(id->name));
return std::make_shared<ASTLiteral>(id->name);
return evaluateConstantExpressionAsLiteral(node, context);
}

View File

@ -1,167 +0,0 @@
#include <Interpreters/evaluateQualified.h>
#include <Interpreters/Context.h>
#include <Common/typeid_cast.h>
#include <Parsers/IAST.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTTablesInSelectQuery.h>
namespace DB
{
/// Checks that ast is an ASTIdentifier and removes num_qualifiers_to_strip components from the left.
/// Example: 'database.table.name' -> (num_qualifiers_to_strip = 2) -> 'name'.
void stripIdentifier(DB::ASTPtr & ast, size_t num_qualifiers_to_strip)
{
ASTIdentifier * identifier = typeid_cast<ASTIdentifier *>(ast.get());
if (!identifier)
throw DB::Exception("ASTIdentifier expected for stripIdentifier", DB::ErrorCodes::LOGICAL_ERROR);
if (num_qualifiers_to_strip)
{
size_t num_components = identifier->children.size();
/// plain column
if (num_components - num_qualifiers_to_strip == 1)
{
DB::String node_alias = identifier->tryGetAlias();
ast = identifier->children.back();
if (!node_alias.empty())
ast->setAlias(node_alias);
}
else
/// nested column
{
identifier->children.erase(identifier->children.begin(), identifier->children.begin() + num_qualifiers_to_strip);
DB::String new_name;
for (const auto & child : identifier->children)
{
if (!new_name.empty())
new_name += '.';
new_name += static_cast<const ASTIdentifier &>(*child.get()).name;
}
identifier->name = new_name;
}
}
}
DatabaseAndTableWithAlias getTableNameWithAliasFromTableExpression(const ASTTableExpression & table_expression,
const String & current_database)
{
DatabaseAndTableWithAlias database_and_table_with_alias;
if (table_expression.database_and_table_name)
{
const auto & identifier = static_cast<const ASTIdentifier &>(*table_expression.database_and_table_name);
database_and_table_with_alias.alias = identifier.tryGetAlias();
if (table_expression.database_and_table_name->children.empty())
{
database_and_table_with_alias.database = current_database;
database_and_table_with_alias.table = identifier.name;
}
else
{
if (table_expression.database_and_table_name->children.size() != 2)
throw Exception("Logical error: number of components in table expression not equal to two", ErrorCodes::LOGICAL_ERROR);
database_and_table_with_alias.database = static_cast<const ASTIdentifier &>(*identifier.children[0]).name;
database_and_table_with_alias.table = static_cast<const ASTIdentifier &>(*identifier.children[1]).name;
}
}
else if (table_expression.table_function)
{
database_and_table_with_alias.alias = table_expression.table_function->tryGetAlias();
}
else if (table_expression.subquery)
{
database_and_table_with_alias.alias = table_expression.subquery->tryGetAlias();
}
else
throw Exception("Logical error: no known elements in ASTTableExpression", ErrorCodes::LOGICAL_ERROR);
return database_and_table_with_alias;
}
/// Get the number of identifier components that correspond to 'alias.', 'table.' or 'database.table.' from names.
size_t getNumComponentsToStripInOrderToTranslateQualifiedName(const ASTIdentifier & identifier,
const DatabaseAndTableWithAlias & names)
{
size_t num_qualifiers_to_strip = 0;
auto get_identifier_name = [](const ASTPtr & ast) { return static_cast<const ASTIdentifier &>(*ast).name; };
/// It is compound identifier
if (!identifier.children.empty())
{
size_t num_components = identifier.children.size();
/// database.table.column
if (num_components >= 3
&& !names.database.empty()
&& get_identifier_name(identifier.children[0]) == names.database
&& get_identifier_name(identifier.children[1]) == names.table)
{
num_qualifiers_to_strip = 2;
}
/// table.column or alias.column. If num_components > 2, it is like table.nested.column.
if (num_components >= 2
&& ((!names.table.empty() && get_identifier_name(identifier.children[0]) == names.table)
|| (!names.alias.empty() && get_identifier_name(identifier.children[0]) == names.alias)))
{
num_qualifiers_to_strip = 1;
}
}
return num_qualifiers_to_strip;
}
std::pair<String, String> getDatabaseAndTableNameFromIdentifier(const ASTIdentifier & identifier)
{
std::pair<String, String> res;
res.second = identifier.name;
if (!identifier.children.empty())
{
if (identifier.children.size() != 2)
throw Exception("Qualified table name could have only two components", ErrorCodes::LOGICAL_ERROR);
res.first = typeid_cast<const ASTIdentifier &>(*identifier.children[0]).name;
res.second = typeid_cast<const ASTIdentifier &>(*identifier.children[1]).name;
}
return res;
}
String DatabaseAndTableWithAlias::getQualifiedNamePrefix() const
{
if (alias.empty() && table.empty())
return "";
return (!alias.empty() ? alias : (database + '.' + table)) + '.';
}
void DatabaseAndTableWithAlias::makeQualifiedName(const ASTPtr & ast) const
{
if (auto identifier = typeid_cast<ASTIdentifier *>(ast.get()))
{
String prefix = getQualifiedNamePrefix();
identifier->name.insert(identifier->name.begin(), prefix.begin(), prefix.end());
Names qualifiers;
if (!alias.empty())
qualifiers.push_back(alias);
else
{
qualifiers.push_back(database);
qualifiers.push_back(table);
}
for (const auto & qualifier : qualifiers)
identifier->children.emplace_back(std::make_shared<ASTIdentifier>(qualifier));
}
}
}
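evaluateQualified.cpp is deleted outright; judging by the include churn throughout this diff, its helpers presumably move into Interpreters/DatabaseAndTableWithAlias.{h,cpp} (stripIdentifier, getNumComponentsToStripInOrderToTranslateQualifiedName, and the DatabaseAndTableWithAlias methods). The prefix rule that survives the move is simple enough to restate as a runnable sketch:

```cpp
#include <iostream>
#include <string>

struct DatabaseAndTableWithAlias
{
    std::string database, table, alias;

    /// Same rule as the moved helper: prefer the alias, otherwise fall back
    /// to "database.table.", and return "" when nothing is known.
    std::string getQualifiedNamePrefix() const
    {
        if (alias.empty() && table.empty())
            return "";
        return (!alias.empty() ? alias : database + '.' + table) + '.';
    }
};

int main()
{
    std::cout << DatabaseAndTableWithAlias{"db", "t", ""}.getQualifiedNamePrefix() << '\n';  // "db.t."
    std::cout << DatabaseAndTableWithAlias{"db", "t", "x"}.getQualifiedNamePrefix() << '\n'; // "x."
}
```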

View File

@ -33,6 +33,7 @@ namespace ErrorCodes
extern const int LOGICAL_ERROR;
extern const int QUERY_IS_TOO_LARGE;
extern const int INTO_OUTFILE_NOT_ALLOWED;
extern const int QUERY_WAS_CANCELLED;
}
@ -204,9 +205,15 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl(
auto interpreter = InterpreterFactory::get(ast, context, stage);
res = interpreter->execute();
/// Delayed initialization of query streams (required for KILL QUERY purposes)
if (process_list_entry)
(*process_list_entry)->setQueryStreams(res);
{
/// Query was killed before execution
if ((*process_list_entry)->isKilled())
throw Exception("Query '" + (*process_list_entry)->getInfo().client_info.current_query_id + "' is killed in pending state",
ErrorCodes::QUERY_WAS_CANCELLED);
else
(*process_list_entry)->setQueryStreams(res);
}
/// Hold element of process list till end of query execution.
res.process_list_entry = process_list_entry;

View File

@ -10,7 +10,7 @@
#include <Parsers/ASTSubquery.h>
#include <Interpreters/interpretSubquery.h>
#include <Interpreters/evaluateQualified.h>
#include <Interpreters/DatabaseAndTableWithAlias.h>
namespace DB
{
@ -69,10 +69,10 @@ std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery(
}
else
{
auto database_table = getDatabaseAndTableNameFromIdentifier(*table);
const auto & storage = context.getTable(database_table.first, database_table.second);
DatabaseAndTableWithAlias database_table(*table);
const auto & storage = context.getTable(database_table.database, database_table.table);
columns = storage->getColumns().ordinary;
select_query->replaceDatabaseAndTable(database_table.first, database_table.second);
select_query->replaceDatabaseAndTable(database_table.database, database_table.table);
}
select_expression_list->children.reserve(columns.size());

View File

@ -1,7 +1,6 @@
#pragma once
#include <Parsers/IAST.h>
#include <Parsers/ASTLiteral.h>
namespace DB

View File

@ -10,7 +10,7 @@ String ASTKillQueryQuery::getID() const
void ASTKillQueryQuery::formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "KILL QUERY ";
settings.ostr << (settings.hilite ? hilite_keyword : "") << "KILL QUERY";
formatOnCluster(settings);
settings.ostr << " WHERE " << (settings.hilite ? hilite_none : "");

View File

@ -2,8 +2,6 @@
#include <Parsers/queryToString.h>
#include <Parsers/CommonParsers.h>
#include <Parsers/ExpressionElementParsers.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTLiteral.h>
#include <Parsers/parseIdentifierOrStringLiteral.h>
#include <Common/typeid_cast.h>
#include <Interpreters/evaluateConstantExpression.h>

View File

@ -19,6 +19,19 @@ namespace ErrorCodes
extern const int NOT_IMPLEMENTED;
}
ASTPtr createDatabaseAndTableNode(const String & database_name, const String & table_name)
{
if (database_name.empty())
return ASTIdentifier::createSpecial(table_name);
ASTPtr database = ASTIdentifier::createSpecial(database_name);
ASTPtr table = ASTIdentifier::createSpecial(table_name);
ASTPtr database_and_table = ASTIdentifier::createSpecial(database_name + "." + table_name);
database_and_table->children = {database, table};
return database_and_table;
}
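createDatabaseAndTableNode() extracts the node-building logic that setDatabaseIfNeeded() and replaceDatabaseAndTable() previously duplicated (see the two hunks below). A standalone sketch of its shape with a minimal AST stand-in:

```cpp
#include <memory>
#include <string>
#include <vector>

/// Minimal AST stand-in for the sketch.
struct ASTIdentifier
{
    std::string name;
    std::vector<std::shared_ptr<ASTIdentifier>> children;

    static std::shared_ptr<ASTIdentifier> createSpecial(const std::string & n)
    {
        return std::make_shared<ASTIdentifier>(ASTIdentifier{n, {}});
    }
};

/// Same shape as the new helper: a single identifier for "table", or a
/// compound "database.table" identifier whose children are the two parts.
std::shared_ptr<ASTIdentifier> createDatabaseAndTableNode(const std::string & database, const std::string & table)
{
    if (database.empty())
        return ASTIdentifier::createSpecial(table);
    auto node = ASTIdentifier::createSpecial(database + "." + table);
    node->children = {ASTIdentifier::createSpecial(database), ASTIdentifier::createSpecial(table)};
    return node;
}

int main()
{
    auto node = createDatabaseAndTableNode("system", "one");
    return node->children.size() == 2 ? 0 : 1;
}
```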
ASTPtr ASTSelectQuery::clone() const
{
@ -242,46 +255,6 @@ static const ASTTablesInSelectQueryElement * getFirstTableJoin(const ASTSelectQu
}
ASTPtr ASTSelectQuery::database() const
{
const ASTTableExpression * table_expression = getFirstTableExpression(*this);
if (!table_expression || !table_expression->database_and_table_name || table_expression->database_and_table_name->children.empty())
return {};
if (table_expression->database_and_table_name->children.size() != 2)
throw Exception("Logical error: more than two components in table expression", ErrorCodes::LOGICAL_ERROR);
return table_expression->database_and_table_name->children[0];
}
ASTPtr ASTSelectQuery::table() const
{
const ASTTableExpression * table_expression = getFirstTableExpression(*this);
if (!table_expression)
return {};
if (table_expression->database_and_table_name)
{
if (table_expression->database_and_table_name->children.empty())
return table_expression->database_and_table_name;
if (table_expression->database_and_table_name->children.size() != 2)
throw Exception("Logical error: more than two components in table expression", ErrorCodes::LOGICAL_ERROR);
return table_expression->database_and_table_name->children[1];
}
if (table_expression->table_function)
return table_expression->table_function;
if (table_expression->subquery)
return static_cast<const ASTSubquery *>(table_expression->subquery.get())->children.at(0);
throw Exception("Logical error: incorrect table expression", ErrorCodes::LOGICAL_ERROR);
}
ASTPtr ASTSelectQuery::sample_size() const
{
const ASTTableExpression * table_expression = getFirstTableExpression(*this);
@ -363,12 +336,9 @@ void ASTSelectQuery::setDatabaseIfNeeded(const String & database_name)
}
else if (table_expression->database_and_table_name->children.empty())
{
ASTPtr database = ASTIdentifier::createSpecial(database_name);
ASTPtr table = table_expression->database_and_table_name;
const ASTIdentifier & identifier = static_cast<const ASTIdentifier &>(*table_expression->database_and_table_name);
const String & old_name = static_cast<ASTIdentifier &>(*table_expression->database_and_table_name).name;
table_expression->database_and_table_name = ASTIdentifier::createSpecial(database_name + "." + old_name);
table_expression->database_and_table_name->children = {database, table};
table_expression->database_and_table_name = createDatabaseAndTableNode(database_name, identifier.name);
}
else if (table_expression->database_and_table_name->children.size() != 2)
{
@ -396,19 +366,7 @@ void ASTSelectQuery::replaceDatabaseAndTable(const String & database_name, const
table_expression = table_expr.get();
}
ASTPtr table = ASTIdentifier::createSpecial(table_name);
if (!database_name.empty())
{
ASTPtr database = ASTIdentifier::createSpecial(database_name);
table_expression->database_and_table_name = ASTIdentifier::createSpecial(database_name + "." + table_name);
table_expression->database_and_table_name->children = {database, table};
}
else
{
table_expression->database_and_table_name = ASTIdentifier::createSpecial(table_name);
}
table_expression->database_and_table_name = createDatabaseAndTableNode(database_name, table_name);
}

View File

@ -39,8 +39,6 @@ public:
ASTPtr settings;
/// Compatibility with old parser of tables list. TODO remove
ASTPtr database() const;
ASTPtr table() const;
ASTPtr sample_size() const;
ASTPtr sample_offset() const;
ASTPtr array_join_expression_list() const;
@ -55,4 +53,7 @@ protected:
void formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
};
ASTPtr createDatabaseAndTableNode(const String & database_name, const String & table_name);
}

View File

@ -1,4 +1,3 @@
#include <Parsers/ASTIdentifier.h>
#include <Parsers/TablePropertiesQueriesASTs.h>
#include <Parsers/CommonParsers.h>

View File

@ -1,5 +1,4 @@
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTLiteral.h>
#include <Parsers/ASTSelectWithUnionQuery.h>
#include <Parsers/ASTInsertQuery.h>

View File

@ -4,7 +4,6 @@
#include <Parsers/ASTOptimizeQuery.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTLiteral.h>
#include <Common/typeid_cast.h>

Some files were not shown because too many files have changed in this diff.