Merge branch 'master' of github.com:yandex/ClickHouse

This commit is contained in:
Alexey Milovidov 2019-03-16 01:50:54 +03:00
commit a1dd8fb831
205 changed files with 1298 additions and 960 deletions

View File

@ -1,3 +1,90 @@
## ClickHouse release 19.4.0.49, 2019-03-09
### New Features
* Added full support for `Protobuf` format (input and output, nested data structures). [#4174](https://github.com/yandex/ClickHouse/pull/4174) [#4493](https://github.com/yandex/ClickHouse/pull/4493) ([Vitaly Baranov](https://github.com/vitlibar))
* Added bitmap functions with Roaring Bitmaps. [#4207](https://github.com/yandex/ClickHouse/pull/4207) ([Andy Yang](https://github.com/andyyzh)) [#4568](https://github.com/yandex/ClickHouse/pull/4568) ([Vitaly Baranov](https://github.com/vitlibar))
* Parquet format support [#4448](https://github.com/yandex/ClickHouse/pull/4448) ([proller](https://github.com/proller))
* N-gram distance was added for fuzzy string comparison. It is similar to q-gram metrics in R language. [#4466](https://github.com/yandex/ClickHouse/pull/4466) ([Danila Kutenin](https://github.com/danlark1))
* Combine rules for graphite rollup from dedicated aggregation and retention patterns. [#4426](https://github.com/yandex/ClickHouse/pull/4426) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
* Added the `max_execution_speed` and `max_execution_speed_bytes` settings to limit resource usage. Added the `min_execution_speed_bytes` setting to complement `min_execution_speed`. [#4430](https://github.com/yandex/ClickHouse/pull/4430) ([Winter Zhang](https://github.com/zhang2014))
* Implemented function `flatten` [#4555](https://github.com/yandex/ClickHouse/pull/4555) [#4409](https://github.com/yandex/ClickHouse/pull/4409) ([alexey-milovidov](https://github.com/alexey-milovidov), [kzon](https://github.com/kzon))
* Added functions `arrayEnumerateDenseRanked` and `arrayEnumerateUniqRanked` (like `arrayEnumerateUniq`, but they allow fine-tuning the array depth to look inside multidimensional arrays). [#4475](https://github.com/yandex/ClickHouse/pull/4475) ([proller](https://github.com/proller)) [#4601](https://github.com/yandex/ClickHouse/pull/4601) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Support for multiple JOINs with some restrictions: no asterisks, no complex aliases in ON/WHERE/GROUP BY/... [#4462](https://github.com/yandex/ClickHouse/pull/4462) ([Artem Zuikov](https://github.com/4ertus2))
### Bug Fixes
* This release also contains all bug fixes from 19.3 and 19.1.
* Fixed bug in data skipping indices: order of granules after INSERT was incorrect. [#4407](https://github.com/yandex/ClickHouse/pull/4407) ([Nikita Vasilev](https://github.com/nikvas0))
* Fixed the `set` index for `Nullable` and `LowCardinality` columns. Previously, a `set` index with a `Nullable` or `LowCardinality` column led to the error `Data type must be deserialized with multiple streams` while selecting. [#4594](https://github.com/yandex/ClickHouse/pull/4594) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Correctly set update_time on full `executable` dictionary update. [#4551](https://github.com/yandex/ClickHouse/pull/4551) ([Tema Novikov](https://github.com/temoon))
* Fixed the broken progress bar in 19.3. [#4627](https://github.com/yandex/ClickHouse/pull/4627) ([filimonov](https://github.com/filimonov))
* Fixed inconsistent MemoryTracker values when a memory region was shrunk, in certain cases. [#4619](https://github.com/yandex/ClickHouse/pull/4619) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed undefined behaviour in ThreadPool [#4612](https://github.com/yandex/ClickHouse/pull/4612) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a very rare crash with the message `mutex lock failed: Invalid argument` that could happen when a MergeTree table was dropped concurrently with a SELECT. [#4608](https://github.com/yandex/ClickHouse/pull/4608) ([Alex Zatelepin](https://github.com/ztlpn))
* Fixed ODBC driver compatibility with the `LowCardinality` data type. [#4381](https://github.com/yandex/ClickHouse/pull/4381) ([proller](https://github.com/proller))
* FreeBSD: Fixup for `AIOcontextPool: Found io_event with unknown id 0` error [#4438](https://github.com/yandex/ClickHouse/pull/4438) ([urgordeadbeef](https://github.com/urgordeadbeef))
* The `system.part_log` table was created regardless of configuration. [#4483](https://github.com/yandex/ClickHouse/pull/4483) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix undefined behaviour in `dictIsIn` function for cache dictionaries. [#4515](https://github.com/yandex/ClickHouse/pull/4515) ([alesapin](https://github.com/alesapin))
* Fixed a deadlock when a SELECT query locks the same table multiple times (e.g. from different threads or when executing multiple subqueries) and there is a concurrent DDL query. Fixes #4316 [#4535](https://github.com/yandex/ClickHouse/pull/4535) ([Alex Zatelepin](https://github.com/ztlpn))
* Updated expired links in the tutorial ... [#4545](https://github.com/yandex/ClickHouse/pull/4545) ([MeiK](https://github.com/MeiK-h))
* Disabled `compile_expressions` by default until we get our own `llvm` contrib and can test it with `clang` and `asan`. [#4579](https://github.com/yandex/ClickHouse/pull/4579) ([alesapin](https://github.com/alesapin))
* Prevent `std::terminate` when the `invalidate_query` for a `clickhouse` external dictionary source returns a wrong resultset (empty, more than one row, or more than one column). Fixed an issue where the `invalidate_query` was performed every five seconds regardless of the `lifetime`. [#4583](https://github.com/yandex/ClickHouse/pull/4583) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Avoid a deadlock when the `invalidate_query` for a dictionary with a `clickhouse` source involves the `system.dictionaries` table or the `Dictionaries` database (rare case). [#4599](https://github.com/yandex/ClickHouse/pull/4599) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixes for CROSS JOIN with an empty WHERE. [#4598](https://github.com/yandex/ClickHouse/pull/4598) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed segfault in the `replicate` function when a constant argument is passed. [#4603](https://github.com/yandex/ClickHouse/pull/4603) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed lambda functions with the predicate optimizer. [#4408](https://github.com/yandex/ClickHouse/pull/4408) ([Winter Zhang](https://github.com/zhang2014))
* Multiple fixes for multiple JOINs. [#4595](https://github.com/yandex/ClickHouse/pull/4595) ([Artem Zuikov](https://github.com/4ertus2))
### Improvements
* Support aliases in the JOIN ON section for right-table columns. [#4412](https://github.com/yandex/ClickHouse/pull/4412) ([Artem Zuikov](https://github.com/4ertus2))
* The result of multiple JOINs needs correct result names to be used in subselects. Flat aliases are replaced with source names in the result. [#4474](https://github.com/yandex/ClickHouse/pull/4474) ([Artem Zuikov](https://github.com/4ertus2))
* Improve push-down logic for joined statements. [#4387](https://github.com/yandex/ClickHouse/pull/4387) ([Ivan](https://github.com/abyss7))
### Performance Improvements
* Improved heuristics of "move to PREWHERE" optimization. [#4405](https://github.com/yandex/ClickHouse/pull/4405) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Use proper lookup tables that use HashTable's API for 8-bit and 16-bit keys (see the sketch after this list). [#4536](https://github.com/yandex/ClickHouse/pull/4536) ([Amos Bird](https://github.com/amosbird))
* Improved performance of string comparison. [#4564](https://github.com/yandex/ClickHouse/pull/4564) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Clean up the distributed DDL queue in a separate thread so that it doesn't slow down the main loop that processes distributed DDL tasks. [#4502](https://github.com/yandex/ClickHouse/pull/4502) ([Alex Zatelepin](https://github.com/ztlpn))
* When `min_bytes_to_use_direct_io` is set to 1, not every file was opened with O_DIRECT mode because the data size to read was sometimes underestimated by the size of one compressed block. [#4526](https://github.com/yandex/ClickHouse/pull/4526) ([alexey-milovidov](https://github.com/alexey-milovidov))
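
A minimal sketch of the lookup-table idea referenced above ([#4536](https://github.com/yandex/ClickHouse/pull/4536)), assuming nothing about the actual ClickHouse classes: for 8-bit (and analogously 16-bit) keys, a plain array indexed directly by the key can replace a hash table, provided it keeps a hash-table-shaped API so the calling code stays unchanged. All names below are hypothetical.

```cpp
#include <array>
#include <cstdint>
#include <optional>
#include <utility>

/// Illustrative sketch only, not ClickHouse code: a direct-addressed table for 8-bit keys
/// that exposes a hash-table-like find/emplace interface, so generic code written against
/// a hash table can use it without hashing or collision handling.
template <typename Mapped>
class FixedLookupTableUInt8
{
    std::array<std::optional<Mapped>, 256> cells;   /// one cell per possible key

public:
    /// Shaped like a hash map's emplace: returns a reference to the mapped value and an "inserted" flag.
    std::pair<Mapped &, bool> emplace(uint8_t key, Mapped value)
    {
        const bool inserted = !cells[key].has_value();
        if (inserted)
            cells[key] = std::move(value);
        return {*cells[key], inserted};
    }

    /// Shaped like a hash map's find: nullptr plays the role of "not present".
    Mapped * find(uint8_t key)
    {
        return cells[key].has_value() ? &*cells[key] : nullptr;
    }
};
```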
### Build/Testing/Packaging Improvements
* Added support for clang-9 [#4604](https://github.com/yandex/ClickHouse/pull/4604) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fix wrong `__asm__` instructions (again) [#4621](https://github.com/yandex/ClickHouse/pull/4621) ([Konstantin Podshumok](https://github.com/podshumok))
* Add ability to specify settings for `clickhouse-performance-test` from command line. [#4437](https://github.com/yandex/ClickHouse/pull/4437) ([alesapin](https://github.com/alesapin))
* Add dictionaries tests to integration tests. [#4477](https://github.com/yandex/ClickHouse/pull/4477) ([alesapin](https://github.com/alesapin))
* Added queries from the benchmark on the website to automated performance tests. [#4496](https://github.com/yandex/ClickHouse/pull/4496) ([alexey-milovidov](https://github.com/alexey-milovidov))
* `xxhash.h` does not exist in external lz4 because it is an implementation detail and its symbols are namespaced with `XXH_NAMESPACE` macro. When lz4 is external, xxHash has to be external too, and the dependents have to link to it. [#4495](https://github.com/yandex/ClickHouse/pull/4495) ([Orivej Desh](https://github.com/orivej))
* Fixed a case when the `quantileTiming` aggregate function could be called with a negative or floating-point argument (this fixes a fuzz test with the undefined behaviour sanitizer). [#4506](https://github.com/yandex/ClickHouse/pull/4506) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Spelling error correction. [#4531](https://github.com/yandex/ClickHouse/pull/4531) ([sdk2](https://github.com/sdk2))
* Fix compilation on Mac. [#4371](https://github.com/yandex/ClickHouse/pull/4371) ([Vitaly Baranov](https://github.com/vitlibar))
* Build fixes for FreeBSD and various unusual build configurations. [#4444](https://github.com/yandex/ClickHouse/pull/4444) ([proller](https://github.com/proller))
## ClickHouse release 19.3.7, 2019-03-12
### Bug fixes
* Fixed error in #3920. This error manifested itself as random cache corruption (messages `Unknown codec family code`, `Cannot seek through file`) and segfaults. This bug first appeared in version 19.1 and is present in versions up to 19.1.10 and 19.3.6. [#4623](https://github.com/yandex/ClickHouse/pull/4623) ([alexey-milovidov](https://github.com/alexey-milovidov))
## ClickHouse release 19.3.6, 2019-03-02
### Bug fixes
* When there are more than 1000 threads in a thread pool, `std::terminate` may happen on thread exit. [Azat Khuzhin](https://github.com/azat) [#4485](https://github.com/yandex/ClickHouse/pull/4485) [#4505](https://github.com/yandex/ClickHouse/pull/4505) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Now it's possible to create `ReplicatedMergeTree*` tables with comments on columns without defaults and tables with column codecs without comments and defaults. Also fixed comparison of codecs. [#4523](https://github.com/yandex/ClickHouse/pull/4523) ([alesapin](https://github.com/alesapin))
* Fixed crash on JOIN with array or tuple. [#4552](https://github.com/yandex/ClickHouse/pull/4552) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed crash in clickhouse-copier with the message `ThreadStatus not created`. [#4540](https://github.com/yandex/ClickHouse/pull/4540) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed hangup on server shutdown if distributed DDLs were used. [#4472](https://github.com/yandex/ClickHouse/pull/4472) ([Alex Zatelepin](https://github.com/ztlpn))
* Incorrect column numbers were printed in the error message about text format parsing for columns with numbers greater than 10. [#4484](https://github.com/yandex/ClickHouse/pull/4484) ([alexey-milovidov](https://github.com/alexey-milovidov))
### Build/Testing/Packaging Improvements
* Fixed build with AVX enabled. [#4527](https://github.com/yandex/ClickHouse/pull/4527) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Enable extended accounting and IO accounting based on a known-good kernel version instead of the kernel under which it was compiled. [#4541](https://github.com/yandex/ClickHouse/pull/4541) ([nvartolomei](https://github.com/nvartolomei))
* Allow skipping the setting of `core_dump.size_limit`, warning instead of throwing if setting the limit fails. [#4473](https://github.com/yandex/ClickHouse/pull/4473) ([proller](https://github.com/proller))
* Removed the `inline` tags of `void readBinary(...)` in `Field.cpp`. Also merged redundant `namespace DB` blocks. [#4530](https://github.com/yandex/ClickHouse/pull/4530) ([hcz](https://github.com/hczhcz))
## ClickHouse release 19.3.5, 2019-02-21
### Bug fixes
@ -67,7 +154,7 @@
* Fixed race condition when selecting from `system.tables` may give `table doesn't exist` error. [#4313](https://github.com/yandex/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov))
* `clickhouse-client` can segfault on exit while loading data for command line suggestions if it was run in interactive mode. [#4317](https://github.com/yandex/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a bug when the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/yandex/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn))
* Fixed error: if there is a database with the `Dictionary` engine, all dictionaries are forced to load at server startup, and if there is a dictionary with a ClickHouse source from localhost, the dictionary cannot load. [#4255](https://github.com/yandex/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed an error where the server tried to create system logs again at shutdown. [#4254](https://github.com/yandex/ClickHouse/pull/4254) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Correctly return the right type and properly handle locks in `joinGet` function. [#4153](https://github.com/yandex/ClickHouse/pull/4153) ([Amos Bird](https://github.com/amosbird))
* Added `sumMapWithOverflow` function. [#4151](https://github.com/yandex/ClickHouse/pull/4151) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
@ -92,7 +179,7 @@
* Added a script which creates a changelog from pull request descriptions. [#4169](https://github.com/yandex/ClickHouse/pull/4169) [#4173](https://github.com/yandex/ClickHouse/pull/4173) ([KochetovNicolai](https://github.com/KochetovNicolai))
* Added a Puppet module for ClickHouse. [#4182](https://github.com/yandex/ClickHouse/pull/4182) ([Maxim Fedotov](https://github.com/MaxFedotov))
* Added docs for a group of undocumented functions. [#4168](https://github.com/yandex/ClickHouse/pull/4168) ([Winter Zhang](https://github.com/zhang2014))
* ARM build fixes. [#4210](https://github.com/yandex/ClickHouse/pull/4210) [#4306](https://github.com/yandex/ClickHouse/pull/4306) [#4291](https://github.com/yandex/ClickHouse/pull/4291) ([proller](https://github.com/proller))
* Dictionary tests are now able to run from `ctest`. [#4189](https://github.com/yandex/ClickHouse/pull/4189) ([proller](https://github.com/proller))
* Now `/etc/ssl` is used as default directory with SSL certificates. [#4167](https://github.com/yandex/ClickHouse/pull/4167) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added checking of SSE and AVX instructions at start. [#4234](https://github.com/yandex/ClickHouse/pull/4234) ([Igr](https://github.com/igron99))
@ -123,6 +210,20 @@
* Improved server shutdown time and ALTERs waiting time. [#4372](https://github.com/yandex/ClickHouse/pull/4372) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added info about the `replicated_can_become_leader` setting to `system.replicas` and added logging if the replica won't try to become leader. [#4379](https://github.com/yandex/ClickHouse/pull/4379) ([Alex Zatelepin](https://github.com/ztlpn))
## ClickHouse release 19.1.14, 2019-03-14
* Fixed error `Column ... queried more than once` that may happen if the setting `asterisk_left_columns_only` is set to 1 in case of using `GLOBAL JOIN` with `SELECT *` (rare case). The issue does not exist in 19.3 and newer. [6bac7d8d](https://github.com/yandex/ClickHouse/pull/4692/commits/6bac7d8d11a9b0d6de0b32b53c47eb2f6f8e7062) ([Artem Zuikov](https://github.com/4ertus2))
## ClickHouse release 19.1.13, 2019-03-12
This release contains exactly the same set of patches as 19.3.7.
## ClickHouse release 19.1.10, 2019-03-03
This release contains exactly the same set of patches as 19.3.6.
## ClickHouse release 19.1.9, 2019-02-21
### Bug fixes
@ -140,7 +241,7 @@
### Bug Fixes
* Correctly return the right type and properly handle locks in `joinGet` function. [#4153](https://github.com/yandex/ClickHouse/pull/4153) ([Amos Bird](https://github.com/amosbird))
* Fixed an error where the server tried to create system logs again at shutdown. [#4254](https://github.com/yandex/ClickHouse/pull/4254) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed error: if there is a database with the `Dictionary` engine, all dictionaries are forced to load at server startup, and if there is a dictionary with a ClickHouse source from localhost, the dictionary cannot load. [#4255](https://github.com/yandex/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a bug when the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/yandex/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn))
* `clickhouse-client` can segfault on exit while loading data for command line suggestions if it was run in interactive mode. [#4317](https://github.com/yandex/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed race condition when selecting from `system.tables` may give `table doesn't exist` error. [#4313](https://github.com/yandex/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov))

contrib/librdkafka vendored

@ -1 +1 @@
Subproject commit 51ae5f5fd8b742e56f47a8bb0136344868818285
Subproject commit 73295a702cd1c85c11749ade500d713db7099cca

View File

@ -2,6 +2,8 @@
#ifndef _CONFIG_H_
#define _CONFIG_H_
#define ARCH "x86_64"
#define BUILT_WITH "GCC GXX PKGCONFIG OSXLD LIBDL PLUGINS ZLIB SSL SASL_CYRUS ZSTD HDRHISTOGRAM LZ4_EXT SNAPPY SOCKEM SASL_SCRAM CRC32C_HW"
#define CPU "generic"
#define WITHOUT_OPTIMIZATION 0
#define ENABLE_DEVEL 0

View File

@ -92,7 +92,7 @@ if (CLICKHOUSE_ONE_SHARED)
add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_PERFORMANCE_TEST_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_COMPILER_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES})
target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_PERFORMANCE_TEST_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_COMPILER_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK})
target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_PERFORMANCE_TEST_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_COMPILER_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE})
set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse)
set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse DEBUG_POSTFIX "")
endif()
if (CLICKHOUSE_SPLIT_BINARY)

View File

@ -704,7 +704,7 @@ private:
return true;
}
ASTInsertQuery * insert = typeid_cast<ASTInsertQuery *>(ast.get());
auto * insert = ast->as<ASTInsertQuery>();
if (insert && insert->data)
{
@ -799,14 +799,11 @@ private:
written_progress_chars = 0;
written_first_block = false;
const ASTSetQuery * set_query = typeid_cast<const ASTSetQuery *>(&*parsed_query);
const ASTUseQuery * use_query = typeid_cast<const ASTUseQuery *>(&*parsed_query);
/// INSERT query for which data transfer is needed (not an INSERT SELECT) is processed separately.
const ASTInsertQuery * insert = typeid_cast<const ASTInsertQuery *>(&*parsed_query);
connection->forceConnected();
if (insert && !insert->select)
/// INSERT query for which data transfer is needed (not an INSERT SELECT) is processed separately.
const auto * insert_query = parsed_query->as<ASTInsertQuery>();
if (insert_query && !insert_query->select)
processInsertQuery();
else
processOrdinaryQuery();
@ -814,7 +811,7 @@ private:
/// Do not change context (current DB, settings) in case of an exception.
if (!got_exception)
{
if (set_query)
if (const auto * set_query = parsed_query->as<ASTSetQuery>())
{
/// Save all changes in settings to avoid losing them if the connection is lost.
for (const auto & change : set_query->changes)
@ -826,7 +823,7 @@ private:
}
}
if (use_query)
if (const auto * use_query = parsed_query->as<ASTUseQuery>())
{
const String & new_database = use_query->database;
/// If the client initiates the reconnection, it takes the settings from the config.
@ -858,7 +855,7 @@ private:
/// Convert external tables to ExternalTableData and send them using the connection.
void sendExternalTables()
{
auto * select = typeid_cast<const ASTSelectWithUnionQuery *>(&*parsed_query);
const auto * select = parsed_query->as<ASTSelectWithUnionQuery>();
if (!select && !external_tables.empty())
throw Exception("External tables could be sent only with select query", ErrorCodes::BAD_ARGUMENTS);
@ -883,7 +880,7 @@ private:
void processInsertQuery()
{
/// Send part of query without data, because data will be sent separately.
const ASTInsertQuery & parsed_insert_query = typeid_cast<const ASTInsertQuery &>(*parsed_query);
const auto & parsed_insert_query = parsed_query->as<ASTInsertQuery &>();
String query_without_data = parsed_insert_query.data
? query.substr(0, parsed_insert_query.data - query.data())
: query;
@ -940,7 +937,7 @@ private:
void sendData(Block & sample, const ColumnsDescription & columns_description)
{
/// If INSERT data must be sent.
const ASTInsertQuery * parsed_insert_query = typeid_cast<const ASTInsertQuery *>(&*parsed_query);
const auto * parsed_insert_query = parsed_query->as<ASTInsertQuery>();
if (!parsed_insert_query)
return;
@ -965,7 +962,7 @@ private:
String current_format = insert_format;
/// Data format can be specified in the INSERT query.
if (ASTInsertQuery * insert = typeid_cast<ASTInsertQuery *>(&*parsed_query))
if (const auto * insert = parsed_query->as<ASTInsertQuery>())
{
if (!insert->format.empty())
current_format = insert->format;
@ -1231,12 +1228,14 @@ private:
String current_format = format;
/// The query can specify output format or output file.
if (ASTQueryWithOutput * query_with_output = dynamic_cast<ASTQueryWithOutput *>(&*parsed_query))
/// FIXME: try to prettify this cast using `as<>()`
if (const auto * query_with_output = dynamic_cast<const ASTQueryWithOutput *>(parsed_query.get()))
{
if (query_with_output->out_file != nullptr)
if (query_with_output->out_file)
{
const auto & out_file_node = typeid_cast<const ASTLiteral &>(*query_with_output->out_file);
const auto & out_file_node = query_with_output->out_file->as<ASTLiteral &>();
const auto & out_file = out_file_node.value.safeGet<std::string>();
out_file_buf.emplace(out_file, DBMS_DEFAULT_BUFFER_SIZE, O_WRONLY | O_EXCL | O_CREAT);
out_buf = &*out_file_buf;
@ -1248,7 +1247,7 @@ private:
{
if (has_vertical_output_suffix)
throw Exception("Output format already specified", ErrorCodes::CLIENT_OUTPUT_FORMAT_SPECIFIED);
const auto & id = typeid_cast<const ASTIdentifier &>(*query_with_output->format);
const auto & id = query_with_output->format->as<ASTIdentifier &>();
current_format = id.name;
}
if (query_with_output->settings_ast)

View File

@ -483,7 +483,7 @@ String DB::TaskShard::getHostNameExample() const
static bool isExtendedDefinitionStorage(const ASTPtr & storage_ast)
{
const ASTStorage & storage = typeid_cast<const ASTStorage &>(*storage_ast);
const auto & storage = storage_ast->as<ASTStorage &>();
return storage.partition_by || storage.order_by || storage.sample_by;
}
@ -491,8 +491,8 @@ static ASTPtr extractPartitionKey(const ASTPtr & storage_ast)
{
String storage_str = queryToString(storage_ast);
const ASTStorage & storage = typeid_cast<const ASTStorage &>(*storage_ast);
const ASTFunction & engine = typeid_cast<const ASTFunction &>(*storage.engine);
const auto & storage = storage_ast->as<ASTStorage &>();
const auto & engine = storage.engine->as<ASTFunction &>();
if (!endsWith(engine.name, "MergeTree"))
{
@ -501,7 +501,7 @@ static ASTPtr extractPartitionKey(const ASTPtr & storage_ast)
}
ASTPtr arguments_ast = engine.arguments->clone();
ASTs & arguments = typeid_cast<ASTExpressionList &>(*arguments_ast).children;
ASTs & arguments = arguments_ast->children;
if (isExtendedDefinitionStorage(storage_ast))
{
@ -1179,12 +1179,12 @@ protected:
/// Removes MATERIALIZED and ALIAS columns from create table query
static ASTPtr removeAliasColumnsFromCreateQuery(const ASTPtr & query_ast)
{
const ASTs & column_asts = typeid_cast<ASTCreateQuery &>(*query_ast).columns_list->columns->children;
const ASTs & column_asts = query_ast->as<ASTCreateQuery &>().columns_list->columns->children;
auto new_columns = std::make_shared<ASTExpressionList>();
for (const ASTPtr & column_ast : column_asts)
{
const ASTColumnDeclaration & column = typeid_cast<const ASTColumnDeclaration &>(*column_ast);
const auto & column = column_ast->as<ASTColumnDeclaration &>();
if (!column.default_specifier.empty())
{
@ -1197,12 +1197,11 @@ protected:
}
ASTPtr new_query_ast = query_ast->clone();
ASTCreateQuery & new_query = typeid_cast<ASTCreateQuery &>(*new_query_ast);
auto & new_query = new_query_ast->as<ASTCreateQuery &>();
auto new_columns_list = std::make_shared<ASTColumns>();
new_columns_list->set(new_columns_list->columns, new_columns);
new_columns_list->set(
new_columns_list->indices, typeid_cast<ASTCreateQuery &>(*query_ast).columns_list->indices->clone());
new_columns_list->set(new_columns_list->indices, query_ast->as<ASTCreateQuery>()->columns_list->indices->clone());
new_query.replace(new_query.columns_list, new_columns_list);
@ -1212,7 +1211,7 @@ protected:
/// Replaces ENGINE and table name in a create query
std::shared_ptr<ASTCreateQuery> rewriteCreateQueryStorage(const ASTPtr & create_query_ast, const DatabaseAndTableName & new_table, const ASTPtr & new_storage_ast)
{
ASTCreateQuery & create = typeid_cast<ASTCreateQuery &>(*create_query_ast);
const auto & create = create_query_ast->as<ASTCreateQuery &>();
auto res = std::make_shared<ASTCreateQuery>(create);
if (create.storage == nullptr || new_storage_ast == nullptr)
@ -1646,7 +1645,7 @@ protected:
/// Try create table (if not exists) on each shard
{
auto create_query_push_ast = rewriteCreateQueryStorage(task_shard.current_pull_table_create_query, task_table.table_push, task_table.engine_push_ast);
typeid_cast<ASTCreateQuery &>(*create_query_push_ast).if_not_exists = true;
create_query_push_ast->as<ASTCreateQuery &>().if_not_exists = true;
String query = queryToString(create_query_push_ast);
LOG_DEBUG(log, "Create destination tables. Query: " << query);
@ -1779,7 +1778,7 @@ protected:
void dropAndCreateLocalTable(const ASTPtr & create_ast)
{
auto & create = typeid_cast<ASTCreateQuery &>(*create_ast);
const auto & create = create_ast->as<ASTCreateQuery &>();
dropLocalTableIfExists({create.database, create.table});
InterpreterCreateQuery interpreter(create_ast, context);

View File

@ -12,6 +12,7 @@ namespace DB
namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int BAD_ARGUMENTS;
}
namespace

View File

@ -15,7 +15,7 @@ namespace ErrorCodes
Array getAggregateFunctionParametersArray(const ASTPtr & expression_list, const std::string & error_context)
{
const ASTs & parameters = typeid_cast<const ASTExpressionList &>(*expression_list).children;
const ASTs & parameters = expression_list->children;
if (parameters.empty())
throw Exception("Parameters list to aggregate functions cannot be empty", ErrorCodes::BAD_ARGUMENTS);
@ -23,14 +23,14 @@ Array getAggregateFunctionParametersArray(const ASTPtr & expression_list, const
for (size_t i = 0; i < parameters.size(); ++i)
{
const ASTLiteral * lit = typeid_cast<const ASTLiteral *>(parameters[i].get());
if (!lit)
const auto * literal = parameters[i]->as<ASTLiteral>();
if (!literal)
{
throw Exception("Parameters to aggregate functions must be literals" + (error_context.empty() ? "" : " (in " + error_context +")"),
ErrorCodes::PARAMETERS_TO_AGGREGATE_FUNCTIONS_MUST_BE_LITERALS);
}
params_row[i] = lit->value;
params_row[i] = literal->value;
}
return params_row;
@ -67,8 +67,7 @@ void getAggregateFunctionNameAndParametersArray(
parameters_str.data(), parameters_str.data() + parameters_str.size(),
"parameters of aggregate function in " + error_context, 0);
ASTExpressionList & args_list = typeid_cast<ASTExpressionList &>(*args_ast);
if (args_list.children.empty())
if (args_ast->children.empty())
throw Exception("Incorrect list of parameters to aggregate function "
+ aggregate_function_name, ErrorCodes::BAD_ARGUMENTS);

View File

@ -3,17 +3,12 @@
#include <Columns/IColumn.h>
#include <Columns/ColumnVector.h>
#include <Core/Defines.h>
#include <Common/typeid_cast.h>
namespace DB
{
namespace ErrorCodes
{
extern const int ILLEGAL_COLUMN;
extern const int NOT_IMPLEMENTED;
extern const int BAD_ARGUMENTS;
}
/** A column of array values.
* In memory, it is represented as one column of a nested type, whose size is equal to the sum of the sizes of all arrays,
* and as an array of offsets in it, which allows you to get each element.
@ -121,6 +116,13 @@ public:
callback(data);
}
bool structureEquals(const IColumn & rhs) const override
{
if (auto rhs_concrete = typeid_cast<const ColumnArray *>(&rhs))
return data->structureEquals(*rhs_concrete->data);
return false;
}
private:
ColumnPtr data;
ColumnPtr offsets;

View File

@ -3,6 +3,7 @@
#include <Core/Field.h>
#include <Common/Exception.h>
#include <Columns/IColumn.h>
#include <Common/typeid_cast.h>
namespace DB
@ -190,6 +191,13 @@ public:
callback(data);
}
bool structureEquals(const IColumn & rhs) const override
{
if (auto rhs_concrete = typeid_cast<const ColumnConst *>(&rhs))
return data->structureEquals(*rhs_concrete->data);
return false;
}
bool onlyNull() const override { return data->isNullAt(0); }
bool isColumnConst() const override { return true; }
bool isNumeric() const override { return data->isNumeric(); }

View File

@ -2,6 +2,7 @@
#include <cmath>
#include <Common/typeid_cast.h>
#include <Columns/IColumn.h>
#include <Columns/ColumnVectorHelper.h>
@ -133,6 +134,13 @@ public:
void gather(ColumnGathererStream & gatherer_stream) override;
bool structureEquals(const IColumn & rhs) const override
{
if (auto rhs_concrete = typeid_cast<const ColumnDecimal<T> *>(&rhs))
return scale == rhs_concrete->scale;
return false;
}
void insert(const T value) { data.push_back(value); }
Container & getData() { return data; }

View File

@ -2,6 +2,7 @@
#include <Common/PODArray.h>
#include <Common/memcmpSmall.h>
#include <Common/typeid_cast.h>
#include <Columns/IColumn.h>
#include <Columns/ColumnVectorHelper.h>
@ -134,6 +135,12 @@ public:
void getExtremes(Field & min, Field & max) const override;
bool structureEquals(const IColumn & rhs) const override
{
if (auto rhs_concrete = typeid_cast<const ColumnFixedString *>(&rhs))
return n == rhs_concrete->n;
return false;
}
bool canBeInsideNullable() const override { return true; }

View File

@ -5,6 +5,7 @@
#include <AggregateFunctions/AggregateFunctionCount.h>
#include "ColumnsNumber.h"
namespace DB
{
@ -132,6 +133,14 @@ public:
callback(dictionary.getColumnUniquePtr());
}
bool structureEquals(const IColumn & rhs) const override
{
if (auto rhs_low_cardinality = typeid_cast<const ColumnLowCardinality *>(&rhs))
return idx.getPositions()->structureEquals(*rhs_low_cardinality->idx.getPositions())
&& dictionary.getColumnUnique().structureEquals(rhs_low_cardinality->dictionary.getColumnUnique());
return false;
}
bool valuesHaveFixedSize() const override { return getDictionary().valuesHaveFixedSize(); }
bool isFixedAndContiguous() const override { return false; }
size_t sizeOfValueIfFixed() const override { return getDictionary().sizeOfValueIfFixed(); }

View File

@ -23,6 +23,11 @@ public:
MutableColumnPtr cloneDummy(size_t s_) const override { return ColumnNothing::create(s_); }
bool canBeInsideNullable() const override { return true; }
bool structureEquals(const IColumn & rhs) const override
{
return typeid(rhs) == typeid(ColumnNothing);
}
};
}

View File

@ -2,6 +2,8 @@
#include <Columns/IColumn.h>
#include <Columns/ColumnsNumber.h>
#include <Common/typeid_cast.h>
namespace DB
{
@ -89,6 +91,13 @@ public:
callback(null_map);
}
bool structureEquals(const IColumn & rhs) const override
{
if (auto rhs_nullable = typeid_cast<const ColumnNullable *>(&rhs))
return nested_column->structureEquals(*rhs_nullable->nested_column);
return false;
}
bool isColumnNullable() const override { return true; }
bool isFixedAndContiguous() const override { return false; }
bool valuesHaveFixedSize() const override { return nested_column->valuesHaveFixedSize(); }

View File

@ -231,6 +231,11 @@ public:
bool canBeInsideNullable() const override { return true; }
bool structureEquals(const IColumn & rhs) const override
{
return typeid(rhs) == typeid(ColumnString);
}
Chars & getChars() { return chars; }
const Chars & getChars() const { return chars; }

View File

@ -4,6 +4,7 @@
#include <IO/Operators.h>
#include <ext/map.h>
#include <ext/range.h>
#include <Common/typeid_cast.h>
namespace DB
@ -341,6 +342,23 @@ void ColumnTuple::forEachSubcolumn(ColumnCallback callback)
callback(column);
}
bool ColumnTuple::structureEquals(const IColumn & rhs) const
{
if (auto rhs_tuple = typeid_cast<const ColumnTuple *>(&rhs))
{
const size_t tuple_size = columns.size();
if (tuple_size != rhs_tuple->columns.size())
return false;
for (const auto i : ext::range(0, tuple_size))
if (!columns[i]->structureEquals(*rhs_tuple->columns[i]))
return false;
return true;
}
else
return false;
}
}

View File

@ -73,6 +73,7 @@ public:
size_t allocatedBytes() const override;
void protect() override;
void forEachSubcolumn(ColumnCallback callback) override;
bool structureEquals(const IColumn & rhs) const override;
size_t tupleSize() const { return columns.size(); }

View File

@ -95,6 +95,13 @@ public:
nested_column_nullable = ColumnNullable::create(column_holder, nested_null_mask);
}
bool structureEquals(const IColumn & rhs) const override
{
if (auto rhs_concrete = typeid_cast<const ColumnUnique *>(&rhs))
return column_holder->structureEquals(*rhs_concrete->column_holder);
return false;
}
const UInt64 * tryGetSavedHash() const override { return index.tryGetSavedHash(); }
UInt128 getHash() const override { return hash.getHash(*getRawColumnPtr()); }

View File

@ -251,6 +251,12 @@ public:
size_t sizeOfValueIfFixed() const override { return sizeof(T); }
StringRef getRawData() const override { return StringRef(reinterpret_cast<const char*>(data.data()), data.size()); }
bool structureEquals(const IColumn & rhs) const override
{
return typeid(rhs) == typeid(ColumnVector<T>);
}
/** More efficient methods of manipulation - to manipulate with data directly. */
Container & getData()
{

View File

@ -262,6 +262,13 @@ public:
using ColumnCallback = std::function<void(Ptr&)>;
virtual void forEachSubcolumn(ColumnCallback) {}
/// Columns have equal structure.
/// If true - you can use "compareAt", "insertFrom", etc. methods.
virtual bool structureEquals(const IColumn &) const
{
throw Exception("Method structureEquals is not supported for " + getName(), ErrorCodes::NOT_IMPLEMENTED);
}
MutablePtr mutate() const &&
{
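
A hedged sketch of how `structureEquals` is intended to be used, mirroring the GatherUtils changes later in this commit: check structural equality before moving data between columns instead of comparing `typeid`s of the concrete column classes. The helper below is hypothetical, not part of ClickHouse.

```cpp
#include <Columns/IColumn.h>

namespace DB
{

/// Hypothetical helper for illustration: append all rows of `source` to `target`
/// only when both columns have the same structure, so insertRangeFrom is safe.
bool tryAppendColumn(IColumn & target, const IColumn & source)
{
    if (!target.structureEquals(source))
        return false;   /// e.g. Array(UInt8) vs Array(String): nested layouts differ

    target.insertRangeFrom(source, 0, source.size());
    return true;
}

}
```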

View File

@ -0,0 +1,63 @@
#pragma once
#include <Common/typeid_cast.h>
namespace DB
{
/* This base class adds public methods:
* - Derived * as<Derived>()
* - const Derived * as<Derived>() const
* - Derived & as<Derived &>()
* - const Derived & as<Derived &>() const
*/
template <class Base>
class TypePromotion
{
private:
/// Need a helper-struct to fight the lack of the function-template partial specialization.
template <class T, bool is_const, bool is_ref = std::is_reference_v<T>>
struct CastHelper;
template <class T>
struct CastHelper<T, false, true>
{
auto & value(Base * ptr) { return typeid_cast<T>(*ptr); }
};
template <class T>
struct CastHelper<T, true, true>
{
auto & value(const Base * ptr) { return typeid_cast<std::add_lvalue_reference_t<std::add_const_t<std::remove_reference_t<T>>>>(*ptr); }
};
template <class T>
struct CastHelper<T, false, false>
{
auto * value(Base * ptr) { return typeid_cast<T *>(ptr); }
};
template <class T>
struct CastHelper<T, true, false>
{
auto * value(const Base * ptr) { return typeid_cast<std::add_const_t<T> *>(ptr); }
};
public:
template <class Derived>
auto as() -> std::invoke_result_t<decltype(&CastHelper<Derived, false>::value), CastHelper<Derived, false>, Base *>
{
// TODO: if we do downcast to base type, then just return |this|.
return CastHelper<Derived, false>().value(static_cast<Base *>(this));
}
template <class Derived>
auto as() const -> std::invoke_result_t<decltype(&CastHelper<Derived, true>::value), CastHelper<Derived, true>, const Base *>
{
// TODO: if we do downcast to base type, then just return |this|.
return CastHelper<Derived, true>().value(static_cast<const Base *>(this));
}
};
} // namespace DB
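
A hedged usage sketch of the two `as<>()` flavors this header provides, matching the call sites changed elsewhere in this commit and assuming `IAST` derives from `TypePromotion<IAST>` (which is what those call sites rely on). The function name is hypothetical.

```cpp
#include <Parsers/IAST.h>
#include <Parsers/ASTInsertQuery.h>

namespace DB
{

/// Hypothetical example: dispatching on the dynamic AST type via TypePromotion::as<>().
void describeQuery(const ASTPtr & parsed_query)
{
    /// Pointer form: returns nullptr when the dynamic type does not match.
    if (const auto * insert = parsed_query->as<ASTInsertQuery>())
    {
        /// Reference form: throws (via typeid_cast) on a mismatch, so it is only
        /// used here once the pointer form has already confirmed the type.
        const auto & insert_ref = parsed_query->as<ASTInsertQuery &>();
        bool has_inline_data = insert_ref.data != nullptr;   /// same field the client code checks
        (void)insert;
        (void)has_inline_data;
    }
}

}
```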

View File

@ -25,18 +25,32 @@ namespace DB
template <typename To, typename From>
std::enable_if_t<std::is_reference_v<To>, To> typeid_cast(From & from)
{
if (typeid(from) == typeid(To))
return static_cast<To>(from);
else
throw DB::Exception("Bad cast from type " + demangle(typeid(from).name()) + " to " + demangle(typeid(To).name()),
DB::ErrorCodes::BAD_CAST);
try
{
if (typeid(from) == typeid(To))
return static_cast<To>(from);
}
catch (const std::exception & e)
{
throw DB::Exception(e.what(), DB::ErrorCodes::BAD_CAST);
}
throw DB::Exception("Bad cast from type " + demangle(typeid(from).name()) + " to " + demangle(typeid(To).name()),
DB::ErrorCodes::BAD_CAST);
}
template <typename To, typename From>
To typeid_cast(From * from)
{
if (typeid(*from) == typeid(std::remove_pointer_t<To>))
return static_cast<To>(from);
else
return nullptr;
try
{
if (typeid(*from) == typeid(std::remove_pointer_t<To>))
return static_cast<To>(from);
else
return nullptr;
}
catch (const std::exception & e)
{
throw DB::Exception(e.what(), DB::ErrorCodes::BAD_CAST);
}
}
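
For clarity, a hedged sketch of the contract these two overloads keep (and that the `as<>()` wrapper above builds on): the pointer overload returns `nullptr` on a type mismatch, while the reference overload throws an exception with code `BAD_CAST`. The function name is hypothetical.

```cpp
#include <Common/typeid_cast.h>
#include <Parsers/IAST.h>
#include <Parsers/ASTInsertQuery.h>

/// Hypothetical example of the typeid_cast contract shown above.
void demonstrateTypeidCast(DB::IAST * ast)   /// suppose ast points to some non-INSERT query node
{
    /// Pointer overload: a type mismatch yields nullptr, no exception is thrown.
    auto * insert = typeid_cast<DB::ASTInsertQuery *>(ast);

    if (insert)
    {
        /// Reference overload: on a mismatch this would throw DB::Exception (BAD_CAST),
        /// so it should only be used when the dynamic type is already known.
        auto & insert_ref = typeid_cast<DB::ASTInsertQuery &>(*ast);
        (void)insert_ref;
    }
}
```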

View File

@ -144,7 +144,7 @@ void registerCodecDelta(CompressionCodecFactory & factory)
throw Exception("Delta codec must have 1 parameter, given " + std::to_string(arguments->children.size()), ErrorCodes::ILLEGAL_SYNTAX_FOR_CODEC_TYPE);
const auto children = arguments->children;
const ASTLiteral * literal = static_cast<const ASTLiteral *>(children[0].get());
const auto * literal = children[0]->as<ASTLiteral>();
size_t user_bytes_size = literal->value.safeGet<UInt64>();
if (user_bytes_size != 1 && user_bytes_size != 2 && user_bytes_size != 4 && user_bytes_size != 8)
throw Exception("Delta value for delta codec can be 1, 2, 4 or 8, given " + toString(user_bytes_size), ErrorCodes::ILLEGAL_CODEC_PARAMETER);

View File

@ -86,7 +86,7 @@ void registerCodecLZ4HC(CompressionCodecFactory & factory)
throw Exception("LZ4HC codec must have 1 parameter, given " + std::to_string(arguments->children.size()), ErrorCodes::ILLEGAL_SYNTAX_FOR_CODEC_TYPE);
const auto children = arguments->children;
const ASTLiteral * literal = static_cast<const ASTLiteral *>(children[0].get());
const auto * literal = children[0]->as<ASTLiteral>();
level = literal->value.safeGet<UInt64>();
}
@ -100,4 +100,3 @@ CompressionCodecLZ4HC::CompressionCodecLZ4HC(int level_)
}
}

View File

@ -73,7 +73,7 @@ void registerCodecZSTD(CompressionCodecFactory & factory)
throw Exception("ZSTD codec must have 1 parameter, given " + std::to_string(arguments->children.size()), ErrorCodes::ILLEGAL_SYNTAX_FOR_CODEC_TYPE);
const auto children = arguments->children;
const ASTLiteral * literal = static_cast<const ASTLiteral *>(children[0].get());
const auto * literal = children[0]->as<ASTLiteral>();
level = literal->value.safeGet<UInt64>();
if (level > ZSTD_maxCLevel())
throw Exception("ZSTD codec can't have level more that " + toString(ZSTD_maxCLevel()) + ", given " + toString(level), ErrorCodes::ILLEGAL_CODEC_PARAMETER);

View File

@ -56,15 +56,15 @@ CompressionCodecPtr CompressionCodecFactory::get(const std::vector<CodecNameWith
CompressionCodecPtr CompressionCodecFactory::get(const ASTPtr & ast, DataTypePtr column_type) const
{
if (const auto * func = typeid_cast<const ASTFunction *>(ast.get()))
if (const auto * func = ast->as<ASTFunction>())
{
Codecs codecs;
codecs.reserve(func->arguments->children.size());
for (const auto & inner_codec_ast : func->arguments->children)
{
if (const auto * family_name = typeid_cast<const ASTIdentifier *>(inner_codec_ast.get()))
if (const auto * family_name = inner_codec_ast->as<ASTIdentifier>())
codecs.emplace_back(getImpl(family_name->name, {}, column_type));
else if (const auto * ast_func = typeid_cast<const ASTFunction *>(inner_codec_ast.get()))
else if (const auto * ast_func = inner_codec_ast->as<ASTFunction>())
codecs.emplace_back(getImpl(ast_func->name, ast_func->arguments, column_type));
else
throw Exception("Unexpected AST element for compression codec", ErrorCodes::UNEXPECTED_AST_STRUCTURE);

View File

@ -1,14 +1,17 @@
#pragma once
#include <memory>
#include <Compression/CompressionInfo.h>
#include <Compression/ICompressionCodec.h>
#include <DataTypes/IDataType.h>
#include <Parsers/IAST_fwd.h>
#include <Common/IFactoryWithAliases.h>
#include <ext/singleton.h>
#include <functional>
#include <memory>
#include <optional>
#include <unordered_map>
#include <ext/singleton.h>
#include <DataTypes/IDataType.h>
#include <Common/IFactoryWithAliases.h>
#include <Compression/ICompressionCodec.h>
#include <Compression/CompressionInfo.h>
namespace DB
{
@ -19,10 +22,6 @@ using CompressionCodecPtr = std::shared_ptr<ICompressionCodec>;
using CodecNameWithLevel = std::pair<String, std::optional<int>>;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
/** Creates a codec object by name of compression algorithm family and parameters.
*/
class CompressionCodecFactory final : public ext::singleton<CompressionCodecFactory>

View File

@ -20,7 +20,7 @@ namespace ErrorCodes
InputStreamFromASTInsertQuery::InputStreamFromASTInsertQuery(
const ASTPtr & ast, ReadBuffer * input_buffer_tail_part, const Block & header, Context & context)
{
const ASTInsertQuery * ast_insert_query = dynamic_cast<const ASTInsertQuery *>(ast.get());
const auto * ast_insert_query = ast->as<ASTInsertQuery>();
if (!ast_insert_query)
throw Exception("Logical error: query requires data to insert, but it is not INSERT query", ErrorCodes::LOGICAL_ERROR);

View File

@ -340,30 +340,30 @@ static DataTypePtr create(const ASTPtr & arguments)
throw Exception("Data type AggregateFunction requires parameters: "
"name of aggregate function and list of data types for arguments", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
if (const ASTFunction * parametric = typeid_cast<const ASTFunction *>(arguments->children[0].get()))
if (const auto * parametric = arguments->children[0]->as<ASTFunction>())
{
if (parametric->parameters)
throw Exception("Unexpected level of parameters to aggregate function", ErrorCodes::SYNTAX_ERROR);
function_name = parametric->name;
const ASTs & parameters = typeid_cast<const ASTExpressionList &>(*parametric->arguments).children;
const ASTs & parameters = parametric->arguments->children;
params_row.resize(parameters.size());
for (size_t i = 0; i < parameters.size(); ++i)
{
const ASTLiteral * lit = typeid_cast<const ASTLiteral *>(parameters[i].get());
if (!lit)
const auto * literal = parameters[i]->as<ASTLiteral>();
if (!literal)
throw Exception("Parameters to aggregate functions must be literals",
ErrorCodes::PARAMETERS_TO_AGGREGATE_FUNCTIONS_MUST_BE_LITERALS);
params_row[i] = lit->value;
params_row[i] = literal->value;
}
}
else if (auto opt_name = getIdentifierName(arguments->children[0]))
{
function_name = *opt_name;
}
else if (typeid_cast<ASTLiteral *>(arguments->children[0].get()))
else if (arguments->children[0]->as<ASTLiteral>())
{
throw Exception("Aggregate function name for data type AggregateFunction must be passed as identifier (without quotes) or function",
ErrorCodes::BAD_ARGUMENTS);
@ -389,4 +389,3 @@ void registerDataTypeAggregateFunction(DataTypeFactory & factory)
}

View File

@ -186,7 +186,7 @@ static DataTypePtr create(const ASTPtr & arguments)
if (arguments->children.size() != 1)
throw Exception("DateTime data type can optionally have only one argument - time zone name", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
const ASTLiteral * arg = typeid_cast<const ASTLiteral *>(arguments->children[0].get());
const auto * arg = arguments->children[0]->as<ASTLiteral>();
if (!arg || arg->value.getType() != Field::Types::String)
throw Exception("Parameter for DateTime data type must be string literal", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);

View File

@ -357,7 +357,7 @@ static DataTypePtr create(const ASTPtr & arguments)
/// Children must be functions 'equals' with string literal as left argument and numeric literal as right argument.
for (const ASTPtr & child : arguments->children)
{
const ASTFunction * func = typeid_cast<const ASTFunction *>(child.get());
const auto * func = child->as<ASTFunction>();
if (!func
|| func->name != "equals"
|| func->parameters
@ -366,8 +366,8 @@ static DataTypePtr create(const ASTPtr & arguments)
throw Exception("Elements of Enum data type must be of form: 'name' = number, where name is string literal and number is an integer",
ErrorCodes::UNEXPECTED_AST_STRUCTURE);
const ASTLiteral * name_literal = typeid_cast<const ASTLiteral *>(func->arguments->children[0].get());
const ASTLiteral * value_literal = typeid_cast<const ASTLiteral *>(func->arguments->children[1].get());
const auto * name_literal = func->arguments->children[0]->as<ASTLiteral>();
const auto * value_literal = func->arguments->children[1]->as<ASTLiteral>();
if (!name_literal
|| !value_literal

View File

@ -32,19 +32,19 @@ DataTypePtr DataTypeFactory::get(const String & full_name) const
DataTypePtr DataTypeFactory::get(const ASTPtr & ast) const
{
if (const ASTFunction * func = typeid_cast<const ASTFunction *>(ast.get()))
if (const auto * func = ast->as<ASTFunction>())
{
if (func->parameters)
throw Exception("Data type cannot have multiple parenthesed parameters.", ErrorCodes::ILLEGAL_SYNTAX_FOR_DATA_TYPE);
return get(func->name, func->arguments);
}
if (const ASTIdentifier * ident = typeid_cast<const ASTIdentifier *>(ast.get()))
if (const auto * ident = ast->as<ASTIdentifier>())
{
return get(ident->name, {});
}
if (const ASTLiteral * lit = typeid_cast<const ASTLiteral *>(ast.get()))
if (const auto * lit = ast->as<ASTLiteral>())
{
if (lit->value.isNull())
return get("Null", {});

View File

@ -1,12 +1,15 @@
#pragma once
#include <memory>
#include <functional>
#include <unordered_map>
#include <Common/IFactoryWithAliases.h>
#include <DataTypes/IDataType.h>
#include <Parsers/IAST_fwd.h>
#include <Common/IFactoryWithAliases.h>
#include <ext/singleton.h>
#include <functional>
#include <memory>
#include <unordered_map>
namespace DB
{
@ -17,9 +20,6 @@ using DataTypePtr = std::shared_ptr<const IDataType>;
class IDataTypeDomain;
using DataTypeDomainPtr = std::unique_ptr<const IDataTypeDomain>;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
/** Creates a data type by name of data type family and parameters.
*/

View File

@ -273,7 +273,7 @@ static DataTypePtr create(const ASTPtr & arguments)
if (!arguments || arguments->children.size() != 1)
throw Exception("FixedString data type family must have exactly one argument - size in bytes", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
const ASTLiteral * argument = typeid_cast<const ASTLiteral *>(arguments->children[0].get());
const auto * argument = arguments->children[0]->as<ASTLiteral>();
if (!argument || argument->value.getType() != Field::Types::UInt64 || argument->value.get<UInt64>() == 0)
throw Exception("FixedString data type family must have a number (positive integer) as its argument", ErrorCodes::UNEXPECTED_AST_STRUCTURE);

View File

@ -531,7 +531,7 @@ static DataTypePtr create(const ASTPtr & arguments)
for (const ASTPtr & child : arguments->children)
{
if (const ASTNameTypePair * name_and_type_pair = typeid_cast<const ASTNameTypePair *>(child.get()))
if (const auto * name_and_type_pair = child->as<ASTNameTypePair>())
{
nested_types.emplace_back(DataTypeFactory::instance().get(name_and_type_pair->type));
names.emplace_back(name_and_type_pair->name);

View File

@ -208,8 +208,8 @@ static DataTypePtr create(const ASTPtr & arguments)
throw Exception("Decimal data type family must have exactly two arguments: precision and scale",
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
const ASTLiteral * precision = typeid_cast<const ASTLiteral *>(arguments->children[0].get());
const ASTLiteral * scale = typeid_cast<const ASTLiteral *>(arguments->children[1].get());
const auto * precision = arguments->children[0]->as<ASTLiteral>();
const auto * scale = arguments->children[1]->as<ASTLiteral>();
if (!precision || precision->value.getType() != Field::Types::UInt64 ||
!scale || !(scale->value.getType() == Field::Types::Int64 || scale->value.getType() == Field::Types::UInt64))
@ -228,7 +228,7 @@ static DataTypePtr createExect(const ASTPtr & arguments)
throw Exception("Decimal data type family must have exactly two arguments: precision and scale",
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
const ASTLiteral * scale_arg = typeid_cast<const ASTLiteral *>(arguments->children[0].get());
const auto * scale_arg = arguments->children[0]->as<ASTLiteral>();
if (!scale_arg || !(scale_arg->value.getType() == Field::Types::Int64 || scale_arg->value.getType() == Field::Types::UInt64))
throw Exception("Decimal data type family must have a two numbers as its arguments", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);

View File

@ -370,7 +370,7 @@ static ASTPtr getCreateQueryFromMetadata(const String & metadata_path, const Str
if (ast)
{
ASTCreateQuery & ast_create_query = typeid_cast<ASTCreateQuery &>(*ast);
auto & ast_create_query = ast->as<ASTCreateQuery &>();
ast_create_query.attach = false;
ast_create_query.database = database;
}
@ -415,8 +415,7 @@ void DatabaseOrdinary::renameTable(
ASTPtr ast = getQueryFromMetadata(detail::getTableMetadataPath(metadata_path, table_name));
if (!ast)
throw Exception("There is no metadata file for table " + table_name, ErrorCodes::FILE_DOESNT_EXIST);
ASTCreateQuery & ast_create_query = typeid_cast<ASTCreateQuery &>(*ast);
ast_create_query.table = to_table_name;
ast->as<ASTCreateQuery &>().table = to_table_name;
/// NOTE Non-atomic.
to_database_concrete->createTable(context, to_table_name, table, ast);
@ -534,7 +533,7 @@ void DatabaseOrdinary::alterTable(
ParserCreateQuery parser;
ASTPtr ast = parseQuery(parser, statement.data(), statement.data() + statement.size(), "in file " + table_metadata_path, 0);
ASTCreateQuery & ast_create_query = typeid_cast<ASTCreateQuery &>(*ast);
const auto & ast_create_query = ast->as<ASTCreateQuery &>();
ASTPtr new_columns = InterpreterCreateQuery::formatColumns(columns);
ASTPtr new_indices = InterpreterCreateQuery::formatIndices(indices);

View File

@ -26,7 +26,7 @@ namespace ErrorCodes
String getTableDefinitionFromCreateQuery(const ASTPtr & query)
{
ASTPtr query_clone = query->clone();
ASTCreateQuery & create = typeid_cast<ASTCreateQuery &>(*query_clone.get());
auto & create = query_clone->as<ASTCreateQuery &>();
/// We remove everything that is not needed for ATTACH from the query.
create.attach = true;
@ -62,7 +62,7 @@ std::pair<String, StoragePtr> createTableFromDefinition(
ParserCreateQuery parser;
ASTPtr ast = parseQuery(parser, definition.data(), definition.data() + definition.size(), description_for_error_message, 0);
ASTCreateQuery & ast_create_query = typeid_cast<ASTCreateQuery &>(*ast);
auto & ast_create_query = ast->as<ASTCreateQuery &>();
ast_create_query.attach = true;
ast_create_query.database = database_name;

View File

@ -1,16 +1,18 @@
#pragma once
#include <Core/Types.h>
#include <Core/NamesAndTypes.h>
#include <Core/Types.h>
#include <Interpreters/Context.h>
#include <Parsers/IAST_fwd.h>
#include <Storages/ColumnsDescription.h>
#include <Storages/IndicesDescription.h>
#include <ctime>
#include <memory>
#include <functional>
#include <Poco/File.h>
#include <Common/escapeForFileName.h>
#include <Common/ThreadPool.h>
#include <Interpreters/Context.h>
#include <Common/escapeForFileName.h>
#include <ctime>
#include <functional>
#include <memory>
namespace DB
@ -21,9 +23,6 @@ class Context;
class IStorage;
using StoragePtr = std::shared_ptr<IStorage>;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
struct Settings;
@ -157,4 +156,3 @@ using DatabasePtr = std::shared_ptr<IDatabase>;
using Databases = std::map<String, DatabasePtr>;
}

View File

@ -3,26 +3,29 @@
#include <ext/singleton.h>
#include "IDictionary.h"
namespace Poco
{
namespace Util
{
class AbstractConfiguration;
}
class Logger;
}
namespace DB
{
class Context;
class DictionaryFactory : public ext::singleton<DictionaryFactory>
{
public:
DictionaryPtr
create(const std::string & name, const Poco::Util::AbstractConfiguration & config, const std::string & config_prefix, Context & context)
const;
DictionaryPtr create(const std::string & name, const Poco::Util::AbstractConfiguration & config, const std::string & config_prefix, Context & context) const;
using Creator = std::function<DictionaryPtr(
const std::string & name,

View File

@ -33,6 +33,7 @@ namespace ErrorCodes
{
extern const int TOO_FEW_ARGUMENTS_FOR_FUNCTION;
extern const int LOGICAL_ERROR;
extern const int ILLEGAL_COLUMN;
}
@ -123,8 +124,8 @@ public:
}
else
throw Exception("Illegal column " + block.getByPosition(arguments[0]).column->getName()
+ " of argument of function " + getName(),
ErrorCodes::ILLEGAL_COLUMN);
+ " of argument of function " + getName(),
ErrorCodes::ILLEGAL_COLUMN);
}
};

View File

@ -36,6 +36,7 @@ namespace ErrorCodes
{
extern const int DICTIONARIES_WAS_NOT_LOADED;
extern const int BAD_ARGUMENTS;
extern const int ILLEGAL_COLUMN;
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
}

View File

@ -44,6 +44,8 @@ namespace ErrorCodes
extern const int UNKNOWN_TYPE;
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int TYPE_MISMATCH;
extern const int ILLEGAL_COLUMN;
extern const int BAD_ARGUMENTS;
}
/** Functions that use plug-ins (external) dictionaries.

View File

@ -20,6 +20,7 @@ namespace DB
namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int ILLEGAL_COLUMN;
}
enum ClusterOperation

View File

@ -48,6 +48,7 @@ namespace ErrorCodes
extern const int LOGICAL_ERROR;
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int NOT_IMPLEMENTED;
extern const int ILLEGAL_COLUMN;
}

View File

@ -21,6 +21,7 @@ namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int BAD_ARGUMENTS;
extern const int ILLEGAL_COLUMN;
}

View File

@ -38,6 +38,11 @@
namespace DB
{
namespace ErrorCodes
{
extern const int ILLEGAL_COLUMN;
}
struct HasParam
{
using ResultType = UInt8;

View File

@ -53,7 +53,7 @@ inline ALWAYS_INLINE void writeSlice(const StringSource::Slice & slice, FixedStr
/// Assuming same types of underlying columns for slice and sink if (ArraySlice, ArraySink) is (GenericArraySlice, GenericArraySink).
inline ALWAYS_INLINE void writeSlice(const GenericArraySlice & slice, GenericArraySink & sink)
{
if (typeid(slice.elements) == typeid(static_cast<const IColumn *>(&sink.elements)))
if (slice.elements->structureEquals(sink.elements))
{
sink.elements.insertRangeFrom(*slice.elements, slice.begin, slice.size);
sink.current_offset += slice.size;
@ -125,7 +125,7 @@ void writeSlice(const NumericValueSlice<T> & slice, NumericArraySink<U> & sink)
/// Assuming same types of underlying columns for slice and sink if (ArraySlice, ArraySink) is (GenericValueSlice, GenericArraySink).
inline ALWAYS_INLINE void writeSlice(const GenericValueSlice & slice, GenericArraySink & sink)
{
if (typeid(slice.elements) == typeid(static_cast<const IColumn *>(&sink.elements)))
if (slice.elements->structureEquals(sink.elements))
{
sink.elements.insertFrom(*slice.elements, slice.position);
++sink.current_offset;
@ -457,7 +457,7 @@ template <bool all>
bool sliceHas(const GenericArraySlice & first, const GenericArraySlice & second)
{
/// Generic arrays should have the same type in order to use column.compareAt(...)
if (typeid(*first.elements) != typeid(*second.elements))
if (!first.elements->structureEquals(*second.elements))
return false;
auto impl = sliceHasImpl<all, GenericArraySlice, GenericArraySlice, sliceEqualElements>;
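All three hunks above replace typeid-based comparisons in GatherUtils with a structureEquals check. In the two writeSlice overloads the removed condition applied typeid to the pointer slice.elements itself, which compares static pointer types rather than the dynamic column types; a virtual structure check avoids that pitfall and can also treat structurally identical columns as compatible. A minimal, self-contained sketch of the distinction, using a made-up ColumnSketch hierarchy rather than the real IColumn:

#include <cassert>
#include <typeinfo>

struct ColumnSketch
{
    virtual ~ColumnSketch() = default;
    /// Hypothetical check: true when both columns have the same layout, so ranges can be copied.
    virtual bool structureEquals(const ColumnSketch & rhs) const { return typeid(*this) == typeid(rhs); }
};

struct ColumnStringSketch : ColumnSketch {};

int main()
{
    ColumnStringSketch a, b;
    const ColumnSketch * pa = &a;
    const ColumnSketch * pb = &b;

    assert(typeid(pa) == typeid(pb));      /// compares the pointer types only - not the columns
    assert(typeid(*pa) == typeid(*pb));    /// compares the dynamic column types
    assert(pa->structureEquals(*pb));      /// virtual check, refinable per column type
}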

View File

@ -14,7 +14,15 @@
#include <Functions/GatherUtils/Slices.h>
#include <Functions/FunctionHelpers.h>
namespace DB::GatherUtils
namespace DB
{
namespace ErrorCodes
{
extern const int ILLEGAL_COLUMN;
}
namespace GatherUtils
{
template <typename T>
@ -660,3 +668,5 @@ struct NullableValueSource : public ValueSource
};
}
}

View File

@ -6,6 +6,11 @@
namespace DB
{
namespace ErrorCodes
{
extern const int BAD_ARGUMENTS;
}
ArraysDepths getArraysDepths(const ColumnsWithTypeAndName & arguments)
{
const size_t num_arguments = arguments.size();

View File

@ -22,6 +22,7 @@ namespace ErrorCodes
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int ILLEGAL_COLUMN;
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
extern const int BAD_ARGUMENTS;
}

View File

@ -7,6 +7,11 @@
namespace DB
{
namespace ErrorCodes
{
extern const int ILLEGAL_COLUMN;
}
/// flatten([[1, 2, 3], [4, 5]]) = [1, 2, 3, 4, 5] - flatten array.
class FunctionFlatten : public IFunction
{

View File

@ -53,10 +53,8 @@ public:
size_t rows = input_rows_count;
size_t num_args = arguments.size();
auto result_column = ColumnUInt8::create(rows);
DataTypePtr common_type = nullptr;
auto commonType = [& common_type, & block, & arguments]()
auto commonType = [&common_type, &block, &arguments]()
{
if (common_type == nullptr)
{
@ -106,6 +104,7 @@ public:
throw Exception{"Arguments for function " + getName() + " must be arrays.", ErrorCodes::LOGICAL_ERROR};
}
auto result_column = ColumnUInt8::create(rows);
auto result_column_ptr = typeid_cast<ColumnUInt8 *>(result_column.get());
GatherUtils::sliceHas(*sources[0], *sources[1], all, *result_column_ptr);

View File

@ -8,6 +8,11 @@
namespace DB
{
namespace ErrorCodes
{
extern const int ILLEGAL_COLUMN;
}
/** Creates an array, multiplying the column (the first argument) by the number of elements in the array (the second argument).
*/
class FunctionReplicate : public IFunction
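The comment above describes the semantics of replicate. A hedged, single-row illustration of "multiplying the column by the number of elements in the array" (the real function operates on whole columns and their offsets):

#include <cassert>
#include <vector>

/// Illustration for one row: repeat `value` once per element of `arr`.
template <typename T, typename U>
std::vector<T> replicateRow(const T & value, const std::vector<U> & arr)
{
    return std::vector<T>(arr.size(), value);
}

int main()
{
    assert((replicateRow(7, std::vector<int>{10, 20, 30}) == std::vector<int>{7, 7, 7}));
}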

View File

@ -17,6 +17,7 @@ namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
extern const int ILLEGAL_COLUMN;
}
/** timeSlots(StartTime, Duration)

View File

@ -84,11 +84,11 @@ SetPtr makeExplicitSet(
auto getTupleTypeFromAst = [&context](const ASTPtr & tuple_ast) -> DataTypePtr
{
auto ast_function = typeid_cast<const ASTFunction *>(tuple_ast.get());
if (ast_function && ast_function->name == "tuple" && !ast_function->arguments->children.empty())
const auto * func = tuple_ast->as<ASTFunction>();
if (func && func->name == "tuple" && !func->arguments->children.empty())
{
/// Won't parse all values of outer tuple.
auto element = ast_function->arguments->children.at(0);
auto element = func->arguments->children.at(0);
std::pair<Field, DataTypePtr> value_raw = evaluateConstantExpression(element, context);
return std::make_shared<DataTypeTuple>(DataTypes({value_raw.second}));
}
@ -122,7 +122,7 @@ SetPtr makeExplicitSet(
/// 1 in (1, 2); (1, 2) in ((1, 2), (3, 4)); etc.
else if (left_tuple_depth + 1 == right_tuple_depth)
{
ASTFunction * set_func = typeid_cast<ASTFunction *>(right_arg.get());
const auto * set_func = right_arg->as<ASTFunction>();
if (!set_func || set_func->name != "tuple")
throw Exception("Incorrect type of 2nd argument for function " + node->name
@ -263,11 +263,10 @@ void ActionsVisitor::visit(const ASTPtr & ast)
};
/// If the result of the calculation already exists in the block.
if ((typeid_cast<ASTFunction *>(ast.get()) || typeid_cast<ASTLiteral *>(ast.get()))
&& actions_stack.getSampleBlock().has(getColumnName()))
if ((ast->as<ASTFunction>() || ast->as<ASTLiteral>()) && actions_stack.getSampleBlock().has(getColumnName()))
return;
if (auto * identifier = typeid_cast<ASTIdentifier *>(ast.get()))
if (const auto * identifier = ast->as<ASTIdentifier>())
{
if (!only_consts && !actions_stack.getSampleBlock().has(getColumnName()))
{
@ -288,7 +287,7 @@ void ActionsVisitor::visit(const ASTPtr & ast)
actions_stack.addAction(ExpressionAction::addAliases({{identifier->name, identifier->alias}}));
}
}
else if (ASTFunction * node = typeid_cast<ASTFunction *>(ast.get()))
else if (const auto * node = ast->as<ASTFunction>())
{
if (node->name == "lambda")
throw Exception("Unexpected lambda expression", ErrorCodes::UNEXPECTED_EXPRESSION);
@ -383,14 +382,14 @@ void ActionsVisitor::visit(const ASTPtr & ast)
auto & child = node->arguments->children[arg];
auto child_column_name = child->getColumnName();
ASTFunction * lambda = typeid_cast<ASTFunction *>(child.get());
const auto * lambda = child->as<ASTFunction>();
if (lambda && lambda->name == "lambda")
{
/// If the argument is a lambda expression, just remember its approximate type.
if (lambda->arguments->children.size() != 2)
throw Exception("lambda requires two arguments", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
ASTFunction * lambda_args_tuple = typeid_cast<ASTFunction *>(lambda->arguments->children.at(0).get());
const auto * lambda_args_tuple = lambda->arguments->children.at(0)->as<ASTFunction>();
if (!lambda_args_tuple || lambda_args_tuple->name != "tuple")
throw Exception("First argument of lambda must be a tuple", ErrorCodes::TYPE_MISMATCH);
@ -454,12 +453,12 @@ void ActionsVisitor::visit(const ASTPtr & ast)
{
ASTPtr child = node->arguments->children[i];
ASTFunction * lambda = typeid_cast<ASTFunction *>(child.get());
const auto * lambda = child->as<ASTFunction>();
if (lambda && lambda->name == "lambda")
{
const DataTypeFunction * lambda_type = typeid_cast<const DataTypeFunction *>(argument_types[i].get());
ASTFunction * lambda_args_tuple = typeid_cast<ASTFunction *>(lambda->arguments->children.at(0).get());
ASTs lambda_arg_asts = lambda_args_tuple->arguments->children;
const auto * lambda_args_tuple = lambda->arguments->children.at(0)->as<ASTFunction>();
const ASTs & lambda_arg_asts = lambda_args_tuple->arguments->children;
NamesAndTypesList lambda_arguments;
for (size_t j = 0; j < lambda_arg_asts.size(); ++j)
@ -517,7 +516,7 @@ void ActionsVisitor::visit(const ASTPtr & ast)
ExpressionAction::applyFunction(function_builder, argument_names, getColumnName()));
}
}
else if (ASTLiteral * literal = typeid_cast<ASTLiteral *>(ast.get()))
else if (const auto * literal = ast->as<ASTLiteral>())
{
DataTypePtr type = applyVisitor(FieldToDataType(), literal->value);
@ -533,8 +532,7 @@ void ActionsVisitor::visit(const ASTPtr & ast)
for (auto & child : ast->children)
{
/// Do not go to FROM, JOIN, UNION.
if (!typeid_cast<const ASTTableExpression *>(child.get())
&& !typeid_cast<const ASTSelectQuery *>(child.get()))
if (!child->as<ASTTableExpression>() && !child->as<ASTSelectQuery>())
visit(child);
}
}
@ -550,8 +548,8 @@ SetPtr ActionsVisitor::makeSet(const ASTFunction * node, const Block & sample_bl
const ASTPtr & arg = args.children.at(1);
/// If it is a subquery or a table name for SELECT.
const ASTIdentifier * identifier = typeid_cast<const ASTIdentifier *>(arg.get());
if (typeid_cast<const ASTSubquery *>(arg.get()) || identifier)
const auto * identifier = arg->as<ASTIdentifier>();
if (arg->as<ASTSubquery>() || identifier)
{
auto set_key = PreparedSetKey::forSubquery(*arg);
if (prepared_sets.count(set_key))

View File

@ -1,15 +1,13 @@
#pragma once
#include <unordered_map>
#include <memory>
#include <Core/Types.h>
#include <Parsers/IAST_fwd.h>
#include <unordered_map>
namespace DB
{
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
using Aliases = std::unordered_map<String, ASTPtr>;
}

View File

@ -45,7 +45,7 @@ ExpressionActionsPtr AnalyzedJoin::createJoinedBlockActions(
if (!join)
return nullptr;
const auto & join_params = static_cast<const ASTTableJoin &>(*join->table_join);
const auto & join_params = join->table_join->as<ASTTableJoin &>();
/// Create custom expression list with join keys from right table.
auto expression_list = std::make_shared<ASTExpressionList>();

View File

@ -40,11 +40,10 @@ public:
static bool needChildVisit(ASTPtr & node, const ASTPtr & child)
{
if (typeid_cast<ASTTablesInSelectQuery *>(node.get()))
if (node->as<ASTTablesInSelectQuery>())
return false;
if (typeid_cast<ASTSubquery *>(child.get()) ||
typeid_cast<ASTSelectQuery *>(child.get()))
if (child->as<ASTSubquery>() || child->as<ASTSelectQuery>())
return false;
return true;
@ -52,9 +51,9 @@ public:
static void visit(ASTPtr & ast, Data & data)
{
if (auto * t = typeid_cast<ASTIdentifier *>(ast.get()))
if (const auto * t = ast->as<ASTIdentifier>())
visit(*t, ast, data);
if (auto * t = typeid_cast<ASTSelectQuery *>(ast.get()))
if (const auto * t = ast->as<ASTSelectQuery>())
visit(*t, ast, data);
}
@ -73,7 +72,7 @@ private:
const String nested_table_name = ast->getColumnName();
const String nested_table_alias = ast->getAliasOrColumnName();
if (nested_table_alias == nested_table_name && !isIdentifier(ast))
if (nested_table_alias == nested_table_name && !ast->as<ASTIdentifier>())
throw Exception("No alias for non-trivial value in ARRAY JOIN: " + nested_table_name, ErrorCodes::ALIAS_REQUIRED);
if (data.array_join_alias_to_name.count(nested_table_alias) || data.aliases.count(nested_table_alias))

View File

@ -98,7 +98,7 @@ void SelectStreamFactory::createForShard(
if (table_func_ptr)
{
auto table_function = static_cast<const ASTFunction *>(table_func_ptr.get());
const auto * table_function = table_func_ptr->as<ASTFunction>();
main_table_storage = TableFunctionFactory::instance().get(table_function->name, context)->execute(table_func_ptr, context);
}
else

View File

@ -892,8 +892,7 @@ StoragePtr Context::executeTableFunction(const ASTPtr & table_expression)
if (!res)
{
TableFunctionPtr table_function_ptr = TableFunctionFactory::instance().get(
typeid_cast<const ASTFunction *>(table_expression.get())->name, *this);
TableFunctionPtr table_function_ptr = TableFunctionFactory::instance().get(table_expression->as<ASTFunction>()->name, *this);
/// Run it and remember the result
res = table_function_ptr->execute(table_expression, *this);
@ -1203,6 +1202,8 @@ EmbeddedDictionaries & Context::getEmbeddedDictionariesImpl(const bool throw_on_
ExternalDictionaries & Context::getExternalDictionariesImpl(const bool throw_on_error) const
{
const auto & config = getConfigRef();
std::lock_guard lock(shared->external_dictionaries_mutex);
if (!shared->external_dictionaries)
@ -1214,6 +1215,7 @@ ExternalDictionaries & Context::getExternalDictionariesImpl(const bool throw_on_
shared->external_dictionaries.emplace(
std::move(config_repository),
config,
*this->global_context,
throw_on_error);
}

View File

@ -1,23 +1,24 @@
#pragma once
#include <Core/Block.h>
#include <Core/NamesAndTypes.h>
#include <Core/Types.h>
#include <Interpreters/ClientInfo.h>
#include <Interpreters/Settings.h>
#include <Parsers/IAST_fwd.h>
#include <Common/LRUCache.h>
#include <Common/MultiVersion.h>
#include <Common/ThreadPool.h>
#include <Common/config.h>
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <memory>
#include <mutex>
#include <thread>
#include <atomic>
#include <optional>
#include <Common/config.h>
#include <Common/MultiVersion.h>
#include <Common/LRUCache.h>
#include <Common/ThreadPool.h>
#include <Core/Types.h>
#include <Core/NamesAndTypes.h>
#include <Core/Block.h>
#include <Interpreters/Settings.h>
#include <Interpreters/ClientInfo.h>
#include <thread>
namespace Poco
@ -68,8 +69,6 @@ class IStorage;
class ITableFunction;
using StoragePtr = std::shared_ptr<IStorage>;
using Tables = std::map<String, StoragePtr>;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
class IBlockInputStream;
class IBlockOutputStream;
using BlockInputStreamPtr = std::shared_ptr<IBlockInputStream>;

View File

@ -36,13 +36,13 @@ struct JoinedTable
JoinedTable(ASTPtr table_element)
{
element = typeid_cast<ASTTablesInSelectQueryElement *>(table_element.get());
element = table_element->as<ASTTablesInSelectQueryElement>();
if (!element)
throw Exception("Logical error: TablesInSelectQueryElement expected", ErrorCodes::LOGICAL_ERROR);
if (element->table_join)
{
join = typeid_cast<ASTTableJoin *>(element->table_join.get());
join = element->table_join->as<ASTTableJoin>();
if (join->kind == ASTTableJoin::Kind::Cross ||
join->kind == ASTTableJoin::Kind::Comma)
{
@ -56,7 +56,7 @@ struct JoinedTable
if (element->table_expression)
{
auto & expr = typeid_cast<const ASTTableExpression &>(*element->table_expression);
const auto & expr = element->table_expression->as<ASTTableExpression &>();
table = DatabaseAndTableWithAlias(expr);
}
@ -105,7 +105,7 @@ public:
for (auto & child : node.arguments->children)
{
if (auto func = typeid_cast<const ASTFunction *>(child.get()))
if (const auto * func = child->as<ASTFunction>())
visit(*func, child);
else
ands_only = false;
@ -160,8 +160,8 @@ private:
if (node.arguments->children.size() != 2)
return false;
auto left = typeid_cast<const ASTIdentifier *>(node.arguments->children[0].get());
auto right = typeid_cast<const ASTIdentifier *>(node.arguments->children[1].get());
const auto * left = node.arguments->children[0]->as<ASTIdentifier>();
const auto * right = node.arguments->children[1]->as<ASTIdentifier>();
if (!left || !right)
return false;
@ -213,7 +213,7 @@ bool getTables(ASTSelectQuery & select, std::vector<JoinedTable> & joined_tables
if (!select.tables)
return false;
auto tables = typeid_cast<const ASTTablesInSelectQuery *>(select.tables.get());
const auto * tables = select.tables->as<ASTTablesInSelectQuery>();
if (!tables)
return false;
@ -232,7 +232,7 @@ bool getTables(ASTSelectQuery & select, std::vector<JoinedTable> & joined_tables
if (num_tables > 2 && t.has_using)
throw Exception("Multiple CROSS/COMMA JOIN do not support USING", ErrorCodes::NOT_IMPLEMENTED);
if (ASTTableJoin * join = t.join)
if (auto * join = t.join)
if (join->kind == ASTTableJoin::Kind::Comma)
++num_comma;
}
@ -244,7 +244,7 @@ bool getTables(ASTSelectQuery & select, std::vector<JoinedTable> & joined_tables
void CrossToInnerJoinMatcher::visit(ASTPtr & ast, Data & data)
{
if (auto * t = typeid_cast<ASTSelectQuery *>(ast.get()))
if (auto * t = ast->as<ASTSelectQuery>())
visit(*t, ast, data);
}
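The matcher above rewrites CROSS and comma joins into INNER joins when the WHERE clause is a conjunction of equality comparisons between identifiers of the joined tables (tracked by the ands_only flag and the two-identifier check in the hunks). The toy sketch below only illustrates the shape of that per-node equality check, with a made-up simplified Node type instead of the real AST classes:

#include <cassert>
#include <string>
#include <vector>

/// Simplified AST node: either a function call or an identifier.
struct Node
{
    std::string name;            /// "and", "equals", or an identifier name
    bool is_identifier = false;
    std::vector<Node> children;  /// function arguments
};

/// Mirrors the check in the hunk: equals(<identifier>, <identifier>) with exactly two arguments.
bool isEqualityOfTwoIdentifiers(const Node & node)
{
    return node.name == "equals"
        && node.children.size() == 2
        && node.children[0].is_identifier
        && node.children[1].is_identifier;
}

int main()
{
    Node eq{"equals", false, {{"t1.a", true, {}}, {"t2.b", true, {}}}};
    assert(isEqualityOfTwoIdentifiers(eq));
}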

View File

@ -449,6 +449,7 @@ void DDLWorker::parseQueryAndResolveHost(DDLTask & task)
task.query = parseQuery(parser_query, begin, end, description, 0);
}
// XXX: serious design flaw since `ASTQueryWithOnCluster` is not inherited from `IAST`!
if (!task.query || !(task.query_on_cluster = dynamic_cast<ASTQueryWithOnCluster *>(task.query.get())))
throw Exception("Received unknown DDL query", ErrorCodes::UNKNOWN_TYPE_OF_QUERY);
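As the XXX comment points out, ASTQueryWithOnCluster does not derive from IAST, so the new as<>() helper cannot replace this dynamic_cast: what happens here is a cross-cast from the IAST side of the concrete query object to its other, unrelated base. A minimal sketch of that mechanism with made-up class names (not the actual ClickHouse hierarchy):

#include <cassert>
#include <memory>
#include <string>

struct IASTSketch { virtual ~IASTSketch() = default; };
struct OnClusterMixin { virtual ~OnClusterMixin() = default; std::string cluster; };

/// A concrete query type inherits from both bases, the way ON CLUSTER queries do in ClickHouse.
struct AlterQuerySketch : IASTSketch, OnClusterMixin {};

int main()
{
    std::shared_ptr<IASTSketch> ast = std::make_shared<AlterQuerySketch>();
    /// Cross-cast from the IAST-like base to the unrelated OnCluster base of the same object.
    auto * on_cluster = dynamic_cast<OnClusterMixin *>(ast.get());
    assert(on_cluster != nullptr);
}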
@ -612,7 +613,7 @@ void DDLWorker::processTask(DDLTask & task, const ZooKeeperPtr & zookeeper)
String rewritten_query = queryToString(rewritten_ast);
LOG_DEBUG(log, "Executing query: " << rewritten_query);
if (auto ast_alter = dynamic_cast<const ASTAlterQuery *>(rewritten_ast.get()))
if (const auto * ast_alter = rewritten_ast->as<ASTAlterQuery>())
{
processTaskAlter(task, ast_alter, rewritten_query, task.entry_path, zookeeper);
}
@ -1211,7 +1212,8 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont
ASTPtr query_ptr = query_ptr_->clone();
ASTQueryWithOutput::resetOutputASTIfExist(*query_ptr);
auto query = dynamic_cast<ASTQueryWithOnCluster *>(query_ptr.get());
// XXX: serious design flaw since `ASTQueryWithOnCluster` is not inherited from `IAST`!
auto * query = dynamic_cast<ASTQueryWithOnCluster *>(query_ptr.get());
if (!query)
{
throw Exception("Distributed execution is not supported for such DDL queries", ErrorCodes::NOT_IMPLEMENTED);
@ -1220,7 +1222,7 @@ BlockIO executeDDLQueryOnCluster(const ASTPtr & query_ptr_, const Context & cont
if (!context.getSettingsRef().allow_distributed_ddl)
throw Exception("Distributed DDL queries are prohibited for the user", ErrorCodes::QUERY_IS_PROHIBITED);
if (auto query_alter = dynamic_cast<const ASTAlterQuery *>(query_ptr.get()))
if (const auto * query_alter = query_ptr->as<ASTAlterQuery>())
{
for (const auto & command : query_alter->command_list->commands)
{

View File

@ -27,7 +27,7 @@ DatabaseAndTableWithAlias::DatabaseAndTableWithAlias(const ASTIdentifier & ident
DatabaseAndTableWithAlias::DatabaseAndTableWithAlias(const ASTPtr & node, const String & current_database)
{
const auto * identifier = typeid_cast<const ASTIdentifier *>(node.get());
const auto * identifier = node->as<ASTIdentifier>();
if (!identifier)
throw Exception("Logical error: identifier expected", ErrorCodes::LOGICAL_ERROR);
@ -78,10 +78,10 @@ std::vector<const ASTTableExpression *> getSelectTablesExpression(const ASTSelec
for (const auto & child : select_query.tables->children)
{
ASTTablesInSelectQueryElement * tables_element = static_cast<ASTTablesInSelectQueryElement *>(child.get());
const auto * tables_element = child->as<ASTTablesInSelectQueryElement>();
if (tables_element->table_expression)
tables_expression.emplace_back(static_cast<const ASTTableExpression *>(tables_element->table_expression.get()));
tables_expression.emplace_back(tables_element->table_expression->as<ASTTableExpression>());
}
return tables_expression;
@ -92,17 +92,16 @@ static const ASTTableExpression * getTableExpression(const ASTSelectQuery & sele
if (!select.tables)
return {};
ASTTablesInSelectQuery & tables_in_select_query = static_cast<ASTTablesInSelectQuery &>(*select.tables);
const auto & tables_in_select_query = select.tables->as<ASTTablesInSelectQuery &>();
if (tables_in_select_query.children.size() <= table_number)
return {};
ASTTablesInSelectQueryElement & tables_element =
static_cast<ASTTablesInSelectQueryElement &>(*tables_in_select_query.children[table_number]);
const auto & tables_element = tables_in_select_query.children[table_number]->as<ASTTablesInSelectQueryElement &>();
if (!tables_element.table_expression)
return {};
return static_cast<const ASTTableExpression *>(tables_element.table_expression.get());
return tables_element.table_expression->as<ASTTableExpression>();
}
std::vector<DatabaseAndTableWithAlias> getDatabaseAndTables(const ASTSelectQuery & select_query, const String & current_database)
@ -125,7 +124,7 @@ std::optional<DatabaseAndTableWithAlias> getDatabaseAndTable(const ASTSelectQuer
return {};
ASTPtr database_and_table_name = table_expression->database_and_table_name;
if (!database_and_table_name || !isIdentifier(database_and_table_name))
if (!database_and_table_name || !database_and_table_name->as<ASTIdentifier>())
return {};
return DatabaseAndTableWithAlias(database_and_table_name);
@ -142,7 +141,7 @@ ASTPtr extractTableExpression(const ASTSelectQuery & select, size_t table_number
return table_expression->table_function;
if (table_expression->subquery)
return static_cast<const ASTSubquery *>(table_expression->subquery.get())->children[0];
return table_expression->subquery->children[0];
}
return nullptr;

View File

@ -1,18 +1,16 @@
#pragma once
#include <Core/Names.h>
#include <Core/Types.h>
#include <Parsers/IAST_fwd.h>
#include <memory>
#include <optional>
#include <Core/Types.h>
#include <Core/Names.h>
namespace DB
{
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
class ASTSelectQuery;
class ASTIdentifier;
struct ASTTableExpression;

View File

@ -41,19 +41,17 @@ static ASTPtr addTypeConversion(std::unique_ptr<ASTLiteral> && ast, const String
bool ExecuteScalarSubqueriesMatcher::needChildVisit(ASTPtr & node, const ASTPtr & child)
{
/// Processed
if (typeid_cast<ASTSubquery *>(node.get()) ||
typeid_cast<ASTFunction *>(node.get()))
if (node->as<ASTSubquery>() || node->as<ASTFunction>())
return false;
/// Don't descend into subqueries in FROM section
if (typeid_cast<ASTTableExpression *>(node.get()))
if (node->as<ASTTableExpression>())
return false;
if (typeid_cast<ASTSelectQuery *>(node.get()))
if (node->as<ASTSelectQuery>())
{
/// Do not go to FROM, JOIN, UNION.
if (typeid_cast<ASTTableExpression *>(child.get()) ||
typeid_cast<ASTSelectQuery *>(child.get()))
if (child->as<ASTTableExpression>() || child->as<ASTSelectQuery>())
return false;
}
@ -62,9 +60,9 @@ bool ExecuteScalarSubqueriesMatcher::needChildVisit(ASTPtr & node, const ASTPtr
void ExecuteScalarSubqueriesMatcher::visit(ASTPtr & ast, Data & data)
{
if (auto * t = typeid_cast<ASTSubquery *>(ast.get()))
if (const auto * t = ast->as<ASTSubquery>())
visit(*t, ast, data);
if (auto * t = typeid_cast<ASTFunction *>(ast.get()))
if (const auto * t = ast->as<ASTFunction>())
visit(*t, ast, data);
}
@ -147,7 +145,7 @@ void ExecuteScalarSubqueriesMatcher::visit(const ASTFunction & func, ASTPtr & as
out.push_back(&child);
else
for (size_t i = 0, size = func.arguments->children.size(); i < size; ++i)
if (i != 1 || !typeid_cast<ASTSubquery *>(func.arguments->children[i].get()))
if (i != 1 || !func.arguments->children[i]->as<ASTSubquery>())
out.push_back(&func.arguments->children[i]);
}
}

View File

@ -414,7 +414,7 @@ void ExpressionAction::execute(Block & block, bool dry_run) const
any_array = typeid_cast<const ColumnArray *>(&*any_array_ptr);
}
else if (array_join_is_left && !unaligned_array_join)
else if (array_join_is_left)
{
for (const auto & name : array_joined_columns)
{

View File

@ -90,8 +90,6 @@ ExpressionAnalyzer::ExpressionAnalyzer(
storage = syntax->storage;
rewrite_subqueries = syntax->rewrite_subqueries;
select_query = typeid_cast<ASTSelectQuery *>(query.get());
if (!additional_source_columns.empty())
{
source_columns.insert(source_columns.end(), additional_source_columns.begin(), additional_source_columns.end());
@ -130,6 +128,8 @@ void ExpressionAnalyzer::analyzeAggregation()
* Everything below (compiling temporary ExpressionActions) - only for the purpose of query analysis (type output).
*/
auto * select_query = query->as<ASTSelectQuery>();
if (select_query && (select_query->group_expression_list || select_query->having_expression))
has_aggregation = true;
@ -149,7 +149,7 @@ void ExpressionAnalyzer::analyzeAggregation()
const ASTTablesInSelectQueryElement * join = select_query->join();
if (join)
{
const auto table_join = static_cast<const ASTTableJoin &>(*join->table_join);
const auto & table_join = join->table_join->as<ASTTableJoin &>();
if (table_join.using_expression_list)
getRootActions(table_join.using_expression_list, true, temp_actions);
if (table_join.on_expression)
@ -250,6 +250,8 @@ void ExpressionAnalyzer::initGlobalSubqueriesAndExternalTables()
void ExpressionAnalyzer::makeSetsForIndex()
{
const auto * select_query = query->as<ASTSelectQuery>();
if (storage && select_query && storage->supportsIndexForIn())
{
if (select_query->where_expression)
@ -288,18 +290,18 @@ void ExpressionAnalyzer::makeSetsForIndexImpl(const ASTPtr & node)
for (auto & child : node->children)
{
/// Don't descend into subqueries.
if (typeid_cast<ASTSubquery *>(child.get()))
if (child->as<ASTSubquery>())
continue;
/// Don't descend into lambda functions
const ASTFunction * func = typeid_cast<const ASTFunction *>(child.get());
const auto * func = child->as<ASTFunction>();
if (func && func->name == "lambda")
continue;
makeSetsForIndexImpl(child);
}
const ASTFunction * func = typeid_cast<const ASTFunction *>(node.get());
const auto * func = node->as<ASTFunction>();
if (func && functionIsInOperator(func->name))
{
const IAST & args = *func->arguments;
@ -307,7 +309,7 @@ void ExpressionAnalyzer::makeSetsForIndexImpl(const ASTPtr & node)
if (storage && storage->mayBenefitFromIndexForIn(args.children.at(0), context))
{
const ASTPtr & arg = args.children.at(1);
if (typeid_cast<ASTSubquery *>(arg.get()) || isIdentifier(arg))
if (arg->as<ASTSubquery>() || arg->as<ASTIdentifier>())
{
if (settings.use_index_for_in_with_subqueries)
tryMakeSetForIndexFromSubquery(arg);
@ -365,6 +367,8 @@ void ExpressionAnalyzer::getActionsFromJoinKeys(const ASTTableJoin & table_join,
void ExpressionAnalyzer::getAggregates(const ASTPtr & ast, ExpressionActionsPtr & actions)
{
const auto * select_query = query->as<ASTSelectQuery>();
/// There cannot be aggregate functions inside WHERE and PREWHERE.
if (select_query && (ast.get() == select_query->where_expression.get() || ast.get() == select_query->prewhere_expression.get()))
{
@ -379,7 +383,7 @@ void ExpressionAnalyzer::getAggregates(const ASTPtr & ast, ExpressionActionsPtr
return;
}
const ASTFunction * node = typeid_cast<const ASTFunction *>(ast.get());
const auto * node = ast->as<ASTFunction>();
if (node && AggregateFunctionFactory::instance().isAggregateFunctionName(node->name))
{
has_aggregation = true;
@ -414,8 +418,7 @@ void ExpressionAnalyzer::getAggregates(const ASTPtr & ast, ExpressionActionsPtr
else
{
for (const auto & child : ast->children)
if (!typeid_cast<const ASTSubquery *>(child.get())
&& !typeid_cast<const ASTSelectQuery *>(child.get()))
if (!child->as<ASTSubquery>() && !child->as<ASTSelectQuery>())
getAggregates(child, actions);
}
}
@ -423,21 +426,22 @@ void ExpressionAnalyzer::getAggregates(const ASTPtr & ast, ExpressionActionsPtr
void ExpressionAnalyzer::assertNoAggregates(const ASTPtr & ast, const char * description)
{
const ASTFunction * node = typeid_cast<const ASTFunction *>(ast.get());
const auto * node = ast->as<ASTFunction>();
if (node && AggregateFunctionFactory::instance().isAggregateFunctionName(node->name))
throw Exception("Aggregate function " + node->getColumnName()
+ " is found " + String(description) + " in query", ErrorCodes::ILLEGAL_AGGREGATION);
for (const auto & child : ast->children)
if (!typeid_cast<const ASTSubquery *>(child.get())
&& !typeid_cast<const ASTSelectQuery *>(child.get()))
if (!child->as<ASTSubquery>() && !child->as<ASTSelectQuery>())
assertNoAggregates(child, description);
}
void ExpressionAnalyzer::assertSelect() const
{
const auto * select_query = query->as<ASTSelectQuery>();
if (!select_query)
throw Exception("Not a select query", ErrorCodes::LOGICAL_ERROR);
}
@ -475,6 +479,8 @@ void ExpressionAnalyzer::addMultipleArrayJoinAction(ExpressionActionsPtr & actio
bool ExpressionAnalyzer::appendArrayJoin(ExpressionActionsChain & chain, bool only_types)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertSelect();
bool is_array_join_left;
@ -520,6 +526,8 @@ static void appendRequiredColumns(NameSet & required_columns, const Block & samp
bool ExpressionAnalyzer::appendJoin(ExpressionActionsChain & chain, bool only_types)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertSelect();
if (!select_query->join())
@ -528,8 +536,8 @@ bool ExpressionAnalyzer::appendJoin(ExpressionActionsChain & chain, bool only_ty
initChain(chain, source_columns);
ExpressionActionsChain::Step & step = chain.steps.back();
const auto & join_element = static_cast<const ASTTablesInSelectQueryElement &>(*select_query->join());
auto & join_params = static_cast<ASTTableJoin &>(*join_element.table_join);
const auto & join_element = select_query->join()->as<ASTTablesInSelectQueryElement &>();
auto & join_params = join_element.table_join->as<ASTTableJoin &>();
if (join_params.strictness == ASTTableJoin::Strictness::Unspecified && join_params.kind != ASTTableJoin::Kind::Cross)
{
@ -541,7 +549,7 @@ bool ExpressionAnalyzer::appendJoin(ExpressionActionsChain & chain, bool only_ty
throw Exception("Expected ANY or ALL in JOIN section, because setting (join_default_strictness) is empty", DB::ErrorCodes::EXPECTED_ALL_OR_ANY);
}
const auto & table_to_join = static_cast<const ASTTableExpression &>(*join_element.table_expression);
const auto & table_to_join = join_element.table_expression->as<ASTTableExpression &>();
getActionsFromJoinKeys(join_params, only_types, step.actions);
@ -559,7 +567,7 @@ bool ExpressionAnalyzer::appendJoin(ExpressionActionsChain & chain, bool only_ty
if (table)
{
StorageJoin * storage_join = dynamic_cast<StorageJoin *>(table.get());
auto * storage_join = dynamic_cast<StorageJoin *>(table.get());
if (storage_join)
{
@ -624,6 +632,8 @@ bool ExpressionAnalyzer::appendJoin(ExpressionActionsChain & chain, bool only_ty
bool ExpressionAnalyzer::appendPrewhere(
ExpressionActionsChain & chain, bool only_types, const Names & additional_required_columns)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertSelect();
if (!select_query->prewhere_expression)
@ -697,6 +707,8 @@ bool ExpressionAnalyzer::appendPrewhere(
bool ExpressionAnalyzer::appendWhere(ExpressionActionsChain & chain, bool only_types)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertSelect();
if (!select_query->where_expression)
@ -715,6 +727,8 @@ bool ExpressionAnalyzer::appendWhere(ExpressionActionsChain & chain, bool only_t
bool ExpressionAnalyzer::appendGroupBy(ExpressionActionsChain & chain, bool only_types)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertAggregation();
if (!select_query->group_expression_list)
@ -735,6 +749,8 @@ bool ExpressionAnalyzer::appendGroupBy(ExpressionActionsChain & chain, bool only
void ExpressionAnalyzer::appendAggregateFunctionsArguments(ExpressionActionsChain & chain, bool only_types)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertAggregation();
initChain(chain, source_columns);
@ -759,6 +775,8 @@ void ExpressionAnalyzer::appendAggregateFunctionsArguments(ExpressionActionsChai
bool ExpressionAnalyzer::appendHaving(ExpressionActionsChain & chain, bool only_types)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertAggregation();
if (!select_query->having_expression)
@ -775,6 +793,8 @@ bool ExpressionAnalyzer::appendHaving(ExpressionActionsChain & chain, bool only_
void ExpressionAnalyzer::appendSelect(ExpressionActionsChain & chain, bool only_types)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertSelect();
initChain(chain, aggregated_columns);
@ -788,6 +808,8 @@ void ExpressionAnalyzer::appendSelect(ExpressionActionsChain & chain, bool only_
bool ExpressionAnalyzer::appendOrderBy(ExpressionActionsChain & chain, bool only_types)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertSelect();
if (!select_query->order_expression_list)
@ -801,7 +823,7 @@ bool ExpressionAnalyzer::appendOrderBy(ExpressionActionsChain & chain, bool only
ASTs asts = select_query->order_expression_list->children;
for (size_t i = 0; i < asts.size(); ++i)
{
ASTOrderByElement * ast = typeid_cast<ASTOrderByElement *>(asts[i].get());
const auto * ast = asts[i]->as<ASTOrderByElement>();
if (!ast || ast->children.size() < 1)
throw Exception("Bad order expression AST", ErrorCodes::UNKNOWN_TYPE_OF_AST_NODE);
ASTPtr order_expression = ast->children.at(0);
@ -813,6 +835,8 @@ bool ExpressionAnalyzer::appendOrderBy(ExpressionActionsChain & chain, bool only
bool ExpressionAnalyzer::appendLimitBy(ExpressionActionsChain & chain, bool only_types)
{
const auto * select_query = query->as<ASTSelectQuery>();
assertSelect();
if (!select_query->limit_by_expression_list)
@ -831,6 +855,8 @@ bool ExpressionAnalyzer::appendLimitBy(ExpressionActionsChain & chain, bool only
void ExpressionAnalyzer::appendProjectResult(ExpressionActionsChain & chain) const
{
const auto * select_query = query->as<ASTSelectQuery>();
assertSelect();
initChain(chain, aggregated_columns);
@ -864,7 +890,7 @@ void ExpressionAnalyzer::appendExpression(ExpressionActionsChain & chain, const
void ExpressionAnalyzer::getActionsBeforeAggregation(const ASTPtr & ast, ExpressionActionsPtr & actions, bool no_subqueries)
{
ASTFunction * node = typeid_cast<ASTFunction *>(ast.get());
const auto * node = ast->as<ASTFunction>();
if (node && AggregateFunctionFactory::instance().isAggregateFunctionName(node->name))
for (auto & argument : node->arguments->children)
@ -883,7 +909,7 @@ ExpressionActionsPtr ExpressionAnalyzer::getActions(bool add_aliases, bool proje
ASTs asts;
if (auto node = typeid_cast<const ASTExpressionList *>(query.get()))
if (const auto * node = query->as<ASTExpressionList>())
asts = node->children;
else
asts = ASTs(1, query);
@ -965,21 +991,6 @@ void ExpressionAnalyzer::collectUsedColumns()
if (columns_context.has_table_join)
{
const AnalyzedJoin & analyzed_join = analyzedJoin();
#if 0
std::cerr << "key_names_left: ";
for (const auto & name : analyzed_join.key_names_left)
std::cerr << "'" << name << "' ";
std::cerr << "key_names_right: ";
for (const auto & name : analyzed_join.key_names_right)
std::cerr << "'" << name << "' ";
std::cerr << "columns_from_joined_table: ";
for (const auto & column : analyzed_join.columns_from_joined_table)
std::cerr << "'" << column.name_and_type.name << '/' << column.original_name << "' ";
std::cerr << "available_joined_columns: ";
for (const auto & column : analyzed_join.available_joined_columns)
std::cerr << "'" << column.name_and_type.name << '/' << column.original_name << "' ";
std::cerr << std::endl;
#endif
NameSet avaliable_columns;
for (const auto & name : source_columns)
avaliable_columns.insert(name.name);
@ -1014,6 +1025,8 @@ void ExpressionAnalyzer::collectUsedColumns()
required.insert(column_name_type.name);
}
const auto * select_query = query->as<ASTSelectQuery>();
/// You need to read at least one column to find the number of rows.
if (select_query && required.empty())
required.insert(ExpressionActions::getSmallestColumn(source_columns));

View File

@ -1,9 +1,11 @@
#pragma once
#include <Interpreters/ActionsVisitor.h>
#include <Interpreters/AggregateDescription.h>
#include <Interpreters/Settings.h>
#include <Interpreters/ActionsVisitor.h>
#include <Interpreters/SyntaxAnalyzer.h>
#include <Parsers/IAST_fwd.h>
namespace DB
{
@ -15,9 +17,6 @@ struct ExpressionActionsChain;
class ExpressionActions;
using ExpressionActionsPtr = std::shared_ptr<ExpressionActions>;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
using ASTs = std::vector<ASTPtr>;
struct ASTTableJoin;
class IBlockInputStream;
@ -211,7 +210,6 @@ public:
private:
ASTPtr query;
ASTSelectQuery * select_query;
const Context & context;
const ExtractedSettings settings;
StoragePtr storage; /// The main table in FROM clause, if exists.

View File

@ -26,11 +26,13 @@ namespace
}
/// Must not acquire Context lock in constructor to avoid possibility of deadlocks.
ExternalDictionaries::ExternalDictionaries(
std::unique_ptr<IExternalLoaderConfigRepository> config_repository,
const Poco::Util::AbstractConfiguration & config,
Context & context,
bool throw_on_error)
: ExternalLoader(context.getConfigRef(),
: ExternalLoader(config,
externalDictionariesUpdateSettings,
getExternalDictionariesConfigSettings(),
std::move(config_repository),

View File

@ -20,6 +20,7 @@ public:
/// Dictionaries will be loaded immediately and then updated in a separate thread, every 'reload_period' seconds.
ExternalDictionaries(
std::unique_ptr<IExternalLoaderConfigRepository> config_repository,
const Poco::Util::AbstractConfiguration & config,
Context & context,
bool throw_on_error);
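Read together with the "Must not acquire Context lock in constructor" comment in ExternalDictionaries.cpp and the reordering in Context::getExternalDictionariesImpl, the new config parameter lets the caller read what it needs from the Context before taking the dictionaries mutex, so the constructor never calls back into a locked Context. The sketch below only shows that general shape with hypothetical names; it is not the real Context locking scheme:

#include <mutex>

struct Config {};

struct ContextSketch
{
    std::mutex context_mutex;
    std::mutex dictionaries_mutex;
    Config config;

    /// Pattern after the change: grab the config first, then lock the narrower mutex
    /// and construct the component from plain data, never calling back into the Context.
    void initDictionaries()
    {
        const Config & cfg = getConfigRef();
        std::lock_guard lock(dictionaries_mutex);
        construct(cfg);
    }

    const Config & getConfigRef()
    {
        std::lock_guard lock(context_mutex);
        return config;
    }

    void construct(const Config &) {}
};

int main()
{
    ContextSketch ctx;
    ctx.initDictionaries();
}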

View File

@ -19,8 +19,6 @@
namespace DB
{
class Context;
struct ExternalLoaderUpdateSettings
{
UInt64 check_period_sec = 5;

View File

@ -22,7 +22,7 @@ public:
static void visit(ASTPtr & ast, Data & data)
{
if (auto * t = typeid_cast<ASTIdentifier *>(ast.get()))
if (const auto * t = ast->as<ASTIdentifier>())
visit(*t, ast, data);
}

View File

@ -56,12 +56,12 @@ public:
ASTPtr table_name;
ASTPtr subquery_or_table_name;
if (isIdentifier(subquery_or_table_name_or_table_expression))
if (subquery_or_table_name_or_table_expression->as<ASTIdentifier>())
{
table_name = subquery_or_table_name_or_table_expression;
subquery_or_table_name = table_name;
}
else if (auto ast_table_expr = typeid_cast<const ASTTableExpression *>(subquery_or_table_name_or_table_expression.get()))
else if (const auto * ast_table_expr = subquery_or_table_name_or_table_expression->as<ASTTableExpression>())
{
if (ast_table_expr->database_and_table_name)
{
@ -74,7 +74,7 @@ public:
subquery_or_table_name = subquery;
}
}
else if (typeid_cast<const ASTSubquery *>(subquery_or_table_name_or_table_expression.get()))
else if (subquery_or_table_name_or_table_expression->as<ASTSubquery>())
{
subquery = subquery_or_table_name_or_table_expression;
subquery_or_table_name = subquery;
@ -115,7 +115,7 @@ public:
auto database_and_table_name = createTableIdentifier("", external_table_name);
if (auto ast_table_expr = typeid_cast<ASTTableExpression *>(subquery_or_table_name_or_table_expression.get()))
if (auto * ast_table_expr = subquery_or_table_name_or_table_expression->as<ASTTableExpression>())
{
ast_table_expr->subquery.reset();
ast_table_expr->database_and_table_name = database_and_table_name;
@ -140,16 +140,16 @@ public:
static void visit(ASTPtr & ast, Data & data)
{
if (auto * t = typeid_cast<ASTFunction *>(ast.get()))
if (auto * t = ast->as<ASTFunction>())
visit(*t, ast, data);
if (auto * t = typeid_cast<ASTTablesInSelectQueryElement *>(ast.get()))
if (auto * t = ast->as<ASTTablesInSelectQueryElement>())
visit(*t, ast, data);
}
static bool needChildVisit(ASTPtr &, const ASTPtr & child)
{
/// We do not go into subqueries.
if (typeid_cast<ASTSelectQuery *>(child.get()))
if (child->as<ASTSelectQuery>())
return false;
return true;
}
@ -168,8 +168,7 @@ private:
/// GLOBAL JOIN
static void visit(ASTTablesInSelectQueryElement & table_elem, ASTPtr &, Data & data)
{
if (table_elem.table_join
&& static_cast<const ASTTableJoin &>(*table_elem.table_join).locality == ASTTableJoin::Locality::Global)
if (table_elem.table_join && table_elem.table_join->as<ASTTableJoin &>().locality == ASTTableJoin::Locality::Global)
{
data.addExternalStorage(table_elem.table_expression);
data.has_global_subqueries = true;

View File

@ -15,7 +15,7 @@ std::optional<String> IdentifierSemantic::getColumnName(const ASTIdentifier & no
std::optional<String> IdentifierSemantic::getColumnName(const ASTPtr & ast)
{
if (ast)
if (auto id = typeid_cast<const ASTIdentifier *>(ast.get()))
if (const auto * id = ast->as<ASTIdentifier>())
if (!id->semantic->special)
return id->name;
return {};
@ -31,7 +31,7 @@ std::optional<String> IdentifierSemantic::getTableName(const ASTIdentifier & nod
std::optional<String> IdentifierSemantic::getTableName(const ASTPtr & ast)
{
if (ast)
if (auto id = typeid_cast<const ASTIdentifier *>(ast.get()))
if (const auto * id = ast->as<ASTIdentifier>())
if (id->semantic->special)
return id->name;
return {};
@ -144,7 +144,7 @@ void IdentifierSemantic::setColumnLongName(ASTIdentifier & identifier, const Dat
String IdentifierSemantic::columnNormalName(const ASTIdentifier & identifier, const DatabaseAndTableWithAlias & db_and_table)
{
ASTPtr copy = identifier.clone();
setColumnNormalName(typeid_cast<ASTIdentifier &>(*copy), db_and_table);
setColumnNormalName(copy->as<ASTIdentifier &>(), db_and_table);
return copy->getAliasOrColumnName();
}

View File

@ -30,7 +30,7 @@ namespace
template <typename F>
void forEachNonGlobalSubquery(IAST * node, F && f)
{
if (ASTFunction * function = typeid_cast<ASTFunction *>(node))
if (auto * function = node->as<ASTFunction>())
{
if (function->name == "in" || function->name == "notIn")
{
@ -40,14 +40,14 @@ void forEachNonGlobalSubquery(IAST * node, F && f)
/// Pass into other functions, as a subquery could appear inside aggregate or lambda functions.
}
else if (ASTTablesInSelectQueryElement * join = typeid_cast<ASTTablesInSelectQueryElement *>(node))
else if (const auto * join = node->as<ASTTablesInSelectQueryElement>())
{
if (join->table_join && join->table_expression)
{
auto & table_join = static_cast<ASTTableJoin &>(*join->table_join);
auto & table_join = join->table_join->as<ASTTableJoin &>();
if (table_join.locality != ASTTableJoin::Locality::Global)
{
auto & subquery = static_cast<ASTTableExpression &>(*join->table_expression).subquery;
auto & subquery = join->table_expression->as<ASTTableExpression>()->subquery;
if (subquery)
f(subquery.get(), nullptr, &table_join);
}
@ -59,7 +59,7 @@ void forEachNonGlobalSubquery(IAST * node, F && f)
/// Descend into all children, but not into subqueries of another kind (scalar subqueries), which are irrelevant to us.
for (auto & child : node->children)
if (!typeid_cast<ASTSelectQuery *>(child.get()))
if (!child->as<ASTSelectQuery>())
forEachNonGlobalSubquery(child.get(), f);
}
@ -69,7 +69,7 @@ void forEachNonGlobalSubquery(IAST * node, F && f)
template <typename F>
void forEachTable(IAST * node, F && f)
{
if (auto table_expression = typeid_cast<ASTTableExpression *>(node))
if (auto * table_expression = node->as<ASTTableExpression>())
{
auto & database_and_table = table_expression->database_and_table_name;
if (database_and_table)
@ -103,15 +103,15 @@ void InJoinSubqueriesPreprocessor::process(ASTSelectQuery * query) const
if (!query->tables)
return;
ASTTablesInSelectQuery & tables_in_select_query = static_cast<ASTTablesInSelectQuery &>(*query->tables);
const auto & tables_in_select_query = query->tables->as<ASTTablesInSelectQuery &>();
if (tables_in_select_query.children.empty())
return;
ASTTablesInSelectQueryElement & tables_element = static_cast<ASTTablesInSelectQueryElement &>(*tables_in_select_query.children[0]);
const auto & tables_element = tables_in_select_query.children[0]->as<ASTTablesInSelectQueryElement &>();
if (!tables_element.table_expression)
return;
ASTTableExpression * table_expression = static_cast<ASTTableExpression *>(tables_element.table_expression.get());
const auto * table_expression = tables_element.table_expression->as<ASTTableExpression>();
/// If not ordinary table, skip it.
if (!table_expression->database_and_table_name)
@ -143,7 +143,7 @@ void InJoinSubqueriesPreprocessor::process(ASTSelectQuery * query) const
{
if (function)
{
ASTFunction * concrete = static_cast<ASTFunction *>(function);
auto * concrete = function->as<ASTFunction>();
if (concrete->name == "in")
concrete->name = "globalIn";
@ -157,7 +157,7 @@ void InJoinSubqueriesPreprocessor::process(ASTSelectQuery * query) const
throw Exception("Logical error: unexpected function name " + concrete->name, ErrorCodes::LOGICAL_ERROR);
}
else if (table_join)
static_cast<ASTTableJoin &>(*table_join).locality = ASTTableJoin::Locality::Global;
table_join->as<ASTTableJoin &>().locality = ASTTableJoin::Locality::Global;
else
throw Exception("Logical error: unexpected AST node", ErrorCodes::LOGICAL_ERROR);
}

View File

@ -30,7 +30,7 @@ InterpreterAlterQuery::InterpreterAlterQuery(const ASTPtr & query_ptr_, const Co
BlockIO InterpreterAlterQuery::execute()
{
auto & alter = typeid_cast<ASTAlterQuery &>(*query_ptr);
const auto & alter = query_ptr->as<ASTAlterQuery &>();
if (!alter.cluster.empty())
return executeDDLQueryOnCluster(query_ptr, context, {alter.database});

View File

@ -1,14 +1,13 @@
#pragma once
#include <Interpreters/IInterpreter.h>
#include <Parsers/IAST_fwd.h>
namespace DB
{
class Context;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
/** Allows you to add or remove a column in the table.
* It also allows you to manipulate the partitions of the MergeTree family tables.

View File

@ -19,8 +19,8 @@ InterpreterCheckQuery::InterpreterCheckQuery(const ASTPtr & query_ptr_, const Co
BlockIO InterpreterCheckQuery::execute()
{
ASTCheckQuery & alter = typeid_cast<ASTCheckQuery &>(*query_ptr);
String & table_name = alter.table;
const auto & alter = query_ptr->as<ASTCheckQuery &>();
const String & table_name = alter.table;
String database_name = alter.database.empty() ? context.getCurrentDatabase() : alter.database;
StoragePtr table = context.getTable(database_name, table_name);

View File

@ -197,7 +197,7 @@ static ColumnsDeclarationAndModifiers parseColumns(const ASTExpressionList & col
for (const auto & ast : column_list_ast.children)
{
auto & col_decl = typeid_cast<ASTColumnDeclaration &>(*ast);
auto & col_decl = ast->as<ASTColumnDeclaration &>();
DataTypePtr column_type = nullptr;
if (col_decl.type)
@ -240,7 +240,7 @@ static ColumnsDeclarationAndModifiers parseColumns(const ASTExpressionList & col
if (col_decl.comment)
{
if (auto comment_str = typeid_cast<ASTLiteral &>(*col_decl.comment).value.get<String>(); !comment_str.empty())
if (auto comment_str = col_decl.comment->as<ASTLiteral &>().value.get<String>(); !comment_str.empty())
comments.emplace(col_decl.name, comment_str);
}
}
@ -526,7 +526,7 @@ void InterpreterCreateQuery::setEngine(ASTCreateQuery & create) const
String as_table_name = create.as_table;
ASTPtr as_create_ptr = context.getCreateTableQuery(as_database_name, as_table_name);
const auto & as_create = typeid_cast<const ASTCreateQuery &>(*as_create_ptr);
const auto & as_create = as_create_ptr->as<ASTCreateQuery &>();
if (as_create.is_view)
throw Exception(
@ -566,8 +566,7 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)
{
// Table SQL definition is available even if the table is detached
auto query = context.getCreateTableQuery(database_name, table_name);
auto & as_create = typeid_cast<const ASTCreateQuery &>(*query);
create = as_create; // Copy the saved create query, but use ATTACH instead of CREATE
create = query->as<ASTCreateQuery &>(); // Copy the saved create query, but use ATTACH instead of CREATE
create.attach = true;
}
@ -695,7 +694,7 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)
BlockIO InterpreterCreateQuery::execute()
{
ASTCreateQuery & create = typeid_cast<ASTCreateQuery &>(*query_ptr);
auto & create = query_ptr->as<ASTCreateQuery &>();
checkAccess(create);
ASTQueryWithOutput::resetOutputASTIfExist(create);

View File

@ -58,7 +58,7 @@ Block InterpreterDescribeQuery::getSampleBlock()
BlockInputStreamPtr InterpreterDescribeQuery::executeImpl()
{
const ASTDescribeQuery & ast = typeid_cast<const ASTDescribeQuery &>(*query_ptr);
const auto & ast = query_ptr->as<ASTDescribeQuery &>();
NamesAndTypesList columns;
ColumnDefaults column_defaults;
@ -66,7 +66,7 @@ BlockInputStreamPtr InterpreterDescribeQuery::executeImpl()
ColumnCodecs column_codecs;
StoragePtr table;
auto table_expression = typeid_cast<const ASTTableExpression *>(ast.table_expression.get());
const auto * table_expression = ast.table_expression->as<ASTTableExpression>();
if (table_expression->subquery)
{
@ -76,7 +76,7 @@ BlockInputStreamPtr InterpreterDescribeQuery::executeImpl()
{
if (table_expression->table_function)
{
auto table_function = typeid_cast<const ASTFunction *>(table_expression->table_function.get());
const auto * table_function = table_expression->table_function->as<ASTFunction>();
/// Get the table function
TableFunctionPtr table_function_ptr = TableFunctionFactory::instance().get(table_function->name, context);
/// Run it and remember the result
@ -84,7 +84,7 @@ BlockInputStreamPtr InterpreterDescribeQuery::executeImpl()
}
else
{
auto identifier = typeid_cast<const ASTIdentifier *>(table_expression->database_and_table_name.get());
const auto * identifier = table_expression->database_and_table_name->as<ASTIdentifier>();
String database_name;
String table_name;

View File

@ -1,14 +1,13 @@
#pragma once
#include <Interpreters/IInterpreter.h>
#include <Parsers/IAST_fwd.h>
namespace DB
{
class Context;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
/** Returns names, types and other information about columns in the specified table.

View File

@ -31,7 +31,7 @@ InterpreterDropQuery::InterpreterDropQuery(const ASTPtr & query_ptr_, Context &
BlockIO InterpreterDropQuery::execute()
{
ASTDropQuery & drop = typeid_cast<ASTDropQuery &>(*query_ptr);
auto & drop = query_ptr->as<ASTDropQuery &>();
checkAccess(drop);

View File

@ -1,16 +1,14 @@
#pragma once
#include <Databases/IDatabase.h>
#include <Interpreters/IInterpreter.h>
#include <Parsers/ASTDropQuery.h>
#include <Databases/IDatabase.h>
#include <Parsers/IAST_fwd.h>
namespace DB
{
class Context;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
using DatabaseAndTable = std::pair<DatabasePtr, StoragePtr>;
/** Allows to either drop a table with all its data (DROP),

View File

@ -32,7 +32,7 @@ Block InterpreterExistsQuery::getSampleBlock()
BlockInputStreamPtr InterpreterExistsQuery::executeImpl()
{
const ASTExistsQuery & ast = typeid_cast<const ASTExistsQuery &>(*query_ptr);
const auto & ast = query_ptr->as<ASTExistsQuery &>();
bool res = ast.temporary ? context.isExternalTableExist(ast.table) : context.isTableExist(ast.database, ast.table);
return std::make_shared<OneBlockInputStream>(Block{{

View File

@ -1,14 +1,13 @@
#pragma once
#include <Interpreters/IInterpreter.h>
#include <Parsers/IAST_fwd.h>
namespace DB
{
class Context;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
/** Checks whether the table exists. Returns a single row with a single column "result" of type UInt8 and value 0 or 1.

View File

@ -39,7 +39,7 @@ Block InterpreterExplainQuery::getSampleBlock()
BlockInputStreamPtr InterpreterExplainQuery::executeImpl()
{
const ASTExplainQuery & ast = typeid_cast<const ASTExplainQuery &>(*query);
const auto & ast = query->as<ASTExplainQuery &>();
Block sample_block = getSampleBlock();
MutableColumns res_columns = sample_block.cloneEmptyColumns();

View File

@ -2,15 +2,12 @@
#include <Interpreters/Context.h>
#include <Interpreters/IInterpreter.h>
#include <Parsers/IAST_fwd.h>
namespace DB
{
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
/// Returns single row with explain results
class InterpreterExplainQuery : public IInterpreter
{

View File

@ -80,95 +80,95 @@ std::unique_ptr<IInterpreter> InterpreterFactory::get(ASTPtr & query, Context &
{
ProfileEvents::increment(ProfileEvents::Query);
if (typeid_cast<ASTSelectQuery *>(query.get()))
if (query->as<ASTSelectQuery>())
{
/// This is internal part of ASTSelectWithUnionQuery.
/// Even if there is SELECT without union, it is represented by ASTSelectWithUnionQuery with single ASTSelectQuery as a child.
return std::make_unique<InterpreterSelectQuery>(query, context, Names{}, stage);
}
else if (typeid_cast<ASTSelectWithUnionQuery *>(query.get()))
else if (query->as<ASTSelectWithUnionQuery>())
{
ProfileEvents::increment(ProfileEvents::SelectQuery);
return std::make_unique<InterpreterSelectWithUnionQuery>(query, context, Names{}, stage);
}
else if (typeid_cast<ASTInsertQuery *>(query.get()))
else if (query->as<ASTInsertQuery>())
{
ProfileEvents::increment(ProfileEvents::InsertQuery);
/// readonly is checked inside InterpreterInsertQuery
bool allow_materialized = static_cast<bool>(context.getSettingsRef().insert_allow_materialized_columns);
return std::make_unique<InterpreterInsertQuery>(query, context, allow_materialized);
}
else if (typeid_cast<ASTCreateQuery *>(query.get()))
else if (query->as<ASTCreateQuery>())
{
/// readonly and allow_ddl are checked inside InterpreterCreateQuery
return std::make_unique<InterpreterCreateQuery>(query, context);
}
else if (typeid_cast<ASTDropQuery *>(query.get()))
else if (query->as<ASTDropQuery>())
{
/// readonly and allow_ddl are checked inside InterpreterDropQuery
return std::make_unique<InterpreterDropQuery>(query, context);
}
else if (typeid_cast<ASTRenameQuery *>(query.get()))
else if (query->as<ASTRenameQuery>())
{
throwIfNoAccess(context);
return std::make_unique<InterpreterRenameQuery>(query, context);
}
else if (typeid_cast<ASTShowTablesQuery *>(query.get()))
else if (query->as<ASTShowTablesQuery>())
{
return std::make_unique<InterpreterShowTablesQuery>(query, context);
}
else if (typeid_cast<ASTUseQuery *>(query.get()))
else if (query->as<ASTUseQuery>())
{
return std::make_unique<InterpreterUseQuery>(query, context);
}
else if (typeid_cast<ASTSetQuery *>(query.get()))
else if (query->as<ASTSetQuery>())
{
/// readonly is checked inside InterpreterSetQuery
return std::make_unique<InterpreterSetQuery>(query, context);
}
else if (typeid_cast<ASTOptimizeQuery *>(query.get()))
else if (query->as<ASTOptimizeQuery>())
{
throwIfNoAccess(context);
return std::make_unique<InterpreterOptimizeQuery>(query, context);
}
else if (typeid_cast<ASTExistsQuery *>(query.get()))
else if (query->as<ASTExistsQuery>())
{
return std::make_unique<InterpreterExistsQuery>(query, context);
}
else if (typeid_cast<ASTShowCreateTableQuery *>(query.get()))
else if (query->as<ASTShowCreateTableQuery>())
{
return std::make_unique<InterpreterShowCreateQuery>(query, context);
}
else if (typeid_cast<ASTShowCreateDatabaseQuery *>(query.get()))
else if (query->as<ASTShowCreateDatabaseQuery>())
{
return std::make_unique<InterpreterShowCreateQuery>(query, context);
}
else if (typeid_cast<ASTDescribeQuery *>(query.get()))
else if (query->as<ASTDescribeQuery>())
{
return std::make_unique<InterpreterDescribeQuery>(query, context);
}
else if (typeid_cast<ASTExplainQuery *>(query.get()))
else if (query->as<ASTExplainQuery>())
{
return std::make_unique<InterpreterExplainQuery>(query, context);
}
else if (typeid_cast<ASTShowProcesslistQuery *>(query.get()))
else if (query->as<ASTShowProcesslistQuery>())
{
return std::make_unique<InterpreterShowProcesslistQuery>(query, context);
}
else if (typeid_cast<ASTAlterQuery *>(query.get()))
else if (query->as<ASTAlterQuery>())
{
throwIfNoAccess(context);
return std::make_unique<InterpreterAlterQuery>(query, context);
}
else if (typeid_cast<ASTCheckQuery *>(query.get()))
else if (query->as<ASTCheckQuery>())
{
return std::make_unique<InterpreterCheckQuery>(query, context);
}
else if (typeid_cast<ASTKillQueryQuery *>(query.get()))
else if (query->as<ASTKillQueryQuery>())
{
return std::make_unique<InterpreterKillQueryQuery>(query, context);
}
else if (typeid_cast<ASTSystemQuery *>(query.get()))
else if (query->as<ASTSystemQuery>())
{
throwIfNoAccess(context);
return std::make_unique<InterpreterSystemQuery>(query, context);

View File

@ -2,14 +2,13 @@
#include <Core/QueryProcessingStage.h>
#include <Interpreters/IInterpreter.h>
#include <Parsers/IAST_fwd.h>
namespace DB
{
class Context;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
class InterpreterFactory

View File

@ -46,7 +46,7 @@ StoragePtr InterpreterInsertQuery::getTable(const ASTInsertQuery & query)
{
if (query.table_function)
{
auto table_function = typeid_cast<const ASTFunction *>(query.table_function.get());
const auto * table_function = query.table_function->as<ASTFunction>();
const auto & factory = TableFunctionFactory::instance();
return factory.get(table_function->name, context)->execute(query.table_function, context);
}
@ -92,7 +92,7 @@ Block InterpreterInsertQuery::getSampleBlock(const ASTInsertQuery & query, const
BlockIO InterpreterInsertQuery::execute()
{
ASTInsertQuery & query = typeid_cast<ASTInsertQuery &>(*query_ptr);
const auto & query = query_ptr->as<ASTInsertQuery &>();
checkAccess(query);
StoragePtr table = getTable(query);
@ -171,7 +171,7 @@ void InterpreterInsertQuery::checkAccess(const ASTInsertQuery & query)
std::pair<String, String> InterpreterInsertQuery::getDatabaseTable() const
{
ASTInsertQuery & query = typeid_cast<ASTInsertQuery &>(*query_ptr);
const auto & query = query_ptr->as<ASTInsertQuery &>();
return {query.database, query.table};
}

View File

@ -172,7 +172,7 @@ public:
BlockIO InterpreterKillQueryQuery::execute()
{
ASTKillQueryQuery & query = typeid_cast<ASTKillQueryQuery &>(*query_ptr);
const auto & query = query_ptr->as<ASTKillQueryQuery &>();
if (!query.cluster.empty())
return executeDDLQueryOnCluster(query_ptr, context, {"system"});
@ -261,7 +261,7 @@ BlockIO InterpreterKillQueryQuery::execute()
Block InterpreterKillQueryQuery::getSelectResult(const String & columns, const String & table)
{
String select_query = "SELECT " + columns + " FROM " + table;
auto & where_expression = static_cast<ASTKillQueryQuery &>(*query_ptr).where_expression;
auto & where_expression = query_ptr->as<ASTKillQueryQuery>()->where_expression;
if (where_expression)
select_query += " WHERE " + queryToString(where_expression);

View File

@ -1,14 +1,13 @@
#pragma once
#include <Interpreters/IInterpreter.h>
#include <Parsers/IAST.h>
namespace DB
{
class Context;
class IAST;
using ASTPtr = std::shared_ptr<IAST>;
class InterpreterKillQueryQuery : public IInterpreter
@ -28,4 +27,3 @@ private:
}

View File

@ -17,7 +17,7 @@ namespace ErrorCodes
BlockIO InterpreterOptimizeQuery::execute()
{
-const ASTOptimizeQuery & ast = typeid_cast<const ASTOptimizeQuery &>(*query_ptr);
+const auto & ast = query_ptr->as<ASTOptimizeQuery &>();
if (!ast.cluster.empty())
return executeDDLQueryOnCluster(query_ptr, context, {ast.database});

View File

@ -1,14 +1,13 @@
#pragma once
#include <Interpreters/IInterpreter.h>
+#include <Parsers/IAST_fwd.h>
namespace DB
{
class Context;
-class IAST;
-using ASTPtr = std::shared_ptr<IAST>;
/** Just call method "optimize" for table.

View File

@ -36,7 +36,7 @@ struct RenameDescription
BlockIO InterpreterRenameQuery::execute()
{
-ASTRenameQuery & rename = typeid_cast<ASTRenameQuery &>(*query_ptr);
+const auto & rename = query_ptr->as<ASTRenameQuery &>();
if (!rename.cluster.empty())
{

View File

@ -1,14 +1,13 @@
#pragma once
#include <Interpreters/IInterpreter.h>
+#include <Parsers/IAST_fwd.h>
namespace DB
{
class Context;
-class IAST;
-using ASTPtr = std::shared_ptr<IAST>;
/** Rename one table

View File

@ -23,7 +23,6 @@
#include <DataStreams/ConvertColumnLowCardinalityToFullBlockInputStream.h>
#include <DataStreams/ConvertingBlockInputStream.h>
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTSelectWithUnionQuery.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTFunction.h>
@ -169,7 +168,7 @@ InterpreterSelectQuery::InterpreterSelectQuery(
}
max_streams = settings.max_threads;
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
ASTPtr table_expression = extractTableExpression(query, 0);
@ -177,8 +176,8 @@ InterpreterSelectQuery::InterpreterSelectQuery(
bool is_subquery = false;
if (table_expression)
{
-is_table_func = typeid_cast<const ASTFunction *>(table_expression.get());
-is_subquery = typeid_cast<const ASTSelectWithUnionQuery *>(table_expression.get());
+is_table_func = table_expression->as<ASTFunction>();
+is_subquery = table_expression->as<ASTSelectWithUnionQuery>();
}
if (input)
@ -277,15 +276,9 @@ InterpreterSelectQuery::InterpreterSelectQuery(
}
-ASTSelectQuery & InterpreterSelectQuery::selectQuery()
-{
-return typeid_cast<ASTSelectQuery &>(*query_ptr);
-}
void InterpreterSelectQuery::getDatabaseAndTableNames(String & database_name, String & table_name)
{
-if (auto db_and_table = getDatabaseAndTable(selectQuery(), 0))
+if (auto db_and_table = getDatabaseAndTable(getSelectQuery(), 0))
{
table_name = db_and_table->table;
database_name = db_and_table->database;
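
The selectQuery() accessor removed above is superseded by getSelectQuery(), used throughout the rest of this file. Its definition is not shown in this diff; presumably it is a thin wrapper over the same reference cast, roughly:

ASTSelectQuery & InterpreterSelectQuery::getSelectQuery()
{
    return query_ptr->as<ASTSelectQuery &>();   /// assumed shape; the real accessor may differ (e.g. const overloads, declaration in the header)
}
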
@ -384,7 +377,7 @@ InterpreterSelectQuery::AnalysisResult InterpreterSelectQuery::analyzeExpression
{
ExpressionActionsChain chain(context);
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
Names additional_required_columns_after_prewhere;
@ -508,7 +501,8 @@ void InterpreterSelectQuery::executeImpl(Pipeline & pipeline, const BlockInputSt
* then perform the remaining operations with one resulting stream.
*/
-ASTSelectQuery & query = selectQuery();
+/// Now we will compose block streams that perform the necessary actions.
+auto & query = getSelectQuery();
const Settings & settings = context.getSettingsRef();
QueryProcessingStage::Enum from_stage = QueryProcessingStage::FetchColumns;
@ -570,8 +564,6 @@ void InterpreterSelectQuery::executeImpl(Pipeline & pipeline, const BlockInputSt
if (to_stage > QueryProcessingStage::FetchColumns)
{
-/// Now we will compose block streams that perform the necessary actions.
/// Do I need to aggregate in a separate row rows that have not passed max_rows_to_group_by.
bool aggregate_overflow_row =
expressions.need_aggregate &&
@ -590,7 +582,7 @@ void InterpreterSelectQuery::executeImpl(Pipeline & pipeline, const BlockInputSt
{
if (expressions.hasJoin())
{
-const ASTTableJoin & join = static_cast<const ASTTableJoin &>(*query.join()->table_join);
+const auto & join = query.join()->table_join->as<ASTTableJoin &>();
if (isRightOrFull(join.kind))
pipeline.stream_with_non_joined_data = expressions.before_join->createStreamWithNonJoinedDataIfFullOrRightJoin(
pipeline.firstStream()->getHeader(), settings.max_block_size);
@ -786,7 +778,7 @@ static std::pair<UInt64, UInt64> getLimitLengthAndOffset(const ASTSelectQuery &
return {length, offset};
}
-static UInt64 getLimitForSorting(ASTSelectQuery & query, const Context & context)
+static UInt64 getLimitForSorting(const ASTSelectQuery & query, const Context & context)
{
/// Partial sort can be done if there is LIMIT but no DISTINCT or LIMIT BY.
if (!query.distinct && !query.limit_by_expression_list)
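
For context on the hunk above: getLimitForSorting (now taking the query by const reference) implements the partial-sort shortcut — sorting can be truncated only when there is a LIMIT and neither DISTINCT nor LIMIT BY is used, in which case only limit_length + limit_offset rows need to be kept. A standalone restatement with assumed field names:

#include <cstdint>

struct LimitInfo
{
    bool distinct = false;
    bool has_limit_by = false;
    uint64_t limit_length = 0;   /// 0 stands for "no LIMIT" in this sketch
    uint64_t limit_offset = 0;
};

/// Returns how many rows a partial sort must keep, or 0 if full sorting is required.
uint64_t limitForSorting(const LimitInfo & q)
{
    if (!q.distinct && !q.has_limit_by && q.limit_length)
        return q.limit_length + q.limit_offset;
    return 0;
}
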
@ -802,7 +794,7 @@ void InterpreterSelectQuery::executeFetchColumns(
QueryProcessingStage::Enum processing_stage, Pipeline & pipeline,
const PrewhereInfoPtr & prewhere_info, const Names & columns_to_remove_after_prewhere)
{
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
const Settings & settings = context.getSettingsRef();
/// Actions to calculate ALIAS if required.
@ -1097,7 +1089,7 @@ void InterpreterSelectQuery::executeWhere(Pipeline & pipeline, const ExpressionA
{
pipeline.transform([&](auto & stream)
{
-stream = std::make_shared<FilterBlockInputStream>(stream, expression, selectQuery().where_expression->getColumnName(), remove_fiter);
+stream = std::make_shared<FilterBlockInputStream>(stream, expression, getSelectQuery().where_expression->getColumnName(), remove_fiter);
});
}
@ -1225,7 +1217,7 @@ void InterpreterSelectQuery::executeHaving(Pipeline & pipeline, const Expression
{
pipeline.transform([&](auto & stream)
{
-stream = std::make_shared<FilterBlockInputStream>(stream, expression, selectQuery().having_expression->getColumnName());
+stream = std::make_shared<FilterBlockInputStream>(stream, expression, getSelectQuery().having_expression->getColumnName());
});
}
@ -1237,8 +1229,13 @@ void InterpreterSelectQuery::executeTotalsAndHaving(Pipeline & pipeline, bool ha
const Settings & settings = context.getSettingsRef();
pipeline.firstStream() = std::make_shared<TotalsHavingBlockInputStream>(
-pipeline.firstStream(), overflow_row, expression,
-has_having ? selectQuery().having_expression->getColumnName() : "", settings.totals_mode, settings.totals_auto_threshold, final);
+pipeline.firstStream(),
+overflow_row,
+expression,
+has_having ? getSelectQuery().having_expression->getColumnName() : "",
+settings.totals_mode,
+settings.totals_auto_threshold,
+final);
}
void InterpreterSelectQuery::executeRollupOrCube(Pipeline & pipeline, Modificator modificator)
@ -1281,18 +1278,18 @@ void InterpreterSelectQuery::executeExpression(Pipeline & pipeline, const Expres
}
-static SortDescription getSortDescription(ASTSelectQuery & query)
+static SortDescription getSortDescription(const ASTSelectQuery & query)
{
SortDescription order_descr;
order_descr.reserve(query.order_expression_list->children.size());
for (const auto & elem : query.order_expression_list->children)
{
String name = elem->children.front()->getColumnName();
-const ASTOrderByElement & order_by_elem = typeid_cast<const ASTOrderByElement &>(*elem);
+const auto & order_by_elem = elem->as<ASTOrderByElement &>();
std::shared_ptr<Collator> collator;
if (order_by_elem.collation)
-collator = std::make_shared<Collator>(typeid_cast<const ASTLiteral &>(*order_by_elem.collation).value.get<String>());
+collator = std::make_shared<Collator>(order_by_elem.collation->as<ASTLiteral &>().value.get<String>());
order_descr.emplace_back(name, order_by_elem.direction, order_by_elem.nulls_direction, collator);
}
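
The loop above turns each ORDER BY element into one sort-description entry, attaching a collator only when a COLLATE clause is present. A simplified, self-contained restatement with stand-in types (the real SortDescription and Collator live elsewhere in the codebase):

#include <memory>
#include <string>
#include <vector>

struct Collator { std::string locale; };    /// stand-in for the ICU-backed collator

struct SortColumnDescription
{
    std::string column_name;
    int direction;                          /// 1 = ASC, -1 = DESC
    int nulls_direction;                    /// where NULLs go, relative to direction
    std::shared_ptr<Collator> collator;     /// null when no COLLATE clause was given
};
using SortDescription = std::vector<SortColumnDescription>;

struct OrderByElement                       /// what the loop reads from ASTOrderByElement
{
    std::string column_name;
    int direction = 1;
    int nulls_direction = 1;
    std::string collation;                  /// empty if no COLLATE clause
};

SortDescription buildSortDescription(const std::vector<OrderByElement> & elements)
{
    SortDescription order_descr;
    order_descr.reserve(elements.size());
    for (const auto & elem : elements)
    {
        std::shared_ptr<Collator> collator;
        if (!elem.collation.empty())
            collator = std::make_shared<Collator>(Collator{elem.collation});
        order_descr.push_back({elem.column_name, elem.direction, elem.nulls_direction, collator});
    }
    return order_descr;
}
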
@ -1303,7 +1300,7 @@ static SortDescription getSortDescription(ASTSelectQuery & query)
void InterpreterSelectQuery::executeOrder(Pipeline & pipeline)
{
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
SortDescription order_descr = getSortDescription(query);
UInt64 limit = getLimitForSorting(query, context);
@ -1335,7 +1332,7 @@ void InterpreterSelectQuery::executeOrder(Pipeline & pipeline)
void InterpreterSelectQuery::executeMergeSorted(Pipeline & pipeline)
{
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
SortDescription order_descr = getSortDescription(query);
UInt64 limit = getLimitForSorting(query, context);
@ -1372,7 +1369,7 @@ void InterpreterSelectQuery::executeProjection(Pipeline & pipeline, const Expres
void InterpreterSelectQuery::executeDistinct(Pipeline & pipeline, bool before_order, Names columns)
{
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
if (query.distinct)
{
const Settings & settings = context.getSettingsRef();
@ -1415,7 +1412,7 @@ void InterpreterSelectQuery::executeUnion(Pipeline & pipeline)
/// Preliminary LIMIT - is used in every source, if there are several sources, before they are combined.
void InterpreterSelectQuery::executePreLimit(Pipeline & pipeline)
{
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
/// If there is LIMIT
if (query.limit_length)
{
@ -1430,7 +1427,7 @@ void InterpreterSelectQuery::executePreLimit(Pipeline & pipeline)
void InterpreterSelectQuery::executeLimitBy(Pipeline & pipeline)
{
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
if (!query.limit_by_value || !query.limit_by_expression_list)
return;
@ -1458,10 +1455,10 @@ bool hasWithTotalsInAnySubqueryInFromClause(const ASTSelectQuery & query)
if (auto query_table = extractTableExpression(query, 0))
{
-if (auto ast_union = typeid_cast<const ASTSelectWithUnionQuery *>(query_table.get()))
+if (const auto * ast_union = query_table->as<ASTSelectWithUnionQuery>())
{
for (const auto & elem : ast_union->list_of_selects->children)
-if (hasWithTotalsInAnySubqueryInFromClause(typeid_cast<const ASTSelectQuery &>(*elem)))
+if (hasWithTotalsInAnySubqueryInFromClause(elem->as<ASTSelectQuery &>()))
return true;
}
}
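
The function above recursively checks whether any subquery in the FROM clause uses WITH TOTALS, walking into the branches of a UNION. A simplified sketch that flattens the UNION level into a plain list of nested selects (stand-in node types, not the real AST classes):

#include <memory>
#include <vector>

struct SelectNode;
using SelectNodePtr = std::shared_ptr<SelectNode>;

struct SelectNode
{
    bool group_by_with_totals = false;
    std::vector<SelectNodePtr> from_subselects;   /// subqueries found in the FROM clause
};

/// True if this select, or any select nested in its FROM clause, uses WITH TOTALS.
bool hasWithTotalsAnywhere(const SelectNode & query)
{
    if (query.group_by_with_totals)
        return true;
    for (const auto & sub : query.from_subselects)
        if (sub && hasWithTotalsAnywhere(*sub))
            return true;
    return false;
}
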
@ -1472,7 +1469,7 @@ bool hasWithTotalsInAnySubqueryInFromClause(const ASTSelectQuery & query)
void InterpreterSelectQuery::executeLimit(Pipeline & pipeline)
{
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
/// If there is LIMIT
if (query.limit_length)
{
@ -1544,13 +1541,13 @@ void InterpreterSelectQuery::unifyStreams(Pipeline & pipeline)
void InterpreterSelectQuery::ignoreWithTotals()
{
-selectQuery().group_by_with_totals = false;
+getSelectQuery().group_by_with_totals = false;
}
void InterpreterSelectQuery::initSettings()
{
-ASTSelectQuery & query = selectQuery();
+auto & query = getSelectQuery();
if (query.settings)
InterpreterSetQuery(query.settings, context).executeForCurrentContext();
}

Some files were not shown because too many files have changed in this diff.