Merge remote-tracking branch 'upstream/master' into nikvas0/bloom_filter_index

commit c10f10cc0b

CHANGELOG.md (113 changed lines)

@@ -1,3 +1,94 @@
## ClickHouse release 19.4.1.3, 2019-03-19

### Bug Fixes

* Fixed remote queries which contain both `LIMIT BY` and `LIMIT`. Previously, if `LIMIT BY` and `LIMIT` were used in a remote query, `LIMIT` could be applied before `LIMIT BY`, which over-filtered the result (a sketch of the pattern follows). [#4708](https://github.com/yandex/ClickHouse/pull/4708) ([Constantin S. Pan](https://github.com/kvap))
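A hedged sketch of the affected pattern (table and column names are illustrative): `LIMIT BY` must run before the final `LIMIT`, so a distributed query like the one below should first keep one row per domain and only then cut the result to ten rows.

```sql
-- Hypothetical distributed table; the fix guarantees LIMIT BY is applied before LIMIT on remote shards.
SELECT domain, uniq(user_id) AS users
FROM hits_distributed
GROUP BY domain, region
ORDER BY users DESC
LIMIT 1 BY domain
LIMIT 10;
```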
## ClickHouse release 19.4.0.49, 2019-03-09

### New Features

* Added full support for the `Protobuf` format (input and output, nested data structures). [#4174](https://github.com/yandex/ClickHouse/pull/4174) [#4493](https://github.com/yandex/ClickHouse/pull/4493) ([Vitaly Baranov](https://github.com/vitlibar))
* Added bitmap functions based on Roaring Bitmaps. [#4207](https://github.com/yandex/ClickHouse/pull/4207) ([Andy Yang](https://github.com/andyyzh)) [#4568](https://github.com/yandex/ClickHouse/pull/4568) ([Vitaly Baranov](https://github.com/vitlibar))
* Added `Parquet` format support. [#4448](https://github.com/yandex/ClickHouse/pull/4448) ([proller](https://github.com/proller))
* Added N-gram distance for fuzzy string comparison. It is similar to q-gram metrics in the R language. [#4466](https://github.com/yandex/ClickHouse/pull/4466) ([Danila Kutenin](https://github.com/danlark1))
* Combine rules for graphite rollup from dedicated aggregation and retention patterns. [#4426](https://github.com/yandex/ClickHouse/pull/4426) ([Mikhail f. Shiryaev](https://github.com/Felixoid))
* Added the `max_execution_speed` and `max_execution_speed_bytes` settings to limit resource usage, and the `min_execution_speed_bytes` setting to complement `min_execution_speed`. [#4430](https://github.com/yandex/ClickHouse/pull/4430) ([Winter Zhang](https://github.com/zhang2014))
* Implemented the `flatten` function. [#4555](https://github.com/yandex/ClickHouse/pull/4555) [#4409](https://github.com/yandex/ClickHouse/pull/4409) ([alexey-milovidov](https://github.com/alexey-milovidov), [kzon](https://github.com/kzon))
* Added the functions `arrayEnumerateDenseRanked` and `arrayEnumerateUniqRanked` (like `arrayEnumerateUniq`, but they allow fine-tuning the array depth to look inside multidimensional arrays). A few of the new functions are shown in the sketch after this list. [#4475](https://github.com/yandex/ClickHouse/pull/4475) ([proller](https://github.com/proller)) [#4601](https://github.com/yandex/ClickHouse/pull/4601) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Multiple JOINs, with some restrictions: no asterisks and no complex aliases in ON/WHERE/GROUP BY/... [#4462](https://github.com/yandex/ClickHouse/pull/4462) ([Artem Zuikov](https://github.com/4ertus2))
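A hedged illustration of a few of these functions on constant inputs (the comments show what the documented semantics imply):

```sql
SELECT flatten([[[1, 2], [3]], [[4]]]);                  -- [1, 2, 3, 4]
SELECT arrayEnumerateUniq([10, 20, 10]);                 -- [1, 1, 2]; the new *Ranked variants add a tunable depth
SELECT ngramDistance('ClickHouse', 'ClickHome');         -- a float in [0, 1]; smaller means more similar
SELECT bitmapCardinality(bitmapBuild([1, 2, 3, 4, 5]));  -- 5
```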
### Bug Fixes

* This release also contains all bug fixes from 19.3 and 19.1.
* Fixed a bug in data skipping indices: the order of granules after INSERT was incorrect. [#4407](https://github.com/yandex/ClickHouse/pull/4407) ([Nikita Vasilev](https://github.com/nikvas0))
* Fixed the `set` index for `Nullable` and `LowCardinality` columns. Previously, a `set` index on a `Nullable` or `LowCardinality` column led to the error `Data type must be deserialized with multiple streams` while selecting. [#4594](https://github.com/yandex/ClickHouse/pull/4594) ([Nikolai Kochetov](https://github.com/KochetovNicolai))
* Correctly set `update_time` on a full `executable` dictionary update. [#4551](https://github.com/yandex/ClickHouse/pull/4551) ([Tema Novikov](https://github.com/temoon))
* Fixed the broken progress bar in 19.3. [#4627](https://github.com/yandex/ClickHouse/pull/4627) ([filimonov](https://github.com/filimonov))
* Fixed inconsistent values of MemoryTracker when a memory region was shrunk, in certain cases. [#4619](https://github.com/yandex/ClickHouse/pull/4619) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed undefined behaviour in ThreadPool. [#4612](https://github.com/yandex/ClickHouse/pull/4612) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a very rare crash with the message `mutex lock failed: Invalid argument` that could happen when a MergeTree table was dropped concurrently with a SELECT. [#4608](https://github.com/yandex/ClickHouse/pull/4608) ([Alex Zatelepin](https://github.com/ztlpn))
* Fixed ODBC driver compatibility with the `LowCardinality` data type. [#4381](https://github.com/yandex/ClickHouse/pull/4381) ([proller](https://github.com/proller))
* FreeBSD: fixed the `AIOcontextPool: Found io_event with unknown id 0` error. [#4438](https://github.com/yandex/ClickHouse/pull/4438) ([urgordeadbeef](https://github.com/urgordeadbeef))
* The `system.part_log` table was created regardless of the configuration. [#4483](https://github.com/yandex/ClickHouse/pull/4483) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed undefined behaviour in the `dictIsIn` function for cache dictionaries. [#4515](https://github.com/yandex/ClickHouse/pull/4515) ([alesapin](https://github.com/alesapin))
* Fixed a deadlock when a SELECT query locks the same table multiple times (e.g. from different threads or when executing multiple subqueries) and there is a concurrent DDL query. [#4535](https://github.com/yandex/ClickHouse/pull/4535) ([Alex Zatelepin](https://github.com/ztlpn))
* Disabled `compile_expressions` by default until we get our own `llvm` contrib and can test it with `clang` and `asan`. [#4579](https://github.com/yandex/ClickHouse/pull/4579) ([alesapin](https://github.com/alesapin))
* Prevent `std::terminate` when `invalidate_query` for a `clickhouse` external dictionary source returns a wrong resultset (empty, more than one row, or more than one column). Also fixed an issue where `invalidate_query` was performed every five seconds regardless of the `lifetime`. [#4583](https://github.com/yandex/ClickHouse/pull/4583) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Avoid a deadlock when the `invalidate_query` for a dictionary with a `clickhouse` source involved the `system.dictionaries` table or the `Dictionaries` database (rare case). [#4599](https://github.com/yandex/ClickHouse/pull/4599) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixes for CROSS JOIN with an empty WHERE. [#4598](https://github.com/yandex/ClickHouse/pull/4598) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a segfault in the `replicate` function when a constant argument is passed. [#4603](https://github.com/yandex/ClickHouse/pull/4603) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed lambda functions with the predicate optimizer. [#4408](https://github.com/yandex/ClickHouse/pull/4408) ([Winter Zhang](https://github.com/zhang2014))
* Multiple fixes for multiple JOINs. [#4595](https://github.com/yandex/ClickHouse/pull/4595) ([Artem Zuikov](https://github.com/4ertus2))
### Improvements

* Support aliases in the JOIN ON section for right table columns. [#4412](https://github.com/yandex/ClickHouse/pull/4412) ([Artem Zuikov](https://github.com/4ertus2))
* The result of multiple JOINs needs correct result names to be used in subselects. Flat aliases are replaced with source names in the result. [#4474](https://github.com/yandex/ClickHouse/pull/4474) ([Artem Zuikov](https://github.com/4ertus2))
* Improved push-down logic for joined statements. [#4387](https://github.com/yandex/ClickHouse/pull/4387) ([Ivan](https://github.com/abyss7))

### Performance Improvements

* Improved heuristics of the "move to PREWHERE" optimization. [#4405](https://github.com/yandex/ClickHouse/pull/4405) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Use proper lookup tables that use HashTable's API for 8-bit and 16-bit keys. [#4536](https://github.com/yandex/ClickHouse/pull/4536) ([Amos Bird](https://github.com/amosbird))
* Improved performance of string comparison. [#4564](https://github.com/yandex/ClickHouse/pull/4564) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Cleanup of the distributed DDL queue is done in a separate thread so that it doesn't slow down the main loop that processes distributed DDL tasks. [#4502](https://github.com/yandex/ClickHouse/pull/4502) ([Alex Zatelepin](https://github.com/ztlpn))
* When `min_bytes_to_use_direct_io` is set to 1, not every file was opened with O_DIRECT mode, because the data size to read was sometimes underestimated by the size of one compressed block (see the sketch below). [#4526](https://github.com/yandex/ClickHouse/pull/4526) ([alexey-milovidov](https://github.com/alexey-milovidov))
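A hedged illustration of the setting involved (the value and table are arbitrary): setting `min_bytes_to_use_direct_io` to 1 asks ClickHouse to use O_DIRECT whenever the estimated read size is at least that many bytes, so after this fix effectively all reads qualify.

```sql
-- Force O_DIRECT for effectively all reads in this session (illustrative value and table).
SET min_bytes_to_use_direct_io = 1;

SELECT count() FROM hits;
```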
### Build/Testing/Packaging Improvements

* Added support for clang-9. [#4604](https://github.com/yandex/ClickHouse/pull/4604) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed wrong `__asm__` instructions (again). [#4621](https://github.com/yandex/ClickHouse/pull/4621) ([Konstantin Podshumok](https://github.com/podshumok))
* Added the ability to specify settings for `clickhouse-performance-test` from the command line. [#4437](https://github.com/yandex/ClickHouse/pull/4437) ([alesapin](https://github.com/alesapin))
* Added dictionary tests to the integration tests. [#4477](https://github.com/yandex/ClickHouse/pull/4477) ([alesapin](https://github.com/alesapin))
* Added queries from the benchmark on the website to automated performance tests. [#4496](https://github.com/yandex/ClickHouse/pull/4496) ([alexey-milovidov](https://github.com/alexey-milovidov))
* `xxhash.h` does not exist in external lz4 because it is an implementation detail whose symbols are namespaced with the `XXH_NAMESPACE` macro. When lz4 is external, xxHash has to be external too, and the dependents have to link to it. [#4495](https://github.com/yandex/ClickHouse/pull/4495) ([Orivej Desh](https://github.com/orivej))
* Fixed a case when the `quantileTiming` aggregate function could be called with a negative or floating-point argument (this fixes a fuzz test with the undefined behaviour sanitizer). [#4506](https://github.com/yandex/ClickHouse/pull/4506) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Spelling error correction. [#4531](https://github.com/yandex/ClickHouse/pull/4531) ([sdk2](https://github.com/sdk2))
* Fixed compilation on Mac. [#4371](https://github.com/yandex/ClickHouse/pull/4371) ([Vitaly Baranov](https://github.com/vitlibar))
* Build fixes for FreeBSD and various unusual build configurations. [#4444](https://github.com/yandex/ClickHouse/pull/4444) ([proller](https://github.com/proller))
## ClickHouse release 19.3.7, 2019-03-12

### Bug Fixes

* Fixed an error in #3920. The error manifested itself as random cache corruption (messages `Unknown codec family code`, `Cannot seek through file`) and segfaults. It first appeared in version 19.1 and is present in versions up to 19.1.10 and 19.3.6. [#4623](https://github.com/yandex/ClickHouse/pull/4623) ([alexey-milovidov](https://github.com/alexey-milovidov))

## ClickHouse release 19.3.6, 2019-03-02

### Bug Fixes

* When there are more than 1000 threads in a thread pool, `std::terminate` may happen on thread exit. [Azat Khuzhin](https://github.com/azat) [#4485](https://github.com/yandex/ClickHouse/pull/4485) [#4505](https://github.com/yandex/ClickHouse/pull/4505) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Now it's possible to create `ReplicatedMergeTree*` tables with comments on columns without defaults, and tables with column codecs without comments and defaults (see the sketch after this list). Also fixed the comparison of codecs. [#4523](https://github.com/yandex/ClickHouse/pull/4523) ([alesapin](https://github.com/alesapin))
* Fixed a crash on JOIN with an array or tuple. [#4552](https://github.com/yandex/ClickHouse/pull/4552) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a crash in clickhouse-copier with the message `ThreadStatus not created`. [#4540](https://github.com/yandex/ClickHouse/pull/4540) ([Artem Zuikov](https://github.com/4ertus2))
* Fixed a hangup on server shutdown if distributed DDLs were used. [#4472](https://github.com/yandex/ClickHouse/pull/4472) ([Alex Zatelepin](https://github.com/ztlpn))
* Incorrect column numbers were printed in the error message about text format parsing for columns with numbers greater than 10. [#4484](https://github.com/yandex/ClickHouse/pull/4484) ([alexey-milovidov](https://github.com/alexey-milovidov))
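A hedged sketch of the table definitions that previously failed (engine arguments and names are illustrative):

```sql
CREATE TABLE codec_test
(
    key UInt64,
    value String CODEC(LZ4),                  -- codec without a comment or default
    note String COMMENT 'free-form comment'   -- comment without a default
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/codec_test', '{replica}')
ORDER BY key;
```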
### Build/Testing/Packaging Improvements

* Fixed the build with AVX enabled. [#4527](https://github.com/yandex/ClickHouse/pull/4527) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Enable extended accounting and IO accounting based on a known-good version instead of the kernel under which it is compiled. [#4541](https://github.com/yandex/ClickHouse/pull/4541) ([nvartolomei](https://github.com/nvartolomei))
* Allow skipping the setting of `core_dump.size_limit`, with a warning instead of a throw if setting the limit fails. [#4473](https://github.com/yandex/ClickHouse/pull/4473) ([proller](https://github.com/proller))
* Removed the `inline` tags of `void readBinary(...)` in `Field.cpp`. Also merged redundant `namespace DB` blocks. [#4530](https://github.com/yandex/ClickHouse/pull/4530) ([hcz](https://github.com/hczhcz))
## ClickHouse release 19.3.5, 2019-02-21

### Bug Fixes

@@ -67,7 +158,7 @@

* Fixed a race condition where selecting from `system.tables` may give a `table doesn't exist` error. [#4313](https://github.com/yandex/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov))
* `clickhouse-client` could segfault on exit while loading data for command line suggestions if it was run in interactive mode. [#4317](https://github.com/yandex/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a bug where the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/yandex/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn))
* Fixed an error: if there is a database with the `Dictionary` engine, all dictionaries are forced to load at server startup, and if there is a dictionary with a ClickHouse source from localhost, that dictionary cannot load. [#4255](https://github.com/yandex/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed an error where system logs were tried to be created again at server shutdown. [#4254](https://github.com/yandex/ClickHouse/pull/4254) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Correctly return the right type and properly handle locks in the `joinGet` function. [#4153](https://github.com/yandex/ClickHouse/pull/4153) ([Amos Bird](https://github.com/amosbird))
* Added the `sumMapWithOverflow` function (see the sketch after this list). [#4151](https://github.com/yandex/ClickHouse/pull/4151) ([Léo Ercolanelli](https://github.com/ercolanelli-leo))
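A hedged sketch of the two functions mentioned above (`joinGet` requires a table with the `Join` engine; all names and values are illustrative):

```sql
CREATE TABLE id_val (id UInt32, val String) ENGINE = Join(ANY, LEFT, id);
INSERT INTO id_val VALUES (1, 'a'), (2, 'b');

SELECT joinGet('id_val', 'val', toUInt32(2));  -- 'b'

-- sumMapWithOverflow keeps the argument value type, so the UInt8 sum
-- wraps around (200 + 100 = 44) instead of widening to UInt64.
SELECT sumMapWithOverflow([1], [v])
FROM (SELECT arrayJoin([toUInt8(200), toUInt8(100)]) AS v);
```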
@@ -92,7 +183,7 @@

* Added a script which creates a changelog from pull request descriptions. [#4169](https://github.com/yandex/ClickHouse/pull/4169) [#4173](https://github.com/yandex/ClickHouse/pull/4173) ([KochetovNicolai](https://github.com/KochetovNicolai))
* Added a puppet module for ClickHouse. [#4182](https://github.com/yandex/ClickHouse/pull/4182) ([Maxim Fedotov](https://github.com/MaxFedotov))
* Added docs for a group of undocumented functions. [#4168](https://github.com/yandex/ClickHouse/pull/4168) ([Winter Zhang](https://github.com/zhang2014))
* ARM build fixes. [#4210](https://github.com/yandex/ClickHouse/pull/4210) [#4306](https://github.com/yandex/ClickHouse/pull/4306) [#4291](https://github.com/yandex/ClickHouse/pull/4291) ([proller](https://github.com/proller))
* Dictionary tests are now able to run from `ctest`. [#4189](https://github.com/yandex/ClickHouse/pull/4189) ([proller](https://github.com/proller))
* Now `/etc/ssl` is used as the default directory with SSL certificates. [#4167](https://github.com/yandex/ClickHouse/pull/4167) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added a check of SSE and AVX instructions at start. [#4234](https://github.com/yandex/ClickHouse/pull/4234) ([Igr](https://github.com/igron99))
@@ -123,6 +214,20 @@

* Improved server shutdown time and ALTERs waiting time. [#4372](https://github.com/yandex/ClickHouse/pull/4372) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Added info about the `replicated_can_become_leader` setting to `system.replicas`, and added logging if the replica won't try to become leader. [#4379](https://github.com/yandex/ClickHouse/pull/4379) ([Alex Zatelepin](https://github.com/ztlpn))
## ClickHouse release 19.1.14, 2019-03-14

* Fixed the error `Column ... queried more than once` that may happen if the setting `asterisk_left_columns_only` is set to 1 when using `GLOBAL JOIN` with `SELECT *` (rare case; a sketch of the pattern follows). The issue does not exist in 19.3 and newer. [6bac7d8d](https://github.com/yandex/ClickHouse/pull/4692/commits/6bac7d8d11a9b0d6de0b32b53c47eb2f6f8e7062) ([Artem Zuikov](https://github.com/4ertus2))
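A hedged sketch of the affected pattern (the tables are hypothetical, and `asterisk_left_columns_only` was a legacy setting of that era):

```sql
SET asterisk_left_columns_only = 1;  -- the setting that triggered the bug
SELECT * FROM t1_distributed GLOBAL ANY LEFT JOIN t2 USING (id);
```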
## ClickHouse release 19.1.13, 2019-03-12

This release contains exactly the same set of patches as 19.3.7.

## ClickHouse release 19.1.10, 2019-03-03

This release contains exactly the same set of patches as 19.3.6.
## ClickHouse release 19.1.9, 2019-02-21

### Bug Fixes

@@ -140,7 +245,7 @@

### Bug Fixes

* Correctly return the right type and properly handle locks in the `joinGet` function. [#4153](https://github.com/yandex/ClickHouse/pull/4153) ([Amos Bird](https://github.com/amosbird))
* Fixed an error where system logs were tried to be created again at server shutdown. [#4254](https://github.com/yandex/ClickHouse/pull/4254) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed an error: if there is a database with the `Dictionary` engine, all dictionaries are forced to load at server startup, and if there is a dictionary with a ClickHouse source from localhost, that dictionary cannot load. [#4255](https://github.com/yandex/ClickHouse/pull/4255) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a bug where the execution of mutations containing `IN` operators was producing incorrect results. [#4099](https://github.com/yandex/ClickHouse/pull/4099) ([Alex Zatelepin](https://github.com/ztlpn))
* `clickhouse-client` could segfault on exit while loading data for command line suggestions if it was run in interactive mode. [#4317](https://github.com/yandex/ClickHouse/pull/4317) ([alexey-milovidov](https://github.com/alexey-milovidov))
* Fixed a race condition where selecting from `system.tables` may give a `table doesn't exist` error. [#4313](https://github.com/yandex/ClickHouse/pull/4313) ([alexey-milovidov](https://github.com/alexey-milovidov))
@@ -293,7 +398,7 @@

### New features:

- * `DEFAULT` expressions are evaluated for missing fields when loading data in semi-structured input formats (`JSONEachRow`, `TSKV`). [#3555](https://github.com/yandex/ClickHouse/pull/3555)
+ * `DEFAULT` expressions are evaluated for missing fields when loading data in semi-structured input formats (`JSONEachRow`, `TSKV`). The feature is enabled with the `insert_sample_with_metadata` setting. [#3555](https://github.com/yandex/ClickHouse/pull/3555)
* The `ALTER TABLE` query now has the `MODIFY ORDER BY` action for changing the sorting key when adding or removing a table column (see the sketch after this list). This is useful for tables in the `MergeTree` family that perform additional tasks when merging based on this sorting key, such as `SummingMergeTree`, `AggregatingMergeTree`, and so on. [#3581](https://github.com/yandex/ClickHouse/pull/3581) [#3755](https://github.com/yandex/ClickHouse/pull/3755)
* For tables in the `MergeTree` family, now you can specify a different sorting key (`ORDER BY`) and index (`PRIMARY KEY`). The sorting key can be longer than the index. [#3581](https://github.com/yandex/ClickHouse/pull/3581)
* Added the `hdfs` table function and the `HDFS` table engine for importing and exporting data to HDFS. [chenxing-xc](https://github.com/yandex/ClickHouse/pull/3617)
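A hedged sketch of these features (paths, names, and the URI are illustrative):

```sql
-- A primary key that is a prefix of a longer sorting key; MODIFY ORDER BY can
-- then extend the sorting key together with adding a column.
CREATE TABLE metrics
(
    date Date,
    name String,
    value UInt64
)
ENGINE = SummingMergeTree
PRIMARY KEY (date)
ORDER BY (date, name);

ALTER TABLE metrics ADD COLUMN host String, MODIFY ORDER BY (date, name, host);

-- Read TSV files from HDFS with the hdfs table function.
SELECT *
FROM hdfs('hdfs://namenode:9000/data/*.tsv', 'TSV', 'date Date, name String, value UInt64');
```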
@@ -302,7 +302,7 @@

### New features:

- * Evaluation of `DEFAULT` expressions for missing fields when loading data in semi-structured input formats (`JSONEachRow`, `TSKV`). [#3555](https://github.com/yandex/ClickHouse/pull/3555)
+ * Evaluation of `DEFAULT` expressions for missing fields when loading data in semi-structured input formats (`JSONEachRow`, `TSKV`) (requires enabling the `insert_sample_with_metadata` query setting). [#3555](https://github.com/yandex/ClickHouse/pull/3555)
* The `ALTER TABLE` query gained the `MODIFY ORDER BY` action for changing the sorting key while simultaneously adding or removing a table column. This is useful for `MergeTree`-family tables that do additional work during merges based on this sorting key, such as `SummingMergeTree`, `AggregatingMergeTree`, and so on. [#3581](https://github.com/yandex/ClickHouse/pull/3581) [#3755](https://github.com/yandex/ClickHouse/pull/3755)
* For `MergeTree`-family tables it is now possible to specify a sorting key (`ORDER BY`) different from the index (`PRIMARY KEY`). The sorting key can be longer than the index. [#3581](https://github.com/yandex/ClickHouse/pull/3581)
* Added the `hdfs` table function and the `HDFS` table engine for importing and exporting data to HDFS. [chenxing-xc](https://github.com/yandex/ClickHouse/pull/3617)
@ -364,6 +364,7 @@ if (DEFAULT_LIBS)
|
||||
add_default_libs(zstd)
|
||||
add_default_libs(snappy)
|
||||
add_default_libs(arrow)
|
||||
add_default_libs(protoc)
|
||||
add_default_libs(thrift_static)
|
||||
add_default_libs(boost_regex_internal)
|
||||
endif ()
|
||||
|
@@ -13,5 +13,4 @@ ClickHouse is an open-source column-oriented database management system that all

 ## Upcoming Events

-* [ClickHouse Community Meetup](https://www.eventbrite.com/e/meetup-clickhouse-in-the-wild-deployment-success-stories-registration-55305051899) in San Francisco on February 19.
 * [ClickHouse Community Meetup](https://www.eventbrite.com/e/clickhouse-meetup-in-madrid-registration-55376746339) in Madrid on April 2.
@@ -1,3 +1,7 @@
+option (ENABLE_BROTLI "Enable brotli" ON)
+
+if (ENABLE_BROTLI)
+
 option (USE_INTERNAL_BROTLI_LIBRARY "Set to FALSE to use system libbrotli library instead of bundled" ${NOT_UNBUNDLED})

 if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/brotli/c/include/brotli/decode.h")

@@ -27,4 +31,6 @@ elseif (NOT MISSING_INTERNAL_BROTLI_LIBRARY)
     set (USE_BROTLI 1)
 endif ()

+endif()
+
 message (STATUS "Using brotli=${USE_BROTLI}: ${BROTLI_INCLUDE_DIR} : ${BROTLI_LIBRARY}")
@@ -20,11 +20,12 @@ if (NOT GTEST_SRC_DIR AND NOT GTEST_INCLUDE_DIRS AND NOT MISSING_INTERNAL_GTEST_
     set (USE_INTERNAL_GTEST_LIBRARY 1)
     set (GTEST_MAIN_LIBRARIES gtest_main)
     set (GTEST_LIBRARIES gtest)
+    set (GTEST_BOTH_LIBRARIES ${GTEST_MAIN_LIBRARIES} ${GTEST_LIBRARIES})
     set (GTEST_INCLUDE_DIRS ${ClickHouse_SOURCE_DIR}/contrib/googletest/googletest)
 endif ()

-if((GTEST_INCLUDE_DIRS AND GTEST_MAIN_LIBRARIES) OR GTEST_SRC_DIR)
+if((GTEST_INCLUDE_DIRS AND GTEST_BOTH_LIBRARIES) OR GTEST_SRC_DIR)
     set(USE_GTEST 1)
 endif()

-message (STATUS "Using gtest=${USE_GTEST}: ${GTEST_INCLUDE_DIRS} : ${GTEST_LIBRARIES}, ${GTEST_MAIN_LIBRARIES} : ${GTEST_SRC_DIR}")
+message (STATUS "Using gtest=${USE_GTEST}: ${GTEST_INCLUDE_DIRS} : ${GTEST_BOTH_LIBRARIES} : ${GTEST_SRC_DIR}")
@@ -1,3 +1,7 @@
+option (ENABLE_PARQUET "Enable parquet" ON)
+
+if (ENABLE_PARQUET)
+
 if (NOT OS_FREEBSD) # Freebsd: ../contrib/arrow/cpp/src/arrow/util/bit-util.h:27:10: fatal error: endian.h: No such file or directory
     option(USE_INTERNAL_PARQUET_LIBRARY "Set to FALSE to use system parquet library instead of bundled" ${NOT_UNBUNDLED})
 endif()

@@ -61,6 +65,8 @@ elseif(NOT MISSING_INTERNAL_PARQUET_LIBRARY AND NOT OS_FREEBSD)
     endif()
 endif()

+endif()
+
 if(USE_PARQUET)
     message(STATUS "Using Parquet: ${ARROW_LIBRARY}:${ARROW_INCLUDE_DIR} ; ${PARQUET_LIBRARY}:${PARQUET_INCLUDE_DIR} ; ${THRIFT_LIBRARY}")
 else()
@@ -1,10 +1,8 @@
-option(USE_INTERNAL_PROTOBUF_LIBRARY "Set to FALSE to use system protobuf instead of bundled" ${NOT_UNBUNDLED})
+option (ENABLE_PROTOBUF "Enable protobuf" ON)

-if(OS_FREEBSD AND SANITIZE STREQUAL "address")
-    # ../contrib/protobuf/src/google/protobuf/arena_impl.h:45:10: fatal error: 'sanitizer/asan_interface.h' file not found
-    set(MISSING_INTERNAL_PROTOBUF_LIBRARY 1)
-    set(USE_INTERNAL_PROTOBUF_LIBRARY 0)
-endif()
+if (ENABLE_PROTOBUF)
+
+option(USE_INTERNAL_PROTOBUF_LIBRARY "Set to FALSE to use system protobuf instead of bundled" ${NOT_UNBUNDLED})

 if(NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/protobuf/cmake/CMakeLists.txt")
     if(USE_INTERNAL_PROTOBUF_LIBRARY)

@@ -94,4 +92,16 @@ elseif(NOT MISSING_INTERNAL_PROTOBUF_LIBRARY)
     endfunction()
 endif()

+if(OS_FREEBSD AND SANITIZE STREQUAL "address")
+    # ../contrib/protobuf/src/google/protobuf/arena_impl.h:45:10: fatal error: 'sanitizer/asan_interface.h' file not found
+    # #include <sanitizer/asan_interface.h>
+    if(LLVM_INCLUDE_DIRS)
+        set(Protobuf_INCLUDE_DIR ${Protobuf_INCLUDE_DIR} ${LLVM_INCLUDE_DIRS})
+    else()
+        set(USE_PROTOBUF 0)
+    endif()
+endif()
+
+endif()
+
 message(STATUS "Using protobuf=${USE_PROTOBUF}: ${Protobuf_INCLUDE_DIR} : ${Protobuf_LIBRARY}")
contrib/CMakeLists.txt (vendored, 8 changed lines)

@@ -283,9 +283,13 @@ if (USE_INTERNAL_BROTLI_LIBRARY)
 endif ()

 if (USE_INTERNAL_PROTOBUF_LIBRARY)
-    set(protobuf_BUILD_TESTS OFF CACHE INTERNAL "" FORCE)
-    set(protobuf_BUILD_SHARED_LIBS OFF CACHE INTERNAL "" FORCE)
+    if (MAKE_STATIC_LIBRARIES)
+        set(protobuf_BUILD_SHARED_LIBS OFF CACHE INTERNAL "" FORCE)
+    else ()
+        set(protobuf_BUILD_SHARED_LIBS ON CACHE INTERNAL "" FORCE)
+    endif ()
     set(protobuf_WITH_ZLIB 0 CACHE INTERNAL "" FORCE) # actually will use zlib, but skip find
+    set(protobuf_BUILD_TESTS OFF CACHE INTERNAL "" FORCE)
     add_subdirectory(protobuf/cmake)
 endif ()
contrib/libhdfs3 (vendored, submodule update)

@@ -1 +1 @@
-Subproject commit bd6505cbb0c130b0db695305b9a38546fa880e5a
+Subproject commit e2131aa752d7e95441e08f9a18304c1445f2576a

contrib/librdkafka (vendored, submodule update)

@@ -1 +1 @@
-Subproject commit 51ae5f5fd8b742e56f47a8bb0136344868818285
+Subproject commit 73295a702cd1c85c11749ade500d713db7099cca
@@ -2,6 +2,8 @@
 #ifndef _CONFIG_H_
 #define _CONFIG_H_
 #define ARCH "x86_64"
+#define BUILT_WITH "GCC GXX PKGCONFIG OSXLD LIBDL PLUGINS ZLIB SSL SASL_CYRUS ZSTD HDRHISTOGRAM LZ4_EXT SNAPPY SOCKEM SASL_SCRAM CRC32C_HW"
+
 #define CPU "generic"
 #define WITHOUT_OPTIMIZATION 0
 #define ENABLE_DEVEL 0
@@ -184,7 +184,9 @@ target_link_libraries (clickhouse_common_io
         string_utils
         widechar_width
         ${LINK_LIBRARIES_ONLY_ON_X86_64}
+    PUBLIC
+        ${DOUBLE_CONVERSION_LIBRARIES}
     PRIVATE
         pocoext
     PUBLIC
         ${Poco_Net_LIBRARY}

@@ -351,6 +353,6 @@ if (ENABLE_TESTS AND USE_GTEST)
     # attach all dbms gtest sources
     grep_gtest_sources(${ClickHouse_SOURCE_DIR}/dbms dbms_gtest_sources)
     add_executable(unit_tests_dbms ${dbms_gtest_sources})
-    target_link_libraries(unit_tests_dbms PRIVATE gtest_main dbms clickhouse_common_zookeeper)
+    target_link_libraries(unit_tests_dbms PRIVATE ${GTEST_BOTH_LIBRARIES} dbms clickhouse_common_zookeeper)
     add_check(unit_tests_dbms)
 endif ()
@@ -22,7 +22,6 @@ endif()
 configure_file (config_tools.h.in ${CMAKE_CURRENT_BINARY_DIR}/config_tools.h)

 macro(clickhouse_target_link_split_lib target name)
     if(NOT CLICKHOUSE_ONE_SHARED)
         target_link_libraries(${target} PRIVATE clickhouse-${name}-lib)
@@ -91,9 +90,9 @@ endif ()

 if (CLICKHOUSE_ONE_SHARED)
     add_library(clickhouse-lib SHARED ${CLICKHOUSE_SERVER_SOURCES} ${CLICKHOUSE_CLIENT_SOURCES} ${CLICKHOUSE_LOCAL_SOURCES} ${CLICKHOUSE_BENCHMARK_SOURCES} ${CLICKHOUSE_PERFORMANCE_TEST_SOURCES} ${CLICKHOUSE_COPIER_SOURCES} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_SOURCES} ${CLICKHOUSE_COMPRESSOR_SOURCES} ${CLICKHOUSE_FORMAT_SOURCES} ${CLICKHOUSE_OBFUSCATOR_SOURCES} ${CLICKHOUSE_COMPILER_SOURCES} ${CLICKHOUSE_ODBC_BRIDGE_SOURCES})
-    target_link_libraries(clickhouse-lib PUBLIC ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_PERFORMANCE_TEST_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_COMPILER_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK})
-    set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse)
+    target_link_libraries(clickhouse-lib ${CLICKHOUSE_SERVER_LINK} ${CLICKHOUSE_CLIENT_LINK} ${CLICKHOUSE_LOCAL_LINK} ${CLICKHOUSE_BENCHMARK_LINK} ${CLICKHOUSE_PERFORMANCE_TEST_LINK} ${CLICKHOUSE_COPIER_LINK} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_LINK} ${CLICKHOUSE_COMPRESSOR_LINK} ${CLICKHOUSE_FORMAT_LINK} ${CLICKHOUSE_OBFUSCATOR_LINK} ${CLICKHOUSE_COMPILER_LINK} ${CLICKHOUSE_ODBC_BRIDGE_LINK})
+    target_include_directories(clickhouse-lib ${CLICKHOUSE_SERVER_INCLUDE} ${CLICKHOUSE_CLIENT_INCLUDE} ${CLICKHOUSE_LOCAL_INCLUDE} ${CLICKHOUSE_BENCHMARK_INCLUDE} ${CLICKHOUSE_PERFORMANCE_TEST_INCLUDE} ${CLICKHOUSE_COPIER_INCLUDE} ${CLICKHOUSE_EXTRACT_FROM_CONFIG_INCLUDE} ${CLICKHOUSE_COMPRESSOR_INCLUDE} ${CLICKHOUSE_FORMAT_INCLUDE} ${CLICKHOUSE_OBFUSCATOR_INCLUDE} ${CLICKHOUSE_COMPILER_INCLUDE} ${CLICKHOUSE_ODBC_BRIDGE_INCLUDE})
+    set_target_properties(clickhouse-lib PROPERTIES SOVERSION ${VERSION_MAJOR}.${VERSION_MINOR} VERSION ${VERSION_SO} OUTPUT_NAME clickhouse DEBUG_POSTFIX "")
 endif()

 if (CLICKHOUSE_SPLIT_BINARY)
@@ -112,6 +111,8 @@ if (CLICKHOUSE_SPLIT_BINARY)

     add_custom_target (clickhouse-bundle ALL DEPENDS ${CLICKHOUSE_ALL_TARGETS})
     add_custom_target (clickhouse ALL DEPENDS clickhouse-bundle)
+
+    install(PROGRAMS clickhouse-split-helper DESTINATION ${CMAKE_INSTALL_BINDIR} RENAME clickhouse COMPONENT clickhouse)
 else ()
     if (USE_EMBEDDED_COMPILER)
         # before add_executable !
@@ -123,37 +124,37 @@ else ()
     target_include_directories (clickhouse PRIVATE ${CMAKE_CURRENT_BINARY_DIR})

     if (ENABLE_CLICKHOUSE_SERVER)
-        target_link_libraries (clickhouse PRIVATE clickhouse-server-lib)
+        clickhouse_target_link_split_lib(clickhouse server)
     endif ()
     if (ENABLE_CLICKHOUSE_CLIENT)
-        target_link_libraries (clickhouse PRIVATE clickhouse-client-lib)
+        clickhouse_target_link_split_lib(clickhouse client)
     endif ()
     if (ENABLE_CLICKHOUSE_LOCAL)
-        target_link_libraries (clickhouse PRIVATE clickhouse-local-lib)
+        clickhouse_target_link_split_lib(clickhouse local)
     endif ()
     if (ENABLE_CLICKHOUSE_BENCHMARK)
-        target_link_libraries (clickhouse PRIVATE clickhouse-benchmark-lib)
+        clickhouse_target_link_split_lib(clickhouse benchmark)
     endif ()
     if (ENABLE_CLICKHOUSE_PERFORMANCE_TEST)
-        target_link_libraries (clickhouse PRIVATE clickhouse-performance-test-lib)
+        clickhouse_target_link_split_lib(clickhouse performance-test)
     endif ()
     if (ENABLE_CLICKHOUSE_COPIER)
-        target_link_libraries (clickhouse PRIVATE clickhouse-copier-lib)
+        clickhouse_target_link_split_lib(clickhouse copier)
     endif ()
     if (ENABLE_CLICKHOUSE_EXTRACT_FROM_CONFIG)
-        target_link_libraries (clickhouse PRIVATE clickhouse-extract-from-config-lib)
+        clickhouse_target_link_split_lib(clickhouse extract-from-config)
     endif ()
     if (ENABLE_CLICKHOUSE_COMPRESSOR)
-        target_link_libraries (clickhouse PRIVATE clickhouse-compressor-lib)
+        clickhouse_target_link_split_lib(clickhouse compressor)
     endif ()
     if (ENABLE_CLICKHOUSE_FORMAT)
-        target_link_libraries (clickhouse PRIVATE clickhouse-format-lib)
+        clickhouse_target_link_split_lib(clickhouse format)
     endif ()
     if (ENABLE_CLICKHOUSE_OBFUSCATOR)
-        target_link_libraries (clickhouse PRIVATE clickhouse-obfuscator-lib)
+        clickhouse_target_link_split_lib(clickhouse obfuscator)
     endif ()
     if (USE_EMBEDDED_COMPILER)
-        target_link_libraries (clickhouse PRIVATE clickhouse-compiler-lib)
+        clickhouse_target_link_split_lib(clickhouse compiler)
     endif ()

     set (CLICKHOUSE_BUNDLE)
dbms/programs/clickhouse-split-helper (new executable file, 6 lines)

@@ -0,0 +1,6 @@
+#!/bin/sh
+
+set -e
+CMD=$1
+shift
+clickhouse-$CMD $*
@@ -704,7 +704,7 @@ private:
             return true;
         }

-        ASTInsertQuery * insert = typeid_cast<ASTInsertQuery *>(ast.get());
+        auto * insert = ast->as<ASTInsertQuery>();

         if (insert && insert->data)
         {
@@ -799,14 +799,11 @@ private:
         written_progress_chars = 0;
         written_first_block = false;

-        const ASTSetQuery * set_query = typeid_cast<const ASTSetQuery *>(&*parsed_query);
-        const ASTUseQuery * use_query = typeid_cast<const ASTUseQuery *>(&*parsed_query);
-        /// INSERT query for which data transfer is needed (not an INSERT SELECT) is processed separately.
-        const ASTInsertQuery * insert = typeid_cast<const ASTInsertQuery *>(&*parsed_query);
-
         connection->forceConnected();

-        if (insert && !insert->select)
+        /// INSERT query for which data transfer is needed (not an INSERT SELECT) is processed separately.
+        const auto * insert_query = parsed_query->as<ASTInsertQuery>();
+        if (insert_query && !insert_query->select)
             processInsertQuery();
         else
             processOrdinaryQuery();
@@ -814,7 +811,7 @@ private:
         /// Do not change context (current DB, settings) in case of an exception.
         if (!got_exception)
         {
-            if (set_query)
+            if (const auto * set_query = parsed_query->as<ASTSetQuery>())
             {
                 /// Save all changes in settings to avoid losing them if the connection is lost.
                 for (const auto & change : set_query->changes)
@@ -826,7 +823,7 @@ private:
             }
         }

-        if (use_query)
+        if (const auto * use_query = parsed_query->as<ASTUseQuery>())
         {
             const String & new_database = use_query->database;
             /// If the client initiates the reconnection, it takes the settings from the config.
@@ -858,7 +855,7 @@ private:
     /// Convert external tables to ExternalTableData and send them using the connection.
     void sendExternalTables()
     {
-        auto * select = typeid_cast<const ASTSelectWithUnionQuery *>(&*parsed_query);
+        const auto * select = parsed_query->as<ASTSelectWithUnionQuery>();
         if (!select && !external_tables.empty())
             throw Exception("External tables could be sent only with select query", ErrorCodes::BAD_ARGUMENTS);
@@ -883,7 +880,7 @@ private:
     void processInsertQuery()
     {
         /// Send part of query without data, because data will be sent separately.
-        const ASTInsertQuery & parsed_insert_query = typeid_cast<const ASTInsertQuery &>(*parsed_query);
+        const auto & parsed_insert_query = parsed_query->as<ASTInsertQuery &>();
         String query_without_data = parsed_insert_query.data
             ? query.substr(0, parsed_insert_query.data - query.data())
             : query;
@@ -940,7 +937,7 @@ private:
     void sendData(Block & sample, const ColumnsDescription & columns_description)
     {
         /// If INSERT data must be sent.
-        const ASTInsertQuery * parsed_insert_query = typeid_cast<const ASTInsertQuery *>(&*parsed_query);
+        const auto * parsed_insert_query = parsed_query->as<ASTInsertQuery>();
         if (!parsed_insert_query)
             return;
@@ -965,7 +962,7 @@ private:
         String current_format = insert_format;

         /// Data format can be specified in the INSERT query.
-        if (ASTInsertQuery * insert = typeid_cast<ASTInsertQuery *>(&*parsed_query))
+        if (const auto * insert = parsed_query->as<ASTInsertQuery>())
         {
             if (!insert->format.empty())
                 current_format = insert->format;

@@ -976,7 +973,7 @@ private:
         BlockInputStreamPtr block_input = context.getInputFormat(
             current_format, buf, sample, insert_format_max_block_size);

-        const auto & column_defaults = columns_description.defaults;
+        const auto & column_defaults = columns_description.getDefaults();
         if (!column_defaults.empty())
             block_input = std::make_shared<AddingDefaultsBlockInputStream>(block_input, column_defaults, context);
@@ -1231,12 +1228,14 @@ private:
         String current_format = format;

         /// The query can specify output format or output file.
-        if (ASTQueryWithOutput * query_with_output = dynamic_cast<ASTQueryWithOutput *>(&*parsed_query))
+        /// FIXME: try to prettify this cast using `as<>()`
+        if (const auto * query_with_output = dynamic_cast<const ASTQueryWithOutput *>(parsed_query.get()))
         {
-            if (query_with_output->out_file != nullptr)
+            if (query_with_output->out_file)
             {
-                const auto & out_file_node = typeid_cast<const ASTLiteral &>(*query_with_output->out_file);
+                const auto & out_file_node = query_with_output->out_file->as<ASTLiteral &>();
                 const auto & out_file = out_file_node.value.safeGet<std::string>();

                 out_file_buf.emplace(out_file, DBMS_DEFAULT_BUFFER_SIZE, O_WRONLY | O_EXCL | O_CREAT);
                 out_buf = &*out_file_buf;

@@ -1248,7 +1247,7 @@ private:
             {
                 if (has_vertical_output_suffix)
                     throw Exception("Output format already specified", ErrorCodes::CLIENT_OUTPUT_FORMAT_SPECIFIED);
-                const auto & id = typeid_cast<const ASTIdentifier &>(*query_with_output->format);
+                const auto & id = query_with_output->format->as<ASTIdentifier &>();
                 current_format = id.name;
             }
             if (query_with_output->settings_ast)
@@ -1,5 +1,5 @@
 set(CLICKHOUSE_COPIER_SOURCES ${CMAKE_CURRENT_SOURCE_DIR}/ClusterCopier.cpp)
-set(CLICKHOUSE_COPIER_LINK PRIVATE clickhouse_functions clickhouse_table_functions clickhouse_aggregate_functions daemon)
-#set(CLICKHOUSE_COPIER_INCLUDE SYSTEM PRIVATE ...)
+set(CLICKHOUSE_COPIER_LINK PRIVATE clickhouse_functions clickhouse_table_functions clickhouse_aggregate_functions PUBLIC daemon)
+set(CLICKHOUSE_COPIER_INCLUDE SYSTEM PRIVATE ${PCG_RANDOM_INCLUDE_DIR})

 clickhouse_program_add(copier)
@@ -483,7 +483,7 @@ String DB::TaskShard::getHostNameExample() const

 static bool isExtendedDefinitionStorage(const ASTPtr & storage_ast)
 {
-    const ASTStorage & storage = typeid_cast<const ASTStorage &>(*storage_ast);
+    const auto & storage = storage_ast->as<ASTStorage &>();
     return storage.partition_by || storage.order_by || storage.sample_by;
 }
||||
@ -491,8 +491,8 @@ static ASTPtr extractPartitionKey(const ASTPtr & storage_ast)
|
||||
{
|
||||
String storage_str = queryToString(storage_ast);
|
||||
|
||||
const ASTStorage & storage = typeid_cast<const ASTStorage &>(*storage_ast);
|
||||
const ASTFunction & engine = typeid_cast<const ASTFunction &>(*storage.engine);
|
||||
const auto & storage = storage_ast->as<ASTStorage &>();
|
||||
const auto & engine = storage.engine->as<ASTFunction &>();
|
||||
|
||||
if (!endsWith(engine.name, "MergeTree"))
|
||||
{
|
||||
@ -501,7 +501,7 @@ static ASTPtr extractPartitionKey(const ASTPtr & storage_ast)
|
||||
}
|
||||
|
||||
ASTPtr arguments_ast = engine.arguments->clone();
|
||||
ASTs & arguments = typeid_cast<ASTExpressionList &>(*arguments_ast).children;
|
||||
ASTs & arguments = arguments_ast->children;
|
||||
|
||||
if (isExtendedDefinitionStorage(storage_ast))
|
||||
{
|
||||
@@ -1179,12 +1179,12 @@ protected:
     /// Removes MATERIALIZED and ALIAS columns from create table query
     static ASTPtr removeAliasColumnsFromCreateQuery(const ASTPtr & query_ast)
     {
-        const ASTs & column_asts = typeid_cast<ASTCreateQuery &>(*query_ast).columns_list->columns->children;
+        const ASTs & column_asts = query_ast->as<ASTCreateQuery &>().columns_list->columns->children;
         auto new_columns = std::make_shared<ASTExpressionList>();

         for (const ASTPtr & column_ast : column_asts)
         {
-            const ASTColumnDeclaration & column = typeid_cast<const ASTColumnDeclaration &>(*column_ast);
+            const auto & column = column_ast->as<ASTColumnDeclaration &>();

             if (!column.default_specifier.empty())
             {

@@ -1197,12 +1197,11 @@ protected:
         }

         ASTPtr new_query_ast = query_ast->clone();
-        ASTCreateQuery & new_query = typeid_cast<ASTCreateQuery &>(*new_query_ast);
+        auto & new_query = new_query_ast->as<ASTCreateQuery &>();

         auto new_columns_list = std::make_shared<ASTColumns>();
         new_columns_list->set(new_columns_list->columns, new_columns);
-        new_columns_list->set(
-            new_columns_list->indices, typeid_cast<ASTCreateQuery &>(*query_ast).columns_list->indices->clone());
+        new_columns_list->set(new_columns_list->indices, query_ast->as<ASTCreateQuery>()->columns_list->indices->clone());

         new_query.replace(new_query.columns_list, new_columns_list);
@@ -1212,7 +1211,7 @@ protected:
     /// Replaces ENGINE and table name in a create query
     std::shared_ptr<ASTCreateQuery> rewriteCreateQueryStorage(const ASTPtr & create_query_ast, const DatabaseAndTableName & new_table, const ASTPtr & new_storage_ast)
     {
-        ASTCreateQuery & create = typeid_cast<ASTCreateQuery &>(*create_query_ast);
+        const auto & create = create_query_ast->as<ASTCreateQuery &>();
         auto res = std::make_shared<ASTCreateQuery>(create);

         if (create.storage == nullptr || new_storage_ast == nullptr)

@@ -1646,7 +1645,7 @@ protected:
         /// Try create table (if not exists) on each shard
         {
             auto create_query_push_ast = rewriteCreateQueryStorage(task_shard.current_pull_table_create_query, task_table.table_push, task_table.engine_push_ast);
-            typeid_cast<ASTCreateQuery &>(*create_query_push_ast).if_not_exists = true;
+            create_query_push_ast->as<ASTCreateQuery &>().if_not_exists = true;
             String query = queryToString(create_query_push_ast);

             LOG_DEBUG(log, "Create destination tables. Query: " << query);
@@ -1779,7 +1778,7 @@ protected:

     void dropAndCreateLocalTable(const ASTPtr & create_ast)
     {
-        auto & create = typeid_cast<ASTCreateQuery &>(*create_ast);
+        const auto & create = create_ast->as<ASTCreateQuery &>();
         dropLocalTableIfExists({create.database, create.table});

         InterpreterCreateQuery interpreter(create_ast, context);
@@ -1,3 +1,4 @@
+#include <new>
 #include <iostream>
 #include <vector>
 #include <string>

@@ -17,12 +18,6 @@
 #include <gperftools/malloc_extension.h> // Y_IGNORE
 #endif

-#if ENABLE_CLICKHOUSE_SERVER
-#include "server/Server.h"
-#endif
-#if ENABLE_CLICKHOUSE_LOCAL
-#include "local/LocalServer.h"
-#endif
 #include <Common/StringUtils/StringUtils.h>

 /// Universal executable for various clickhouse applications

@@ -145,6 +140,10 @@ bool isClickhouseApp(const std::string & app_suffix, std::vector<char *> & argv)

 int main(int argc_, char ** argv_)
 {
+    /// Reset new handler to default (that throws std::bad_alloc)
+    /// It is needed because LLVM library clobbers it.
+    std::set_new_handler(nullptr);
+
 #if USE_EMBEDDED_COMPILER
     if (argc_ >= 2 && 0 == strcmp(argv_[1], "-cc1"))
         return mainEntryClickHouseClang(argc_, argv_);
@@ -11,7 +11,7 @@ set(CLICKHOUSE_ODBC_BRIDGE_SOURCES
     ${CMAKE_CURRENT_SOURCE_DIR}/validateODBCConnectionString.cpp
 )

-set(CLICKHOUSE_ODBC_BRIDGE_LINK PRIVATE daemon dbms clickhouse_common_io)
+set(CLICKHOUSE_ODBC_BRIDGE_LINK PRIVATE dbms clickhouse_common_io PUBLIC daemon)
 set(CLICKHOUSE_ODBC_BRIDGE_INCLUDE PUBLIC ${ClickHouse_SOURCE_DIR}/libs/libdaemon/include)

 if (USE_POCO_SQLODBC)
@@ -260,6 +260,15 @@ int Server::main(const std::vector<std::string> & /*args*/)
     StatusFile status{path + "status"};

+    SCOPE_EXIT({
+        /** Ask to cancel background jobs all table engines,
+          * and also query_log.
+          * It is important to do early, not in destructor of Context, because
+          * table engines could use Context on destroy.
+          */
+        LOG_INFO(log, "Shutting down storages.");
+        global_context->shutdown();
+        LOG_DEBUG(log, "Shutted down storages.");
+    });
+
     /** Explicitly destroy Context. It is more convenient than in destructor of Server, because logger is still available.
       * At this moment, no one could own shared part of Context.
       */

@@ -498,17 +507,6 @@ int Server::main(const std::vector<std::string> & /*args*/)

     global_context->setCurrentDatabase(default_database);

-    SCOPE_EXIT({
-        /** Ask to cancel background jobs all table engines,
-          * and also query_log.
-          * It is important to do early, not in destructor of Context, because
-          * table engines could use Context on destroy.
-          */
-        LOG_INFO(log, "Shutting down storages.");
-        global_context->shutdown();
-        LOG_DEBUG(log, "Shutted down storages.");
-    });
-
     if (has_zookeeper && config().has("distributed_ddl"))
     {
         /// DDL worker should be started after all tables were loaded
@@ -723,8 +723,7 @@ bool TCPHandler::receiveData()
         if (!(storage = query_context->tryGetExternalTable(external_table_name)))
         {
             NamesAndTypesList columns = block.getNamesAndTypesList();
-            storage = StorageMemory::create(external_table_name,
-                ColumnsDescription{columns, NamesAndTypesList{}, NamesAndTypesList{}, ColumnDefaults{}, ColumnComments{}, ColumnCodecs{}});
+            storage = StorageMemory::create(external_table_name, ColumnsDescription{columns});
             storage->startup();
             query_context->addExternalTable(external_table_name, storage);
         }
@@ -768,7 +767,7 @@ void TCPHandler::initBlockOutput(const Block & block)
 {
     if (!state.maybe_compressed_out)
     {
-        std::string method = query_context->getSettingsRef().network_compression_method;
+        std::string method = Poco::toUpper(query_context->getSettingsRef().network_compression_method.toString());
         std::optional<int> level;
         if (method == "ZSTD")
             level = query_context->getSettingsRef().network_zstd_compression_level;
@@ -25,7 +25,7 @@ namespace Poco { class Logger; }
 namespace DB
 {

-struct ColumnsDescription;
+class ColumnsDescription;

 /// State of query processing.
 struct QueryState
@@ -77,7 +77,7 @@
         </default>

         <!-- Example of user with readonly access. -->
-        <readonly>
+        <!-- <readonly>
             <password></password>
             <networks incl="networks" replace="replace">
                 <ip>::1</ip>

@@ -85,7 +85,7 @@
             </networks>
             <profile>readonly</profile>
             <quota>default</quota>
-        </readonly>
+        </readonly> -->
     </users>

     <!-- Quotas. -->
@@ -12,6 +12,7 @@ namespace DB
 namespace ErrorCodes
 {
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
+    extern const int BAD_ARGUMENTS;
 }

 namespace
@@ -6,6 +6,7 @@
 #include <boost/noncopyable.hpp>
+#include <roaring.hh>
 #include <Common/HashTable/SmallTable.h>
 #include <Common/PODArray.h>

 namespace DB
 {
@@ -199,8 +199,13 @@ public:
         for (auto & rhs_elem : rhs_set)
         {
             cur_set.emplace(rhs_elem.getValue(), it, inserted);
-            if (inserted && it->getValue().size)
-                it->getValueMutable().data = arena->insert(it->getValue().data, it->getValue().size);
+            if (inserted)
+            {
+                if (it->getValue().size)
+                    it->getValueMutable().data = arena->insert(it->getValue().data, it->getValue().size);
+                else
+                    it->getValueMutable().data = nullptr;
+            }
         }
     }
@ -15,60 +15,60 @@ namespace ErrorCodes
|
||||
namespace
|
||||
{
|
||||
|
||||
template <typename T> using FuncQuantile = AggregateFunctionQuantile<T, QuantileReservoirSampler<T>, NameQuantile, false, Float64, false>;
|
||||
template <typename T> using FuncQuantiles = AggregateFunctionQuantile<T, QuantileReservoirSampler<T>, NameQuantiles, false, Float64, true>;
|
||||
template <typename Value, bool FloatReturn> using FuncQuantile = AggregateFunctionQuantile<Value, QuantileReservoirSampler<Value>, NameQuantile, false, std::conditional_t<FloatReturn, Float64, void>, false>;
|
||||
template <typename Value, bool FloatReturn> using FuncQuantiles = AggregateFunctionQuantile<Value, QuantileReservoirSampler<Value>, NameQuantiles, false, std::conditional_t<FloatReturn, Float64, void>, true>;
|
||||
|
||||
template <typename T> using FuncQuantileDeterministic = AggregateFunctionQuantile<T, QuantileReservoirSamplerDeterministic<T>, NameQuantileDeterministic, true, Float64, false>;
|
||||
template <typename T> using FuncQuantilesDeterministic = AggregateFunctionQuantile<T, QuantileReservoirSamplerDeterministic<T>, NameQuantilesDeterministic, true, Float64, true>;
|
||||
template <typename Value, bool FloatReturn> using FuncQuantileDeterministic = AggregateFunctionQuantile<Value, QuantileReservoirSamplerDeterministic<Value>, NameQuantileDeterministic, true, std::conditional_t<FloatReturn, Float64, void>, false>;
|
||||
template <typename Value, bool FloatReturn> using FuncQuantilesDeterministic = AggregateFunctionQuantile<Value, QuantileReservoirSamplerDeterministic<Value>, NameQuantilesDeterministic, true, std::conditional_t<FloatReturn, Float64, void>, true>;
|
||||
|
||||
template <typename T> using FuncQuantileExact = AggregateFunctionQuantile<T, QuantileExact<T>, NameQuantileExact, false, void, false>;
|
||||
template <typename T> using FuncQuantilesExact = AggregateFunctionQuantile<T, QuantileExact<T>, NameQuantilesExact, false, void, true>;
|
||||
template <typename Value, bool _> using FuncQuantileExact = AggregateFunctionQuantile<Value, QuantileExact<Value>, NameQuantileExact, false, void, false>;
|
||||
template <typename Value, bool _> using FuncQuantilesExact = AggregateFunctionQuantile<Value, QuantileExact<Value>, NameQuantilesExact, false, void, true>;
|
||||
|
||||
-template <typename T> using FuncQuantileExactWeighted = AggregateFunctionQuantile<T, QuantileExactWeighted<T>, NameQuantileExactWeighted, true, void, false>;
-template <typename T> using FuncQuantilesExactWeighted = AggregateFunctionQuantile<T, QuantileExactWeighted<T>, NameQuantilesExactWeighted, true, void, true>;
+template <typename Value, bool _> using FuncQuantileExactWeighted = AggregateFunctionQuantile<Value, QuantileExactWeighted<Value>, NameQuantileExactWeighted, true, void, false>;
+template <typename Value, bool _> using FuncQuantilesExactWeighted = AggregateFunctionQuantile<Value, QuantileExactWeighted<Value>, NameQuantilesExactWeighted, true, void, true>;

-template <typename T> using FuncQuantileTiming = AggregateFunctionQuantile<T, QuantileTiming<T>, NameQuantileTiming, false, Float32, false>;
-template <typename T> using FuncQuantilesTiming = AggregateFunctionQuantile<T, QuantileTiming<T>, NameQuantilesTiming, false, Float32, true>;
+template <typename Value, bool _> using FuncQuantileTiming = AggregateFunctionQuantile<Value, QuantileTiming<Value>, NameQuantileTiming, false, Float32, false>;
+template <typename Value, bool _> using FuncQuantilesTiming = AggregateFunctionQuantile<Value, QuantileTiming<Value>, NameQuantilesTiming, false, Float32, true>;

-template <typename T> using FuncQuantileTimingWeighted = AggregateFunctionQuantile<T, QuantileTiming<T>, NameQuantileTimingWeighted, true, Float32, false>;
-template <typename T> using FuncQuantilesTimingWeighted = AggregateFunctionQuantile<T, QuantileTiming<T>, NameQuantilesTimingWeighted, true, Float32, true>;
+template <typename Value, bool _> using FuncQuantileTimingWeighted = AggregateFunctionQuantile<Value, QuantileTiming<Value>, NameQuantileTimingWeighted, true, Float32, false>;
+template <typename Value, bool _> using FuncQuantilesTimingWeighted = AggregateFunctionQuantile<Value, QuantileTiming<Value>, NameQuantilesTimingWeighted, true, Float32, true>;

-template <typename T> using FuncQuantileTDigest = AggregateFunctionQuantile<T, QuantileTDigest<T>, NameQuantileTDigest, false, Float32, false>;
-template <typename T> using FuncQuantilesTDigest = AggregateFunctionQuantile<T, QuantileTDigest<T>, NameQuantilesTDigest, false, Float32, true>;
+template <typename Value, bool FloatReturn> using FuncQuantileTDigest = AggregateFunctionQuantile<Value, QuantileTDigest<Value>, NameQuantileTDigest, false, std::conditional_t<FloatReturn, Float32, void>, false>;
+template <typename Value, bool FloatReturn> using FuncQuantilesTDigest = AggregateFunctionQuantile<Value, QuantileTDigest<Value>, NameQuantilesTDigest, false, std::conditional_t<FloatReturn, Float32, void>, true>;

-template <typename T> using FuncQuantileTDigestWeighted = AggregateFunctionQuantile<T, QuantileTDigest<T>, NameQuantileTDigestWeighted, true, Float32, false>;
-template <typename T> using FuncQuantilesTDigestWeighted = AggregateFunctionQuantile<T, QuantileTDigest<T>, NameQuantilesTDigestWeighted, true, Float32, true>;
+template <typename Value, bool FloatReturn> using FuncQuantileTDigestWeighted = AggregateFunctionQuantile<Value, QuantileTDigest<Value>, NameQuantileTDigestWeighted, true, std::conditional_t<FloatReturn, Float32, void>, false>;
+template <typename Value, bool FloatReturn> using FuncQuantilesTDigestWeighted = AggregateFunctionQuantile<Value, QuantileTDigest<Value>, NameQuantilesTDigestWeighted, true, std::conditional_t<FloatReturn, Float32, void>, true>;

-template <template <typename> class Function>
+template <template <typename, bool> class Function>
 static constexpr bool supportDecimal()
 {
-    return std::is_same_v<Function<Float32>, FuncQuantileExact<Float32>> ||
-        std::is_same_v<Function<Float32>, FuncQuantilesExact<Float32>>;
+    return std::is_same_v<Function<Float32, false>, FuncQuantileExact<Float32, false>> ||
+        std::is_same_v<Function<Float32, false>, FuncQuantilesExact<Float32, false>>;
 }

-template <template <typename> class Function>
+template <template <typename, bool> class Function>
 AggregateFunctionPtr createAggregateFunctionQuantile(const std::string & name, const DataTypes & argument_types, const Array & params)
 {
     /// Second argument type check doesn't depend on the type of the first one.
-    Function<void>::assertSecondArg(argument_types);
+    Function<void, true>::assertSecondArg(argument_types);

     const DataTypePtr & argument_type = argument_types[0];
     WhichDataType which(argument_type);

 #define DISPATCH(TYPE) \
-    if (which.idx == TypeIndex::TYPE) return std::make_shared<Function<TYPE>>(argument_type, params);
+    if (which.idx == TypeIndex::TYPE) return std::make_shared<Function<TYPE, true>>(argument_type, params);
     FOR_NUMERIC_TYPES(DISPATCH)
 #undef DISPATCH

-    if (which.idx == TypeIndex::Date) return std::make_shared<Function<DataTypeDate::FieldType>>(argument_type, params);
-    if (which.idx == TypeIndex::DateTime) return std::make_shared<Function<DataTypeDateTime::FieldType>>(argument_type, params);
+    if (which.idx == TypeIndex::Date) return std::make_shared<Function<DataTypeDate::FieldType, false>>(argument_type, params);
+    if (which.idx == TypeIndex::DateTime) return std::make_shared<Function<DataTypeDateTime::FieldType, false>>(argument_type, params);

     if constexpr (supportDecimal<Function>())
     {
-        if (which.idx == TypeIndex::Decimal32) return std::make_shared<Function<Decimal32>>(argument_type, params);
-        if (which.idx == TypeIndex::Decimal64) return std::make_shared<Function<Decimal64>>(argument_type, params);
-        if (which.idx == TypeIndex::Decimal128) return std::make_shared<Function<Decimal128>>(argument_type, params);
+        if (which.idx == TypeIndex::Decimal32) return std::make_shared<Function<Decimal32, true>>(argument_type, params);
+        if (which.idx == TypeIndex::Decimal64) return std::make_shared<Function<Decimal64, true>>(argument_type, params);
+        if (which.idx == TypeIndex::Decimal128) return std::make_shared<Function<Decimal128, true>>(argument_type, params);
     }

     throw Exception("Illegal type " + argument_type->getName() + " of argument for aggregate function " + name,
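Note: the `supportDecimal()` gate works because the factory receives the function alias as a template-template parameter and can compare concrete instantiations with `std::is_same_v`; families that don't support Decimal never even instantiate a Decimal specialization. A minimal standalone sketch of the same pattern (all names here are illustrative stand-ins, not the ClickHouse types):

#include <type_traits>

// Stand-ins for the aggregate-function aliases above.
template <typename Value, bool FloatReturn> struct QuantileExactLike {};
template <typename Value, bool FloatReturn> struct QuantileTimingLike {};

// Probe the family with one fixed instantiation to ask "is this the exact family?".
template <template <typename, bool> class Function>
constexpr bool supportDecimal()
{
    return std::is_same_v<Function<float, false>, QuantileExactLike<float, false>>;
}

template <template <typename, bool> class Function>
void createFor(bool is_decimal)
{
    if constexpr (supportDecimal<Function>())
    {
        if (is_decimal)
        {
            // Here the real factory would make_shared<Function<DecimalNN, true>>(...).
        }
    }
    else
        (void)is_decimal; // Decimal branch discarded at compile time for this family.
}

int main()
{
    createFor<QuantileExactLike>(true);   // Decimal branch is compiled
    createFor<QuantileTimingLike>(false); // Decimal branch never instantiated
}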
@@ -65,9 +65,7 @@ class AggregateFunctionQuantile final : public IAggregateFunctionDataHelper<Data
 private:
     using ColVecType = std::conditional_t<IsDecimalNumber<Value>, ColumnDecimal<Value>, ColumnVector<Value>>;

-    static constexpr bool returns_float = !(std::is_same_v<FloatReturnType, void>)
-        && (!(std::is_same_v<Value, DataTypeDate::FieldType> || std::is_same_v<Value, DataTypeDateTime::FieldType>)
-            || std::is_same_v<Data, QuantileTiming<Value>>);
+    static constexpr bool returns_float = !(std::is_same_v<FloatReturnType, void>);
+    static_assert(!IsDecimalNumber<Value> || !returns_float);

     QuantileLevels<Float64> levels;
@@ -15,7 +15,7 @@ namespace ErrorCodes

 Array getAggregateFunctionParametersArray(const ASTPtr & expression_list, const std::string & error_context)
 {
-    const ASTs & parameters = typeid_cast<const ASTExpressionList &>(*expression_list).children;
+    const ASTs & parameters = expression_list->children;
     if (parameters.empty())
         throw Exception("Parameters list to aggregate functions cannot be empty", ErrorCodes::BAD_ARGUMENTS);

@@ -23,14 +23,14 @@ Array getAggregateFunctionParametersArray(const ASTPtr & expression_list, const

     for (size_t i = 0; i < parameters.size(); ++i)
     {
-        const ASTLiteral * lit = typeid_cast<const ASTLiteral *>(parameters[i].get());
-        if (!lit)
+        const auto * literal = parameters[i]->as<ASTLiteral>();
+        if (!literal)
         {
             throw Exception("Parameters to aggregate functions must be literals" + (error_context.empty() ? "" : " (in " + error_context +")"),
                 ErrorCodes::PARAMETERS_TO_AGGREGATE_FUNCTIONS_MUST_BE_LITERALS);
         }

-        params_row[i] = lit->value;
+        params_row[i] = literal->value;
     }

     return params_row;
@@ -67,8 +67,7 @@ void getAggregateFunctionNameAndParametersArray(
         parameters_str.data(), parameters_str.data() + parameters_str.size(),
         "parameters of aggregate function in " + error_context, 0);

-    ASTExpressionList & args_list = typeid_cast<ASTExpressionList &>(*args_ast);
-    if (args_list.children.empty())
+    if (args_ast->children.empty())
         throw Exception("Incorrect list of parameters to aggregate function "
             + aggregate_function_name, ErrorCodes::BAD_ARGUMENTS);
@@ -357,7 +357,7 @@ void Connection::sendQuery(
     if (settings)
     {
        std::optional<int> level;
-        std::string method = settings->network_compression_method;
+        std::string method = Poco::toUpper(settings->network_compression_method.toString());

        /// Bad custom logic
        if (method == "ZSTD")
@@ -3,17 +3,12 @@
 #include <Columns/IColumn.h>
 #include <Columns/ColumnVector.h>
 #include <Core/Defines.h>
+#include <Common/typeid_cast.h>


 namespace DB
 {

-namespace ErrorCodes
-{
-    extern const int ILLEGAL_COLUMN;
-    extern const int NOT_IMPLEMENTED;
-    extern const int BAD_ARGUMENTS;
-}
-
 /** A column of array values.
  * In memory, it is represented as one column of a nested type, whose size is equal to the sum of the sizes of all arrays,
  * and as an array of offsets in it, which allows you to get each element.
@@ -121,6 +116,13 @@ public:
         callback(data);
     }

+    bool structureEquals(const IColumn & rhs) const override
+    {
+        if (auto rhs_concrete = typeid_cast<const ColumnArray *>(&rhs))
+            return data->structureEquals(*rhs_concrete->data);
+        return false;
+    }
+
 private:
     ColumnPtr data;
     ColumnPtr offsets;
@@ -3,6 +3,7 @@
 #include <Core/Field.h>
 #include <Common/Exception.h>
 #include <Columns/IColumn.h>
+#include <Common/typeid_cast.h>


 namespace DB
@@ -190,6 +191,13 @@ public:
         callback(data);
     }

+    bool structureEquals(const IColumn & rhs) const override
+    {
+        if (auto rhs_concrete = typeid_cast<const ColumnConst *>(&rhs))
+            return data->structureEquals(*rhs_concrete->data);
+        return false;
+    }
+
     bool onlyNull() const override { return data->isNullAt(0); }
     bool isColumnConst() const override { return true; }
     bool isNumeric() const override { return data->isNumeric(); }
@@ -2,6 +2,7 @@

 #include <cmath>

+#include <Common/typeid_cast.h>
 #include <Columns/IColumn.h>
 #include <Columns/ColumnVectorHelper.h>

@@ -133,6 +134,13 @@ public:

     void gather(ColumnGathererStream & gatherer_stream) override;

+    bool structureEquals(const IColumn & rhs) const override
+    {
+        if (auto rhs_concrete = typeid_cast<const ColumnDecimal<T> *>(&rhs))
+            return scale == rhs_concrete->scale;
+        return false;
+    }
+
     void insert(const T value) { data.push_back(value); }
     Container & getData() { return data; }
@@ -2,6 +2,7 @@

 #include <Common/PODArray.h>
 #include <Common/memcmpSmall.h>
+#include <Common/typeid_cast.h>
 #include <Columns/IColumn.h>
 #include <Columns/ColumnVectorHelper.h>

@@ -134,6 +135,12 @@ public:

     void getExtremes(Field & min, Field & max) const override;

+    bool structureEquals(const IColumn & rhs) const override
+    {
+        if (auto rhs_concrete = typeid_cast<const ColumnFixedString *>(&rhs))
+            return n == rhs_concrete->n;
+        return false;
+    }
+
     bool canBeInsideNullable() const override { return true; }
@@ -5,6 +5,7 @@
 #include <AggregateFunctions/AggregateFunctionCount.h>
 #include "ColumnsNumber.h"


 namespace DB
 {

@@ -132,6 +133,14 @@ public:
         callback(dictionary.getColumnUniquePtr());
     }

+    bool structureEquals(const IColumn & rhs) const override
+    {
+        if (auto rhs_low_cardinality = typeid_cast<const ColumnLowCardinality *>(&rhs))
+            return idx.getPositions()->structureEquals(*rhs_low_cardinality->idx.getPositions())
+                && dictionary.getColumnUnique().structureEquals(rhs_low_cardinality->dictionary.getColumnUnique());
+        return false;
+    }
+
     bool valuesHaveFixedSize() const override { return getDictionary().valuesHaveFixedSize(); }
     bool isFixedAndContiguous() const override { return false; }
     size_t sizeOfValueIfFixed() const override { return getDictionary().sizeOfValueIfFixed(); }
@@ -23,6 +23,11 @@ public:
     MutableColumnPtr cloneDummy(size_t s_) const override { return ColumnNothing::create(s_); }

     bool canBeInsideNullable() const override { return true; }
+
+    bool structureEquals(const IColumn & rhs) const override
+    {
+        return typeid(rhs) == typeid(ColumnNothing);
+    }
 };

 }
@@ -2,6 +2,8 @@

 #include <Columns/IColumn.h>
+#include <Columns/ColumnsNumber.h>
+#include <Common/typeid_cast.h>


 namespace DB
 {
@@ -89,6 +91,13 @@ public:
         callback(null_map);
     }

+    bool structureEquals(const IColumn & rhs) const override
+    {
+        if (auto rhs_nullable = typeid_cast<const ColumnNullable *>(&rhs))
+            return nested_column->structureEquals(*rhs_nullable->nested_column);
+        return false;
+    }
+
     bool isColumnNullable() const override { return true; }
     bool isFixedAndContiguous() const override { return false; }
     bool valuesHaveFixedSize() const override { return nested_column->valuesHaveFixedSize(); }
@@ -231,6 +231,11 @@ public:

     bool canBeInsideNullable() const override { return true; }

+    bool structureEquals(const IColumn & rhs) const override
+    {
+        return typeid(rhs) == typeid(ColumnString);
+    }
+
     Chars & getChars() { return chars; }
     const Chars & getChars() const { return chars; }
@@ -4,6 +4,7 @@
 #include <IO/Operators.h>
 #include <ext/map.h>
 #include <ext/range.h>
+#include <Common/typeid_cast.h>


 namespace DB
@@ -341,6 +342,23 @@ void ColumnTuple::forEachSubcolumn(ColumnCallback callback)
         callback(column);
 }

+bool ColumnTuple::structureEquals(const IColumn & rhs) const
+{
+    if (auto rhs_tuple = typeid_cast<const ColumnTuple *>(&rhs))
+    {
+        const size_t tuple_size = columns.size();
+        if (tuple_size != rhs_tuple->columns.size())
+            return false;
+
+        for (const auto i : ext::range(0, tuple_size))
+            if (!columns[i]->structureEquals(*rhs_tuple->columns[i]))
+                return false;
+
+        return true;
+    }
+    else
+        return false;
+}
+

 }
@@ -73,6 +73,7 @@ public:
     size_t allocatedBytes() const override;
     void protect() override;
     void forEachSubcolumn(ColumnCallback callback) override;
+    bool structureEquals(const IColumn & rhs) const override;

     size_t tupleSize() const { return columns.size(); }
@@ -95,6 +95,13 @@ public:
         nested_column_nullable = ColumnNullable::create(column_holder, nested_null_mask);
     }

+    bool structureEquals(const IColumn & rhs) const override
+    {
+        if (auto rhs_concrete = typeid_cast<const ColumnUnique *>(&rhs))
+            return column_holder->structureEquals(*rhs_concrete->column_holder);
+        return false;
+    }
+
     const UInt64 * tryGetSavedHash() const override { return index.tryGetSavedHash(); }

     UInt128 getHash() const override { return hash.getHash(*getRawColumnPtr()); }
@@ -251,6 +251,12 @@ public:
     size_t sizeOfValueIfFixed() const override { return sizeof(T); }
     StringRef getRawData() const override { return StringRef(reinterpret_cast<const char*>(data.data()), data.size()); }

+    bool structureEquals(const IColumn & rhs) const override
+    {
+        return typeid(rhs) == typeid(ColumnVector<T>);
+    }
+
     /** More efficient methods of manipulation - to manipulate with data directly. */
     Container & getData()
     {
@@ -262,6 +262,13 @@ public:
     using ColumnCallback = std::function<void(Ptr&)>;
     virtual void forEachSubcolumn(ColumnCallback) {}

+    /// Columns have equal structure.
+    /// If true - you can use "compareAt", "insertFrom", etc. methods.
+    virtual bool structureEquals(const IColumn &) const
+    {
+        throw Exception("Method structureEquals is not supported for " + getName(), ErrorCodes::NOT_IMPLEMENTED);
+    }
+
     MutablePtr mutate() const &&
     {
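Note: the contract stated in that comment — structureEquals gates compareAt/insertFrom-style operations between two columns — can be sketched outside ClickHouse like this (toy types, not the real IColumn hierarchy):

#include <typeinfo>
#include <vector>

struct IColumnLike
{
    virtual ~IColumnLike() = default;
    // True when the two columns have byte-compatible layouts.
    virtual bool structureEquals(const IColumnLike & rhs) const = 0;
};

template <typename T>
struct VectorColumn : IColumnLike
{
    std::vector<T> data;
    bool structureEquals(const IColumnLike & rhs) const override
    {
        // For a flat vector, same dynamic type implies same layout.
        return typeid(rhs) == typeid(VectorColumn<T>);
    }
};

// Only bulk-copy between columns whose structures match; otherwise the
// caller must fall back to a slower value-by-value conversion.
template <typename T>
bool tryAppend(VectorColumn<T> & dst, const IColumnLike & src)
{
    if (!dst.structureEquals(src))
        return false;
    const auto & s = static_cast<const VectorColumn<T> &>(src);
    dst.data.insert(dst.data.end(), s.data.begin(), s.data.end());
    return true;
}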
@@ -1,4 +1,4 @@
 if(USE_GTEST)
     add_executable(column_unique column_unique.cpp)
-    target_link_libraries(column_unique PRIVATE dbms gtest_main)
+    target_link_libraries(column_unique PRIVATE dbms ${GTEST_BOTH_LIBRARIES})
 endif()
@@ -21,6 +21,8 @@ namespace ErrorCodes

 void CurrentThread::updatePerformanceCounters()
 {
+    if (unlikely(!current_thread))
+        return;
     get().updatePerformanceCounters();
 }

@@ -37,30 +39,38 @@ ProfileEvents::Counters & CurrentThread::getProfileEvents()
     return current_thread ? get().performance_counters : ProfileEvents::global_counters;
 }

-MemoryTracker & CurrentThread::getMemoryTracker()
+MemoryTracker * CurrentThread::getMemoryTracker()
 {
-    return get().memory_tracker;
+    if (unlikely(!current_thread))
+        return nullptr;
+    return &get().memory_tracker;
 }

 void CurrentThread::updateProgressIn(const Progress & value)
 {
+    if (unlikely(!current_thread))
+        return;
     get().progress_in.incrementPiecewiseAtomically(value);
 }

 void CurrentThread::updateProgressOut(const Progress & value)
 {
+    if (unlikely(!current_thread))
+        return;
     get().progress_out.incrementPiecewiseAtomically(value);
 }

 void CurrentThread::attachInternalTextLogsQueue(const std::shared_ptr<InternalTextLogsQueue> & logs_queue)
 {
+    if (unlikely(!current_thread))
+        return;
     get().attachInternalTextLogsQueue(logs_queue);
 }

 std::shared_ptr<InternalTextLogsQueue> CurrentThread::getInternalTextLogsQueue()
 {
     /// NOTE: this method could be called at early server startup stage
-    if (!current_thread)
+    if (unlikely(!current_thread))
         return nullptr;

     if (get().getCurrentState() == ThreadStatus::ThreadState::Died)
@@ -71,7 +81,7 @@ std::shared_ptr<InternalTextLogsQueue> CurrentThread::getInternalTextLogsQueue()

 ThreadGroupStatusPtr CurrentThread::getGroup()
 {
-    if (!current_thread)
+    if (unlikely(!current_thread))
         return nullptr;

     return get().getThreadGroup();
@@ -45,7 +45,7 @@ public:
     static void updatePerformanceCounters();

     static ProfileEvents::Counters & getProfileEvents();
-    static MemoryTracker & getMemoryTracker();
+    static MemoryTracker * getMemoryTracker();

     /// Update read and write rows (bytes) statistics (used in system.query_thread_log)
     static void updateProgressIn(const Progress & value);
@@ -190,24 +190,27 @@ namespace CurrentMemoryTracker
 {
     void alloc(Int64 size)
     {
-        if (DB::current_thread)
-            DB::CurrentThread::getMemoryTracker().alloc(size);
+        if (auto memory_tracker = DB::CurrentThread::getMemoryTracker())
+            memory_tracker->alloc(size);
     }

     void realloc(Int64 old_size, Int64 new_size)
     {
-        if (DB::current_thread)
-            DB::CurrentThread::getMemoryTracker().alloc(new_size - old_size);
+        if (auto memory_tracker = DB::CurrentThread::getMemoryTracker())
+            memory_tracker->alloc(new_size - old_size);
     }

     void free(Int64 size)
     {
-        if (DB::current_thread)
-            DB::CurrentThread::getMemoryTracker().free(size);
+        if (auto memory_tracker = DB::CurrentThread::getMemoryTracker())
+            memory_tracker->free(size);
     }
 }

 DB::SimpleActionLock getCurrentMemoryTrackerActionLock()
 {
-    return DB::CurrentThread::getMemoryTracker().blocker.cancel();
+    auto memory_tracker = DB::CurrentThread::getMemoryTracker();
+    if (!memory_tracker)
+        return {};
+    return memory_tracker->blocker.cancel();
 }
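Note: the reference-to-pointer change turns "always valid" into "may be absent", so every caller migrates from an unconditional call to the guarded form seen above. A condensed standalone sketch of that calling convention (simplified stand-in types):

#include <cstdint>

struct MemoryTracker { void alloc(int64_t) {} };

namespace current_thread_detail
{
    // May be unset on threads that never attached to a query.
    thread_local MemoryTracker * tracker = nullptr;
}

// New-style accessor: returns nullptr instead of touching missing thread state.
inline MemoryTracker * getMemoryTracker()
{
    return current_thread_detail::tracker;
}

void onAlloc(int64_t size)
{
    // The guarded pattern used throughout the hunks above.
    if (auto * memory_tracker = getMemoryTracker())
        memory_tracker->alloc(size);
}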
@@ -173,10 +173,10 @@ protected:
     /// The operation is slow and performed only for debug builds.
     void protectImpl(int prot)
     {
-        static constexpr size_t PAGE_SIZE = 4096;
+        static constexpr size_t PROTECT_PAGE_SIZE = 4096;

-        char * left_rounded_up = reinterpret_cast<char *>((reinterpret_cast<intptr_t>(c_start) - pad_left + PAGE_SIZE - 1) / PAGE_SIZE * PAGE_SIZE);
-        char * right_rounded_down = reinterpret_cast<char *>((reinterpret_cast<intptr_t>(c_end_of_storage) + pad_right) / PAGE_SIZE * PAGE_SIZE);
+        char * left_rounded_up = reinterpret_cast<char *>((reinterpret_cast<intptr_t>(c_start) - pad_left + PROTECT_PAGE_SIZE - 1) / PROTECT_PAGE_SIZE * PROTECT_PAGE_SIZE);
+        char * right_rounded_down = reinterpret_cast<char *>((reinterpret_cast<intptr_t>(c_end_of_storage) + pad_right) / PROTECT_PAGE_SIZE * PROTECT_PAGE_SIZE);

        if (right_rounded_down > left_rounded_up)
        {
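Note: the two reinterpret_cast expressions are plain round-up/round-down to a page boundary, so only pages fully covered by the buffer get protected. A worked sketch of the arithmetic (4096-byte pages, matching the constant above):

#include <cassert>
#include <cstdint>

constexpr std::uintptr_t PROTECT_PAGE_SIZE = 4096;

constexpr std::uintptr_t roundUp(std::uintptr_t p)   { return (p + PROTECT_PAGE_SIZE - 1) / PROTECT_PAGE_SIZE * PROTECT_PAGE_SIZE; }
constexpr std::uintptr_t roundDown(std::uintptr_t p) { return p / PROTECT_PAGE_SIZE * PROTECT_PAGE_SIZE; }

int main()
{
    // A buffer spanning [5000, 13000): only the page range [8192, 12288)
    // lies entirely inside it, so that is the region that would be protected.
    assert(roundUp(5000) == 8192);
    assert(roundDown(13000) == 12288);
    assert(roundDown(13000) > roundUp(5000)); // the "right > left" check in protectImpl
}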
dbms/src/Common/TypePromotion.h (new file, 63 lines)
@@ -0,0 +1,63 @@
#pragma once

#include <Common/typeid_cast.h>

namespace DB
{

/* This base class adds public methods:
 * - Derived * as<Derived>()
 * - const Derived * as<Derived>() const
 * - Derived & as<Derived &>()
 * - const Derived & as<Derived &>() const
 */

template <class Base>
class TypePromotion
{
private:
    /// Need a helper-struct to fight the lack of the function-template partial specialization.
    template <class T, bool is_const, bool is_ref = std::is_reference_v<T>>
    struct CastHelper;

    template <class T>
    struct CastHelper<T, false, true>
    {
        auto & value(Base * ptr) { return typeid_cast<T>(*ptr); }
    };

    template <class T>
    struct CastHelper<T, true, true>
    {
        auto & value(const Base * ptr) { return typeid_cast<std::add_lvalue_reference_t<std::add_const_t<std::remove_reference_t<T>>>>(*ptr); }
    };

    template <class T>
    struct CastHelper<T, false, false>
    {
        auto * value(Base * ptr) { return typeid_cast<T *>(ptr); }
    };

    template <class T>
    struct CastHelper<T, true, false>
    {
        auto * value(const Base * ptr) { return typeid_cast<std::add_const_t<T> *>(ptr); }
    };

public:
    template <class Derived>
    auto as() -> std::invoke_result_t<decltype(&CastHelper<Derived, false>::value), CastHelper<Derived, false>, Base *>
    {
        // TODO: if we do downcast to base type, then just return |this|.
        return CastHelper<Derived, false>().value(static_cast<Base *>(this));
    }

    template <class Derived>
    auto as() const -> std::invoke_result_t<decltype(&CastHelper<Derived, true>::value), CastHelper<Derived, true>, const Base *>
    {
        // TODO: if we do downcast to base type, then just return |this|.
        return CastHelper<Derived, true>().value(static_cast<const Base *>(this));
    }
};

} // namespace DB
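Note: throughout the rest of this diff, `ast->as<ASTFunction>()` and `ast->as<ASTCreateQuery &>()` are exactly these two shapes: the pointer form yields nullptr on mismatch, the reference form throws. A compilable sketch of a base class offering the same surface (toy AST hierarchy; this sketch substitutes dynamic_cast for ClickHouse's exact-match typeid_cast to stay self-contained):

#include <stdexcept>

struct IASTLike
{
    virtual ~IASTLike() = default;

    template <class Derived>
    Derived * as() // pointer form: nullptr on mismatch
    {
        return dynamic_cast<Derived *>(this);
    }

    template <class Derived>
    Derived & asRef() // reference form: throws on mismatch, like as<T &>() above
    {
        if (auto * p = dynamic_cast<Derived *>(this))
            return *p;
        throw std::runtime_error("bad AST cast");
    }
};

struct FunctionNode : IASTLike { const char * name = "tuple"; };
struct LiteralNode : IASTLike {};

int main()
{
    FunctionNode f;
    IASTLike & ast = f;
    if (auto * func = ast.as<FunctionNode>())   // matches -> non-null
        (void)func->name;
    if (ast.as<LiteralNode>() == nullptr) {}    // mismatch -> nullptr, no throw
}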
@@ -25,18 +25,32 @@ namespace DB
 template <typename To, typename From>
 std::enable_if_t<std::is_reference_v<To>, To> typeid_cast(From & from)
 {
-    if (typeid(from) == typeid(To))
-        return static_cast<To>(from);
-    else
-        throw DB::Exception("Bad cast from type " + demangle(typeid(from).name()) + " to " + demangle(typeid(To).name()),
-                            DB::ErrorCodes::BAD_CAST);
+    try
+    {
+        if (typeid(from) == typeid(To))
+            return static_cast<To>(from);
+    }
+    catch (const std::exception & e)
+    {
+        throw DB::Exception(e.what(), DB::ErrorCodes::BAD_CAST);
+    }
+
+    throw DB::Exception("Bad cast from type " + demangle(typeid(from).name()) + " to " + demangle(typeid(To).name()),
+        DB::ErrorCodes::BAD_CAST);
 }

 template <typename To, typename From>
 To typeid_cast(From * from)
 {
-    if (typeid(*from) == typeid(std::remove_pointer_t<To>))
-        return static_cast<To>(from);
-    else
-        return nullptr;
+    try
+    {
+        if (typeid(*from) == typeid(std::remove_pointer_t<To>))
+            return static_cast<To>(from);
+        else
+            return nullptr;
+    }
+    catch (const std::exception & e)
+    {
+        throw DB::Exception(e.what(), DB::ErrorCodes::BAD_CAST);
+    }
 }
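Note: the added try/catch matters because `typeid(*from)` on a polymorphic type is required to throw std::bad_typeid when `from` is null; the wrapper converts that into the project's own BAD_CAST exception instead of letting a foreign exception type escape. A minimal standalone demonstration of the failure mode being guarded (not the DB::Exception machinery):

#include <cstdio>
#include <typeinfo>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};

template <typename To, typename From>
To * checked_cast(From * from)
{
    try
    {
        if (typeid(*from) == typeid(To)) // throws std::bad_typeid if from == nullptr
            return static_cast<To *>(from);
        return nullptr;
    }
    catch (const std::exception & e)
    {
        std::printf("caught: %s\n", e.what()); // real code rethrows as a BAD_CAST exception
        return nullptr;
    }
}

int main()
{
    Base * null_base = nullptr;
    checked_cast<Derived>(null_base); // prints the caught std::bad_typeid instead of crashing
}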
@@ -144,7 +144,7 @@ void registerCodecDelta(CompressionCodecFactory & factory)
            throw Exception("Delta codec must have 1 parameter, given " + std::to_string(arguments->children.size()), ErrorCodes::ILLEGAL_SYNTAX_FOR_CODEC_TYPE);

        const auto children = arguments->children;
-        const ASTLiteral * literal = static_cast<const ASTLiteral *>(children[0].get());
+        const auto * literal = children[0]->as<ASTLiteral>();
        size_t user_bytes_size = literal->value.safeGet<UInt64>();
        if (user_bytes_size != 1 && user_bytes_size != 2 && user_bytes_size != 4 && user_bytes_size != 8)
            throw Exception("Delta value for delta codec can be 1, 2, 4 or 8, given " + toString(user_bytes_size), ErrorCodes::ILLEGAL_CODEC_PARAMETER);

@@ -86,7 +86,7 @@ void registerCodecLZ4HC(CompressionCodecFactory & factory)
            throw Exception("LZ4HC codec must have 1 parameter, given " + std::to_string(arguments->children.size()), ErrorCodes::ILLEGAL_SYNTAX_FOR_CODEC_TYPE);

        const auto children = arguments->children;
-        const ASTLiteral * literal = static_cast<const ASTLiteral *>(children[0].get());
+        const auto * literal = children[0]->as<ASTLiteral>();
        level = literal->value.safeGet<UInt64>();
    }

@@ -100,4 +100,3 @@ CompressionCodecLZ4HC::CompressionCodecLZ4HC(int level_)
 }

 }
-

@@ -73,7 +73,7 @@ void registerCodecZSTD(CompressionCodecFactory & factory)
            throw Exception("ZSTD codec must have 1 parameter, given " + std::to_string(arguments->children.size()), ErrorCodes::ILLEGAL_SYNTAX_FOR_CODEC_TYPE);

        const auto children = arguments->children;
-        const ASTLiteral * literal = static_cast<const ASTLiteral *>(children[0].get());
+        const auto * literal = children[0]->as<ASTLiteral>();
        level = literal->value.safeGet<UInt64>();
        if (level > ZSTD_maxCLevel())
            throw Exception("ZSTD codec can't have level more that " + toString(ZSTD_maxCLevel()) + ", given " + toString(level), ErrorCodes::ILLEGAL_CODEC_PARAMETER);
@@ -56,15 +56,15 @@ CompressionCodecPtr CompressionCodecFactory::get(const std::vector<CodecNameWith

 CompressionCodecPtr CompressionCodecFactory::get(const ASTPtr & ast, DataTypePtr column_type) const
 {
-    if (const auto * func = typeid_cast<const ASTFunction *>(ast.get()))
+    if (const auto * func = ast->as<ASTFunction>())
     {
         Codecs codecs;
         codecs.reserve(func->arguments->children.size());
         for (const auto & inner_codec_ast : func->arguments->children)
         {
-            if (const auto * family_name = typeid_cast<const ASTIdentifier *>(inner_codec_ast.get()))
+            if (const auto * family_name = inner_codec_ast->as<ASTIdentifier>())
                 codecs.emplace_back(getImpl(family_name->name, {}, column_type));
-            else if (const auto * ast_func = typeid_cast<const ASTFunction *>(inner_codec_ast.get()))
+            else if (const auto * ast_func = inner_codec_ast->as<ASTFunction>())
                 codecs.emplace_back(getImpl(ast_func->name, ast_func->arguments, column_type));
             else
                 throw Exception("Unexpected AST element for compression codec", ErrorCodes::UNEXPECTED_AST_STRUCTURE);
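Note: the factory walk distinguishes a bare identifier (`LZ4`) from a parametrized function (`Delta(4)`) inside one `CODEC(...)` list. A toy version of that dispatch over a pre-parsed node list (illustrative structs, not the real AST classes):

#include <cstdio>
#include <optional>
#include <string>
#include <vector>

struct CodecNode
{
    std::string family;
    std::optional<int> argument; // empty -> written as a bare identifier
};

int main()
{
    // CODEC(Delta(4), LZ4)
    std::vector<CodecNode> children = {{"Delta", 4}, {"LZ4", std::nullopt}};

    for (const auto & node : children)
    {
        if (node.argument)  // "ASTFunction" case: family name plus arguments
            std::printf("getImpl(%s, args=%d)\n", node.family.c_str(), *node.argument);
        else                // "ASTIdentifier" case: family name only
            std::printf("getImpl(%s, {})\n", node.family.c_str());
    }
}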
@@ -1,14 +1,17 @@
 #pragma once

 #include <memory>
 #include <Compression/CompressionInfo.h>
 #include <Compression/ICompressionCodec.h>
 #include <DataTypes/IDataType.h>
 #include <Parsers/IAST_fwd.h>
 #include <Common/IFactoryWithAliases.h>

 #include <ext/singleton.h>

 #include <functional>
 #include <memory>
 #include <optional>
 #include <unordered_map>
 #include <ext/singleton.h>
 #include <DataTypes/IDataType.h>
 #include <Common/IFactoryWithAliases.h>
 #include <Compression/ICompressionCodec.h>
 #include <Compression/CompressionInfo.h>

 namespace DB
 {
@@ -19,10 +22,6 @@ using CompressionCodecPtr = std::shared_ptr<ICompressionCodec>;

 using CodecNameWithLevel = std::pair<String, std::optional<int>>;

-class IAST;
-
-using ASTPtr = std::shared_ptr<IAST>;
-
 /** Creates a codec object by name of compression algorithm family and parameters.
  */
 class CompressionCodecFactory final : public ext::singleton<CompressionCodecFactory>
@@ -2,9 +2,9 @@

 #include <cmath>
 #include <limits>

 #include "Defines.h"
 #include "Types.h"
 #include <Common/NaNUtils.h>
 #include <Core/Types.h>
 #include <Common/UInt128.h>

 /** Preceptually-correct number comparisons.
@@ -250,7 +250,8 @@ void BackgroundSchedulePool::threadFunction()

     attachToThreadGroup();
     SCOPE_EXIT({ CurrentThread::detachQueryIfNotDetached(); });
-    CurrentThread::getMemoryTracker().setMetric(CurrentMetrics::MemoryTrackingInBackgroundSchedulePool);
+    if (auto memory_tracker = CurrentThread::getMemoryTracker())
+        memory_tracker->setMetric(CurrentMetrics::MemoryTrackingInBackgroundSchedulePool);

     while (!shutdown)
     {
@@ -20,7 +20,7 @@ namespace ErrorCodes
 InputStreamFromASTInsertQuery::InputStreamFromASTInsertQuery(
     const ASTPtr & ast, ReadBuffer * input_buffer_tail_part, const Block & header, Context & context)
 {
-    const ASTInsertQuery * ast_insert_query = dynamic_cast<const ASTInsertQuery *>(ast.get());
+    const auto * ast_insert_query = ast->as<ASTInsertQuery>();

     if (!ast_insert_query)
         throw Exception("Logical error: query requires data to insert, but it is not INSERT query", ErrorCodes::LOGICAL_ERROR);
@@ -52,8 +52,12 @@ InputStreamFromASTInsertQuery::InputStreamFromASTInsertQuery(
     res_stream = context.getInputFormat(format, *input_buffer_contacenated, header, context.getSettings().max_insert_block_size);

     auto columns_description = ColumnsDescription::loadFromContext(context, ast_insert_query->database, ast_insert_query->table);
-    if (columns_description && !columns_description->defaults.empty())
-        res_stream = std::make_shared<AddingDefaultsBlockInputStream>(res_stream, columns_description->defaults, context);
+    if (columns_description)
+    {
+        auto column_defaults = columns_description->getDefaults();
+        if (!column_defaults.empty())
+            res_stream = std::make_shared<AddingDefaultsBlockInputStream>(res_stream, column_defaults, context);
+    }
 }

 }
@@ -340,30 +340,30 @@ static DataTypePtr create(const ASTPtr & arguments)
         throw Exception("Data type AggregateFunction requires parameters: "
             "name of aggregate function and list of data types for arguments", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

-    if (const ASTFunction * parametric = typeid_cast<const ASTFunction *>(arguments->children[0].get()))
+    if (const auto * parametric = arguments->children[0]->as<ASTFunction>())
     {
         if (parametric->parameters)
             throw Exception("Unexpected level of parameters to aggregate function", ErrorCodes::SYNTAX_ERROR);
         function_name = parametric->name;

-        const ASTs & parameters = typeid_cast<const ASTExpressionList &>(*parametric->arguments).children;
+        const ASTs & parameters = parametric->arguments->children;
         params_row.resize(parameters.size());

         for (size_t i = 0; i < parameters.size(); ++i)
         {
-            const ASTLiteral * lit = typeid_cast<const ASTLiteral *>(parameters[i].get());
-            if (!lit)
+            const auto * literal = parameters[i]->as<ASTLiteral>();
+            if (!literal)
                 throw Exception("Parameters to aggregate functions must be literals",
                     ErrorCodes::PARAMETERS_TO_AGGREGATE_FUNCTIONS_MUST_BE_LITERALS);

-            params_row[i] = lit->value;
+            params_row[i] = literal->value;
         }
     }
     else if (auto opt_name = getIdentifierName(arguments->children[0]))
     {
         function_name = *opt_name;
     }
-    else if (typeid_cast<ASTLiteral *>(arguments->children[0].get()))
+    else if (arguments->children[0]->as<ASTLiteral>())
     {
         throw Exception("Aggregate function name for data type AggregateFunction must be passed as identifier (without quotes) or function",
             ErrorCodes::BAD_ARGUMENTS);
@@ -389,4 +389,3 @@ void registerDataTypeAggregateFunction(DataTypeFactory & factory)


 }
@@ -186,7 +186,7 @@ static DataTypePtr create(const ASTPtr & arguments)
     if (arguments->children.size() != 1)
         throw Exception("DateTime data type can optionally have only one argument - time zone name", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

-    const ASTLiteral * arg = typeid_cast<const ASTLiteral *>(arguments->children[0].get());
+    const auto * arg = arguments->children[0]->as<ASTLiteral>();
     if (!arg || arg->value.getType() != Field::Types::String)
         throw Exception("Parameter for DateTime data type must be string literal", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
@@ -357,7 +357,7 @@ static DataTypePtr create(const ASTPtr & arguments)
     /// Children must be functions 'equals' with string literal as left argument and numeric literal as right argument.
     for (const ASTPtr & child : arguments->children)
     {
-        const ASTFunction * func = typeid_cast<const ASTFunction *>(child.get());
+        const auto * func = child->as<ASTFunction>();
         if (!func
             || func->name != "equals"
             || func->parameters
@@ -366,8 +366,8 @@ static DataTypePtr create(const ASTPtr & arguments)
             throw Exception("Elements of Enum data type must be of form: 'name' = number, where name is string literal and number is an integer",
                 ErrorCodes::UNEXPECTED_AST_STRUCTURE);

-        const ASTLiteral * name_literal = typeid_cast<const ASTLiteral *>(func->arguments->children[0].get());
-        const ASTLiteral * value_literal = typeid_cast<const ASTLiteral *>(func->arguments->children[1].get());
+        const auto * name_literal = func->arguments->children[0]->as<ASTLiteral>();
+        const auto * value_literal = func->arguments->children[1]->as<ASTLiteral>();

         if (!name_literal
             || !value_literal
@@ -32,19 +32,19 @@ DataTypePtr DataTypeFactory::get(const String & full_name) const

 DataTypePtr DataTypeFactory::get(const ASTPtr & ast) const
 {
-    if (const ASTFunction * func = typeid_cast<const ASTFunction *>(ast.get()))
+    if (const auto * func = ast->as<ASTFunction>())
     {
         if (func->parameters)
             throw Exception("Data type cannot have multiple parenthesed parameters.", ErrorCodes::ILLEGAL_SYNTAX_FOR_DATA_TYPE);
         return get(func->name, func->arguments);
     }

-    if (const ASTIdentifier * ident = typeid_cast<const ASTIdentifier *>(ast.get()))
+    if (const auto * ident = ast->as<ASTIdentifier>())
     {
         return get(ident->name, {});
     }

-    if (const ASTLiteral * lit = typeid_cast<const ASTLiteral *>(ast.get()))
+    if (const auto * lit = ast->as<ASTLiteral>())
     {
         if (lit->value.isNull())
             return get("Null", {});
@@ -1,12 +1,15 @@
 #pragma once

 #include <memory>
 #include <functional>
 #include <unordered_map>
 #include <Common/IFactoryWithAliases.h>
 #include <DataTypes/IDataType.h>
 #include <Parsers/IAST_fwd.h>
 #include <Common/IFactoryWithAliases.h>

 #include <ext/singleton.h>

 #include <functional>
 #include <memory>
 #include <unordered_map>


 namespace DB
 {
@@ -17,9 +20,6 @@ using DataTypePtr = std::shared_ptr<const IDataType>;
 class IDataTypeDomain;
 using DataTypeDomainPtr = std::unique_ptr<const IDataTypeDomain>;

-class IAST;
-using ASTPtr = std::shared_ptr<IAST>;
-
 /** Creates a data type by name of data type family and parameters.
  */
@@ -273,7 +273,7 @@ static DataTypePtr create(const ASTPtr & arguments)
     if (!arguments || arguments->children.size() != 1)
         throw Exception("FixedString data type family must have exactly one argument - size in bytes", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

-    const ASTLiteral * argument = typeid_cast<const ASTLiteral *>(arguments->children[0].get());
+    const auto * argument = arguments->children[0]->as<ASTLiteral>();
     if (!argument || argument->value.getType() != Field::Types::UInt64 || argument->value.get<UInt64>() == 0)
         throw Exception("FixedString data type family must have a number (positive integer) as its argument", ErrorCodes::UNEXPECTED_AST_STRUCTURE);
@@ -531,7 +531,7 @@ static DataTypePtr create(const ASTPtr & arguments)

     for (const ASTPtr & child : arguments->children)
     {
-        if (const ASTNameTypePair * name_and_type_pair = typeid_cast<const ASTNameTypePair *>(child.get()))
+        if (const auto * name_and_type_pair = child->as<ASTNameTypePair>())
         {
             nested_types.emplace_back(DataTypeFactory::instance().get(name_and_type_pair->type));
             names.emplace_back(name_and_type_pair->name);
@@ -208,8 +208,8 @@ static DataTypePtr create(const ASTPtr & arguments)
         throw Exception("Decimal data type family must have exactly two arguments: precision and scale",
             ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

-    const ASTLiteral * precision = typeid_cast<const ASTLiteral *>(arguments->children[0].get());
-    const ASTLiteral * scale = typeid_cast<const ASTLiteral *>(arguments->children[1].get());
+    const auto * precision = arguments->children[0]->as<ASTLiteral>();
+    const auto * scale = arguments->children[1]->as<ASTLiteral>();

     if (!precision || precision->value.getType() != Field::Types::UInt64 ||
         !scale || !(scale->value.getType() == Field::Types::Int64 || scale->value.getType() == Field::Types::UInt64))
@@ -228,7 +228,7 @@ static DataTypePtr createExect(const ASTPtr & arguments)
         throw Exception("Decimal data type family must have exactly two arguments: precision and scale",
             ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

-    const ASTLiteral * scale_arg = typeid_cast<const ASTLiteral *>(arguments->children[0].get());
+    const auto * scale_arg = arguments->children[0]->as<ASTLiteral>();

     if (!scale_arg || !(scale_arg->value.getType() == Field::Types::Int64 || scale_arg->value.getType() == Field::Types::UInt64))
         throw Exception("Decimal data type family must have a two numbers as its arguments", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
@@ -77,51 +77,6 @@ std::string extractTableName(const std::string & nested_name)
 }


 NamesAndTypesList flatten(const NamesAndTypesList & names_and_types)
 {
     std::unordered_map<std::string, std::vector<std::string>> dummy;
     return flattenWithMapping(names_and_types, dummy);
 }


 NamesAndTypesList flattenWithMapping(const NamesAndTypesList & names_and_types, std::unordered_map<std::string, std::vector<std::string>> & mapping)
 {
     NamesAndTypesList res;

     for (const auto & name_type : names_and_types)
     {
         if (const DataTypeArray * type_arr = typeid_cast<const DataTypeArray *>(name_type.type.get()))
         {
             if (const DataTypeTuple * type_tuple = typeid_cast<const DataTypeTuple *>(type_arr->getNestedType().get()))
             {
                 const DataTypes & elements = type_tuple->getElements();
                 const Strings & names = type_tuple->getElementNames();
                 size_t tuple_size = elements.size();

                 for (size_t i = 0; i < tuple_size; ++i)
                 {
                     String nested_name = concatenateName(name_type.name, names[i]);
                     mapping[name_type.name].push_back(nested_name);
                     res.emplace_back(nested_name, std::make_shared<DataTypeArray>(elements[i]));
                 }
             }
             else
             {
                 mapping[name_type.name].push_back(name_type.name);
                 res.push_back(name_type);
             }
         }
         else
         {
             mapping[name_type.name].push_back(name_type.name);
             res.push_back(name_type);
         }
     }

     return res;
 }

 Block flatten(const Block & block)
 {
     Block res;
@@ -17,10 +17,7 @@ namespace Nested
     std::string extractTableName(const std::string & nested_name);

     /// Replace Array(Tuple(...)) columns to a multiple of Array columns in a form of `column_name.element_name`.
     NamesAndTypesList flatten(const NamesAndTypesList & names_and_types);
     Block flatten(const Block & block);
     /// Works as normal flatten, but provide information about flattened columns
     NamesAndTypesList flattenWithMapping(const NamesAndTypesList & names_and_types, std::unordered_map<std::string, std::vector<std::string>> & mapping);

     /// Collect Array columns in a form of `column_name.element_name` to single Array(Tuple(...)) column.
     NamesAndTypesList collect(const NamesAndTypesList & names_and_types);
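Note: concretely, flattening turns one `Array(Tuple(...))` column into sibling `Array` columns named `table.element`, and flattenWithMapping records which flat names came from which original column. A toy model of that mapping using plain STL (not the ClickHouse NamesAndTypesList):

#include <cstdio>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

int main()
{
    // Input column: n Array(Tuple(a UInt8, b String))
    const std::vector<std::pair<std::string, std::string>> elements = {{"a", "UInt8"}, {"b", "String"}};

    std::vector<std::pair<std::string, std::string>> flat;
    std::unordered_map<std::string, std::vector<std::string>> mapping;

    const std::string name = "n";
    for (const auto & [elem, type] : elements)
    {
        std::string nested_name = name + "." + elem;       // concatenateName(...)
        mapping[name].push_back(nested_name);              // remember the origin
        flat.emplace_back(nested_name, "Array(" + type + ")");
    }

    for (const auto & [n, t] : flat)
        std::printf("%s -> %s\n", n.c_str(), t.c_str());   // n.a -> Array(UInt8), n.b -> Array(String)
}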
@@ -7,5 +7,5 @@ target_link_libraries (data_type_string PRIVATE dbms)

 if(USE_GTEST)
     add_executable(data_type_get_common_type data_type_get_common_type.cpp)
-    target_link_libraries(data_type_get_common_type PRIVATE dbms gtest_main)
+    target_link_libraries(data_type_get_common_type PRIVATE dbms ${GTEST_BOTH_LIBRARIES})
 endif()
@@ -370,7 +370,7 @@ static ASTPtr getCreateQueryFromMetadata(const String & metadata_path, const Str

     if (ast)
     {
-        ASTCreateQuery & ast_create_query = typeid_cast<ASTCreateQuery &>(*ast);
+        auto & ast_create_query = ast->as<ASTCreateQuery &>();
         ast_create_query.attach = false;
         ast_create_query.database = database;
     }
@@ -415,8 +415,7 @@ void DatabaseOrdinary::renameTable(
     ASTPtr ast = getQueryFromMetadata(detail::getTableMetadataPath(metadata_path, table_name));
     if (!ast)
         throw Exception("There is no metadata file for table " + table_name, ErrorCodes::FILE_DOESNT_EXIST);
-    ASTCreateQuery & ast_create_query = typeid_cast<ASTCreateQuery &>(*ast);
-    ast_create_query.table = to_table_name;
+    ast->as<ASTCreateQuery &>().table = to_table_name;

     /// NOTE Non-atomic.
     to_database_concrete->createTable(context, to_table_name, table, ast);
@@ -534,7 +533,7 @@ void DatabaseOrdinary::alterTable(
     ParserCreateQuery parser;
     ASTPtr ast = parseQuery(parser, statement.data(), statement.data() + statement.size(), "in file " + table_metadata_path, 0);

-    ASTCreateQuery & ast_create_query = typeid_cast<ASTCreateQuery &>(*ast);
+    const auto & ast_create_query = ast->as<ASTCreateQuery &>();

     ASTPtr new_columns = InterpreterCreateQuery::formatColumns(columns);
     ASTPtr new_indices = InterpreterCreateQuery::formatIndices(indices);
@@ -26,7 +26,7 @@ namespace ErrorCodes
 String getTableDefinitionFromCreateQuery(const ASTPtr & query)
 {
     ASTPtr query_clone = query->clone();
-    ASTCreateQuery & create = typeid_cast<ASTCreateQuery &>(*query_clone.get());
+    auto & create = query_clone->as<ASTCreateQuery &>();

     /// We remove everything that is not needed for ATTACH from the query.
     create.attach = true;
@@ -35,6 +35,7 @@ String getTableDefinitionFromCreateQuery(const ASTPtr & query)
     create.as_table.clear();
     create.if_not_exists = false;
     create.is_populate = false;
+    create.replace_view = false;

     /// For views it is necessary to save the SELECT query itself, for the rest - on the contrary
     if (!create.is_view && !create.is_materialized_view)
@@ -61,7 +62,7 @@ std::pair<String, StoragePtr> createTableFromDefinition(
     ParserCreateQuery parser;
     ASTPtr ast = parseQuery(parser, definition.data(), definition.data() + definition.size(), description_for_error_message, 0);

-    ASTCreateQuery & ast_create_query = typeid_cast<ASTCreateQuery &>(*ast);
+    auto & ast_create_query = ast->as<ASTCreateQuery &>();
     ast_create_query.attach = true;
     ast_create_query.database = database_name;
@@ -1,16 +1,18 @@
 #pragma once

 #include <Core/Types.h>
 #include <Core/NamesAndTypes.h>
 #include <Core/Types.h>
 #include <Interpreters/Context.h>
 #include <Parsers/IAST_fwd.h>
 #include <Storages/ColumnsDescription.h>
 #include <Storages/IndicesDescription.h>
 #include <ctime>
 #include <memory>
 #include <functional>
 #include <Poco/File.h>
 #include <Common/escapeForFileName.h>
 #include <Common/ThreadPool.h>
 #include <Interpreters/Context.h>
 #include <Common/escapeForFileName.h>

 #include <ctime>
 #include <functional>
 #include <memory>


 namespace DB
@@ -21,9 +23,6 @@ class Context;
 class IStorage;
 using StoragePtr = std::shared_ptr<IStorage>;

-class IAST;
-using ASTPtr = std::shared_ptr<IAST>;
-
 struct Settings;

@@ -157,4 +156,3 @@ using DatabasePtr = std::shared_ptr<IDatabase>;
 using Databases = std::map<String, DatabasePtr>;

 }
@@ -3,26 +3,29 @@
 #include <ext/singleton.h>
 #include "IDictionary.h"


 namespace Poco
 {

 namespace Util
 {
     class AbstractConfiguration;
 }

 class Logger;

 }


 namespace DB
 {

 class Context;

 class DictionaryFactory : public ext::singleton<DictionaryFactory>
 {
 public:
-    DictionaryPtr
-    create(const std::string & name, const Poco::Util::AbstractConfiguration & config, const std::string & config_prefix, Context & context)
-        const;
+    DictionaryPtr create(const std::string & name, const Poco::Util::AbstractConfiguration & config, const std::string & config_prefix, Context & context) const;

     using Creator = std::function<DictionaryPtr(
         const std::string & name,
@@ -26,11 +26,11 @@
 # include <IO/WriteHelpers.h>
 # include <IO/copyData.h>
 # include <Interpreters/castColumn.h>
+# include <common/DateLUTImpl.h>
+# include <ext/range.h>
 # include <arrow/api.h>
 # include <parquet/arrow/reader.h>
 # include <parquet/file_reader.h>
-# include <common/DateLUTImpl.h>
-# include <ext/range.h>

 namespace DB
 {
@@ -223,7 +223,8 @@ void fillColumnWithDecimalData(std::shared_ptr<arrow::Column> & arrow_column, Mu
         auto & chunk = static_cast<arrow::DecimalArray &>(*(arrow_column->data()->chunk(chunk_i)));
         for (size_t value_i = 0, length = static_cast<size_t>(chunk.length()); value_i < length; ++value_i)
         {
-            column_data.emplace_back(chunk.IsNull(value_i) ? Decimal128(0) : *reinterpret_cast<const Decimal128 *>(chunk.Value(value_i))); // TODO: copy column
+            column_data.emplace_back(
+                chunk.IsNull(value_i) ? Decimal128(0) : *reinterpret_cast<const Decimal128 *>(chunk.Value(value_i))); // TODO: copy column
         }
     }
 }
@@ -259,45 +260,46 @@ void fillByteMapFromArrowColumn(std::shared_ptr<arrow::Column> & arrow_column, M

 using NameToColumnPtr = std::unordered_map<std::string, std::shared_ptr<arrow::Column>>;

-const std::unordered_map<arrow::Type::type, std::shared_ptr<IDataType>> arrow_type_to_internal_type = {
-    //{arrow::Type::DECIMAL, std::make_shared<DataTypeDecimal>()},
-    {arrow::Type::UINT8, std::make_shared<DataTypeUInt8>()},
-    {arrow::Type::INT8, std::make_shared<DataTypeInt8>()},
-    {arrow::Type::UINT16, std::make_shared<DataTypeUInt16>()},
-    {arrow::Type::INT16, std::make_shared<DataTypeInt16>()},
-    {arrow::Type::UINT32, std::make_shared<DataTypeUInt32>()},
-    {arrow::Type::INT32, std::make_shared<DataTypeInt32>()},
-    {arrow::Type::UINT64, std::make_shared<DataTypeUInt64>()},
-    {arrow::Type::INT64, std::make_shared<DataTypeInt64>()},
-    {arrow::Type::HALF_FLOAT, std::make_shared<DataTypeFloat32>()},
-    {arrow::Type::FLOAT, std::make_shared<DataTypeFloat32>()},
-    {arrow::Type::DOUBLE, std::make_shared<DataTypeFloat64>()},
-
-    {arrow::Type::BOOL, std::make_shared<DataTypeUInt8>()},
-    //{arrow::Type::DATE32, std::make_shared<DataTypeDate>()},
-    {arrow::Type::DATE32, std::make_shared<DataTypeDate>()},
-    //{arrow::Type::DATE32, std::make_shared<DataTypeDateTime>()},
-    {arrow::Type::DATE64, std::make_shared<DataTypeDateTime>()},
-    {arrow::Type::TIMESTAMP, std::make_shared<DataTypeDateTime>()},
-    //{arrow::Type::TIME32, std::make_shared<DataTypeDateTime>()},
-
-    {arrow::Type::STRING, std::make_shared<DataTypeString>()},
-    {arrow::Type::BINARY, std::make_shared<DataTypeString>()},
-    //{arrow::Type::FIXED_SIZE_BINARY, std::make_shared<DataTypeString>()},
-    //{arrow::Type::UUID, std::make_shared<DataTypeString>()},
-
-    // TODO: add other types that are convertable to internal ones:
-    // 0. ENUM?
-    // 1. UUID -> String
-    // 2. JSON -> String
-    // Full list of types: contrib/arrow/cpp/src/arrow/type.h
-};

 Block ParquetBlockInputStream::readImpl()
 {
+    static const std::unordered_map<arrow::Type::type, std::shared_ptr<IDataType>> arrow_type_to_internal_type = {
+        //{arrow::Type::DECIMAL, std::make_shared<DataTypeDecimal>()},
+        {arrow::Type::UINT8, std::make_shared<DataTypeUInt8>()},
+        {arrow::Type::INT8, std::make_shared<DataTypeInt8>()},
+        {arrow::Type::UINT16, std::make_shared<DataTypeUInt16>()},
+        {arrow::Type::INT16, std::make_shared<DataTypeInt16>()},
+        {arrow::Type::UINT32, std::make_shared<DataTypeUInt32>()},
+        {arrow::Type::INT32, std::make_shared<DataTypeInt32>()},
+        {arrow::Type::UINT64, std::make_shared<DataTypeUInt64>()},
+        {arrow::Type::INT64, std::make_shared<DataTypeInt64>()},
+        {arrow::Type::HALF_FLOAT, std::make_shared<DataTypeFloat32>()},
+        {arrow::Type::FLOAT, std::make_shared<DataTypeFloat32>()},
+        {arrow::Type::DOUBLE, std::make_shared<DataTypeFloat64>()},
+
+        {arrow::Type::BOOL, std::make_shared<DataTypeUInt8>()},
+        //{arrow::Type::DATE32, std::make_shared<DataTypeDate>()},
+        {arrow::Type::DATE32, std::make_shared<DataTypeDate>()},
+        //{arrow::Type::DATE32, std::make_shared<DataTypeDateTime>()},
+        {arrow::Type::DATE64, std::make_shared<DataTypeDateTime>()},
+        {arrow::Type::TIMESTAMP, std::make_shared<DataTypeDateTime>()},
+        //{arrow::Type::TIME32, std::make_shared<DataTypeDateTime>()},
+
+        {arrow::Type::STRING, std::make_shared<DataTypeString>()},
+        {arrow::Type::BINARY, std::make_shared<DataTypeString>()},
+        //{arrow::Type::FIXED_SIZE_BINARY, std::make_shared<DataTypeString>()},
+        //{arrow::Type::UUID, std::make_shared<DataTypeString>()},
+
+        // TODO: add other types that are convertable to internal ones:
+        // 0. ENUM?
+        // 1. UUID -> String
+        // 2. JSON -> String
+        // Full list of types: contrib/arrow/cpp/src/arrow/type.h
+    };

     Block res;

     if (!istr.eof())
@@ -308,7 +310,9 @@ Block ParquetBlockInputStream::readImpl()
         */

         if (row_group_current < row_group_total)
-            throw Exception{"Got new data, but data from previous chunks not readed " + std::to_string(row_group_current) + "/" + std::to_string(row_group_total), ErrorCodes::CANNOT_READ_ALL_DATA};
+            throw Exception{"Got new data, but data from previous chunks not readed " + std::to_string(row_group_current) + "/"
+                    + std::to_string(row_group_total),
+                ErrorCodes::CANNOT_READ_ALL_DATA};

         file_data.clear();
         {
@@ -33,6 +33,7 @@ namespace ErrorCodes
 {
     extern const int TOO_FEW_ARGUMENTS_FOR_FUNCTION;
     extern const int LOGICAL_ERROR;
+    extern const int ILLEGAL_COLUMN;
 }

@@ -123,8 +124,8 @@ public:
         }
         else
             throw Exception("Illegal column " + block.getByPosition(arguments[0]).column->getName()
-                + " of argument of function " + getName(),
-                ErrorCodes::ILLEGAL_COLUMN);
+                    + " of argument of function " + getName(),
+                ErrorCodes::ILLEGAL_COLUMN);
     }
 };

@@ -36,6 +36,7 @@ namespace ErrorCodes
 {
     extern const int DICTIONARIES_WAS_NOT_LOADED;
     extern const int BAD_ARGUMENTS;
+    extern const int ILLEGAL_COLUMN;
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
 }

@@ -44,6 +44,8 @@ namespace ErrorCodes
     extern const int UNKNOWN_TYPE;
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
     extern const int TYPE_MISMATCH;
+    extern const int ILLEGAL_COLUMN;
+    extern const int BAD_ARGUMENTS;
 }

 /** Functions that use plug-ins (external) dictionaries.

@@ -20,6 +20,7 @@ namespace DB
 namespace ErrorCodes
 {
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
+    extern const int ILLEGAL_COLUMN;
 }

 enum ClusterOperation

@@ -48,6 +48,7 @@ namespace ErrorCodes
     extern const int LOGICAL_ERROR;
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
     extern const int NOT_IMPLEMENTED;
+    extern const int ILLEGAL_COLUMN;
 }
@@ -8,7 +8,7 @@
 #include <DataTypes/DataTypesDecimal.h>
 #include <Columns/ColumnVector.h>
 #include <Interpreters/castColumn.h>

 #include "IFunction.h"
 #include <Common/intExp.h>
 #include <cmath>
 #include <type_traits>

@@ -21,6 +21,7 @@ namespace ErrorCodes
 {
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
     extern const int BAD_ARGUMENTS;
+    extern const int ILLEGAL_COLUMN;
 }
@@ -38,6 +38,11 @@
 namespace DB
 {

+namespace ErrorCodes
+{
+    extern const int ILLEGAL_COLUMN;
+}
+
 struct HasParam
 {
     using ResultType = UInt8;
@@ -53,7 +53,7 @@ inline ALWAYS_INLINE void writeSlice(const StringSource::Slice & slice, FixedStr
 /// Assuming same types of underlying columns for slice and sink if (ArraySlice, ArraySink) is (GenericArraySlice, GenericArraySink).
 inline ALWAYS_INLINE void writeSlice(const GenericArraySlice & slice, GenericArraySink & sink)
 {
-    if (typeid(slice.elements) == typeid(static_cast<const IColumn *>(&sink.elements)))
+    if (slice.elements->structureEquals(sink.elements))
     {
         sink.elements.insertRangeFrom(*slice.elements, slice.begin, slice.size);
         sink.current_offset += slice.size;
@@ -125,7 +125,7 @@ void writeSlice(const NumericValueSlice<T> & slice, NumericArraySink<U> & sink)
 /// Assuming same types of underlying columns for slice and sink if (ArraySlice, ArraySink) is (GenericValueSlice, GenericArraySink).
 inline ALWAYS_INLINE void writeSlice(const GenericValueSlice & slice, GenericArraySink & sink)
 {
-    if (typeid(slice.elements) == typeid(static_cast<const IColumn *>(&sink.elements)))
+    if (slice.elements->structureEquals(sink.elements))
     {
         sink.elements.insertFrom(*slice.elements, slice.position);
         ++sink.current_offset;
@@ -457,7 +457,7 @@ template <bool all>
 bool sliceHas(const GenericArraySlice & first, const GenericArraySlice & second)
 {
     /// Generic arrays should have the same type in order to use column.compareAt(...)
-    if (typeid(*first.elements) != typeid(*second.elements))
+    if (!first.elements->structureEquals(*second.elements))
         return false;

     auto impl = sliceHasImpl<all, GenericArraySlice, GenericArraySlice, sliceEqualElements>;
@@ -14,7 +14,15 @@
 #include <Functions/GatherUtils/Slices.h>
 #include <Functions/FunctionHelpers.h>

-namespace DB::GatherUtils
+namespace DB
+{
+
+namespace ErrorCodes
+{
+    extern const int ILLEGAL_COLUMN;
+}
+
+namespace GatherUtils
 {

 template <typename T>
@@ -660,3 +668,5 @@ struct NullableValueSource : public ValueSource
 };

 }
+
+}
@@ -6,6 +6,11 @@
 namespace DB
 {

+namespace ErrorCodes
+{
+    extern const int BAD_ARGUMENTS;
+}
+
 ArraysDepths getArraysDepths(const ColumnsWithTypeAndName & arguments)
 {
     const size_t num_arguments = arguments.size();

@@ -22,6 +22,7 @@ namespace ErrorCodes
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
     extern const int ILLEGAL_COLUMN;
     extern const int ILLEGAL_TYPE_OF_ARGUMENT;
+    extern const int BAD_ARGUMENTS;
 }
@@ -7,6 +7,11 @@
 namespace DB
 {

+namespace ErrorCodes
+{
+    extern const int ILLEGAL_COLUMN;
+}
+
 /// flatten([[1, 2, 3], [4, 5]]) = [1, 2, 3, 4, 5] - flatten array.
 class FunctionFlatten : public IFunction
 {
@@ -53,10 +53,8 @@ public:
         size_t rows = input_rows_count;
         size_t num_args = arguments.size();

-        auto result_column = ColumnUInt8::create(rows);
-
         DataTypePtr common_type = nullptr;
-        auto commonType = [& common_type, & block, & arguments]()
+        auto commonType = [&common_type, &block, &arguments]()
         {
             if (common_type == nullptr)
             {
@@ -106,6 +104,7 @@ public:
                 throw Exception{"Arguments for function " + getName() + " must be arrays.", ErrorCodes::LOGICAL_ERROR};
         }

+        auto result_column = ColumnUInt8::create(rows);
         auto result_column_ptr = typeid_cast<ColumnUInt8 *>(result_column.get());
         GatherUtils::sliceHas(*sources[0], *sources[1], all, *result_column_ptr);
@@ -8,6 +8,11 @@
 namespace DB
 {

+namespace ErrorCodes
+{
+    extern const int ILLEGAL_COLUMN;
+}
+
 /** Creates an array, multiplying the column (the first argument) by the number of elements in the array (the second argument).
  */
 class FunctionReplicate : public IFunction
@@ -17,6 +17,7 @@ namespace ErrorCodes
 {
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
     extern const int ILLEGAL_TYPE_OF_ARGUMENT;
+    extern const int ILLEGAL_COLUMN;
 }

 /** timeSlots(StartTime, Duration)
@@ -3,6 +3,7 @@
 #include <Core/Defines.h>
 #include <IO/WriteBuffer.h>
+#include <common/itoa.h>
 #include <common/likely.h>

 /// 40 digits or 39 digits and a sign
 #define WRITE_HELPERS_MAX_INT_WIDTH 40U
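Presumably the new `common/itoa.h` include backs integer-to-text formatting in these write helpers. A hedged sketch of how such a helper pairs with a fixed-width buffer like `WRITE_HELPERS_MAX_INT_WIDTH`; the itoa signature below is an assumption for illustration, not the verified header:

```cpp
// Assumed interface: write the decimal digits of `value` starting at `pos`
// and return the pointer one past the last written character. The stand-in
// uses snprintf; a real itoa is a fast custom routine.
#include <cstdio>
#include <iostream>

constexpr unsigned MAX_INT_WIDTH = 40U;  /// 40 digits, or 39 digits and a sign

char * itoa_sketch(long long value, char * pos)
{
    return pos + std::snprintf(pos, MAX_INT_WIDTH, "%lld", value);
}

int main()
{
    char buf[MAX_INT_WIDTH];
    char * end = itoa_sketch(-12345, buf);
    std::cout.write(buf, end - buf) << '\n';  // prints -12345; no terminator needed
}
```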
@@ -84,11 +84,11 @@ SetPtr makeExplicitSet(

     auto getTupleTypeFromAst = [&context](const ASTPtr & tuple_ast) -> DataTypePtr
     {
-        auto ast_function = typeid_cast<const ASTFunction *>(tuple_ast.get());
-        if (ast_function && ast_function->name == "tuple" && !ast_function->arguments->children.empty())
+        const auto * func = tuple_ast->as<ASTFunction>();
+        if (func && func->name == "tuple" && !func->arguments->children.empty())
         {
             /// Won't parse all values of outer tuple.
-            auto element = ast_function->arguments->children.at(0);
+            auto element = func->arguments->children.at(0);
             std::pair<Field, DataTypePtr> value_raw = evaluateConstantExpression(element, context);
             return std::make_shared<DataTypeTuple>(DataTypes({value_raw.second}));
         }
@@ -122,7 +122,7 @@ SetPtr makeExplicitSet(
     /// 1 in (1, 2); (1, 2) in ((1, 2), (3, 4)); etc.
     else if (left_tuple_depth + 1 == right_tuple_depth)
     {
-        ASTFunction * set_func = typeid_cast<ASTFunction *>(right_arg.get());
+        const auto * set_func = right_arg->as<ASTFunction>();

         if (!set_func || set_func->name != "tuple")
             throw Exception("Incorrect type of 2nd argument for function " + node->name
@@ -263,11 +263,10 @@ void ActionsVisitor::visit(const ASTPtr & ast)
     };

     /// If the result of the calculation already exists in the block.
-    if ((typeid_cast<ASTFunction *>(ast.get()) || typeid_cast<ASTLiteral *>(ast.get()))
-        && actions_stack.getSampleBlock().has(getColumnName()))
+    if ((ast->as<ASTFunction>() || ast->as<ASTLiteral>()) && actions_stack.getSampleBlock().has(getColumnName()))
         return;

-    if (auto * identifier = typeid_cast<ASTIdentifier *>(ast.get()))
+    if (const auto * identifier = ast->as<ASTIdentifier>())
     {
         if (!only_consts && !actions_stack.getSampleBlock().has(getColumnName()))
         {
@@ -288,7 +287,7 @@ void ActionsVisitor::visit(const ASTPtr & ast)
             actions_stack.addAction(ExpressionAction::addAliases({{identifier->name, identifier->alias}}));
         }
     }
-    else if (ASTFunction * node = typeid_cast<ASTFunction *>(ast.get()))
+    else if (const auto * node = ast->as<ASTFunction>())
     {
         if (node->name == "lambda")
             throw Exception("Unexpected lambda expression", ErrorCodes::UNEXPECTED_EXPRESSION);
@@ -383,14 +382,14 @@ void ActionsVisitor::visit(const ASTPtr & ast)
                 auto & child = node->arguments->children[arg];
                 auto child_column_name = child->getColumnName();

-                ASTFunction * lambda = typeid_cast<ASTFunction *>(child.get());
+                const auto * lambda = child->as<ASTFunction>();
                 if (lambda && lambda->name == "lambda")
                 {
                     /// If the argument is a lambda expression, just remember its approximate type.
                     if (lambda->arguments->children.size() != 2)
                         throw Exception("lambda requires two arguments", ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);

-                    ASTFunction * lambda_args_tuple = typeid_cast<ASTFunction *>(lambda->arguments->children.at(0).get());
+                    const auto * lambda_args_tuple = lambda->arguments->children.at(0)->as<ASTFunction>();

                     if (!lambda_args_tuple || lambda_args_tuple->name != "tuple")
                         throw Exception("First argument of lambda must be a tuple", ErrorCodes::TYPE_MISMATCH);
@@ -454,12 +453,12 @@ void ActionsVisitor::visit(const ASTPtr & ast)
             {
                 ASTPtr child = node->arguments->children[i];

-                ASTFunction * lambda = typeid_cast<ASTFunction *>(child.get());
+                const auto * lambda = child->as<ASTFunction>();
                 if (lambda && lambda->name == "lambda")
                 {
                     const DataTypeFunction * lambda_type = typeid_cast<const DataTypeFunction *>(argument_types[i].get());
-                    ASTFunction * lambda_args_tuple = typeid_cast<ASTFunction *>(lambda->arguments->children.at(0).get());
-                    ASTs lambda_arg_asts = lambda_args_tuple->arguments->children;
+                    const auto * lambda_args_tuple = lambda->arguments->children.at(0)->as<ASTFunction>();
+                    const ASTs & lambda_arg_asts = lambda_args_tuple->arguments->children;
                     NamesAndTypesList lambda_arguments;

                     for (size_t j = 0; j < lambda_arg_asts.size(); ++j)
@@ -517,7 +516,7 @@ void ActionsVisitor::visit(const ASTPtr & ast)
                     ExpressionAction::applyFunction(function_builder, argument_names, getColumnName()));
             }
     }
-    else if (ASTLiteral * literal = typeid_cast<ASTLiteral *>(ast.get()))
+    else if (const auto * literal = ast->as<ASTLiteral>())
     {
         DataTypePtr type = applyVisitor(FieldToDataType(), literal->value);

@@ -533,8 +532,7 @@ void ActionsVisitor::visit(const ASTPtr & ast)
         for (auto & child : ast->children)
         {
             /// Do not go to FROM, JOIN, UNION.
-            if (!typeid_cast<const ASTTableExpression *>(child.get())
-                && !typeid_cast<const ASTSelectQuery *>(child.get()))
+            if (!child->as<ASTTableExpression>() && !child->as<ASTSelectQuery>())
                 visit(child);
         }
     }
@@ -550,8 +548,8 @@ SetPtr ActionsVisitor::makeSet(const ASTFunction * node, const Block & sample_bl
     const ASTPtr & arg = args.children.at(1);

     /// If the subquery or table name for SELECT.
-    const ASTIdentifier * identifier = typeid_cast<const ASTIdentifier *>(arg.get());
-    if (typeid_cast<const ASTSubquery *>(arg.get()) || identifier)
+    const auto * identifier = arg->as<ASTIdentifier>();
+    if (arg->as<ASTSubquery>() || identifier)
     {
         auto set_key = PreparedSetKey::forSubquery(*arg);
         if (prepared_sets.count(set_key))
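All of the `typeid_cast<T *>(ast.get())` call sites above migrate to `ast->as<T>()`. A minimal sketch of what such a member cast can look like, inferred from the call sites in this diff rather than copied from the real IAST:

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <typeinfo>

struct IAST
{
    virtual ~IAST() = default;

    // Returns this as T* when the dynamic type matches exactly, else nullptr.
    // Member syntax removes the ast.get() and explicit pointer-type noise.
    template <typename T>
    T * as() { return typeid(*this) == typeid(T) ? static_cast<T *>(this) : nullptr; }

    template <typename T>
    const T * as() const { return typeid(*this) == typeid(T) ? static_cast<const T *>(this) : nullptr; }
};

struct ASTFunction : IAST { std::string name; };
struct ASTLiteral : IAST {};

int main()
{
    std::shared_ptr<IAST> ast = std::make_shared<ASTFunction>();
    // Before: ASTFunction * f = typeid_cast<ASTFunction *>(ast.get());
    if (const auto * f = ast->as<ASTFunction>())
        std::cout << "function node\n";
    if (!ast->as<ASTLiteral>())
        std::cout << "not a literal\n";
}
```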
@@ -145,8 +145,9 @@ Aggregator::Aggregator(const Params & params_)
     isCancelled([]() { return false; })
 {
     /// Use query-level memory tracker
-    if (auto memory_tracker = CurrentThread::getMemoryTracker().getParent())
-        memory_usage_before_aggregation = memory_tracker->get();
+    if (auto memory_tracker_child = CurrentThread::getMemoryTracker())
+        if (auto memory_tracker = memory_tracker_child->getParent())
+            memory_usage_before_aggregation = memory_tracker->get();

     aggregate_functions.resize(params.aggregates_size);
     for (size_t i = 0; i < params.aggregates_size; ++i)
@@ -835,8 +836,9 @@ bool Aggregator::executeOnBlock(const Block & block, AggregatedDataVariants & re

     size_t result_size = result.sizeWithoutOverflowRow();
     Int64 current_memory_usage = 0;
-    if (auto memory_tracker = CurrentThread::getMemoryTracker().getParent())
-        current_memory_usage = memory_tracker->get();
+    if (auto memory_tracker_child = CurrentThread::getMemoryTracker())
+        if (auto memory_tracker = memory_tracker_child->getParent())
+            current_memory_usage = memory_tracker->get();

     auto result_size_bytes = current_memory_usage - memory_usage_before_aggregation; /// Here all the results in the sum are taken into account, from different threads.
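Both hunks guard against a thread that has no memory tracker at all: the thread-level tracker is now null-checked before its parent (the query-level tracker) is consulted. A standalone sketch of the guarded two-step lookup, with simplified types standing in for the real CurrentThread/MemoryTracker API:

```cpp
#include <cstdint>
#include <iostream>

struct MemoryTracker
{
    MemoryTracker * parent = nullptr;
    int64_t amount = 0;

    MemoryTracker * getParent() const { return parent; }
    int64_t get() const { return amount; }
};

// The thread may carry no tracker (e.g. a detached thread), so both
// dereferences are guarded; falls back to 0 when no query tracker exists.
int64_t queryMemoryUsage(MemoryTracker * thread_tracker)
{
    if (auto * child = thread_tracker)
        if (auto * query_tracker = child->getParent())
            return query_tracker->get();
    return 0;
}

int main()
{
    MemoryTracker query{nullptr, 1 << 20};
    MemoryTracker thread{&query, 4096};
    std::cout << queryMemoryUsage(&thread) << '\n';  // 1048576
    std::cout << queryMemoryUsage(nullptr) << '\n';  // 0: the case the fix handles
}
```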
@@ -1,15 +1,13 @@
 #pragma once

-#include <unordered_map>
-
 #include <Core/Types.h>
+#include <Parsers/IAST_fwd.h>
+
+#include <unordered_map>

 namespace DB
 {

-class IAST;
-using ASTPtr = std::shared_ptr<IAST>;
-
 using Aliases = std::unordered_map<String, ASTPtr>;

 }
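This header now gets `ASTPtr` from `Parsers/IAST_fwd.h` instead of re-declaring it locally. A sketch of what such a forward-declaration header contains (assumed content; the real IAST_fwd.h may carry additional aliases):

```cpp
// IAST_fwd.h-style header: owns the forward declaration and the smart-pointer
// aliases so that no other header needs to repeat them.
#pragma once

#include <memory>
#include <vector>

namespace DB
{

class IAST;
using ASTPtr = std::shared_ptr<IAST>;
using ASTs = std::vector<ASTPtr>;

}
```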
@@ -45,7 +45,7 @@ ExpressionActionsPtr AnalyzedJoin::createJoinedBlockActions(
     if (!join)
         return nullptr;

-    const auto & join_params = static_cast<const ASTTableJoin &>(*join->table_join);
+    const auto & join_params = join->table_join->as<ASTTableJoin &>();

     /// Create custom expression list with join keys from right table.
     auto expression_list = std::make_shared<ASTExpressionList>();
@@ -40,11 +40,10 @@ public:

     static bool needChildVisit(ASTPtr & node, const ASTPtr & child)
     {
-        if (typeid_cast<ASTTablesInSelectQuery *>(node.get()))
+        if (node->as<ASTTablesInSelectQuery>())
             return false;

-        if (typeid_cast<ASTSubquery *>(child.get()) ||
-            typeid_cast<ASTSelectQuery *>(child.get()))
+        if (child->as<ASTSubquery>() || child->as<ASTSelectQuery>())
             return false;

         return true;
@@ -52,9 +51,9 @@ public:

     static void visit(ASTPtr & ast, Data & data)
     {
-        if (auto * t = typeid_cast<ASTIdentifier *>(ast.get()))
+        if (const auto * t = ast->as<ASTIdentifier>())
             visit(*t, ast, data);
-        if (auto * t = typeid_cast<ASTSelectQuery *>(ast.get()))
+        if (const auto * t = ast->as<ASTSelectQuery>())
             visit(*t, ast, data);
     }
@@ -73,7 +72,7 @@ private:
         const String nested_table_name = ast->getColumnName();
         const String nested_table_alias = ast->getAliasOrColumnName();

-        if (nested_table_alias == nested_table_name && !isIdentifier(ast))
+        if (nested_table_alias == nested_table_name && !ast->as<ASTIdentifier>())
             throw Exception("No alias for non-trivial value in ARRAY JOIN: " + nested_table_name, ErrorCodes::ALIAS_REQUIRED);

         if (data.array_join_alias_to_name.count(nested_table_alias) || data.aliases.count(nested_table_alias))
@@ -98,7 +98,7 @@ void SelectStreamFactory::createForShard(

     if (table_func_ptr)
     {
-        auto table_function = static_cast<const ASTFunction *>(table_func_ptr.get());
+        const auto * table_function = table_func_ptr->as<ASTFunction>();
         main_table_storage = TableFunctionFactory::instance().get(table_function->name, context)->execute(table_func_ptr, context);
     }
     else
@@ -892,8 +892,7 @@ StoragePtr Context::executeTableFunction(const ASTPtr & table_expression)

     if (!res)
     {
-        TableFunctionPtr table_function_ptr = TableFunctionFactory::instance().get(
-            typeid_cast<const ASTFunction *>(table_expression.get())->name, *this);
+        TableFunctionPtr table_function_ptr = TableFunctionFactory::instance().get(table_expression->as<ASTFunction>()->name, *this);

         /// Run it and remember the result
         res = table_function_ptr->execute(table_expression, *this);
@@ -1203,6 +1202,8 @@ EmbeddedDictionaries & Context::getEmbeddedDictionariesImpl(const bool throw_on_

 ExternalDictionaries & Context::getExternalDictionariesImpl(const bool throw_on_error) const
 {
+    const auto & config = getConfigRef();
+
     std::lock_guard lock(shared->external_dictionaries_mutex);

     if (!shared->external_dictionaries)
@@ -1214,6 +1215,7 @@ ExternalDictionaries & Context::getExternalDictionariesImpl(const bool throw_on_

         shared->external_dictionaries.emplace(
             std::move(config_repository),
+            config,
             *this->global_context,
             throw_on_error);
     }
@@ -1,23 +1,24 @@
 #pragma once

+#include <Core/Block.h>
+#include <Core/NamesAndTypes.h>
+#include <Core/Types.h>
+#include <Interpreters/ClientInfo.h>
+#include <Interpreters/Settings.h>
+#include <Parsers/IAST_fwd.h>
+#include <Common/LRUCache.h>
+#include <Common/MultiVersion.h>
+#include <Common/ThreadPool.h>
+#include <Common/config.h>
+
+#include <atomic>
+#include <chrono>
+#include <condition_variable>
 #include <functional>
 #include <memory>
 #include <mutex>
 #include <thread>
-#include <atomic>
 #include <optional>

-#include <Common/config.h>
-#include <Common/MultiVersion.h>
-#include <Common/LRUCache.h>
-#include <Common/ThreadPool.h>
-#include <Core/Types.h>
-#include <Core/NamesAndTypes.h>
-#include <Core/Block.h>
-#include <Interpreters/Settings.h>
-#include <Interpreters/ClientInfo.h>
-#include <thread>


 namespace Poco
@@ -68,8 +69,6 @@ class IStorage;
 class ITableFunction;
 using StoragePtr = std::shared_ptr<IStorage>;
 using Tables = std::map<String, StoragePtr>;
-class IAST;
-using ASTPtr = std::shared_ptr<IAST>;
 class IBlockInputStream;
 class IBlockOutputStream;
 using BlockInputStreamPtr = std::shared_ptr<IBlockInputStream>;
Some files were not shown because too many files have changed in this diff.