Merge branch 'adding-system-warnings-26039' of https://github.com/FArthur-cmd/ClickHouse into adding-system-warnings-26039

Commit 1fe4f084f3
.gitmodules (vendored): 3 changed lines
@@ -228,3 +228,6 @@
[submodule "contrib/libpqxx"]
    path = contrib/libpqxx
    url = https://github.com/ClickHouse-Extras/libpqxx.git
[submodule "contrib/sqlite-amalgamation"]
    path = contrib/sqlite-amalgamation
    url = https://github.com/azadkuh/sqlite-amalgamation
CHANGELOG.md: 156 changed lines

@@ -1,3 +1,159 @@
### ClickHouse release v21.7, 2021-07-09

#### Backward Incompatible Change

* Improved performance of queries with explicitly defined large sets. Added a compatibility setting `legacy_column_name_of_tuple_literal`. It makes sense to set it to `true` while doing a rolling update of a cluster from a version lower than 21.7 to any higher version; otherwise distributed queries with explicitly defined sets in the `IN` clause may fail during the update (a hedged example follows this section). [#25371](https://github.com/ClickHouse/ClickHouse/pull/25371) ([Anton Popov](https://github.com/CurtizJ)).
* Forward/backward incompatible change of the maximum buffer size in clickhouse-keeper (an experimental alternative to ZooKeeper). Better to do it now (before production) than later. [#25421](https://github.com/ClickHouse/ClickHouse/pull/25421) ([alesapin](https://github.com/alesapin)).
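A minimal sketch of applying the compatibility setting above during a rolling update. The table name is hypothetical, and passing the setting per query via `SETTINGS` is just one of several ways to set it.

``` bash
# Hypothetical: keep the old tuple-literal column names while 21.6 and 21.7 servers coexist.
$ clickhouse-client --query "SELECT count() FROM distributed_table WHERE key IN (1, 2, 3) SETTINGS legacy_column_name_of_tuple_literal = 1"
```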
#### New Feature

* Support configuration in YAML format as an alternative to XML. This closes [#3607](https://github.com/ClickHouse/ClickHouse/issues/3607). [#21858](https://github.com/ClickHouse/ClickHouse/pull/21858) ([BoloniniD](https://github.com/BoloniniD)).
* Provide a way to restore a replicated table when the data is (possibly) present but the ZooKeeper metadata is lost. Resolves [#13458](https://github.com/ClickHouse/ClickHouse/issues/13458). [#13652](https://github.com/ClickHouse/ClickHouse/pull/13652) ([Mike Kot](https://github.com/myrrc)).
* Support structs and maps in Arrow/Parquet/ORC and dictionaries in Arrow input/output formats. Introduces the new setting `output_format_arrow_low_cardinality_as_dictionary`. [#24341](https://github.com/ClickHouse/ClickHouse/pull/24341) ([Kruglov Pavel](https://github.com/Avogar)).
* Added support for the `Array` type in dictionaries. [#25119](https://github.com/ClickHouse/ClickHouse/pull/25119) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `bitPositionsToArray` (see the sketch after this list). Closes [#23792](https://github.com/ClickHouse/ClickHouse/issues/23792). Author: Kevin Wan (@MaxWk). [#25394](https://github.com/ClickHouse/ClickHouse/pull/25394) ([Maksim Kita](https://github.com/kitaisreal)).
* Added function `dateName` to return names like 'Friday' or 'April' (see the sketch after this list). Author: Daniil Kondratyev (@dankondr). [#25372](https://github.com/ClickHouse/ClickHouse/pull/25372) ([Maksim Kita](https://github.com/kitaisreal)).
* Add the `toJSONString` function to serialize columns to their JSON representations. [#25164](https://github.com/ClickHouse/ClickHouse/pull/25164) ([Amos Bird](https://github.com/amosbird)).
* Now `query_log` has two new columns, `initial_query_start_time` and `initial_query_start_time_microsecond`, that record the starting time of a distributed query, if any. [#25022](https://github.com/ClickHouse/ClickHouse/pull/25022) ([Amos Bird](https://github.com/amosbird)).
* Add aggregate function `segmentLengthSum`. [#24250](https://github.com/ClickHouse/ClickHouse/pull/24250) ([flynn](https://github.com/ucasfl)).
* Add a new boolean setting `prefer_global_in_and_join` which defaults all IN/JOIN to GLOBAL IN/JOIN. [#23434](https://github.com/ClickHouse/ClickHouse/pull/23434) ([Amos Bird](https://github.com/amosbird)).
* Support `ALTER DELETE` queries for the `Join` table engine. [#23260](https://github.com/ClickHouse/ClickHouse/pull/23260) ([foolchi](https://github.com/foolchi)).
* Add the `quantileBFloat16` aggregate function, as well as the corresponding `quantilesBFloat16` and `medianBFloat16`. It is a very simple and fast quantile estimator with a relative error of no more than 0.390625%. This closes [#16641](https://github.com/ClickHouse/ClickHouse/issues/16641). [#23204](https://github.com/ClickHouse/ClickHouse/pull/23204) ([Ivan Novitskiy](https://github.com/RedClusive)).
* Implement the `sequenceNextNode()` function, useful for flow analysis. [#19766](https://github.com/ClickHouse/ClickHouse/pull/19766) ([achimbab](https://github.com/achimbab)).
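A quick sketch of two of the new functions above, run against a local server; the expected results follow from the function definitions (10 = 0b1010, so the set bit positions are 1 and 3).

``` bash
$ clickhouse-client --query "SELECT dateName('month', toDate('2021-07-09'))"
July
$ clickhouse-client --query "SELECT bitPositionsToArray(toUInt8(10))"
[1,3]
```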
#### Experimental Feature

* Add support for a virtual filesystem over HDFS. [#11058](https://github.com/ClickHouse/ClickHouse/pull/11058) ([overshov](https://github.com/overshov)) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Now clickhouse-keeper (an experimental alternative to ZooKeeper) supports ZooKeeper-like `digest` ACLs. [#24448](https://github.com/ClickHouse/ClickHouse/pull/24448) ([alesapin](https://github.com/alesapin)).
#### Performance Improvement

* Added an optimization that transforms some functions into reading of subcolumns, to reduce the amount of data read. E.g., the expression `col IS NULL` is transformed into reading of the subcolumn `col.null`. The optimization can be enabled by the setting `optimize_functions_to_subcolumns`, which is currently off by default (a hedged example follows this list). [#24406](https://github.com/ClickHouse/ClickHouse/pull/24406) ([Anton Popov](https://github.com/CurtizJ)).
* Rewrite more columns to possible alias expressions. This may enable better optimizations, such as projections. [#24405](https://github.com/ClickHouse/ClickHouse/pull/24405) ([Amos Bird](https://github.com/amosbird)).
* An index of type `bloom_filter` can now be used for expressions with the `hasAny` function and constant arrays. This closes [#24291](https://github.com/ClickHouse/ClickHouse/issues/24291). [#24900](https://github.com/ClickHouse/ClickHouse/pull/24900) ([Vasily Nemkov](https://github.com/Enmk)).
* Add exponential backoff to reschedule read attempts when RabbitMQ queues are empty (ClickHouse supports importing data from RabbitMQ). Closes [#24340](https://github.com/ClickHouse/ClickHouse/issues/24340). [#24415](https://github.com/ClickHouse/ClickHouse/pull/24415) ([Kseniia Sumarokova](https://github.com/kssenii)).
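A minimal sketch of enabling the subcolumn optimization per query; the table `t` and column `col` are hypothetical (any table with a Nullable column works).

``` bash
# With the setting on, `col IS NULL` reads only the small `col.null` subcolumn.
$ clickhouse-client --query "SELECT count() FROM t WHERE col IS NULL SETTINGS optimize_functions_to_subcolumns = 1"
```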
#### Improvement

* Allow limiting bandwidth for replication. Add two Replicated\*MergeTree settings, `max_replicated_fetches_network_bandwidth` and `max_replicated_sends_network_bandwidth`, which limit the maximum speed of replicated fetches/sends for a table, and two server-wide settings (in the `default` user profile), `max_replicated_fetches_network_bandwidth_for_server` and `max_replicated_sends_network_bandwidth_for_server`, which limit the maximum speed of replication for all tables. The limits are not enforced perfectly accurately. Turned off by default. Fixes [#1821](https://github.com/ClickHouse/ClickHouse/issues/1821). [#24573](https://github.com/ClickHouse/ClickHouse/pull/24573) ([alesapin](https://github.com/alesapin)).
* Resource constraints and isolation for ODBC and Library bridges. Use a separate `clickhouse-bridge` group and user for bridge processes. Set `oom_score_adj` so that the bridges are the first candidates for the OOM killer. Set maximum RSS to 1 GiB. Closes [#23861](https://github.com/ClickHouse/ClickHouse/issues/23861). [#25280](https://github.com/ClickHouse/ClickHouse/pull/25280) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add a standalone `clickhouse-keeper` symlink to the main `clickhouse` binary. Now it's possible to run coordination without the main clickhouse server. [#24059](https://github.com/ClickHouse/ClickHouse/pull/24059) ([alesapin](https://github.com/alesapin)).
* Use global settings for queries to `VIEW`. Fixed the behavior where queries to a `VIEW` used local settings, which led to errors if the settings of `CREATE VIEW` and `SELECT` differed. As of now, a `VIEW` won't use these modified settings, but you can still pass additional settings in the `SETTINGS` section of the `CREATE VIEW` query. Closes [#20551](https://github.com/ClickHouse/ClickHouse/issues/20551). [#24095](https://github.com/ClickHouse/ClickHouse/pull/24095) ([Vladimir](https://github.com/vdimir)).
* On server start, parts with an incorrect partition ID are never removed, but always detached. [#25070](https://github.com/ClickHouse/ClickHouse/issues/25070). [#25166](https://github.com/ClickHouse/ClickHouse/pull/25166) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Increase the size of the background schedule pool to 128 (the `background_schedule_pool_size` setting). It helps avoid the replication queue hanging on a slow ZooKeeper connection. [#25072](https://github.com/ClickHouse/ClickHouse/pull/25072) ([alesapin](https://github.com/alesapin)).
* Add the merge tree setting `max_parts_to_merge_at_once`, which limits the number of parts that can be merged in the background at once. Doesn't affect the `OPTIMIZE FINAL` query. Fixes [#1820](https://github.com/ClickHouse/ClickHouse/issues/1820). [#24496](https://github.com/ClickHouse/ClickHouse/pull/24496) ([alesapin](https://github.com/alesapin)).
* Allow the `NOT IN` operator to be used in partition pruning. [#24894](https://github.com/ClickHouse/ClickHouse/pull/24894) ([Amos Bird](https://github.com/amosbird)).
* Recognize IPv4 addresses like `127.0.1.1` as local. This is controversial and closes [#23504](https://github.com/ClickHouse/ClickHouse/issues/23504). Michael Filimonov will test this feature. [#24316](https://github.com/ClickHouse/ClickHouse/pull/24316) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* A ClickHouse database created with MaterializeMySQL (an experimental feature) now contains all column comments from the materialized MySQL database. [#25199](https://github.com/ClickHouse/ClickHouse/pull/25199) ([Storozhuk Kostiantyn](https://github.com/sand6255)).
* Add settings (`connection_auto_close`/`connection_max_tries`/`connection_pool_size`) for the MySQL storage engine. [#24146](https://github.com/ClickHouse/ClickHouse/pull/24146) ([Azat Khuzhin](https://github.com/azat)).
* Improve the startup time of the Distributed engine. [#25663](https://github.com/ClickHouse/ClickHouse/pull/25663) ([Azat Khuzhin](https://github.com/azat)).
* Improvement for Distributed tables. Drop replicas from the dirname for `internal_replication=true` (allows INSERT into Distributed with a cluster of any number of replicas; previously only 15 replicas were supported, and more would fail with ENAMETOOLONG while creating the directory for async blocks). [#25513](https://github.com/ClickHouse/ClickHouse/pull/25513) ([Azat Khuzhin](https://github.com/azat)).
* Added support for the `Interval` type in `LowCardinality`. It is needed for intermediate values of some expressions. Closes [#21730](https://github.com/ClickHouse/ClickHouse/issues/21730). [#25410](https://github.com/ClickHouse/ClickHouse/pull/25410) ([Vladimir](https://github.com/vdimir)).
* Add the `==` operator on time conditions for the `sequenceMatch` and `sequenceCount` functions, e.g. `sequenceMatch('(?1)(?t==1)(?2)')(time, data = 1, data = 2)` (a hedged sketch follows this list). [#25299](https://github.com/ClickHouse/ClickHouse/pull/25299) ([Christophe Kalenzaga](https://github.com/mga-chka)).
* Add settings `http_max_fields`, `http_max_field_name_size`, `http_max_field_value_size`. [#25296](https://github.com/ClickHouse/ClickHouse/pull/25296) ([Ivan](https://github.com/abyss7)).
* Add support for the function `if` with `Decimal` and `Int` types on its branches. This closes [#20549](https://github.com/ClickHouse/ClickHouse/issues/20549). This closes [#10142](https://github.com/ClickHouse/ClickHouse/issues/10142). [#25283](https://github.com/ClickHouse/ClickHouse/pull/25283) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Update the prompt in `clickhouse-client` and display a message when reconnecting. This closes [#10577](https://github.com/ClickHouse/ClickHouse/issues/10577). [#25281](https://github.com/ClickHouse/ClickHouse/pull/25281) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Correct memory tracking in the aggregate function `topK`. This closes [#25259](https://github.com/ClickHouse/ClickHouse/issues/25259). [#25260](https://github.com/ClickHouse/ClickHouse/pull/25260) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix `topLevelDomain` for IDN hosts (i.e. `example.рф`); previously it returned an empty string for such hosts. [#25103](https://github.com/ClickHouse/ClickHouse/pull/25103) ([Azat Khuzhin](https://github.com/azat)).
* Detect the Linux kernel version at runtime (for working nested epoll, which is required for `async_socket_for_remote`/`use_hedged_requests`; otherwise remote queries may get stuck). [#25067](https://github.com/ClickHouse/ClickHouse/pull/25067) ([Azat Khuzhin](https://github.com/azat)).
* For distributed queries, when `optimize_skip_unused_shards=1`, allow skipping a shard with a condition like `(sharding key) IN (one-element-tuple)`. (Tuples with many elements were already supported; a tuple with a single element did not work because it is parsed as a literal.) [#24930](https://github.com/ClickHouse/ClickHouse/pull/24930) ([Amos Bird](https://github.com/amosbird)).
* Improved log messages for S3 errors: no more double whitespace in case of empty keys and buckets. [#24897](https://github.com/ClickHouse/ClickHouse/pull/24897) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Some queries require multi-pass semantic analysis. Try reusing built sets for `IN` in this case. [#24874](https://github.com/ClickHouse/ClickHouse/pull/24874) ([Amos Bird](https://github.com/amosbird)).
* Respect `max_distributed_connections` for `insert_distributed_sync` (otherwise, for huge clusters and sync inserts, it may run out of `max_thread_pool_size`). [#24754](https://github.com/ClickHouse/ClickHouse/pull/24754) ([Azat Khuzhin](https://github.com/azat)).
* Avoid hiding errors like `Limit for rows or bytes to read exceeded` for scalar subqueries. [#24545](https://github.com/ClickHouse/ClickHouse/pull/24545) ([nvartolomei](https://github.com/nvartolomei)).
* Make the String-to-Int parser stricter so that `toInt64('+')` throws. [#24475](https://github.com/ClickHouse/ClickHouse/pull/24475) ([Amos Bird](https://github.com/amosbird)).
* If `SSD_CACHE` is created with a DDL query, it can be created only inside the `user_files` directory. [#24466](https://github.com/ClickHouse/ClickHouse/pull/24466) ([Maksim Kita](https://github.com/kitaisreal)).
* PostgreSQL support for specifying a non-default schema for insert queries. Closes [#24149](https://github.com/ClickHouse/ClickHouse/issues/24149). [#24413](https://github.com/ClickHouse/ClickHouse/pull/24413) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix IPv6 address resolving (i.e. fixes `select * from remote('[::1]', system.one)`). [#24319](https://github.com/ClickHouse/ClickHouse/pull/24319) ([Azat Khuzhin](https://github.com/azat)).
* Fix trailing whitespace in the FROM clause with subqueries in multiline mode; this also slightly changes the output of queries to be more human-friendly. [#24151](https://github.com/ClickHouse/ClickHouse/pull/24151) ([Azat Khuzhin](https://github.com/azat)).
* Improvement for Distributed tables. Add the ability to split a distributed batch on failures (i.e. due to memory limits or corruption), under `distributed_directory_monitor_split_batch_on_failure` (OFF by default). [#23864](https://github.com/ClickHouse/ClickHouse/pull/23864) ([Azat Khuzhin](https://github.com/azat)).
* Handle column name clashes for the `Join` table engine. Closes [#20309](https://github.com/ClickHouse/ClickHouse/issues/20309). [#23769](https://github.com/ClickHouse/ClickHouse/pull/23769) ([Vladimir](https://github.com/vdimir)).
* Display progress for the `File` table engine in `clickhouse-local` and on INSERT queries in `clickhouse-client` when data is passed via stdin. Closes [#18209](https://github.com/ClickHouse/ClickHouse/issues/18209). [#23656](https://github.com/ClickHouse/ClickHouse/pull/23656) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Bugfixes and improvements for `clickhouse-copier`. Allow copying tables with different (but compatible) schemas. Closes [#9159](https://github.com/ClickHouse/ClickHouse/issues/9159). Added a test for copying a ReplacingMergeTree table. Closes [#22711](https://github.com/ClickHouse/ClickHouse/issues/22711). Support TTL on columns and data skipping indices: they are simply removed when creating the internal Distributed table (the underlying table keeps the TTL and skipping indices). Closes [#19384](https://github.com/ClickHouse/ClickHouse/issues/19384). Allow copying MATERIALIZED and ALIAS columns. There are some cases in which this could be helpful (e.g. if the column is in the PRIMARY KEY). It can now be enabled by setting the `allow_to_copy_alias_and_materialized_columns` property to true in the task configuration. Closes [#9177](https://github.com/ClickHouse/ClickHouse/issues/9177). Closes [#11007](https://github.com/ClickHouse/ClickHouse/issues/11007). Closes [#9514](https://github.com/ClickHouse/ClickHouse/issues/9514). Added a property `allow_to_drop_target_partitions` in the task configuration to drop the partition in the original table before moving helping tables. Closes [#20957](https://github.com/ClickHouse/ClickHouse/issues/20957). Got rid of the `OPTIMIZE DEDUPLICATE` query. This hack was needed because `ALTER TABLE MOVE PARTITION` was retried many times and plain MergeTree tables don't have deduplication. Closes [#17966](https://github.com/ClickHouse/ClickHouse/issues/17966). Write progress to a ZooKeeper node at the path `task_path + /status` in JSON format. Closes [#20955](https://github.com/ClickHouse/ClickHouse/issues/20955). Support for Replicated tables without arguments. Closes [#24834](https://github.com/ClickHouse/ClickHouse/issues/24834). [#23518](https://github.com/ClickHouse/ClickHouse/pull/23518) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Added sleep with backoff between read retries from S3. [#23461](https://github.com/ClickHouse/ClickHouse/pull/23461) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Respect `insert_allow_materialized_columns` (allows materialized columns) for INSERT into a `Distributed` table. [#23349](https://github.com/ClickHouse/ClickHouse/pull/23349) ([Azat Khuzhin](https://github.com/azat)).
* Add the ability to push down LIMIT for distributed queries. [#23027](https://github.com/ClickHouse/ClickHouse/pull/23027) ([Azat Khuzhin](https://github.com/azat)).
* Fix zero-copy replication with several S3 volumes (fixes [#22679](https://github.com/ClickHouse/ClickHouse/issues/22679)). [#22864](https://github.com/ClickHouse/ClickHouse/pull/22864) ([ianton-ru](https://github.com/ianton-ru)).
* Resolve the actual port number bound when a user requests any available port from the operating system, to show it in the log message. [#25569](https://github.com/ClickHouse/ClickHouse/pull/25569) ([bnaecker](https://github.com/bnaecker)).
* Fixed a case where conversion of PostgreSQL arrays sometimes resulted in the String data type instead of an n-dimensional array, because `attndims` works incorrectly in some cases. Closes [#24804](https://github.com/ClickHouse/ClickHouse/issues/24804). [#25538](https://github.com/ClickHouse/ClickHouse/pull/25538) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix conversion of DateTime with a timezone for MySQL, PostgreSQL, ODBC. Closes [#5057](https://github.com/ClickHouse/ClickHouse/issues/5057). [#25528](https://github.com/ClickHouse/ClickHouse/pull/25528) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Distinguish KILL MUTATION for different tables (fixes an unexpected `Cancelled mutating parts` error). [#25025](https://github.com/ClickHouse/ClickHouse/pull/25025) ([Azat Khuzhin](https://github.com/azat)).
* Allow declaring an S3 disk at the root of a bucket (the S3 virtual filesystem is an experimental feature under development). [#24898](https://github.com/ClickHouse/ClickHouse/pull/24898) ([Vladimir Chebotarev](https://github.com/excitoon)).
* Enable reading of subcolumns (e.g. components of Tuples) for distributed tables. [#24472](https://github.com/ClickHouse/ClickHouse/pull/24472) ([Anton Popov](https://github.com/CurtizJ)).
* A feature for the MySQL compatibility protocol: make the `user` function return correct output. Closes [#25697](https://github.com/ClickHouse/ClickHouse/pull/25697). [#25697](https://github.com/ClickHouse/ClickHouse/pull/25697) ([sundyli](https://github.com/sundy-li)).
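Illustrative only: the new `==` time condition for `sequenceMatch` mentioned above. The table `events` with columns `time` and `data` is hypothetical.

``` bash
# Matches an event with data = 1 followed by an event with data = 2 exactly 1 second later.
$ clickhouse-client --query "SELECT sequenceMatch('(?1)(?t==1)(?2)')(time, data = 1, data = 2) FROM events"
```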
#### Bug Fix

* Improvement for backward compatibility: use the old modulo function version when it is used in the partition key. Closes [#23508](https://github.com/ClickHouse/ClickHouse/issues/23508). [#24157](https://github.com/ClickHouse/ClickHouse/pull/24157) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix an extremely rare bug on low-memory servers which could lead to the inability to perform merges without a restart. Possibly fixes [#24603](https://github.com/ClickHouse/ClickHouse/issues/24603). [#24872](https://github.com/ClickHouse/ClickHouse/pull/24872) ([alesapin](https://github.com/alesapin)).
* Fix an extremely rare error `Tagging already tagged part` in the replication queue during concurrent `alter move/replace partition`. Possibly fixes [#22142](https://github.com/ClickHouse/ClickHouse/issues/22142). [#24961](https://github.com/ClickHouse/ClickHouse/pull/24961) ([alesapin](https://github.com/alesapin)).
* Fix a potential crash when calculating aggregate function states by aggregation of aggregate function states of other aggregate functions (not a practical use case). See [#24523](https://github.com/ClickHouse/ClickHouse/issues/24523). [#25015](https://github.com/ClickHouse/ClickHouse/pull/25015) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed the behavior where the queries `SYSTEM RESTART REPLICA` or `SYSTEM SYNC REPLICA` do not finish. This was detected on a server with an extremely low amount of RAM. [#24457](https://github.com/ClickHouse/ClickHouse/pull/24457) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix a bug which could lead to the ZooKeeper client hanging inside clickhouse-server. [#24721](https://github.com/ClickHouse/ClickHouse/pull/24721) ([alesapin](https://github.com/alesapin)).
* If the ZooKeeper connection was lost and a replica was cloned after restoring the connection, its replication queue might contain outdated entries. Fixed a failed assertion when the replication queue contains intersecting virtual parts. It may rarely happen if some data part was lost. Print an error in the log instead of terminating. [#24777](https://github.com/ClickHouse/ClickHouse/pull/24777) ([tavplubix](https://github.com/tavplubix)).
* Fix a lost `WHERE` condition in the expression-push-down optimization of the query plan (setting `query_plan_filter_push_down = 1` by default). Fixes [#25368](https://github.com/ClickHouse/ClickHouse/issues/25368). [#25370](https://github.com/ClickHouse/ClickHouse/pull/25370) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug which could lead to intersecting parts after merges with TTL: `Part all_40_40_0 is covered by all_40_40_1 but should be merged into all_40_41_1. This shouldn't happen often.`. [#25549](https://github.com/ClickHouse/ClickHouse/pull/25549) ([alesapin](https://github.com/alesapin)).
* On ZooKeeper connection loss, a `ReplicatedMergeTree` table might wait for background operations to complete before trying to reconnect. It's fixed; now background operations are stopped forcefully. [#25306](https://github.com/ClickHouse/ClickHouse/pull/25306) ([tavplubix](https://github.com/tavplubix)).
* Fix the error `Key expression contains comparison between inconvertible types` for queries with `ARRAY JOIN` in case an array is used in the primary key. Fixes [#8247](https://github.com/ClickHouse/ClickHouse/issues/8247). [#25546](https://github.com/ClickHouse/ClickHouse/pull/25546) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong totals for queries `WITH TOTALS` and `WITH FILL`. Fixes [#20872](https://github.com/ClickHouse/ClickHouse/issues/20872). [#25539](https://github.com/ClickHouse/ClickHouse/pull/25539) ([Anton Popov](https://github.com/CurtizJ)).
* Fix a data race when querying `system.clusters` while reloading the cluster configuration at the same time. [#25737](https://github.com/ClickHouse/ClickHouse/pull/25737) ([Amos Bird](https://github.com/amosbird)).
* Fixed a `No such file or directory` error on moving a `Distributed` table between databases. Fixes [#24971](https://github.com/ClickHouse/ClickHouse/issues/24971). [#25667](https://github.com/ClickHouse/ClickHouse/pull/25667) ([tavplubix](https://github.com/tavplubix)).
* `REPLACE PARTITION` might be ignored in rare cases if the source partition was empty. It's fixed. Fixes [#24869](https://github.com/ClickHouse/ClickHouse/issues/24869). [#25665](https://github.com/ClickHouse/ClickHouse/pull/25665) ([tavplubix](https://github.com/tavplubix)).
* Fixed a bug in the `Replicated` database engine that might rarely cause some replica to skip an enqueued DDL query. [#24805](https://github.com/ClickHouse/ClickHouse/pull/24805) ([tavplubix](https://github.com/tavplubix)).
* Fix a null pointer dereference in `EXPLAIN AST` without a query. [#25631](https://github.com/ClickHouse/ClickHouse/pull/25631) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix waiting for the automatic dropping of empty parts. It could lead to the background pool filling up and replication getting stuck. [#23315](https://github.com/ClickHouse/ClickHouse/pull/23315) ([Anton Popov](https://github.com/CurtizJ)).
* Fix restore of a table stored in the S3 virtual filesystem (an experimental feature not ready for production). [#25601](https://github.com/ClickHouse/ClickHouse/pull/25601) ([ianton-ru](https://github.com/ianton-ru)).
* Fix a nullptr dereference in the `Arrow` format when using `Decimal256`. Add `Decimal256` support for the `Arrow` format. [#25531](https://github.com/ClickHouse/ClickHouse/pull/25531) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix an excessive underscore before the names of the preprocessed configuration files. [#25431](https://github.com/ClickHouse/ClickHouse/pull/25431) ([Vitaly Baranov](https://github.com/vitlibar)).
* A fix for the `clickhouse-copier` tool: fix a segfault when `sharding_key` is absent in the task config. [#25419](https://github.com/ClickHouse/ClickHouse/pull/25419) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix the `REPLACE` column transformer when used in DDL by correctly quoting the formatted query. This fixes [#23925](https://github.com/ClickHouse/ClickHouse/issues/23925). [#25391](https://github.com/ClickHouse/ClickHouse/pull/25391) ([Amos Bird](https://github.com/amosbird)).
* Fix the possibility of non-deterministic behaviour of the `quantileDeterministic` function and similar ones. This closes [#20480](https://github.com/ClickHouse/ClickHouse/issues/20480). [#25313](https://github.com/ClickHouse/ClickHouse/pull/25313) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Support `SimpleAggregateFunction(LowCardinality)` for `SummingMergeTree`. Fixes [#25134](https://github.com/ClickHouse/ClickHouse/issues/25134). [#25300](https://github.com/ClickHouse/ClickHouse/pull/25300) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a logical error with the exception message "Cannot sum Array/Tuple in min/maxMap". [#25298](https://github.com/ClickHouse/ClickHouse/pull/25298) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix the error `Bad cast from type DB::ColumnLowCardinality to DB::ColumnVector<char8_t>` for queries where a `LowCardinality` argument was used for IN (this bug appeared in 21.6). Fixes [#25187](https://github.com/ClickHouse/ClickHouse/issues/25187). [#25290](https://github.com/ClickHouse/ClickHouse/pull/25290) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix incorrect behaviour of `joinGetOrNull` with non-nullable columns. This fixes [#24261](https://github.com/ClickHouse/ClickHouse/issues/24261). [#25288](https://github.com/ClickHouse/ClickHouse/pull/25288) ([Amos Bird](https://github.com/amosbird)).
* Fix incorrect behaviour and a UBSan report in big integers. In previous versions `CAST(1e19 AS UInt128)` returned zero. [#25279](https://github.com/ClickHouse/ClickHouse/pull/25279) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed an error which occurred while inserting a subset of columns using the CSVWithNames format. Fixes [#25129](https://github.com/ClickHouse/ClickHouse/issues/25129). [#25169](https://github.com/ClickHouse/ClickHouse/pull/25169) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Do not use a table's projection for `SELECT` with `FINAL`. It is not supported yet. [#25163](https://github.com/ClickHouse/ClickHouse/pull/25163) ([Amos Bird](https://github.com/amosbird)).
* Fix possible loss of parts after updating up to 21.5 if the table used `UUID` in the partition key (it is not recommended to use `UUID` in the partition key). Fixes [#25070](https://github.com/ClickHouse/ClickHouse/issues/25070). [#25127](https://github.com/ClickHouse/ClickHouse/pull/25127) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a crash in queries with a cross join and `joined_subquery_requires_alias = 0`. Fixes [#24011](https://github.com/ClickHouse/ClickHouse/issues/24011). [#25082](https://github.com/ClickHouse/ClickHouse/pull/25082) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a bug with constant maps in the `mapContains` function that led to the error `empty column was returned by function mapContains`. Closes [#25077](https://github.com/ClickHouse/ClickHouse/issues/25077). [#25080](https://github.com/ClickHouse/ClickHouse/pull/25080) ([Kruglov Pavel](https://github.com/Avogar)).
* Remove the possibility of creating tables with columns referencing themselves, like `a UInt32 ALIAS a + 1` or `b UInt32 MATERIALIZED b`. Fixes [#24910](https://github.com/ClickHouse/ClickHouse/issues/24910), [#24292](https://github.com/ClickHouse/ClickHouse/issues/24292). [#25059](https://github.com/ClickHouse/ClickHouse/pull/25059) ([alesapin](https://github.com/alesapin)).
* Fix a wrong result when using an aggregate projection with a *non-empty* `GROUP BY` key to execute a query with a `GROUP BY` by an *empty* key. [#25055](https://github.com/ClickHouse/ClickHouse/pull/25055) ([Amos Bird](https://github.com/amosbird)).
* Fix serialization of split nested messages in the Protobuf format. This PR fixes [#24647](https://github.com/ClickHouse/ClickHouse/issues/24647). [#25000](https://github.com/ClickHouse/ClickHouse/pull/25000) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix limit/offset settings for distributed queries (ignore them on the remote nodes). [#24940](https://github.com/ClickHouse/ClickHouse/pull/24940) ([Azat Khuzhin](https://github.com/azat)).
* Fix a possible heap-buffer-overflow in the `Arrow` format. [#24922](https://github.com/ClickHouse/ClickHouse/pull/24922) ([Kruglov Pavel](https://github.com/Avogar)).
* Fixed a possible error 'Cannot read from istream at offset 0' when reading a file from DiskS3 (the S3 virtual filesystem is an experimental feature under development that should not be used in production). [#24885](https://github.com/ClickHouse/ClickHouse/pull/24885) ([Pavel Kovalenko](https://github.com/Jokser)).
* Fix a "Missing columns" exception when joining a Distributed Materialized View. [#24870](https://github.com/ClickHouse/ClickHouse/pull/24870) ([Azat Khuzhin](https://github.com/azat)).
* Allow `NULL` values in the PostgreSQL compatibility protocol. Closes [#22622](https://github.com/ClickHouse/ClickHouse/issues/22622). [#24857](https://github.com/ClickHouse/ClickHouse/pull/24857) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix a bug where the exception `Mutation was killed` could be thrown to the client on a mutation wait when the mutation was not yet loaded into memory. [#24809](https://github.com/ClickHouse/ClickHouse/pull/24809) ([alesapin](https://github.com/alesapin)).
* Fixed a bug in the deserialization of the random generator state which might cause some data types, such as `AggregateFunction(groupArraySample(N), T)`, to behave in a non-deterministic way. [#24538](https://github.com/ClickHouse/ClickHouse/pull/24538) ([tavplubix](https://github.com/tavplubix)).
* Disallow building uniqXXXXStates of other aggregation states. [#24523](https://github.com/ClickHouse/ClickHouse/pull/24523) ([Raúl Marín](https://github.com/Algunenano)). Then allow it back by actually eliminating the root cause of the related issue. ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix the usage of tuples in `CREATE .. AS SELECT` queries. [#24464](https://github.com/ClickHouse/ClickHouse/pull/24464) ([Anton Popov](https://github.com/CurtizJ)).
* Fix the computation of total bytes in a `Buffer` table. In the current ClickHouse version, the `total_writes.bytes` counter decreased too much during the buffer flush. This led to counter overflow, and `totalBytes` returned something around 17.44 EB some time after the flush. [#24450](https://github.com/ClickHouse/ClickHouse/pull/24450) ([DimasKovas](https://github.com/DimasKovas)).
* Fix incorrect information about the monotonicity of the `toWeek` function. This fixes [#24422](https://github.com/ClickHouse/ClickHouse/issues/24422). This bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/5212 and was exposed later by a smarter partition pruner. [#24446](https://github.com/ClickHouse/ClickHouse/pull/24446) ([Amos Bird](https://github.com/amosbird)).
* Fixed a potential deadlock when user authentication is managed by LDAP, which could happen during LDAP role (re)mapping when an LDAP group was mapped to a nonexistent local role. [#24431](https://github.com/ClickHouse/ClickHouse/pull/24431) ([Denis Glazachev](https://github.com/traceon)).
* In a "multipart/form-data" message, consider the CRLF preceding a boundary as part of it. Fixes [#23905](https://github.com/ClickHouse/ClickHouse/issues/23905). [#24399](https://github.com/ClickHouse/ClickHouse/pull/24399) ([Ivan](https://github.com/abyss7)).
* Fix drop partition with intersecting fake parts. In rare cases there might be parts with a mutation version greater than the current block number. [#24321](https://github.com/ClickHouse/ClickHouse/pull/24321) ([Amos Bird](https://github.com/amosbird)).
* Fixed a bug in moving a Materialized View from an Ordinary to an Atomic database (`RENAME TABLE` query). Now the inner table is moved to the new database together with the Materialized View. Fixes [#23926](https://github.com/ClickHouse/ClickHouse/issues/23926). [#24309](https://github.com/ClickHouse/ClickHouse/pull/24309) ([tavplubix](https://github.com/tavplubix)).
* Allow empty HTTP headers. Fixes [#23901](https://github.com/ClickHouse/ClickHouse/issues/23901). [#24285](https://github.com/ClickHouse/ClickHouse/pull/24285) ([Ivan](https://github.com/abyss7)).
* Correct processing of mutations (ALTER UPDATE/DELETE) in Memory tables. Closes [#24274](https://github.com/ClickHouse/ClickHouse/issues/24274). [#24275](https://github.com/ClickHouse/ClickHouse/pull/24275) ([flynn](https://github.com/ucasfl)).
* Make the column's LowCardinality property in JOIN output the same as in the input. Closes [#23351](https://github.com/ClickHouse/ClickHouse/issues/23351), closes [#20315](https://github.com/ClickHouse/ClickHouse/issues/20315). [#24061](https://github.com/ClickHouse/ClickHouse/pull/24061) ([Vladimir](https://github.com/vdimir)).
* A fix for Kafka tables: fix the bug in failover behavior where Engine = Kafka was not able to start consumption if the same consumer had an empty assignment previously. Closes [#21118](https://github.com/ClickHouse/ClickHouse/issues/21118). [#21267](https://github.com/ClickHouse/ClickHouse/pull/21267) ([filimonov](https://github.com/filimonov)).
#### Build/Testing/Packaging Improvement

* Add `darwin-aarch64` (Mac M1 / Apple Silicon) builds in CI [#25560](https://github.com/ClickHouse/ClickHouse/pull/25560) ([Ivan](https://github.com/abyss7)) and put the links in the docs and on the website ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Adds cross-platform embedding of binary resources into executables. It works on Illumos. [#25146](https://github.com/ClickHouse/ClickHouse/pull/25146) ([bnaecker](https://github.com/bnaecker)).
* Add join-related options to stress tests to improve fuzzing. [#25200](https://github.com/ClickHouse/ClickHouse/pull/25200) ([Vladimir](https://github.com/vdimir)).
* Enable build with the S3 module on macOS [#25217](https://github.com/ClickHouse/ClickHouse/issues/25217). [#25218](https://github.com/ClickHouse/ClickHouse/pull/25218) ([kevin wan](https://github.com/MaxWk)).
* Add integration test cases to cover the JDBC bridge. [#25047](https://github.com/ClickHouse/ClickHouse/pull/25047) ([Zhichun Wu](https://github.com/zhicwu)).
* The integration tests configuration has special treatment for dictionaries. Removed the remaining manual dictionary setup. [#24728](https://github.com/ClickHouse/ClickHouse/pull/24728) ([Ilya Yatsishin](https://github.com/qoega)).
* Add libfuzzer tests for the YAMLParser class. [#24480](https://github.com/ClickHouse/ClickHouse/pull/24480) ([BoloniniD](https://github.com/BoloniniD)).
* Ubuntu 20.04 is now used to run integration tests, and the docker-compose version used to run them is updated to 1.28.2. Environment variables now take effect on docker-compose. Reworked test_dictionaries_all_layouts_separate_sources to allow parallel runs. [#20393](https://github.com/ClickHouse/ClickHouse/pull/20393) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix a TOCTOU error in the installation script. [#25277](https://github.com/ClickHouse/ClickHouse/pull/25277) ([alexey-milovidov](https://github.com/alexey-milovidov)).
### ClickHouse release 21.6, 2021-06-05

#### Upgrade Notes
@@ -536,6 +536,7 @@ include (cmake/find/rapidjson.cmake)
include (cmake/find/fastops.cmake)
include (cmake/find/odbc.cmake)
include (cmake/find/nanodbc.cmake)
include (cmake/find/sqlite.cmake)
include (cmake/find/rocksdb.cmake)
include (cmake/find/libpqxx.cmake)
include (cmake/find/nuraft.cmake)
@@ -18,6 +18,8 @@

#define DATE_LUT_MAX (0xFFFFFFFFU - 86400)
#define DATE_LUT_MAX_DAY_NUM 0xFFFF
/// Max int value of Date32, DATE LUT cache size minus daynum_offset_epoch
#define DATE_LUT_MAX_EXTEND_DAY_NUM (DATE_LUT_SIZE - 16436)

/// A constant to add to time_t so every supported time point becomes non-negative and still has the same remainder of division by 3600.
/// If we treat "remainder of division" operation in the sense of modular arithmetic (not like in C++).
@@ -270,6 +272,8 @@ public:
    auto getOffsetAtStartOfEpoch() const { return offset_at_start_of_epoch; }
    auto getTimeOffsetAtStartOfLUT() const { return offset_at_start_of_lut; }

    auto getDayNumOffsetEpoch() const { return daynum_offset_epoch; }

    /// All functions below are thread-safe; arguments are not checked.

    inline ExtendedDayNum toDayNum(ExtendedDayNum d) const
@@ -926,15 +930,17 @@ public:
    {
        if (unlikely(year < DATE_LUT_MIN_YEAR || year > DATE_LUT_MAX_YEAR || month < 1 || month > 12 || day_of_month < 1 || day_of_month > 31))
            return LUTIndex(0);

        return LUTIndex{years_months_lut[(year - DATE_LUT_MIN_YEAR) * 12 + month - 1] + day_of_month - 1};
        auto year_lut_index = (year - DATE_LUT_MIN_YEAR) * 12 + month - 1;
        UInt32 index = years_months_lut[year_lut_index].toUnderType() + day_of_month - 1;
        /// When date is out of range, default value is DATE_LUT_SIZE - 1 (2283-11-11)
        return LUTIndex{std::min(index, static_cast<UInt32>(DATE_LUT_SIZE - 1))};
    }

    /// Create DayNum from year, month, day of month.
    inline ExtendedDayNum makeDayNum(Int16 year, UInt8 month, UInt8 day_of_month) const
    inline ExtendedDayNum makeDayNum(Int16 year, UInt8 month, UInt8 day_of_month, Int32 default_error_day_num = 0) const
    {
        if (unlikely(year < DATE_LUT_MIN_YEAR || year > DATE_LUT_MAX_YEAR || month < 1 || month > 12 || day_of_month < 1 || day_of_month > 31))
            return ExtendedDayNum(0);
            return ExtendedDayNum(default_error_day_num);

        return toDayNum(makeLUTIndex(year, month, day_of_month));
    }
@@ -1091,9 +1097,9 @@ public:
        return lut[new_index].date + time;
    }

    inline NO_SANITIZE_UNDEFINED Time addWeeks(Time t, Int64 delta) const
    inline NO_SANITIZE_UNDEFINED Time addWeeks(Time t, Int32 delta) const
    {
        return addDays(t, delta * 7);
        return addDays(t, static_cast<Int64>(delta) * 7);
    }

    inline UInt8 saturateDayOfMonth(Int16 year, UInt8 month, UInt8 day_of_month) const
@@ -1158,14 +1164,14 @@ public:
        return toDayNum(addMonthsIndex(d, delta));
    }

    inline Time NO_SANITIZE_UNDEFINED addQuarters(Time t, Int64 delta) const
    inline Time NO_SANITIZE_UNDEFINED addQuarters(Time t, Int32 delta) const
    {
        return addMonths(t, delta * 3);
        return addMonths(t, static_cast<Int64>(delta) * 3);
    }

    inline ExtendedDayNum addQuarters(ExtendedDayNum d, Int64 delta) const
    inline ExtendedDayNum addQuarters(ExtendedDayNum d, Int32 delta) const
    {
        return addMonths(d, delta * 3);
        return addMonths(d, static_cast<Int64>(delta) * 3);
    }

    template <typename DateOrTime>
@@ -70,6 +70,14 @@ public:
        m_day = values.day_of_month;
    }

    explicit LocalDate(ExtendedDayNum day_num)
    {
        const auto & values = DateLUT::instance().getValues(day_num);
        m_year = values.year;
        m_month = values.month;
        m_day = values.day_of_month;
    }

    LocalDate(unsigned short year_, unsigned char month_, unsigned char day_)
        : m_year(year_), m_month(month_), m_day(day_)
    {

@@ -98,6 +106,12 @@ public:
        return DayNum(lut.makeDayNum(m_year, m_month, m_day).toUnderType());
    }

    ExtendedDayNum getExtenedDayNum() const
    {
        const auto & lut = DateLUT::instance();
        return ExtendedDayNum(lut.makeDayNum(m_year, m_month, m_day).toUnderType());
    }

    operator DayNum() const
    {
        return getDayNum();
@@ -2,11 +2,11 @@

# NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
# only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
SET(VERSION_REVISION 54453)
SET(VERSION_REVISION 54454)
SET(VERSION_MAJOR 21)
SET(VERSION_MINOR 8)
SET(VERSION_MINOR 9)
SET(VERSION_PATCH 1)
SET(VERSION_GITHASH fb895056568e26200629c7d19626e92d2dedc70d)
SET(VERSION_DESCRIBE v21.8.1.1-prestable)
SET(VERSION_STRING 21.8.1.1)
SET(VERSION_GITHASH f48c5af90c2ad51955d1ee3b6b05d006b03e4238)
SET(VERSION_DESCRIBE v21.9.1.1-prestable)
SET(VERSION_STRING 21.9.1.1)
# end of autochange
@@ -53,5 +53,6 @@ macro(clickhouse_embed_binaries)
        set_property(SOURCE "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}" APPEND PROPERTY INCLUDE_DIRECTORIES "${EMBED_RESOURCE_DIR}")

        target_sources("${EMBED_TARGET}" PRIVATE "${CMAKE_CURRENT_BINARY_DIR}/${ASSEMBLY_FILE_NAME}")
        set_target_properties("${EMBED_TARGET}" PROPERTIES OBJECT_DEPENDS "${RESOURCE_FILE}")
    endforeach()
endmacro()
cmake/find/sqlite.cmake (new file): 16 lines
@@ -0,0 +1,16 @@
option(ENABLE_SQLITE "Enable sqlite" ${ENABLE_LIBRARIES})

if (NOT ENABLE_SQLITE)
    return()
endif()

if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/sqlite-amalgamation/sqlite3.c")
    message (WARNING "submodule contrib/sqlite3-amalgamation is missing. to fix try run: \n git submodule update --init --recursive")
    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal sqlite library")
    set (USE_SQLITE 0)
    return()
endif()

set (USE_SQLITE 1)
set (SQLITE_LIBRARY sqlite)
message (STATUS "Using sqlite=${USE_SQLITE}")
contrib/CMakeLists.txt (vendored): 4 changed lines
@@ -329,3 +329,7 @@ endif()

add_subdirectory(fast_float)

if (USE_SQLITE)
    add_subdirectory(sqlite-cmake)
endif()
contrib/h3 (vendored submodule): 2 changed lines
@@ -1 +1 @@
Subproject commit e209086ae1b5477307f545a0f6111780edc59940
Subproject commit c7f46cfd71fb60e2fefc90e28abe81657deff735
@@ -3,21 +3,22 @@ set(H3_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/h3/src/h3lib")

set(SRCS
    "${H3_SOURCE_DIR}/lib/algos.c"
    "${H3_SOURCE_DIR}/lib/baseCells.c"
    "${H3_SOURCE_DIR}/lib/bbox.c"
    "${H3_SOURCE_DIR}/lib/coordijk.c"
    "${H3_SOURCE_DIR}/lib/faceijk.c"
    "${H3_SOURCE_DIR}/lib/geoCoord.c"
    "${H3_SOURCE_DIR}/lib/h3Index.c"
    "${H3_SOURCE_DIR}/lib/h3UniEdge.c"
    "${H3_SOURCE_DIR}/lib/linkedGeo.c"
    "${H3_SOURCE_DIR}/lib/localij.c"
    "${H3_SOURCE_DIR}/lib/mathExtensions.c"
    "${H3_SOURCE_DIR}/lib/bbox.c"
    "${H3_SOURCE_DIR}/lib/polygon.c"
    "${H3_SOURCE_DIR}/lib/h3Index.c"
    "${H3_SOURCE_DIR}/lib/vec2d.c"
    "${H3_SOURCE_DIR}/lib/vec3d.c"
    "${H3_SOURCE_DIR}/lib/vertex.c"
    "${H3_SOURCE_DIR}/lib/linkedGeo.c"
    "${H3_SOURCE_DIR}/lib/localij.c"
    "${H3_SOURCE_DIR}/lib/latLng.c"
    "${H3_SOURCE_DIR}/lib/directedEdge.c"
    "${H3_SOURCE_DIR}/lib/mathExtensions.c"
    "${H3_SOURCE_DIR}/lib/iterators.c"
    "${H3_SOURCE_DIR}/lib/vertexGraph.c"
    "${H3_SOURCE_DIR}/lib/faceijk.c"
    "${H3_SOURCE_DIR}/lib/baseCells.c"
)

configure_file("${H3_SOURCE_DIR}/include/h3api.h.in" "${H3_BINARY_DIR}/include/h3api.h")
contrib/sqlite-amalgamation (new vendored submodule): 1 line
@@ -0,0 +1 @@
Subproject commit 9818baa5d027ffb26d57f810dc4c597d4946781c
contrib/sqlite-cmake/CMakeLists.txt (new file): 6 lines
@@ -0,0 +1,6 @@
set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/sqlite-amalgamation")

set(SRCS ${LIBRARY_DIR}/sqlite3.c)

add_library(sqlite ${SRCS})
target_include_directories(sqlite SYSTEM PUBLIC "${LIBRARY_DIR}")
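The build wiring above enables SQLite integration. A hedged sketch of querying a SQLite file from ClickHouse once this is built in; the `sqlite(path, table)` table-function form and the file path are assumptions based on later ClickHouse documentation.

``` bash
# Assumed syntax: read a table from a SQLite database file placed under user_files.
$ clickhouse-client --query "SELECT * FROM sqlite('/var/lib/clickhouse/user_files/test.db', 'users')"
```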
debian/changelog (vendored): 4 changed lines
@@ -1,5 +1,5 @@
clickhouse (21.8.1.1) unstable; urgency=low
clickhouse (21.9.1.1) unstable; urgency=low

  * Modified source code

 -- clickhouse-release <clickhouse-release@yandex-team.ru> Mon, 28 Jun 2021 00:50:15 +0300
 -- clickhouse-release <clickhouse-release@yandex-team.ru> Sat, 10 Jul 2021 08:22:49 +0300
@@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
ARG version=21.8.1.*
ARG version=21.9.1.*

RUN apt-get update \
    && apt-get install --yes --no-install-recommends \
@@ -1,7 +1,7 @@
FROM ubuntu:20.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
ARG version=21.8.1.*
ARG version=21.9.1.*
ARG gosu_ver=1.10

# set non-empty deb_location_url url to create a docker image
@@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
ARG version=21.8.1.*
ARG version=21.9.1.*

RUN apt-get update && \
    apt-get install -y apt-transport-https dirmngr && \
@@ -378,6 +378,8 @@ function run_tests

    # needs pv
    01923_network_receive_time_metric_insert

    01889_sqlite_read_write
)

time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \
@@ -29,7 +29,8 @@ RUN apt-get update -y \
    unixodbc \
    wget \
    mysql-client=5.7* \
    postgresql-client
    postgresql-client \
    sqlite3

RUN pip3 install numpy scipy pandas
@@ -105,11 +105,11 @@ clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee te

5) ensure everything is correct; if the test output is incorrect (due to some bug, for example), adjust the reference file using a text editor.

#### How to create good test
#### How to create a good test

- test should be
- A test should be
    - minimal - create only tables related to tested functionality, remove unrelated columns and parts of query
    - fast - should not take longer than few seconds (better subseconds)
    - fast - should not take longer than a few seconds (better subseconds)
    - correct - fails when the feature is not working
    - deterministic
    - isolated / stateless
@@ -126,6 +126,16 @@ clickhouse-client -nmT < tests/queries/0_stateless/01521_dummy_test.sql | tee te
- use other SQL files in the `0_stateless` folder as an example
- ensure the feature / feature combination you want to test is not yet covered with existing tests

#### Test naming rules

It's important to name tests correctly, so one could turn some subsets of tests off in a clickhouse-test invocation (a hedged example follows the table).

| Tester flag | What should be in the test name | When the flag should be added |
|---|---|---|
| `--[no-]zookeeper` | "zookeeper" or "replica" | Test uses tables from the ReplicatedMergeTree family |
| `--[no-]shard` | "shard" or "distributed" or "global" | Test uses connections to 127.0.0.2 or similar |
| `--[no-]long` | "long" or "deadlock" or "race" | Test runs longer than 60 seconds |
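A hedged example of using the tester flags from the table above to skip ZooKeeper-dependent and long tests in a local run.

``` bash
$ clickhouse-test --no-zookeeper --no-long
```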
#### Commit / push / create PR.

1) commit & push your changes
@@ -134,10 +134,10 @@ $ ./release

## Faster builds for development

Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. Two common ways to improve linking time are to use `lld` linker, and use the 'split' build configuration, which builds a separate binary for every tool, and further splits the code into several shared libraries. To enable these tweaks, pass the following flags to `cmake`:
Normally all tools of the ClickHouse bundle, such as `clickhouse-server`, `clickhouse-client` etc., are linked into a single static executable, `clickhouse`. This executable must be re-linked on every change, which might be slow. One common way to improve build time is to use the 'split' build configuration, which builds a separate binary for every tool, and further splits the code into several shared libraries. To enable this tweak, pass the following flags to `cmake`:

```
-DCMAKE_C_FLAGS="--ld-path=lld" -DCMAKE_CXX_FLAGS="--ld-path=lld" -DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
-DUSE_STATIC_LIBRARIES=0 -DSPLIT_SHARED_LIBRARIES=1 -DCLICKHOUSE_SPLIT_BINARY=1
```

## You Don’t Have to Build ClickHouse {#you-dont-have-to-build-clickhouse}
docs/en/images/play.png (new binary file, 26 KiB; not shown)
@@ -7,16 +7,21 @@ toc_title: HTTP Interface

The HTTP interface lets you use ClickHouse on any platform from any programming language. We use it for working from Java and Perl, as well as shell scripts. In other departments, the HTTP interface is used from Perl, Python, and Go. The HTTP interface is more limited than the native interface, but it has better compatibility.

By default, clickhouse-server listens for HTTP on port 8123 (this can be changed in the config).
By default, `clickhouse-server` listens for HTTP on port 8123 (this can be changed in the config).

If you make a GET / request without parameters, it returns the 200 response code and the string defined in the [http_server_default_response](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-http_server_default_response) default value “Ok.” (with a line feed at the end)
If you make a `GET /` request without parameters, it returns the 200 response code and the string defined in the [http_server_default_response](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-http_server_default_response) default value “Ok.” (with a line feed at the end)

``` bash
$ curl 'http://localhost:8123/'
Ok.
```

Use GET /ping request in health-check scripts. This handler always returns “Ok.” (with a line feed at the end). Available from version 18.12.13.
The Web UI can be accessed here: `http://localhost:8123/play`.

![Web UI](../images/play.png)


In health-check scripts use the `GET /ping` request. This handler always returns “Ok.” (with a line feed at the end). Available from version 18.12.13.

``` bash
$ curl 'http://localhost:8123/ping'

@@ -51,8 +56,8 @@ X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","writ
1
```

As you can see, curl is somewhat inconvenient in that spaces must be URL escaped.
Although wget escapes everything itself, we do not recommend using it because it does not work well over HTTP 1.1 when using keep-alive and Transfer-Encoding: chunked.
As you can see, `curl` is somewhat inconvenient in that spaces must be URL escaped.
Although `wget` escapes everything itself, we do not recommend using it because it does not work well over HTTP 1.1 when using keep-alive and Transfer-Encoding: chunked.

``` bash
$ echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @-

@@ -75,7 +80,7 @@ ECT 1
, expected One of: SHOW TABLES, SHOW DATABASES, SELECT, INSERT, CREATE, ATTACH, RENAME, DROP, DETACH, USE, SET, OPTIMIZE., e.what() = DB::Exception
```

By default, data is returned in TabSeparated format (for more information, see the “Formats” section).
By default, data is returned in [TabSeparated](formats.md#tabseparated) format.

You use the FORMAT clause of the query to request any other format.

@@ -90,9 +95,11 @@ $ echo 'SELECT 1 FORMAT Pretty' | curl 'http://localhost:8123/?' --data-binary @
└───┘
```

The POST method of transmitting data is necessary for INSERT queries. In this case, you can write the beginning of the query in the URL parameter, and use POST to pass the data to insert. The data to insert could be, for example, a tab-separated dump from MySQL. In this way, the INSERT query replaces LOAD DATA LOCAL INFILE from MySQL.
The POST method of transmitting data is necessary for `INSERT` queries. In this case, you can write the beginning of the query in the URL parameter, and use POST to pass the data to insert. The data to insert could be, for example, a tab-separated dump from MySQL. In this way, the `INSERT` query replaces `LOAD DATA LOCAL INFILE` from MySQL.

Examples: Creating a table:
**Examples**

Creating a table:

``` bash
$ echo 'CREATE TABLE t (a UInt8) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-

@@ -632,6 +639,4 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
<
<html><body>Relative Path File</body></html>
* Connection #0 to host localhost left intact
```

[Original article](https://clickhouse.tech/docs/en/interfaces/http_interface/) <!--hide-->
```
@@ -59,6 +59,7 @@ toc_title: Adopters
| <a href="https://www.huya.com/" class="favicon">HUYA</a> | Video Streaming | Analytics | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/7.%20ClickHouse万亿数据分析实践%20李本旺(sundy-li)%20虎牙.pdf) |
| <a href="https://www.the-ica.com/" class="favicon">ICA</a> | FinTech | Risk Management | — | — | [Blog Post in English, Sep 2020](https://altinity.com/blog/clickhouse-vs-redshift-performance-for-fintech-risk-management?utm_campaign=ClickHouse%20vs%20RedShift&utm_content=143520807&utm_medium=social&utm_source=twitter&hss_channel=tw-3894792263) |
| <a href="https://www.idealista.com" class="favicon">Idealista</a> | Real Estate | Analytics | — | — | [Blog Post in English, April 2019](https://clickhouse.tech/blog/en/clickhouse-meetup-in-madrid-on-april-2-2019) |
| <a href="https://infobaleen.com" class="favicon">Infobaleen</a> | AI marketing tool | Analytics | — | — | [Official site](https://infobaleen.com) |
| <a href="https://www.infovista.com/" class="favicon">Infovista</a> | Networks | Analytics | — | — | [Slides in English, October 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup30/infovista.pdf) |
| <a href="https://www.innogames.com" class="favicon">InnoGames</a> | Games | Metrics, Logging | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/graphite_and_clickHouse.pdf) |
| <a href="https://instabug.com/" class="favicon">Instabug</a> | APM Platform | Main product | — | — | [A quote from Co-Founder](https://altinity.com/) |
114 docs/en/operations/clickhouse-keeper.md (new file)
@ -0,0 +1,114 @@
---
toc_priority: 66
toc_title: ClickHouse Keeper
---

# [pre-production] clickhouse-keeper

ClickHouse server uses the [ZooKeeper](https://zookeeper.apache.org/) coordination system for data [replication](../engines/table-engines/mergetree-family/replication.md) and [distributed DDL](../sql-reference/distributed-ddl.md) query execution. ClickHouse Keeper is an alternative coordination system compatible with ZooKeeper.

!!! warning "Warning"
    This feature is currently in the pre-production stage. We test it in our CI and on small internal installations.

## Implementation details

ZooKeeper is one of the first well-known open-source coordination systems. It is implemented in Java and has quite a simple and powerful data model. ZooKeeper's coordination algorithm, ZAB (ZooKeeper Atomic Broadcast), doesn't provide linearizability guarantees for reads, because each ZooKeeper node serves reads locally. Unlike ZooKeeper, `clickhouse-keeper` is written in C++ and uses an [implementation](https://github.com/eBay/NuRaft) of the [RAFT algorithm](https://raft.github.io/). This algorithm allows linearizability for both reads and writes, and it has several open-source implementations in different languages.

By default, `clickhouse-keeper` provides the same guarantees as ZooKeeper (linearizable writes, non-linearizable reads). It has a compatible client-server protocol, so any standard ZooKeeper client can be used to interact with `clickhouse-keeper`. Snapshots and logs have a format incompatible with ZooKeeper, but the `clickhouse-keeper-converter` tool allows converting ZooKeeper data to a `clickhouse-keeper` snapshot. The interserver protocol in `clickhouse-keeper` is also incompatible with ZooKeeper, so a mixed ZooKeeper/clickhouse-keeper cluster is impossible.
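Since the client protocol is compatible, a quick way to check a running keeper is to point a stock ZooKeeper client at it. A minimal sketch, assuming `zkCli.sh` from a standard ZooKeeper distribution is available and keeper listens on the default port 2181:

```bash
# List the root znode through the ZooKeeper-compatible client protocol.
zkCli.sh -server localhost:2181 ls /
```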
## Configuration

`clickhouse-keeper` can be used as a standalone replacement for ZooKeeper or as an internal part of the `clickhouse-server`, but in both cases the configuration is almost the same `.xml` file. The main `clickhouse-keeper` configuration tag is `<keeper_server>`. Keeper configuration has the following parameters:

- `tcp_port` — the port for a client to connect (default for ZooKeeper is `2181`)
- `tcp_port_secure` — the secure port for a client to connect
- `server_id` — unique server id; each participant of the clickhouse-keeper cluster must have a unique number (1, 2, 3, and so on)
- `log_storage_path` — path to coordination logs; as with ZooKeeper, it is best to store logs on a non-busy device
- `snapshot_storage_path` — path to coordination snapshots

Other common parameters are inherited from the clickhouse-server config (`listen_host`, `logger`, and so on).

Internal coordination settings are located in the `<keeper_server>.<coordination_settings>` section:

- `operation_timeout_ms` — timeout for a single client operation
- `session_timeout_ms` — timeout for a client session
- `dead_session_check_period_ms` — how often clickhouse-keeper checks for dead sessions and removes them
- `heart_beat_interval_ms` — how often a clickhouse-keeper leader will send heartbeats to followers
- `election_timeout_lower_bound_ms` — if a follower doesn't receive heartbeats from the leader in this interval, it can initiate leader election
- `election_timeout_upper_bound_ms` — if a follower doesn't receive heartbeats from the leader in this interval, it must initiate leader election
- `rotate_log_storage_interval` — how many log records to store in a single file
- `reserved_log_items` — how many coordination log records to store before compaction
- `snapshot_distance` — how often clickhouse-keeper will create new snapshots (in the number of log records)
- `snapshots_to_keep` — how many snapshots to keep
- `stale_log_gap` — the threshold at which the leader considers a follower stale and sends a snapshot to it instead of logs
- `force_sync` — call `fsync` on each write to the coordination log
- `raft_logs_level` — text logging level for coordination (trace, debug, and so on)
- `shutdown_timeout` — how long to wait for internal connections to finish on shutdown
- `startup_timeout` — if the server doesn't connect to other quorum participants within the specified timeout, it will terminate
Quorum configuration is located in the `<keeper_server>.<raft_configuration>` section and contains the description of the servers. The only parameter for the whole quorum is `secure`, which enables encrypted communication between quorum participants. The main parameters for each `<server>` are:

- `id` — server_id in the quorum
- `hostname` — hostname where this server is placed
- `port` — port where this server listens for connections

Examples of configuration for a quorum with three nodes can be found in [integration tests](https://github.com/ClickHouse/ClickHouse/tree/master/tests/integration) with the `test_keeper_` prefix. Example configuration for server #1:
```xml
<keeper_server>
    <tcp_port>2181</tcp_port>
    <server_id>1</server_id>
    <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
    <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>

    <coordination_settings>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <session_timeout_ms>30000</session_timeout_ms>
        <raft_logs_level>trace</raft_logs_level>
    </coordination_settings>

    <raft_configuration>
        <server>
            <id>1</id>
            <hostname>zoo1</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>2</id>
            <hostname>zoo2</hostname>
            <port>9444</port>
        </server>
        <server>
            <id>3</id>
            <hostname>zoo3</hostname>
            <port>9444</port>
        </server>
    </raft_configuration>
</keeper_server>
```
## How to run

`clickhouse-keeper` is bundled into the `clickhouse-server` package: just add the configuration of `<keeper_server>` and start clickhouse-server as always. If you want to run standalone `clickhouse-keeper`, you can start it in a similar way with:

```bash
clickhouse-keeper --config /etc/your_path_to_config/config.xml --daemon
```
## [experimental] Migration from ZooKeeper

Seamless migration from ZooKeeper to `clickhouse-keeper` is impossible: you have to stop your ZooKeeper cluster, convert the data, and start `clickhouse-keeper`. The `clickhouse-keeper-converter` tool allows converting ZooKeeper logs and snapshots to a `clickhouse-keeper` snapshot. It works only with ZooKeeper > 3.4. Steps for migration:

1. Stop all ZooKeeper nodes.

2. [optional, but recommended] Find the ZooKeeper leader node, then start and stop it again. This forces ZooKeeper to create a consistent snapshot.

3. Run `clickhouse-keeper-converter` on the leader, for example:

```bash
clickhouse-keeper-converter --zookeeper-logs-dir /var/lib/zookeeper/version-2 --zookeeper-snapshots-dir /var/lib/zookeeper/version-2 --output-dir /path/to/clickhouse/keeper/snapshots
```

4. Copy the snapshot to the `clickhouse-server` nodes with a configured `keeper`, or start `clickhouse-keeper` instead of ZooKeeper. The snapshot must persist on only the leader node; the leader will sync it automatically to the other nodes.
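Step 4 in practice might look like the following sketch; the host name and target path are illustrative and depend on your `snapshot_storage_path`:

```bash
# Copy the converted snapshot to the node that will become the keeper leader.
scp -r /path/to/clickhouse/keeper/snapshots clickhouse-node1:/var/lib/clickhouse/coordination/snapshots
```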
@ -22,6 +22,23 @@ Some settings specified in the main configuration file can be overridden in othe

The config can also define “substitutions”. If an element has the `incl` attribute, the corresponding substitution from the file will be used as the value. By default, the path to the file with substitutions is `/etc/metrika.xml`. This can be changed in the [include_from](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-include_from) element in the server config. The substitution values are specified in `/yandex/substitution_name` elements in this file. If a substitution specified in `incl` does not exist, it is recorded in the log. To prevent ClickHouse from logging missing substitutions, specify the `optional="true"` attribute (for example, settings for [macros](../operations/server-configuration-parameters/settings.md)).

If you want to replace an entire element with a substitution, use `include` as the element name.
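Before the ZooKeeper-based example below, here is a minimal sketch of the `incl` mechanism itself; the `networks` element and the file contents are illustrative assumptions:

```xml
<!-- In config.xml: take the value from the substitutions file; optional="true" suppresses the log entry if it is missing. -->
<networks incl="networks" optional="true" />

<!-- In /etc/metrika.xml: the substitution value lives under /yandex/substitution_name. -->
<yandex>
    <networks>
        <ip>::/64</ip>
    </networks>
</yandex>
```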
XML substitution example:

```xml
<yandex>
    <!-- Appends XML subtree found at `/profiles-in-zookeeper` ZK path to `<profiles>` element. -->
    <profiles from_zk="/profiles-in-zookeeper" />

    <users>
        <!-- Replaces `include` element with the subtree found at `/users-in-zookeeper` ZK path. -->
        <include from_zk="/users-in-zookeeper" />
        <include from_zk="/other-users-in-zookeeper" />
    </users>
</yandex>
```

Substitutions can also be performed from ZooKeeper. To do this, specify the attribute `from_zk = "/path/to/node"`. The element value is replaced with the contents of the node at `/path/to/node` in ZooKeeper. You can also put an entire XML subtree on the ZooKeeper node, and it will be fully inserted into the source element.
## User Settings {#user-settings}

@ -32,6 +49,8 @@ Users configuration can be splitted into separate files similar to `config.xml`

The directory name is defined as the `users_config` setting without the `.xml` postfix, concatenated with `.d`.
The directory `users.d` is used by default, as `users_config` defaults to `users.xml`.

Note that configuration files are first merged taking into account [Override](#override) settings, and includes are processed after that.

## XML example {#example}

For example, you can have a separate config file for each user like this:
@ -129,7 +129,7 @@ That dictionary source can be configured only via XML configuration. Creating di

## Executable Pool {#dicts-external_dicts_dict_sources-executable_pool}

Executable pool allows loading data from a pool of processes. This source does not work with dictionary layouts that need to load all data from the source. Executable pool works if the dictionary [is stored](external-dicts-dict-layout.md#ways-to-store-dictionaries-in-memory) using the `cache`, `complex_key_cache`, `ssd_cache`, `complex_key_ssd_cache`, `direct`, or `complex_key_direct` layouts.

Executable pool will spawn a pool of processes with the specified command and keep them running until they exit. The program should read data from STDIN while it is available and output the result to STDOUT, and it can wait for the next block of data on STDIN. ClickHouse will not close STDIN after processing a block of data but will pipe another chunk of data when needed. The executable script should be ready for this way of data processing — it should poll STDIN and flush data to STDOUT early.
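To make that contract concrete, here is a minimal sketch of a pool worker in bash; the TabSeparated exchange and the value logic are illustrative assumptions, not a prescribed interface:

```bash
#!/usr/bin/env bash
# Hypothetical executable pool worker: stay alive between blocks,
# answer every key read from STDIN with a "key<TAB>value" row,
# and emit each row promptly so ClickHouse is not left waiting on a buffer.
while read -r key; do
    printf '%s\t%s\n' "$key" "value_for_$key"
done
```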
@ -581,6 +581,7 @@ Example of settings:

``` xml
        <db>default</db>
        <table>ids</table>
        <where>id=10</where>
        <secure>1</secure>
    </clickhouse>
</source>
```

@ -596,6 +597,7 @@ SOURCE(CLICKHOUSE(

``` sql
    db 'default'
    table 'ids'
    where 'id=10'
    secure 1
))
```

@ -609,6 +611,7 @@ Setting fields:

- `table` – Name of the table.
- `where` – The selection criteria. May be omitted.
- `invalidate_query` – Query for checking the dictionary status. Optional parameter. Read more in the section [Updating dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md).
- `secure` - Use SSL for the connection.

### Mongodb {#dicts-external_dicts_dict_sources-mongodb}
@ -12,7 +12,7 @@ For information on connecting and configuring external dictionaries, see [Extern

## dictGet, dictGetOrDefault, dictGetOrNull {#dictget}

Retrieves values from an external dictionary.

``` sql
dictGet('dict_name', attr_names, id_expr)
```

@ -24,7 +24,7 @@ dictGetOrNull('dict_name', attr_name, id_expr)

- `dict_name` — Name of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal).
- `attr_names` — Name of the column of the dictionary, [String literal](../../sql-reference/syntax.md#syntax-string-literal), or tuple of column names, [Tuple](../../sql-reference/data-types/tuple.md)([String literal](../../sql-reference/syntax.md#syntax-string-literal)).
- `id_expr` — Key value. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning a dictionary-key-type value or a [Tuple](../../sql-reference/data-types/tuple.md)-type value, depending on the dictionary configuration.
- `default_value_expr` — Values returned if the dictionary does not contain a row with the `id_expr` key. [Expression](../../sql-reference/syntax.md#syntax-expressions) or [Tuple](../../sql-reference/data-types/tuple.md)([Expression](../../sql-reference/syntax.md#syntax-expressions)), returning the value (or values) in the data types configured for the `attr_names` attribute.

**Returned value**
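As a minimal usage sketch of the signature above (assuming a dictionary named `ext_dict` with a `String` attribute `c1` exists; both names are illustrative):

``` sql
SELECT dictGet('ext_dict', 'c1', toUInt64(1)) AS value;
```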
@ -138,7 +138,7 @@ Configure the external dictionary:

``` xml
            <name>c2</name>
            <type>String</type>
            <null_value></null_value>
        </attribute>
    </attribute>
</structure>
<lifetime>0</lifetime>
</dictionary>
```

@ -237,7 +237,7 @@ dictHas('dict_name', id_expr)

**Arguments**

- `dict_name` — Name of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal).
- `id_expr` — Key value. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning a dictionary-key-type value or a [Tuple](../../sql-reference/data-types/tuple.md)-type value, depending on the dictionary configuration.

**Returned value**

@ -292,16 +292,16 @@ Type: `UInt8`.

Returns first-level children as an array of indexes. It is the inverse transformation for [dictGetHierarchy](#dictgethierarchy).

**Syntax**

``` sql
dictGetChildren(dict_name, key)
```

**Arguments**

- `dict_name` — Name of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal).
- `key` — Key value. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md)-type value.

**Returned values**

@ -339,7 +339,7 @@ SELECT dictGetChildren('hierarchy_flat_dictionary', number) FROM system.numbers

## dictGetDescendant {#dictgetdescendant}

Returns all descendants as if the [dictGetChildren](#dictgetchildren) function was applied `level` times recursively.

**Syntax**

@ -347,9 +347,9 @@ Returns all descendants as if [dictGetChildren](#dictgetchildren) function was a

``` sql
dictGetDescendants(dict_name, key, level)
```

**Arguments**

- `dict_name` — Name of the dictionary. [String literal](../../sql-reference/syntax.md#syntax-string-literal).
- `key` — Key value. [Expression](../../sql-reference/syntax.md#syntax-expressions) returning a [UInt64](../../sql-reference/data-types/int-uint.md)-type value.
- `level` — Hierarchy level. If `level = 0`, returns all descendants to the end. [UInt8](../../sql-reference/data-types/int-uint.md).
@ -5,15 +5,186 @@ toc_title: Logical

# Logical Functions {#logical-functions}

Performs logical operations on arguments of any numeric types, but returns a [UInt8](../../sql-reference/data-types/int-uint.md) number equal to 0 or 1, or `NULL` in some cases.

Zero as an argument is considered `false`, while any non-zero value is considered `true`.

## and {#logical-and-function}

Calculates the result of the logical conjunction between two or more values. Corresponds to the [Logical AND Operator](../../sql-reference/operators/index.md#logical-and-operator).

**Syntax**

``` sql
and(val1, val2...)
```

**Arguments**

- `val1, val2, ...` — List of at least two values. [Int](../../sql-reference/data-types/int-uint.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Nullable](../../sql-reference/data-types/nullable.md).

**Returned value**

- `0`, if there is at least one zero value argument.
- `NULL`, if there are no zero value arguments and there is at least one `NULL` argument.
- `1`, otherwise.

Type: [UInt8](../../sql-reference/data-types/int-uint.md) or [Nullable](../../sql-reference/data-types/nullable.md)([UInt8](../../sql-reference/data-types/int-uint.md)).

**Example**

Query:

``` sql
SELECT and(0, 1, -2);
```

Result:

``` text
┌─and(0, 1, -2)─┐
│             0 │
└───────────────┘
```

With `NULL`:

``` sql
SELECT and(NULL, 1, 10, -2);
```

Result:

``` text
┌─and(NULL, 1, 10, -2)─┐
│                 ᴺᵁᴸᴸ │
└──────────────────────┘
```

## or {#logical-or-function}

Calculates the result of the logical disjunction between two or more values. Corresponds to the [Logical OR Operator](../../sql-reference/operators/index.md#logical-or-operator).

**Syntax**

``` sql
or(val1, val2...)
```

**Arguments**

- `val1, val2, ...` — List of at least two values. [Int](../../sql-reference/data-types/int-uint.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Nullable](../../sql-reference/data-types/nullable.md).

**Returned value**

- `1`, if there is at least one non-zero value.
- `0`, if there are only zero values.
- `NULL`, if there are only zero values and `NULL`.

Type: [UInt8](../../sql-reference/data-types/int-uint.md) or [Nullable](../../sql-reference/data-types/nullable.md)([UInt8](../../sql-reference/data-types/int-uint.md)).

**Example**

Query:

``` sql
SELECT or(1, 0, 0, 2, NULL);
```

Result:

``` text
┌─or(1, 0, 0, 2, NULL)─┐
│                    1 │
└──────────────────────┘
```

With `NULL`:

``` sql
SELECT or(0, NULL);
```

Result:

``` text
┌─or(0, NULL)─┐
│        ᴺᵁᴸᴸ │
└─────────────┘
```

## not {#logical-not-function}

Calculates the result of the logical negation of the value. Corresponds to the [Logical Negation Operator](../../sql-reference/operators/index.md#logical-negation-operator).

**Syntax**

``` sql
not(val);
```

**Arguments**

- `val` — The value. [Int](../../sql-reference/data-types/int-uint.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Nullable](../../sql-reference/data-types/nullable.md).

**Returned value**

- `1`, if the `val` is `0`.
- `0`, if the `val` is a non-zero value.
- `NULL`, if the `val` is a `NULL` value.

Type: [UInt8](../../sql-reference/data-types/int-uint.md) or [Nullable](../../sql-reference/data-types/nullable.md)([UInt8](../../sql-reference/data-types/int-uint.md)).

**Example**

Query:

``` sql
SELECT NOT(1);
```

Result:

``` text
┌─not(1)─┐
│      0 │
└────────┘
```

## xor {#logical-xor-function}

Calculates the result of the logical exclusive disjunction between two or more values. For more than two values the function works as if it calculates `XOR` of the first two values and then uses the result with the next value to calculate `XOR`, and so on.

**Syntax**

``` sql
xor(val1, val2...)
```

**Arguments**

- `val1, val2, ...` — List of at least two values. [Int](../../sql-reference/data-types/int-uint.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Nullable](../../sql-reference/data-types/nullable.md).

**Returned value**

- `1`, for two values: if one of the values is zero and the other is not.
- `0`, for two values: if both values are zero or non-zero at the same time.
- `NULL`, if there is at least one `NULL` value.

Type: [UInt8](../../sql-reference/data-types/int-uint.md) or [Nullable](../../sql-reference/data-types/nullable.md)([UInt8](../../sql-reference/data-types/int-uint.md)).

**Example**

Query:

``` sql
SELECT xor(0, 1, 1);
```

Result:

``` text
┌─xor(0, 1, 1)─┐
│            0 │
└──────────────┘
```
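To make the left-to-right folding concrete, the following check should return `1` by the definition above:

``` sql
SELECT xor(1, 0, 1) = xor(xor(1, 0), 1);
```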
@ -87,6 +87,8 @@ Result:

``` text
└───────┴───────┘
```

Note: the names are implementation specific and are subject to change. You should not assume specific names of the columns after application of the `untuple`.

Example of using an `EXCEPT` expression:

Query:
@ -211,17 +211,17 @@ SELECT toDateTime('2014-10-26 00:00:00', 'Europe/Moscow') AS time, time + 60 * 6

- [Interval](../../sql-reference/data-types/special-data-types/interval.md) data type
- [toInterval](../../sql-reference/functions/type-conversion-functions.md#function-tointerval) type conversion functions

## Logical AND Operator {#logical-and-operator}

Syntax `SELECT a AND b` — calculates the logical conjunction of `a` and `b` with the function [and](../../sql-reference/functions/logical-functions.md#logical-and-function).

## Logical OR Operator {#logical-or-operator}

Syntax `SELECT a OR b` — calculates the logical disjunction of `a` and `b` with the function [or](../../sql-reference/functions/logical-functions.md#logical-or-function).

## Logical Negation Operator {#logical-negation-operator}

Syntax `SELECT NOT a` — calculates the logical negation of `a` with the function [not](../../sql-reference/functions/logical-functions.md#logical-not-function).

## Conditional Operator {#conditional-operator}
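The mapping is easy to observe in practice, because the result column names show the rewritten function calls (a small illustrative query):

``` sql
SELECT 1 AND 2, 1 OR 0, NOT 0;
-- The result columns are named and(1, 2), or(1, 0), and not(0).
```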
@ -61,4 +61,4 @@ clickhouse client --secure -h play-api.clickhouse.tech --port 9440 -u playground

The Playground backend is a ClickHouse cluster without any additional server-side applications. As mentioned above, the HTTPS and TCP/TLS connection methods are publicly available as part of the Playground. They are proxied through [Cloudflare Spectrum](https://www.cloudflare.com/products/cloudflare-spectrum/) to add an extra layer of protection and improved global connectivity.

!!! warning "Warning"
    Exposing a ClickHouse server to public access in any other situation is **strongly discouraged**. Make sure it is available only on a private network and is protected by a firewall.
BIN docs/ru/images/play.png (new binary file, not shown; 26 KiB)
@ -5,30 +5,33 @@ toc_title: "HTTP-интерфейс"

# HTTP Interface {#http-interface}

The HTTP interface lets you use ClickHouse on any platform, from any programming language. We use it for working from Java and Perl, as well as from shell scripts. In other departments, the HTTP interface is used from Perl, Python, and Go. The HTTP interface is more limited than the native interface, but it has better compatibility.

By default, `clickhouse-server` listens for HTTP on port 8123 (this can be changed in the config).
If you make a `GET /` request without parameters, it returns the string defined by the [http_server_default_response](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-http_server_default_response) setting; the default value is “Ok.” (with a line feed at the end).

``` bash
$ curl 'http://localhost:8123/'
Ok.
```

The web UI is available at: `http://localhost:8123/play`.

![Web UI](../images/play.png)

In health-check scripts you can use `GET /ping` without parameters. If the server is available, it always returns “Ok.” (with a line feed at the end).

``` bash
$ curl 'http://localhost:8123/ping'
Ok.
```

The query is sent either as a URL parameter named `query`, or as the request body when using the POST method, or the beginning of the query is sent in the `query` URL parameter and the rest via POST (why this is needed is explained below). The size of the URL is limited to 16 KB, so keep this in mind when sending large queries.

On success, you receive the 200 response code and the result of query processing in the response body; on error, you receive the 500 response code and a description of the error in the response body.

When using the GET method, the `readonly` setting is applied. In other words, for queries that modify data, you can only use the POST method. The query itself can be sent either in the POST body or in the URL parameter.

Examples:

@ -51,8 +54,8 @@ X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","writ

``` text
1
```

As you can see, `curl` is somewhat inconvenient in that spaces must be URL escaped.
Although `wget` escapes everything itself, we do not recommend using it because it does not work well over HTTP 1.1 when using `keep-alive` and `Transfer-Encoding: chunked`.

``` bash
$ echo 'SELECT 1' | curl 'http://localhost:8123/' --data-binary @-
1
```

@ -65,7 +68,7 @@ $ echo '1' | curl 'http://localhost:8123/?query=SELECT' --data-binary @-

``` text
1
```

If part of the query is sent in the parameter and another part via POST, a line feed is inserted between these two pieces of data.
Example (this won't work):

@ -75,9 +78,9 @@ ECT 1

``` text
, expected One of: SHOW TABLES, SHOW DATABASES, SELECT, INSERT, CREATE, ATTACH, RENAME, DROP, DETACH, USE, SET, OPTIMIZE., e.what() = DB::Exception
```

By default, data is returned in the [TabSeparated](formats.md#tabseparated) format.

You can request any other format using the FORMAT clause of the query.

In addition, you can use the `default_format` URL parameter or the `X-ClickHouse-Format` header to specify a default format other than `TabSeparated`.
|
||||
└───┘
|
||||
```
|
||||
|
||||
Возможность передавать данные POST-ом нужна для INSERT-запросов. В этом случае вы можете написать начало запроса в параметре URL, а вставляемые данные передать POST-ом. Вставляемыми данными может быть, например, tab-separated дамп, полученный из MySQL. Таким образом, запрос INSERT заменяет LOAD DATA LOCAL INFILE из MySQL.
|
||||
Возможность передавать данные с помощью POST нужна для запросов `INSERT`. В этом случае вы можете написать начало запроса в параметре URL, а вставляемые данные передать POST запросом. Вставляемыми данными может быть, например, tab-separated дамп, полученный из MySQL. Таким образом, запрос `INSERT` заменяет `LOAD DATA LOCAL INFILE` из MySQL.
|
||||
|
||||
**Примеры**
|
||||
|
||||
Примеры:
|
||||
Создаём таблицу:
|
||||
|
||||
``` bash
|
||||
@ -147,7 +151,7 @@ $ curl 'http://localhost:8123/?query=SELECT%20a%20FROM%20t'
|
||||
$ echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @-
|
||||
```
|
||||
|
||||
Для запросов, которые не возвращают таблицу с данными, в случае успеха, выдаётся пустое тело ответа.
|
||||
Для запросов, которые не возвращают таблицу с данными, в случае успеха выдаётся пустое тело ответа.
|
||||
|
||||
|
||||
## Сжатие {#compression}
|
||||
@ -165,7 +169,7 @@ $ echo 'DROP TABLE t' | curl 'http://localhost:8123/' --data-binary @-
|
||||
- `deflate`
|
||||
- `xz`
|
||||
|
||||
Для отправки сжатого запроса `POST`, добавьте заголовок `Content-Encoding: compression_method`.
|
||||
Для отправки сжатого запроса `POST` добавьте заголовок `Content-Encoding: compression_method`.
|
||||
Чтобы ClickHouse сжимал ответ, разрешите сжатие настройкой [enable_http_compression](../operations/settings/settings.md#settings-enable_http_compression) и добавьте заголовок `Accept-Encoding: compression_method`. Уровень сжатия данных для всех методов сжатия можно задать с помощью настройки [http_zlib_compression_level](../operations/settings/settings.md#settings-http_zlib_compression_level).
|
||||
|
||||
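For example, to receive a gzip-compressed response (a sketch; `enable_http_compression` is passed here as a URL parameter):

``` bash
$ curl -sS -H 'Accept-Encoding: gzip' \
    'http://localhost:8123/?enable_http_compression=1&query=SELECT%20number%20FROM%20system.numbers%20LIMIT%203' | gunzip -
```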
!!! note "Note"

@ -281,13 +285,13 @@ X-ClickHouse-Progress: {"read_rows":"8783786","read_bytes":"819092887","total_ro

The HTTP interface lets you pass external data (external temporary tables) for use in a query. For details, see the section “External data for query processing”.

## Response Buffering {#response-buffering}

It is possible to enable response buffering on the server side. The `buffer_size` and `wait_end_of_query` URL parameters are provided for this purpose.

`buffer_size` determines the number of bytes of the result to buffer in the server memory. If the result body is larger than this threshold, the buffer is written to the HTTP channel, and the remaining data is sent directly to the HTTP channel.

To ensure that the entire response is buffered, set `wait_end_of_query=1`. In this case, data that does not fit in memory is buffered in a temporary server file.

Example:

@ -295,7 +299,7 @@ HTTP интерфейс позволяет передать внешние да

``` bash
$ curl -sS 'http://localhost:8123/?max_result_bytes=4000000&buffer_size=3000000&wait_end_of_query=1' -d 'SELECT toUInt8(number) FROM system.numbers LIMIT 9000000 FORMAT RowBinary'
```

Buffering avoids situations where the response code and HTTP headers have already been sent to the client and a query execution error occurred afterwards. In such a situation, the error message is written at the end of the response body, and on the client side the error can only be detected at the parsing stage.

### Queries with Parameters {#cli-queries-with-parameters}

@ -634,4 +638,3 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'

``` text
<html><body>Relative Path File</body></html>
* Connection #0 to host localhost left intact
```
@ -5,15 +5,186 @@ toc_title: "Логические функции"

# Logical Functions {#logicheskie-funktsii}

Performs logical operations on arguments of any numeric types, but returns a [UInt8](../../sql-reference/data-types/int-uint.md) number equal to 0 or 1, or `NULL` in some cases.

Zero as an argument is considered `false`, while any non-zero value is considered `true`.

## and {#logical-and-function}

Calculates the result of the logical conjunction between two or more values. Corresponds to the [Logical AND Operator](../../sql-reference/operators/index.md#logical-and-operator).

**Syntax**

``` sql
and(val1, val2...)
```

**Arguments**

- `val1, val2, ...` — List of at least two values. [Int](../../sql-reference/data-types/int-uint.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Nullable](../../sql-reference/data-types/nullable.md).

**Returned value**

- `0`, if there is at least one zero argument.
- `NULL`, if there are no zero arguments but there is at least one `NULL` argument.
- `1`, otherwise.

Type: [UInt8](../../sql-reference/data-types/int-uint.md) or [Nullable](../../sql-reference/data-types/nullable.md)([UInt8](../../sql-reference/data-types/int-uint.md)).

**Example**

Query:

``` sql
SELECT and(0, 1, -2);
```

Result:

``` text
┌─and(0, 1, -2)─┐
│             0 │
└───────────────┘
```

With `NULL`:

``` sql
SELECT and(NULL, 1, 10, -2);
```

Result:

``` text
┌─and(NULL, 1, 10, -2)─┐
│                 ᴺᵁᴸᴸ │
└──────────────────────┘
```

## or {#logical-or-function}

Calculates the result of the logical disjunction between two or more values. Corresponds to the [Logical OR Operator](../../sql-reference/operators/index.md#logical-or-operator).

**Syntax**

``` sql
or(val1, val2...)
```

**Arguments**

- `val1, val2, ...` — List of at least two values. [Int](../../sql-reference/data-types/int-uint.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Nullable](../../sql-reference/data-types/nullable.md).

**Returned value**

- `1`, if there is at least one non-zero argument.
- `0`, if there are only zero arguments.
- `NULL`, if there are no non-zero values and there is `NULL`.

Type: [UInt8](../../sql-reference/data-types/int-uint.md) or [Nullable](../../sql-reference/data-types/nullable.md)([UInt8](../../sql-reference/data-types/int-uint.md)).

**Example**

Query:

``` sql
SELECT or(1, 0, 0, 2, NULL);
```

Result:

``` text
┌─or(1, 0, 0, 2, NULL)─┐
│                    1 │
└──────────────────────┘
```

With `NULL`:

``` sql
SELECT or(0, NULL);
```

Result:

``` text
┌─or(0, NULL)─┐
│        ᴺᵁᴸᴸ │
└─────────────┘
```

## not {#logical-not-function}

Calculates the result of the logical negation of the argument. Corresponds to the [Logical Negation Operator](../../sql-reference/operators/index.md#logical-negation-operator).

**Syntax**

``` sql
not(val);
```

**Arguments**

- `val` — The value. [Int](../../sql-reference/data-types/int-uint.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Nullable](../../sql-reference/data-types/nullable.md).

**Returned value**

- `1`, if `val` is `0`.
- `0`, if `val` is a non-zero number.
- `NULL`, if `val` is `NULL`.

Type: [UInt8](../../sql-reference/data-types/int-uint.md) or [Nullable](../../sql-reference/data-types/nullable.md)([UInt8](../../sql-reference/data-types/int-uint.md)).

**Example**

Query:

``` sql
SELECT NOT(1);
```

Result:

``` text
┌─not(1)─┐
│      0 │
└────────┘
```

## xor {#logical-xor-function}

Calculates the result of the logical exclusive disjunction between two or more values. For more than two values the function works as follows: it first calculates `XOR` of the first two values, then uses the result with the next value to calculate `XOR`, and so on.

**Syntax**

``` sql
xor(val1, val2...)
```

**Arguments**

- `val1, val2, ...` — List of at least two values. [Int](../../sql-reference/data-types/int-uint.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Nullable](../../sql-reference/data-types/nullable.md).

**Returned value**

- `1`, for two values: if one of the values is zero and the other is not.
- `0`, for two values: if both values are zero or both are non-zero.
- `NULL`, if there is at least one `NULL` argument.

Type: [UInt8](../../sql-reference/data-types/int-uint.md) or [Nullable](../../sql-reference/data-types/nullable.md)([UInt8](../../sql-reference/data-types/int-uint.md)).

**Example**

Query:

``` sql
SELECT xor(0, 1, 1);
```

Result:

``` text
┌─xor(0, 1, 1)─┐
│            0 │
└──────────────┘
```
@ -211,17 +211,17 @@ SELECT toDateTime('2014-10-26 00:00:00', 'Europe/Moscow') AS time, time + 60 * 6

- The [Interval](../../sql-reference/operators/index.md) data type
- The [toInterval](../../sql-reference/operators/index.md#function-tointerval) type conversion functions

## Logical AND Operator {#logical-and-operator}

Syntax `SELECT a AND b` — calculates the logical conjunction of `a` and `b` with the [and](../../sql-reference/functions/logical-functions.md#logical-and-function) function.

## Logical OR Operator {#logical-or-operator}

Syntax `SELECT a OR b` — calculates the logical disjunction of `a` and `b` with the [or](../../sql-reference/functions/logical-functions.md#logical-or-function) function.

## Logical Negation Operator {#logical-negation-operator}

Syntax `SELECT NOT a` — calculates the logical negation of `a` with the [not](../../sql-reference/functions/logical-functions.md#logical-not-function) function.

## Conditional Operator {#uslovnyi-operator}
@ -81,7 +81,7 @@ SELECT bitmapToArray(bitmapSubsetInRange(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,

**Example**

``` sql
SELECT bitmapToArray(bitmapSubsetLimit(bitmapBuild([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,100,200,500]), toUInt32(30), toUInt32(200))) AS res
```

``` text
┌─res───────────────────────┐
```

@ -174,7 +174,7 @@ SELECT bitmapToArray(bitmapAnd(bitmapBuild([1,2,3]),bitmapBuild([3,4,5]))) AS re

``` text
│ [3] │
└─────┘
```

## Bitmap OR {#bitmapor}

Performs an OR operation on two bitmap objects and returns a new bitmap object.
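A short usage sketch, mirroring the `bitmapAnd` example above; the expected result is the union `[1,2,3,4,5]`:

``` sql
SELECT bitmapToArray(bitmapOr(bitmapBuild([1,2,3]), bitmapBuild([3,4,5]))) AS res;
```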
@ -1,13 +1,8 @@

---
toc_priority: 42
toc_title: mysql
---

# mysql {#mysql}

Allows `SELECT` and `INSERT` queries to be performed on data stored on a remote MySQL server.

**Syntax**

``` sql
mysql('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_duplicate_clause']);
@ -15,31 +10,44 @@ mysql('host:port', 'database', 'table', 'user', 'password'[, replace_query, 'on_

**Parameters**

- `host:port` — MySQL server address.

- `database` — Remote database name.

- `table` — Remote table name.

- `user` — MySQL user.

- `password` — User password.

- `replace_query` — Flag that converts an `INSERT INTO` query into `REPLACE INTO`. If `replace_query=1`, the query is substituted.

- `on_duplicate_clause` — Adds the `ON DUPLICATE KEY on_duplicate_clause` expression to the `INSERT` query. It can only be used with `replace_query = 0`; if you set both `replace_query = 1` and `on_duplicate_clause`, ClickHouse throws an exception.

Example: `INSERT INTO t (c1,c2) VALUES ('a', 2) ON DUPLICATE KEY UPDATE c2 = c2 + 1`

Here, `on_duplicate_clause` is `UPDATE c2 = c2 + 1`. See the MySQL documentation to find out which `on_duplicate_clause` you can use with the `ON DUPLICATE KEY` clause.

Simple `WHERE` clauses such as `=, !=, >, >=, <, <=` are executed on the MySQL server immediately. The remaining conditions and the `LIMIT` sampling constraint are executed in ClickHouse only after the query to MySQL finishes.

Multiple replicas can be queried by listing them with `|`, as in the following examples:

```sql
SELECT name FROM mysql(`mysql{1|2|3}:3306`, 'mysql_database', 'mysql_table', 'user', 'password');
```

or

```sql
SELECT name FROM mysql(`mysql1:3306|mysql2:3306|mysql3:3306`, 'mysql_database', 'mysql_table', 'user', 'password');
```

**Returned value**

A table object with the same columns as the original MySQL table.

!!! note "Note"
    In an `INSERT` query, to distinguish the table function `mysql(...)` from a table name with a list of column names, you must use the keywords `FUNCTION` or `TABLE FUNCTION`. See the examples below.

## Usage Example {#usage-example}

@ -66,7 +74,7 @@ mysql> select * from test;

``` text
1 row in set (0,00 sec)
```

Querying data from ClickHouse:

``` sql
SELECT * FROM mysql('localhost:3306', 'test', 'test', 'bayonet', '123')
```

@ -78,6 +86,21 @@ SELECT * FROM mysql('localhost:3306', 'test', 'test', 'bayonet', '123')

``` text
└────────┴──────────────┴───────┴────────────────┘
```

Replacing and inserting:

```sql
INSERT INTO FUNCTION mysql('localhost:3306', 'test', 'test', 'bayonet', '123', 1) (int_id, float) VALUES (1, 3);
INSERT INTO TABLE FUNCTION mysql('localhost:3306', 'test', 'test', 'bayonet', '123', 0, 'UPDATE int_id = int_id + 1') (int_id, float) VALUES (1, 4);
SELECT * FROM mysql('localhost:3306', 'test', 'test', 'bayonet', '123');
```

```text
┌─int_id─┬─float─┐
│      1 │     3 │
│      2 │     4 │
└────────┴───────┘
```

## See Also {#see-also}

- [The 'MySQL' table engine](../../engines/table-engines/integrations/mysql.md)
@ -431,6 +431,7 @@ private:

        {TokenType::ClosingRoundBracket, Replxx::Color::BROWN},
        {TokenType::OpeningSquareBracket, Replxx::Color::BROWN},
        {TokenType::ClosingSquareBracket, Replxx::Color::BROWN},
        {TokenType::DoubleColon, Replxx::Color::BROWN},
        {TokenType::OpeningCurlyBrace, Replxx::Color::INTENSE},
        {TokenType::ClosingCurlyBrace, Replxx::Color::INTENSE},
@ -388,24 +388,32 @@ void LocalServer::processQueries()

    /// Use the same query_id (and thread group) for all queries
    CurrentThread::QueryScope query_scope_holder(context);

    /// Set progress show
    need_render_progress = config().getBool("progress", false);

    std::function<void()> finalize_progress;
    if (need_render_progress)
    {
        /// Set progress callback, which can be run from multiple threads.
        context->setProgressCallback([&](const Progress & value)
        {
            /// Write progress only if progress was updated
            if (progress_indication.updateProgress(value))
                progress_indication.writeProgress();
        });

        /// Set finalizing callback for progress, which is called right before finalizing query output.
        finalize_progress = [&]()
        {
            progress_indication.clearProgressOutput();
        };

        /// Set callback for file processing progress.
        progress_indication.setFileProgressCallback(context);
    }

    bool echo_queries = config().hasOption("echo") || config().hasOption("verbose");

    std::exception_ptr exception;

    for (const auto & query : queries)

@ -425,7 +433,7 @@ void LocalServer::processQueries()

    try
    {
        executeQuery(read_buf, write_buf, /* allow_into_outfile = */ true, context, {}, finalize_progress);
    }
    catch (...)
    {
@ -283,6 +283,29 @@

    color: var(--link-color);
    text-decoration: none;
}

/* This is for graph in svg */
text
{
    font-size: 14px;
    fill: var(--text-color);
}

.node rect
{
    fill: var(--element-background-color);
    filter: drop-shadow(.2rem .2rem .2rem var(--shadow-color));
}

.edgePath path
{
    stroke: var(--text-color);
}

marker
{
    fill: var(--text-color);
}
</style>
</head>

@ -305,6 +328,7 @@

<table class="monospace shadow" id="data-table"></table>
<pre class="monospace shadow" id="data-unparsed"></pre>
</div>
<svg id="graph" fill="none"></svg>
<p id="error" class="monospace shadow">
</p>
</body>

@ -447,6 +471,12 @@

        table.removeChild(table.lastChild);
    }

    let graph = document.getElementById('graph');
    while (graph.firstChild) {
        graph.removeChild(graph.lastChild);
    }
    graph.style.display = 'none';

    document.getElementById('data-unparsed').innerText = '';
    document.getElementById('data-unparsed').style.display = 'none';

@ -461,12 +491,21 @@

    function renderResult(response)
    {
        //console.log(response);
        clear();

        let stats = document.getElementById('stats');
        stats.innerText = 'Elapsed: ' + response.statistics.elapsed.toFixed(3) + " sec, read " + response.statistics.rows_read + " rows.";

        /// We can also render graphs if user performed EXPLAIN PIPELINE graph=1.
        if (response.data.length > 3 && response.data[0][0] === "digraph" && document.getElementById('query').value.match(/^\s*EXPLAIN/i)) {
            renderGraph(response);
        } else {
            renderTable(response);
        }
    }

    function renderTable(response)
    {
        let thead = document.createElement('thead');
        for (let idx in response.meta) {
            let th = document.createElement('th');

@ -559,6 +598,51 @@

        document.getElementById('error').style.display = 'block';
    }

    /// Huge JS libraries should be loaded only if needed.
    function loadJS(src) {
        return new Promise((resolve, reject) => {
            const script = document.createElement('script');
            script.src = src;
            script.addEventListener('load', function() { resolve(true); });
            document.head.appendChild(script);
        });
    }

    let load_dagre_promise;
    function loadDagre() {
        if (load_dagre_promise) { return load_dagre_promise; }

        load_dagre_promise = Promise.all([
            loadJS('https://dagrejs.github.io/project/dagre/v0.8.5/dagre.min.js'),
            loadJS('https://dagrejs.github.io/project/graphlib-dot/v0.6.4/graphlib-dot.min.js'),
            loadJS('https://dagrejs.github.io/project/dagre-d3/v0.6.4/dagre-d3.min.js'),
            loadJS('https://cdn.jsdelivr.net/npm/d3@7.0.0'),
        ]);

        return load_dagre_promise;
    }

    async function renderGraph(response)
    {
        await loadDagre();

        /// https://github.com/dagrejs/dagre-d3/issues/131
        const dot = response.data.reduce((acc, row) => acc + '\n' + row[0].replace(/shape\s*=\s*box/g, 'shape=rect'));

        let graph = graphlibDot.read(dot);
        graph.graph().rankdir = 'TB';

        let render = new dagreD3.render();

        let svg = document.getElementById('graph');
        svg.style.display = 'block';

        render(d3.select("#graph"), graph);

        svg.style.width = graph.graph().width;
        svg.style.height = graph.graph().height;
    }

    function setColorTheme(theme)
    {
        window.localStorage.setItem('theme', theme);
@ -173,6 +173,7 @@ enum class AccessType

    M(MONGO, "", GLOBAL, SOURCES) \
    M(MYSQL, "", GLOBAL, SOURCES) \
    M(POSTGRES, "", GLOBAL, SOURCES) \
    M(SQLITE, "", GLOBAL, SOURCES) \
    M(ODBC, "", GLOBAL, SOURCES) \
    M(JDBC, "", GLOBAL, SOURCES) \
    M(HDFS, "", GLOBAL, SOURCES) \
@@ -9,6 +9,14 @@

#include <AggregateFunctions/IAggregateFunction.h>

#if !defined(ARCADIA_BUILD)
# include <Common/config.h>
#endif

#if USE_EMBEDDED_COMPILER
# include <llvm/IR/IRBuilder.h>
# include <DataTypes/Native.h>
#endif

namespace DB
{

@@ -21,6 +29,21 @@ struct AggregateFunctionGroupBitOrData
    T value = 0;
    static const char * name() { return "groupBitOr"; }
    void update(T x) { value |= x; }

#if USE_EMBEDDED_COMPILER

    static void compileCreate(llvm::IRBuilderBase & builder, llvm::Value * value_ptr)
    {
        auto type = toNativeType<T>(builder);
        builder.CreateStore(llvm::Constant::getNullValue(type), value_ptr);
    }

    static llvm::Value * compileUpdate(llvm::IRBuilderBase & builder, llvm::Value * lhs, llvm::Value * rhs)
    {
        return builder.CreateOr(lhs, rhs);
    }

#endif
};

template <typename T>
@@ -29,6 +52,21 @@ struct AggregateFunctionGroupBitAndData
    T value = -1; /// Two's complement arithmetic, sign extension.
    static const char * name() { return "groupBitAnd"; }
    void update(T x) { value &= x; }

#if USE_EMBEDDED_COMPILER

    static void compileCreate(llvm::IRBuilderBase & builder, llvm::Value * value_ptr)
    {
        auto type = toNativeType<T>(builder);
        builder.CreateStore(llvm::ConstantInt::get(type, -1), value_ptr);
    }

    static llvm::Value * compileUpdate(llvm::IRBuilderBase & builder, llvm::Value * lhs, llvm::Value * rhs)
    {
        return builder.CreateAnd(lhs, rhs);
    }

#endif
};

template <typename T>
@@ -37,6 +75,21 @@ struct AggregateFunctionGroupBitXorData
    T value = 0;
    static const char * name() { return "groupBitXor"; }
    void update(T x) { value ^= x; }

#if USE_EMBEDDED_COMPILER

    static void compileCreate(llvm::IRBuilderBase & builder, llvm::Value * value_ptr)
    {
        auto type = toNativeType<T>(builder);
        builder.CreateStore(llvm::Constant::getNullValue(type), value_ptr);
    }

    static llvm::Value * compileUpdate(llvm::IRBuilderBase & builder, llvm::Value * lhs, llvm::Value * rhs)
    {
        return builder.CreateXor(lhs, rhs);
    }

#endif
};

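Each Data struct above pairs an interpreted update() with JIT hooks (compileCreate/compileUpdate) that must agree on semantics: the stored initial value is the operator's identity element (0 for OR/XOR, all-ones for AND). A minimal standalone C++ sketch (not part of the commit) of the fold both paths implement:

    #include <cstdint>
    #include <vector>

    template <typename T, typename Op>
    T foldBits(const std::vector<T> & values, T init, Op op)
    {
        T state = init;           // what compileCreate stores into the aggregate state
        for (T x : values)
            state = op(state, x); // what update() / compileUpdate compute per value
        return state;
    }

    int main()
    {
        std::vector<uint8_t> v{0b0011, 0b0101};
        uint8_t or_result  = foldBits<uint8_t>(v, 0,    [](uint8_t a, uint8_t b) { return a | b; }); // 0b0111
        uint8_t and_result = foldBits<uint8_t>(v, 0xFF, [](uint8_t a, uint8_t b) { return a & b; }); // 0b0001
        uint8_t xor_result = foldBits<uint8_t>(v, 0,    [](uint8_t a, uint8_t b) { return a ^ b; }); // 0b0110
        return or_result + and_result + xor_result;
    }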
@@ -45,7 +98,7 @@ template <typename T, typename Data>
class AggregateFunctionBitwise final : public IAggregateFunctionDataHelper<Data, AggregateFunctionBitwise<T, Data>>
{
public:
    AggregateFunctionBitwise(const DataTypePtr & type)
    explicit AggregateFunctionBitwise(const DataTypePtr & type)
        : IAggregateFunctionDataHelper<Data, AggregateFunctionBitwise<T, Data>>({type}, {}) {}

    String getName() const override { return Data::name(); }

@@ -81,6 +134,68 @@ public:
    {
        assert_cast<ColumnVector<T> &>(to).getData().push_back(this->data(place).value);
    }

#if USE_EMBEDDED_COMPILER

    bool isCompilable() const override
    {
        auto return_type = getReturnType();
        return canBeNativeType(*return_type);
    }

    void compileCreate(llvm::IRBuilderBase & builder, llvm::Value * aggregate_data_ptr) const override
    {
        llvm::IRBuilder<> & b = static_cast<llvm::IRBuilder<> &>(builder);

        auto * return_type = toNativeType(b, getReturnType());
        auto * value_ptr = b.CreatePointerCast(aggregate_data_ptr, return_type->getPointerTo());
        Data::compileCreate(builder, value_ptr);
    }

    void compileAdd(llvm::IRBuilderBase & builder, llvm::Value * aggregate_data_ptr, const DataTypes &, const std::vector<llvm::Value *> & argument_values) const override
    {
        llvm::IRBuilder<> & b = static_cast<llvm::IRBuilder<> &>(builder);

        auto * return_type = toNativeType(b, getReturnType());

        auto * value_ptr = b.CreatePointerCast(aggregate_data_ptr, return_type->getPointerTo());
        auto * value = b.CreateLoad(return_type, value_ptr);

        const auto & argument_value = argument_values[0];
        auto * result_value = Data::compileUpdate(builder, value, argument_value);

        b.CreateStore(result_value, value_ptr);
    }

    void compileMerge(llvm::IRBuilderBase & builder, llvm::Value * aggregate_data_dst_ptr, llvm::Value * aggregate_data_src_ptr) const override
    {
        llvm::IRBuilder<> & b = static_cast<llvm::IRBuilder<> &>(builder);

        auto * return_type = toNativeType(b, getReturnType());

        auto * value_dst_ptr = b.CreatePointerCast(aggregate_data_dst_ptr, return_type->getPointerTo());
        auto * value_dst = b.CreateLoad(return_type, value_dst_ptr);

        auto * value_src_ptr = b.CreatePointerCast(aggregate_data_src_ptr, return_type->getPointerTo());
        auto * value_src = b.CreateLoad(return_type, value_src_ptr);

        auto * result_value = Data::compileUpdate(builder, value_dst, value_src);

        b.CreateStore(result_value, value_dst_ptr);
    }

    llvm::Value * compileGetResult(llvm::IRBuilderBase & builder, llvm::Value * aggregate_data_ptr) const override
    {
        llvm::IRBuilder<> & b = static_cast<llvm::IRBuilder<> &>(builder);

        auto * return_type = toNativeType(b, getReturnType());
        auto * value_ptr = b.CreatePointerCast(aggregate_data_ptr, return_type->getPointerTo());

        return b.CreateLoad(return_type, value_ptr);
    }

#endif

};

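The compileAdd/compileMerge overrides above emit LLVM IR rather than running C++ directly. As a hedged illustration (equivalent scalar code, not code from the commit), the instruction sequence generated for the groupBitOr case behaves like this; merge is the same operation with the source state as the argument:

    template <typename T>
    void emulateCompiledAdd(char * aggregate_data_ptr, T argument_value)
    {
        T * value_ptr = reinterpret_cast<T *>(aggregate_data_ptr); // CreatePointerCast
        T value = *value_ptr;                                      // CreateLoad
        *value_ptr = value | argument_value;                       // CreateOr (compileUpdate) + CreateStore
    }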
@@ -3,6 +3,7 @@
#include <AggregateFunctions/AggregateFunctionSequenceMatch.h>

#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDate32.h>
#include <DataTypes/DataTypeDateTime.h>

#include <common/range.h>
@@ -101,6 +101,24 @@ struct AggregateFunctionSumData
    {
        const auto * end = ptr + count;

        if constexpr (
            (is_integer_v<T> && !is_big_int_v<T>)
            || (IsDecimalNumber<T> && !std::is_same_v<T, Decimal256> && !std::is_same_v<T, Decimal128>))
        {
            /// For integers we can vectorize the operation if we replace the null check with a multiplication (by 0 for null, 1 for not null).
            /// https://quick-bench.com/q/MLTnfTvwC2qZFVeWHfOBR3U7a8I
            T local_sum{};
            while (ptr < end)
            {
                T multiplier = !*null_map;
                Impl::add(local_sum, *ptr * multiplier);
                ++ptr;
                ++null_map;
            }
            Impl::add(sum, local_sum);
            return;
        }

        if constexpr (std::is_floating_point_v<T>)
        {
            constexpr size_t unroll_count = 128 / sizeof(T);
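The branch-free rewrite above is what lets the compiler auto-vectorize the sum over a nullable column: the data-dependent branch on the null map becomes a multiply by 0 or 1. A minimal standalone sketch of the same trick (assuming ClickHouse's convention that 1 in the null map means NULL):

    #include <cstddef>
    #include <cstdint>

    int64_t sumNotNull(const int32_t * ptr, const uint8_t * null_map, size_t count)
    {
        int64_t local_sum = 0;
        for (size_t i = 0; i < count; ++i)
        {
            int32_t multiplier = !null_map[i];          // 0 for NULL, 1 otherwise
            local_sum += int64_t(ptr[i]) * multiplier;  // no data-dependent branch in the loop body
        }
        return local_sum;
    }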
@@ -4,6 +4,7 @@
#include <AggregateFunctions/FactoryHelpers.h>

#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDate32.h>
#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypeTuple.h>
#include <DataTypes/DataTypeUUID.h>
@@ -49,6 +50,8 @@ AggregateFunctionPtr createAggregateFunctionUniq(const std::string & name, const
        return res;
    else if (which.isDate())
        return std::make_shared<AggregateFunctionUniq<DataTypeDate::FieldType, Data>>(argument_types);
    else if (which.isDate32())
        return std::make_shared<AggregateFunctionUniq<DataTypeDate32::FieldType, Data>>(argument_types);
    else if (which.isDateTime())
        return std::make_shared<AggregateFunctionUniq<DataTypeDateTime::FieldType, Data>>(argument_types);
    else if (which.isStringOrFixedString())
@@ -95,6 +98,8 @@ AggregateFunctionPtr createAggregateFunctionUniq(const std::string & name, const
        return res;
    else if (which.isDate())
        return std::make_shared<AggregateFunctionUniq<DataTypeDate::FieldType, Data<DataTypeDate::FieldType>>>(argument_types);
    else if (which.isDate32())
        return std::make_shared<AggregateFunctionUniq<DataTypeDate32::FieldType, Data<DataTypeDate32::FieldType>>>(argument_types);
    else if (which.isDateTime())
        return std::make_shared<AggregateFunctionUniq<DataTypeDateTime::FieldType, Data<DataTypeDateTime::FieldType>>>(argument_types);
    else if (which.isStringOrFixedString())
@@ -6,6 +6,7 @@
#include <Common/FieldVisitorConvertToNumber.h>

#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDate32.h>
#include <DataTypes/DataTypeDateTime.h>

#include <functional>
@@ -51,6 +52,8 @@ namespace
        return res;
    else if (which.isDate())
        return std::make_shared<typename WithK<K, HashValueType>::template AggregateFunction<DataTypeDate::FieldType>>(argument_types, params);
    else if (which.isDate32())
        return std::make_shared<typename WithK<K, HashValueType>::template AggregateFunction<DataTypeDate32::FieldType>>(argument_types, params);
    else if (which.isDateTime())
        return std::make_shared<typename WithK<K, HashValueType>::template AggregateFunction<DataTypeDateTime::FieldType>>(argument_types, params);
    else if (which.isStringOrFixedString())
@@ -3,6 +3,7 @@
#include <AggregateFunctions/AggregateFunctionUniqUpTo.h>
#include <Common/FieldVisitorConvertToNumber.h>
#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDate32.h>
#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypeFixedString.h>
@@ -61,6 +62,8 @@ AggregateFunctionPtr createAggregateFunctionUniqUpTo(const std::string & name, c
        return res;
    else if (which.isDate())
        return std::make_shared<AggregateFunctionUniqUpTo<DataTypeDate::FieldType>>(threshold, argument_types, params);
    else if (which.isDate32())
        return std::make_shared<AggregateFunctionUniqUpTo<DataTypeDate32::FieldType>>(threshold, argument_types, params);
    else if (which.isDateTime())
        return std::make_shared<AggregateFunctionUniqUpTo<DataTypeDateTime::FieldType>>(threshold, argument_types, params);
    else if (which.isStringOrFixedString())
@@ -4,6 +4,7 @@
#include <AggregateFunctions/Helpers.h>
#include <Core/Settings.h>
#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDate32.h>
#include <DataTypes/DataTypeDateTime.h>

#include <common/range.h>
@@ -76,6 +76,10 @@ add_headers_and_sources(clickhouse_common_io IO)
add_headers_and_sources(clickhouse_common_io IO/S3)
list (REMOVE_ITEM clickhouse_common_io_sources Common/malloc.cpp Common/new_delete.cpp)

if (USE_SQLITE)
    add_headers_and_sources(dbms Databases/SQLite)
endif()

if(USE_RDKAFKA)
    add_headers_and_sources(dbms Storages/Kafka)
endif()
@@ -425,6 +429,10 @@ if (USE_AMQPCPP)
    dbms_target_include_directories (SYSTEM BEFORE PUBLIC ${AMQPCPP_INCLUDE_DIR})
endif()

if (USE_SQLITE)
    dbms_target_link_libraries(PUBLIC sqlite)
endif()

if (USE_CASSANDRA)
    dbms_target_link_libraries(PUBLIC ${CASSANDRA_LIBRARY})
    dbms_target_include_directories (SYSTEM BEFORE PUBLIC ${CASS_INCLUDE_DIR})
@@ -298,11 +298,19 @@ void ConfigProcessor::doIncludesRecursive(
    {
        const auto * subst = attributes->getNamedItem(attr_name);
        attr_nodes[attr_name] = subst;
        substs_count += static_cast<size_t>(subst == nullptr);
        substs_count += static_cast<size_t>(subst != nullptr);
    }

    if (substs_count < SUBSTITUTION_ATTRS.size() - 1) /// only one substitution is allowed
        throw Poco::Exception("several substitutions attributes set for element <" + node->nodeName() + ">");
    if (substs_count > 1) /// only one substitution is allowed
        throw Poco::Exception("More than one substitution attribute is set for element <" + node->nodeName() + ">");

    if (node->nodeName() == "include")
    {
        if (node->hasChildNodes())
            throw Poco::Exception("<include> element must have no children");
        if (substs_count == 0)
            throw Poco::Exception("No substitution attributes set for element <include>, must have exactly one");
    }

    /// Replace the original contents, not add to it.
    bool replace = attributes->getNamedItem("replace");
@@ -320,37 +328,57 @@ void ConfigProcessor::doIncludesRecursive(
        else if (throw_on_bad_incl)
            throw Poco::Exception(error_msg + name);
        else
        {
            if (node->nodeName() == "include")
                node->parentNode()->removeChild(node);

            LOG_WARNING(log, "{}{}", error_msg, name);
        }
    }
    else
    {
        Element & element = dynamic_cast<Element &>(*node);

        for (const auto & attr_name : SUBSTITUTION_ATTRS)
            element.removeAttribute(attr_name);

        if (replace)
        /// Replace the whole node not just contents.
        if (node->nodeName() == "include")
        {
            while (Node * child = node->firstChild())
                node->removeChild(child);
            const NodeListPtr children = node_to_include->childNodes();
            for (size_t i = 0, size = children->length(); i < size; ++i)
            {
                NodePtr new_node = config->importNode(children->item(i), true);
                node->parentNode()->insertBefore(new_node, node);
            }

            element.removeAttribute("replace");
            node->parentNode()->removeChild(node);
        }

        const NodeListPtr children = node_to_include->childNodes();
        for (size_t i = 0, size = children->length(); i < size; ++i)
        else
        {
            NodePtr new_node = config->importNode(children->item(i), true);
            node->appendChild(new_node);
        }
            Element & element = dynamic_cast<Element &>(*node);

        const NamedNodeMapPtr from_attrs = node_to_include->attributes();
        for (size_t i = 0, size = from_attrs->length(); i < size; ++i)
        {
            element.setAttributeNode(dynamic_cast<Attr *>(config->importNode(from_attrs->item(i), true)));
        }
            for (const auto & attr_name : SUBSTITUTION_ATTRS)
                element.removeAttribute(attr_name);

        included_something = true;
            if (replace)
            {
                while (Node * child = node->firstChild())
                    node->removeChild(child);

                element.removeAttribute("replace");
            }

            const NodeListPtr children = node_to_include->childNodes();
            for (size_t i = 0, size = children->length(); i < size; ++i)
            {
                NodePtr new_node = config->importNode(children->item(i), true);
                node->appendChild(new_node);
            }

            const NamedNodeMapPtr from_attrs = node_to_include->attributes();
            for (size_t i = 0, size = from_attrs->length(); i < size; ++i)
            {
                element.setAttributeNode(dynamic_cast<Attr *>(config->importNode(from_attrs->item(i), true)));
            }

            included_something = true;
        }
    }
};

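For context, a hypothetical config sketch (element names and the ZooKeeper path are invented for illustration) of the <include> form this code handles: exactly one substitution attribute, no child elements, and the included node's children are spliced in place of the element itself:

    <?xml version="1.0"?>
    <config>
        <!-- replaced in place by the children of the node stored at this ZooKeeper path -->
        <include from_zk="/clickhouse/config/remote_servers"/>
    </config>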
@@ -10,16 +10,10 @@ namespace fs = std::filesystem;
namespace DB
{

/// Checks if file exists without throwing an exception but with message in console.
bool safeFsExists(const auto & path)
bool safeFsExists(const String & path)
{
    std::error_code ec;
    bool res = fs::exists(path, ec);
    if (ec)
    {
        std::cerr << "Can't check '" << path << "': [" << ec.value() << "] " << ec.message() << std::endl;
    }
    return res;
    return fs::exists(path, ec);
};

bool configReadClient(Poco::Util::LayeredConfiguration & config, const std::string & home_path)
@@ -30,6 +30,8 @@
    M(OpenFileForWrite, "Number of files open for writing") \
    M(Read, "Number of read (read, pread, io_getevents, etc.) syscalls in flight") \
    M(Write, "Number of write (write, pwrite, io_getevents, etc.) syscalls in flight") \
    M(NetworkReceive, "Number of threads receiving data from network. Only ClickHouse-related network interaction is included, not by 3rd party libraries.") \
    M(NetworkSend, "Number of threads sending data to network. Only ClickHouse-related network interaction is included, not by 3rd party libraries.") \
    M(SendScalars, "Number of connections that are sending data for scalars to remote servers.") \
    M(SendExternalTables, "Number of connections that are sending data for external tables to remote servers. External tables are used to implement GLOBAL IN and GLOBAL JOIN operators with distributed subqueries.") \
    M(QueryThread, "Number of query processing threads") \
@@ -558,6 +558,7 @@
    M(588, DISTRIBUTED_BROKEN_BATCH_INFO) \
    M(589, DISTRIBUTED_BROKEN_BATCH_FILES) \
    M(590, CANNOT_SYSCONF) \
    M(591, SQLITE_ENGINE_ERROR) \
    \
    M(998, POSTGRESQL_CONNECTION_FAILURE) \
    M(999, KEEPER_EXCEPTION) \
@@ -117,4 +117,16 @@ public:
    }
};


class FieldVisitorAccurateLessOrEqual : public StaticVisitor<bool>
{
public:
    template <typename T, typename U>
    bool operator()(const T & l, const U & r) const
    {
        auto less_cmp = FieldVisitorAccurateLess();
        return !less_cmp(r, l);
    }
};

}
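The new visitor relies on the total-order identity l <= r ⇔ !(r < l), which is why it can be expressed entirely through FieldVisitorAccurateLess. A tiny self-contained check of that identity:

    #include <cassert>

    int main()
    {
        int l = 1, r = 2;
        assert((l <= r) == !(r < l)); // holds for any totally ordered pair
        assert((2 <= 2) == !(2 < 2)); // including the equality case
        return 0;
    }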
@@ -237,7 +237,12 @@ public:
    // 1. Always memcpy 8 times bytes
    // 2. Use switch case extension to generate fast dispatching table
    // 3. Funcs are named callables that can be force_inlined
    //
    // NOTE: It relies on Little Endianness
    //
    // NOTE: It requires keys padded to 8 bytes (IOW you cannot pass
    // std::string here, but you can pass e.g. ColumnString::getDataAt()),
    // since it copies 8 bytes at a time.
    template <typename Self, typename KeyHolder, typename Func>
    static auto ALWAYS_INLINE dispatch(Self & self, KeyHolder && key_holder, Func && func)
    {
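The padding note above is the crux: the dispatcher always copies whole 8-byte words, so short keys must live in storage that is readable up to the next 8-byte boundary. A standalone sketch of such a load (sizes 1..8 assumed; little-endian, as the NOTE says):

    #include <cstring>
    #include <cstdint>

    uint64_t loadPadded(const char * data, size_t size /* must be 1..8 */)
    {
        uint64_t chunk = 0;
        std::memcpy(&chunk, data, 8); // always 8 bytes: this is why padded storage is required
        if (size < 8)
            chunk &= (~uint64_t{0}) >> (8 * (8 - size)); // mask off the padding tail (little-endian)
        return chunk;
    }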
@@ -22,10 +22,6 @@
    M(WriteBufferFromFileDescriptorWrite, "Number of writes (write/pwrite) to a file descriptor. Does not include sockets.") \
    M(WriteBufferFromFileDescriptorWriteFailed, "Number of times the write (write/pwrite) to a file descriptor has failed.") \
    M(WriteBufferFromFileDescriptorWriteBytes, "Number of bytes written to file descriptors. If the file is compressed, this will show compressed data size.") \
    M(ReadBufferAIORead, "") \
    M(ReadBufferAIOReadBytes, "") \
    M(WriteBufferAIOWrite, "") \
    M(WriteBufferAIOWriteBytes, "") \
    M(ReadCompressedBytes, "Number of bytes (the number of bytes before decompression) read from compressed sources (files, network).") \
    M(CompressedReadBufferBlocks, "Number of compressed blocks (the blocks of data that are compressed independent of each other) read from compressed sources (files, network).") \
    M(CompressedReadBufferBytes, "Number of uncompressed bytes (the number of bytes after decompression) read from compressed sources (files, network).") \
@@ -34,6 +30,10 @@
    M(UncompressedCacheWeightLost, "") \
    M(MMappedFileCacheHits, "") \
    M(MMappedFileCacheMisses, "") \
    M(AIOWrite, "Number of writes with Linux or FreeBSD AIO interface") \
    M(AIOWriteBytes, "Number of bytes written with Linux or FreeBSD AIO interface") \
    M(AIORead, "Number of reads with Linux or FreeBSD AIO interface") \
    M(AIOReadBytes, "Number of bytes read with Linux or FreeBSD AIO interface") \
    M(IOBufferAllocs, "") \
    M(IOBufferAllocBytes, "") \
    M(ArenaAllocChunks, "") \
@@ -43,14 +43,16 @@
    M(MarkCacheHits, "") \
    M(MarkCacheMisses, "") \
    M(CreatedReadBufferOrdinary, "") \
    M(CreatedReadBufferAIO, "") \
    M(CreatedReadBufferAIOFailed, "") \
    M(CreatedReadBufferDirectIO, "") \
    M(CreatedReadBufferDirectIOFailed, "") \
    M(CreatedReadBufferMMap, "") \
    M(CreatedReadBufferMMapFailed, "") \
    M(DiskReadElapsedMicroseconds, "Total time spent waiting for read syscall. This includes reads from page cache.") \
    M(DiskWriteElapsedMicroseconds, "Total time spent waiting for write syscall. This includes writes to page cache.") \
    M(NetworkReceiveElapsedMicroseconds, "") \
    M(NetworkSendElapsedMicroseconds, "") \
    M(NetworkReceiveElapsedMicroseconds, "Total time spent waiting for data to receive or receiving data from network. Only ClickHouse-related network interaction is included, not by 3rd party libraries.") \
    M(NetworkSendElapsedMicroseconds, "Total time spent waiting for data to send to network or sending data to network. Only ClickHouse-related network interaction is included, not by 3rd party libraries.") \
    M(NetworkReceiveBytes, "Total number of bytes received from network. Only ClickHouse-related network interaction is included, not by 3rd party libraries.") \
    M(NetworkSendBytes, "Total number of bytes sent to network. Only ClickHouse-related network interaction is included, not by 3rd party libraries.") \
    M(ThrottlerSleepMicroseconds, "Total time a query was sleeping to conform the 'max_network_bandwidth' setting.") \
    \
    M(QueryMaskingRulesMatch, "Number of times query masking rules were successfully matched.") \
@@ -4,9 +4,6 @@
#include <Common/UnicodeBar.h>
#include <Databases/DatabaseMemory.h>

/// FIXME: progress bar in clickhouse-local needs to be cleared after query execution
/// - same as it is now in clickhouse-client. Also there is no writeFinalProgress call
/// in clickhouse-local.

namespace DB
{
@@ -45,6 +45,8 @@ struct ZooKeeperRequest : virtual Request
    /// If the request was sent and we didn't get the response and an error happened, then we cannot be sure whether it was processed or not.
    bool probably_sent = false;

    bool restored_from_zookeeper_log = false;

    ZooKeeperRequest() = default;
    ZooKeeperRequest(const ZooKeeperRequest &) = default;
    virtual ~ZooKeeperRequest() override = default;
@@ -172,6 +174,9 @@ struct ZooKeeperCloseResponse final : ZooKeeperResponse

struct ZooKeeperCreateRequest final : public CreateRequest, ZooKeeperRequest
{
    /// used only during restore from zookeeper log
    int32_t parent_cversion = -1;

    ZooKeeperCreateRequest() = default;
    explicit ZooKeeperCreateRequest(const CreateRequest & base) : CreateRequest(base) {}

@@ -183,9 +188,6 @@ struct ZooKeeperCreateRequest final : public CreateRequest, ZooKeeperRequest
    bool isReadRequest() const override { return false; }

    size_t bytesSize() const override { return CreateRequest::bytesSize() + sizeof(xid) + sizeof(has_watch); }

    /// During recovery from log we don't rehash ACLs
    bool need_to_hash_acls = true;
};

struct ZooKeeperCreateResponse final : CreateResponse, ZooKeeperResponse
@@ -362,8 +364,6 @@ struct ZooKeeperSetACLRequest final : SetACLRequest, ZooKeeperRequest
    bool isReadRequest() const override { return false; }

    size_t bytesSize() const override { return SetACLRequest::bytesSize() + sizeof(xid); }

    bool need_to_hash_acls = true;
};

struct ZooKeeperSetACLResponse final : SetACLResponse, ZooKeeperResponse
@@ -853,7 +853,8 @@ void ZooKeeper::finalize(bool error_send, bool error_receive)
        }

        /// Send thread will exit after sending close request or on expired flag
        send_thread.join();
        if (send_thread.joinable())
            send_thread.join();
    }

    /// Set expired flag after we sent close event
@@ -870,7 +871,7 @@ void ZooKeeper::finalize(bool error_send, bool error_receive)
        tryLogCurrentException(__PRETTY_FUNCTION__);
    }

    if (!error_receive)
    if (!error_receive && receive_thread.joinable())
        receive_thread.join();

    {
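The joinable() guards added above matter because std::thread::join() throws std::system_error when the thread object is not joinable (never started, already joined, or moved-from). A minimal standalone sketch of the pattern:

    #include <thread>

    void safeJoin(std::thread & t)
    {
        if (t.joinable())
            t.join(); // unconditional join() on a default-constructed thread would throw
    }

    int main()
    {
        std::thread never_started;
        safeJoin(never_started); // no-op instead of std::system_error
        return 0;
    }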
@@ -47,13 +47,13 @@ CompressedReadBufferFromFile::CompressedReadBufferFromFile(std::unique_ptr<ReadB
CompressedReadBufferFromFile::CompressedReadBufferFromFile(
    const std::string & path,
    size_t estimated_size,
    size_t aio_threshold,
    size_t direct_io_threshold,
    size_t mmap_threshold,
    MMappedFileCache * mmap_cache,
    size_t buf_size,
    bool allow_different_codecs_)
    : BufferWithOwnMemory<ReadBuffer>(0)
    , p_file_in(createReadBufferFromFileBase(path, estimated_size, aio_threshold, mmap_threshold, mmap_cache, buf_size))
    , p_file_in(createReadBufferFromFileBase(path, estimated_size, direct_io_threshold, mmap_threshold, mmap_cache, buf_size))
    , file_in(*p_file_in)
{
    compressed_in = &file_in;

@@ -33,7 +33,7 @@ public:
    CompressedReadBufferFromFile(std::unique_ptr<ReadBufferFromFileBase> buf, bool allow_different_codecs_ = false);

    CompressedReadBufferFromFile(
        const std::string & path, size_t estimated_size, size_t aio_threshold, size_t mmap_threshold, MMappedFileCache * mmap_cache,
        const std::string & path, size_t estimated_size, size_t direct_io_threshold, size_t mmap_threshold, MMappedFileCache * mmap_cache,
        size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE, bool allow_different_codecs_ = false);

    void seek(size_t offset_in_compressed_file, size_t offset_in_decompressed_block);
|
||||
}
|
||||
else
|
||||
{
|
||||
|
||||
auto & session_auth_ids = storage.session_and_auth[session_id];
|
||||
|
||||
KeeperStorage::Node created_node;
|
||||
|
||||
Coordination::ACLs node_acls;
|
||||
if (!fixupACL(request.acls, session_auth_ids, node_acls, request.need_to_hash_acls))
|
||||
if (!fixupACL(request.acls, session_auth_ids, node_acls, !request.restored_from_zookeeper_log))
|
||||
{
|
||||
response.error = Coordination::Error::ZINVALIDACL;
|
||||
return {response_ptr, {}};
|
||||
@ -307,16 +306,28 @@ struct KeeperStorageCreateRequest final : public KeeperStorageRequest
|
||||
path_created += seq_num_str.str();
|
||||
}
|
||||
|
||||
int32_t parent_cversion = request.parent_cversion;
|
||||
auto child_path = getBaseName(path_created);
|
||||
int64_t prev_parent_zxid;
|
||||
container.updateValue(parent_path, [child_path, zxid, &prev_parent_zxid] (KeeperStorage::Node & parent)
|
||||
int32_t prev_parent_cversion;
|
||||
container.updateValue(parent_path, [child_path, zxid, &prev_parent_zxid,
|
||||
parent_cversion, &prev_parent_cversion] (KeeperStorage::Node & parent)
|
||||
{
|
||||
|
||||
parent.children.insert(child_path);
|
||||
prev_parent_cversion = parent.stat.cversion;
|
||||
prev_parent_zxid = parent.stat.pzxid;
|
||||
|
||||
/// Increment sequential number even if node is not sequential
|
||||
++parent.seq_num;
|
||||
parent.children.insert(child_path);
|
||||
++parent.stat.cversion;
|
||||
prev_parent_zxid = parent.stat.pzxid;
|
||||
parent.stat.pzxid = zxid;
|
||||
|
||||
if (parent_cversion == -1)
|
||||
++parent.stat.cversion;
|
||||
else if (parent_cversion > parent.stat.cversion)
|
||||
parent.stat.cversion = parent_cversion;
|
||||
|
||||
if (zxid > parent.stat.pzxid)
|
||||
parent.stat.pzxid = zxid;
|
||||
++parent.stat.numChildren;
|
||||
});
|
||||
|
||||
@ -326,7 +337,7 @@ struct KeeperStorageCreateRequest final : public KeeperStorageRequest
|
||||
if (request.is_ephemeral)
|
||||
ephemerals[session_id].emplace(path_created);
|
||||
|
||||
undo = [&storage, prev_parent_zxid, session_id, path_created, is_ephemeral = request.is_ephemeral, parent_path, child_path, acl_id]
|
||||
undo = [&storage, prev_parent_zxid, prev_parent_cversion, session_id, path_created, is_ephemeral = request.is_ephemeral, parent_path, child_path, acl_id]
|
||||
{
|
||||
storage.container.erase(path_created);
|
||||
storage.acl_map.removeUsage(acl_id);
|
||||
@ -334,11 +345,11 @@ struct KeeperStorageCreateRequest final : public KeeperStorageRequest
|
||||
if (is_ephemeral)
|
||||
storage.ephemerals[session_id].erase(path_created);
|
||||
|
||||
storage.container.updateValue(parent_path, [child_path, prev_parent_zxid] (KeeperStorage::Node & undo_parent)
|
||||
storage.container.updateValue(parent_path, [child_path, prev_parent_zxid, prev_parent_cversion] (KeeperStorage::Node & undo_parent)
|
||||
{
|
||||
--undo_parent.stat.cversion;
|
||||
--undo_parent.stat.numChildren;
|
||||
--undo_parent.seq_num;
|
||||
undo_parent.stat.cversion = prev_parent_cversion;
|
||||
undo_parent.stat.pzxid = prev_parent_zxid;
|
||||
undo_parent.children.erase(child_path);
|
||||
});
|
||||
@ -394,6 +405,24 @@ struct KeeperStorageGetRequest final : public KeeperStorageRequest
|
||||
}
|
||||
};
|
||||
|
||||
namespace
|
||||
{
|
||||
/// Garbage required to apply log to "fuzzy" zookeeper snapshot
|
||||
void updateParentPzxid(const std::string & child_path, int64_t zxid, KeeperStorage::Container & container)
|
||||
{
|
||||
auto parent_path = parentPath(child_path);
|
||||
auto parent_it = container.find(parent_path);
|
||||
if (parent_it != container.end())
|
||||
{
|
||||
container.updateValue(parent_path, [zxid](KeeperStorage::Node & parent)
|
||||
{
|
||||
if (parent.stat.pzxid < zxid)
|
||||
parent.stat.pzxid = zxid;
|
||||
});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
struct KeeperStorageRemoveRequest final : public KeeperStorageRequest
|
||||
{
|
||||
bool checkAuth(KeeperStorage & storage, int64_t session_id) const override
|
||||
@ -412,7 +441,7 @@ struct KeeperStorageRemoveRequest final : public KeeperStorageRequest
|
||||
}
|
||||
|
||||
using KeeperStorageRequest::KeeperStorageRequest;
|
||||
std::pair<Coordination::ZooKeeperResponsePtr, Undo> process(KeeperStorage & storage, int64_t /*zxid*/, int64_t /*session_id*/) const override
|
||||
std::pair<Coordination::ZooKeeperResponsePtr, Undo> process(KeeperStorage & storage, int64_t zxid, int64_t /*session_id*/) const override
|
||||
{
|
||||
auto & container = storage.container;
|
||||
auto & ephemerals = storage.ephemerals;
|
||||
@ -425,6 +454,8 @@ struct KeeperStorageRemoveRequest final : public KeeperStorageRequest
|
||||
auto it = container.find(request.path);
|
||||
if (it == container.end())
|
||||
{
|
||||
if (request.restored_from_zookeeper_log)
|
||||
updateParentPzxid(request.path, zxid, container);
|
||||
response.error = Coordination::Error::ZNONODE;
|
||||
}
|
||||
else if (request.version != -1 && request.version != it->value.stat.version)
|
||||
@ -437,6 +468,9 @@ struct KeeperStorageRemoveRequest final : public KeeperStorageRequest
|
||||
}
|
||||
else
|
||||
{
|
||||
if (request.restored_from_zookeeper_log)
|
||||
updateParentPzxid(request.path, zxid, container);
|
||||
|
||||
auto prev_node = it->value;
|
||||
if (prev_node.stat.ephemeralOwner != 0)
|
||||
{
|
||||
@ -719,7 +753,7 @@ struct KeeperStorageSetACLRequest final : public KeeperStorageRequest
|
||||
auto & session_auth_ids = storage.session_and_auth[session_id];
|
||||
Coordination::ACLs node_acls;
|
||||
|
||||
if (!fixupACL(request.acls, session_auth_ids, node_acls, request.need_to_hash_acls))
|
||||
if (!fixupACL(request.acls, session_auth_ids, node_acls, !request.restored_from_zookeeper_log))
|
||||
{
|
||||
response.error = Coordination::Error::ZINVALIDACL;
|
||||
return {response_ptr, {}};
|
||||
|
@ -174,7 +174,22 @@ void deserializeKeeperStorageFromSnapshot(KeeperStorage & storage, const std::st
|
||||
|
||||
LOG_INFO(log, "Deserializing data from snapshot");
|
||||
int64_t zxid_from_nodes = deserializeStorageData(storage, reader, log);
|
||||
storage.zxid = std::max(zxid, zxid_from_nodes);
|
||||
/// In ZooKeeper Snapshots can contain inconsistent state of storage. They call
|
||||
/// this inconsistent state "fuzzy". So it's guaranteed that snapshot contain all
|
||||
/// records up to zxid from snapshot name and also some records for future.
|
||||
/// But it doesn't mean that we have just some state of storage from future (like zxid + 100 log records).
|
||||
/// We have incorrect state of storage where some random log entries from future were applied....
|
||||
///
|
||||
/// In ZooKeeper they say that their transactions log is idempotent and can be applied to "fuzzy" state as is.
|
||||
/// It's true but there is no any general invariant which produces this property. They just have ad-hoc "if's" which detects
|
||||
/// "fuzzy" state inconsistencies and apply log records in special way. Several examples:
|
||||
/// https://github.com/apache/zookeeper/blob/master/zookeeper-server/src/main/java/org/apache/zookeeper/server/DataTree.java#L453-L463
|
||||
/// https://github.com/apache/zookeeper/blob/master/zookeeper-server/src/main/java/org/apache/zookeeper/server/DataTree.java#L476-L480
|
||||
/// https://github.com/apache/zookeeper/blob/master/zookeeper-server/src/main/java/org/apache/zookeeper/server/DataTree.java#L547-L549
|
||||
if (zxid_from_nodes > zxid)
|
||||
LOG_WARNING(log, "ZooKeeper snapshot was in inconsistent (fuzzy) state. Will try to apply log. ZooKeeper create non fuzzy snapshot with restart. You can just restart ZooKeeper server and get consistent version.");
|
||||
|
||||
storage.zxid = zxid;
|
||||
|
||||
LOG_INFO(log, "Finished, snapshot ZXID {}", storage.zxid);
|
||||
}
|
||||
@ -210,16 +225,18 @@ void deserializeLogMagic(ReadBuffer & in)
|
||||
|
||||
static constexpr int32_t LOG_HEADER = 1514884167; /// "ZKLG"
|
||||
if (magic_header != LOG_HEADER)
|
||||
throw Exception(ErrorCodes::CORRUPTED_DATA ,"Incorrect magic header in file, expected {}, got {}", LOG_HEADER, magic_header);
|
||||
throw Exception(ErrorCodes::CORRUPTED_DATA, "Incorrect magic header in file, expected {}, got {}", LOG_HEADER, magic_header);
|
||||
|
||||
if (version != 2)
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED,"Cannot deserialize ZooKeeper data other than version 2, got version {}", version);
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Cannot deserialize ZooKeeper data other than version 2, got version {}", version);
|
||||
}
|
||||
|
||||
|
||||
/// For some reason zookeeper stores slightly different records in log then
|
||||
/// requests. For example:
|
||||
/// class CreateTxn {
|
||||
/// ZooKeeper transactions log differs from requests. The main reason: to store records in log
|
||||
/// in some "finalized" state (for example with concrete versions).
|
||||
///
|
||||
/// Example:
|
||||
/// class CreateTxn {
|
||||
/// ustring path;
|
||||
/// buffer data;
|
||||
/// vector<org.apache.zookeeper.data.ACL> acl;
|
||||
@ -289,10 +306,9 @@ Coordination::ZooKeeperRequestPtr deserializeCreateTxn(ReadBuffer & in)
|
||||
Coordination::read(result->data, in);
|
||||
Coordination::read(result->acls, in);
|
||||
Coordination::read(result->is_ephemeral, in);
|
||||
result->need_to_hash_acls = false;
|
||||
/// How we should use it? It should just increment on request execution
|
||||
int32_t parent_c_version;
|
||||
Coordination::read(parent_c_version, in);
|
||||
Coordination::read(result->parent_cversion, in);
|
||||
|
||||
result->restored_from_zookeeper_log = true;
|
||||
return result;
|
||||
}
|
||||
|
||||
@ -300,6 +316,7 @@ Coordination::ZooKeeperRequestPtr deserializeDeleteTxn(ReadBuffer & in)
|
||||
{
|
||||
std::shared_ptr<Coordination::ZooKeeperRemoveRequest> result = std::make_shared<Coordination::ZooKeeperRemoveRequest>();
|
||||
Coordination::read(result->path, in);
|
||||
result->restored_from_zookeeper_log = true;
|
||||
return result;
|
||||
}
|
||||
|
||||
@ -309,6 +326,7 @@ Coordination::ZooKeeperRequestPtr deserializeSetTxn(ReadBuffer & in)
|
||||
Coordination::read(result->path, in);
|
||||
Coordination::read(result->data, in);
|
||||
Coordination::read(result->version, in);
|
||||
result->restored_from_zookeeper_log = true;
|
||||
/// It stores version + 1 (which should be, not for request)
|
||||
result->version -= 1;
|
||||
|
||||
@ -320,6 +338,7 @@ Coordination::ZooKeeperRequestPtr deserializeCheckVersionTxn(ReadBuffer & in)
|
||||
std::shared_ptr<Coordination::ZooKeeperCheckRequest> result = std::make_shared<Coordination::ZooKeeperCheckRequest>();
|
||||
Coordination::read(result->path, in);
|
||||
Coordination::read(result->version, in);
|
||||
result->restored_from_zookeeper_log = true;
|
||||
return result;
|
||||
}
|
||||
|
||||
@ -329,14 +348,19 @@ Coordination::ZooKeeperRequestPtr deserializeCreateSession(ReadBuffer & in)
|
||||
int32_t timeout;
|
||||
Coordination::read(timeout, in);
|
||||
result->session_timeout_ms = timeout;
|
||||
result->restored_from_zookeeper_log = true;
|
||||
return result;
|
||||
}
|
||||
|
||||
Coordination::ZooKeeperRequestPtr deserializeCloseSession(ReadBuffer & in)
|
||||
Coordination::ZooKeeperRequestPtr deserializeCloseSession(ReadBuffer & in, bool empty)
|
||||
{
|
||||
std::shared_ptr<Coordination::ZooKeeperCloseRequest> result = std::make_shared<Coordination::ZooKeeperCloseRequest>();
|
||||
std::vector<std::string> data;
|
||||
Coordination::read(data, in);
|
||||
if (!empty)
|
||||
{
|
||||
std::vector<std::string> data;
|
||||
Coordination::read(data, in);
|
||||
}
|
||||
result->restored_from_zookeeper_log = true;
|
||||
return result;
|
||||
}
|
||||
|
||||
@ -356,14 +380,14 @@ Coordination::ZooKeeperRequestPtr deserializeSetACLTxn(ReadBuffer & in)
|
||||
Coordination::read(result->version, in);
|
||||
/// It stores version + 1 (which should be, not for request)
|
||||
result->version -= 1;
|
||||
result->need_to_hash_acls = false;
|
||||
result->restored_from_zookeeper_log = true;
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
Coordination::ZooKeeperRequestPtr deserializeMultiTxn(ReadBuffer & in);
|
||||
|
||||
Coordination::ZooKeeperRequestPtr deserializeTxnImpl(ReadBuffer & in, bool subtxn)
|
||||
Coordination::ZooKeeperRequestPtr deserializeTxnImpl(ReadBuffer & in, bool subtxn, int64_t txn_length = 0)
|
||||
{
|
||||
int32_t type;
|
||||
Coordination::read(type, in);
|
||||
@ -372,6 +396,11 @@ Coordination::ZooKeeperRequestPtr deserializeTxnImpl(ReadBuffer & in, bool subtx
|
||||
if (subtxn)
|
||||
Coordination::read(sub_txn_length, in);
|
||||
|
||||
bool empty_txn = !subtxn && txn_length == 32; /// Possible for old-style CloseTxn's
|
||||
|
||||
if (empty_txn && type != -11)
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Empty non close-session transaction found");
|
||||
|
||||
int64_t in_count_before = in.count();
|
||||
|
||||
switch (type)
|
||||
@ -398,7 +427,7 @@ Coordination::ZooKeeperRequestPtr deserializeTxnImpl(ReadBuffer & in, bool subtx
|
||||
result = deserializeCreateSession(in);
|
||||
break;
|
||||
case -11:
|
||||
result = deserializeCloseSession(in);
|
||||
result = deserializeCloseSession(in, empty_txn);
|
||||
break;
|
||||
case -1:
|
||||
result = deserializeErrorTxn(in);
|
||||
@ -442,7 +471,7 @@ bool hasErrorsInMultiRequest(Coordination::ZooKeeperRequestPtr request)
|
||||
if (request == nullptr)
|
||||
return true;
|
||||
|
||||
for (const auto & subrequest : dynamic_cast<Coordination::ZooKeeperMultiRequest *>(request.get())->requests) //-V522
|
||||
for (const auto & subrequest : dynamic_cast<Coordination::ZooKeeperMultiRequest *>(request.get())->requests) // -V522
|
||||
if (subrequest == nullptr)
|
||||
return true;
|
||||
return false;
|
||||
@ -470,7 +499,7 @@ bool deserializeTxn(KeeperStorage & storage, ReadBuffer & in, Poco::Logger * /*l
|
||||
int64_t time;
|
||||
Coordination::read(time, in);
|
||||
|
||||
Coordination::ZooKeeperRequestPtr request = deserializeTxnImpl(in, false);
|
||||
Coordination::ZooKeeperRequestPtr request = deserializeTxnImpl(in, false, txn_len);
|
||||
|
||||
/// Skip all other bytes
|
||||
int64_t bytes_read = in.count() - count_before;
|
||||
|
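The new txn_length parameter exists because an old-style CloseTxn may have an empty body, so it is recognisable only by the record's total serialized size. A small standalone sketch of the check the hunk above introduces (the constants mirror the diff; the full record layout is not spelled out here):

    #include <cstdint>
    #include <stdexcept>

    void checkEmptyTxn(bool subtxn, int64_t txn_length, int32_t type)
    {
        bool empty_txn = !subtxn && txn_length == 32; // possible only for old-style CloseTxn's
        if (empty_txn && type != -11)                 // -11 is the close-session op code
            throw std::runtime_error("Empty non close-session transaction found");
    }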
@@ -62,6 +62,8 @@ void ExternalResultDescription::init(const Block & sample_block_)
        types.emplace_back(ValueType::vtString, is_nullable);
    else if (which.isDate())
        types.emplace_back(ValueType::vtDate, is_nullable);
    else if (which.isDate32())
        types.emplace_back(ValueType::vtDate32, is_nullable);
    else if (which.isDateTime())
        types.emplace_back(ValueType::vtDateTime, is_nullable);
    else if (which.isUUID())
@@ -26,6 +26,7 @@ struct ExternalResultDescription
        vtEnum16,
        vtString,
        vtDate,
        vtDate32,
        vtDateTime,
        vtUUID,
        vtDateTime64,
@@ -89,6 +89,9 @@ void insertPostgreSQLValue(
        case ExternalResultDescription::ValueType::vtDate:
            assert_cast<ColumnUInt16 &>(column).insertValue(UInt16{LocalDate{std::string(value)}.getDayNum()});
            break;
        case ExternalResultDescription::ValueType::vtDate32:
            assert_cast<ColumnInt32 &>(column).insertValue(Int32{LocalDate{std::string(value)}.getExtenedDayNum()});
            break;
        case ExternalResultDescription::ValueType::vtDateTime:
        {
            ReadBufferFromString in(value);
@@ -108,7 +108,7 @@ class IColumn;
    M(Bool, compile_expressions, true, "Compile some scalar functions and operators to native code.", 0) \
    M(UInt64, min_count_to_compile_expression, 3, "The number of identical expressions before they are JIT-compiled", 0) \
    M(Bool, compile_aggregate_expressions, true, "Compile aggregate functions to native code.", 0) \
    M(UInt64, min_count_to_compile_aggregate_expression, 3, "The number of identical aggregate expressions before they are JIT-compiled", 0) \
    M(UInt64, min_count_to_compile_aggregate_expression, 0, "The number of identical aggregate expressions before they are JIT-compiled", 0) \
    M(UInt64, group_by_two_level_threshold, 100000, "From what number of keys, a two-level aggregation starts. 0 - the threshold is not set.", 0) \
    M(UInt64, group_by_two_level_threshold_bytes, 50000000, "From what size of the aggregation state in bytes, a two-level aggregation begins to be used. 0 - the threshold is not set. Two-level aggregation is used when at least one of the thresholds is triggered.", 0) \
    M(Bool, distributed_aggregation_memory_efficient, true, "Is the memory-saving mode of distributed aggregation enabled.", 0) \
@@ -39,6 +39,7 @@ enum class TypeIndex
    Float32,
    Float64,
    Date,
    Date32,
    DateTime,
    DateTime64,
    String,
@@ -257,6 +258,7 @@ inline constexpr const char * getTypeName(TypeIndex idx)
        case TypeIndex::Float32: return "Float32";
        case TypeIndex::Float64: return "Float64";
        case TypeIndex::Date: return "Date";
        case TypeIndex::Date32: return "Date32";
        case TypeIndex::DateTime: return "DateTime";
        case TypeIndex::DateTime64: return "DateTime64";
        case TypeIndex::String: return "String";

@@ -192,6 +192,7 @@ bool callOnIndexAndDataType(TypeIndex number, F && f, ExtraArgs && ... args)
        case TypeIndex::Decimal256: return f(TypePair<DataTypeDecimal<Decimal256>, T>(), std::forward<ExtraArgs>(args)...);

        case TypeIndex::Date: return f(TypePair<DataTypeDate, T>(), std::forward<ExtraArgs>(args)...);
        case TypeIndex::Date32: return f(TypePair<DataTypeDate, T>(), std::forward<ExtraArgs>(args)...);
        case TypeIndex::DateTime: return f(TypePair<DataTypeDateTime, T>(), std::forward<ExtraArgs>(args)...);
        case TypeIndex::DateTime64: return f(TypePair<DataTypeDateTime64, T>(), std::forward<ExtraArgs>(args)...);

@@ -13,5 +13,6 @@
#cmakedefine01 USE_LDAP
#cmakedefine01 USE_ROCKSDB
#cmakedefine01 USE_LIBPQXX
#cmakedefine01 USE_SQLITE
#cmakedefine01 USE_NURAFT
#cmakedefine01 USE_KRB5

src/DataStreams/SQLiteBlockInputStream.cpp (new file, 163 lines)
@@ -0,0 +1,163 @@
#include "SQLiteBlockInputStream.h"
|
||||
|
||||
#if USE_SQLITE
|
||||
#include <common/range.h>
|
||||
#include <common/logger_useful.h>
|
||||
#include <Common/assert_cast.h>
|
||||
|
||||
#include <Columns/ColumnArray.h>
|
||||
#include <Columns/ColumnDecimal.h>
|
||||
#include <Columns/ColumnNullable.h>
|
||||
#include <Columns/ColumnString.h>
|
||||
#include <Columns/ColumnsNumber.h>
|
||||
|
||||
#include <DataTypes/DataTypeNullable.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int SQLITE_ENGINE_ERROR;
|
||||
}
|
||||
|
||||
SQLiteBlockInputStream::SQLiteBlockInputStream(
|
||||
SQLitePtr sqlite_db_,
|
||||
const String & query_str_,
|
||||
const Block & sample_block,
|
||||
const UInt64 max_block_size_)
|
||||
: query_str(query_str_)
|
||||
, max_block_size(max_block_size_)
|
||||
, sqlite_db(std::move(sqlite_db_))
|
||||
{
|
||||
description.init(sample_block);
|
||||
}
|
||||
|
||||
|
||||
void SQLiteBlockInputStream::readPrefix()
|
||||
{
|
||||
sqlite3_stmt * compiled_stmt = nullptr;
|
||||
int status = sqlite3_prepare_v2(sqlite_db.get(), query_str.c_str(), query_str.size() + 1, &compiled_stmt, nullptr);
|
||||
|
||||
if (status != SQLITE_OK)
|
||||
throw Exception(ErrorCodes::SQLITE_ENGINE_ERROR,
|
||||
"Cannot prepate sqlite statement. Status: {}. Message: {}",
|
||||
status, sqlite3_errstr(status));
|
||||
|
||||
compiled_statement = std::unique_ptr<sqlite3_stmt, StatementDeleter>(compiled_stmt, StatementDeleter());
|
||||
}
|
||||
|
||||
|
||||
Block SQLiteBlockInputStream::readImpl()
|
||||
{
|
||||
if (!compiled_statement)
|
||||
return Block();
|
||||
|
||||
MutableColumns columns = description.sample_block.cloneEmptyColumns();
|
||||
size_t num_rows = 0;
|
||||
|
||||
while (true)
|
||||
{
|
||||
int status = sqlite3_step(compiled_statement.get());
|
||||
|
||||
if (status == SQLITE_BUSY)
|
||||
{
|
||||
continue;
|
||||
}
|
||||
else if (status == SQLITE_DONE)
|
||||
{
|
||||
compiled_statement.reset();
|
||||
break;
|
||||
}
|
||||
else if (status != SQLITE_ROW)
|
||||
{
|
||||
throw Exception(ErrorCodes::SQLITE_ENGINE_ERROR,
|
||||
"Expected SQLITE_ROW status, but got status {}. Error: {}, Message: {}",
|
||||
status, sqlite3_errstr(status), sqlite3_errmsg(sqlite_db.get()));
|
||||
}
|
||||
|
||||
int column_count = sqlite3_column_count(compiled_statement.get());
|
||||
for (const auto idx : collections::range(0, column_count))
|
||||
{
|
||||
const auto & sample = description.sample_block.getByPosition(idx);
|
||||
|
||||
if (sqlite3_column_type(compiled_statement.get(), idx) == SQLITE_NULL)
|
||||
{
|
||||
insertDefaultSQLiteValue(*columns[idx], *sample.column);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (description.types[idx].second)
|
||||
{
|
||||
ColumnNullable & column_nullable = assert_cast<ColumnNullable &>(*columns[idx]);
|
||||
insertValue(column_nullable.getNestedColumn(), description.types[idx].first, idx);
|
||||
column_nullable.getNullMapData().emplace_back(0);
|
||||
}
|
||||
else
|
||||
{
|
||||
insertValue(*columns[idx], description.types[idx].first, idx);
|
||||
}
|
||||
}
|
||||
|
||||
if (++num_rows == max_block_size)
|
||||
break;
|
||||
}
|
||||
|
||||
return description.sample_block.cloneWithColumns(std::move(columns));
|
||||
}
|
||||
|
||||
|
||||
void SQLiteBlockInputStream::readSuffix()
|
||||
{
|
||||
if (compiled_statement)
|
||||
compiled_statement.reset();
|
||||
}
|
||||
|
||||
|
||||
void SQLiteBlockInputStream::insertValue(IColumn & column, const ExternalResultDescription::ValueType type, size_t idx)
|
||||
{
|
||||
switch (type)
|
||||
{
|
||||
case ValueType::vtUInt8:
|
||||
assert_cast<ColumnUInt8 &>(column).insertValue(sqlite3_column_int(compiled_statement.get(), idx));
|
||||
break;
|
||||
case ValueType::vtUInt16:
|
||||
assert_cast<ColumnUInt16 &>(column).insertValue(sqlite3_column_int(compiled_statement.get(), idx));
|
||||
break;
|
||||
case ValueType::vtUInt32:
|
||||
assert_cast<ColumnUInt32 &>(column).insertValue(sqlite3_column_int64(compiled_statement.get(), idx));
|
||||
break;
|
||||
case ValueType::vtUInt64:
|
||||
/// There is no uint64 in sqlite3, only int and int64
|
||||
assert_cast<ColumnUInt64 &>(column).insertValue(sqlite3_column_int64(compiled_statement.get(), idx));
|
||||
break;
|
||||
case ValueType::vtInt8:
|
||||
assert_cast<ColumnInt8 &>(column).insertValue(sqlite3_column_int(compiled_statement.get(), idx));
|
||||
break;
|
||||
case ValueType::vtInt16:
|
||||
assert_cast<ColumnInt16 &>(column).insertValue(sqlite3_column_int(compiled_statement.get(), idx));
|
||||
break;
|
||||
case ValueType::vtInt32:
|
||||
assert_cast<ColumnInt32 &>(column).insertValue(sqlite3_column_int(compiled_statement.get(), idx));
|
||||
break;
|
||||
case ValueType::vtInt64:
|
||||
assert_cast<ColumnInt64 &>(column).insertValue(sqlite3_column_int64(compiled_statement.get(), idx));
|
||||
break;
|
||||
case ValueType::vtFloat32:
|
||||
assert_cast<ColumnFloat32 &>(column).insertValue(sqlite3_column_double(compiled_statement.get(), idx));
|
||||
break;
|
||||
case ValueType::vtFloat64:
|
||||
assert_cast<ColumnFloat64 &>(column).insertValue(sqlite3_column_double(compiled_statement.get(), idx));
|
||||
break;
|
||||
default:
|
||||
const char * data = reinterpret_cast<const char *>(sqlite3_column_text(compiled_statement.get(), idx));
|
||||
int len = sqlite3_column_bytes(compiled_statement.get(), idx);
|
||||
assert_cast<ColumnString &>(column).insertData(data, len);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
#endif
|
62
src/DataStreams/SQLiteBlockInputStream.h
Normal file
62
src/DataStreams/SQLiteBlockInputStream.h
Normal file
@ -0,0 +1,62 @@
|
||||
#pragma once

#if !defined(ARCADIA_BUILD)
#include "config_core.h"
#endif

#if USE_SQLITE
#include <Core/ExternalResultDescription.h>
#include <DataStreams/IBlockInputStream.h>

#include <sqlite3.h>


namespace DB
{
class SQLiteBlockInputStream : public IBlockInputStream
{
    using SQLitePtr = std::shared_ptr<sqlite3>;

public:
    SQLiteBlockInputStream(SQLitePtr sqlite_db_,
                           const String & query_str_,
                           const Block & sample_block,
                           UInt64 max_block_size_);

    String getName() const override { return "SQLite"; }

    Block getHeader() const override { return description.sample_block.cloneEmpty(); }

private:
    void insertDefaultSQLiteValue(IColumn & column, const IColumn & sample_column)
    {
        column.insertFrom(sample_column, 0);
    }

    using ValueType = ExternalResultDescription::ValueType;

    struct StatementDeleter
    {
        void operator()(sqlite3_stmt * stmt) { sqlite3_finalize(stmt); }
    };

    void readPrefix() override;

    Block readImpl() override;

    void readSuffix() override;

    void insertValue(IColumn & column, const ExternalResultDescription::ValueType type, size_t idx);

    String query_str;
    UInt64 max_block_size;

    ExternalResultDescription description;

    SQLitePtr sqlite_db;
    std::unique_ptr<sqlite3_stmt, StatementDeleter> compiled_statement;
};

}

#endif
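For reference, a hedged usage sketch of the sqlite3 C API sequence the new stream drives (the file, table, and column names below are invented):

    #include <sqlite3.h>
    #include <string>

    int main()
    {
        sqlite3 * db = nullptr;
        if (sqlite3_open("test.db", &db) != SQLITE_OK)
            return 1;

        sqlite3_stmt * stmt = nullptr;
        std::string query = "SELECT id, name FROM example";
        if (sqlite3_prepare_v2(db, query.c_str(), static_cast<int>(query.size() + 1), &stmt, nullptr) != SQLITE_OK)
            return 1; // readPrefix() turns this status into SQLITE_ENGINE_ERROR

        while (sqlite3_step(stmt) == SQLITE_ROW) // readImpl() loops like this, up to max_block_size rows per block
        {
            sqlite3_column_int64(stmt, 0);       // integral columns
            sqlite3_column_text(stmt, 1);        // everything else is read as text
        }

        sqlite3_finalize(stmt);                  // what StatementDeleter does
        sqlite3_close(db);
        return 0;
    }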
@@ -41,6 +41,7 @@ SRCS(
    RemoteBlockOutputStream.cpp
    RemoteQueryExecutor.cpp
    RemoteQueryExecutorReadContext.cpp
    SQLiteBlockInputStream.cpp
    SizeLimits.cpp
    SquashingBlockInputStream.cpp
    SquashingBlockOutputStream.cpp

src/DataTypes/DataTypeDate32.cpp (new file, 23 lines)
@@ -0,0 +1,23 @@
#include <DataTypes/DataTypeDate32.h>
|
||||
#include <DataTypes/DataTypeFactory.h>
|
||||
#include <DataTypes/Serializations/SerializationDate32.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
bool DataTypeDate32::equals(const IDataType & rhs) const
|
||||
{
|
||||
return typeid(rhs) == typeid(*this);
|
||||
}
|
||||
|
||||
SerializationPtr DataTypeDate32::doGetDefaultSerialization() const
|
||||
{
|
||||
return std::make_shared<SerializationDate32>();
|
||||
}
|
||||
|
||||
void registerDataTypeDate32(DataTypeFactory & factory)
|
||||
{
|
||||
factory.registerSimpleDataType(
|
||||
"Date32", [] { return DataTypePtr(std::make_shared<DataTypeDate32>()); }, DataTypeFactory::CaseInsensitive);
|
||||
}
|
||||
|
||||
}
|
23
src/DataTypes/DataTypeDate32.h
Normal file
23
src/DataTypes/DataTypeDate32.h
Normal file
@ -0,0 +1,23 @@
|
||||
#pragma once

#include <DataTypes/DataTypeNumberBase.h>

namespace DB
{
class DataTypeDate32 final : public DataTypeNumberBase<Int32>
{
public:
    static constexpr auto family_name = "Date32";

    TypeIndex getTypeId() const override { return TypeIndex::Date32; }
    const char * getFamilyName() const override { return family_name; }

    bool canBeUsedAsVersion() const override { return true; }
    bool canBeInsideNullable() const override { return true; }

    bool equals(const IDataType & rhs) const override;

protected:
    SerializationPtr doGetDefaultSerialization() const override;
};
}
@@ -194,6 +194,7 @@ DataTypeFactory::DataTypeFactory()
    registerDataTypeNumbers(*this);
    registerDataTypeDecimal(*this);
    registerDataTypeDate(*this);
    registerDataTypeDate32(*this);
    registerDataTypeDateTime(*this);
    registerDataTypeString(*this);
    registerDataTypeFixedString(*this);
@@ -69,6 +69,7 @@ private:
void registerDataTypeNumbers(DataTypeFactory & factory);
void registerDataTypeDecimal(DataTypeFactory & factory);
void registerDataTypeDate(DataTypeFactory & factory);
void registerDataTypeDate32(DataTypeFactory & factory);
void registerDataTypeDateTime(DataTypeFactory & factory);
void registerDataTypeString(DataTypeFactory & factory);
void registerDataTypeFixedString(DataTypeFactory & factory);
@@ -78,6 +78,8 @@ MutableColumnUniquePtr DataTypeLowCardinality::createColumnUniqueImpl(const IDat
        return creator(static_cast<ColumnFixedString *>(nullptr));
    else if (which.isDate())
        return creator(static_cast<ColumnVector<UInt16> *>(nullptr));
    else if (which.isDate32())
        return creator(static_cast<ColumnVector<Int32> *>(nullptr));
    else if (which.isDateTime())
        return creator(static_cast<ColumnVector<UInt32> *>(nullptr));
    else if (which.isUUID())
@@ -1,11 +1,13 @@
#include <Columns/ColumnArray.h>
#include <Columns/ColumnConst.h>
#include <Columns/ColumnTuple.h>
#include <Columns/ColumnMap.h>
#include <Columns/ColumnLowCardinality.h>

#include <DataTypes/DataTypeLowCardinality.h>
#include <DataTypes/DataTypeArray.h>
#include <DataTypes/DataTypeTuple.h>
#include <DataTypes/DataTypeMap.h>

#include <Common/assert_cast.h>

@@ -39,6 +41,11 @@ DataTypePtr recursiveRemoveLowCardinality(const DataTypePtr & type)
        return std::make_shared<DataTypeTuple>(elements);
    }

    if (const auto * map_type = typeid_cast<const DataTypeMap *>(type.get()))
    {
        return std::make_shared<DataTypeMap>(recursiveRemoveLowCardinality(map_type->getKeyType()), recursiveRemoveLowCardinality(map_type->getValueType()));
    }

    if (const auto * low_cardinality_type = typeid_cast<const DataTypeLowCardinality *>(type.get()))
        return low_cardinality_type->getDictionaryType();

@@ -78,6 +85,16 @@ ColumnPtr recursiveRemoveLowCardinality(const ColumnPtr & column)
        return ColumnTuple::create(columns);
    }

    if (const auto * column_map = typeid_cast<const ColumnMap *>(column.get()))
    {
        const auto & nested = column_map->getNestedColumnPtr();
        auto nested_no_lc = recursiveRemoveLowCardinality(nested);
        if (nested.get() == nested_no_lc.get())
            return column;

        return ColumnMap::create(nested_no_lc);
    }

    if (const auto * column_low_cardinality = typeid_cast<const ColumnLowCardinality *>(column.get()))
        return column_low_cardinality->convertToFullColumn();

@@ -7,6 +7,7 @@
#include <DataTypes/DataTypeMap.h>
#include <DataTypes/DataTypeArray.h>
#include <DataTypes/DataTypeTuple.h>
#include <DataTypes/DataTypeLowCardinality.h>
#include <DataTypes/DataTypeFactory.h>
#include <DataTypes/Serializations/SerializationMap.h>
#include <Parsers/IAST.h>
@@ -53,12 +54,24 @@ DataTypeMap::DataTypeMap(const DataTypePtr & key_type_, const DataTypePtr & valu

void DataTypeMap::assertKeyType() const
{
    if (!key_type->isValueRepresentedByInteger()
    bool type_error = false;
    if (key_type->getTypeId() == TypeIndex::LowCardinality)
    {
        const auto & low_cardinality_data_type = assert_cast<const DataTypeLowCardinality &>(*key_type);
        if (!isStringOrFixedString(*(low_cardinality_data_type.getDictionaryType())))
            type_error = true;
    }
    else if (!key_type->isValueRepresentedByInteger()
        && !isStringOrFixedString(*key_type)
        && !WhichDataType(key_type).isNothing()
        && !WhichDataType(key_type).isUUID())
    {
        type_error = true;
    }

    if (type_error)
        throw Exception(ErrorCodes::BAD_ARGUMENTS,
            "Type of Map key must be a type, that can be represented by integer or string or UUID,"
            "Type of Map key must be a type, that can be represented by integer or String or FixedString (possibly LowCardinality) or UUID,"
            " but {} given", key_type->getName());
}

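The practical effect of this hunk: a Map key may now also be LowCardinality(String) or LowCardinality(FixedString(N)), provided the dictionary type is a string; for example a column of type Map(LowCardinality(String), UInt64) is accepted, while a key type such as Float64 is still rejected with BAD_ARGUMENTS.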
@@ -322,8 +322,10 @@ struct WhichDataType
    constexpr bool isEnum() const { return isEnum8() || isEnum16(); }

    constexpr bool isDate() const { return idx == TypeIndex::Date; }
    constexpr bool isDate32() const { return idx == TypeIndex::Date32; }
    constexpr bool isDateTime() const { return idx == TypeIndex::DateTime; }
    constexpr bool isDateTime64() const { return idx == TypeIndex::DateTime64; }
    constexpr bool isDateOrDate32() const { return isDate() || isDate32(); }

    constexpr bool isString() const { return idx == TypeIndex::String; }
    constexpr bool isFixedString() const { return idx == TypeIndex::FixedString; }
@@ -347,6 +349,10 @@ struct WhichDataType
template <typename T>
inline bool isDate(const T & data_type) { return WhichDataType(data_type).isDate(); }
template <typename T>
inline bool isDate32(const T & data_type) { return WhichDataType(data_type).isDate32(); }
template <typename T>
inline bool isDateOrDate32(const T & data_type) { return WhichDataType(data_type).isDateOrDate32(); }
template <typename T>
inline bool isDateTime(const T & data_type) { return WhichDataType(data_type).isDateTime(); }
template <typename T>
inline bool isDateTime64(const T & data_type) { return WhichDataType(data_type).isDateTime64(); }

@@ -29,7 +29,7 @@ namespace ErrorCodes
static inline bool typeIsSigned(const IDataType & type)
{
    WhichDataType data_type(type);
    return data_type.isNativeInt() || data_type.isFloat();
    return data_type.isNativeInt() || data_type.isFloat() || data_type.isEnum();
}

static inline llvm::Type * toNativeType(llvm::IRBuilderBase & builder, const IDataType & type)
@@ -57,6 +57,10 @@ static inline llvm::Type * toNativeType(llvm::IRBuilderBase & builder, const IDa
        return builder.getFloatTy();
    else if (data_type.isFloat64())
        return builder.getDoubleTy();
    else if (data_type.isEnum8())
        return builder.getInt8Ty();
    else if (data_type.isEnum16())
        return builder.getInt16Ty();

    return nullptr;
}
@@ -109,7 +113,7 @@ static inline bool canBeNativeType(const IDataType & type)
        return canBeNativeType(*data_type_nullable.getNestedType());
    }

    return data_type.isNativeInt() || data_type.isNativeUInt() || data_type.isFloat() || data_type.isDate();
    return data_type.isNativeInt() || data_type.isNativeUInt() || data_type.isFloat() || data_type.isDate() || data_type.isEnum();
}

static inline llvm::Type * toNativeType(llvm::IRBuilderBase & builder, const DataTypePtr & type)
@@ -266,7 +270,7 @@ static inline llvm::Constant * getColumnNativeValue(llvm::IRBuilderBase & builde
    {
        return llvm::ConstantInt::get(type, column.getUInt(index));
    }
    else if (column_data_type.isNativeInt())
    else if (column_data_type.isNativeInt() || column_data_type.isEnum())
    {
        return llvm::ConstantInt::get(type, column.getInt(index));
    }
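Taken together, these hunks teach the expression JIT that Enum8/Enum16 are just signed 8- and 16-bit integers at the machine level, for signedness, native-type mapping, and constant extraction alike. A minimal sketch of the width mapping, independent of LLVM (illustrative type tags, not ClickHouse's TypeIndex):

#include <optional>

enum class TypeTag { Int8, Int16, Enum8, Enum16, Float64 };

// Mirrors the idea of toNativeType: Enum8/Enum16 lower to the same machine
// integers as Int8/Int16, so the JIT can treat them uniformly.
std::optional<unsigned> nativeBitWidth(TypeTag tag)
{
    switch (tag)
    {
        case TypeTag::Int8:
        case TypeTag::Enum8:   return 8;
        case TypeTag::Int16:
        case TypeTag::Enum16:  return 16;
        case TypeTag::Float64: return 64;
    }
    return std::nullopt; // not representable as a native type
}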
78 src/DataTypes/Serializations/SerializationDate32.cpp Normal file
@@ -0,0 +1,78 @@
#include <DataTypes/Serializations/SerializationDate32.h>
#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>

#include <Columns/ColumnsNumber.h>

#include <Common/assert_cast.h>

namespace DB
{
void SerializationDate32::serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const
{
    writeDateText(ExtendedDayNum(assert_cast<const ColumnInt32 &>(column).getData()[row_num]), ostr);
}

void SerializationDate32::deserializeWholeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const
{
    deserializeTextEscaped(column, istr, settings);
}

void SerializationDate32::deserializeTextEscaped(IColumn & column, ReadBuffer & istr, const FormatSettings &) const
{
    ExtendedDayNum x;
    readDateText(x, istr);
    assert_cast<ColumnInt32 &>(column).getData().push_back(x);
}

void SerializationDate32::serializeTextEscaped(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
    serializeText(column, row_num, ostr, settings);
}

void SerializationDate32::serializeTextQuoted(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
    writeChar('\'', ostr);
    serializeText(column, row_num, ostr, settings);
    writeChar('\'', ostr);
}

void SerializationDate32::deserializeTextQuoted(IColumn & column, ReadBuffer & istr, const FormatSettings &) const
{
    ExtendedDayNum x;
    assertChar('\'', istr);
    readDateText(x, istr);
    assertChar('\'', istr);
    assert_cast<ColumnInt32 &>(column).getData().push_back(x); /// It's important to do this at the end - for exception safety.
}

void SerializationDate32::serializeTextJSON(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
    writeChar('"', ostr);
    serializeText(column, row_num, ostr, settings);
    writeChar('"', ostr);
}

void SerializationDate32::deserializeTextJSON(IColumn & column, ReadBuffer & istr, const FormatSettings &) const
{
    ExtendedDayNum x;
    assertChar('"', istr);
    readDateText(x, istr);
    assertChar('"', istr);
    assert_cast<ColumnInt32 &>(column).getData().push_back(x);
}

void SerializationDate32::serializeTextCSV(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
    writeChar('"', ostr);
    serializeText(column, row_num, ostr, settings);
    writeChar('"', ostr);
}

void SerializationDate32::deserializeTextCSV(IColumn & column, ReadBuffer & istr, const FormatSettings &) const
{
    LocalDate value;
    readCSV(value, istr);
    assert_cast<ColumnInt32 &>(column).getData().push_back(value.getExtenedDayNum());
}
}
21 src/DataTypes/Serializations/SerializationDate32.h Normal file
@@ -0,0 +1,21 @@
#pragma once

#include <DataTypes/Serializations/SerializationNumber.h>

namespace DB
{
class SerializationDate32 final : public SerializationNumber<Int32>
{
public:
    void serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const override;
    void deserializeWholeText(IColumn & column, ReadBuffer & istr, const FormatSettings &) const override;
    void serializeTextEscaped(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const override;
    void deserializeTextEscaped(IColumn & column, ReadBuffer & istr, const FormatSettings &) const override;
    void serializeTextQuoted(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const override;
    void deserializeTextQuoted(IColumn & column, ReadBuffer & istr, const FormatSettings &) const override;
    void serializeTextJSON(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const override;
    void deserializeTextJSON(IColumn & column, ReadBuffer & istr, const FormatSettings &) const override;
    void serializeTextCSV(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings &) const override;
    void deserializeTextCSV(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const override;
};
}
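The overrides above differ only in framing: the escaped form is the bare YYYY-MM-DD text, the quoted form wraps it in single quotes, and the JSON and CSV forms wrap it in double quotes. A trivial sketch of those three framings (illustration only, not ClickHouse code):

#include <iostream>
#include <string>

int main()
{
    std::string d = "2100-06-01";          // bare form, as serializeText writes it
    std::cout << d << '\n';                // Escaped / whole-text form
    std::cout << '\'' << d << "'\n";       // Quoted form: '2100-06-01'
    std::cout << '"' << d << "\"\n";       // JSON and CSV form: "2100-06-01"
}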
@@ -80,8 +80,13 @@ void SerializationMap::deserializeBinary(IColumn & column, ReadBuffer & istr) co
}


template <typename Writer>
void SerializationMap::serializeTextImpl(const IColumn & column, size_t row_num, bool quote_key, WriteBuffer & ostr, Writer && writer) const
template <typename KeyWriter, typename ValueWriter>
void SerializationMap::serializeTextImpl(
    const IColumn & column,
    size_t row_num,
    WriteBuffer & ostr,
    KeyWriter && key_writer,
    ValueWriter && value_writer) const
{
    const auto & column_map = assert_cast<const ColumnMap &>(column);

@@ -98,17 +103,9 @@ void SerializationMap::serializeTextImpl(const IColumn & column, size_t row_num,
        if (i != offset)
            writeChar(',', ostr);

        if (quote_key)
        {
            writeChar('"', ostr);
            writer(key, nested_tuple.getColumn(0), i);
            writeChar('"', ostr);
        }
        else
            writer(key, nested_tuple.getColumn(0), i);

        key_writer(ostr, key, nested_tuple.getColumn(0), i);
        writeChar(':', ostr);
        writer(value, nested_tuple.getColumn(1), i);
        value_writer(ostr, value, nested_tuple.getColumn(1), i);
    }
    writeChar('}', ostr);
}
@@ -148,13 +145,13 @@ void SerializationMap::deserializeTextImpl(IColumn & column, ReadBuffer & istr,
        if (*istr.position() == '}')
            break;

        reader(key, key_column);
        reader(istr, key, key_column);
        skipWhitespaceIfAny(istr);
        assertChar(':', istr);

        ++size;
        skipWhitespaceIfAny(istr);
        reader(value, value_column);
        reader(istr, value, value_column);

        skipWhitespaceIfAny(istr);
    }
@@ -170,41 +167,45 @@ void SerializationMap::deserializeTextImpl(IColumn & column, ReadBuffer & istr,

void SerializationMap::serializeText(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
    serializeTextImpl(column, row_num, /*quote_key=*/ false, ostr,
        [&](const SerializationPtr & subcolumn_serialization, const IColumn & subcolumn, size_t pos)
        {
            subcolumn_serialization->serializeTextQuoted(subcolumn, pos, ostr, settings);
        });
    auto writer = [&settings](WriteBuffer & buf, const SerializationPtr & subcolumn_serialization, const IColumn & subcolumn, size_t pos)
    {
        subcolumn_serialization->serializeTextQuoted(subcolumn, pos, buf, settings);
    };

    serializeTextImpl(column, row_num, ostr, writer, writer);
}

void SerializationMap::deserializeText(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const
{
    deserializeTextImpl(column, istr,
        [&](const SerializationPtr & subcolumn_serialization, IColumn & subcolumn)
        [&settings](ReadBuffer & buf, const SerializationPtr & subcolumn_serialization, IColumn & subcolumn)
        {
            subcolumn_serialization->deserializeTextQuoted(subcolumn, istr, settings);
            subcolumn_serialization->deserializeTextQuoted(subcolumn, buf, settings);
        });
}

void SerializationMap::serializeTextJSON(const IColumn & column, size_t row_num, WriteBuffer & ostr, const FormatSettings & settings) const
{
    /// We need to double-quote integer keys to produce valid JSON.
    const auto & column_key = assert_cast<const ColumnMap &>(column).getNestedData().getColumn(0);
    bool quote_key = !WhichDataType(column_key.getDataType()).isStringOrFixedString();

    serializeTextImpl(column, row_num, quote_key, ostr,
        [&](const SerializationPtr & subcolumn_serialization, const IColumn & subcolumn, size_t pos)
    serializeTextImpl(column, row_num, ostr,
        [&settings](WriteBuffer & buf, const SerializationPtr & subcolumn_serialization, const IColumn & subcolumn, size_t pos)
        {
            subcolumn_serialization->serializeTextJSON(subcolumn, pos, ostr, settings);
            /// We need to double-quote all keys (including integers) to produce valid JSON.
            WriteBufferFromOwnString str_buf;
            subcolumn_serialization->serializeText(subcolumn, pos, str_buf, settings);
            writeJSONString(str_buf.str(), buf, settings);
        },
        [&settings](WriteBuffer & buf, const SerializationPtr & subcolumn_serialization, const IColumn & subcolumn, size_t pos)
        {
            subcolumn_serialization->serializeTextJSON(subcolumn, pos, buf, settings);
        });
}

void SerializationMap::deserializeTextJSON(IColumn & column, ReadBuffer & istr, const FormatSettings & settings) const
{
    deserializeTextImpl(column, istr,
        [&](const SerializationPtr & subcolumn_serialization, IColumn & subcolumn)
        [&settings](ReadBuffer & buf, const SerializationPtr & subcolumn_serialization, IColumn & subcolumn)
        {
            subcolumn_serialization->deserializeTextJSON(subcolumn, istr, settings);
            subcolumn_serialization->deserializeTextJSON(subcolumn, buf, settings);
        });
}

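The net effect of this refactoring: serializeTextImpl no longer takes a quote_key flag but two independent callables, and the JSON path renders any key through plain serializeText and then JSON-quotes the resulting string, so non-string keys (e.g. integers) still yield valid JSON object keys. A simplified sketch of the two-writer shape with std::function (not the ClickHouse signatures):

#include <functional>
#include <iostream>
#include <map>
#include <string>

using Writer = std::function<void(std::ostream &, const std::string &)>;

// Same shape as serializeTextImpl after the refactoring: one callable per role,
// so keys and values can be framed differently (e.g. keys always JSON-quoted).
void serializeMap(const std::map<std::string, std::string> & m, std::ostream & out,
                  const Writer & key_writer, const Writer & value_writer)
{
    out << '{';
    bool first = true;
    for (const auto & [key, value] : m)
    {
        if (!first)
            out << ',';
        first = false;
        key_writer(out, key);
        out << ':';
        value_writer(out, value);
    }
    out << '}';
}

int main()
{
    std::map<std::string, std::string> m{{"a", "1"}, {"b", "2"}};
    auto quoted = [](std::ostream & out, const std::string & s) { out << '"' << s << '"'; };
    auto raw = [](std::ostream & out, const std::string & s) { out << s; };
    serializeMap(m, std::cout, quoted, raw); // prints {"a":1,"b":2}
}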
@@ -60,8 +60,8 @@ public:
        SubstreamsCache * cache) const override;

private:
    template <typename Writer>
    void serializeTextImpl(const IColumn & column, size_t row_num, bool quote_key, WriteBuffer & ostr, Writer && writer) const;
    template <typename KeyWriter, typename ValueWriter>
    void serializeTextImpl(const IColumn & column, size_t row_num, WriteBuffer & ostr, KeyWriter && key_writer, ValueWriter && value_writer) const;

    template <typename Reader>
    void deserializeTextImpl(IColumn & column, ReadBuffer & istr, Reader && reader) const;
@@ -16,6 +16,7 @@ SRCS(
    DataTypeCustomIPv4AndIPv6.cpp
    DataTypeCustomSimpleAggregateFunction.cpp
    DataTypeDate.cpp
    DataTypeDate32.cpp
    DataTypeDateTime.cpp
    DataTypeDateTime64.cpp
    DataTypeDecimalBase.cpp
@@ -45,6 +46,7 @@ SRCS(
    Serializations/SerializationArray.cpp
    Serializations/SerializationCustomSimpleText.cpp
    Serializations/SerializationDate.cpp
    Serializations/SerializationDate32.cpp
    Serializations/SerializationDateTime.cpp
    Serializations/SerializationDateTime64.cpp
    Serializations/SerializationDecimal.cpp
@@ -1,17 +1,17 @@
#include <Databases/DatabaseFactory.h>

#include <Databases/DatabaseAtomic.h>
#include <Databases/DatabaseReplicated.h>
#include <Databases/DatabaseDictionary.h>
#include <Databases/DatabaseLazy.h>
#include <Databases/DatabaseMemory.h>
#include <Databases/DatabaseOrdinary.h>
#include <Databases/DatabaseReplicated.h>
#include <Interpreters/Context.h>
#include <Parsers/ASTCreateQuery.h>
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/ASTLiteral.h>
#include <Parsers/formatAST.h>
#include <Interpreters/Context.h>
#include <Common/Macros.h>
#include <filesystem>

@@ -40,6 +40,10 @@
#include <Storages/PostgreSQL/MaterializedPostgreSQLSettings.h>
#endif

#if USE_SQLITE
#include <Databases/SQLite/DatabaseSQLite.h>
#endif

namespace fs = std::filesystem;

namespace DB
@@ -100,7 +104,7 @@ DatabasePtr DatabaseFactory::getImpl(const ASTCreateQuery & create, const String
    const UUID & uuid = create.uuid;

    bool engine_may_have_arguments = engine_name == "MySQL" || engine_name == "MaterializeMySQL" || engine_name == "Lazy" ||
        engine_name == "Replicated" || engine_name == "PostgreSQL" || engine_name == "MaterializedPostgreSQL";
        engine_name == "Replicated" || engine_name == "PostgreSQL" || engine_name == "MaterializedPostgreSQL" || engine_name == "SQLite";
    if (engine_define->engine->arguments && !engine_may_have_arguments)
        throw Exception("Database engine " + engine_name + " cannot have arguments", ErrorCodes::BAD_ARGUMENTS);

@@ -299,6 +303,22 @@ DatabasePtr DatabaseFactory::getImpl(const ASTCreateQuery & create, const String
    }


#endif

#if USE_SQLITE
    else if (engine_name == "SQLite")
    {
        const ASTFunction * engine = engine_define->engine;

        if (!engine->arguments || engine->arguments->children.size() != 1)
            throw Exception("SQLite database requires 1 argument: database path", ErrorCodes::BAD_ARGUMENTS);

        const auto & arguments = engine->arguments->children;

        String database_path = safeGetLiteralValue<String>(arguments[0], "SQLite");

        return std::make_shared<DatabaseSQLite>(context, engine_define, database_path);
    }
#endif

    throw Exception("Unknown database engine: " + engine_name, ErrorCodes::UNKNOWN_DATABASE_ENGINE);
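With this branch registered, the engine becomes reachable from DDL; per the ClickHouse documentation the syntax is CREATE DATABASE sqlite_db ENGINE = SQLite('path/to/file.db'), with exactly one argument, the path to the SQLite database file.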
224 src/Databases/SQLite/DatabaseSQLite.cpp Normal file
@@ -0,0 +1,224 @@
#include "DatabaseSQLite.h"

#if USE_SQLITE

#include <common/logger_useful.h>
#include <DataTypes/DataTypesNumber.h>
#include <DataTypes/DataTypeNullable.h>
#include <Databases/SQLite/fetchSQLiteTableStructure.h>
#include <Parsers/ASTCreateQuery.h>
#include <Parsers/ASTColumnDeclaration.h>
#include <Interpreters/Context.h>
#include <Storages/StorageSQLite.h>


namespace DB
{

namespace ErrorCodes
{
    extern const int SQLITE_ENGINE_ERROR;
    extern const int UNKNOWN_TABLE;
}

DatabaseSQLite::DatabaseSQLite(
    ContextPtr context_,
    const ASTStorage * database_engine_define_,
    const String & database_path_)
    : IDatabase("SQLite")
    , WithContext(context_->getGlobalContext())
    , database_engine_define(database_engine_define_->clone())
    , log(&Poco::Logger::get("DatabaseSQLite"))
{
    sqlite3 * tmp_sqlite_db = nullptr;
    int status = sqlite3_open(database_path_.c_str(), &tmp_sqlite_db);

    if (status != SQLITE_OK)
    {
        throw Exception(ErrorCodes::SQLITE_ENGINE_ERROR,
                        "Cannot access sqlite database. Error status: {}. Message: {}",
                        status, sqlite3_errstr(status));
    }

    sqlite_db = std::shared_ptr<sqlite3>(tmp_sqlite_db, sqlite3_close);
}


bool DatabaseSQLite::empty() const
{
    std::lock_guard<std::mutex> lock(mutex);
    return fetchTablesList().empty();
}


DatabaseTablesIteratorPtr DatabaseSQLite::getTablesIterator(ContextPtr local_context, const IDatabase::FilterByNameFunction &)
{
    std::lock_guard<std::mutex> lock(mutex);

    Tables tables;
    auto table_names = fetchTablesList();
    for (const auto & table_name : table_names)
        tables[table_name] = fetchTable(table_name, local_context, true);

    return std::make_unique<DatabaseTablesSnapshotIterator>(tables, database_name);
}


std::unordered_set<std::string> DatabaseSQLite::fetchTablesList() const
{
    std::unordered_set<String> tables;
    std::string query = "SELECT name FROM sqlite_master "
                        "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'";

    auto callback_get_data = [](void * res, int col_num, char ** data_by_col, char ** /* col_names */) -> int
    {
        for (int i = 0; i < col_num; ++i)
            static_cast<std::unordered_set<std::string> *>(res)->insert(data_by_col[i]);
        return 0;
    };

    char * err_message = nullptr;
    int status = sqlite3_exec(sqlite_db.get(), query.c_str(), callback_get_data, &tables, &err_message);
    if (status != SQLITE_OK)
    {
        String err_msg(err_message);
        sqlite3_free(err_message);
        throw Exception(ErrorCodes::SQLITE_ENGINE_ERROR,
                        "Cannot fetch sqlite database tables. Error status: {}. Message: {}",
                        status, err_msg);
    }

    return tables;
}


bool DatabaseSQLite::checkSQLiteTable(const String & table_name) const
{
    const String query = fmt::format("SELECT name FROM sqlite_master WHERE type='table' AND name='{table_name}';", table_name);

    auto callback_get_data = [](void * res, int, char **, char **) -> int
    {
        *(static_cast<int *>(res)) += 1;
        return 0;
    };

    int count = 0;
    char * err_message = nullptr;
    int status = sqlite3_exec(sqlite_db.get(), query.c_str(), callback_get_data, &count, &err_message);
    if (status != SQLITE_OK)
    {
        String err_msg(err_message);
        sqlite3_free(err_message);
        throw Exception(ErrorCodes::SQLITE_ENGINE_ERROR,
                        "Cannot check sqlite table. Error status: {}. Message: {}",
                        status, err_msg);
    }

    return (count != 0);
}


bool DatabaseSQLite::isTableExist(const String & table_name, ContextPtr) const
{
    std::lock_guard<std::mutex> lock(mutex);
    return checkSQLiteTable(table_name);
}


StoragePtr DatabaseSQLite::tryGetTable(const String & table_name, ContextPtr local_context) const
{
    std::lock_guard<std::mutex> lock(mutex);
    return fetchTable(table_name, local_context, false);
}


StoragePtr DatabaseSQLite::fetchTable(const String & table_name, ContextPtr local_context, bool table_checked) const
{
    if (!table_checked && !checkSQLiteTable(table_name))
        return StoragePtr{};

    auto columns = fetchSQLiteTableStructure(sqlite_db.get(), table_name);

    if (!columns)
        return StoragePtr{};

    auto storage = StorageSQLite::create(
        StorageID(database_name, table_name),
        sqlite_db,
        table_name,
        ColumnsDescription{*columns},
        ConstraintsDescription{},
        local_context);

    return storage;
}


ASTPtr DatabaseSQLite::getCreateDatabaseQuery() const
{
    const auto & create_query = std::make_shared<ASTCreateQuery>();
    create_query->database = getDatabaseName();
    create_query->set(create_query->storage, database_engine_define);
    return create_query;
}


ASTPtr DatabaseSQLite::getCreateTableQueryImpl(const String & table_name, ContextPtr local_context, bool throw_on_error) const
{
    auto storage = fetchTable(table_name, local_context, false);
    if (!storage)
    {
        if (throw_on_error)
            throw Exception(ErrorCodes::UNKNOWN_TABLE, "SQLite table {}.{} does not exist",
                            database_name, table_name);
        return nullptr;
    }

    auto create_table_query = std::make_shared<ASTCreateQuery>();
    auto table_storage_define = database_engine_define->clone();
    create_table_query->set(create_table_query->storage, table_storage_define);

    auto columns_declare_list = std::make_shared<ASTColumns>();
    auto columns_expression_list = std::make_shared<ASTExpressionList>();

    columns_declare_list->set(columns_declare_list->columns, columns_expression_list);
    create_table_query->set(create_table_query->columns_list, columns_declare_list);

    /// init create query.
    auto table_id = storage->getStorageID();
    create_table_query->table = table_id.table_name;
    create_table_query->database = table_id.database_name;

    auto metadata_snapshot = storage->getInMemoryMetadataPtr();
    for (const auto & column_type_and_name : metadata_snapshot->getColumns().getOrdinary())
    {
        const auto & column_declaration = std::make_shared<ASTColumnDeclaration>();
        column_declaration->name = column_type_and_name.name;
        column_declaration->type = getColumnDeclaration(column_type_and_name.type);
        columns_expression_list->children.emplace_back(column_declaration);
    }

    ASTStorage * ast_storage = table_storage_define->as<ASTStorage>();
    ASTs storage_children = ast_storage->children;
    auto storage_engine_arguments = ast_storage->engine->arguments;

    /// Add table_name to engine arguments
    storage_engine_arguments->children.insert(storage_engine_arguments->children.begin() + 1, std::make_shared<ASTLiteral>(table_id.table_name));

    return create_table_query;
}


ASTPtr DatabaseSQLite::getColumnDeclaration(const DataTypePtr & data_type) const
{
    WhichDataType which(data_type);

    if (which.isNullable())
        return makeASTFunction("Nullable", getColumnDeclaration(typeid_cast<const DataTypeNullable *>(data_type.get())->getNestedType()));

    return std::make_shared<ASTIdentifier>(data_type->getName());
}

}

#endif
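fetchTablesList and checkSQLiteTable above both rely on SQLite's C API sqlite3_exec, which drives a row callback and passes through a caller-supplied void * accumulator (the fourth argument). A self-contained sketch of that idiom, using an in-memory database and a hypothetical table t:

#include <cstdio>
#include <string>
#include <vector>
#include <sqlite3.h>

// Collect one column of results through sqlite3_exec's row callback.
// The void * argument is the accumulator passed as sqlite3_exec's 4th parameter.
static int collect_names(void * res, int col_num, char ** data_by_col, char ** /*col_names*/)
{
    auto * names = static_cast<std::vector<std::string> *>(res);
    for (int i = 0; i < col_num; ++i)
        names->emplace_back(data_by_col[i] ? data_by_col[i] : "NULL");
    return 0; // a non-zero return would abort the query with SQLITE_ABORT
}

int main()
{
    sqlite3 * db = nullptr;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 1;

    char * err = nullptr;
    sqlite3_exec(db, "CREATE TABLE t(name TEXT); INSERT INTO t VALUES ('a'), ('b');", nullptr, nullptr, &err);

    std::vector<std::string> names;
    if (sqlite3_exec(db, "SELECT name FROM t;", collect_names, &names, &err) != SQLITE_OK)
    {
        std::printf("error: %s\n", err);
        sqlite3_free(err); // error messages are allocated by SQLite
    }
    for (const auto & n : names)
        std::printf("%s\n", n.c_str());

    sqlite3_close(db);
}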
65 src/Databases/SQLite/DatabaseSQLite.h Normal file
@@ -0,0 +1,65 @@
#pragma once

#if !defined(ARCADIA_BUILD)
#include "config_core.h"
#endif

#if USE_SQLITE
#include <Core/Names.h>
#include <Databases/DatabasesCommon.h>
#include <Parsers/ASTCreateQuery.h>

#include <sqlite3.h>


namespace DB
{
class DatabaseSQLite final : public IDatabase, protected WithContext
{
public:
    using SQLitePtr = std::shared_ptr<sqlite3>;

    DatabaseSQLite(ContextPtr context_, const ASTStorage * database_engine_define_, const String & database_path_);

    String getEngineName() const override { return "SQLite"; }

    bool canContainMergeTreeTables() const override { return false; }

    bool canContainDistributedTables() const override { return false; }

    bool shouldBeEmptyOnDetach() const override { return false; }

    bool isTableExist(const String & name, ContextPtr context) const override;

    StoragePtr tryGetTable(const String & name, ContextPtr context) const override;

    DatabaseTablesIteratorPtr getTablesIterator(ContextPtr context, const FilterByNameFunction & filter_by_table_name) override;

    bool empty() const override;

    ASTPtr getCreateDatabaseQuery() const override;

    void shutdown() override {}

protected:
    ASTPtr getCreateTableQueryImpl(const String & table_name, ContextPtr context, bool throw_on_error) const override;

private:
    ASTPtr database_engine_define;

    SQLitePtr sqlite_db;

    Poco::Logger * log;

    bool checkSQLiteTable(const String & table_name) const;

    NameSet fetchTablesList() const;

    StoragePtr fetchTable(const String & table_name, ContextPtr context, bool table_checked) const;

    ASTPtr getColumnDeclaration(const DataTypePtr & data_type) const;
};

}

#endif
104 src/Databases/SQLite/fetchSQLiteTableStructure.cpp Normal file
@@ -0,0 +1,104 @@
#include <Databases/SQLite/fetchSQLiteTableStructure.h>

#if USE_SQLITE

#include <Common/quoteString.h>
#include <DataTypes/DataTypeArray.h>
#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypeFactory.h>
#include <DataTypes/DataTypeNullable.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesDecimal.h>
#include <DataTypes/DataTypesNumber.h>
#include <Poco/String.h>


namespace DB
{

namespace ErrorCodes
{
    extern const int SQLITE_ENGINE_ERROR;
}

static DataTypePtr convertSQLiteDataType(String type)
{
    DataTypePtr res;
    type = Poco::toLower(type);

    if (type == "tinyint")
        res = std::make_shared<DataTypeInt8>();
    else if (type == "smallint")
        res = std::make_shared<DataTypeInt16>();
    else if (type.starts_with("int") || type == "mediumint")
        res = std::make_shared<DataTypeInt32>();
    else if (type == "bigint")
        res = std::make_shared<DataTypeInt64>();
    else if (type == "float")
        res = std::make_shared<DataTypeFloat32>();
    else if (type.starts_with("double") || type == "real")
        res = std::make_shared<DataTypeFloat64>();
    else
        res = std::make_shared<DataTypeString>(); // No decimal when fetching data through API

    return res;
}


std::shared_ptr<NamesAndTypesList> fetchSQLiteTableStructure(sqlite3 * connection, const String & sqlite_table_name)
{
    auto columns = NamesAndTypesList();
    auto query = fmt::format("pragma table_info({});", quoteString(sqlite_table_name));

    auto callback_get_data = [](void * res, int col_num, char ** data_by_col, char ** col_names) -> int
    {
        NameAndTypePair name_and_type;
        bool is_nullable = false;

        for (int i = 0; i < col_num; ++i)
        {
            if (strcmp(col_names[i], "name") == 0)
            {
                name_and_type.name = data_by_col[i];
            }
            else if (strcmp(col_names[i], "type") == 0)
            {
                name_and_type.type = convertSQLiteDataType(data_by_col[i]);
            }
            else if (strcmp(col_names[i], "notnull") == 0)
            {
                is_nullable = (data_by_col[i][0] == '0');
            }
        }

        if (is_nullable)
            name_and_type.type = std::make_shared<DataTypeNullable>(name_and_type.type);

        static_cast<NamesAndTypesList *>(res)->push_back(name_and_type);

        return 0;
    };

    char * err_message = nullptr;
    int status = sqlite3_exec(connection, query.c_str(), callback_get_data, &columns, &err_message);

    if (status != SQLITE_OK)
    {
        String err_msg(err_message);
        sqlite3_free(err_message);

        throw Exception(ErrorCodes::SQLITE_ENGINE_ERROR,
            "Failed to fetch SQLite data. Status: {}. Message: {}",
            status, err_msg);
    }

    if (columns.empty())
        return nullptr;

    return std::make_shared<NamesAndTypesList>(columns);
}

}

#endif
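For reference: PRAGMA table_info(t) returns one row per column with the fields cid, name, type, notnull, dflt_value and pk, which is why the callback above matches on the "name", "type" and "notnull" column names and wraps the converted type in Nullable when notnull is '0'.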
19 src/Databases/SQLite/fetchSQLiteTableStructure.h Normal file
@@ -0,0 +1,19 @@
#pragma once

#if !defined(ARCADIA_BUILD)
#include "config_core.h"
#endif

#if USE_SQLITE

#include <Storages/StorageSQLite.h>
#include <sqlite3.h>


namespace DB
{
std::shared_ptr<NamesAndTypesList> fetchSQLiteTableStructure(sqlite3 * connection,
    const String & sqlite_table_name);
}

#endif
@@ -27,6 +27,8 @@ SRCS(
    MySQL/MaterializeMetadata.cpp
    MySQL/MaterializeMySQLSettings.cpp
    MySQL/MaterializeMySQLSyncThread.cpp
    SQLite/DatabaseSQLite.cpp
    SQLite/fetchSQLiteTableStructure.cpp

)

@@ -224,9 +224,7 @@ void registerDictionarySourceClickHouse(DictionarySourceFactory & factory)

    ClickHouseDictionarySource::Configuration configuration
    {
        .secure = config.getBool(settings_config_prefix + ".secure", false),
        .host = host,
        .port = port,
        .user = config.getString(settings_config_prefix + ".user", "default"),
        .password = config.getString(settings_config_prefix + ".password", ""),
        .db = config.getString(settings_config_prefix + ".db", default_database),
@@ -235,7 +233,9 @@ void registerDictionarySourceClickHouse(DictionarySourceFactory & factory)
        .invalidate_query = config.getString(settings_config_prefix + ".invalidate_query", ""),
        .update_field = config.getString(settings_config_prefix + ".update_field", ""),
        .update_lag = config.getUInt64(settings_config_prefix + ".update_lag", 1),
        .is_local = isLocalAddress({host, port}, default_port)
        .port = port,
        .is_local = isLocalAddress({host, port}, default_port),
        .secure = config.getBool(settings_config_prefix + ".secure", false)
    };

    /// We should set user info even for the case when the dictionary is loaded in-process (without TCP communication).
@@ -20,9 +20,7 @@ class ClickHouseDictionarySource final : public IDictionarySource
public:
    struct Configuration
    {
        const bool secure;
        const std::string host;
        const UInt16 port;
        const std::string user;
        const std::string password;
        const std::string db;
@@ -31,7 +29,9 @@ public:
        const std::string invalidate_query;
        const std::string update_field;
        const UInt64 update_lag;
        const UInt16 port;
        const bool is_local;
        const bool secure;
    };

    ClickHouseDictionarySource(
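The member reorder in this header and the matching reorder of the initializer list in the .cpp above are two halves of one fix: C++20 designated initializers must appear in member declaration order, so moving port, is_local and secure to the tail of the struct forces the same order at the construction site. A minimal illustration of the rule (hypothetical Config struct):

struct Config
{
    const char * host;
    unsigned port;
    bool secure;
};

int main()
{
    // OK: designators follow the declaration order of Config's members.
    Config ok{.host = "localhost", .port = 9000, .secure = false};

    // Ill-formed in standard C++20: out-of-order designators.
    // (GCC/Clang reject this; reordering members or initializers fixes it.)
    // Config bad{.secure = false, .host = "localhost", .port = 9000};

    return ok.port == 9000 ? 0 : 1;
}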
@@ -26,8 +26,10 @@
namespace ProfileEvents
{
    extern const Event FileOpen;
    extern const Event WriteBufferAIOWrite;
    extern const Event WriteBufferAIOWriteBytes;
    extern const Event AIOWrite;
    extern const Event AIOWriteBytes;
    extern const Event AIORead;
    extern const Event AIOReadBytes;
}

namespace DB
@@ -531,8 +533,8 @@ public:

    auto bytes_written = eventResult(event);

    ProfileEvents::increment(ProfileEvents::WriteBufferAIOWrite);
    ProfileEvents::increment(ProfileEvents::WriteBufferAIOWriteBytes, bytes_written);
    ProfileEvents::increment(ProfileEvents::AIOWrite);
    ProfileEvents::increment(ProfileEvents::AIOWriteBytes, bytes_written);

    if (bytes_written != static_cast<decltype(bytes_written)>(block_size * buffer_size_in_blocks))
        throw Exception(ErrorCodes::AIO_WRITE_ERROR,
@@ -600,6 +602,9 @@ public:
        buffer_size_in_bytes,
        read_bytes);

    ProfileEvents::increment(ProfileEvents::AIORead);
    ProfileEvents::increment(ProfileEvents::AIOReadBytes, read_bytes);

    SSDCacheBlock block(block_size);

    for (size_t i = 0; i < blocks_length; ++i)
@@ -687,6 +692,9 @@ public:
        throw Exception(ErrorCodes::AIO_READ_ERROR,
            "GC: AIO failed to read file ({}). Expected bytes ({}). Actual bytes ({})", file_path, block_size, read_bytes);

    ProfileEvents::increment(ProfileEvents::AIORead);
    ProfileEvents::increment(ProfileEvents::AIOReadBytes, read_bytes);

    char * request_buffer = getRequestBuffer(request);

    // Unpoison the memory returned from an uninstrumented system function.
@@ -90,17 +90,17 @@ DiskCacheWrapper::readFile(
    const String & path,
    size_t buf_size,
    size_t estimated_size,
    size_t aio_threshold,
    size_t direct_io_threshold,
    size_t mmap_threshold,
    MMappedFileCache * mmap_cache) const
{
    if (!cache_file_predicate(path))
        return DiskDecorator::readFile(path, buf_size, estimated_size, aio_threshold, mmap_threshold, mmap_cache);
        return DiskDecorator::readFile(path, buf_size, estimated_size, direct_io_threshold, mmap_threshold, mmap_cache);

    LOG_DEBUG(log, "Read file {} from cache", backQuote(path));

    if (cache_disk->exists(path))
        return cache_disk->readFile(path, buf_size, estimated_size, aio_threshold, mmap_threshold, mmap_cache);
        return cache_disk->readFile(path, buf_size, estimated_size, direct_io_threshold, mmap_threshold, mmap_cache);

    auto metadata = acquireDownloadMetadata(path);

@@ -134,7 +134,7 @@ DiskCacheWrapper::readFile(

    auto tmp_path = path + ".tmp";
    {
        auto src_buffer = DiskDecorator::readFile(path, buf_size, estimated_size, aio_threshold, mmap_threshold, mmap_cache);
        auto src_buffer = DiskDecorator::readFile(path, buf_size, estimated_size, direct_io_threshold, mmap_threshold, mmap_cache);
        auto dst_buffer = cache_disk->writeFile(tmp_path, buf_size, WriteMode::Rewrite);
        copyData(*src_buffer, *dst_buffer);
    }
@@ -158,9 +158,9 @@ DiskCacheWrapper::readFile(
    }

    if (metadata->status == DOWNLOADED)
        return cache_disk->readFile(path, buf_size, estimated_size, aio_threshold, mmap_threshold, mmap_cache);
        return cache_disk->readFile(path, buf_size, estimated_size, direct_io_threshold, mmap_threshold, mmap_cache);

    return DiskDecorator::readFile(path, buf_size, estimated_size, aio_threshold, mmap_threshold, mmap_cache);
    return DiskDecorator::readFile(path, buf_size, estimated_size, direct_io_threshold, mmap_threshold, mmap_cache);
}

std::unique_ptr<WriteBufferFromFileBase>

@@ -38,7 +38,7 @@ public:
    const String & path,
    size_t buf_size,
    size_t estimated_size,
    size_t aio_threshold,
    size_t direct_io_threshold,
    size_t mmap_threshold,
    MMappedFileCache * mmap_cache) const override;

@@ -115,9 +115,9 @@ void DiskDecorator::listFiles(const String & path, std::vector<String> & file_na

std::unique_ptr<ReadBufferFromFileBase>
DiskDecorator::readFile(
    const String & path, size_t buf_size, size_t estimated_size, size_t aio_threshold, size_t mmap_threshold, MMappedFileCache * mmap_cache) const
    const String & path, size_t buf_size, size_t estimated_size, size_t direct_io_threshold, size_t mmap_threshold, MMappedFileCache * mmap_cache) const
{
    return delegate->readFile(path, buf_size, estimated_size, aio_threshold, mmap_threshold, mmap_cache);
    return delegate->readFile(path, buf_size, estimated_size, direct_io_threshold, mmap_threshold, mmap_cache);
}

std::unique_ptr<WriteBufferFromFileBase>
Some files were not shown because too many files have changed in this diff.