mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-23 16:12:01 +00:00

Merge remote-tracking branch 'origin/master' into refector-function-node

This commit is contained in:
commit 1df038e39c

.gitignore (vendored): 6 changes
@@ -158,3 +158,9 @@ website/package-lock.json

# temporary test files
tests/queries/0_stateless/test_*
tests/queries/0_stateless/*.binary
tests/queries/0_stateless/*.generated-expect

# rust
/rust/**/target
# It is autogenerated from *.in
/rust/**/.cargo/config.toml
CHANGELOG.md: 117 changes
@@ -1,4 +1,5 @@

### Table of Contents
**[ClickHouse release v22.12, 2022-12-15](#2212)**<br/>
**[ClickHouse release v22.11, 2022-11-17](#2211)**<br/>
**[ClickHouse release v22.10, 2022-10-25](#2210)**<br/>
**[ClickHouse release v22.9, 2022-09-22](#229)**<br/>

@@ -12,6 +13,122 @@
**[ClickHouse release v22.1, 2022-01-18](#221)**<br/>
**[Changelog for 2021](https://clickhouse.com/docs/en/whats-new/changelog/2021/)**<br/>

# 2022 Changelog

### <a id="2212"></a> ClickHouse release 22.12, 2022-12-15

#### Upgrade Notes
* Fixed backward incompatibility in (de)serialization of states of `min`, `max`, `any*`, `argMin`, `argMax` aggregate functions with `String` argument. The incompatibility affects the 22.9, 22.10 and 22.11 branches (fixed since 22.9.6, 22.10.4 and 22.11.2 respectively). Some minor releases of the 22.3, 22.7 and 22.8 branches are also affected: 22.3.13...22.3.14 (fixed since 22.3.15), 22.8.6...22.8.9 (fixed since 22.8.10), 22.7.6 and newer (will not be fixed in 22.7; we recommend upgrading from 22.7.* to 22.8.10 or newer). This release note does not concern users who have never used the affected versions. Incompatible versions append an extra `'\0'` to strings when reading states of the aggregate functions mentioned above. For example, if an older version saved the state of `anyState('foobar')` to `state_column`, then an incompatible version will print `'foobar\0'` on `anyMerge(state_column)`. Incompatible versions also write states of the aggregate functions without the trailing `'\0'`. Newer versions (that have the fix) can correctly read data written by all versions, including incompatible ones, except for one corner case: if an incompatible version saved a state with a string that actually ends with a null character, newer versions will trim the trailing `'\0'` when reading the state of the affected aggregate function. For example, if an incompatible version saved the state of `anyState('abrac\0dabra\0')` to `state_column`, then newer versions will print `'abrac\0dabra'` on `anyMerge(state_column)`. The issue also affects distributed queries when an incompatible version works in a cluster together with older or newer versions. [#43038](https://github.com/ClickHouse/ClickHouse/pull/43038) ([Alexander Tokmakov](https://github.com/tavplubix), [Raúl Marín](https://github.com/Algunenano)). Note: all official ClickHouse builds already include the patches. This is not necessarily true for unofficial third-party builds, which should be avoided.

#### New Feature
* Add `BSONEachRow` input/output format. In this format, ClickHouse formats/parses each row as a separate BSON document, and each column is formatted/parsed as a single BSON field with the column name as the key. [#42033](https://github.com/ClickHouse/ClickHouse/pull/42033) ([mark-polokhov](https://github.com/mark-polokhov)).
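  A minimal sketch of the new format in use (the column list is illustrative, not from the PR):

  ``` sql
  -- Each output row becomes one BSON document; column names become BSON field keys.
  SELECT number AS id, toString(number) AS name
  FROM system.numbers
  LIMIT 3
  FORMAT BSONEachRow;
  ```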
* Add the `grace_hash` JOIN algorithm; it can be enabled with `SET join_algorithm = 'grace_hash'`. [#38191](https://github.com/ClickHouse/ClickHouse/pull/38191) ([BigRedEye](https://github.com/BigRedEye), [Vladimir C](https://github.com/vdimir)).
* Allow configuring password complexity rules and checks for creating and changing users. [#43719](https://github.com/ClickHouse/ClickHouse/pull/43719) ([Nikolay Degterinsky](https://github.com/evillique)).
* Add `CREATE / ALTER / DROP NAMED COLLECTION` queries. [#43252](https://github.com/ClickHouse/ClickHouse/pull/43252) ([Kseniia Sumarokova](https://github.com/kssenii)). Restrict default access to named collections for users defined in the config: a user must have explicit `show_named_collections = 1` to be able to see them. [#43325](https://github.com/ClickHouse/ClickHouse/pull/43325) ([Kseniia Sumarokova](https://github.com/kssenii)). The `system.named_collections` table is introduced. [#43147](https://github.com/ClickHouse/ClickHouse/pull/43147) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Mask sensitive information in logs; mask secret parts in the output of the queries `SHOW CREATE TABLE` and `SELECT FROM system.tables`. Also resolves [#41418](https://github.com/ClickHouse/ClickHouse/issues/41418). [#43227](https://github.com/ClickHouse/ClickHouse/pull/43227) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add `GROUP BY ALL` syntax: [#37631](https://github.com/ClickHouse/ClickHouse/issues/37631). [#42265](https://github.com/ClickHouse/ClickHouse/pull/42265) ([刘陶峰](https://github.com/taofengliu)).
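  As a sketch of the semantics (the `events` table and its columns are assumptions for illustration): `GROUP BY ALL` groups by every non-aggregate expression in the `SELECT` list, so the two queries below should be equivalent.

  ``` sql
  -- New shorthand: grouping keys are inferred from the SELECT list.
  SELECT user_id, event_date, count() FROM events GROUP BY ALL;
  -- Equivalent explicit form.
  SELECT user_id, event_date, count() FROM events GROUP BY user_id, event_date;
  ```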
* Add `FROM table SELECT column` syntax. [#41095](https://github.com/ClickHouse/ClickHouse/pull/41095) ([Nikolay Degterinsky](https://github.com/evillique)).
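  A minimal illustration of the inverted clause order, runnable against the built-in `system.numbers` table (assuming the remaining clauses keep their usual positions):

  ``` sql
  -- The FROM clause may now come first.
  FROM system.numbers SELECT number LIMIT 3;
  ```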
* Added function `concatWithSeparator` (with `concat_ws` as an alias for Spark SQL compatibility). A function `concatWithSeparatorAssumeInjective` was added as a variant that enables the GROUP BY optimization, similarly to `concatAssumeInjective`. [#43749](https://github.com/ClickHouse/ClickHouse/pull/43749) ([李扬](https://github.com/taiyang-li)).
* Added `multiplyDecimal` and `divideDecimal` functions for decimal operations with fixed precision. [#42438](https://github.com/ClickHouse/ClickHouse/pull/42438) ([Andrey Zvonov](https://github.com/zvonand)).
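  A sketch of the new functions; treating the optional third argument as the scale of the result is an assumption, so check the reference docs for the exact signature:

  ``` sql
  -- Multiply/divide Decimals with a controlled result scale.
  SELECT multiplyDecimal(toDecimal64(-12.647, 3), toDecimal32(2.1239, 4));
  SELECT divideDecimal(toDecimal64(-12.647, 3), toDecimal32(2.1239, 4), 5);
  ```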
* Added the `system.moves` table with a list of currently moving parts. [#42660](https://github.com/ClickHouse/ClickHouse/pull/42660) ([Sergei Trifonov](https://github.com/serxa)).
* Add support for an embedded Prometheus endpoint in ClickHouse Keeper. [#43087](https://github.com/ClickHouse/ClickHouse/pull/43087) ([Antonio Andelic](https://github.com/antonio2368)).
* Support numeric literals with `_` as a separator, for example `1_000_000`. [#43925](https://github.com/ClickHouse/ClickHouse/pull/43925) ([jh0x](https://github.com/jh0x)).
* Added the possibility to use an array as the second parameter for the `cutURLParameter` function; it will cut multiple parameters. Closes [#6827](https://github.com/ClickHouse/ClickHouse/issues/6827). [#43788](https://github.com/ClickHouse/ClickHouse/pull/43788) ([Roman Vasin](https://github.com/rvasin)).
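  A sketch of the array form (the URL and parameter names are illustrative):

  ``` sql
  -- Cut several URL parameters in one call by passing an array of names.
  SELECT cutURLParameter('http://example.com/?a=1&b=2&c=3', ['a', 'c']);
  -- Expected result: http://example.com/?b=2
  ```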
* Add a column with the expression of the index to the `system.data_skipping_indices` table. [#43308](https://github.com/ClickHouse/ClickHouse/pull/43308) ([Guillaume Tassery](https://github.com/YiuRULE)).
* Add the column `engine_full` to the system table `databases` so that users can access the whole engine definition of a database via system tables. [#43468](https://github.com/ClickHouse/ClickHouse/pull/43468) ([凌涛](https://github.com/lingtaolf)).
* New hash function [xxh3](https://github.com/Cyan4973/xxHash) added. Also, performance of `xxHash32` and `xxHash64` improved on ARM thanks to a library update. [#43411](https://github.com/ClickHouse/ClickHouse/pull/43411) ([Nikita Taranov](https://github.com/nickitat)).
* Added support for defining constraints on merge tree settings. For example, you can forbid overriding the `storage_policy` by users. [#43903](https://github.com/ClickHouse/ClickHouse/pull/43903) ([Sergei Trifonov](https://github.com/serxa)).
* Add a new setting `input_format_json_read_objects_as_strings` that allows parsing nested JSON objects into Strings in all JSON input formats. This setting is disabled by default. [#44052](https://github.com/ClickHouse/ClickHouse/pull/44052) ([Kruglov Pavel](https://github.com/Avogar)).
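  A sketch of the setting in action; the inline data and the use of the `format` table function with an explicit structure are illustrative assumptions:

  ``` sql
  SET input_format_json_read_objects_as_strings = 1;
  -- The nested object is parsed into a String instead of causing a parse error.
  SELECT * FROM format(JSONEachRow, 'obj String', '{"obj" : {"a" : 1, "b" : "text"}}');
  ```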
#### Experimental Feature
* Support deduplication for asynchronous inserts. Before this change, asynchronous inserts did not support deduplication, because multiple small inserts coexist in one inserted batch. Closes [#38075](https://github.com/ClickHouse/ClickHouse/issues/38075). [#43304](https://github.com/ClickHouse/ClickHouse/pull/43304) ([Han Fei](https://github.com/hanfei1991)).
* Add support for cosine distance to the experimental Annoy (vector similarity search) index. [#42778](https://github.com/ClickHouse/ClickHouse/pull/42778) ([Filatenkov Artur](https://github.com/FArthur-cmd)).

#### Performance Improvement
* Add settings `max_streams_for_merge_tree_reading` and `allow_asynchronous_read_from_io_pool_for_merge_tree`. Setting `max_streams_for_merge_tree_reading` limits the number of reading streams for MergeTree tables. Setting `allow_asynchronous_read_from_io_pool_for_merge_tree` enables a background I/O pool to read from `MergeTree` tables. This may increase performance for I/O-bound queries if used together with `max_streams_to_max_threads_ratio` or `max_streams_for_merge_tree_reading`. [#43260](https://github.com/ClickHouse/ClickHouse/pull/43260) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). This improves performance up to 100 times in the case of high-latency storage, a low number of CPUs, and a high number of data parts.
* Settings `merge_tree_min_rows_for_concurrent_read_for_remote_filesystem`/`merge_tree_min_bytes_for_concurrent_read_for_remote_filesystem` did not respect adaptive granularity. Fat rows did not decrease the number of read rows (as is done for `merge_tree_min_rows_for_concurrent_read`/`merge_tree_min_bytes_for_concurrent_read`), which could lead to high memory usage when using remote filesystems. [#43965](https://github.com/ClickHouse/ClickHouse/pull/43965) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Optimized the number of list requests to ZooKeeper or ClickHouse Keeper when selecting a part to merge. Previously it could produce thousands of requests in some cases. Fixes [#43647](https://github.com/ClickHouse/ClickHouse/issues/43647). [#43675](https://github.com/ClickHouse/ClickHouse/pull/43675) ([Alexander Tokmakov](https://github.com/tavplubix)).
* The optimization is now skipped if `max_size_to_preallocate_for_aggregation` has too small a value. The default value of this setting was increased to `10^8`. [#43945](https://github.com/ClickHouse/ClickHouse/pull/43945) ([Nikita Taranov](https://github.com/nickitat)).
* Speed up server shutdown by avoiding cleanup of old data parts, which is unnecessary after https://github.com/ClickHouse/ClickHouse/pull/41145. [#43760](https://github.com/ClickHouse/ClickHouse/pull/43760) ([Sema Checherinda](https://github.com/CheSema)).
* Merging on the initiator now uses the same memory-bound approach as merging of local aggregation results if `enable_memory_bound_merging_of_aggregation_results` is set. [#40879](https://github.com/ClickHouse/ClickHouse/pull/40879) ([Nikita Taranov](https://github.com/nickitat)).
* Keeper improvement: try syncing logs to disk in parallel with replication. [#43450](https://github.com/ClickHouse/ClickHouse/pull/43450) ([Antonio Andelic](https://github.com/antonio2368)).
* Keeper improvement: requests are batched more often. The batching can be controlled with the new setting `max_requests_quick_batch_size`. [#43686](https://github.com/ClickHouse/ClickHouse/pull/43686) ([Antonio Andelic](https://github.com/antonio2368)).

#### Improvement
* Implement referential dependencies and use them to create tables in the correct order while restoring from a backup. [#43834](https://github.com/ClickHouse/ClickHouse/pull/43834) ([Vitaly Baranov](https://github.com/vitlibar)).
* Substitute UDFs in the `CREATE` query to avoid failures during loading at startup. Additionally, UDFs can now be used as `DEFAULT` expressions for columns. [#43539](https://github.com/ClickHouse/ClickHouse/pull/43539) ([Antonio Andelic](https://github.com/antonio2368)).
* Change how the following queries delete parts: `TRUNCATE TABLE`, `ALTER TABLE DROP PART`, `ALTER TABLE DROP PARTITION`. Now these queries make empty parts which cover the old parts. This makes the `TRUNCATE` query work without an exclusive lock, so concurrent reads are not blocked. Durability is also achieved in all of these queries: if the request succeeds, no resurrected parts appear later. Note that atomicity is achieved only within transaction scope. [#41145](https://github.com/ClickHouse/ClickHouse/pull/41145) ([Sema Checherinda](https://github.com/CheSema)).
* The `SET param_x` query no longer requires manual string serialization for the value of the parameter. For example, the query `SET param_a = '[\'a\', \'b\']'` can now be written as `SET param_a = ['a', 'b']`. [#41874](https://github.com/ClickHouse/ClickHouse/pull/41874) ([Nikolay Degterinsky](https://github.com/evillique)).
* Show read rows in the progress indication while reading from stdin in the client. Closes [#43423](https://github.com/ClickHouse/ClickHouse/issues/43423). [#43442](https://github.com/ClickHouse/ClickHouse/pull/43442) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Show a progress bar while reading from the s3 table function / engine. [#43454](https://github.com/ClickHouse/ClickHouse/pull/43454) ([Kseniia Sumarokova](https://github.com/kssenii)).
* The progress bar will show both read and written rows. [#43496](https://github.com/ClickHouse/ClickHouse/pull/43496) ([Ilya Yatsishin](https://github.com/qoega)).
* `filesystemAvailable` and related functions support one optional argument with a disk name, and `filesystemFree` is renamed to `filesystemUnreserved`. Closes [#35076](https://github.com/ClickHouse/ClickHouse/issues/35076). [#42064](https://github.com/ClickHouse/ClickHouse/pull/42064) ([flynn](https://github.com/ucasfl)).
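  A minimal sketch (the disk name `default` is an assumption; substitute a disk from your configuration):

  ``` sql
  -- Without an argument the function describes the disk of the default path;
  -- with an argument it describes the named disk.
  SELECT filesystemAvailable() AS default_path_bytes,
         filesystemAvailable('default') AS default_disk_bytes,
         filesystemUnreserved('default') AS unreserved_bytes;
  ```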
* Integration with LDAP: increased the default value of `search_limit` to 256 and added an LDAP server config option to change it to an arbitrary value. Closes [#42276](https://github.com/ClickHouse/ClickHouse/issues/42276). [#42461](https://github.com/ClickHouse/ClickHouse/pull/42461) ([Vasily Nemkov](https://github.com/Enmk)).
* Allow removing sensitive information (see `query_masking_rules` in the configuration file) from exception messages as well. Resolves [#41418](https://github.com/ClickHouse/ClickHouse/issues/41418). [#42940](https://github.com/ClickHouse/ClickHouse/pull/42940) ([filimonov](https://github.com/filimonov)).
* Support queries like `SHOW FULL TABLES ...` for MySQL compatibility. [#43910](https://github.com/ClickHouse/ClickHouse/pull/43910) ([Filatenkov Artur](https://github.com/FArthur-cmd)).
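  A sketch of the MySQL-compatible form (the database and pattern are illustrative):

  ``` sql
  -- FULL is accepted for MySQL compatibility; it combines with FROM and LIKE as usual.
  SHOW FULL TABLES FROM system LIKE '%part%';
  ```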
* Keeper improvement: add the 4lw command `rqld`, which can manually assign a node as leader. [#43026](https://github.com/ClickHouse/ClickHouse/pull/43026) ([JackyWoo](https://github.com/JackyWoo)).
* Apply connection timeout settings from the query for Distributed asynchronous INSERT. [#43156](https://github.com/ClickHouse/ClickHouse/pull/43156) ([Azat Khuzhin](https://github.com/azat)).
* The `unhex` function now supports `FixedString` arguments. Closes [#42369](https://github.com/ClickHouse/ClickHouse/issues/42369). [#43207](https://github.com/ClickHouse/ClickHouse/pull/43207) ([DR](https://github.com/freedomDR)).
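  A minimal sketch of the new argument type:

  ``` sql
  -- unhex now also accepts FixedString input; '666f6f' decodes to 'foo'.
  SELECT unhex(toFixedString('666f6f', 6)) AS s;
  ```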
* Priority is given to deleting completely expired parts according to the TTL rules; see [#42869](https://github.com/ClickHouse/ClickHouse/issues/42869). [#43222](https://github.com/ClickHouse/ClickHouse/pull/43222) ([zhongyuankai](https://github.com/zhongyuankai)).
* More precise and reactive CPU load indication in clickhouse-client. [#43307](https://github.com/ClickHouse/ClickHouse/pull/43307) ([Sergei Trifonov](https://github.com/serxa)).
* Support reading subcolumns of nested types from the `S3` storage and the `s3` table function with the `Parquet`, `Arrow` and `ORC` formats. [#43329](https://github.com/ClickHouse/ClickHouse/pull/43329) ([chen](https://github.com/xiedeyantu)).
* Add a `table_uuid` column to the `system.parts` table. [#43404](https://github.com/ClickHouse/ClickHouse/pull/43404) ([Azat Khuzhin](https://github.com/azat)).
* Added a client option to display the number of locally processed rows in non-interactive mode (`--print-num-processed-rows`). [#43407](https://github.com/ClickHouse/ClickHouse/pull/43407) ([jh0x](https://github.com/jh0x)).
* Implement the `aggregation-in-order` optimization on top of the query plan. It is enabled by default (but works only together with `optimize_aggregation_in_order`, which is disabled by default). Set `query_plan_aggregation_in_order = 0` to use the previous AST-based version. [#43592](https://github.com/ClickHouse/ClickHouse/pull/43592) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Allow collecting profile events with `trace_type = 'ProfileEvent'` into `system.trace_log` on each increment, with the current stack, the profile event name, and the value of the increment. It can be enabled by the setting `trace_profile_events` and used to investigate the performance of queries. [#43639](https://github.com/ClickHouse/ClickHouse/pull/43639) ([Anton Popov](https://github.com/CurtizJ)).
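  A sketch of how the new trace type could be inspected (the workload query and the exact set of logged columns are assumptions):

  ``` sql
  SET trace_profile_events = 1;
  SELECT count() FROM system.numbers LIMIT 10000000;
  -- Each profile-event increment is written to system.trace_log with its stack.
  SELECT event, sum(increment) AS total
  FROM system.trace_log
  WHERE trace_type = 'ProfileEvent'
  GROUP BY event
  ORDER BY total DESC
  LIMIT 5;
  ```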
* Add a new setting `input_format_max_binary_string_size` to limit the string size in the RowBinary format. [#43842](https://github.com/ClickHouse/ClickHouse/pull/43842) ([Kruglov Pavel](https://github.com/Avogar)).
* When ClickHouse requests a remote HTTP server and it returns an error, the numeric HTTP code was not displayed correctly in the exception message. Closes [#43919](https://github.com/ClickHouse/ClickHouse/issues/43919). [#43920](https://github.com/ClickHouse/ClickHouse/pull/43920) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Correctly report errors in queries even when the multiple-JOINs optimization is taking place. [#43583](https://github.com/ClickHouse/ClickHouse/pull/43583) ([Salvatore](https://github.com/tbsal)).

#### Build/Testing/Packaging Improvement

* The systemd integration now correctly notifies systemd that the service has really started and is ready to serve requests. [#43400](https://github.com/ClickHouse/ClickHouse/pull/43400) ([Коренберг Марк](https://github.com/socketpair)).
* If someone wants, they can build ClickHouse with OpenSSL instead of BoringSSL, even as a dynamic library. This type of build is unsupported and not recommended: it is not tested and therefore not secure. The use case is to supply a FIPS 140-2 certified build of OpenSSL. [#43991](https://github.com/ClickHouse/ClickHouse/pull/43991) ([Boris Kuschel](https://github.com/bkuschel)).
* Upgrade the `DeflateQpl` compression codec, which was implemented in a previous PR (details: https://github.com/ClickHouse/ClickHouse/pull/39494). This patch improves the codec in the following aspects: 1. Upgrade from QPL v0.2.0 to QPL v0.3.0 ([Intel® Query Processing Library (QPL)](https://github.com/intel/qpl)). 2. Improve the CMake file, fixing QPL build issues for QPL v0.3.0. 3. Link the QPL library with libaccel-config at build time instead of loading it at runtime (dlopen), as was done in QPL v0.2.0. 4. Fix a log-print issue in CompressionCodecDeflateQpl.cpp. [#44024](https://github.com/ClickHouse/ClickHouse/pull/44024) ([jasperzhu](https://github.com/jinjunzh)).

#### Bug Fix (user-visible misbehavior in official stable or prestable release)

* Fixed a bug which could lead to a deadlock while using asynchronous inserts. [#43233](https://github.com/ClickHouse/ClickHouse/pull/43233) ([Anton Popov](https://github.com/CurtizJ)).
* Fix some incorrect logic in the AST-level optimization `optimize_normalize_count_variants`. [#43873](https://github.com/ClickHouse/ClickHouse/pull/43873) ([Duc Canh Le](https://github.com/canhld94)).
* Fix a case when mutations were not making progress when checksums did not match between replicas (e.g. caused by a change in the data format on an upgrade). [#36877](https://github.com/ClickHouse/ClickHouse/pull/36877) ([nvartolomei](https://github.com/nvartolomei)).
* Fix the `skip_unavailable_shards` optimization, which did not work with the `hdfsCluster` table function. [#43236](https://github.com/ClickHouse/ClickHouse/pull/43236) ([chen](https://github.com/xiedeyantu)).
* Fix `s3` support for the `?` wildcard. Closes [#42731](https://github.com/ClickHouse/ClickHouse/issues/42731). [#43253](https://github.com/ClickHouse/ClickHouse/pull/43253) ([chen](https://github.com/xiedeyantu)).
* Fix the functions `arrayFirstOrNull` and `arrayLastOrNull` when the array contains `Nullable` elements. [#43274](https://github.com/ClickHouse/ClickHouse/pull/43274) ([Duc Canh Le](https://github.com/canhld94)).
* Fix incorrect `UserTimeMicroseconds`/`SystemTimeMicroseconds` accounting related to Kafka tables. [#42791](https://github.com/ClickHouse/ClickHouse/pull/42791) ([Azat Khuzhin](https://github.com/azat)).
* Do not suppress exceptions in `web` disks. Fix retries for the `web` disk. [#42800](https://github.com/ClickHouse/ClickHouse/pull/42800) ([Azat Khuzhin](https://github.com/azat)).
* Fixed a (logical) race condition between inserts and dropping materialized views. The race happened when a materialized view was dropped at the same time as an INSERT: the MV was present as a dependency of the insert at the beginning of the execution, but the table had been dropped by the time the insert chain tried to access it, producing either an `UNKNOWN_TABLE` or a `TABLE_IS_DROPPED` exception and stopping the insertion. After this change, these exceptions are avoided and the insert simply continues if the dependency is gone. [#43161](https://github.com/ClickHouse/ClickHouse/pull/43161) ([AlfVII](https://github.com/AlfVII)).
* Fix undefined behavior in the `quantiles` function, which might lead to uninitialized memory. Found by fuzzer. This closes [#44066](https://github.com/ClickHouse/ClickHouse/issues/44066). [#44067](https://github.com/ClickHouse/ClickHouse/pull/44067) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* An additional check on zero uncompressed size was added to `CompressionCodecDelta`. [#43255](https://github.com/ClickHouse/ClickHouse/pull/43255) ([Nikita Taranov](https://github.com/nickitat)).
* Flatten arrays from Parquet to avoid an issue with inconsistent data in arrays. Such incorrect files can be generated by Apache Iceberg. [#43297](https://github.com/ClickHouse/ClickHouse/pull/43297) ([Arthur Passos](https://github.com/arthurpassos)).
* Fix a bad cast from a `LowCardinality` column when using short-circuit function execution. [#43311](https://github.com/ClickHouse/ClickHouse/pull/43311) ([Kruglov Pavel](https://github.com/Avogar)).
* Fixed queries with `SAMPLE BY` with the prewhere optimization on tables using the `Merge` engine. [#43315](https://github.com/ClickHouse/ClickHouse/pull/43315) ([Antonio Andelic](https://github.com/antonio2368)).
* Check and compare the content of the `format_version` file in `MergeTreeData` so tables can be loaded even if the storage policy was changed. [#43328](https://github.com/ClickHouse/ClickHouse/pull/43328) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix a possible (very unlikely) "No column to rollback" logical error during INSERT into `Buffer` tables. [#43336](https://github.com/ClickHouse/ClickHouse/pull/43336) ([Azat Khuzhin](https://github.com/azat)).
* Fix a bug that allowed the parser to parse an unlimited amount of round brackets into one function if `allow_function_parameters` is set. [#43350](https://github.com/ClickHouse/ClickHouse/pull/43350) ([Nikolay Degterinsky](https://github.com/evillique)).
* `MaterializeMySQL` (experimental feature) now supports the DDL `drop table t1, t2` and is compatible with most MySQL DROP DDL. [#43366](https://github.com/ClickHouse/ClickHouse/pull/43366) ([zzsmdfj](https://github.com/zzsmdfj)).
* `session_log` (experimental feature): fixed the inability to log in (because of a failure to create the session_log entry) in a very rare case of messed-up setting profiles. [#42641](https://github.com/ClickHouse/ClickHouse/pull/42641) ([Vasily Nemkov](https://github.com/Enmk)).
* Fix possible `Cannot create non-empty column with type Nothing` in the functions `if`/`multiIf`. Closes [#43356](https://github.com/ClickHouse/ClickHouse/issues/43356). [#43368](https://github.com/ClickHouse/ClickHouse/pull/43368) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix a bug when a row-level filter uses the default value of a column. [#43387](https://github.com/ClickHouse/ClickHouse/pull/43387) ([Alexander Gololobov](https://github.com/davenger)).
* A query with `DISTINCT` + `LIMIT BY` + `LIMIT` could return fewer rows than expected. Fixes [#43377](https://github.com/ClickHouse/ClickHouse/issues/43377). [#43410](https://github.com/ClickHouse/ClickHouse/pull/43410) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix `sumMap` for `Nullable(Decimal(...))`. [#43414](https://github.com/ClickHouse/ClickHouse/pull/43414) ([Azat Khuzhin](https://github.com/azat)).
* Fix `date_diff` for hour/minute on macOS. Closes [#42742](https://github.com/ClickHouse/ClickHouse/issues/42742). [#43466](https://github.com/ClickHouse/ClickHouse/pull/43466) ([zzsmdfj](https://github.com/zzsmdfj)).
* Fix incorrect memory accounting because of merges/mutations. [#43516](https://github.com/ClickHouse/ClickHouse/pull/43516) ([Azat Khuzhin](https://github.com/azat)).
* Fixed primary key analysis with conditions involving `toString(enum)`. [#43596](https://github.com/ClickHouse/ClickHouse/pull/43596) ([Nikita Taranov](https://github.com/nickitat)). This error was found by @tisonkun.
* Ensure consistency when `clickhouse-copier` updates status and `attach_is_done` in Keeper after a partition attach is done. [#43602](https://github.com/ClickHouse/ClickHouse/pull/43602) ([lzydmxy](https://github.com/lzydmxy)).
* During recovery of a lost replica of a `Replicated` database (experimental feature), there could be a situation where two table names needed to be swapped atomically (using EXCHANGE), but previously two RENAME queries were attempted instead, which failed and, moreover, broke the whole recovery process of the database replica. [#43628](https://github.com/ClickHouse/ClickHouse/pull/43628) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix the case when the `s3Cluster` function throws a `NOT_FOUND_COLUMN_IN_BLOCK` error. Closes [#43534](https://github.com/ClickHouse/ClickHouse/issues/43534). [#43629](https://github.com/ClickHouse/ClickHouse/pull/43629) ([chen](https://github.com/xiedeyantu)).
* Fix a possible logical error `Array sizes mismatched` while parsing a JSON object with arrays that have the same key names but different nesting levels. Closes [#43569](https://github.com/ClickHouse/ClickHouse/issues/43569). [#43693](https://github.com/ClickHouse/ClickHouse/pull/43693) ([Kruglov Pavel](https://github.com/Avogar)).
* Fixed a possible exception in the case of a distributed `GROUP BY` with an `ALIAS` column among the aggregation keys. [#43709](https://github.com/ClickHouse/ClickHouse/pull/43709) ([Nikita Taranov](https://github.com/nickitat)).
* Fix a bug which can lead to broken projections if zero-copy replication (experimental feature) is enabled and used. [#43764](https://github.com/ClickHouse/ClickHouse/pull/43764) ([alesapin](https://github.com/alesapin)).
* Fix using multipart upload for very large S3 objects in AWS S3. [#43824](https://github.com/ClickHouse/ClickHouse/pull/43824) ([ianton-ru](https://github.com/ianton-ru)).
* Fixed `ALTER ... RESET SETTING` with `ON CLUSTER`: it could be applied to one replica only. Fixes [#43843](https://github.com/ClickHouse/ClickHouse/issues/43843). [#43848](https://github.com/ClickHouse/ClickHouse/pull/43848) ([Elena Torró](https://github.com/elenatorro)).
* Fix a logical error in JOIN with the `Join` table engine on the right-hand side when `USING` is used. [#43963](https://github.com/ClickHouse/ClickHouse/pull/43963) ([Vladimir C](https://github.com/vdimir)). Fix a bug with the wrong order of keys in the `Join` table engine. [#44012](https://github.com/ClickHouse/ClickHouse/pull/44012) ([Vladimir C](https://github.com/vdimir)).
* Keeper fix: throw if the interserver port for Raft is already in use. [#43984](https://github.com/ClickHouse/ClickHouse/pull/43984) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix ORDER BY positional arguments (example: `ORDER BY 1, 2`) in the case of unneeded column pruning from subqueries. Closes [#43964](https://github.com/ClickHouse/ClickHouse/issues/43964). [#43987](https://github.com/ClickHouse/ClickHouse/pull/43987) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fixed an exception when a subquery contains HAVING but doesn't contain an actual aggregation. [#44051](https://github.com/ClickHouse/ClickHouse/pull/44051) ([Nikita Taranov](https://github.com/nickitat)).
* Fix a race in s3 multipart upload. This race could cause the error `Part number must be an integer between 1 and 10000, inclusive. (S3_ERROR)` while restoring from a backup. [#44065](https://github.com/ClickHouse/ClickHouse/pull/44065) ([Vitaly Baranov](https://github.com/vitlibar)).


### <a id="2211"></a> ClickHouse release 22.11, 2022-11-17

#### Backward Incompatible Change
@@ -609,6 +609,8 @@ if (NATIVE_BUILD_TARGETS
            "-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}"
            "-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}"
            "-DENABLE_CCACHE=${ENABLE_CCACHE}"
            # Avoid overriding .cargo/config.toml with native toolchain.
            "-DENABLE_RUST=OFF"
            "-DENABLE_CLICKHOUSE_SELF_EXTRACTING=${ENABLE_CLICKHOUSE_SELF_EXTRACTING}"
            ${CMAKE_SOURCE_DIR}
        WORKING_DIRECTORY "${NATIVE_BUILD_DIR}"
@@ -16,6 +16,6 @@ ClickHouse® is an open-source column-oriented database management system that a
* [Contacts](https://clickhouse.com/company/contact) can help to get your questions answered if there are any.

## Upcoming events
* [**v22.12 Release Webinar**](https://clickhouse.com/company/events/v22-12-release-webinar) Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release, provide live demos, and share vision into what is coming in the roadmap.
* [**v22.12 Release Webinar**](https://clickhouse.com/company/events/v22-12-release-webinar) 22.12 is the ClickHouse Christmas release. There are plenty of gifts (a new JOIN algorithm among them) and we adopted something from MongoDB. Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release.
* [**ClickHouse Meetup at the CHEQ office in Tel Aviv**](https://www.meetup.com/clickhouse-tel-aviv-user-group/events/289599423/) - Jan 16 - We are very excited to be holding our next in-person ClickHouse meetup at the CHEQ office in Tel Aviv! Hear from CHEQ, ServiceNow and Contentsquare, as well as a deep dive presentation from ClickHouse CTO Alexey Milovidov. Join us for a fun evening of talks, food and discussion!
* **ClickHouse Meetup in Seattle** - Keep an eye on this space as we will be announcing a January meetup in Seattle soon!
* [**ClickHouse Meetup at Microsoft Office in Seattle**](https://www.meetup.com/clickhouse-seattle-user-group/events/290310025/) - Jan 18 - Keep an eye on this space as we will be announcing speakers soon!
@@ -40,6 +40,11 @@ else ()
    target_compile_definitions(common PUBLIC WITH_COVERAGE=0)
endif ()

# FIXME: move libraries for line reading out from base
if (TARGET ch_rust::skim)
    target_link_libraries(common PUBLIC ch_rust::skim)
endif()

target_include_directories(common PUBLIC .. "${CMAKE_CURRENT_BINARY_DIR}/..")

if (OS_DARWIN AND NOT USE_STATIC_LIBRARIES)
@@ -16,9 +16,11 @@
#include <fstream>
#include <filesystem>
#include <fmt/format.h>
#include <boost/algorithm/string/split.hpp>
#include <boost/algorithm/string/replace.hpp>
#include <boost/algorithm/string/classification.hpp> /// is_any_of
#include "config.h" // USE_SKIM

#if USE_SKIM
#include <skim.h>
#endif

namespace
{
@@ -39,36 +41,6 @@ std::string getEditor()
    return editor;
}

std::pair<std::string, FuzzyFinderType> getFuzzyFinder()
{
    const char * env_path = std::getenv("PATH"); // NOLINT(concurrency-mt-unsafe)

    if (!env_path || !*env_path)
        return {};

    std::vector<std::string> paths;
    boost::split(paths, env_path, boost::is_any_of(":"));
    for (const auto & path_str : paths)
    {
        std::filesystem::path path(path_str);
        std::filesystem::path sk_bin_path = path / "sk";
        if (!access(sk_bin_path.c_str(), X_OK))
            return {sk_bin_path, FUZZY_FINDER_SKIM};

        std::filesystem::path fzf_bin_path = path / "fzf";
        if (!access(fzf_bin_path.c_str(), X_OK))
            return {fzf_bin_path, FUZZY_FINDER_FZF};
    }

    return {"", FUZZY_FINDER_NONE};
}

String escapeShellArgument(std::string arg)
{
    boost::replace_all(arg, "'", "'\\''");
    return fmt::format("'{}'", arg);
}

/// See comments in ShellCommand::executeImpl()
/// (for the vfork via dlsym())
int executeCommand(char * const argv[])
@@ -316,8 +288,6 @@ ReplxxLineReader::ReplxxLineReader(
    using namespace std::placeholders;
    using Replxx = replxx::Replxx;

    std::tie(fuzzy_finder, fuzzy_finder_type) = getFuzzyFinder();

    if (!history_file_path.empty())
    {
        history_file_fd = open(history_file_path.c_str(), O_RDWR);
@@ -422,17 +392,30 @@ ReplxxLineReader::ReplxxLineReader(
    };
    rx.bind_key(Replxx::KEY::meta('#'), insert_comment_action);

    /// interactive search in history (requires fzf/sk)
    if (fuzzy_finder_type != FUZZY_FINDER_NONE)
    {
        auto interactive_history_search = [this](char32_t code)
        {
            openInteractiveHistorySearch();
            rx.invoke(Replxx::ACTION::CLEAR_SELF, code);
            return rx.invoke(Replxx::ACTION::REPAINT, code);
        };
        rx.bind_key(Replxx::KEY::control('R'), interactive_history_search);
    }

#if USE_SKIM
    auto interactive_history_search = [this](char32_t code)
    {
        std::vector<std::string> words;
        {
            auto hs(rx.history_scan());
            while (hs.next())
                words.push_back(hs.get().text());
        }

        std::string new_query(skim(words));
        if (!new_query.empty())
            rx.set_state(replxx::Replxx::State(new_query.c_str(), static_cast<int>(new_query.size())));

        if (bracketed_paste_enabled)
            enableBracketedPaste();

        rx.invoke(Replxx::ACTION::CLEAR_SELF, code);
        return rx.invoke(Replxx::ACTION::REPAINT, code);
    };

    /// NOTE: You can use Ctrl-S for non-fuzzy complete.
    rx.bind_key(Replxx::KEY::control('R'), interactive_history_search);
#endif
}
ReplxxLineReader::~ReplxxLineReader()
@@ -501,65 +484,6 @@ void ReplxxLineReader::openEditor()
    enableBracketedPaste();
}

void ReplxxLineReader::openInteractiveHistorySearch()
{
    assert(!fuzzy_finder.empty());
    TemporaryFile history_file("clickhouse_client_history_in_XXXXXX.bin");
    auto hs(rx.history_scan());
    while (hs.next())
    {
        history_file.write(hs.get().text());
        history_file.write(std::string(1, '\0'));
    }
    history_file.close();

    TemporaryFile output_file("clickhouse_client_history_out_XXXXXX.sql");
    output_file.close();

    char sh[] = "sh";
    char sh_c[] = "-c";
    /// NOTE: You can use one of the following to configure the behaviour additionally:
    /// - SKIM_DEFAULT_OPTIONS
    /// - FZF_DEFAULT_OPTS
    ///
    /// And also note, that fzf and skim is 95% compatible (at least option
    /// that is used here)
    std::string fuzzy_finder_command = fmt::format("{} --read0 --height=30%", fuzzy_finder);
    switch (fuzzy_finder_type)
    {
        case FUZZY_FINDER_SKIM:
            fuzzy_finder_command += " --tac --tiebreak=-score";
            break;
        case FUZZY_FINDER_FZF:
            fuzzy_finder_command += " --tac --tiebreak=index";
            break;
        case FUZZY_FINDER_NONE:
            /// assertion for !fuzzy_finder.empty() is enough
            break;
    }
    fuzzy_finder_command += fmt::format(" < {} > {}",
        escapeShellArgument(history_file.getPath()),
        escapeShellArgument(output_file.getPath()));
    char * const argv[] = {sh, sh_c, fuzzy_finder_command.data(), nullptr};

    try
    {
        if (executeCommand(argv) == 0)
        {
            std::string new_query = readFile(output_file.getPath());
            rightTrim(new_query);
            rx.set_state(replxx::Replxx::State(new_query.c_str(), static_cast<int>(new_query.size())));
        }
    }
    catch (const std::runtime_error & e)
    {
        rx.print(e.what());
    }

    if (bracketed_paste_enabled)
        enableBracketedPaste();
}

void ReplxxLineReader::enableBracketedPaste()
{
    bracketed_paste_enabled = true;
@@ -4,15 +4,6 @@

#include <replxx.hxx>

enum FuzzyFinderType
{
    FUZZY_FINDER_NONE,
    /// Use https://github.com/junegunn/fzf
    FUZZY_FINDER_FZF,
    /// Use https://github.com/lotabout/skim
    FUZZY_FINDER_SKIM,
};

class ReplxxLineReader : public LineReader
{
public:

@@ -35,7 +26,6 @@ private:
    void addToHistory(const String & line) override;
    int executeEditor(const std::string & path);
    void openEditor();
    void openInteractiveHistorySearch();

    replxx::Replxx rx;
    replxx::Replxx::highlighter_callback_t highlighter;

@@ -45,6 +35,4 @@ private:
    bool bracketed_paste_enabled = false;

    std::string editor;
    std::string fuzzy_finder;
    FuzzyFinderType fuzzy_finder_type = FUZZY_FINDER_NONE;
};
@@ -10,9 +10,6 @@ else()
endif()

option(ENABLE_RUST "Enable rust" ${DEFAULT_ENABLE_RUST})

message(STATUS ${ENABLE_RUST})

if(NOT ENABLE_RUST)
    message(STATUS "Not using rust")
    return()
@@ -55,7 +55,8 @@ ccache --zero-stats ||:
if [ "$BUILD_MUSL_KEEPER" == "1" ]
then
    # build keeper with musl separately
    cmake --debug-trycompile -DBUILD_STANDALONE_KEEPER=1 -DENABLE_CLICKHOUSE_KEEPER=1 -DCMAKE_VERBOSE_MAKEFILE=1 -DUSE_MUSL=1 -LA -DCMAKE_TOOLCHAIN_FILE=/build/cmake/linux/toolchain-x86_64-musl.cmake "-DCMAKE_BUILD_TYPE=$BUILD_TYPE" "-DSANITIZE=$SANITIZER" -DENABLE_CHECK_HEAVY_BUILDS=1 "${CMAKE_FLAGS[@]}" ..
    # and without rust bindings
    cmake --debug-trycompile -DENABLE_RUST=OFF -DBUILD_STANDALONE_KEEPER=1 -DENABLE_CLICKHOUSE_KEEPER=1 -DCMAKE_VERBOSE_MAKEFILE=1 -DUSE_MUSL=1 -LA -DCMAKE_TOOLCHAIN_FILE=/build/cmake/linux/toolchain-x86_64-musl.cmake "-DCMAKE_BUILD_TYPE=$BUILD_TYPE" "-DSANITIZE=$SANITIZER" -DENABLE_CHECK_HEAVY_BUILDS=1 "${CMAKE_FLAGS[@]}" ..
    # shellcheck disable=SC2086 # No quotes because I want it to expand to nothing if empty.
    ninja $NINJA_FLAGS clickhouse-keeper
@@ -497,6 +497,7 @@ else
    -e "Coordination::Exception: Connection loss" \
    -e "MutateFromLogEntryTask" \
    -e "No connection to ZooKeeper, cannot get shared table ID" \
    -e "Session expired" \
    /var/log/clickhouse-server/clickhouse-server.backward.clean.log | zgrep -Fa "<Error>" > /test_output/bc_check_error_messages.txt \
    && echo -e 'Backward compatibility check: Error message in clickhouse-server.log (see bc_check_error_messages.txt)\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'Backward compatibility check: No Error messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv
@@ -13,6 +13,7 @@ Columns:
- `metadata_path` ([String](../../sql-reference/data-types/enum.md)) — Metadata path.
- `uuid` ([UUID](../../sql-reference/data-types/uuid.md)) — Database UUID.
- `comment` ([String](../../sql-reference/data-types/enum.md)) — Database comment.
- `engine_full` ([String](../../sql-reference/data-types/enum.md)) — Parameters of the database engine.

The `name` column from this system table is used for implementing the `SHOW DATABASES` query.

@@ -31,10 +32,12 @@ SELECT * FROM system.databases;
```

``` text
┌─name───────────────┬─engine─┬─data_path──────────────────┬─metadata_path───────────────────────────────────────────────────────┬─uuid─────────────────────────────────┬─comment─┐
│ INFORMATION_SCHEMA │ Memory │ /var/lib/clickhouse/       │                                                                     │ 00000000-0000-0000-0000-000000000000 │         │
│ default            │ Atomic │ /var/lib/clickhouse/store/ │ /var/lib/clickhouse/store/d31/d317b4bd-3595-4386-81ee-c2334694128a/ │ 24363899-31d7-42a0-a436-389931d752a0 │         │
│ information_schema │ Memory │ /var/lib/clickhouse/       │                                                                     │ 00000000-0000-0000-0000-000000000000 │         │
│ system             │ Atomic │ /var/lib/clickhouse/store/ │ /var/lib/clickhouse/store/1d1/1d1c869d-e465-4b1b-a51f-be033436ebf9/ │ 03e9f3d1-cc88-4a49-83e9-f3d1cc881a49 │         │
└────────────────────┴────────┴────────────────────────────┴─────────────────────────────────────────────────────────────────────┴──────────────────────────────────────┴─────────┘
┌─name────────────────┬─engine─────┬─data_path────────────────────┬─metadata_path─────────────────────────────────────────────────────────┬─uuid─────────────────────────────────┬─engine_full────────────────────────────────────────────┬─comment─┐
│ INFORMATION_SCHEMA  │ Memory     │ /data/clickhouse_data/       │                                                                       │ 00000000-0000-0000-0000-000000000000 │ Memory                                                 │         │
│ default             │ Atomic     │ /data/clickhouse_data/store/ │ /data/clickhouse_data/store/f97/f97a3ceb-2e8a-4912-a043-c536e826a4d4/ │ f97a3ceb-2e8a-4912-a043-c536e826a4d4 │ Atomic                                                 │         │
│ information_schema  │ Memory     │ /data/clickhouse_data/       │                                                                       │ 00000000-0000-0000-0000-000000000000 │ Memory                                                 │         │
│ replicated_database │ Replicated │ /data/clickhouse_data/store/ │ /data/clickhouse_data/store/da8/da85bb71-102b-4f69-9aad-f8d6c403905e/ │ da85bb71-102b-4f69-9aad-f8d6c403905e │ Replicated('some/path/database', 'shard1', 'replica1') │         │
│ system              │ Atomic     │ /data/clickhouse_data/store/ │ /data/clickhouse_data/store/b57/b5770419-ac7a-4b67-8229-524122024076/ │ b5770419-ac7a-4b67-8229-524122024076 │ Atomic                                                 │         │
└─────────────────────┴────────────┴──────────────────────────────┴───────────────────────────────────────────────────────────────────────┴──────────────────────────────────────┴────────────────────────────────────────────────────────┴─────────┘
```
@@ -6,6 +6,26 @@ sidebar_label: Float32, Float64

# Float32, Float64

:::warning
If you need accurate calculations, in particular if you work with financial or business data requiring high precision, you should consider using Decimal instead. Floats might lead to inaccurate results, as illustrated below:

``` sql
CREATE TABLE IF NOT EXISTS float_vs_decimal
(
   my_float Float64,
   my_decimal Decimal64(3)
) Engine=MergeTree ORDER BY tuple();

-- Generate 1 000 000 random numbers with 3 decimal places and store them as a float and as a decimal
INSERT INTO float_vs_decimal SELECT round(canonicalRand(), 3) AS res, res FROM system.numbers LIMIT 1000000;

SELECT sum(my_float), sum(my_decimal) FROM float_vs_decimal;
> 500279.56300000014	500279.563

SELECT sumKahan(my_float), sumKahan(my_decimal) FROM float_vs_decimal;
> 500279.563	500279.563
```
:::

[Floating point numbers](https://en.wikipedia.org/wiki/IEEE_754).

Types are equivalent to types of C:

@@ -13,8 +33,6 @@ Types are equivalent to types of C:
- `Float32` — `float`.
- `Float64` — `double`.

We recommend that you store data in integer form whenever possible. For example, convert fixed-precision numbers to integer values, such as monetary amounts or page load times in milliseconds.

Aliases:

- `Float32` — `FLOAT`.
@@ -410,35 +410,35 @@ Converts a date with time to a certain fixed date, while preserving the time.

## toRelativeYearNum

Converts a date or date with time to the number of the year, starting from a certain fixed point in the past.
Converts a date with time or date to the number of the year, starting from a certain fixed point in the past.

## toRelativeQuarterNum

Converts a date or date with time to the number of the quarter, starting from a certain fixed point in the past.
Converts a date with time or date to the number of the quarter, starting from a certain fixed point in the past.

## toRelativeMonthNum

Converts a date or date with time to the number of the month, starting from a certain fixed point in the past.
Converts a date with time or date to the number of the month, starting from a certain fixed point in the past.

## toRelativeWeekNum

Converts a date or date with time to the number of the week, starting from a certain fixed point in the past.
Converts a date with time or date to the number of the week, starting from a certain fixed point in the past.

## toRelativeDayNum

Converts a date or date with time to the number of the day, starting from a certain fixed point in the past.
Converts a date with time or date to the number of the day, starting from a certain fixed point in the past.

## toRelativeHourNum

Converts a date or date with time to the number of the hour, starting from a certain fixed point in the past.
Converts a date with time or date to the number of the hour, starting from a certain fixed point in the past.

## toRelativeMinuteNum

Converts a date or date with time to the number of the minute, starting from a certain fixed point in the past.
Converts a date with time or date to the number of the minute, starting from a certain fixed point in the past.

## toRelativeSecondNum

Converts a date or date with time to the number of the second, starting from a certain fixed point in the past.
Converts a date with time or date to the number of the second, starting from a certain fixed point in the past.

## toISOYear
@@ -517,154 +517,6 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d
└────────────┴───────────┴───────────┴───────────┘
```

## age

Returns the `unit` component of the difference between `startdate` and `enddate`. The difference is calculated using a precision of 1 second.
E.g. the difference between `2021-12-29` and `2022-01-01` is 3 days for `day` unit, 0 months for `month` unit, 0 years for `year` unit.

**Syntax**

``` sql
age('unit', startdate, enddate, [timezone])
```

**Arguments**

- `unit` — The type of interval for result. [String](../../sql-reference/data-types/string.md).
    Possible values:

    - `second` (possible abbreviations: `ss`, `s`)
    - `minute` (possible abbreviations: `mi`, `n`)
    - `hour` (possible abbreviations: `hh`, `h`)
    - `day` (possible abbreviations: `dd`, `d`)
    - `week` (possible abbreviations: `wk`, `ww`)
    - `month` (possible abbreviations: `mm`, `m`)
    - `quarter` (possible abbreviations: `qq`, `q`)
    - `year` (possible abbreviations: `yyyy`, `yy`)

- `startdate` — The first time value to subtract (the subtrahend). [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `enddate` — The second time value to subtract from (the minuend). [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../../sql-reference/data-types/string.md).

**Returned value**

Difference between `enddate` and `startdate` expressed in `unit`.

Type: [Int](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT age('hour', toDateTime('2018-01-01 22:30:00'), toDateTime('2018-01-02 23:00:00'));
```

Result:

``` text
┌─age('hour', toDateTime('2018-01-01 22:30:00'), toDateTime('2018-01-02 23:00:00'))─┐
│                                                                                 24 │
└────────────────────────────────────────────────────────────────────────────────────┘
```

Query:

``` sql
SELECT
    toDate('2022-01-01') AS e,
    toDate('2021-12-29') AS s,
    age('day', s, e) AS day_age,
    age('month', s, e) AS month__age,
    age('year', s, e) AS year_age;
```

Result:

``` text
┌──────────e─┬──────────s─┬─day_age─┬─month__age─┬─year_age─┐
│ 2022-01-01 │ 2021-12-29 │       3 │          0 │        0 │
└────────────┴────────────┴─────────┴────────────┴──────────┘
```


## date\_diff

Returns the count of the specified `unit` boundaries crossed between the `startdate` and `enddate`.
The difference is calculated using relative units, e.g. the difference between `2021-12-29` and `2022-01-01` is 3 days for day unit (see [toRelativeDayNum](#torelativedaynum)), 1 month for month unit (see [toRelativeMonthNum](#torelativemonthnum)), 1 year for year unit (see [toRelativeYearNum](#torelativeyearnum)).

**Syntax**

``` sql
date_diff('unit', startdate, enddate, [timezone])
```

Aliases: `dateDiff`, `DATE_DIFF`.

**Arguments**

- `unit` — The type of interval for result. [String](../../sql-reference/data-types/string.md).
    Possible values:

    - `second` (possible abbreviations: `ss`, `s`)
    - `minute` (possible abbreviations: `mi`, `n`)
    - `hour` (possible abbreviations: `hh`, `h`)
    - `day` (possible abbreviations: `dd`, `d`)
    - `week` (possible abbreviations: `wk`, `ww`)
    - `month` (possible abbreviations: `mm`, `m`)
    - `quarter` (possible abbreviations: `qq`, `q`)
    - `year` (possible abbreviations: `yyyy`, `yy`)

- `startdate` — The first time value to subtract (the subtrahend). [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `enddate` — The second time value to subtract from (the minuend). [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../../sql-reference/data-types/string.md).

**Returned value**

Difference between `enddate` and `startdate` expressed in `unit`.

Type: [Int](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'));
```

Result:

``` text
┌─dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'))─┐
│                                                                                      25 │
└──────────────────────────────────────────────────────────────────────────────────────────┘
```

Query:

``` sql
SELECT
    toDate('2022-01-01') AS e,
    toDate('2021-12-29') AS s,
    dateDiff('day', s, e) AS day_diff,
    dateDiff('month', s, e) AS month__diff,
    dateDiff('year', s, e) AS year_diff;
```

Result:

``` text
┌──────────e─┬──────────s─┬─day_diff─┬─month__diff─┬─year_diff─┐
│ 2022-01-01 │ 2021-12-29 │        3 │           1 │         1 │
└────────────┴────────────┴──────────┴─────────────┴───────────┘
```

## date\_trunc

Truncates date and time data to the specified part of date.
@ -785,6 +637,80 @@ Result:
└───────────────────────────────────────────────┘
```

## date\_diff

Returns the difference between two dates or dates with time values.
The difference is calculated using relative units. For example, the difference between `2022-01-01` and `2021-12-29` is 3 days for the `day` unit (see [toRelativeDayNum](#torelativedaynum)), 1 month for the `month` unit (see [toRelativeMonthNum](#torelativemonthnum)), and 1 year for the `year` unit (see [toRelativeYearNum](#torelativeyearnum)).

**Syntax**

``` sql
date_diff('unit', startdate, enddate, [timezone])
```

Aliases: `dateDiff`, `DATE_DIFF`.

**Arguments**

- `unit` — The type of interval for result. [String](../../sql-reference/data-types/string.md).
    Possible values:

    - `second`
    - `minute`
    - `hour`
    - `day`
    - `week`
    - `month`
    - `quarter`
    - `year`

- `startdate` — The first time value to subtract (the subtrahend). [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `enddate` — The second time value to subtract from (the minuend). [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, the timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../../sql-reference/data-types/string.md).

**Returned value**

Difference between `enddate` and `startdate` expressed in `unit`.

Type: [Int](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'));
```

Result:

``` text
┌─dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'))─┐
│                                                                                      25 │
└──────────────────────────────────────────────────────────────────────────────────────────┘
```

Query:

``` sql
SELECT
    toDate('2022-01-01') AS e,
    toDate('2021-12-29') AS s,
    dateDiff('day', s, e) AS day_diff,
    dateDiff('month', s, e) AS month_diff,
    dateDiff('year', s, e) AS year_diff;
```

Result:

``` text
┌──────────e─┬──────────s─┬─day_diff─┬─month_diff─┬─year_diff─┐
│ 2022-01-01 │ 2021-12-29 │        3 │          1 │         1 │
└────────────┴────────────┴──────────┴────────────┴───────────┘
```
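When `startdate` and `enddate` might carry different timezones, the optional fourth argument pins both to a single zone (a minimal sketch of the syntax above):

``` sql
SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'), 'UTC');
```

The result here is still 25; the argument mainly matters for intervals that cross daylight-saving transitions.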

## date\_sub

Subtracts the time interval or date interval from the provided date or date with time.
@ -1159,4 +1159,40 @@ If s is empty, the result is 0. If the first character is not an ASCII character

## concatWithSeparator

Returns a concatenation of the strings separated by the given separator. If any of the argument values is `NULL`, the function returns `NULL`.

**Syntax**

``` sql
concatWithSeparator(sep, expr1, expr2, expr3...)
```

**Arguments**

- sep — separator. Const [String](../../sql-reference/data-types/string.md) or [FixedString](../../sql-reference/data-types/fixedstring.md).
- exprN — expression to be concatenated. [String](../../sql-reference/data-types/string.md) or [FixedString](../../sql-reference/data-types/fixedstring.md).

**Returned values**

- The concatenated String.

**Example**

Query:

``` sql
SELECT concatWithSeparator('a', '1', '2', '3', '4')
```

Result:

``` text
┌─concatWithSeparator('a', '1', '2', '3', '4')─┐
│ 1a2a3a4                                      │
└──────────────────────────────────────────────┘
```

## concatWithSeparatorAssumeInjective

Same as `concatWithSeparator`, except that you need to ensure that `concatWithSeparator(sep, expr1, expr2, expr3...) → result` is injective; this property is used for optimization of GROUP BY.

A function is called "injective" if it always returns a different result for different values of arguments. In other words: different arguments never yield an identical result.
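A sketch of the intended use, assuming a hypothetical table `visits` with string columns `domain` and `path` that never contain `'/'` themselves (which is what makes the concatenation injective):

``` sql
SELECT
    concatWithSeparatorAssumeInjective('/', domain, path) AS page,
    count() AS hits
FROM visits
GROUP BY page;
```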
@ -77,8 +77,9 @@ Numeric literal tries to be parsed:

Literal value has the smallest type that the value fits in.
For example, 1 is parsed as `UInt8`, but 256 is parsed as `UInt16`. For more information, see [Data types](../sql-reference/data-types/index.md).
Underscores `_` inside numeric literals are ignored and can be used for better readability.

Examples: `1`, `10_000_000`, `0xffff_ffff`, `18446744073709551615`, `0xDEADBEEF`, `01`, `0.1`, `1e100`, `-1e-100`, `inf`, `nan`.
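A quick way to check both forms (a sketch; requires a server version that already ignores underscores in literals):

``` sql
SELECT 10_000_000 AS decimal_literal, 0xffff_ffff AS hex_literal;
```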

### String

@ -424,23 +424,23 @@ WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64 SELECT toStartOfSecond(d

## toRelativeYearNum {#torelativeyearnum}

Converts a date with time or a date to the number of the year, starting from a certain fixed point in the past.

## toRelativeQuarterNum {#torelativequarternum}

Converts a date with time or a date to the number of the quarter, starting from a certain fixed point in the past.

## toRelativeMonthNum {#torelativemonthnum}

Converts a date with time or a date to the number of the month, starting from a certain fixed point in the past.

## toRelativeWeekNum {#torelativeweeknum}

Converts a date with time or a date to the number of the week, starting from a certain fixed point in the past.

## toRelativeDayNum {#torelativedaynum}

Converts a date with time or a date to the number of the day, starting from a certain fixed point in the past.

## toRelativeHourNum {#torelativehournum}

@ -456,7 +456,7 @@ WITH toDateTime64('2020-01-01 10:20:30.999', 3) AS dt64 SELECT toStartOfSecond(d

## toISOYear {#toisoyear}

Converts a date with time or a date to a UInt16 number containing the ISO year number. The ISO year differs from the ordinary year because, according to [ISO 8601:1988](https://en.wikipedia.org/wiki/ISO_8601), the ISO year does not necessarily start on January 1st.

**Example**

@ -479,7 +479,7 @@ SELECT

## toISOWeek {#toisoweek}

Converts a date with time or a date to a UInt8 number containing the ISO week number.
The start of the ISO year differs from the start of the ordinary year because, according to [ISO 8601:1988](https://en.wikipedia.org/wiki/ISO_8601), the first week of the year is the week with four or more days in that year.

January 1st, 2017 was a Sunday, which means that the first ISO week of 2017 began on Monday, January 2nd; therefore January 1st, 2017 belongs to the last week of 2016.

@ -503,7 +503,7 @@ SELECT
```

## toWeek(date\[, mode\]\[, timezone\]) {#toweek}

Converts a date with time or a date to a UInt8 number containing the week number. The second argument, `mode`, selects whether the week starts on Sunday or Monday and whether the return value should be in the range from 0 to 53 or from 1 to 53. If the `mode` argument is omitted, mode 0 is used.

`toISOWeek()` is equivalent to `toWeek(date,3)`.

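For example, the same date can get different week numbers depending on the mode (a sketch; in mode 0 weeks start on Sunday, in mode 1 they start on Monday):

``` sql
SELECT
    toDate('2016-12-27') AS date,
    toWeek(date) AS week_mode0,
    toWeek(date, 1) AS week_mode1;
```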
@ -569,132 +569,6 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d
└────────────┴───────────┴───────────┴───────────┘
```

## age

Calculates the `unit` component of the difference between `startdate` and `enddate`. The difference is computed with a precision of 1 second.
For example, the difference between `2021-12-29` and `2022-01-01` is 3 days for the `day` unit, 0 months for the `month` unit, and 0 years for the `year` unit.

**Syntax**

``` sql
age('unit', startdate, enddate, [timezone])
```

**Arguments**

- `unit` — The unit of time in which the return value of the function is expressed. [String](../../sql-reference/data-types/string.md).
    Possible values:

    - `second` (possible abbreviations: `ss`, `s`)
    - `minute` (possible abbreviations: `mi`, `n`)
    - `hour` (possible abbreviations: `hh`, `h`)
    - `day` (possible abbreviations: `dd`, `d`)
    - `week` (possible abbreviations: `wk`, `ww`)
    - `month` (possible abbreviations: `mm`, `m`)
    - `quarter` (possible abbreviations: `qq`, `q`)
    - `year` (possible abbreviations: `yyyy`, `yy`)

- `startdate` — The first date or date with time, which is subtracted from `enddate`. [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `enddate` — The second date or date with time, from which `startdate` is subtracted. [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, the timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../../sql-reference/data-types/string.md).

**Returned value**

Difference between `enddate` and `startdate` expressed in `unit`.

Type: [Int](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT age('hour', toDateTime('2018-01-01 22:30:00'), toDateTime('2018-01-02 23:00:00'));
```

Result:

``` text
┌─age('hour', toDateTime('2018-01-01 22:30:00'), toDateTime('2018-01-02 23:00:00'))─┐
│                                                                                 24 │
└─────────────────────────────────────────────────────────────────────────────────────┘
```

Query:

``` sql
SELECT
    toDate('2022-01-01') AS e,
    toDate('2021-12-29') AS s,
    age('day', s, e) AS day_age,
    age('month', s, e) AS month_age,
    age('year', s, e) AS year_age;
```

Result:

``` text
┌──────────e─┬──────────s─┬─day_age─┬─month_age─┬─year_age─┐
│ 2022-01-01 │ 2021-12-29 │       3 │         0 │        0 │
└────────────┴────────────┴─────────┴───────────┴──────────┘
```
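The contrast with `dateDiff` is easiest to see side by side: `age` counts fully elapsed units, while `dateDiff` counts crossed unit boundaries (a minimal sketch):

``` sql
SELECT
    age('month', toDate('2021-12-29'), toDate('2022-01-01')) AS full_months,   -- 0: less than one full month has elapsed
    dateDiff('month', toDate('2021-12-29'), toDate('2022-01-01')) AS crossed;  -- 1: one month boundary was crossed
```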

## date\_diff {#date_diff}

Calculates the number of `unit` boundaries crossed between `startdate` and `enddate`.

**Syntax**

``` sql
date_diff('unit', startdate, enddate, [timezone])
```

Aliases: `dateDiff`, `DATE_DIFF`.

**Arguments**

- `unit` — The unit of time in which the return value of the function is expressed. [String](../../sql-reference/data-types/string.md).
    Possible values:

    - `second` (possible abbreviations: `ss`, `s`)
    - `minute` (possible abbreviations: `mi`, `n`)
    - `hour` (possible abbreviations: `hh`, `h`)
    - `day` (possible abbreviations: `dd`, `d`)
    - `week` (possible abbreviations: `wk`, `ww`)
    - `month` (possible abbreviations: `mm`, `m`)
    - `quarter` (possible abbreviations: `qq`, `q`)
    - `year` (possible abbreviations: `yyyy`, `yy`)

- `startdate` — The first date or date with time, which is subtracted from `enddate`. [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `enddate` — The second date or date with time, from which `startdate` is subtracted. [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, the timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../../sql-reference/data-types/string.md).

**Returned value**

Difference between `enddate` and `startdate` expressed in `unit`.

Type: [Int](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'));
```

Result:

``` text
┌─dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'))─┐
│                                                                                      25 │
└──────────────────────────────────────────────────────────────────────────────────────────┘
```

## date_trunc {#date_trunc}

Truncates away the parts of a date and time that are smaller than the specified part.

@ -815,6 +689,60 @@ SELECT date_add(YEAR, 3, toDate('2018-01-01'));
└───────────────────────────────────────────────┘
```

## date\_diff {#date_diff}

Calculates the difference between two date or date-with-time values.

**Syntax**

``` sql
date_diff('unit', startdate, enddate, [timezone])
```

Aliases: `dateDiff`, `DATE_DIFF`.

**Arguments**

- `unit` — The unit of time in which the return value of the function is expressed. [String](../../sql-reference/data-types/string.md).
    Possible values:

    - `second`
    - `minute`
    - `hour`
    - `day`
    - `week`
    - `month`
    - `quarter`
    - `year`

- `startdate` — The first date or date with time, which is subtracted from `enddate`. [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `enddate` — The second date or date with time, from which `startdate` is subtracted. [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).

- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) (optional). If specified, it is applied to both `startdate` and `enddate`. If not specified, the timezones of `startdate` and `enddate` are used. If they are not the same, the result is unspecified. [String](../../sql-reference/data-types/string.md).

**Returned value**

Difference between `enddate` and `startdate` expressed in `unit`.

Type: [Int](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'));
```

Result:

``` text
┌─dateDiff('hour', toDateTime('2018-01-01 22:00:00'), toDateTime('2018-01-02 23:00:00'))─┐
│                                                                                      25 │
└──────────────────────────────────────────────────────────────────────────────────────────┘
```

## date\_sub {#date_sub}

Subtracts a time interval or date interval from the specified date or date with time.

@ -16,6 +16,8 @@

#include <base/find_symbols.h>

#include <Access/AccessControl.h>

#include "config_version.h"
#include <Common/Exception.h>
#include <Common/formatReadable.h>
@ -258,6 +260,10 @@ try
    if (is_interactive && !config().has("no-warnings"))
        showWarnings();

    /// Set user password complexity rules
    auto & access_control = global_context->getAccessControl();
    access_control.setPasswordComplexityRules(connection->getPasswordComplexityRules());

    if (is_interactive && !delayed_interactive)
    {
        runInteractive();
@ -466,6 +466,30 @@
    <allow_no_password>1</allow_no_password>
    <allow_implicit_no_password>1</allow_implicit_no_password>

    <!-- Complexity requirements for user passwords. -->
    <!-- <password_complexity>
        <rule>
            <pattern>.{12}</pattern>
            <message>be at least 12 characters long</message>
        </rule>
        <rule>
            <pattern>\p{N}</pattern>
            <message>contain at least 1 numeric character</message>
        </rule>
        <rule>
            <pattern>\p{Ll}</pattern>
            <message>contain at least 1 lowercase character</message>
        </rule>
        <rule>
            <pattern>\p{Lu}</pattern>
            <message>contain at least 1 uppercase character</message>
        </rule>
        <rule>
            <pattern>[^\p{L}\p{N}]</pattern>
            <message>contain at least 1 special character</message>
        </rule>
    </password_complexity> -->

    <!-- Policy from the <storage_configuration> for the temporary files.
         If not set <tmp_path> is used, otherwise <tmp_path> is ignored.
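With rules like the ones above uncommented, the server checks new plaintext passwords against every rule; a sketch of the effect (the user name is hypothetical, and the exact error text may differ):

``` sql
-- Expected to be rejected: shorter than 12 characters, so it fails the '.{12}' rule.
CREATE USER alice IDENTIFIED WITH sha256_password BY 'Sh0rt!';

-- Expected to be accepted: satisfies all five example rules.
CREATE USER alice IDENTIFIED WITH sha256_password BY 'L0ng-enough-Passw0rd!';
```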
3
rust/.cargo/config.toml.in
Normal file
@ -0,0 +1,3 @@
[env]
CFLAGS = "@RUST_CFLAGS@"
CXXFLAGS = "@RUST_CXXFLAGS@"
0
rust/BLAKE3/CMakeLists.txt
Executable file → Normal file
@ -1 +1,25 @@
function(configure_rustc)
    # NOTE: this can also be done by overriding rustc, but it is not trivial with rustup.
    set(RUST_CFLAGS "${CMAKE_C_FLAGS}")

    set(CXX_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/llvm-project/libcxx/include")
    set(RUST_CXXFLAGS "${CMAKE_CXX_FLAGS} -isystem ${CXX_INCLUDE_DIR} ")

    if (CMAKE_OSX_SYSROOT)
        set(RUST_CXXFLAGS "${RUST_CXXFLAGS} -isysroot ${CMAKE_OSX_SYSROOT}")
        set(RUST_CFLAGS "${RUST_CFLAGS} -isysroot ${CMAKE_OSX_SYSROOT}")
    elseif(CMAKE_SYSROOT)
        set(RUST_CXXFLAGS "${RUST_CXXFLAGS} --sysroot ${CMAKE_SYSROOT}")
        set(RUST_CFLAGS "${RUST_CFLAGS} --sysroot ${CMAKE_SYSROOT}")
    endif()

    message(STATUS "RUST_CFLAGS: ${RUST_CFLAGS}")
    message(STATUS "RUST_CXXFLAGS: ${RUST_CXXFLAGS}")

    # NOTE: requires RW access for the source dir
    configure_file("${CMAKE_CURRENT_SOURCE_DIR}/.cargo/config.toml.in" "${CMAKE_CURRENT_SOURCE_DIR}/.cargo/config.toml" @ONLY)
endfunction()
configure_rustc()

add_subdirectory (BLAKE3)
add_subdirectory (skim)
2
rust/skim/.cargo/config.toml.in
Normal file
@ -0,0 +1,2 @@
[env]
CXXFLAGS = "@RUST_CXXFLAGS@"

2
rust/skim/.gitignore
vendored
Normal file
@ -0,0 +1,2 @@
build.rs
.cargo/config.toml
62
rust/skim/CMakeLists.txt
Normal file
@ -0,0 +1,62 @@
if (OS_FREEBSD)
    # Right now nix/libc requires fspacectl, which was added only in FreeBSD 14.
    # And since the sysroot has older libraries you will get undefined references for the clickhouse binary.
    #
    # But likely everything should work without this syscall; however, it is not
    # possible right now to gently override library versions for dependencies,
    # and forking rust modules is a little bit too much for this thing.
    #
    # You can take a look at the details in the following issue [1].
    #
    # [1]: https://github.com/rust-lang/cargo/issues/5640
    #
    message(STATUS "skim is disabled for FreeBSD")
    return()
endif()

corrosion_import_crate(MANIFEST_PATH Cargo.toml NO_STD)

set(CXX_INCLUDE_DIR "${ClickHouse_SOURCE_DIR}/contrib/llvm-project/libcxx/include")
# -Wno-dollar-in-identifier-extension: cxx bridge compiles names with '$'
# -Wno-unused-macros: unused CXXBRIDGE1_RUST_STRING
set(CXXBRIDGE_CXXFLAGS "-Wno-dollar-in-identifier-extension -Wno-unused-macros")
set(RUST_CXXFLAGS "${CMAKE_CXX_FLAGS} -isystem ${CXX_INCLUDE_DIR} ${CXXBRIDGE_CXXFLAGS}")
if (CMAKE_OSX_SYSROOT)
    set(RUST_CXXFLAGS "${RUST_CXXFLAGS} -isysroot ${CMAKE_OSX_SYSROOT}")
elseif(CMAKE_SYSROOT)
    set(RUST_CXXFLAGS "${RUST_CXXFLAGS} --sysroot ${CMAKE_SYSROOT}")
endif()
message(STATUS "RUST_CXXFLAGS (for skim): ${RUST_CXXFLAGS}")
# NOTE: requires RW access for the source dir
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/build.rs.in" "${CMAKE_CURRENT_SOURCE_DIR}/build.rs" @ONLY)
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/.cargo/config.toml.in" "${CMAKE_CURRENT_SOURCE_DIR}/.cargo/config.toml" @ONLY)

set (ffi_binding_generated_path
    ${CMAKE_BINARY_DIR}/${CMAKE_BUILD_TYPE}/cargo/build/${Rust_CARGO_TARGET_CACHED}/cxxbridge/_ch_rust_skim_rust/src/lib.rs.cc)
set (ffi_binding_final_path ${CMAKE_CURRENT_BINARY_DIR}/skim-ffi.cc)
message(STATUS "Writing FFI Binding for skim: ${ffi_binding_generated_path} => ${ffi_binding_final_path}")

add_custom_command(OUTPUT ${ffi_binding_final_path}
    COMMAND ${CMAKE_COMMAND} -E copy ${ffi_binding_generated_path} ${ffi_binding_final_path}
    DEPENDS cargo-build__ch_rust_skim_rust)

add_library(_ch_rust_skim_ffi ${ffi_binding_final_path})
if (USE_STATIC_LIBRARIES OR NOT SPLIT_SHARED_LIBRARIES)
    # static
else()
    if (OS_DARWIN)
        target_link_libraries(_ch_rust_skim_ffi PRIVATE -Wl,-undefined,dynamic_lookup)
    else()
        target_link_libraries(_ch_rust_skim_ffi PRIVATE -Wl,--unresolved-symbols=ignore-all)
    endif()
endif()
# cxx bridge compiles such bindings
set_target_properties(_ch_rust_skim_ffi PROPERTIES COMPILE_FLAGS "${CXXBRIDGE_CXXFLAGS}")

add_library(_ch_rust_skim INTERFACE)
target_include_directories(_ch_rust_skim INTERFACE include)
target_link_libraries(_ch_rust_skim INTERFACE
    _ch_rust_skim_rust
    _ch_rust_skim_ffi)

add_library(ch_rust::skim ALIAS _ch_rust_skim)
982
rust/skim/Cargo.lock
generated
Normal file
@ -0,0 +1,982 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3

[[package]]
name = "_ch_rust_skim_rust"
version = "0.1.0"
dependencies = [
 "cxx",
 "cxx-build",
 "skim",
]
19
rust/skim/Cargo.toml
Normal file
@ -0,0 +1,19 @@
[package]
name = "_ch_rust_skim_rust"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
skim = "0.10.2"
cxx = "1.0.83"

[build-dependencies]
cxx-build = "1.0.83"

[lib]
crate-type = ["staticlib"]

[profile.release]
debug = true
8
rust/skim/build.rs.in
Normal file
@ -0,0 +1,8 @@
fn main() {
    let mut build = cxx_build::bridge("src/lib.rs");
    for flag in "@RUST_CXXFLAGS@".split(' ') {
        build.flag(flag);
    }
    build.compile("skim");
    println!("cargo:rerun-if-changed=src/lib.rs");
}
90
rust/skim/include/skim.h
Normal file
90
rust/skim/include/skim.h
Normal file
@ -0,0 +1,90 @@
/// This header was compiled with:
///
///     $ cxxbridge rust/skim/src/lib.rs --header
///
/// For more info [1].
///
/// [1]: https://cxx.rs/build/other.html

#pragma once
#include <array>
#include <cstdint>
#include <string>
#include <vector>

namespace rust {
inline namespace cxxbridge1 {
// #include "rust/cxx.h"

struct unsafe_bitcopy_t;

#ifndef CXXBRIDGE1_RUST_STRING
#define CXXBRIDGE1_RUST_STRING
class String final {
public:
  String() noexcept;
  String(const String &) noexcept;
  String(String &&) noexcept;
  ~String() noexcept;

  String(const std::string &);
  String(const char *);
  String(const char *, std::size_t);
  String(const char16_t *);
  String(const char16_t *, std::size_t);

  static String lossy(const std::string &) noexcept;
  static String lossy(const char *) noexcept;
  static String lossy(const char *, std::size_t) noexcept;
  static String lossy(const char16_t *) noexcept;
  static String lossy(const char16_t *, std::size_t) noexcept;

  String &operator=(const String &) &noexcept;
  String &operator=(String &&) &noexcept;

  explicit operator std::string() const;

  const char *data() const noexcept;
  std::size_t size() const noexcept;
  std::size_t length() const noexcept;
  bool empty() const noexcept;

  const char *c_str() noexcept;

  std::size_t capacity() const noexcept;
  void reserve(size_t new_cap) noexcept;

  using iterator = char *;
  iterator begin() noexcept;
  iterator end() noexcept;

  using const_iterator = const char *;
  const_iterator begin() const noexcept;
  const_iterator end() const noexcept;
  const_iterator cbegin() const noexcept;
  const_iterator cend() const noexcept;

  bool operator==(const String &) const noexcept;
  bool operator!=(const String &) const noexcept;
  bool operator<(const String &) const noexcept;
  bool operator<=(const String &) const noexcept;
  bool operator>(const String &) const noexcept;
  bool operator>=(const String &) const noexcept;

  void swap(String &) noexcept;

  String(unsafe_bitcopy_t, const String &) noexcept;

private:
  struct lossy_t;
  String(lossy_t, const char *, std::size_t) noexcept;
  String(lossy_t, const char16_t *, std::size_t) noexcept;
  friend void swap(String &lhs, String &rhs) noexcept { lhs.swap(rhs); }

  std::array<std::uintptr_t, 3> repr;
};
#endif // CXXBRIDGE1_RUST_STRING
} // namespace cxxbridge1
} // namespace rust

::rust::String skim(::std::vector<::std::string> const &words) noexcept;
49 rust/skim/src/lib.rs Normal file
@@ -0,0 +1,49 @@
use skim::prelude::*;
use cxx::{CxxString, CxxVector};

#[cxx::bridge]
mod ffi {
    extern "Rust" {
        fn skim(words: &CxxVector<CxxString>) -> String;
    }
}

struct Item {
    text: String,
}
impl SkimItem for Item {
    fn text(&self) -> Cow<str> {
        return Cow::Borrowed(&self.text);
    }
}

fn skim(words: &CxxVector<CxxString>) -> String {
    // TODO: configure colors
    let options = SkimOptionsBuilder::default()
        .height(Some("30%"))
        .tac(true)
        .tiebreak(Some("-score".to_string()))
        .build()
        .unwrap();

    let (tx, rx): (SkimItemSender, SkimItemReceiver) = unbounded();
    for word in words {
        tx.send(Arc::new(Item{ text: word.to_string() })).unwrap();
    }
    // so that skim could know when to stop waiting for more items.
    drop(tx);

    let output = Skim::run_with(&options, Some(rx));
    if output.is_none() {
        return "".to_string();
    }
    let output = output.unwrap();
    if output.is_abort {
        return "".to_string();
    }

    if output.selected_items.is_empty() {
        return "".to_string();
    }
    return output.selected_items[0].output().to_string();
}
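For orientation, a minimal sketch of how C++ code could call the bridged function declared in rust/skim/include/skim.h above. The caller below is illustrative and not part of this commit; only the skim() declaration and rust::String's std::string conversion are taken from the generated header.

#include "skim.h"

#include <iostream>
#include <string>
#include <vector>

int main()
{
    /// Candidate lines for the fuzzy finder, e.g. entries from a query history.
    std::vector<std::string> words{"SELECT 1", "SELECT version()", "SHOW TABLES"};

    /// Blocks until the user picks an entry or aborts; returns the selection,
    /// or an empty string, per the Rust implementation above.
    ::rust::String picked = skim(words);

    std::cout << static_cast<std::string>(picked) << '\n';
    return 0;
}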
@@ -27,6 +27,7 @@
#include <boost/algorithm/string/join.hpp>
#include <boost/algorithm/string/split.hpp>
#include <boost/algorithm/string/trim.hpp>
+#include <re2/re2.h>
#include <filesystem>
#include <mutex>

@@ -38,6 +39,8 @@ namespace ErrorCodes
    extern const int UNKNOWN_ELEMENT_IN_CONFIG;
    extern const int UNKNOWN_SETTING;
    extern const int AUTHENTICATION_FAILED;
+    extern const int CANNOT_COMPILE_REGEXP;
+    extern const int BAD_ARGUMENTS;
}

namespace
@@ -140,6 +143,109 @@ private:
};

+
+class AccessControl::PasswordComplexityRules
+{
+public:
+    void setPasswordComplexityRulesFromConfig(const Poco::Util::AbstractConfiguration & config_)
+    {
+        std::lock_guard lock{mutex};
+
+        rules.clear();
+
+        if (config_.has("password_complexity"))
+        {
+            Poco::Util::AbstractConfiguration::Keys password_complexity;
+            config_.keys("password_complexity", password_complexity);
+
+            for (const auto & key : password_complexity)
+            {
+                if (key == "rule" || key.starts_with("rule["))
+                {
+                    String pattern(config_.getString("password_complexity." + key + ".pattern"));
+                    String message(config_.getString("password_complexity." + key + ".message"));
+
+                    auto matcher = std::make_unique<RE2>(pattern, RE2::Quiet);
+                    if (!matcher->ok())
+                        throw Exception(ErrorCodes::CANNOT_COMPILE_REGEXP,
+                            "Password complexity pattern {} cannot be compiled: {}",
+                            pattern, matcher->error());
+
+                    rules.push_back({std::move(matcher), std::move(pattern), std::move(message)});
+                }
+            }
+        }
+    }
+
+    void setPasswordComplexityRules(const std::vector<std::pair<String, String>> & rules_)
+    {
+        Rules new_rules;
+
+        for (const auto & [original_pattern, exception_message] : rules_)
+        {
+            auto matcher = std::make_unique<RE2>(original_pattern, RE2::Quiet);
+            if (!matcher->ok())
+                throw Exception(ErrorCodes::CANNOT_COMPILE_REGEXP,
+                    "Password complexity pattern {} cannot be compiled: {}",
+                    original_pattern, matcher->error());
+
+            new_rules.push_back({std::move(matcher), original_pattern, exception_message});
+        }
+
+        std::lock_guard lock{mutex};
+        rules = std::move(new_rules);
+    }
+
+    void checkPasswordComplexityRules(const String & password_) const
+    {
+        String exception_text;
+        bool failed = false;
+
+        std::lock_guard lock{mutex};
+        for (const auto & rule : rules)
+        {
+            if (!RE2::PartialMatch(password_, *rule.matcher))
+            {
+                failed = true;
+
+                if (!exception_text.empty())
+                    exception_text += ", ";
+
+                exception_text += rule.exception_message;
+            }
+        }
+
+        if (failed)
+            throw Exception(ErrorCodes::BAD_ARGUMENTS, "Invalid password. The password should: {}", exception_text);
+    }
+
+    std::vector<std::pair<String, String>> getPasswordComplexityRules()
+    {
+        std::vector<std::pair<String, String>> result;
+
+        std::lock_guard lock{mutex};
+        result.reserve(rules.size());
+
+        for (const auto & rule : rules)
+            result.push_back({rule.original_pattern, rule.exception_message});
+
+        return result;
+    }
+
+private:
+    struct Rule
+    {
+        std::unique_ptr<RE2> matcher;
+        String original_pattern;
+        String exception_message;
+    };
+
+    using Rules = std::vector<Rule>;
+
+    Rules rules TSA_GUARDED_BY(mutex);
+    mutable std::mutex mutex;
+};
+

AccessControl::AccessControl()
    : MultipleAccessStorage("user directories"),
    context_access_cache(std::make_unique<ContextAccessCache>(*this)),
@@ -149,7 +255,8 @@ AccessControl::AccessControl()
    settings_profiles_cache(std::make_unique<SettingsProfilesCache>(*this)),
    external_authenticators(std::make_unique<ExternalAuthenticators>()),
    custom_settings_prefixes(std::make_unique<CustomSettingsPrefixes>()),
-    changes_notifier(std::make_unique<AccessChangesNotifier>())
+    changes_notifier(std::make_unique<AccessChangesNotifier>()),
+    password_rules(std::make_unique<PasswordComplexityRules>())
{
}

@@ -166,6 +273,7 @@ void AccessControl::setUpFromMainConfig(const Poco::Util::AbstractConfiguration
    setImplicitNoPasswordAllowed(config_.getBool("allow_implicit_no_password", true));
    setNoPasswordAllowed(config_.getBool("allow_no_password", true));
    setPlaintextPasswordAllowed(config_.getBool("allow_plaintext_password", true));
+    setPasswordComplexityRulesFromConfig(config_);

    /// Optional improvements in access control system.
    /// The default values are false because we need to be compatible with earlier access configurations
@@ -543,6 +651,26 @@ bool AccessControl::isPlaintextPasswordAllowed() const
    return allow_plaintext_password;
}

+void AccessControl::setPasswordComplexityRulesFromConfig(const Poco::Util::AbstractConfiguration & config_)
+{
+    password_rules->setPasswordComplexityRulesFromConfig(config_);
+}
+
+void AccessControl::setPasswordComplexityRules(const std::vector<std::pair<String, String>> & rules_)
+{
+    password_rules->setPasswordComplexityRules(rules_);
+}
+
+void AccessControl::checkPasswordComplexityRules(const String & password_) const
+{
+    password_rules->checkPasswordComplexityRules(password_);
+}
+
+std::vector<std::pair<String, String>> AccessControl::getPasswordComplexityRules() const
+{
+    return password_rules->getPasswordComplexityRules();
+}
+

std::shared_ptr<const ContextAccess> AccessControl::getContextAccess(
    const UUID & user_id,
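A minimal standalone sketch of the matching semantics implemented by checkPasswordComplexityRules() above: every rule is an RE2 pattern tested with RE2::PartialMatch, and the messages of all failing rules are joined into one error. The two sample rules below are illustrative, not taken from this commit.

#include <re2/re2.h>

#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main()
{
    /// {pattern, message} pairs, mirroring what the class reads from the
    /// password_complexity rules in the server configuration.
    std::vector<std::pair<std::string, std::string>> rules = {
        {".{12}", "be at least 12 characters long"},
        {"\\p{N}", "contain at least 1 numeric character"},
    };

    std::string password = "short";

    std::string failed;
    for (const auto & [pattern, message] : rules)
        if (!RE2::PartialMatch(password, RE2(pattern)))
            failed += (failed.empty() ? "" : ", ") + message;

    if (!failed.empty())
        std::cout << "Invalid password. The password should: " << failed << '\n';
    return 0;
}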
@@ -147,6 +147,13 @@ public:
    void setPlaintextPasswordAllowed(const bool allow_plaintext_password_);
    bool isPlaintextPasswordAllowed() const;

+    /// Check complexity requirements for plaintext passwords
+
+    void setPasswordComplexityRulesFromConfig(const Poco::Util::AbstractConfiguration & config_);
+    void setPasswordComplexityRules(const std::vector<std::pair<String, String>> & rules_);
+    void checkPasswordComplexityRules(const String & password_) const;
+    std::vector<std::pair<String, String>> getPasswordComplexityRules() const;
+
    /// Enables logic that users without permissive row policies can still read rows using a SELECT query.
    /// For example, if there are two users A, B and a row policy is defined only for A, then
    /// if this setting is true the user B will see all rows, and if this setting is false the user B will see no rows.
@@ -212,6 +219,7 @@ private:
    class ContextAccessCache;
    class CustomSettingsPrefixes;
+    class PasswordComplexityRules;

    std::optional<UUID> insertImpl(const AccessEntityPtr & entity, bool replace_if_exists, bool throw_if_exists) override;
    bool removeImpl(const UUID & id, bool throw_if_not_exists) override;
@@ -225,6 +233,7 @@ private:
    std::unique_ptr<ExternalAuthenticators> external_authenticators;
    std::unique_ptr<CustomSettingsPrefixes> custom_settings_prefixes;
    std::unique_ptr<AccessChangesNotifier> changes_notifier;
+    std::unique_ptr<PasswordComplexityRules> password_rules;
    std::atomic_bool allow_plaintext_password = true;
    std::atomic_bool allow_no_password = true;
    std::atomic_bool allow_implicit_no_password = true;
@@ -11,6 +11,11 @@
namespace DB
{

+namespace ErrorCodes
+{
+    extern const int UNSUPPORTED_METHOD;
+}
+
class IFunctionOverloadResolver;
using FunctionOverloadResolverPtr = std::shared_ptr<IFunctionOverloadResolver>;

@@ -211,7 +216,9 @@ public:
    DataTypePtr getResultType() const override
    {
        if (!function)
-            return {};
+            throw Exception(ErrorCodes::UNSUPPORTED_METHOD,
+                "Function node with name '{}' is not resolved",
+                function_name);
        return function->getResultType();
    }

197 src/Analyzer/Passes/IfTransformStringsToEnumPass.cpp Normal file
@@ -0,0 +1,197 @@
#include <Analyzer/Passes/IfTransformStringsToEnumPass.h>

#include <Analyzer/ConstantNode.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/IQueryTreeNode.h>
#include <Analyzer/InDepthQueryTreeVisitor.h>

#include <DataTypes/DataTypeArray.h>
#include <DataTypes/DataTypeEnum.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/IDataType.h>

#include <Functions/FunctionFactory.h>

namespace DB
{

namespace
{

/// We place strings in ascending order here under the assumption it could speed up String to Enum conversion.
template <typename EnumType>
auto getDataEnumType(const std::set<std::string> & string_values)
{
    using EnumValues = typename EnumType::Values;
    EnumValues enum_values;
    enum_values.reserve(string_values.size());

    size_t number = 1;
    for (const auto & value : string_values)
        enum_values.emplace_back(value, number++);

    return std::make_shared<EnumType>(std::move(enum_values));
}

DataTypePtr getEnumType(const std::set<std::string> & string_values)
{
    if (string_values.size() >= 255)
        return getDataEnumType<DataTypeEnum16>(string_values);
    else
        return getDataEnumType<DataTypeEnum8>(string_values);
}

QueryTreeNodePtr createCastFunction(QueryTreeNodePtr from, DataTypePtr result_type, ContextPtr context)
{
    auto enum_literal = std::make_shared<ConstantValue>(result_type->getName(), std::make_shared<DataTypeString>());
    auto enum_literal_node = std::make_shared<ConstantNode>(std::move(enum_literal));

    auto cast_function = FunctionFactory::instance().get("_CAST", std::move(context));
    QueryTreeNodes arguments{std::move(from), std::move(enum_literal_node)};

    auto function_node = std::make_shared<FunctionNode>("_CAST");
    function_node->resolveAsFunction(std::move(cast_function), std::move(result_type));
    function_node->getArguments().getNodes() = std::move(arguments);

    return function_node;
}

/// if(arg1, arg2, arg3) will be transformed to if(arg1, _CAST(arg2, Enum...), _CAST(arg3, Enum...))
/// where Enum is generated based on the possible values stored in string_values
void changeIfArguments(
    QueryTreeNodePtr & first, QueryTreeNodePtr & second, const std::set<std::string> & string_values, const ContextPtr & context)
{
    auto result_type = getEnumType(string_values);

    first = createCastFunction(first, result_type, context);
    second = createCastFunction(second, result_type, context);
}

/// transform(value, array_from, array_to, default_value) will be transformed to transform(value, array_from, _CAST(array_to, Array(Enum...)), _CAST(default_value, Enum...))
/// where Enum is generated based on the possible values stored in string_values
void changeTransformArguments(
    QueryTreeNodePtr & array_to,
    QueryTreeNodePtr & default_value,
    const std::set<std::string> & string_values,
    const ContextPtr & context)
{
    auto result_type = getEnumType(string_values);

    array_to = createCastFunction(array_to, std::make_shared<DataTypeArray>(result_type), context);
    default_value = createCastFunction(default_value, std::move(result_type), context);
}

void wrapIntoToString(FunctionNode & function_node, QueryTreeNodePtr arg, ContextPtr context)
{
    assert(isString(function_node.getResultType()));

    auto to_string_function = FunctionFactory::instance().get("toString", std::move(context));
    QueryTreeNodes arguments{std::move(arg)};

    function_node.resolveAsFunction(std::move(to_string_function), std::make_shared<DataTypeString>());
    function_node.getArguments().getNodes() = std::move(arguments);
}

class ConvertStringsToEnumVisitor : public InDepthQueryTreeVisitor<ConvertStringsToEnumVisitor>
{
public:
    explicit ConvertStringsToEnumVisitor(ContextPtr context_)
        : context(std::move(context_))
    {
    }

    void visitImpl(QueryTreeNodePtr & node)
    {
        auto * function_node = node->as<FunctionNode>();

        if (!function_node)
            return;

        /// to preserve return type (String) of the current function_node, we wrap the newly
        /// generated function nodes into toString

        std::string_view function_name = function_node->getFunctionName();
        if (function_name == "if")
        {
            if (function_node->getArguments().getNodes().size() != 3)
                return;

            auto modified_if_node = function_node->clone();
            auto & argument_nodes = modified_if_node->as<FunctionNode>()->getArguments().getNodes();

            const auto * first_literal = argument_nodes[1]->as<ConstantNode>();
            const auto * second_literal = argument_nodes[2]->as<ConstantNode>();

            if (!first_literal || !second_literal)
                return;

            if (!isString(first_literal->getResultType()) || !isString(second_literal->getResultType()))
                return;

            std::set<std::string> string_values;
            string_values.insert(first_literal->getValue().get<std::string>());
            string_values.insert(second_literal->getValue().get<std::string>());

            changeIfArguments(argument_nodes[1], argument_nodes[2], string_values, context);
            wrapIntoToString(*function_node, std::move(modified_if_node), context);
            return;
        }

        if (function_name == "transform")
        {
            if (function_node->getArguments().getNodes().size() != 4)
                return;

            auto modified_transform_node = function_node->clone();
            auto & argument_nodes = modified_transform_node->as<FunctionNode>()->getArguments().getNodes();

            if (!isString(function_node->getResultType()))
                return;

            const auto * literal_to = argument_nodes[2]->as<ConstantNode>();
            const auto * literal_default = argument_nodes[3]->as<ConstantNode>();

            if (!literal_to || !literal_default)
                return;

            if (!isArray(literal_to->getResultType()) || !isString(literal_default->getResultType()))
                return;

            auto array_to = literal_to->getValue().get<Array>();

            if (array_to.empty())
                return;

            if (!std::all_of(
                    array_to.begin(),
                    array_to.end(),
                    [](const auto & field) { return field.getType() == Field::Types::Which::String; }))
                return;

            /// collect possible string values
            std::set<std::string> string_values;

            for (const auto & value : array_to)
                string_values.insert(value.get<std::string>());

            string_values.insert(literal_default->getValue().get<std::string>());

            changeTransformArguments(argument_nodes[2], argument_nodes[3], string_values, context);
            wrapIntoToString(*function_node, std::move(modified_transform_node), context);
            return;
        }
    }

private:
    ContextPtr context;
};

}

void IfTransformStringsToEnumPass::run(QueryTreeNodePtr query, ContextPtr context)
{
    ConvertStringsToEnumVisitor visitor(context);
    visitor.visit(query);
}

}
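The enum sizing rule in getEnumType() above, reduced to a standalone sketch. The threshold mirrors the code as written: values are numbered from 1 in ascending order, and 255 or more distinct strings push the pass from Enum8 to Enum16.

#include <iostream>
#include <set>
#include <string>

int main()
{
    std::set<std::string> string_values{"a", "b", "c"};

    /// Same check as getEnumType(): small sets fit into Enum8,
    /// larger ones fall back to Enum16.
    const char * enum_type = string_values.size() >= 255 ? "Enum16" : "Enum8";

    size_t number = 1;
    for (const auto & value : string_values)
        std::cout << value << " = " << number++ << '\n'; /// a = 1, b = 2, c = 3

    std::cout << enum_type << '\n'; /// Enum8
    return 0;
}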
39 src/Analyzer/Passes/IfTransformStringsToEnumPass.h Normal file
@@ -0,0 +1,39 @@
#pragma once

#include <Analyzer/IQueryTreePass.h>

namespace DB
{

/**
 * This pass replaces string-type arguments in If and Transform to enum.
 *
 * E.g.
 * -------------------------------
 * SELECT if(number > 5, 'a', 'b')
 * FROM system.numbers;
 *
 * will be transformed into
 *
 * SELECT if(number > 5, _CAST('a', 'Enum8(\'a\' = 1, \'b\' = 2)'), _CAST('b', 'Enum8(\'a\' = 1, \'b\' = 2)'))
 * FROM system.numbers;
 * -------------------------------
 * SELECT transform(number, [2, 4], ['a', 'b'], 'c') FROM system.numbers;
 *
 * will be transformed into
 *
 * SELECT transform(number, [2, 4], _CAST(['a', 'b'], 'Array(Enum8(\'a\' = 1, \'b\' = 2, \'c\' = 3)'), _CAST('c', 'Enum8(\'a\' = 1, \'b\' = 2, \'c\' = 3)'))
 * FROM system.numbers;
 * -------------------------------
 */
class IfTransformStringsToEnumPass final : public IQueryTreePass
{
public:
    String getName() override { return "IfTransformStringsToEnumPass"; }

    String getDescription() override { return "Replaces string-type arguments in If and Transform to enum"; }

    void run(QueryTreeNodePtr query_tree_node, ContextPtr context) override;
};

}
@@ -3983,7 +3983,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
    const auto * constant_node = parameter_node->as<ConstantNode>();
    if (!constant_node)
        throw Exception(ErrorCodes::BAD_ARGUMENTS,
-            "Parameter for function {} expected to have constant value. Actual {}. In scope {}",
+            "Parameter for function '{}' expected to have constant value. Actual {}. In scope {}",
            function_name,
            parameter_node->formatASTForErrorMessage(),
            scope.scope_node->formatASTForErrorMessage());
@@ -4079,7 +4079,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
    {
        auto & function_in_arguments_nodes = function_node.getArguments().getNodes();
        if (function_in_arguments_nodes.size() != 2)
-            throw Exception(ErrorCodes::BAD_ARGUMENTS, "Function {} expects 2 arguments", function_name);
+            throw Exception(ErrorCodes::BAD_ARGUMENTS, "Function '{}' expects 2 arguments", function_name);

        auto & in_second_argument = function_in_arguments_nodes[1];
        auto * table_node = in_second_argument->as<TableNode>();
@@ -4169,8 +4169,8 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi

    if (!argument_column.type)
        throw Exception(ErrorCodes::LOGICAL_ERROR,
-            "Function {} argument is not resolved. In scope {}",
-            function_node.getFunctionName(),
+            "Function '{}' argument is not resolved. In scope {}",
+            function_name,
            scope.scope_node->formatASTForErrorMessage());

    const auto * constant_node = function_argument->as<ConstantNode>();
@@ -4220,7 +4220,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
    auto * lambda_expression = lambda_expression_untyped->as<LambdaNode>();
    if (!lambda_expression)
        throw Exception(ErrorCodes::LOGICAL_ERROR,
-            "Function identifier {} must be resolved as lambda. Actual {}. In scope {}",
+            "Function identifier '{}' must be resolved as lambda. Actual {}. In scope {}",
            function_node.getFunctionName(),
            lambda_expression_untyped->formatASTForErrorMessage(),
            scope.scope_node->formatASTForErrorMessage());
@@ -4253,7 +4253,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
    const auto * tuple_data_type = typeid_cast<const DataTypeTuple *>(result_type.get());
    if (!tuple_data_type)
        throw Exception(ErrorCodes::UNSUPPORTED_METHOD,
-            "Function untuple argument must be have compound type. Actual type {}. In scope {}",
+            "Function 'untuple' argument must have compound type. Actual type {}. In scope {}",
            result_type->getName(),
            scope.scope_node->formatASTForErrorMessage());

@@ -4311,7 +4311,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
    {
        if (!AggregateFunctionFactory::instance().isAggregateFunctionName(function_name))
        {
-            std::string error_message = fmt::format("Aggregate function with name {} does not exists. In scope {}",
+            std::string error_message = fmt::format("Aggregate function with name '{}' does not exists. In scope {}",
                function_name,
                scope.scope_node->formatASTForErrorMessage());

@@ -4319,6 +4319,11 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
            throw Exception(ErrorCodes::UNKNOWN_AGGREGATE_FUNCTION, error_message);
        }

+        if (!function_lambda_arguments_indexes.empty())
+            throw Exception(ErrorCodes::UNSUPPORTED_METHOD,
+                "Window function '{}' does not support lambda arguments",
+                function_name);
+
        AggregateFunctionProperties properties;
        auto aggregate_function = AggregateFunctionFactory::instance().get(function_name, argument_types, parameters, properties);

@@ -4368,12 +4373,17 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
        auto hints = name_prompter.getHints(function_name, possible_function_names);

        throw Exception(ErrorCodes::UNKNOWN_FUNCTION,
-            "Function with name {} does not exists. In scope {}{}",
+            "Function with name '{}' does not exists. In scope {}{}",
            function_name,
            scope.scope_node->formatASTForErrorMessage(),
            getHintsErrorMessageSuffix(hints));
    }

+    if (!function_lambda_arguments_indexes.empty())
+        throw Exception(ErrorCodes::UNSUPPORTED_METHOD,
+            "Aggregate function '{}' does not support lambda arguments",
+            function_name);
+
    AggregateFunctionProperties properties;
    auto aggregate_function = AggregateFunctionFactory::instance().get(function_name, argument_types, parameters, properties);
    function_node.resolveAsAggregateFunction(aggregate_function);
@@ -4404,7 +4414,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
    const auto * function_data_type = typeid_cast<const DataTypeFunction *>(argument_types[function_lambda_argument_index].get());
    if (!function_data_type)
        throw Exception(ErrorCodes::LOGICAL_ERROR,
-            "Function {} expected function data type for lambda argument with index {}. Actual {}. In scope {}",
+            "Function '{}' expected function data type for lambda argument with index {}. Actual {}. In scope {}",
            function_name,
            function_lambda_argument_index,
            argument_types[function_lambda_argument_index]->getName(),
@@ -4414,7 +4424,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
    size_t function_data_type_arguments_size = function_data_type_argument_types.size();
    if (function_data_type_arguments_size != lambda_arguments_size)
        throw Exception(ErrorCodes::LOGICAL_ERROR,
-            "Function {} function data type for lambda argument with index {} arguments size mismatch. Actual {}. Expected {}. In scope {}",
+            "Function '{}' function data type for lambda argument with index {} arguments size mismatch. Actual {}. Expected {}. In scope {}",
            function_name,
            function_data_type_arguments_size,
            lambda_arguments_size,
@@ -14,6 +14,7 @@
#include <Analyzer/Passes/UniqInjectiveFunctionsEliminationPass.h>
#include <Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.h>
#include <Analyzer/Passes/FuseFunctionsPass.h>
+#include <Analyzer/Passes/IfTransformStringsToEnumPass.h>

#include <IO/WriteHelpers.h>
#include <IO/Operators.h>
@@ -92,7 +93,6 @@ public:
 * TODO: Support setting optimize_duplicate_order_by_and_distinct.
 * TODO: Support setting optimize_redundant_functions_in_order_by.
 * TODO: Support setting optimize_monotonous_functions_in_order_by.
-* TODO: Support setting optimize_if_transform_strings_to_enum.
 * TODO: Support settings.optimize_or_like_chain.
 * TODO: Add optimizations based on function semantics. Example: SELECT * FROM test_table WHERE id != id. (id is not nullable column).
 */
@@ -208,6 +208,9 @@ void addQueryTreePasses(QueryTreePassManager & manager)

    if (settings.optimize_syntax_fuse_functions)
        manager.addPass(std::make_unique<FuseFunctionsPass>());
+
+    if (settings.optimize_if_transform_strings_to_enum)
+        manager.addPass(std::make_unique<IfTransformStringsToEnumPass>());
}

}
@@ -29,6 +29,7 @@ public:
    virtual UInt64 getFileSize(const String & file_name) = 0;
    virtual bool fileContentsEqual(const String & file_name, const String & expected_file_contents) = 0;
    virtual std::unique_ptr<WriteBuffer> writeFile(const String & file_name) = 0;
+    virtual void removeFile(const String & file_name) = 0;
    virtual void removeFiles(const Strings & file_names) = 0;
    virtual DataSourceDescription getDataSourceDescription() const = 0;
    virtual void copyFileThroughBuffer(std::unique_ptr<SeekableReadBuffer> && source, const String & file_name);

@@ -75,6 +75,13 @@ std::unique_ptr<WriteBuffer> BackupWriterDisk::writeFile(const String & file_nam
    return disk->writeFile(file_path);
}

+void BackupWriterDisk::removeFile(const String & file_name)
+{
+    disk->removeFileIfExists(path / file_name);
+    if (disk->isDirectory(path) && disk->isDirectoryEmpty(path))
+        disk->removeDirectory(path);
+}
+
void BackupWriterDisk::removeFiles(const Strings & file_names)
{
    for (const auto & file_name : file_names)

@@ -34,6 +34,7 @@ public:
    UInt64 getFileSize(const String & file_name) override;
    bool fileContentsEqual(const String & file_name, const String & expected_file_contents) override;
    std::unique_ptr<WriteBuffer> writeFile(const String & file_name) override;
+    void removeFile(const String & file_name) override;
    void removeFiles(const Strings & file_names) override;
    DataSourceDescription getDataSourceDescription() const override;

@@ -72,6 +72,13 @@ std::unique_ptr<WriteBuffer> BackupWriterFile::writeFile(const String & file_nam
    return std::make_unique<WriteBufferFromFile>(file_path);
}

+void BackupWriterFile::removeFile(const String & file_name)
+{
+    fs::remove(path / file_name);
+    if (fs::is_directory(path) && fs::is_empty(path))
+        fs::remove(path);
+}
+
void BackupWriterFile::removeFiles(const Strings & file_names)
{
    for (const auto & file_name : file_names)

@@ -31,6 +31,7 @@ public:
    UInt64 getFileSize(const String & file_name) override;
    bool fileContentsEqual(const String & file_name, const String & expected_file_contents) override;
    std::unique_ptr<WriteBuffer> writeFile(const String & file_name) override;
+    void removeFile(const String & file_name) override;
    void removeFiles(const Strings & file_names) override;
    DataSourceDescription getDataSourceDescription() const override;
    bool supportNativeCopy(DataSourceDescription data_source_description) const override;

@@ -372,7 +372,48 @@ std::unique_ptr<WriteBuffer> BackupWriterS3::writeFile(const String & file_name)
        threadPoolCallbackRunner<void>(IOThreadPool::get(), "BackupWriterS3"));
}

+void BackupWriterS3::removeFile(const String & file_name)
+{
+    Aws::S3::Model::DeleteObjectRequest request;
+    request.SetBucket(s3_uri.bucket);
+    request.SetKey(fs::path(s3_uri.key) / file_name);
+    auto outcome = client->DeleteObject(request);
+    if (!outcome.IsSuccess())
+        throw Exception(outcome.GetError().GetMessage(), ErrorCodes::S3_ERROR);
+}
+
+void BackupWriterS3::removeFiles(const Strings & file_names)
+{
+    try
+    {
+        if (!supports_batch_delete.has_value() || supports_batch_delete.value() == true)
+        {
+            removeFilesBatch(file_names);
+            supports_batch_delete = true;
+        }
+        else
+        {
+            for (const auto & file_name : file_names)
+                removeFile(file_name);
+        }
+    }
+    catch (const Exception &)
+    {
+        if (!supports_batch_delete.has_value())
+        {
+            supports_batch_delete = false;
+            LOG_TRACE(log, "DeleteObjects is not supported. Retrying with plain DeleteObject.");
+
+            for (const auto & file_name : file_names)
+                removeFile(file_name);
+        }
+        else
+            throw;
+    }
+}
+
+void BackupWriterS3::removeFilesBatch(const Strings & file_names)
+{
    /// One call of DeleteObjects() cannot remove more than 1000 keys.
    size_t chunk_size_limit = 1000;

@@ -54,6 +54,7 @@ public:
    UInt64 getFileSize(const String & file_name) override;
    bool fileContentsEqual(const String & file_name, const String & expected_file_contents) override;
    std::unique_ptr<WriteBuffer> writeFile(const String & file_name) override;
+    void removeFile(const String & file_name) override;
    void removeFiles(const Strings & file_names) override;

    DataSourceDescription getDataSourceDescription() const override;
@@ -79,11 +80,14 @@ private:
        const Aws::S3::Model::HeadObjectResult & head,
        const std::optional<ObjectAttributes> & metadata = std::nullopt) const;

+    void removeFilesBatch(const Strings & file_names);
+
    S3::URI s3_uri;
    std::shared_ptr<Aws::S3::S3Client> client;
    ReadSettings read_settings;
    S3Settings::RequestSettings request_settings;
    Poco::Logger * log;
+    std::optional<bool> supports_batch_delete;
};

}

@@ -506,7 +506,7 @@ void BackupImpl::removeLockFile()
        return; /// Internal backup must not remove the lock file (it's still used by the initiator).

    if (checkLockFile(false))
-        writer->removeFiles({lock_file_name});
+        writer->removeFile(lock_file_name);
}

Strings BackupImpl::listFiles(const String & directory, bool recursive) const
@@ -22,6 +22,7 @@
#include <Core/Block.h>
#include <Core/Protocol.h>
#include <Formats/FormatFactory.h>
+#include <Access/AccessControl.h>

#include "config_version.h"

@@ -43,6 +44,7 @@
#include <Parsers/ASTInsertQuery.h>
#include <Parsers/ASTCreateQuery.h>
#include <Parsers/ASTCreateFunctionQuery.h>
+#include <Parsers/Access/ASTCreateUserQuery.h>
#include <Parsers/ASTDropQuery.h>
#include <Parsers/ASTSetQuery.h>
#include <Parsers/ASTUseQuery.h>
@@ -1562,6 +1564,15 @@ void ClientBase::processParsedSingleQuery(const String & full_query, const Strin
        updateLoggerLevel(logs_level_field->safeGet<String>());
    }

+    if (const auto * create_user_query = parsed_query->as<ASTCreateUserQuery>())
+    {
+        if (!create_user_query->attach && create_user_query->temporary_password_for_checks)
+        {
+            global_context->getAccessControl().checkPasswordComplexityRules(create_user_query->temporary_password_for_checks.value());
+            create_user_query->temporary_password_for_checks.reset();
+        }
+    }
+
    processed_rows = 0;
    written_first_block = false;
    progress_indication.resetProgress();

@@ -309,6 +309,21 @@ void Connection::receiveHello()
            readVarUInt(server_version_patch, *in);
        else
            server_version_patch = server_revision;
+
+        if (server_revision >= DBMS_MIN_PROTOCOL_VERSION_WITH_PASSWORD_COMPLEXITY_RULES)
+        {
+            UInt64 rules_size;
+            readVarUInt(rules_size, *in);
+            password_complexity_rules.reserve(rules_size);
+
+            for (size_t i = 0; i < rules_size; ++i)
+            {
+                String original_pattern, exception_message;
+                readStringBinary(original_pattern, *in);
+                readStringBinary(exception_message, *in);
+                password_complexity_rules.push_back({std::move(original_pattern), std::move(exception_message)});
+            }
+        }
    }
    else if (packet_type == Protocol::Server::Exception)
        receiveException()->rethrow();
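The hello-packet framing read above, mirrored from the sending side as a hedged sketch. The server-side counterpart is not part of this excerpt; the hypothetical helper below only illustrates the same layout, a varint rule count followed by each {pattern, message} pair as two length-prefixed strings, using ClickHouse's WriteHelpers-style functions.

#include <IO/VarInt.h>
#include <IO/WriteHelpers.h>

/// Hypothetical mirror of the loop in Connection::receiveHello():
/// write the rule count as a varint, then each pair as two
/// length-prefixed binary strings.
void sendPasswordComplexityRules(DB::WriteBuffer & out, const std::vector<std::pair<DB::String, DB::String>> & rules)
{
    DB::writeVarUInt(rules.size(), out);
    for (const auto & [original_pattern, exception_message] : rules)
    {
        DB::writeStringBinary(original_pattern, out);
        DB::writeStringBinary(exception_message, out);
    }
}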
@@ -93,6 +93,8 @@ public:

    Protocol::Compression getCompression() const { return compression; }

+    std::vector<std::pair<String, String>> getPasswordComplexityRules() const override { return password_complexity_rules; }
+
    void sendQuery(
        const ConnectionTimeouts & timeouts,
        const String & query,
@@ -207,6 +209,8 @@ private:
      */
    ThrottlerPtr throttler;

+    std::vector<std::pair<String, String>> password_complexity_rules;
+
    /// From where to read query execution result.
    std::shared_ptr<ReadBuffer> maybe_compressed_in;
    std::unique_ptr<NativeReader> block_in;

@@ -82,6 +82,8 @@ public:

    virtual const String & getDescription() const = 0;

+    virtual std::vector<std::pair<String, String>> getPasswordComplexityRules() const = 0;
+
    /// If last flag is true, you need to call sendExternalTablesData after.
    virtual void sendQuery(
        const ConnectionTimeouts & timeouts,

@@ -91,6 +91,8 @@ public:

    const String & getDescription() const override { return description; }

+    std::vector<std::pair<String, String>> getPasswordComplexityRules() const override { return {}; }
+
    void sendQuery(
        const ConnectionTimeouts & timeouts,
        const String & query,
@@ -1204,11 +1204,6 @@ public:
        return res;
    }

-    template <typename DateOrTime>
-    inline DateTimeComponents toDateTimeComponents(DateOrTime v) const
-    {
-        return toDateTimeComponents(lut[toLUTIndex(v)].date);
-    }

    inline UInt64 toNumYYYYMMDDhhmmss(Time t) const
    {
@@ -22,17 +22,29 @@ struct StringKey24
inline StringRef ALWAYS_INLINE toStringRef(const StringKey8 & n)
{
    assert(n != 0);
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+    return {reinterpret_cast<const char *>(&n), 8ul - (std::countr_zero(n) >> 3)};
+#else
    return {reinterpret_cast<const char *>(&n), 8ul - (std::countl_zero(n) >> 3)};
+#endif
}
inline StringRef ALWAYS_INLINE toStringRef(const StringKey16 & n)
{
    assert(n.items[1] != 0);
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+    return {reinterpret_cast<const char *>(&n), 16ul - (std::countr_zero(n.items[1]) >> 3)};
+#else
    return {reinterpret_cast<const char *>(&n), 16ul - (std::countl_zero(n.items[1]) >> 3)};
+#endif
}
inline StringRef ALWAYS_INLINE toStringRef(const StringKey24 & n)
{
    assert(n.c != 0);
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+    return {reinterpret_cast<const char *>(&n), 24ul - (std::countr_zero(n.c) >> 3)};
+#else
    return {reinterpret_cast<const char *>(&n), 24ul - (std::countl_zero(n.c) >> 3)};
+#endif
}

struct StringHashTableHash
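A standalone sketch of why countl_zero recovers the key length on a little-endian machine: the short string occupies the low bytes of the integer, so the unused padding ends up in the high bits. Big-endian flips this, which is what the countr_zero branch added above handles.

#include <bit>
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    /// "ab" padded with zero bytes, packed into a StringKey8-like UInt64.
    const char data[8] = {'a', 'b', 0, 0, 0, 0, 0, 0};
    std::uint64_t n;
    std::memcpy(&n, data, 8);

    /// On little-endian, the number of leading zero bytes of the value
    /// is 8 minus the string length.
    std::size_t len = 8ul - (std::countl_zero(n) >> 3);
    std::cout << len << '\n'; /// prints 2 on little-endian
    return 0;
}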
@@ -238,7 +250,6 @@ public:
    // 2. Use switch case extension to generate fast dispatching table
    // 3. Funcs are named callables that can be force_inlined
    //
-    // NOTE: It relies on Little Endianness
    //
    // NOTE: It requires padded to 8 bytes keys (IOW you cannot pass
    // std::string here, but you can pass i.e. ColumnString::getDataAt()),
@@ -280,13 +291,19 @@ public:
            if ((reinterpret_cast<uintptr_t>(p) & 2048) == 0)
            {
                memcpy(&n[0], p, 8);
-                n[0] &= -1ULL >> s;
+                if constexpr (std::endian::native == std::endian::little)
+                    n[0] &= -1ULL >> s;
+                else
+                    n[0] &= -1ULL << s;
            }
            else
            {
                const char * lp = x.data + x.size - 8;
                memcpy(&n[0], lp, 8);
-                n[0] >>= s;
+                if constexpr (std::endian::native == std::endian::little)
+                    n[0] >>= s;
+                else
+                    n[0] <<= s;
            }
            keyHolderDiscardKey(key_holder);
            return func(self.m1, k8, hash(k8));
@@ -296,7 +313,10 @@ public:
            memcpy(&n[0], p, 8);
            const char * lp = x.data + x.size - 8;
            memcpy(&n[1], lp, 8);
-            n[1] >>= s;
+            if constexpr (std::endian::native == std::endian::little)
+                n[1] >>= s;
+            else
+                n[1] <<= s;
            keyHolderDiscardKey(key_holder);
            return func(self.m2, k16, hash(k16));
        }
@@ -305,7 +325,10 @@ public:
            memcpy(&n[0], p, 16);
            const char * lp = x.data + x.size - 8;
            memcpy(&n[2], lp, 8);
-            n[2] >>= s;
+            if constexpr (std::endian::native == std::endian::little)
+                n[2] >>= s;
+            else
+                n[2] <<= s;
            keyHolderDiscardKey(key_holder);
            return func(self.m3, k24, hash(k24));
        }
@@ -437,7 +437,7 @@ public:
            this->reserveForNextSize(std::forward<TAllocatorParams>(allocator_params)...);

        new (t_end()) T(std::forward<U>(x));
-        this->c_end += this->byte_size(1);
+        this->c_end += sizeof(T);
    }

    /** This method doesn't allow to pass parameters for Allocator,
@@ -450,12 +450,12 @@ public:
            this->reserveForNextSize();

        new (t_end()) T(std::forward<Args>(args)...);
-        this->c_end += this->byte_size(1);
+        this->c_end += sizeof(T);
    }

    void pop_back() /// NOLINT
    {
-        this->c_end -= this->byte_size(1);
+        this->c_end -= sizeof(T);
    }

    /// Do not insert into the array a piece of itself. Because with the resize, the iterators on themselves can be invalidated.
@@ -156,6 +156,20 @@ inline bool isValidIdentifier(std::string_view str)
        && !(str.size() == strlen("null") && 0 == strncasecmp(str.data(), "null", strlen("null")));
}

+
+inline bool isNumberSeparator(bool is_start_of_block, bool is_hex, const char * pos, const char * end)
+{
+    if (*pos != '_')
+        return false;
+    if (is_start_of_block && *pos == '_')
+        return false; // e.g. _123, 12e_3
+    if (pos + 1 < end && !(is_hex ? isHexDigit(pos[1]) : isNumericASCII(pos[1])))
+        return false; // e.g. 1__2, 1_., 1_e, 1_p, 1_;
+    if (pos + 1 == end)
+        return false; // e.g. 12_
+    return true;
+}
+
/// Works assuming isAlphaASCII.
inline char toLowerIfAlphaASCII(char c)
{
@@ -52,4 +52,5 @@
#cmakedefine01 USE_ODBC
#cmakedefine01 USE_BORINGSSL
#cmakedefine01 USE_BLAKE3
+#cmakedefine01 USE_SKIM
#cmakedefine01 USE_OPENSSL_INTREE

@@ -52,7 +52,7 @@
/// NOTE: DBMS_TCP_PROTOCOL_VERSION has nothing common with VERSION_REVISION,
/// later is just a number for server version (one number instead of commit SHA)
/// for simplicity (sometimes it may be more convenient in some use cases).
-#define DBMS_TCP_PROTOCOL_VERSION 54460
+#define DBMS_TCP_PROTOCOL_VERSION 54461

#define DBMS_MIN_PROTOCOL_VERSION_WITH_INITIAL_QUERY_START_TIME 54449

@@ -68,3 +68,5 @@

/// The server will send query elapsed run time in the Progress packet.
#define DBMS_MIN_PROTOCOL_VERSION_WITH_SERVER_QUERY_TIME_IN_PROGRESS 54460
+
+#define DBMS_MIN_PROTOCOL_VERSION_WITH_PASSWORD_COMPLEXITY_RULES 54461
@@ -456,7 +456,8 @@ void buildConfigurationFromFunctionWithKeyValueArguments(
    /// It's not possible to have a function in a dictionary definition since 22.10,
    /// because query must be normalized on dictionary creation. It's possible only when we load old metadata.
    /// For debug builds allow it only during server startup to avoid crash in BC check in Stress Tests.
-    assert(!Context::getGlobalContextInstance()->isServerCompletelyStarted());
+    assert(Context::getGlobalContextInstance()->getApplicationType() != Context::ApplicationType::SERVER
+        || !Context::getGlobalContextInstance()->isServerCompletelyStarted());
    auto builder = FunctionFactory::instance().tryGet(func->name, context);
    auto function = builder->build({});
    function->prepare({});
@@ -1343,30 +1343,6 @@ struct ToYYYYMMDDhhmmssImpl
    using FactorTransform = ZeroTransform;
};

-struct ToDateTimeComponentsImpl
-{
-    static constexpr auto name = "toDateTimeComponents";
-
-    static inline DateLUTImpl::DateTimeComponents execute(Int64 t, const DateLUTImpl & time_zone)
-    {
-        return time_zone.toDateTimeComponents(t);
-    }
-    static inline DateLUTImpl::DateTimeComponents execute(UInt32 t, const DateLUTImpl & time_zone)
-    {
-        return time_zone.toDateTimeComponents(static_cast<DateLUTImpl::Time>(t));
-    }
-    static inline DateLUTImpl::DateTimeComponents execute(Int32 d, const DateLUTImpl & time_zone)
-    {
-        return time_zone.toDateTimeComponents(ExtendedDayNum(d));
-    }
-    static inline DateLUTImpl::DateTimeComponents execute(UInt16 d, const DateLUTImpl & time_zone)
-    {
-        return time_zone.toDateTimeComponents(DayNum(d));
-    }
-
-    using FactorTransform = ZeroTransform;
-};
-

template <typename FromType, typename ToType, typename Transform, bool is_extended_result = false>
struct Transformer
@@ -1,17 +0,0 @@
-#include <Functions/FunctionsDecimalArithmetics.h>
-#include <Functions/FunctionFactory.h>
-
-namespace DB
-{
-REGISTER_FUNCTION(DivideDecimals)
-{
-    factory.registerFunction<FunctionsDecimalArithmetics<DivideDecimalsImpl>>(Documentation(
-        "Decimal division with given precision. Slower than simple `divide`, but has controlled precision and no sound overflows"));
-}
-
-REGISTER_FUNCTION(MultiplyDecimals)
-{
-    factory.registerFunction<FunctionsDecimalArithmetics<MultiplyDecimalsImpl>>(Documentation(
-        "Decimal multiplication with given precision. Slower than simple `divide`, but has controlled precision and no sound overflows"));
-}
-}
@@ -1,4 +1,5 @@
+#pragma once

#include <type_traits>
#include <Core/AccurateComparison.h>

@@ -23,7 +24,6 @@ namespace ErrorCodes
    extern const int ILLEGAL_COLUMN;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
-    extern const int ILLEGAL_DIVISION;
}

@@ -140,91 +140,6 @@ struct DecimalOpHelpers
};

-
-struct DivideDecimalsImpl
-{
-    static constexpr auto name = "divideDecimal";
-
-    template <typename FirstType, typename SecondType>
-    static inline Decimal256
-    execute(FirstType a, SecondType b, UInt16 scale_a, UInt16 scale_b, UInt16 result_scale)
-    {
-        if (b.value == 0)
-            throw DB::Exception("Division by zero", ErrorCodes::ILLEGAL_DIVISION);
-        if (a.value == 0)
-            return Decimal256(0);
-
-        Int256 sign_a = a.value < 0 ? -1 : 1;
-        Int256 sign_b = b.value < 0 ? -1 : 1;
-
-        std::vector<UInt8> a_digits = DecimalOpHelpers::toDigits(a.value * sign_a);
-
-        while (scale_a < scale_b + result_scale)
-        {
-            a_digits.push_back(0);
-            ++scale_a;
-        }
-
-        while (scale_a > scale_b + result_scale && !a_digits.empty())
-        {
-            a_digits.pop_back();
-            --scale_a;
-        }
-
-        if (a_digits.empty())
-            return Decimal256(0);
-
-        std::vector<UInt8> divided = DecimalOpHelpers::divide(a_digits, b.value * sign_b);
-
-        if (divided.size() > DecimalUtils::max_precision<Decimal256>)
-            throw DB::Exception("Numeric overflow: result bigger that Decimal256", ErrorCodes::DECIMAL_OVERFLOW);
-        return Decimal256(sign_a * sign_b * DecimalOpHelpers::fromDigits(divided));
-    }
-};
-
-
-struct MultiplyDecimalsImpl
-{
-    static constexpr auto name = "multiplyDecimal";
-
-    template <typename FirstType, typename SecondType>
-    static inline Decimal256
-    execute(FirstType a, SecondType b, UInt16 scale_a, UInt16 scale_b, UInt16 result_scale)
-    {
-        if (a.value == 0 || b.value == 0)
-            return Decimal256(0);
-
-        Int256 sign_a = a.value < 0 ? -1 : 1;
-        Int256 sign_b = b.value < 0 ? -1 : 1;
-
-        std::vector<UInt8> a_digits = DecimalOpHelpers::toDigits(a.value * sign_a);
-        std::vector<UInt8> b_digits = DecimalOpHelpers::toDigits(b.value * sign_b);
-
-        std::vector<UInt8> multiplied = DecimalOpHelpers::multiply(a_digits, b_digits);
-
-        UInt16 product_scale = scale_a + scale_b;
-        while (product_scale < result_scale)
-        {
-            multiplied.push_back(0);
-            ++product_scale;
-        }
-
-        while (product_scale > result_scale && !multiplied.empty())
-        {
-            multiplied.pop_back();
-            --product_scale;
-        }
-
-        if (multiplied.empty())
-            return Decimal256(0);
-
-        if (multiplied.size() > DecimalUtils::max_precision<Decimal256>)
-            throw DB::Exception("Numeric overflow: result bigger that Decimal256", ErrorCodes::DECIMAL_OVERFLOW);
-
-        return Decimal256(sign_a * sign_b * DecimalOpHelpers::fromDigits(multiplied));
-    }
-};
-

template <typename ResultType, typename Transform>
struct Processor
{
@@ -388,11 +303,12 @@ public:
    }

private:
-    //long resolver to call proper templated func
+    // long resolver to call proper templated func
    ColumnPtr resolveOverload(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type) const
    {
        WhichDataType which_dividend(arguments[0].type.get());
        WhichDataType which_divisor(arguments[1].type.get());

        if (which_dividend.isDecimal32())
        {
            using DividendType = DataTypeDecimal32;
@@ -454,4 +370,3 @@ private:
};

}
@@ -48,10 +48,6 @@ public:
        : scale_multiplier(DecimalUtils::scaleMultiplier<DateTime64::NativeType>(scale_))
    {}

-    TransformDateTime64(DateTime64::NativeType scale_multiplier_ = 1) /// NOLINT(google-explicit-constructor)
-        : scale_multiplier(scale_multiplier_)
-    {}
-
    template <typename ... Args>
    inline auto NO_SANITIZE_UNDEFINED execute(const DateTime64 & t, Args && ... args) const
    {
@@ -131,8 +127,6 @@ public:
        return wrapped_transform.executeExtendedResult(t, std::forward<Args>(args)...);
    }

-    DateTime64::NativeType getScaleMultiplier() const { return scale_multiplier; }
-
private:
    DateTime64::NativeType scale_multiplier = 1;
    Transform wrapped_transform = {};
178 src/Functions/concatWithSeparator.cpp Normal file
@@ -0,0 +1,178 @@
|
||||
#include <Columns/ColumnString.h>
|
||||
#include <Columns/ColumnFixedString.h>
|
||||
#include <DataTypes/DataTypeString.h>
|
||||
#include <Functions/FunctionFactory.h>
|
||||
#include <Functions/FunctionHelpers.h>
|
||||
#include <Functions/IFunction.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include <base/map.h>
|
||||
#include <base/range.h>
|
||||
|
||||
#include "formatString.h"
|
||||
|
||||
namespace DB
|
||||
{
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
|
||||
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
|
||||
extern const int ILLEGAL_COLUMN;
|
||||
}
|
||||
|
||||
namespace
|
||||
{
|
||||
template <typename Name, bool is_injective>
|
||||
class ConcatWithSeparatorImpl : public IFunction
|
||||
{
|
||||
public:
|
||||
static constexpr auto name = Name::name;
|
||||
explicit ConcatWithSeparatorImpl(ContextPtr context_) : context(context_) {}
|
||||
|
||||
static FunctionPtr create(ContextPtr context) { return std::make_shared<ConcatWithSeparatorImpl>(context); }
|
||||
|
||||
String getName() const override { return name; }
|
||||
|
||||
bool isVariadic() const override { return true; }
|
||||
|
||||
size_t getNumberOfArguments() const override { return 0; }
|
||||
|
||||
bool isInjective(const ColumnsWithTypeAndName &) const override { return is_injective; }
|
||||
|
||||
bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return true; }
|
||||
|
||||
DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
|
||||
{
|
||||
if (arguments.empty())
|
||||
throw Exception(
|
||||
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
|
||||
"Number of arguments for function {} doesn't match: passed {}, should be at least 1",
|
||||
getName(),
|
||||
arguments.size());
|
||||
|
||||
for (const auto arg_idx : collections::range(0, arguments.size()))
|
||||
{
|
||||
const auto * arg = arguments[arg_idx].get();
|
||||
if (!isStringOrFixedString(arg))
|
||||
throw Exception(
|
||||
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
|
||||
"Illegal type {} of argument {} of function {}",
|
||||
arg->getName(),
|
||||
arg_idx + 1,
|
||||
getName());
|
||||
}
|
||||
|
||||
return std::make_shared<DataTypeString>();
|
||||
}
|
||||
|
||||
ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count) const override
|
||||
{
|
||||
assert(!arguments.empty());
|
||||
if (arguments.size() == 1)
|
||||
return result_type->createColumnConstWithDefaultValue(input_rows_count);
|
||||
|
||||
auto c_res = ColumnString::create();
|
||||
c_res->reserve(input_rows_count);
|
||||
const ColumnConst * col_sep = checkAndGetColumnConstStringOrFixedString(arguments[0].column.get());
|
||||
if (!col_sep)
|
||||
throw Exception(
|
||||
ErrorCodes::ILLEGAL_COLUMN,
|
||||
"Illegal column {} of first argument of function {}. Must be a constant String.",
|
||||
arguments[0].column->getName(),
|
||||
getName());
|
||||
String sep_str = col_sep->getValue<String>();
|
||||
|
||||
const size_t num_exprs = arguments.size() - 1;
|
        const size_t num_args = 2 * num_exprs - 1;

        std::vector<const ColumnString::Chars *> data(num_args);
        std::vector<const ColumnString::Offsets *> offsets(num_args);
        std::vector<size_t> fixed_string_sizes(num_args);
        std::vector<std::optional<String>> constant_strings(num_args);

        bool has_column_string = false;
        bool has_column_fixed_string = false;

        for (size_t i = 0; i < num_exprs; ++i)
        {
            if (i != 0)
                constant_strings[2 * i - 1] = sep_str;

            const ColumnPtr & column = arguments[i + 1].column;
            if (const ColumnString * col = checkAndGetColumn<ColumnString>(column.get()))
            {
                has_column_string = true;
                data[2 * i] = &col->getChars();
                offsets[2 * i] = &col->getOffsets();
            }
            else if (const ColumnFixedString * fixed_col = checkAndGetColumn<ColumnFixedString>(column.get()))
            {
                has_column_fixed_string = true;
                data[2 * i] = &fixed_col->getChars();
                fixed_string_sizes[2 * i] = fixed_col->getN();
            }
            else if (const ColumnConst * const_col = checkAndGetColumnConstStringOrFixedString(column.get()))
                constant_strings[2 * i] = const_col->getValue<String>();
            else
                throw Exception(ErrorCodes::ILLEGAL_COLUMN,
                    "Illegal column {} of argument of function {}", column->getName(), getName());
        }

        String pattern;
        pattern.reserve(num_args * 2);
        for (size_t i = 0; i < num_args; ++i)
            pattern += "{}";

        FormatImpl::formatExecute(
            has_column_string,
            has_column_fixed_string,
            std::move(pattern),
            data,
            offsets,
            fixed_string_sizes,
            constant_strings,
            c_res->getChars(),
            c_res->getOffsets(),
            input_rows_count);

        return std::move(c_res);
    }

private:
    ContextWeakPtr context;
};

struct NameConcatWithSeparator
{
    static constexpr auto name = "concatWithSeparator";
};
struct NameConcatWithSeparatorAssumeInjective
{
    static constexpr auto name = "concatWithSeparatorAssumeInjective";
};

using FunctionConcatWithSeparator = ConcatWithSeparatorImpl<NameConcatWithSeparator, false>;
using FunctionConcatWithSeparatorAssumeInjective = ConcatWithSeparatorImpl<NameConcatWithSeparatorAssumeInjective, true>;
}

REGISTER_FUNCTION(ConcatWithSeparator)
{
    factory.registerFunction<FunctionConcatWithSeparator>({
        R"(
Returns the concatenation of strings separated by the string separator. Syntax: concatWithSeparator(sep, expr1, expr2, expr3...)
)",
        Documentation::Examples{{"concatWithSeparator", "SELECT concatWithSeparator('a', '1', '2', '3')"}},
        Documentation::Categories{"String"}});

    factory.registerFunction<FunctionConcatWithSeparatorAssumeInjective>({
        R"(
Same as concatWithSeparator, except that you need to ensure that concatWithSeparator(sep, expr1, expr2, expr3...) → result is injective; this is used for the optimization of GROUP BY.

A function is called injective if it always returns a different result for different values of arguments; in other words, different arguments never yield an identical result.
)",
        Documentation::Examples{{"concatWithSeparatorAssumeInjective", "SELECT concatWithSeparatorAssumeInjective('a', '1', '2', '3')"}},
        Documentation::Categories{"String"}});

    /// Compatibility with Spark:
    factory.registerAlias("concat_ws", "concatWithSeparator", FunctionFactory::CaseInsensitive);
}

}
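Editor's note: a minimal usage sketch of the two registrations above (the results assume the concatenation semantics described in the docstrings):

```sql
SELECT concatWithSeparator('-', 'a', 'b', 'c');  -- 'a-b-c'
SELECT concat_ws('-', 'a', 'b', 'c');            -- same result via the Spark-compatible alias
```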
@ -1,7 +1,6 @@
#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypeDateTime64.h>
#include <DataTypes/DataTypesNumber.h>
#include <Common/IntervalKind.h>
#include <Columns/ColumnString.h>
#include <Columns/ColumnsDateTime.h>
#include <Columns/ColumnsNumber.h>
@ -35,7 +34,6 @@ namespace ErrorCodes
namespace
{

template <bool is_diff>
class DateDiffImpl
{
public:
@ -167,92 +165,8 @@ public:
    template <typename TransformX, typename TransformY, typename T1, typename T2>
    Int64 calculate(const TransformX & transform_x, const TransformY & transform_y, T1 x, T2 y, const DateLUTImpl & timezone_x, const DateLUTImpl & timezone_y) const
    {
        if constexpr (is_diff)
            return static_cast<Int64>(transform_y.execute(y, timezone_y))
                - static_cast<Int64>(transform_x.execute(x, timezone_x));
        else
        {
            auto res = static_cast<Int64>(transform_y.execute(y, timezone_y))
                - static_cast<Int64>(transform_x.execute(x, timezone_x));
            DateLUTImpl::DateTimeComponents a_comp;
            DateLUTImpl::DateTimeComponents b_comp;
            Int64 adjust_value;
            auto x_seconds = TransformDateTime64<ToRelativeSecondNumImpl<ResultPrecision::Extended>>(transform_x.getScaleMultiplier()).execute(x, timezone_x);
            auto y_seconds = TransformDateTime64<ToRelativeSecondNumImpl<ResultPrecision::Extended>>(transform_y.getScaleMultiplier()).execute(y, timezone_y);
            if (x_seconds <= y_seconds)
            {
                a_comp = TransformDateTime64<ToDateTimeComponentsImpl>(transform_x.getScaleMultiplier()).execute(x, timezone_x);
                b_comp = TransformDateTime64<ToDateTimeComponentsImpl>(transform_y.getScaleMultiplier()).execute(y, timezone_y);
                adjust_value = -1;
            }
            else
            {
                a_comp = TransformDateTime64<ToDateTimeComponentsImpl>(transform_y.getScaleMultiplier()).execute(y, timezone_y);
                b_comp = TransformDateTime64<ToDateTimeComponentsImpl>(transform_x.getScaleMultiplier()).execute(x, timezone_x);
                adjust_value = 1;
            }

            if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeYearNumImpl<ResultPrecision::Extended>>>)
            {
                if ((a_comp.date.month > b_comp.date.month)
                    || ((a_comp.date.month == b_comp.date.month) && ((a_comp.date.day > b_comp.date.day)
                        || ((a_comp.date.day == b_comp.date.day) && ((a_comp.time.hour > b_comp.time.hour)
                            || ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
                                || ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second))))
                    )))))
                    res += adjust_value;
            }
            else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeQuarterNumImpl<ResultPrecision::Extended>>>)
            {
                auto x_month_in_quarter = (a_comp.date.month - 1) % 3;
                auto y_month_in_quarter = (b_comp.date.month - 1) % 3;
                if ((x_month_in_quarter > y_month_in_quarter)
                    || ((x_month_in_quarter == y_month_in_quarter) && ((a_comp.date.day > b_comp.date.day)
                        || ((a_comp.date.day == b_comp.date.day) && ((a_comp.time.hour > b_comp.time.hour)
                            || ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
                                || ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second))))
                    )))))
                    res += adjust_value;
            }
            else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeMonthNumImpl<ResultPrecision::Extended>>>)
            {
                if ((a_comp.date.day > b_comp.date.day)
                    || ((a_comp.date.day == b_comp.date.day) && ((a_comp.time.hour > b_comp.time.hour)
                        || ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
                            || ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second))))
                )))
                    res += adjust_value;
            }
            else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeWeekNumImpl<ResultPrecision::Extended>>>)
            {
                auto x_day_of_week = TransformDateTime64<ToDayOfWeekImpl>(transform_x.getScaleMultiplier()).execute(x, timezone_x);
                auto y_day_of_week = TransformDateTime64<ToDayOfWeekImpl>(transform_y.getScaleMultiplier()).execute(y, timezone_y);
                if ((x_day_of_week > y_day_of_week)
                    || ((x_day_of_week == y_day_of_week) && (a_comp.time.hour > b_comp.time.hour))
                    || ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
                        || ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second)))))
                    res += adjust_value;
            }
            else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeDayNumImpl<ResultPrecision::Extended>>>)
            {
                if ((a_comp.time.hour > b_comp.time.hour)
                    || ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
                        || ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second)))))
                    res += adjust_value;
            }
            else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeHourNumImpl<ResultPrecision::Extended>>>)
            {
                if ((a_comp.time.minute > b_comp.time.minute)
                    || ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second)))
                    res += adjust_value;
            }
            else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeMinuteNumImpl<ResultPrecision::Extended>>>)
            {
                if (a_comp.time.second > b_comp.time.second)
                    res += adjust_value;
            }
            return res;
        }
    }

    template <typename T>
@ -279,8 +193,7 @@ private:


/** dateDiff('unit', t1, t2, [timezone])
 *  age('unit', t1, t2, [timezone])
 * t1 and t2 can be Date, Date32, DateTime or DateTime64
 * t1 and t2 can be Date or DateTime
 *
 * If timezone is specified, it is applied to both arguments.
 * If not, timezones from datatypes t1 and t2 are used.
@ -288,11 +201,10 @@ private:
 *
 * Timezone matters because days can have different length.
 */
template <bool is_relative>
class FunctionDateDiff : public IFunction
{
public:
    static constexpr auto name = is_relative ? "dateDiff" : "age";
    static constexpr auto name = "dateDiff";
    static FunctionPtr create(ContextPtr) { return std::make_shared<FunctionDateDiff>(); }

    String getName() const override
@ -358,21 +270,21 @@ public:
    const auto & timezone_y = extractTimeZoneFromFunctionArguments(arguments, 3, 2);

    if (unit == "year" || unit == "yy" || unit == "yyyy")
        impl.template dispatchForColumns<ToRelativeYearNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
        impl.dispatchForColumns<ToRelativeYearNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
    else if (unit == "quarter" || unit == "qq" || unit == "q")
        impl.template dispatchForColumns<ToRelativeQuarterNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
        impl.dispatchForColumns<ToRelativeQuarterNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
    else if (unit == "month" || unit == "mm" || unit == "m")
        impl.template dispatchForColumns<ToRelativeMonthNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
        impl.dispatchForColumns<ToRelativeMonthNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
    else if (unit == "week" || unit == "wk" || unit == "ww")
        impl.template dispatchForColumns<ToRelativeWeekNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
        impl.dispatchForColumns<ToRelativeWeekNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
    else if (unit == "day" || unit == "dd" || unit == "d")
        impl.template dispatchForColumns<ToRelativeDayNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
        impl.dispatchForColumns<ToRelativeDayNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
    else if (unit == "hour" || unit == "hh" || unit == "h")
        impl.template dispatchForColumns<ToRelativeHourNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
        impl.dispatchForColumns<ToRelativeHourNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
    else if (unit == "minute" || unit == "mi" || unit == "n")
        impl.template dispatchForColumns<ToRelativeMinuteNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
        impl.dispatchForColumns<ToRelativeMinuteNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
    else if (unit == "second" || unit == "ss" || unit == "s")
        impl.template dispatchForColumns<ToRelativeSecondNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
        impl.dispatchForColumns<ToRelativeSecondNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
    else
        throw Exception(ErrorCodes::BAD_ARGUMENTS,
            "Function {} does not support '{}' unit", getName(), unit);
@ -380,7 +292,7 @@ public:
    return res;
}
private:
    DateDiffImpl<is_relative> impl{name};
    DateDiffImpl impl{name};
};


@ -440,14 +352,14 @@ public:
    return res;
}
private:
    DateDiffImpl<true> impl{name};
    DateDiffImpl impl{name};
};

}

REGISTER_FUNCTION(DateDiff)
{
    factory.registerFunction<FunctionDateDiff<true>>({}, FunctionFactory::CaseInsensitive);
    factory.registerFunction<FunctionDateDiff>({}, FunctionFactory::CaseInsensitive);
}

REGISTER_FUNCTION(TimeDiff)
@ -464,9 +376,4 @@ Example:
    Documentation::Categories{"Dates and Times"}}, FunctionFactory::CaseInsensitive);
}

REGISTER_FUNCTION(Age)
{
    factory.registerFunction<FunctionDateDiff<false>>({}, FunctionFactory::CaseInsensitive);
}

}
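Editor's note: the `is_diff` branch above is what separates the two registered functions: `dateDiff` counts crossed unit boundaries, while `age` subtracts one when the final unit is incomplete. A sketch of the visible difference (results assume the adjustment logic shown in `calculate`):

```sql
SELECT
    dateDiff('year', toDate('2021-12-31'), toDate('2022-01-01')) AS boundaries_crossed,  -- 1
    age('year',      toDate('2021-12-31'), toDate('2022-01-01')) AS complete_years;      -- 0
```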
126 src/Functions/divideDecimal.cpp Normal file
@ -0,0 +1,126 @@
#include <Functions/FunctionsDecimalArithmetics.h>
#include <Functions/FunctionFactory.h>

namespace DB
{

namespace ErrorCodes
{
    extern const int DECIMAL_OVERFLOW;
    extern const int ILLEGAL_DIVISION;
}

namespace
{

struct DivideDecimalsImpl
{
    static constexpr auto name = "divideDecimal";

    template <typename FirstType, typename SecondType>
    static inline Decimal256
    execute(FirstType a, SecondType b, UInt16 scale_a, UInt16 scale_b, UInt16 result_scale)
    {
        if (b.value == 0)
            throw DB::Exception("Division by zero", ErrorCodes::ILLEGAL_DIVISION);
        if (a.value == 0)
            return Decimal256(0);

        Int256 sign_a = a.value < 0 ? -1 : 1;
        Int256 sign_b = b.value < 0 ? -1 : 1;

        std::vector<UInt8> a_digits = DecimalOpHelpers::toDigits(a.value * sign_a);

        while (scale_a < scale_b + result_scale)
        {
            a_digits.push_back(0);
            ++scale_a;
        }

        while (scale_a > scale_b + result_scale && !a_digits.empty())
        {
            a_digits.pop_back();
            --scale_a;
        }

        if (a_digits.empty())
            return Decimal256(0);

        std::vector<UInt8> divided = DecimalOpHelpers::divide(a_digits, b.value * sign_b);

        if (divided.size() > DecimalUtils::max_precision<Decimal256>)
            throw DB::Exception("Numeric overflow: result bigger than Decimal256", ErrorCodes::DECIMAL_OVERFLOW);
        return Decimal256(sign_a * sign_b * DecimalOpHelpers::fromDigits(divided));
    }
};

}

REGISTER_FUNCTION(DivideDecimals)
{
    factory.registerFunction<FunctionsDecimalArithmetics<DivideDecimalsImpl>>(Documentation(
        R"(
Performs division on two decimals. The result value will be of type [Decimal256](../../sql-reference/data-types/decimal.md).
The result scale can be explicitly specified by the `result_scale` argument (a const Integer in the range `[0, 76]`). If not specified, the result scale is the max scale of the given arguments.

:::note
These functions work significantly slower than the usual `divide`.
If you don't really need controlled precision and/or need fast computation, consider using [divide](#divide).
:::

**Syntax**

```sql
divideDecimal(a, b[, result_scale])
```

**Arguments**

- `a` — First value: [Decimal](../../sql-reference/data-types/decimal.md).
- `b` — Second value: [Decimal](../../sql-reference/data-types/decimal.md).
- `result_scale` — Scale of result: [Int/UInt](../../sql-reference/data-types/int-uint.md).

**Returned value**

- The result of division with the given scale.

Type: [Decimal256](../../sql-reference/data-types/decimal.md).

**Example**

```text
┌─divideDecimal(toDecimal256(-12, 0), toDecimal32(2.1, 1), 10)─┐
│                                                -5.7142857142 │
└──────────────────────────────────────────────────────────────┘
```

**Difference from regular division:**
```sql
SELECT toDecimal64(-12, 1) / toDecimal32(2.1, 1);
SELECT toDecimal64(-12, 1) as a, toDecimal32(2.1, 1) as b, divideDecimal(a, b, 1), divideDecimal(a, b, 5);
```

```text
┌─divide(toDecimal64(-12, 1), toDecimal32(2.1, 1))─┐
│                                             -5.7 │
└──────────────────────────────────────────────────┘
┌───a─┬───b─┬─divideDecimal(toDecimal64(-12, 1), toDecimal32(2.1, 1), 1)─┬─divideDecimal(toDecimal64(-12, 1), toDecimal32(2.1, 1), 5)─┐
│ -12 │ 2.1 │                                                       -5.7 │                                                    -5.71428 │
└─────┴─────┴────────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────┘
```

```sql
SELECT toDecimal64(-12, 0) / toDecimal32(2.1, 1);
SELECT toDecimal64(-12, 0) as a, toDecimal32(2.1, 1) as b, divideDecimal(a, b, 1), divideDecimal(a, b, 5);
```

```text
DB::Exception: Decimal result's scale is less than argument's one: While processing toDecimal64(-12, 0) / toDecimal32(2.1, 1). (ARGUMENT_OUT_OF_BOUND)
┌───a─┬───b─┬─divideDecimal(toDecimal64(-12, 0), toDecimal32(2.1, 1), 1)─┬─divideDecimal(toDecimal64(-12, 0), toDecimal32(2.1, 1), 5)─┐
│ -12 │ 2.1 │                                                        -5.7 │                                                    -5.71428 │
└─────┴─────┴────────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────┘
```
)"));
}

}
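Editor's note: one detail of the documentation above that is easy to miss: when `result_scale` is omitted, the result takes the larger of the two argument scales, and surplus digits are truncated rather than rounded (a sketch; values assume the digit-popping behavior shown in `DivideDecimalsImpl`):

```sql
SELECT divideDecimal(toDecimal64(10.5, 1), toDecimal32(2.25, 2));
-- result scale = max(1, 2) = 2, so 10.5 / 2.25 = 4.666... comes back as 4.66
```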
134 src/Functions/multiplyDecimal.cpp Normal file
@ -0,0 +1,134 @@
#include <Functions/FunctionsDecimalArithmetics.h>
#include <Functions/FunctionFactory.h>

namespace DB
{

namespace ErrorCodes
{
    extern const int DECIMAL_OVERFLOW;
}

namespace
{

struct MultiplyDecimalsImpl
{
    static constexpr auto name = "multiplyDecimal";

    template <typename FirstType, typename SecondType>
    static inline Decimal256
    execute(FirstType a, SecondType b, UInt16 scale_a, UInt16 scale_b, UInt16 result_scale)
    {
        if (a.value == 0 || b.value == 0)
            return Decimal256(0);

        Int256 sign_a = a.value < 0 ? -1 : 1;
        Int256 sign_b = b.value < 0 ? -1 : 1;

        std::vector<UInt8> a_digits = DecimalOpHelpers::toDigits(a.value * sign_a);
        std::vector<UInt8> b_digits = DecimalOpHelpers::toDigits(b.value * sign_b);

        std::vector<UInt8> multiplied = DecimalOpHelpers::multiply(a_digits, b_digits);

        UInt16 product_scale = scale_a + scale_b;
        while (product_scale < result_scale)
        {
            multiplied.push_back(0);
            ++product_scale;
        }

        while (product_scale > result_scale && !multiplied.empty())
        {
            multiplied.pop_back();
            --product_scale;
        }

        if (multiplied.empty())
            return Decimal256(0);

        if (multiplied.size() > DecimalUtils::max_precision<Decimal256>)
            throw DB::Exception("Numeric overflow: result bigger than Decimal256", ErrorCodes::DECIMAL_OVERFLOW);

        return Decimal256(sign_a * sign_b * DecimalOpHelpers::fromDigits(multiplied));
    }
};

}

REGISTER_FUNCTION(MultiplyDecimals)
{
    factory.registerFunction<FunctionsDecimalArithmetics<MultiplyDecimalsImpl>>(Documentation(
        R"(
Performs multiplication on two decimals. The result value will be of type [Decimal256](../../sql-reference/data-types/decimal.md).
The result scale can be explicitly specified by the `result_scale` argument (a const Integer in the range `[0, 76]`). If not specified, the result scale is the max scale of the given arguments.

:::note
These functions work significantly slower than the usual `multiply`.
If you don't really need controlled precision and/or need fast computation, consider using [multiply](#multiply).
:::

**Syntax**

```sql
multiplyDecimal(a, b[, result_scale])
```

**Arguments**

- `a` — First value: [Decimal](../../sql-reference/data-types/decimal.md).
- `b` — Second value: [Decimal](../../sql-reference/data-types/decimal.md).
- `result_scale` — Scale of result: [Int/UInt](../../sql-reference/data-types/int-uint.md).

**Returned value**

- The result of multiplication with the given scale.

Type: [Decimal256](../../sql-reference/data-types/decimal.md).

**Example**

```text
┌─multiplyDecimal(toDecimal256(-12, 0), toDecimal32(-2.1, 1), 1)─┐
│                                                           25.2 │
└────────────────────────────────────────────────────────────────┘
```

**Difference from regular multiplication:**
```sql
SELECT toDecimal64(-12.647, 3) * toDecimal32(2.1239, 4);
SELECT toDecimal64(-12.647, 3) as a, toDecimal32(2.1239, 4) as b, multiplyDecimal(a, b);
```

```text
┌─multiply(toDecimal64(-12.647, 3), toDecimal32(2.1239, 4))─┐
│                                               -26.8609633 │
└───────────────────────────────────────────────────────────┘
┌─multiplyDecimal(toDecimal64(-12.647, 3), toDecimal32(2.1239, 4))─┐
│                                                          -26.8609 │
└──────────────────────────────────────────────────────────────────┘
```

```sql
SELECT
    toDecimal64(-12.647987876, 9) AS a,
    toDecimal64(123.967645643, 9) AS b,
    multiplyDecimal(a, b);
SELECT
    toDecimal64(-12.647987876, 9) AS a,
    toDecimal64(123.967645643, 9) AS b,
    a * b;
```

```text
┌─────────────a─┬─────────────b─┬─multiplyDecimal(toDecimal64(-12.647987876, 9), toDecimal64(123.967645643, 9))─┐
│ -12.647987876 │ 123.967645643 │                                                                -1567.941279108 │
└───────────────┴───────────────┴───────────────────────────────────────────────────────────────────────────────┘
Received exception from server (version 22.11.1):
Code: 407. DB::Exception: Received from localhost:9000. DB::Exception: Decimal math overflow: While processing toDecimal64(-12.647987876, 9) AS a, toDecimal64(123.967645643, 9) AS b, a * b. (DECIMAL_OVERFLOW)
```
)"));

}

}
@ -250,7 +250,7 @@ size_t ReadBufferFromS3::getFileSize()
    if (file_size)
        return *file_size;

    auto object_size = S3::getObjectSize(client_ptr, bucket, key, version_id, true, read_settings.for_object_storage);
    auto object_size = S3::getObjectSize(*client_ptr, bucket, key, version_id, true, read_settings.for_object_storage);

    file_size = object_size;
    return *file_size;

@ -852,7 +852,7 @@ namespace S3
    }


    S3::ObjectInfo getObjectInfo(std::shared_ptr<const Aws::S3::S3Client> client_ptr, const String & bucket, const String & key, const String & version_id, bool throw_on_error, bool for_disk_s3)
    S3::ObjectInfo getObjectInfo(const Aws::S3::S3Client & client, const String & bucket, const String & key, const String & version_id, bool throw_on_error, bool for_disk_s3)
    {
        ProfileEvents::increment(ProfileEvents::S3HeadObject);
        if (for_disk_s3)
@ -865,7 +865,7 @@ namespace S3
        if (!version_id.empty())
            req.SetVersionId(version_id);

        Aws::S3::Model::HeadObjectOutcome outcome = client_ptr->HeadObject(req);
        Aws::S3::Model::HeadObjectOutcome outcome = client.HeadObject(req);

        if (outcome.IsSuccess())
        {
@ -879,9 +879,9 @@ namespace S3
        return {};
    }

    size_t getObjectSize(std::shared_ptr<const Aws::S3::S3Client> client_ptr, const String & bucket, const String & key, const String & version_id, bool throw_on_error, bool for_disk_s3)
    size_t getObjectSize(const Aws::S3::S3Client & client, const String & bucket, const String & key, const String & version_id, bool throw_on_error, bool for_disk_s3)
    {
        return getObjectInfo(client_ptr, bucket, key, version_id, throw_on_error, for_disk_s3).size;
        return getObjectInfo(client, bucket, key, version_id, throw_on_error, for_disk_s3).size;
    }

}

@ -130,9 +130,9 @@ struct ObjectInfo
    time_t last_modification_time = 0;
};

S3::ObjectInfo getObjectInfo(std::shared_ptr<const Aws::S3::S3Client> client_ptr, const String & bucket, const String & key, const String & version_id, bool throw_on_error, bool for_disk_s3);
S3::ObjectInfo getObjectInfo(const Aws::S3::S3Client & client, const String & bucket, const String & key, const String & version_id, bool throw_on_error, bool for_disk_s3);

size_t getObjectSize(std::shared_ptr<const Aws::S3::S3Client> client_ptr, const String & bucket, const String & key, const String & version_id, bool throw_on_error, bool for_disk_s3);
size_t getObjectSize(const Aws::S3::S3Client & client, const String & bucket, const String & key, const String & version_id, bool throw_on_error, bool for_disk_s3);

}
#endif
@ -108,6 +108,12 @@ BlockIO InterpreterCreateUserQuery::execute()
        throw Exception(ErrorCodes::BAD_ARGUMENTS,
            "Authentication type NO_PASSWORD must be explicitly specified, check the setting allow_implicit_no_password in the server configuration");

    if (!query.attach && query.temporary_password_for_checks)
    {
        access_control.checkPasswordComplexityRules(query.temporary_password_for_checks.value());
        query.temporary_password_for_checks.reset();
    }

    std::optional<RolesOrUsersSet> default_roles_from_query;
    if (query.default_roles)
    {
@ -141,7 +141,7 @@ void ConvertStringsToEnumMatcher::visit(ASTFunction & function_node, Data & data

    if (function_node.name == "if")
    {
        if (function_node.arguments->children.size() != 2)
        if (function_node.arguments->children.size() != 3)
            return;

        const ASTLiteral * literal1 = function_node.arguments->children[1]->as<ASTLiteral>();
@ -1126,6 +1126,7 @@ void DatabaseCatalog::cleanupStoreDirectoryTask()
        continue;

    size_t affected_dirs = 0;
    size_t checked_dirs = 0;
    for (auto it = disk->iterateDirectory("store"); it->isValid(); it->next())
    {
        String prefix = it->name();
@ -1135,6 +1136,7 @@ void DatabaseCatalog::cleanupStoreDirectoryTask()
        if (!expected_prefix_dir)
        {
            LOG_WARNING(log, "Found invalid directory {} on disk {}, will try to remove it", it->path(), disk_name);
            checked_dirs += 1;
            affected_dirs += maybeRemoveDirectory(disk_name, disk, it->path());
            continue;
        }
@ -1150,6 +1152,7 @@ void DatabaseCatalog::cleanupStoreDirectoryTask()
        if (!expected_dir)
        {
            LOG_WARNING(log, "Found invalid directory {} on disk {}, will try to remove it", jt->path(), disk_name);
            checked_dirs += 1;
            affected_dirs += maybeRemoveDirectory(disk_name, disk, jt->path());
            continue;
        }
@ -1161,6 +1164,7 @@ void DatabaseCatalog::cleanupStoreDirectoryTask()
        /// so it looks safe enough to remove directory if we don't have uuid mapping for it.
        /// No table or database using this directory should concurrently appear,
        /// because creation of new table would fail with "directory already exists".
        checked_dirs += 1;
        affected_dirs += maybeRemoveDirectory(disk_name, disk, jt->path());
    }
}
@ -1168,7 +1172,7 @@ void DatabaseCatalog::cleanupStoreDirectoryTask()

    if (affected_dirs)
        LOG_INFO(log, "Cleaned up {} directories from store/ on disk {}", affected_dirs, disk_name);
    else
    if (checked_dirs == 0)
        LOG_TEST(log, "Nothing to clean up from store/ on disk {}", disk_name);
}

@ -46,6 +46,8 @@ public:

    std::optional<AuthenticationData> auth_data;

    mutable std::optional<String> temporary_password_for_checks;

    std::optional<AllowedClientHosts> hosts;
    std::optional<AllowedClientHosts> add_hosts;
    std::optional<AllowedClientHosts> remove_hosts;

@ -51,7 +51,7 @@ namespace
    }


    bool parseAuthenticationData(IParserBase::Pos & pos, Expected & expected, AuthenticationData & auth_data)
    bool parseAuthenticationData(IParserBase::Pos & pos, Expected & expected, AuthenticationData & auth_data, std::optional<String> & temporary_password_for_checks)
    {
        return IParserBase::wrapParseImpl(pos, [&]
        {
@ -165,6 +165,10 @@ namespace
            common_names.insert(ast_child->as<const ASTLiteral &>().value.safeGet<String>());
        }

        /// Save the password separately for a future complexity-rules check
        if (expect_password)
            temporary_password_for_checks = value;

        auth_data = AuthenticationData{*type};
        if (auth_data.getType() == AuthenticationType::SHA256_PASSWORD)
        {
@ -438,6 +442,7 @@ bool ParserCreateUserQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec

    std::optional<String> new_name;
    std::optional<AuthenticationData> auth_data;
    std::optional<String> temporary_password_for_checks;
    std::optional<AllowedClientHosts> hosts;
    std::optional<AllowedClientHosts> add_hosts;
    std::optional<AllowedClientHosts> remove_hosts;
@ -452,9 +457,11 @@ bool ParserCreateUserQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec
        if (!auth_data)
        {
            AuthenticationData new_auth_data;
            if (parseAuthenticationData(pos, expected, new_auth_data))
            std::optional<String> new_temporary_password_for_checks;
            if (parseAuthenticationData(pos, expected, new_auth_data, new_temporary_password_for_checks))
            {
                auth_data = std::move(new_auth_data);
                temporary_password_for_checks = std::move(new_temporary_password_for_checks);
                continue;
            }
        }
@ -539,6 +546,7 @@ bool ParserCreateUserQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expec
    query->names = std::move(names);
    query->new_name = std::move(new_name);
    query->auth_data = std::move(auth_data);
    query->temporary_password_for_checks = std::move(temporary_password_for_checks);
    query->hosts = std::move(hosts);
    query->add_hosts = std::move(add_hosts);
    query->remove_hosts = std::move(remove_hosts);
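Editor's note: taken together, the interpreter, AST, and parser changes above wire CREATE USER passwords through the server-side complexity check. A hypothetical session: the rule patterns and error messages are operator-configured, so both the outcome and the wording here are only illustrative.

```sql
CREATE USER alice IDENTIFIED BY 'abc';
-- may now be rejected with the exception_message configured for the failing rule,
-- e.g. "Invalid password: the password should contain at least 12 characters"

CREATE USER alice IDENTIFIED BY 'a-much-longer-passphrase-42';  -- satisfies the configured rules
```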
@ -830,21 +830,65 @@ bool ParserNumber::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
    if (!pos.isValid())
        return false;

    /** Maximum length of number. 319 symbols is enough to write maximum double in decimal form.
      * Copy is needed to use strto* functions, which require 0-terminated string.
      */
    static constexpr size_t MAX_LENGTH_OF_NUMBER = 319;
    auto try_read_float = [&](const char * it, const char * end)
    {
        char * str_end;
        errno = 0; /// Functions strto* don't clear errno.
        Float64 float_value = std::strtod(it, &str_end);
        if (str_end == end && errno != ERANGE)
        {
            if (float_value < 0)
                throw Exception("Logical error: token number cannot begin with minus, but parsed float number is less than zero.", ErrorCodes::LOGICAL_ERROR);

            if (pos->size() > MAX_LENGTH_OF_NUMBER)
            if (negative)
                float_value = -float_value;

            res = float_value;

            auto literal = std::make_shared<ASTLiteral>(res);
            literal->begin = literal_begin;
            literal->end = ++pos;
            node = literal;

            return true;
        }

        expected.add(pos, "number");
        return false;
    };

    /// NaN and Inf
    if (pos->type == TokenType::BareWord)
    {
        return try_read_float(pos->begin, pos->end);
    }

    if (pos->type != TokenType::Number)
    {
        expected.add(pos, "number");
        return false;
    }

    /** Maximum length of number. 319 symbols is enough to write maximum double in decimal form.
      * Copy is needed to use strto* functions, which require 0-terminated string.
      */
    static constexpr size_t MAX_LENGTH_OF_NUMBER = 319;

    char buf[MAX_LENGTH_OF_NUMBER + 1];

    size_t size = pos->size();
    memcpy(buf, pos->begin, size);
    size_t buf_size = 0;
    for (const auto * it = pos->begin; it != pos->end; ++it)
    {
        if (*it != '_')
            buf[buf_size++] = *it;
        if (unlikely(buf_size > MAX_LENGTH_OF_NUMBER))
        {
            expected.add(pos, "number");
            return false;
        }
    }

    size_t size = buf_size;
    buf[size] = 0;
    char * start_pos = buf;

@ -915,29 +959,7 @@ bool ParserNumber::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
        return true;
    }

    char * pos_double = buf;
    errno = 0; /// Functions strto* don't clear errno.
    Float64 float_value = std::strtod(buf, &pos_double);
    if (pos_double == buf + pos->size() && errno != ERANGE)
    {
        if (float_value < 0)
            throw Exception("Logical error: token number cannot begin with minus, but parsed float number is less than zero.", ErrorCodes::LOGICAL_ERROR);

        if (negative)
            float_value = -float_value;

        res = float_value;

        auto literal = std::make_shared<ASTLiteral>(res);
        literal->begin = literal_begin;
        literal->end = ++pos;
        node = literal;

        return true;
    }

    expected.add(pos, "number");
    return false;
    return try_read_float(buf, buf + buf_size);
}
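Editor's note: the separator-stripping loop above (which skips '_' while copying into buf) and the lexer changes that follow both gate on isNumberSeparator, together enabling underscores as digit separators in numeric literals. A usage sketch, assuming separators are only accepted between digits as the start_of_block bookkeeping suggests:

```sql
SELECT 1_000_000 AS million, 3.141_592 AS pi_approx, 0xdead_beef AS magic;
```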
@ -105,44 +105,71 @@ Token Lexer::nextTokenImpl()
    if (prev_significant_token_type == TokenType::Dot)
    {
        ++pos;
        while (pos < end && isNumericASCII(*pos))
        while (pos < end && (isNumericASCII(*pos) || isNumberSeparator(false, false, pos, end)))
            ++pos;
    }
    else
    {
        bool start_of_block = false;
        /// 0x, 0b
        bool hex = false;
        if (pos + 2 < end && *pos == '0' && (pos[1] == 'x' || pos[1] == 'b' || pos[1] == 'X' || pos[1] == 'B'))
        {
            bool is_valid = false;
            if (pos[1] == 'x' || pos[1] == 'X')
                hex = true;
                pos += 2;
            {
                if (isHexDigit(pos[2]))
                {
                    hex = true;
                    is_valid = true; // hex
                }
            }
            else if (pos[2] == '0' || pos[2] == '1')
                is_valid = true; // bin
            if (is_valid)
            {
                pos += 2;
                start_of_block = true;
            }
            else
                ++pos; // consume the leading zero - could be an identifier
        }
        else
            ++pos;

        while (pos < end && (hex ? isHexDigit(*pos) : isNumericASCII(*pos)))
        while (pos < end && ((hex ? isHexDigit(*pos) : isNumericASCII(*pos)) || isNumberSeparator(start_of_block, hex, pos, end)))
        {
            ++pos;
            start_of_block = false;
        }

        /// decimal point
        if (pos < end && *pos == '.')
        {
            start_of_block = true;
            ++pos;
            while (pos < end && (hex ? isHexDigit(*pos) : isNumericASCII(*pos)))
            while (pos < end && ((hex ? isHexDigit(*pos) : isNumericASCII(*pos)) || isNumberSeparator(start_of_block, hex, pos, end)))
            {
                ++pos;
                start_of_block = false;
            }
        }

        /// exponentiation (base 10 or base 2)
        if (pos + 1 < end && (hex ? (*pos == 'p' || *pos == 'P') : (*pos == 'e' || *pos == 'E')))
        {
            start_of_block = true;
            ++pos;

            /// sign of exponent. It is always decimal.
            if (pos + 1 < end && (*pos == '-' || *pos == '+'))
                ++pos;

            while (pos < end && isNumericASCII(*pos))
            while (pos < end && (isNumericASCII(*pos) || isNumberSeparator(start_of_block, false, pos, end)))
            {
                ++pos;
                start_of_block = false;
            }
        }
    }

@ -201,21 +228,29 @@ Token Lexer::nextTokenImpl()
        || prev_significant_token_type == TokenType::Number))
        return Token(TokenType::Dot, token_begin, ++pos);

    bool start_of_block = true;
    ++pos;
    while (pos < end && isNumericASCII(*pos))
    while (pos < end && (isNumericASCII(*pos) || isNumberSeparator(start_of_block, false, pos, end)))
    {
        ++pos;
        start_of_block = false;
    }

    /// exponentiation
    if (pos + 1 < end && (*pos == 'e' || *pos == 'E'))
    {
        start_of_block = true;
        ++pos;

        /// sign of exponent
        if (pos + 1 < end && (*pos == '-' || *pos == '+'))
            ++pos;

        while (pos < end && isNumericASCII(*pos))
        while (pos < end && (isNumericASCII(*pos) || isNumberSeparator(start_of_block, false, pos, end)))
        {
            ++pos;
            start_of_block = false;
        }
    }

    return Token(TokenType::Number, token_begin, pos);

@ -3,7 +3,7 @@
#include <Core/Names.h>
#include <Interpreters/Context_fwd.h>
#include <Columns/IColumn.h>
#include <QueryPipeline/PipelineResourcesHolder.h>
#include <QueryPipeline/QueryPlanResourceHolder.h>

#include <list>
#include <memory>

@ -2,7 +2,7 @@

#include <Interpreters/Context_fwd.h>
#include <Processors/IProcessor.h>
#include <QueryPipeline/PipelineResourcesHolder.h>
#include <QueryPipeline/QueryPlanResourceHolder.h>

namespace DB
{

@ -1,7 +1,7 @@
#pragma once

#include <Processors/IProcessor.h>
#include <QueryPipeline/PipelineResourcesHolder.h>
#include <QueryPipeline/QueryPlanResourceHolder.h>
#include <QueryPipeline/Chain.h>
#include <QueryPipeline/SizeLimits.h>


@ -1,5 +1,5 @@
#pragma once
#include <QueryPipeline/PipelineResourcesHolder.h>
#include <QueryPipeline/QueryPlanResourceHolder.h>
#include <QueryPipeline/SizeLimits.h>
#include <QueryPipeline/StreamLocalLimits.h>
#include <functional>

@ -1,4 +1,4 @@
#include <QueryPipeline/PipelineResourcesHolder.h>
#include <QueryPipeline/QueryPlanResourceHolder.h>
#include <Processors/QueryPlan/QueryPlan.h>
#include <Processors/QueryPlan/QueryIdHolder.h>

@ -36,6 +36,7 @@
#include <Storages/MergeTree/MergeTreeDataPartUUID.h>
#include <Storages/StorageS3Cluster.h>
#include <Core/ExternalTable.h>
#include <Access/AccessControl.h>
#include <Access/Credentials.h>
#include <Storages/ColumnDefault.h>
#include <DataTypes/DataTypeLowCardinality.h>
@ -1193,6 +1194,17 @@ void TCPHandler::sendHello()
    writeStringBinary(server_display_name, *out);
    if (client_tcp_protocol_version >= DBMS_MIN_REVISION_WITH_VERSION_PATCH)
        writeVarUInt(DBMS_VERSION_PATCH, *out);
    if (client_tcp_protocol_version >= DBMS_MIN_PROTOCOL_VERSION_WITH_PASSWORD_COMPLEXITY_RULES)
    {
        auto rules = server.context()->getAccessControl().getPasswordComplexityRules();

        writeVarUInt(rules.size(), *out);
        for (const auto & [original_pattern, exception_message] : rules)
        {
            writeStringBinary(original_pattern, *out);
            writeStringBinary(exception_message, *out);
        }
    }
    out->next();
}


@ -7,11 +7,6 @@
#include <fstream>
#include <mutex>

namespace Poco
{
class Logger;
}

namespace DB
{
class ReadBufferFromFileLog : public ReadBuffer
@ -1,6 +1,7 @@
#include <DataTypes/DataTypeLowCardinality.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Disks/StoragePolicy.h>
#include <IO/ReadBufferFromFile.h>
#include <IO/ReadHelpers.h>
#include <IO/WriteBufferFromFile.h>
@ -17,11 +18,11 @@
#include <Storages/StorageFactory.h>
#include <Storages/StorageMaterializedView.h>
#include <Storages/checkAndGetLiteralArgument.h>
#include <Common/logger_useful.h>
#include <Common/Exception.h>
#include <Common/Macros.h>
#include <Common/filesystemHelpers.h>
#include <Common/getNumberOfPhysicalCPUCores.h>
#include <Common/logger_useful.h>

#include <sys/stat.h>

@ -37,7 +38,6 @@ namespace ErrorCodes
    extern const int CANNOT_READ_ALL_DATA;
    extern const int LOGICAL_ERROR;
    extern const int TABLE_METADATA_ALREADY_EXISTS;
    extern const int DIRECTORY_DOESNT_EXIST;
    extern const int CANNOT_SELECT;
    extern const int QUERY_NOT_ALLOWED;
}
@ -64,6 +64,7 @@ StorageFileLog::StorageFileLog(
    , metadata_base_path(std::filesystem::path(metadata_base_path_) / "metadata")
    , format_name(format_name_)
    , log(&Poco::Logger::get("StorageFileLog (" + table_id_.table_name + ")"))
    , disk(getContext()->getStoragePolicy("default")->getDisks().at(0))
    , milliseconds_to_wait(filelog_settings->poll_directory_watch_events_backoff_init.totalMilliseconds())
{
    StorageInMemoryMetadata storage_metadata;
@ -75,21 +76,14 @@ StorageFileLog::StorageFileLog(
    {
        if (!attach)
        {
            std::error_code ec;
            std::filesystem::create_directories(metadata_base_path, ec);

            if (ec)
            if (disk->exists(metadata_base_path))
            {
                if (ec == std::make_error_code(std::errc::file_exists))
                {
                    throw Exception(ErrorCodes::TABLE_METADATA_ALREADY_EXISTS,
                        "Metadata files already exist by path: {}, remove them manually if it is intended",
                        metadata_base_path);
                }
                else
                    throw Exception(ErrorCodes::DIRECTORY_DOESNT_EXIST,
                        "Could not create directory {}, reason: {}", metadata_base_path, ec.message());
                throw Exception(
                    ErrorCodes::TABLE_METADATA_ALREADY_EXISTS,
                    "Metadata files already exist by path: {}, remove them manually if it is intended",
                    metadata_base_path);
            }
            disk->createDirectories(metadata_base_path);
        }

        loadMetaFiles(attach);
@ -117,19 +111,8 @@ void StorageFileLog::loadMetaFiles(bool attach)
    /// Attach table
    if (attach)
    {
        const auto & storage = getStorageID();

        auto metadata_path_exist = std::filesystem::exists(metadata_base_path);
        auto previous_path = std::filesystem::path(getContext()->getPath()) / ".filelog_storage_metadata" / storage.getDatabaseName() / storage.getTableName();

        /// For compatibility with the previous path version.
        if (std::filesystem::exists(previous_path) && !metadata_path_exist)
        {
            std::filesystem::copy(previous_path, metadata_base_path, std::filesystem::copy_options::recursive);
            std::filesystem::remove_all(previous_path);
        }
        /// Meta files may be lost, so log an error and create the directory
        else if (!metadata_path_exist)
        if (!disk->exists(metadata_base_path))
        {
            /// Create the metadata_base_path directory when storing meta data
            LOG_ERROR(log, "Metadata files of table {} are lost.", getStorageID().getTableName());
@ -189,7 +172,7 @@ void StorageFileLog::loadFiles()
    /// The data file has been renamed, need to update the meta file's name
    if (it->second.file_name != file)
    {
        std::filesystem::rename(getFullMetaPath(it->second.file_name), getFullMetaPath(file));
        disk->replaceFile(getFullMetaPath(it->second.file_name), getFullMetaPath(file));
        it->second.file_name = file;
    }
}
@ -217,7 +200,7 @@ void StorageFileLog::loadFiles()
        valid_metas.emplace(inode, meta);
    /// Delete the meta file from the filesystem
    else
        std::filesystem::remove(getFullMetaPath(meta.file_name));
        disk->removeFileIfExists(getFullMetaPath(meta.file_name));
}
file_infos.meta_by_inode.swap(valid_metas);
}
@ -228,70 +211,71 @@ void StorageFileLog::serialize() const
    for (const auto & [inode, meta] : file_infos.meta_by_inode)
    {
        auto full_name = getFullMetaPath(meta.file_name);
        if (!std::filesystem::exists(full_name))
        if (!disk->exists(full_name))
        {
            FS::createFile(full_name);
            disk->createFile(full_name);
        }
        else
        {
            checkOffsetIsValid(full_name, meta.last_writen_position);
        }
        WriteBufferFromFile out(full_name);
        writeIntText(inode, out);
        writeChar('\n', out);
        writeIntText(meta.last_writen_position, out);
        auto out = disk->writeFile(full_name);
        writeIntText(inode, *out);
        writeChar('\n', *out);
        writeIntText(meta.last_writen_position, *out);
    }
}

void StorageFileLog::serialize(UInt64 inode, const FileMeta & file_meta) const
{
    auto full_name = getFullMetaPath(file_meta.file_name);
    if (!std::filesystem::exists(full_name))
    if (!disk->exists(full_name))
    {
        FS::createFile(full_name);
        disk->createFile(full_name);
    }
    else
    {
        checkOffsetIsValid(full_name, file_meta.last_writen_position);
    }
    WriteBufferFromFile out(full_name);
    writeIntText(inode, out);
    writeChar('\n', out);
    writeIntText(file_meta.last_writen_position, out);
    auto out = disk->writeFile(full_name);
    writeIntText(inode, *out);
    writeChar('\n', *out);
    writeIntText(file_meta.last_writen_position, *out);
}

void StorageFileLog::deserialize()
{
    if (!std::filesystem::exists(metadata_base_path))
    if (!disk->exists(metadata_base_path))
        return;
    /// In case of a single file (not a watched directory),
    /// the iterated directory always has one file inside.
    for (const auto & dir_entry : std::filesystem::directory_iterator{metadata_base_path})
    for (const auto dir_iter = disk->iterateDirectory(metadata_base_path); dir_iter->isValid(); dir_iter->next())
    {
        if (!dir_entry.is_regular_file())
        auto full_name = getFullMetaPath(dir_iter->name());
        if (!disk->isFile(full_name))
        {
            throw Exception(
                ErrorCodes::BAD_FILE_TYPE,
                "The file {} under {} is not a regular file when deserializing meta files",
                dir_entry.path().c_str(),
                dir_iter->name(),
                metadata_base_path);
        }

        ReadBufferFromFile in(dir_entry.path().c_str());
        auto in = disk->readFile(full_name);
        FileMeta meta;
        UInt64 inode, last_written_pos;

        if (!tryReadIntText(inode, in))
        if (!tryReadIntText(inode, *in))
        {
            throw Exception(ErrorCodes::CANNOT_READ_ALL_DATA, "Read meta file {} failed", dir_entry.path().c_str());
            throw Exception(ErrorCodes::CANNOT_READ_ALL_DATA, "Read meta file {} failed", dir_iter->path());
        }
        assertChar('\n', in);
        if (!tryReadIntText(last_written_pos, in))
        assertChar('\n', *in);
        if (!tryReadIntText(last_written_pos, *in))
        {
            throw Exception(ErrorCodes::CANNOT_READ_ALL_DATA, "Read meta file {} failed", dir_entry.path().c_str());
            throw Exception(ErrorCodes::CANNOT_READ_ALL_DATA, "Read meta file {} failed", dir_iter->path());
        }

        meta.file_name = dir_entry.path().filename();
        meta.file_name = dir_iter->name();
        meta.last_writen_position = last_written_pos;

        file_infos.meta_by_inode.emplace(inode, meta);
@ -506,17 +490,17 @@ void StorageFileLog::storeMetas(size_t start, size_t end)
    }
}

void StorageFileLog::checkOffsetIsValid(const String & full_name, UInt64 offset)
void StorageFileLog::checkOffsetIsValid(const String & full_name, UInt64 offset) const
{
    ReadBufferFromFile in(full_name);
    auto in = disk->readFile(full_name);
    UInt64 _, last_written_pos;

    if (!tryReadIntText(_, in))
    if (!tryReadIntText(_, *in))
    {
        throw Exception(ErrorCodes::CANNOT_READ_ALL_DATA, "Read meta file {} failed", full_name);
    }
    assertChar('\n', in);
    if (!tryReadIntText(last_written_pos, in))
    assertChar('\n', *in);
    if (!tryReadIntText(last_written_pos, *in))
    {
        throw Exception(ErrorCodes::CANNOT_READ_ALL_DATA, "Read meta file {} failed", full_name);
    }
@ -1,5 +1,7 @@
#pragma once

#include <Disks/IDisk.h>

#include <Storages/FileLog/Buffer_fwd.h>
#include <Storages/FileLog/FileLogDirectoryWatcher.h>
#include <Storages/FileLog/FileLogSettings.h>
@ -147,6 +149,8 @@ private:
    const String format_name;
    Poco::Logger * log;

    DiskPtr disk;

    uint64_t milliseconds_to_wait;

    /// In order to avoid a data race, using a naive trick to forbid executing two select
@ -198,7 +202,7 @@ private:
    void serialize(UInt64 inode, const FileMeta & file_meta) const;

    void deserialize();
    static void checkOffsetIsValid(const String & full_name, UInt64 offset);
    void checkOffsetIsValid(const String & full_name, UInt64 offset) const;
};

}
@ -21,16 +21,12 @@ limitations under the License. */
namespace DB
{

using Time = std::chrono::time_point<std::chrono::system_clock>;
using Seconds = std::chrono::seconds;
using MilliSeconds = std::chrono::milliseconds;


struct BlocksMetadata
{
    String hash;
    UInt64 version;
    Time time;
    std::chrono::time_point<std::chrono::system_clock> time;
};

struct MergeableBlocks
@ -54,6 +50,10 @@ friend class LiveViewSource;
friend class LiveViewEventsSource;
friend class LiveViewSink;

    using Time = std::chrono::time_point<std::chrono::system_clock>;
    using Seconds = std::chrono::seconds;
    using MilliSeconds = std::chrono::milliseconds;

public:
    StorageLiveView(
        const StorageID & table_id_,
@ -2600,7 +2600,17 @@ void MergeTreeData::checkAlterIsPossible(const AlterCommands & commands, Context
        }
    }

    dropped_columns.emplace(command.column_name);
    if (old_metadata.columns.has(command.column_name))
    {
        dropped_columns.emplace(command.column_name);
    }
    else
    {
        const auto & nested = old_metadata.columns.getNested(command.column_name);
        for (const auto & nested_column : nested)
            dropped_columns.emplace(nested_column.name);
    }

}
else if (command.type == AlterCommand::RESET_SETTING)
{
@ -3884,9 +3894,9 @@ MergeTreeData::DataPartsVector MergeTreeData::getVisibleDataPartsVectorInPartiti
    return res;
}

MergeTreeData::DataPartPtr MergeTreeData::getPartIfExists(const MergeTreePartInfo & part_info, const MergeTreeData::DataPartStates & valid_states)
MergeTreeData::DataPartPtr MergeTreeData::getPartIfExists(const MergeTreePartInfo & part_info, const MergeTreeData::DataPartStates & valid_states, DataPartsLock * acquired_lock)
{
    auto lock = lockParts();
    auto lock = (acquired_lock) ? DataPartsLock() : lockParts();

    auto it = data_parts_by_info.find(part_info);
    if (it == data_parts_by_info.end())
@ -3899,9 +3909,9 @@ MergeTreeData::DataPartPtr MergeTreeData::getPartIfExists(const MergeTreePartInf
        return nullptr;
}

MergeTreeData::DataPartPtr MergeTreeData::getPartIfExists(const String & part_name, const MergeTreeData::DataPartStates & valid_states)
MergeTreeData::DataPartPtr MergeTreeData::getPartIfExists(const String & part_name, const MergeTreeData::DataPartStates & valid_states, DataPartsLock * acquired_lock)
{
    return getPartIfExists(MergeTreePartInfo::fromPartName(part_name, format_version), valid_states);
    return getPartIfExists(MergeTreePartInfo::fromPartName(part_name, format_version), valid_states, acquired_lock);
}


@ -514,8 +514,8 @@ public:
    DataPartsVector getDataPartsVectorInPartitionForInternalUsage(const DataPartStates & affordable_states, const String & partition_id, DataPartsLock * acquired_lock = nullptr) const;

    /// Returns the part with the given name and state or nullptr if no such part.
    DataPartPtr getPartIfExists(const String & part_name, const DataPartStates & valid_states);
    DataPartPtr getPartIfExists(const MergeTreePartInfo & part_info, const DataPartStates & valid_states);
    DataPartPtr getPartIfExists(const String & part_name, const DataPartStates & valid_states, DataPartsLock * acquired_lock = nullptr);
    DataPartPtr getPartIfExists(const MergeTreePartInfo & part_info, const DataPartStates & valid_states, DataPartsLock * acquired_lock = nullptr);

    /// Total size of active parts in bytes.
    size_t getTotalActiveSizeInBytes() const;
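Editor's note: the first hunk above changes how DROP COLUMN collects dropped names: a plain column is recorded directly, while a Nested column expands to its subcolumns. A sketch of the case this covers (the table layout is illustrative):

```sql
CREATE TABLE t (id UInt64, n Nested(a UInt32, b String)) ENGINE = MergeTree ORDER BY id;
ALTER TABLE t DROP COLUMN n;  -- now validated against the subcolumns n.a and n.b, not a literal column 'n'
```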
@ -142,6 +142,9 @@ void ReplicatedMergeTreeAttachThread::runImpl()

    checkHasReplicaMetadataInZooKeeper(zookeeper, replica_path);

    /// Just in case it was not removed earlier due to connection loss
    zookeeper->tryRemove(replica_path + "/flags/force_restore_data");

    String replica_metadata_version;
    const bool replica_metadata_version_exists = zookeeper->tryGet(replica_path + "/metadata_version", replica_metadata_version);
    if (replica_metadata_version_exists)
@ -1193,7 +1193,7 @@ bool ReplicatedMergeTreeQueue::isCoveredByFuturePartsImpl(const LogEntry & entry
    const LogEntry & another_entry = *entry_for_same_part_it->second;
    out_reason = fmt::format(
        "Not executing log entry {} of type {} for part {} "
        "because another log entry {} of type {} for the same part ({}) is being processed. This shouldn't happen often.",
        "because another log entry {} of type {} for the same part ({}) is being processed.",
        entry.znode_name, entry.type, entry.new_part_name,
        another_entry.znode_name, another_entry.type, another_entry.new_part_name);
    LOG_INFO(log, fmt::runtime(out_reason));
53 src/Storages/ReadFromStorageProgress.cpp Normal file
@ -0,0 +1,53 @@
#include <Storages/ReadFromStorageProgress.h>
#include <Processors/ISource.h>
#include <QueryPipeline/StreamLocalLimits.h>

namespace DB
{

void updateRowsProgressApprox(
    ISource & source,
    const Chunk & chunk,
    UInt64 total_result_size,
    UInt64 & total_rows_approx_accumulated,
    size_t & total_rows_count_times,
    UInt64 & total_rows_approx_max)
{
    if (!total_result_size)
        return;

    const size_t num_rows = chunk.getNumRows();

    if (!num_rows)
        return;

    const auto progress = source.getReadProgress();
    if (progress && !progress->limits.empty())
    {
        for (const auto & limit : progress->limits)
        {
            if (limit.leaf_limits.max_rows || limit.leaf_limits.max_bytes
                || limit.local_limits.size_limits.max_rows || limit.local_limits.size_limits.max_bytes)
                return;
        }
    }

    const auto bytes_per_row = std::ceil(static_cast<double>(chunk.bytes()) / num_rows);
    size_t total_rows_approx = static_cast<size_t>(std::ceil(static_cast<double>(total_result_size) / bytes_per_row));
    total_rows_approx_accumulated += total_rows_approx;
    ++total_rows_count_times;
    total_rows_approx = total_rows_approx_accumulated / total_rows_count_times;

    /// We need to add the diff, because total_rows_approx is an incremental value.
    /// It would be more correct to send total_rows_approx as is (not a diff),
    /// but the incrementation of total_rows_to_read does not allow that.
    /// A new counter could be introduced for that to be sent to the client, but it is not worth it.
    if (total_rows_approx > total_rows_approx_max)
    {
        size_t diff = total_rows_approx - total_rows_approx_max;
        source.addTotalRowsApprox(diff);
        total_rows_approx_max = total_rows_approx;
    }
}

}
18 src/Storages/ReadFromStorageProgress.h Normal file
@ -0,0 +1,18 @@
#pragma once
#include <Core/Types.h>

namespace DB
{

class ISource;
class Chunk;

void updateRowsProgressApprox(
    ISource & source,
    const Chunk & chunk,
    UInt64 total_result_size,
    UInt64 & total_rows_approx_accumulated,
    size_t & total_rows_count_times,
    UInt64 & total_rows_approx_max);

}
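Editor's note: to make the estimate in updateRowsProgressApprox concrete with illustrative numbers: if a chunk of 1,000 rows occupies 64,000 bytes, then bytes_per_row = 64, and with total_result_size = 6,400,000 bytes the chunk's estimate is 100,000 rows. Each chunk's estimate is folded into the running average total_rows_approx_accumulated / total_rows_count_times, and only the positive difference against total_rows_approx_max is forwarded via addTotalRowsApprox, so the total reported to the client only ever grows.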
@ -5,6 +5,7 @@
|
||||
#include <Storages/PartitionedSink.h>
|
||||
#include <Storages/Distributed/DirectoryMonitor.h>
|
||||
#include <Storages/checkAndGetLiteralArgument.h>
|
||||
#include <Storages/ReadFromStorageProgress.h>
|
||||
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Interpreters/evaluateConstantExpression.h>
|
||||
@@ -592,22 +593,8 @@ public:

        if (num_rows)
        {
            auto bytes_per_row = std::ceil(static_cast<double>(chunk.bytes()) / num_rows);
            size_t total_rows_approx = static_cast<size_t>(std::ceil(static_cast<double>(files_info->total_bytes_to_read) / bytes_per_row));
            total_rows_approx_accumulated += total_rows_approx;
            ++total_rows_count_times;
            total_rows_approx = total_rows_approx_accumulated / total_rows_count_times;

            /// We need to add the diff, because total_rows_approx is an incremental value.
            /// It would be more correct to send total_rows_approx as is (not a diff),
            /// but the way total_rows_to_read is incremented does not allow that.
            /// A new field could be introduced and sent to the client, but it is not worth it.
            if (total_rows_approx > total_rows_approx_prev)
            {
                size_t diff = total_rows_approx - total_rows_approx_prev;
                addTotalRowsApprox(diff);
                total_rows_approx_prev = total_rows_approx;
            }
            updateRowsProgressApprox(
                *this, chunk, files_info->total_bytes_to_read, total_rows_approx_accumulated, total_rows_count_times, total_rows_approx_max);
        }
        return chunk;
    }
@@ -648,7 +635,7 @@ private:

    UInt64 total_rows_approx_accumulated = 0;
    size_t total_rows_count_times = 0;
    UInt64 total_rows_approx_prev = 0;
    UInt64 total_rows_approx_max = 0;
};

@@ -1370,19 +1370,21 @@ MergeTreeDataPartPtr StorageMergeTree::outdatePart(MergeTreeTransaction * txn, c
    {
        /// Forcefully stop merges and make the part outdated
        auto merge_blocker = stopMergesAndWait();
        auto part = getPartIfExists(part_name, {MergeTreeDataPartState::Active});
        auto parts_lock = lockParts();
        auto part = getPartIfExists(part_name, {MergeTreeDataPartState::Active}, &parts_lock);
        if (!part)
            throw Exception(ErrorCodes::NO_SUCH_DATA_PART, "Part {} not found, won't try to drop it.", part_name);

        removePartsFromWorkingSet(txn, {part}, true);
        removePartsFromWorkingSet(txn, {part}, true, &parts_lock);
        return part;
    }
    else
    {
        /// Wait for the merges selector
        std::unique_lock lock(currently_processing_in_background_mutex);
        auto parts_lock = lockParts();

        auto part = getPartIfExists(part_name, {MergeTreeDataPartState::Active});
        auto part = getPartIfExists(part_name, {MergeTreeDataPartState::Active}, &parts_lock);
        /// It's okay, the part was already removed
        if (!part)
            return nullptr;
@@ -1392,7 +1394,7 @@ MergeTreeDataPartPtr StorageMergeTree::outdatePart(MergeTreeTransaction * txn, c
        if (currently_merging_mutating_parts.contains(part))
            return nullptr;

        removePartsFromWorkingSet(txn, {part}, true);
        removePartsFromWorkingSet(txn, {part}, true, &parts_lock);
        return part;
    }
}
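
The substance of this change is taking the parts lock once and passing it through, so the lookup and the removal happen in one critical section instead of racing between two separate lock acquisitions. A minimal sketch of the pattern; PartsContainer and its members are invented for illustration:

#include <algorithm>
#include <mutex>
#include <string>
#include <vector>

/// Illustrative container: the Lock parameter documents that the caller
/// already holds the mutex, so lookup and removal cannot interleave with
/// a concurrent modification.
struct PartsContainer
{
    using Lock = std::unique_lock<std::mutex>;

    Lock lockParts() { return Lock(mutex); }

    bool contains(const std::string & name, const Lock &) const
    {
        return std::find(parts.begin(), parts.end(), name) != parts.end();
    }

    void remove(const std::string & name, const Lock &)
    {
        parts.erase(std::remove(parts.begin(), parts.end(), name), parts.end());
    }

    std::mutex mutex;
    std::vector<std::string> parts{"all_1_1_0", "all_2_2_0"};
};

int main()
{
    PartsContainer container;
    auto parts_lock = container.lockParts();  /// one lock for lookup and removal
    if (container.contains("all_1_1_0", parts_lock))
        container.remove("all_1_1_0", parts_lock);
}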
@@ -357,25 +357,37 @@ StorageReplicatedMergeTree::StorageReplicatedMergeTree(
    /// It does not make sense for CREATE query
    if (attach)
    {
        if (current_zookeeper && current_zookeeper->exists(replica_path + "/host"))
        try
        {
            /// Check it earlier if we can (we don't want an incompatible version to start).
            /// If "/host" doesn't exist, then the replica was probably dropped and there's nothing to check.
            ReplicatedMergeTreeAttachThread::checkHasReplicaMetadataInZooKeeper(current_zookeeper, replica_path);
            if (current_zookeeper && current_zookeeper->exists(replica_path + "/host"))
            {
                /// Check it earlier if we can (we don't want an incompatible version to start).
                /// If "/host" doesn't exist, then the replica was probably dropped and there's nothing to check.
                ReplicatedMergeTreeAttachThread::checkHasReplicaMetadataInZooKeeper(current_zookeeper, replica_path);
            }

            if (current_zookeeper && current_zookeeper->exists(replica_path + "/flags/force_restore_data"))
            {
                skip_sanity_checks = true;
                current_zookeeper->remove(replica_path + "/flags/force_restore_data");

                LOG_WARNING(
                    log,
                    "Skipping the limits on severity of changes to data parts and columns (flag {}/flags/force_restore_data).",
                    replica_path);
            }
            else if (has_force_restore_data_flag)
            {
                skip_sanity_checks = true;

                LOG_WARNING(log, "Skipping the limits on severity of changes to data parts and columns (flag force_restore_data).");
            }
        }

        if (current_zookeeper && current_zookeeper->exists(replica_path + "/flags/force_restore_data"))
        catch (const Coordination::Exception & e)
        {
            skip_sanity_checks = true;
            current_zookeeper->remove(replica_path + "/flags/force_restore_data");

            LOG_WARNING(log, "Skipping the limits on severity of changes to data parts and columns (flag {}/flags/force_restore_data).", replica_path);
        }
        else if (has_force_restore_data_flag)
        {
            skip_sanity_checks = true;

            LOG_WARNING(log, "Skipping the limits on severity of changes to data parts and columns (flag force_restore_data).");
            if (!Coordination::isHardwareError(e.code))
                throw;
            LOG_ERROR(log, "Caught exception while checking table metadata in ZooKeeper, will recheck later: {}", e.displayText());
        }
    }

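The try/catch above tolerates transient ZooKeeper failures during ATTACH and defers the metadata check, while still failing fast on logical errors. A condensed sketch of that policy, with a toy exception type standing in for Coordination::Exception:

#include <iostream>
#include <stdexcept>
#include <string>

/// Toy stand-in for Coordination::Exception: hardware_error marks transient
/// connectivity problems as opposed to logical errors in the metadata.
struct CoordinationError : std::runtime_error
{
    bool hardware_error;
    CoordinationError(const std::string & msg, bool hw) : std::runtime_error(msg), hardware_error(hw) {}
};

void checkReplicaMetadata(bool zookeeper_reachable)
{
    if (!zookeeper_reachable)
        throw CoordinationError("connection loss", /*hw=*/ true);
}

int main()
{
    try
    {
        checkReplicaMetadata(/*zookeeper_reachable=*/ false);
    }
    catch (const CoordinationError & e)
    {
        if (!e.hardware_error)
            throw;  /// a logical problem must still fail the ATTACH
        std::cout << "will recheck later: " << e.what() << '\n';  /// defer the check
    }
}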
@@ -28,6 +28,7 @@
#include <Storages/getVirtualsForStorage.h>
#include <Storages/checkAndGetLiteralArgument.h>
#include <Storages/StorageURL.h>
#include <Storages/ReadFromStorageProgress.h>

#include <IO/ReadBufferFromS3.h>
#include <IO/WriteBufferFromS3.h>
@@ -153,6 +154,11 @@ public:
        return nextAssumeLocked();
    }

    size_t getTotalSize() const
    {
        return total_size;
    }

private:

    String nextAssumeLocked()
@@ -198,19 +204,27 @@ private:
            if (block.has("_file"))
                file_column = block.getByName("_file").column->assumeMutable();

            for (const auto & row : result_batch)
            std::unordered_map<String, S3::ObjectInfo> all_object_infos;
            for (const auto & key_info : result_batch)
            {
                const String & key = row.GetKey();
                const String & key = key_info.GetKey();
                if (recursive || re2::RE2::FullMatch(key, *matcher))
                {
                    String path = fs::path(globbed_uri.bucket) / key;
                    if (object_infos)
                        (*object_infos)[path] = {.size = size_t(row.GetSize()), .last_modification_time = row.GetLastModified().Millis() / 1000};
                    String file = path.substr(path.find_last_of('/') + 1);
                    const size_t key_size = key_info.GetSize();

                    all_object_infos.emplace(path, S3::ObjectInfo{.size = key_size, .last_modification_time = key_info.GetLastModified().Millis() / 1000});

                    if (path_column)
                    {
                        path_column->insert(path);
                    }
                    if (file_column)
                    {
                        String file = path.substr(path.find_last_of('/') + 1);
                        file_column->insert(file);
                    }

                    key_column->insert(key);
                }
            }
@@ -220,16 +234,35 @@ private:
            size_t rows = block.rows();
            buffer.reserve(rows);
            for (size_t i = 0; i < rows; ++i)
                buffer.emplace_back(keys.getDataAt(i).toString());
            {
                auto key = keys.getDataAt(i).toString();
                std::string path = fs::path(globbed_uri.bucket) / key;

                const auto & object_info = all_object_infos.at(path);
                total_size += object_info.size;
                if (object_infos)
                    object_infos->emplace(path, object_info);

                buffer.emplace_back(key);
            }
        }
        else
        {
            buffer.reserve(result_batch.size());
            for (const auto & row : result_batch)
            for (const auto & key_info : result_batch)
            {
                String key = row.GetKey();
                String key = key_info.GetKey();
                if (recursive || re2::RE2::FullMatch(key, *matcher))
                {
                    const size_t key_size = key_info.GetSize();
                    total_size += key_size;
                    if (object_infos)
                    {
                        const std::string path = fs::path(globbed_uri.bucket) / key;
                        (*object_infos)[path] = {.size = key_size, .last_modification_time = key_info.GetLastModified().Millis() / 1000};
                    }
                    buffer.emplace_back(std::move(key));
                }
            }
        }

@@ -261,6 +294,7 @@ private:
    std::unordered_map<String, S3::ObjectInfo> * object_infos;
    Strings * read_keys;
    S3Settings::RequestSettings request_settings;
    size_t total_size = 0;
};

StorageS3Source::DisclosedGlobIterator::DisclosedGlobIterator(
@@ -281,12 +315,28 @@ String StorageS3Source::DisclosedGlobIterator::next()
    return pimpl->next();
}

size_t StorageS3Source::DisclosedGlobIterator::getTotalSize() const
{
    return pimpl->getTotalSize();
}

class StorageS3Source::KeysIterator::Impl : WithContext
{
public:
    explicit Impl(
        const std::vector<String> & keys_, const String & bucket_, ASTPtr query_, const Block & virtual_header_, ContextPtr context_)
        : WithContext(context_), keys(keys_), bucket(bucket_), query(query_), virtual_header(virtual_header_)
        const Aws::S3::S3Client & client_,
        const std::string & version_id_,
        const std::vector<String> & keys_,
        const String & bucket_,
        ASTPtr query_,
        const Block & virtual_header_,
        ContextPtr context_,
        std::unordered_map<String, S3::ObjectInfo> * object_infos_)
        : WithContext(context_)
        , keys(keys_)
        , bucket(bucket_)
        , query(query_)
        , virtual_header(virtual_header_)
    {
        /// Create a virtual block with one row to construct filter
        if (query && virtual_header)
@@ -316,14 +366,28 @@ public:
        if (block.has("_file"))
            file_column = block.getByName("_file").column->assumeMutable();

        std::unordered_map<String, S3::ObjectInfo> all_object_infos;
        for (const auto & key : keys)
        {
            String path = fs::path(bucket) / key;
            String file = path.substr(path.find_last_of('/') + 1);
            const String path = fs::path(bucket) / key;

            /// To avoid extra requests, update total_size only if object_infos != nullptr
            /// (which means we eventually need this info anyway, so it should be ok to do it now).
            if (object_infos_)
            {
                auto key_info = S3::getObjectInfo(client_, bucket, key, version_id_, true, false);
                all_object_infos.emplace(path, S3::ObjectInfo{.size = key_info.size, .last_modification_time = key_info.last_modification_time});
            }

            if (path_column)
            {
                path_column->insert(path);
            }
            if (file_column)
            {
                const String file = path.substr(path.find_last_of('/') + 1);
                file_column->insert(file);
            }
            key_column->insert(key);
        }

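The comment above captures the trade-off: per-object metadata costs a request, so it is fetched only when the caller supplied a cache to fill, and the one fetched value then serves both the cache and the running total. A tiny sketch of that idea; fetchObjectInfo is a hypothetical stand-in for the per-object HEAD request:

#include <iostream>
#include <string>
#include <unordered_map>

struct ObjectInfo { size_t size = 0; };

/// Hypothetical stand-in for a HEAD request to the object store.
ObjectInfo fetchObjectInfo(const std::string & key)
{
    return ObjectInfo{key.size() * 1000};  /// pretend each request returns a size
}

int main()
{
    std::unordered_map<std::string, ObjectInfo> cache;  /// caller-provided cache
    size_t total_size = 0;

    for (const std::string key : {"data/a.csv", "data/b.csv"})
    {
        const ObjectInfo info = fetchObjectInfo(key);  /// exactly one request per key
        total_size += info.size;
        cache.emplace(key, info);  /// the same result also fills the cache
    }
    std::cout << "total bytes: " << total_size << '\n';
}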
@@ -333,7 +397,19 @@ public:
        Strings filtered_keys;
        filtered_keys.reserve(rows);
        for (size_t i = 0; i < rows; ++i)
            filtered_keys.emplace_back(keys_col.getDataAt(i).toString());
        {
            auto key = keys_col.getDataAt(i).toString();

            if (object_infos_)
            {
                std::string path = fs::path(bucket) / key;
                const auto & object_info = all_object_infos.at(path);
                total_size += object_info.size;
                object_infos_->emplace(path, object_info);
            }

            filtered_keys.emplace_back(key);
        }

        keys = std::move(filtered_keys);
    }
@@ -348,6 +424,11 @@ public:
        return keys[current_index];
    }

    size_t getTotalSize() const
    {
        return total_size;
    }

private:
    Strings keys;
    std::atomic_size_t index = 0;
@@ -355,11 +436,21 @@ private:
    String bucket;
    ASTPtr query;
    Block virtual_header;

    size_t total_size = 0;
};

StorageS3Source::KeysIterator::KeysIterator(
    const std::vector<String> & keys_, const String & bucket_, ASTPtr query, const Block & virtual_header, ContextPtr context)
    : pimpl(std::make_shared<StorageS3Source::KeysIterator::Impl>(keys_, bucket_, query, virtual_header, context))
    const Aws::S3::S3Client & client_,
    const std::string & version_id_,
    const std::vector<String> & keys_,
    const String & bucket_,
    ASTPtr query,
    const Block & virtual_header,
    ContextPtr context,
    std::unordered_map<String, S3::ObjectInfo> * object_infos_)
    : pimpl(std::make_shared<StorageS3Source::KeysIterator::Impl>(
        client_, version_id_, keys_, bucket_, query, virtual_header, context, object_infos_))
{
}

@@ -368,6 +459,11 @@ String StorageS3Source::KeysIterator::next()
    return pimpl->next();
}

size_t StorageS3Source::KeysIterator::getTotalSize() const
{
    return pimpl->getTotalSize();
}

Block StorageS3Source::getHeader(Block sample_block, const std::vector<NameAndTypePair> & requested_virtual_columns)
{
    for (const auto & virtual_column : requested_virtual_columns)
@@ -390,7 +486,7 @@ StorageS3Source::StorageS3Source(
    const std::shared_ptr<const Aws::S3::S3Client> & client_,
    const String & bucket_,
    const String & version_id_,
    std::shared_ptr<IteratorWrapper> file_iterator_,
    std::shared_ptr<IIterator> file_iterator_,
    const size_t download_thread_num_,
    const std::unordered_map<String, S3::ObjectInfo> & object_infos_)
    : ISource(getHeader(sample_block_, requested_virtual_columns_))
@@ -459,7 +555,7 @@ std::unique_ptr<ReadBuffer> StorageS3Source::createS3ReadBuffer(const String & k
    if (it != object_infos.end())
        object_size = it->second.size;
    else
        object_size = DB::S3::getObjectSize(client, bucket, key, version_id, false, false);
        object_size = DB::S3::getObjectSize(*client, bucket, key, version_id, false, false);

    auto download_buffer_size = getContext()->getSettings().max_download_buffer_size;
    const bool use_parallel_download = download_buffer_size > 0 && download_thread_num > 1;
@@ -503,6 +599,13 @@ Chunk StorageS3Source::generate()
    {
        UInt64 num_rows = chunk.getNumRows();

        auto it = object_infos.find(file_path);
        if (num_rows && it != object_infos.end())
        {
            updateRowsProgressApprox(
                *this, chunk, file_iterator->getTotalSize(), total_rows_approx_accumulated, total_rows_count_times, total_rows_approx_max);
        }

        for (const auto & virtual_column : requested_virtual_columns)
        {
            if (virtual_column.name == "_path")
@@ -797,7 +900,7 @@ StorageS3::StorageS3(
        virtual_block.insert({column.type->createColumn(), column.type, column.name});
}

std::shared_ptr<StorageS3Source::IteratorWrapper> StorageS3::createFileIterator(
std::shared_ptr<StorageS3Source::IIterator> StorageS3::createFileIterator(
    const S3Configuration & s3_configuration,
    const std::vector<String> & keys,
    bool is_key_with_globs,
@@ -810,25 +913,22 @@ std::shared_ptr<StorageS3Source::IteratorWrapper> StorageS3::createFileIterator(
{
    if (distributed_processing)
    {
        return std::make_shared<StorageS3Source::IteratorWrapper>(
            [callback = local_context->getReadTaskCallback()]() -> String {
                return callback();
            });
        return std::make_shared<StorageS3Source::ReadTaskIterator>(local_context->getReadTaskCallback());
    }
    else if (is_key_with_globs)
    {
        /// Iterate through disclosed globs and make a source for each file
        auto glob_iterator = std::make_shared<StorageS3Source::DisclosedGlobIterator>(
            *s3_configuration.client, s3_configuration.uri, query, virtual_block, local_context, object_infos, read_keys, s3_configuration.request_settings);
        return std::make_shared<StorageS3Source::IteratorWrapper>([glob_iterator]() { return glob_iterator->next(); });
        return std::make_shared<StorageS3Source::DisclosedGlobIterator>(
            *s3_configuration.client, s3_configuration.uri, query, virtual_block,
            local_context, object_infos, read_keys, s3_configuration.request_settings);
    }
    else
    {
        auto keys_iterator
            = std::make_shared<StorageS3Source::KeysIterator>(keys, s3_configuration.uri.bucket, query, virtual_block, local_context);
        if (read_keys)
            *read_keys = keys;
        return std::make_shared<StorageS3Source::IteratorWrapper>([keys_iterator]() { return keys_iterator->next(); });

        return std::make_shared<StorageS3Source::KeysIterator>(
            *s3_configuration.client, s3_configuration.uri.version_id, keys, s3_configuration.uri.bucket, query, virtual_block, local_context, object_infos);
    }
}

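The shape of this refactoring: instead of wrapping every iterator in a std::function, createFileIterator now returns concrete types behind one interface. A self-contained sketch of that factory dispatch, with all names invented:

#include <cstddef>
#include <functional>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

/// One interface replaces the old std::function wrapper (illustrative names).
struct Iterator
{
    virtual ~Iterator() = default;
    virtual std::string next() = 0;  /// empty string means exhausted
};

struct CallbackIterator : Iterator
{
    std::function<std::string()> callback;
    explicit CallbackIterator(std::function<std::string()> cb) : callback(std::move(cb)) {}
    std::string next() override { return callback(); }
};

struct ListIterator : Iterator
{
    std::vector<std::string> items;
    size_t i = 0;
    explicit ListIterator(std::vector<std::string> v) : items(std::move(v)) {}
    std::string next() override { return i < items.size() ? items[i++] : ""; }
};

/// Each branch can now return its own type directly.
std::shared_ptr<Iterator> createIterator(bool distributed, std::vector<std::string> keys)
{
    if (distributed)
        return std::make_shared<CallbackIterator>([] { return std::string{}; });
    return std::make_shared<ListIterator>(std::move(keys));
}

int main()
{
    auto it = createIterator(false, {"key1", "key2"});
    for (std::string key; !(key = it->next()).empty();)
        std::cout << key << '\n';
}

A virtual interface also makes it natural to add getTotalSize(), which the plain std::function wrapper could not expose.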
@@ -869,7 +969,7 @@ Pipe StorageS3::read(
        requested_virtual_columns.push_back(virtual_column);
    }

    std::shared_ptr<StorageS3Source::IteratorWrapper> iterator_wrapper = createFileIterator(
    std::shared_ptr<StorageS3Source::IIterator> iterator_wrapper = createFileIterator(
        s3_configuration,
        keys,
        is_key_with_globs,
@@ -1369,7 +1469,7 @@ std::optional<ColumnsDescription> StorageS3::tryGetColumnsFromCache(
        /// Note that in case of an exception in getObjectInfo the returned info will be empty,
        /// but the schema cache will handle this case and won't return columns from the cache,
        /// because we can't say that it's valid without the last modification time.
        info = S3::getObjectInfo(s3_configuration.client, s3_configuration.uri.bucket, *it, s3_configuration.uri.version_id, false, false);
        info = S3::getObjectInfo(*s3_configuration.client, s3_configuration.uri.bucket, *it, s3_configuration.uri.version_id, false, false);
        if (object_infos)
            (*object_infos)[path] = info;
    }

@@ -33,7 +33,17 @@ class StorageS3SequentialSource;
class StorageS3Source : public ISource, WithContext
{
public:
    class DisclosedGlobIterator
    class IIterator
    {
    public:
        virtual ~IIterator() = default;
        virtual String next() = 0;
        virtual size_t getTotalSize() const = 0;

        String operator ()() { return next(); }
    };

    class DisclosedGlobIterator : public IIterator
    {
    public:
        DisclosedGlobIterator(
@@ -46,7 +56,9 @@ public:
            Strings * read_keys_ = nullptr,
            const S3Settings::RequestSettings & request_settings_ = {});

        String next();
        String next() override;

        size_t getTotalSize() const override;

    private:
        class Impl;
@@ -54,12 +66,22 @@ public:
        std::shared_ptr<Impl> pimpl;
    };

    class KeysIterator
    class KeysIterator : public IIterator
    {
    public:
        explicit KeysIterator(
            const std::vector<String> & keys_, const String & bucket_, ASTPtr query, const Block & virtual_header, ContextPtr context);
        String next();
            const Aws::S3::S3Client & client_,
            const std::string & version_id_,
            const std::vector<String> & keys_,
            const String & bucket_,
            ASTPtr query,
            const Block & virtual_header,
            ContextPtr context,
            std::unordered_map<String, S3::ObjectInfo> * object_infos = nullptr);

        String next() override;

        size_t getTotalSize() const override;

    private:
        class Impl;
@@ -67,7 +89,18 @@ public:
        std::shared_ptr<Impl> pimpl;
    };

    using IteratorWrapper = std::function<String()>;
    class ReadTaskIterator : public IIterator
    {
    public:
        explicit ReadTaskIterator(const ReadTaskCallback & callback_) : callback(callback_) {}

        String next() override { return callback(); }

        size_t getTotalSize() const override { return 0; }

    private:
        ReadTaskCallback callback;
    };

    static Block getHeader(Block sample_block, const std::vector<NameAndTypePair> & requested_virtual_columns);

@@ -85,7 +118,7 @@ public:
        const std::shared_ptr<const Aws::S3::S3Client> & client_,
        const String & bucket,
        const String & version_id,
        std::shared_ptr<IteratorWrapper> file_iterator_,
        std::shared_ptr<IIterator> file_iterator_,
        size_t download_thread_num,
        const std::unordered_map<String, S3::ObjectInfo> & object_infos_);

@@ -116,11 +149,15 @@ private:
    /// onCancel and generate can be called concurrently
    std::mutex reader_mutex;
    std::vector<NameAndTypePair> requested_virtual_columns;
    std::shared_ptr<IteratorWrapper> file_iterator;
    std::shared_ptr<IIterator> file_iterator;
    size_t download_thread_num = 1;

    Poco::Logger * log = &Poco::Logger::get("StorageS3Source");

    UInt64 total_rows_approx_max = 0;
    size_t total_rows_count_times = 0;
    UInt64 total_rows_approx_accumulated = 0;

    std::unordered_map<String, S3::ObjectInfo> object_infos;

    /// Recreate ReadBuffer and Pipeline for each file.
@@ -233,7 +270,7 @@ private:

    static void updateS3Configuration(ContextPtr, S3Configuration &);

    static std::shared_ptr<StorageS3Source::IteratorWrapper> createFileIterator(
    static std::shared_ptr<StorageS3Source::IIterator> createFileIterator(
        const S3Configuration & s3_configuration,
        const std::vector<String> & keys,
        bool is_key_with_globs,
@@ -102,7 +102,7 @@ Pipe StorageS3Cluster::read(

    auto iterator = std::make_shared<StorageS3Source::DisclosedGlobIterator>(
        *s3_configuration.client, s3_configuration.uri, query_info.query, virtual_block, context);
    auto callback = std::make_shared<StorageS3Source::IteratorWrapper>([iterator]() mutable -> String { return iterator->next(); });
    auto callback = std::make_shared<std::function<String()>>([iterator]() mutable -> String { return iterator->next(); });

    /// Calculate the header. This is significant, because some columns could be thrown away in some cases like a query with count(*)
    auto interpreter = InterpreterSelectQuery(query_info.query, context, SelectQueryOptions(processed_stage).analyze());
@@ -4,6 +4,7 @@
#include <Interpreters/Context.h>
#include <Access/ContextAccess.h>
#include <Storages/System/StorageSystemDatabases.h>
#include <Parsers/ASTCreateQuery.h>


namespace DB
@@ -17,6 +18,7 @@ NamesAndTypesList StorageSystemDatabases::getNamesAndTypes()
        {"data_path", std::make_shared<DataTypeString>()},
        {"metadata_path", std::make_shared<DataTypeString>()},
        {"uuid", std::make_shared<DataTypeUUID>()},
        {"engine_full", std::make_shared<DataTypeString>()},
        {"comment", std::make_shared<DataTypeString>()}
    };
}
@@ -28,6 +30,42 @@ NamesAndAliases StorageSystemDatabases::getNamesAndAliases()
    };
}

static String getEngineFull(const DatabasePtr & database)
{
    DDLGuardPtr guard;
    while (true)
    {
        String name = database->getDatabaseName();
        guard = DatabaseCatalog::instance().getDDLGuard(name, "");

        /// Ensure that the database was not renamed before we acquired the lock
        auto locked_database = DatabaseCatalog::instance().tryGetDatabase(name);

        if (locked_database.get() == database.get())
            break;

        /// Database was dropped
        if (!locked_database && name == database->getDatabaseName())
            return {};

        guard.reset();
    }

    ASTPtr ast = database->getCreateDatabaseQuery();
    auto * ast_create = ast->as<ASTCreateQuery>();

    if (!ast_create || !ast_create->storage)
        return {};

    String engine_full = ast_create->storage->formatWithSecretsHidden();
    static const char * const extra_head = " ENGINE = ";

    if (startsWith(engine_full, extra_head))
        engine_full = engine_full.substr(strlen(extra_head));

    return engine_full;
}
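
The loop in getEngineFull locks by the database's current name and then re-checks identity, because the database can be renamed between reading its name and acquiring the guard. A simplified sketch of that lock-then-revalidate loop; all names are invented, and a single global mutex stands in for per-name DDL guards:

#include <iostream>
#include <map>
#include <memory>
#include <mutex>
#include <string>

struct Database { std::string name; };

std::map<std::string, std::shared_ptr<Database>> catalog;
std::mutex catalog_mutex;

/// Stands in for acquiring a per-name DDL guard.
std::unique_lock<std::mutex> lockByName(const std::string &)
{
    return std::unique_lock<std::mutex>(catalog_mutex);
}

std::shared_ptr<Database> tryGet(const std::string & name)
{
    auto it = catalog.find(name);
    return it == catalog.end() ? nullptr : it->second;
}

bool lockStable(const std::shared_ptr<Database> & db)
{
    while (true)
    {
        std::string name = db->name;
        auto guard = lockByName(name);
        auto locked = tryGet(name);
        if (locked.get() == db.get())
            return true;   /// locked the object we meant to lock
        if (!locked && name == db->name)
            return false;  /// dropped concurrently
        /// Renamed in between: the guard is released at the end of this
        /// iteration and we retry with the new name.
    }
}

int main()
{
    auto db = std::make_shared<Database>(Database{"default"});
    catalog["default"] = db;
    std::cout << (lockStable(db) ? "locked" : "gone") << '\n';
}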

void StorageSystemDatabases::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
{
    const auto access = context->getAccess();
@@ -47,7 +85,8 @@ void StorageSystemDatabases::fillData(MutableColumns & res_columns, ContextPtr c
        res_columns[2]->insert(context->getPath() + database->getDataPath());
        res_columns[3]->insert(database->getMetadataPath());
        res_columns[4]->insert(database->getUUID());
        res_columns[5]->insert(database->getDatabaseComment());
        res_columns[5]->insert(getEngineFull(database));
        res_columns[6]->insert(database->getDatabaseComment());
    }
}

@@ -22,6 +22,9 @@ endif()
if (TARGET ch_rust::blake3)
    set(USE_BLAKE3 1)
endif()
if (TARGET ch_rust::skim)
    set(USE_SKIM 1)
endif()
if (TARGET OpenSSL::SSL)
    set(USE_SSL 1)
endif()
@@ -211,8 +211,8 @@ def test_attach_detach_partition(cluster):

    node.query("ALTER TABLE hdfs_test DETACH PARTITION '2020-01-03'")
    assert node.query("SELECT count(*) FROM hdfs_test FORMAT Values") == "(4096)"
    wait_for_delete_inactive_parts(node, "hdfs_test")
    wait_for_delete_empty_parts(node, "hdfs_test")
    wait_for_delete_inactive_parts(node, "hdfs_test")

    hdfs_objects = fs.listdir("/clickhouse")
    assert len(hdfs_objects) == FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE * 2
@@ -225,8 +225,8 @@ def test_attach_detach_partition(cluster):

    node.query("ALTER TABLE hdfs_test DROP PARTITION '2020-01-03'")
    assert node.query("SELECT count(*) FROM hdfs_test FORMAT Values") == "(4096)"
    wait_for_delete_inactive_parts(node, "hdfs_test")
    wait_for_delete_empty_parts(node, "hdfs_test")
    wait_for_delete_inactive_parts(node, "hdfs_test")

    hdfs_objects = fs.listdir("/clickhouse")
    assert len(hdfs_objects) == FILES_OVERHEAD + FILES_OVERHEAD_PER_PART_WIDE
@@ -237,8 +237,8 @@ def test_attach_detach_partition(cluster):
        settings={"allow_drop_detached": 1},
    )
    assert node.query("SELECT count(*) FROM hdfs_test FORMAT Values") == "(0)"
    wait_for_delete_inactive_parts(node, "hdfs_test")
    wait_for_delete_empty_parts(node, "hdfs_test")
    wait_for_delete_inactive_parts(node, "hdfs_test")

    hdfs_objects = fs.listdir("/clickhouse")
    assert len(hdfs_objects) == FILES_OVERHEAD
@@ -305,8 +305,8 @@ def test_table_manipulations(cluster):

    node.query("TRUNCATE TABLE hdfs_test")
    assert node.query("SELECT count(*) FROM hdfs_test FORMAT Values") == "(0)"
    wait_for_delete_inactive_parts(node, "hdfs_test")
    wait_for_delete_empty_parts(node, "hdfs_test")
    wait_for_delete_inactive_parts(node, "hdfs_test")

    hdfs_objects = fs.listdir("/clickhouse")
    assert len(hdfs_objects) == FILES_OVERHEAD
Some files were not shown because too many files have changed in this diff.