Merge branch 'ClickHouse:master' into fix_uninitialized_orc_data

This commit is contained in:
李扬 2024-11-07 09:13:26 +08:00 committed by GitHub
commit 2cf8f54c5b
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
725 changed files with 22250 additions and 6266 deletions

.gitignore vendored

@@ -159,6 +159,7 @@ website/package-lock.json
/programs/server/store
/programs/server/uuid
/programs/server/coordination
/programs/server/workload
# temporary test files
tests/queries/0_stateless/test_*

.gitmodules vendored

@@ -332,7 +332,7 @@
url = https://github.com/ClickHouse/usearch.git
[submodule "contrib/SimSIMD"]
path = contrib/SimSIMD
url = https://github.com/ashvardanian/SimSIMD.git
url = https://github.com/ClickHouse/SimSIMD.git
[submodule "contrib/FP16"]
path = contrib/FP16
url = https://github.com/Maratyszcza/FP16.git

CHANGELOG.md

@@ -1,4 +1,5 @@
### Table of Contents
**[ClickHouse release v24.10, 2024-10-31](#2410)**<br/>
**[ClickHouse release v24.9, 2024-09-26](#249)**<br/>
**[ClickHouse release v24.8 LTS, 2024-08-20](#248)**<br/>
**[ClickHouse release v24.7, 2024-07-30](#247)**<br/>
@@ -12,6 +13,165 @@
# 2024 Changelog
### <a id="2410"></a> ClickHouse release 24.10, 2024-10-31
#### Backward Incompatible Change
* Allow to write `SETTINGS` before `FORMAT` in a chain of queries with `UNION` when subqueries are inside parentheses. This closes [#39712](https://github.com/ClickHouse/ClickHouse/issues/39712). Change the behavior when a query has the SETTINGS clause specified twice in a sequence: the closest SETTINGS clause now takes precedence for the corresponding subquery. In previous versions, the outermost SETTINGS clause could take precedence over the inner one (see the sketch after this list). [#68614](https://github.com/ClickHouse/ClickHouse/pull/68614) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Reordering of filter conditions from `[PRE]WHERE` clause is now allowed by default. It could be disabled by setting `allow_reorder_prewhere_conditions` to `false`. [#70657](https://github.com/ClickHouse/ClickHouse/pull/70657) ([Nikita Taranov](https://github.com/nickitat)).
* Remove the `idxd-config` library, which has an incompatible license. This also removes the experimental Intel DeflateQPL codec. [#70987](https://github.com/ClickHouse/ClickHouse/pull/70987) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
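A minimal sketch of the first item above (setting names and values are illustrative): with parenthesized subqueries, a trailing `SETTINGS` can now be written before `FORMAT`, and the closest `SETTINGS` clause takes precedence for its own subquery.

```sql
(SELECT 1 AS x SETTINGS max_threads = 1)
UNION ALL
(SELECT 2 AS x)
SETTINGS max_threads = 4
FORMAT JSONEachRow;
```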
#### New Feature
* Allow to grant access to wildcard prefixes: `GRANT SELECT ON db.table_prefix_* TO user`. [#65311](https://github.com/ClickHouse/ClickHouse/pull/65311) ([pufit](https://github.com/pufit)).
* If you press the space bar during query runtime, the client will display a real-time table with detailed metrics. You can enable it globally with the new `--progress-table` option in clickhouse-client; the new `--enable-progress-table-toggle` option is associated with `--progress-table` and toggles the rendering of the progress table with the space bar. [#63689](https://github.com/ClickHouse/ClickHouse/pull/63689) ([Maria Khristenko](https://github.com/mariaKhr)), [#70423](https://github.com/ClickHouse/ClickHouse/pull/70423) ([Julia Kartseva](https://github.com/jkartseva)).
* Allow to cache read files for object storage table engines and data lakes using hash from ETag + file path as cache key. [#70135](https://github.com/ClickHouse/ClickHouse/pull/70135) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Support creating a table with a query: `CREATE TABLE ... CLONE AS ...`. It clones the source table's schema and then attaches all partitions to the newly created table. This feature is only supported with tables of the `MergeTree` family. Closes [#65015](https://github.com/ClickHouse/ClickHouse/issues/65015). [#69091](https://github.com/ClickHouse/ClickHouse/pull/69091) ([tuanpach](https://github.com/tuanpach)).
* Add a new system table, `system.query_metric_log` which contains history of memory and metric values from table system.events for individual queries, periodically flushed to disk. [#66532](https://github.com/ClickHouse/ClickHouse/pull/66532) ([Pablo Marcos](https://github.com/pamarcos)).
* A simple SELECT query can be written with implicit SELECT to enable calculator-style expressions, e.g., `ch "1 + 2"`. This is controlled by a new setting, `implicit_select`. [#68502](https://github.com/ClickHouse/ClickHouse/pull/68502) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Support the `--copy` mode for clickhouse local as a shortcut for format conversion [#68503](https://github.com/ClickHouse/ClickHouse/issues/68503). [#68583](https://github.com/ClickHouse/ClickHouse/pull/68583) ([Denis Hananein](https://github.com/denis-hananein)).
* Add a builtin HTML page for visualizing merges which is available at the `/merges` path. [#70821](https://github.com/ClickHouse/ClickHouse/pull/70821) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add support for `arrayUnion` function. [#68989](https://github.com/ClickHouse/ClickHouse/pull/68989) ([Peter Nguyen](https://github.com/petern48)).
* Allow parametrised SQL aliases. [#50665](https://github.com/ClickHouse/ClickHouse/pull/50665) ([Anton Kozlov](https://github.com/tonickkozlov)).
* A new aggregate function `quantileExactWeightedInterpolated`, an interpolated version of `quantileExactWeighted`. Some people may wonder why we need a new `quantileExactWeightedInterpolated` since we already have `quantileExactInterpolatedWeighted`. The reason is that the new one is more accurate than the old one. This is for Spark compatibility. [#69619](https://github.com/ClickHouse/ClickHouse/pull/69619) ([李扬](https://github.com/taiyang-li)).
* A new function `arrayElementOrNull`. It returns `NULL` if the array index is out of range or a `Map` key is not found (see the illustration after this list). [#69646](https://github.com/ClickHouse/ClickHouse/pull/69646) ([李扬](https://github.com/taiyang-li)).
* Allow users to specify regular expressions through the new `message_regexp` and `message_regexp_negative` fields in the `config.xml` file to filter out log messages. The filtering is applied to the formatted, un-colored text for the most intuitive developer experience. [#69657](https://github.com/ClickHouse/ClickHouse/pull/69657) ([Peter Nguyen](https://github.com/petern48)).
* Added `RIPEMD160` function, which computes the RIPEMD-160 cryptographic hash of a string. Example: `SELECT HEX(RIPEMD160('The quick brown fox jumps over the lazy dog'))` returns `37F332F68DB77BD9D7EDD4969571AD671CF9DD3B`. [#70087](https://github.com/ClickHouse/ClickHouse/pull/70087) ([Dergousov Maxim](https://github.com/m7kss1)).
* Support reading `Iceberg` tables on `HDFS`. [#70268](https://github.com/ClickHouse/ClickHouse/pull/70268) ([flynn](https://github.com/ucasfl)).
* Support for CTE in the form of `WITH ... INSERT`, as previously we only supported `INSERT ... WITH ...`. [#70593](https://github.com/ClickHouse/ClickHouse/pull/70593) ([Shichao Jin](https://github.com/jsc0218)).
* MongoDB integration: support for all MongoDB types, support for WHERE and ORDER BY statements on the MongoDB side, and restriction of expressions unsupported by MongoDB. Note that the new integration is disabled by default; to use it, please set `<use_legacy_mongodb_integration>` to `false` in the server config. [#63279](https://github.com/ClickHouse/ClickHouse/pull/63279) ([Kirill Nikiforov](https://github.com/allmazz)).
* A new function `getSettingOrDefault` was added to return the default value and avoid an exception if a custom setting is not found in the current profile. [#69917](https://github.com/ClickHouse/ClickHouse/pull/69917) ([Shankar](https://github.com/shiyer7474)).
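A quick illustration of `arrayElementOrNull` from the list above, contrasted with plain `arrayElement` (a sketch of the behavior the entry describes):

```sql
SELECT arrayElement([1, 2, 3], 5);            -- 0, the default value for the element type
SELECT arrayElementOrNull([1, 2, 3], 5);      -- NULL, index out of range
SELECT arrayElementOrNull(map('a', 1), 'b');  -- NULL, missing Map key
```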
#### Experimental feature
* Refreshable materialized views are production ready. [#70550](https://github.com/ClickHouse/ClickHouse/pull/70550) ([Michael Kolupaev](https://github.com/al13n321)). Refreshable materialized views are now supported in Replicated databases. [#60669](https://github.com/ClickHouse/ClickHouse/pull/60669) ([Michael Kolupaev](https://github.com/al13n321)).
* Parallel replicas are moved from experimental to beta. Reworked settings that control the behavior of parallel replicas algorithms. A quick recap: ClickHouse has four different algorithms for parallel reading involving multiple replicas, which is reflected in the setting `parallel_replicas_mode`; the default value for it is `read_tasks`. Additionally, the toggle-switch setting `enable_parallel_replicas` has been added. [#63151](https://github.com/ClickHouse/ClickHouse/pull/63151) ([Alexey Milovidov](https://github.com/alexey-milovidov)), ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Support for the `Dynamic` type in most functions by executing them on the internal types inside `Dynamic` (see the sketch after this list). [#69691](https://github.com/ClickHouse/ClickHouse/pull/69691) ([Pavel Kruglov](https://github.com/Avogar)).
* Allow to read/write the `JSON` type as a binary string in `RowBinary` format under settings `input_format_binary_read_json_as_string/output_format_binary_write_json_as_string`. [#70288](https://github.com/ClickHouse/ClickHouse/pull/70288) ([Pavel Kruglov](https://github.com/Avogar)).
* Allow to serialize/deserialize `JSON` column as single String column in the Native format. For output use setting `output_format_native_write_json_as_string`. For input, use serialization version `1` before the column data. [#70312](https://github.com/ClickHouse/ClickHouse/pull/70312) ([Pavel Kruglov](https://github.com/Avogar)).
* Introduced a special (experimental) mode of a merge selector for MergeTree tables which makes it more aggressive for the partitions that are close to the limit by the number of parts. It is controlled by the `merge_selector_use_blurry_base` MergeTree-level setting. [#70645](https://github.com/ClickHouse/ClickHouse/pull/70645) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Implement generic ser/de between Avro's `Union` and ClickHouse's `Variant` types. Resolves [#69713](https://github.com/ClickHouse/ClickHouse/issues/69713). [#69712](https://github.com/ClickHouse/ClickHouse/pull/69712) ([Jiří Kozlovský](https://github.com/jirislav)).
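A short sketch of the `Dynamic` item above, assuming the experimental type is enabled via `allow_experimental_dynamic_type` (as in 24.10):

```sql
SET allow_experimental_dynamic_type = 1;
CREATE TABLE t (d Dynamic) ENGINE = Memory;
INSERT INTO t VALUES (42), ('hello'), ([1, 2, 3]);
-- Functions are executed on the internal types stored inside Dynamic:
SELECT d, dynamicType(d), d + 1 FROM t WHERE dynamicType(d) = 'Int64';
```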
#### Performance Improvement
* Refactor `IDisk` and `IObjectStorage` for better performance. Tables from `plain` and `plain_rewritable` object storages will initialize faster. [#68146](https://github.com/ClickHouse/ClickHouse/pull/68146) ([Alexey Milovidov](https://github.com/alexey-milovidov), [Julia Kartseva](https://github.com/jkartseva)). Do not call the LIST object storage API when determining if a file or directory exists on the plain rewritable disk, as it can be cost-inefficient. [#70852](https://github.com/ClickHouse/ClickHouse/pull/70852) ([Julia Kartseva](https://github.com/jkartseva)). Reduce the number of object storage HEAD API requests in the plain_rewritable disk. [#70915](https://github.com/ClickHouse/ClickHouse/pull/70915) ([Julia Kartseva](https://github.com/jkartseva)).
* Added an ability to parse data directly into sparse columns. [#69828](https://github.com/ClickHouse/ClickHouse/pull/69828) ([Anton Popov](https://github.com/CurtizJ)).
* Improved performance of parsing formats with a high number of missed values (e.g. `JSONEachRow`). [#69875](https://github.com/ClickHouse/ClickHouse/pull/69875) ([Anton Popov](https://github.com/CurtizJ)).
* Support parallel reading of Parquet row groups and prefetching of row groups in single-threaded mode. [#69862](https://github.com/ClickHouse/ClickHouse/pull/69862) ([LiuNeng](https://github.com/liuneng1994)).
* Support minmax index for `pointInPolygon`. [#62085](https://github.com/ClickHouse/ClickHouse/pull/62085) ([JackyWoo](https://github.com/JackyWoo)).
* Use bloom filters when reading Parquet files. [#62966](https://github.com/ClickHouse/ClickHouse/pull/62966) ([Arthur Passos](https://github.com/arthurpassos)).
* Lock-free parts rename to avoid INSERT affecting SELECT (due to the parts lock). Under normal circumstances with `fsync_part_directory`, the QPS of SELECT with parallel INSERTs increased 2x; under heavy load the effect is even bigger. Note, this only includes `ReplicatedMergeTree` for now. [#64955](https://github.com/ClickHouse/ClickHouse/pull/64955) ([Azat Khuzhin](https://github.com/azat)).
* Respect `ttl_only_drop_parts` on `materialize ttl`; only read necessary columns to recalculate TTL and drop parts by replacing them with an empty one. [#65488](https://github.com/ClickHouse/ClickHouse/pull/65488) ([Andrey Zvonov](https://github.com/zvonand)).
* Optimized thread creation in the ThreadPool to minimize lock contention. Thread creation is now performed outside of the critical section to avoid delays in job scheduling and thread management under high load conditions. This leads to a much more responsive ClickHouse under heavy concurrent load. [#68694](https://github.com/ClickHouse/ClickHouse/pull/68694) ([filimonov](https://github.com/filimonov)).
* Enable reading `LowCardinality` string columns from `ORC`. [#69481](https://github.com/ClickHouse/ClickHouse/pull/69481) ([李扬](https://github.com/taiyang-li)).
* Use `LowCardinality` for `ProfileEvents` in system logs such as `part_log`, `query_views_log`, `filesystem_cache_log`. [#70152](https://github.com/ClickHouse/ClickHouse/pull/70152) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Improve performance of the `fromUnixTimestamp`/`toUnixTimestamp` functions (a round trip of the two is shown after this list). [#71042](https://github.com/ClickHouse/ClickHouse/pull/71042) ([kevinyhzou](https://github.com/KevinyhZou)).
* Don't disable nonblocking read from the page cache for the entire server when reading from blocking I/O. This was leading to poorer performance when a single filesystem (e.g., tmpfs) didn't support the `preadv2` syscall while others did. [#70299](https://github.com/ClickHouse/ClickHouse/pull/70299) ([Antonio Andelic](https://github.com/antonio2368)).
* `ALTER TABLE .. REPLACE PARTITION` doesn't wait anymore for mutations/merges that happen in other partitions. [#59138](https://github.com/ClickHouse/ClickHouse/pull/59138) ([Vasily Nemkov](https://github.com/Enmk)).
* Don't do validation when synchronizing ACLs from Keeper; they are validated during creation. It shouldn't matter that much, but there are installations with tens of thousands of users or even more, and the unnecessary hash validation can take a long time to finish during server startup (it synchronizes everything from Keeper). [#70644](https://github.com/ClickHouse/ClickHouse/pull/70644) ([Raúl Marín](https://github.com/Algunenano)).
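For reference, a round trip of the two functions from the `fromUnixTimestamp` item above (a minimal sketch; the rendered `DateTime` assumes a UTC server timezone):

```sql
SELECT fromUnixTimestamp(1730371200) AS dt;             -- 2024-10-31 10:40:00
SELECT toUnixTimestamp(fromUnixTimestamp(1730371200));  -- 1730371200
```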
#### Improvement
* `CREATE TABLE AS` will copy `PRIMARY KEY`, `ORDER BY`, and similar clauses (of `MergeTree` tables). [#69739](https://github.com/ClickHouse/ClickHouse/pull/69739) ([sakulali](https://github.com/sakulali)).
* Support 64-bit XID in Keeper. It can be enabled with the `use_xid_64` configuration value. [#69908](https://github.com/ClickHouse/ClickHouse/pull/69908) ([Antonio Andelic](https://github.com/antonio2368)).
* Command-line arguments for Bool settings are set to true when no value is provided for the argument (e.g. `clickhouse-client --optimize_aggregation_in_order --query "SELECT 1"`). [#70459](https://github.com/ClickHouse/ClickHouse/pull/70459) ([davidtsuk](https://github.com/davidtsuk)).
* Added user-level settings `min_free_disk_bytes_to_throw_insert` and `min_free_disk_ratio_to_throw_insert` to prevent insertions on disks that are almost full. [#69755](https://github.com/ClickHouse/ClickHouse/pull/69755) ([Marco Vilas Boas](https://github.com/marco-vb)).
* Embedded documentation for settings will be strictly more detailed and complete than the documentation on the website. This is the first step before making the website documentation always auto-generated from the source code. This has long-standing implications: - it will be guaranteed to have every setting; - there is no chance of having default values obsolete; - we can generate this documentation for each ClickHouse version; - the documentation can be displayed by the server itself even without Internet access. Generate the docs on the website from the source code. [#70289](https://github.com/ClickHouse/ClickHouse/pull/70289) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Allow an empty needle in the function `replace`, the same behavior as PostgreSQL (illustrated after this list). [#69918](https://github.com/ClickHouse/ClickHouse/pull/69918) ([zhanglistar](https://github.com/zhanglistar)).
* Allow an empty needle in the functions `replaceRegexp*`. [#70053](https://github.com/ClickHouse/ClickHouse/pull/70053) ([zhanglistar](https://github.com/zhanglistar)).
* Symbolic links for tables in the `data/database_name/` directory are created for the actual paths to the table's data, depending on the storage policy, instead of the `store/...` directory on the default disk. [#61777](https://github.com/ClickHouse/ClickHouse/pull/61777) ([Kirill](https://github.com/kirillgarbar)).
* While parsing an `Enum` field from `JSON`, a string containing an integer will be interpreted as the corresponding `Enum` element. This closes [#65119](https://github.com/ClickHouse/ClickHouse/issues/65119). [#66801](https://github.com/ClickHouse/ClickHouse/pull/66801) ([scanhex12](https://github.com/scanhex12)).
* Allow `TRIM`-ing a `LEADING` or `TRAILING` empty string as a no-op (illustrated after this list). Closes [#67792](https://github.com/ClickHouse/ClickHouse/issues/67792). [#68455](https://github.com/ClickHouse/ClickHouse/pull/68455) ([Peter Nguyen](https://github.com/petern48)).
* Improve compatibility of `cast(timestamp as String)` with Spark. [#69179](https://github.com/ClickHouse/ClickHouse/pull/69179) ([Wenzheng Liu](https://github.com/lwz9103)).
* Always use the new analyzer to calculate constant expressions when `enable_analyzer` is set to `true`. Support calculation of `executable` table function arguments without using `SELECT` query for constant expressions. [#69292](https://github.com/ClickHouse/ClickHouse/pull/69292) ([Dmitry Novik](https://github.com/novikd)).
* Add a setting `enable_secure_identifiers` to disallow identifiers with special characters. [#69411](https://github.com/ClickHouse/ClickHouse/pull/69411) ([tuanpach](https://github.com/tuanpach)).
* Add `show_create_query_identifier_quoting_rule` to define identifier quoting behavior in the `SHOW CREATE TABLE` query result. Possible values: - `user_display`: when the identifier is a keyword. - `when_necessary`: when the identifier is one of `{"distinct", "all", "table"}`, or when it could lead to ambiguity: column names, dictionary attribute names. - `always`: always quote identifiers. [#69448](https://github.com/ClickHouse/ClickHouse/pull/69448) ([tuanpach](https://github.com/tuanpach)).
* Improve restoring of access entities' dependencies [#69563](https://github.com/ClickHouse/ClickHouse/pull/69563) ([Vitaly Baranov](https://github.com/vitlibar)).
* If you run `clickhouse-client` or another CLI application and it starts up slowly due to an overloaded server, and you start typing your query, such as `SELECT`, the previous versions would display the remainder of the terminal echo contents before printing the greeting message, such as `SELECTClickHouse local version 24.10.1.1.` instead of `ClickHouse local version 24.10.1.1.`. Now it is fixed. This closes [#31696](https://github.com/ClickHouse/ClickHouse/issues/31696). [#69856](https://github.com/ClickHouse/ClickHouse/pull/69856) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add new column `readonly_duration` to the `system.replicas` table. Needed to be able to distinguish actual readonly replicas from sentinel ones in alerts. [#69871](https://github.com/ClickHouse/ClickHouse/pull/69871) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* Change the type of the `join_output_by_rowlist_perkey_rows_threshold` setting to unsigned integer. [#69886](https://github.com/ClickHouse/ClickHouse/pull/69886) ([kevinyhzou](https://github.com/KevinyhZou)).
* Enhance OpenTelemetry span logging to include query settings. [#70011](https://github.com/ClickHouse/ClickHouse/pull/70011) ([sharathks118](https://github.com/sharathks118)).
* Add diagnostic info about higher-order array functions if lambda result type is unexpected. [#70093](https://github.com/ClickHouse/ClickHouse/pull/70093) ([ttanay](https://github.com/ttanay)).
* Keeper improvement: less locking during cluster changes. [#70275](https://github.com/ClickHouse/ClickHouse/pull/70275) ([Antonio Andelic](https://github.com/antonio2368)).
* Add `WITH IMPLICIT` and `FINAL` keywords to the `SHOW GRANTS` command. Fix a minor bug with implicit grants: [#70094](https://github.com/ClickHouse/ClickHouse/issues/70094). [#70293](https://github.com/ClickHouse/ClickHouse/pull/70293) ([pufit](https://github.com/pufit)).
* Respect `compatibility` for MergeTree settings. The `compatibility` value is taken from the `default` profile on server startup, and default MergeTree settings are changed accordingly. Further changes of the `compatibility` setting do not affect MergeTree settings. [#70322](https://github.com/ClickHouse/ClickHouse/pull/70322) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Avoid spamming the logs with large HTTP response bodies in case of errors during inter-server communication. [#70487](https://github.com/ClickHouse/ClickHouse/pull/70487) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Added a new setting `max_parts_to_move` to control the maximum number of parts that can be moved at once. [#70520](https://github.com/ClickHouse/ClickHouse/pull/70520) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Limit the frequency of certain log messages. [#70601](https://github.com/ClickHouse/ClickHouse/pull/70601) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* `CHECK TABLE` with `PART` qualifier was incorrectly formatted in the client. [#70660](https://github.com/ClickHouse/ClickHouse/pull/70660) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Support writing the column index and the offset index using the Parquet native writer. [#70669](https://github.com/ClickHouse/ClickHouse/pull/70669) ([LiuNeng](https://github.com/liuneng1994)).
* Support parsing `DateTime64` for microsecond and timezone in joda syntax ("joda" is a popular Java library for date and time, and the "joda syntax" is that library's style). [#70737](https://github.com/ClickHouse/ClickHouse/pull/70737) ([kevinyhzou](https://github.com/KevinyhZou)).
* Changed the approach for figuring out whether a cloud storage supports [batch delete](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html) or not. [#70786](https://github.com/ClickHouse/ClickHouse/pull/70786) ([Vitaly Baranov](https://github.com/vitlibar)).
* Support for Parquet page v2 in the native reader. [#70807](https://github.com/ClickHouse/ClickHouse/pull/70807) ([Arthur Passos](https://github.com/arthurpassos)).
* Added a check for whether a table has both `storage_policy` and `disk` set, and a check that a new storage policy is compatible with the old one when the `disk` setting is used. [#70839](https://github.com/ClickHouse/ClickHouse/pull/70839) ([Kirill](https://github.com/kirillgarbar)).
* Add `system.s3_queue_settings` and `system.azure_queue_settings`. [#70841](https://github.com/ClickHouse/ClickHouse/pull/70841) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Functions `base58Encode` and `base58Decode` now accept arguments of type `FixedString`. Example: `SELECT base58Encode(toFixedString('plaintext', 9));`. [#70846](https://github.com/ClickHouse/ClickHouse/pull/70846) ([Faizan Patel](https://github.com/faizan2786)).
* Add the `partition` column to every entry type of the part log. Previously, it was set only for some entries. This closes [#70819](https://github.com/ClickHouse/ClickHouse/issues/70819). [#70848](https://github.com/ClickHouse/ClickHouse/pull/70848) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add `MergeStart` and `MutateStart` events into `system.part_log` which helps with merges analysis and visualization. [#70850](https://github.com/ClickHouse/ClickHouse/pull/70850) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add a profile event about the number of merged source parts. It allows the monitoring of the fanout of the merge tree in production. [#70908](https://github.com/ClickHouse/ClickHouse/pull/70908) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Background downloads to the filesystem cache were re-enabled. [#70929](https://github.com/ClickHouse/ClickHouse/pull/70929) ([Nikita Taranov](https://github.com/nickitat)).
* Add a new merge selector algorithm, named `Trivial`, for professional usage only. It is worse than the `Simple` merge selector. [#70969](https://github.com/ClickHouse/ClickHouse/pull/70969) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Support for atomic `CREATE OR REPLACE VIEW`. [#70536](https://github.com/ClickHouse/ClickHouse/pull/70536) ([tuanpach](https://github.com/tuanpach)).
* Added `strict_once` mode to the aggregate function `windowFunnel` to avoid counting one event several times in case it matches multiple conditions; closes [#21835](https://github.com/ClickHouse/ClickHouse/issues/21835). [#69738](https://github.com/ClickHouse/ClickHouse/pull/69738) ([Vladimir Cherkasov](https://github.com/vdimir)).
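A small illustration of the empty-needle `replace` and empty-string `TRIM` items above; per those entries, both are no-ops that return the input unchanged (a hedged sketch of the described behavior):

```sql
SELECT replace('clickhouse', '', 'x');    -- 'clickhouse': an empty needle is a no-op
SELECT trim(LEADING '' FROM '  padded');  -- '  padded': an empty trim string is a no-op
```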
#### Bug Fix (user-visible misbehavior in an official stable release)
* Apply configuration updates in global context object. It fixes issues like [#62308](https://github.com/ClickHouse/ClickHouse/issues/62308). [#62944](https://github.com/ClickHouse/ClickHouse/pull/62944) ([Amos Bird](https://github.com/amosbird)).
* Fix `ReadSettings` not using user-set values, because only the defaults were used. [#65625](https://github.com/ClickHouse/ClickHouse/pull/65625) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix type mismatch issue in `sumMapFiltered` when using signed arguments. [#58408](https://github.com/ClickHouse/ClickHouse/pull/58408) ([Chen768959](https://github.com/Chen768959)).
* Fix the monotonicity of `toHour`-like conversion functions when an optional time zone argument is passed. [#60264](https://github.com/ClickHouse/ClickHouse/pull/60264) ([Amos Bird](https://github.com/amosbird)).
* Relax `supportsPrewhere` check for `Merge` tables. This fixes [#61064](https://github.com/ClickHouse/ClickHouse/issues/61064). It was hardened unnecessarily in [#60082](https://github.com/ClickHouse/ClickHouse/issues/60082). [#61091](https://github.com/ClickHouse/ClickHouse/pull/61091) ([Amos Bird](https://github.com/amosbird)).
* Fix `use_concurrency_control` setting handling for proper `concurrent_threads_soft_limit_num` limit enforcing. This enables concurrency control by default because previously it was broken. [#61473](https://github.com/ClickHouse/ClickHouse/pull/61473) ([Sergei Trifonov](https://github.com/serxa)).
* Fix incorrect `JOIN ON` section optimization in case of `IS NULL` check under any other function (like `NOT`) that may lead to wrong results. Closes [#67915](https://github.com/ClickHouse/ClickHouse/issues/67915). [#68049](https://github.com/ClickHouse/ClickHouse/pull/68049) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Prevent `ALTER` queries that would make the `CREATE` query of tables invalid. [#68574](https://github.com/ClickHouse/ClickHouse/pull/68574) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Fix inconsistent AST formatting for `negate` (`-`) and `NOT` functions with tuples and arrays. [#68600](https://github.com/ClickHouse/ClickHouse/pull/68600) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Fix insertion of incomplete type into `Dynamic` during deserialization. It could lead to `Parameter out of bound` errors. [#69291](https://github.com/ClickHouse/ClickHouse/pull/69291) ([Pavel Kruglov](https://github.com/Avogar)).
* Zero-copy replication, which is experimental and should not be used in production: fix an infinite loop after `restore replica` in the replicated merge tree with zero copy. [#69293](https://github.com/ClickHouse/ClickHouse/pull/69293) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Restore the default value of `processing_threads_num` to the number of CPU cores in storage `S3Queue`. [#69384](https://github.com/ClickHouse/ClickHouse/pull/69384) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Bypass try/catch flow when de/serializing nested repeated protobuf to nested columns (fixes [#41971](https://github.com/ClickHouse/ClickHouse/issues/41971)). [#69556](https://github.com/ClickHouse/ClickHouse/pull/69556) ([Eliot Hautefeuille](https://github.com/hileef)).
* Fix crash during insertion into FixedString column in PostgreSQL engine. [#69584](https://github.com/ClickHouse/ClickHouse/pull/69584) ([Pavel Kruglov](https://github.com/Avogar)).
* Fix crash when executing `create view t as (with recursive 42 as ttt select ttt);`. [#69676](https://github.com/ClickHouse/ClickHouse/pull/69676) ([Han Fei](https://github.com/hanfei1991)).
* Fixed `maxMapState` throwing 'Bad get' if value type is DateTime64. [#69787](https://github.com/ClickHouse/ClickHouse/pull/69787) ([Michael Kolupaev](https://github.com/al13n321)).
* Fix `getSubcolumn` with `LowCardinality` columns by overriding `useDefaultImplementationForLowCardinalityColumns` to return `true`. [#69831](https://github.com/ClickHouse/ClickHouse/pull/69831) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
* Fix permanent blocked distributed sends if a DROP of distributed table failed. [#69843](https://github.com/ClickHouse/ClickHouse/pull/69843) ([Azat Khuzhin](https://github.com/azat)).
* Fix non-cancellable queries containing WITH FILL with NaN keys. This closes [#69261](https://github.com/ClickHouse/ClickHouse/issues/69261). [#69845](https://github.com/ClickHouse/ClickHouse/pull/69845) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix analyzer default with old compatibility value. [#69895](https://github.com/ClickHouse/ClickHouse/pull/69895) ([Raúl Marín](https://github.com/Algunenano)).
* Don't check dependencies during `CREATE OR REPLACE VIEW` during DROP of the old table. Previously, the `CREATE OR REPLACE` query failed when there were dependent tables of the recreated view. [#69907](https://github.com/ClickHouse/ClickHouse/pull/69907) ([Pavel Kruglov](https://github.com/Avogar)).
* A fix related to `Decimal` handling. Fixes [#69730](https://github.com/ClickHouse/ClickHouse/issues/69730). [#69978](https://github.com/ClickHouse/ClickHouse/pull/69978) ([Arthur Passos](https://github.com/arthurpassos)).
* Now DEFINER/INVOKER will work with parameterized views. [#69984](https://github.com/ClickHouse/ClickHouse/pull/69984) ([pufit](https://github.com/pufit)).
* Fix parsing for view's definers. [#69985](https://github.com/ClickHouse/ClickHouse/pull/69985) ([pufit](https://github.com/pufit)).
* Fixed a bug when the timezone could change the result of a query with `Date` or `Date32` arguments. [#70036](https://github.com/ClickHouse/ClickHouse/pull/70036) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Fixes `Block structure mismatch` for queries with nested views and `WHERE` condition. Fixes [#66209](https://github.com/ClickHouse/ClickHouse/issues/66209). [#70054](https://github.com/ClickHouse/ClickHouse/pull/70054) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Avoid reusing columns among different named tuples when evaluating `tuple` functions. This fixes [#70022](https://github.com/ClickHouse/ClickHouse/issues/70022). [#70103](https://github.com/ClickHouse/ClickHouse/pull/70103) ([Amos Bird](https://github.com/amosbird)).
* Fix wrong LOGICAL_ERROR when replacing literals in ranges. [#70122](https://github.com/ClickHouse/ClickHouse/pull/70122) ([Pablo Marcos](https://github.com/pamarcos)).
* Check for the `Nullable(Nothing)` type during `ALTER TABLE MODIFY COLUMN/QUERY` to prevent tables with such a data type. [#70123](https://github.com/ClickHouse/ClickHouse/pull/70123) ([Pavel Kruglov](https://github.com/Avogar)).
* Proper error message for illegal query `JOIN ... ON *` , close [#68650](https://github.com/ClickHouse/ClickHouse/issues/68650). [#70124](https://github.com/ClickHouse/ClickHouse/pull/70124) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Fix wrong result with skipping index. [#70127](https://github.com/ClickHouse/ClickHouse/pull/70127) ([Raúl Marín](https://github.com/Algunenano)).
* Fix data race in ColumnObject/ColumnTuple decompress method that could lead to heap use after free. [#70137](https://github.com/ClickHouse/ClickHouse/pull/70137) ([Pavel Kruglov](https://github.com/Avogar)).
* Fix a possible hang in `ALTER COLUMN` with the `Dynamic` type. [#70144](https://github.com/ClickHouse/ClickHouse/pull/70144) ([Pavel Kruglov](https://github.com/Avogar)).
* Now ClickHouse will consider more errors as retriable and will not mark data parts as broken in case of such errors. [#70145](https://github.com/ClickHouse/ClickHouse/pull/70145) ([alesapin](https://github.com/alesapin)).
* Use correct `max_types` parameter during Dynamic type creation for JSON subcolumn. [#70147](https://github.com/ClickHouse/ClickHouse/pull/70147) ([Pavel Kruglov](https://github.com/Avogar)).
* Fix the password being displayed in `system.query_log` for users with bcrypt password authentication method. [#70148](https://github.com/ClickHouse/ClickHouse/pull/70148) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix event counter for the native interface (InterfaceNativeSendBytes). [#70153](https://github.com/ClickHouse/ClickHouse/pull/70153) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix possible crash related to JSON columns. [#70172](https://github.com/ClickHouse/ClickHouse/pull/70172) ([Pavel Kruglov](https://github.com/Avogar)).
* Fix multiple issues with arrayMin and arrayMax. [#70207](https://github.com/ClickHouse/ClickHouse/pull/70207) ([Raúl Marín](https://github.com/Algunenano)).
* Respect the setting `allow_simdjson` in the JSON type parser. [#70218](https://github.com/ClickHouse/ClickHouse/pull/70218) ([Pavel Kruglov](https://github.com/Avogar)).
* Fix a null pointer dereference on creating a materialized view with two selects and an `INTERSECT`, e.g. `CREATE MATERIALIZED VIEW v0 AS (SELECT 1) INTERSECT (SELECT 1);`. [#70264](https://github.com/ClickHouse/ClickHouse/pull/70264) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* Don't modify global settings with startup scripts. Previously, changing a setting in a startup script would change it globally. [#70310](https://github.com/ClickHouse/ClickHouse/pull/70310) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix `ALTER` of the `Dynamic` type with a reduced `max_types` parameter, which could lead to a server crash. [#70328](https://github.com/ClickHouse/ClickHouse/pull/70328) ([Pavel Kruglov](https://github.com/Avogar)).
* Fix crash when using WITH FILL incorrectly. [#70338](https://github.com/ClickHouse/ClickHouse/pull/70338) ([Raúl Marín](https://github.com/Algunenano)).
* Fix possible use-after-free in `SYSTEM DROP FORMAT SCHEMA CACHE FOR Protobuf`. [#70358](https://github.com/ClickHouse/ClickHouse/pull/70358) ([Azat Khuzhin](https://github.com/azat)).
* Fix crash during GROUP BY JSON sub-object subcolumn. [#70374](https://github.com/ClickHouse/ClickHouse/pull/70374) ([Pavel Kruglov](https://github.com/Avogar)).
* Don't prefetch parts for vertical merges if part has no rows. [#70452](https://github.com/ClickHouse/ClickHouse/pull/70452) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix crash in WHERE with lambda functions. [#70464](https://github.com/ClickHouse/ClickHouse/pull/70464) ([Raúl Marín](https://github.com/Algunenano)).
* Fix table creation with `CREATE ... AS table_function(...)` with database `Replicated` and unavailable table function source on secondary replica. [#70511](https://github.com/ClickHouse/ClickHouse/pull/70511) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Ignore all output on async insert with `wait_for_async_insert=1`. Closes [#62644](https://github.com/ClickHouse/ClickHouse/issues/62644). [#70530](https://github.com/ClickHouse/ClickHouse/pull/70530) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* Ignore `frozen_metadata.txt` while traversing the shadow directory from `system.remote_data_paths`. [#70590](https://github.com/ClickHouse/ClickHouse/pull/70590) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Fix creation of stateful window functions on misaligned memory. [#70631](https://github.com/ClickHouse/ClickHouse/pull/70631) ([Raúl Marín](https://github.com/Algunenano)).
* Fixed rare crashes in `SELECT`-s and merges after adding a column of `Array` type with non-empty default expression. [#70695](https://github.com/ClickHouse/ClickHouse/pull/70695) ([Anton Popov](https://github.com/CurtizJ)).
* Inserts into the `s3` table function now respect query settings. [#70696](https://github.com/ClickHouse/ClickHouse/pull/70696) ([Vladimir Cherkasov](https://github.com/vdimir)).
* Fix infinite recursion when inferring a protobuf schema when skipping unsupported fields is enabled. [#70697](https://github.com/ClickHouse/ClickHouse/pull/70697) ([Raúl Marín](https://github.com/Algunenano)).
* Disable `enable_named_columns_in_function_tuple` by default. [#70833](https://github.com/ClickHouse/ClickHouse/pull/70833) ([Raúl Marín](https://github.com/Algunenano)).
* Fix the `S3Queue` table engine setting `processing_threads_num` not being effective in case it was deduced from the number of CPU cores on the server. [#70837](https://github.com/ClickHouse/ClickHouse/pull/70837) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Normalize named tuple arguments in aggregation states. This fixes [#69732](https://github.com/ClickHouse/ClickHouse/issues/69732) . [#70853](https://github.com/ClickHouse/ClickHouse/pull/70853) ([Amos Bird](https://github.com/amosbird)).
* Fix a logical error due to negative zeros in the two-level hash table. This closes [#70973](https://github.com/ClickHouse/ClickHouse/issues/70973). [#70979](https://github.com/ClickHouse/ClickHouse/pull/70979) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix `LIMIT BY` and `LIMIT WITH TIES` for distributed queries and parallel replicas (the clause is illustrated just below). [#70880](https://github.com/ClickHouse/ClickHouse/pull/70880) ([Nikita Taranov](https://github.com/nickitat)).
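For context on the last fix above, a minimal example of the `WITH TIES` clause (it keeps additional rows that tie with the last returned row on the `ORDER BY` key):

```sql
SELECT number % 3 AS k
FROM numbers(10)
ORDER BY k
LIMIT 2 WITH TIES;
-- Returns four rows with k = 0: the two requested rows plus two ties.
```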
### <a id="249"></a> ClickHouse release 24.9, 2024-09-26
#### Backward Incompatible Change
@@ -328,6 +488,7 @@
* Remove `is_deterministic` field from the `system.functions` table. [#66630](https://github.com/ClickHouse/ClickHouse/pull/66630) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Function `tuple` will now try to construct named tuples in query (controlled by `enable_named_columns_in_function_tuple`). Introduce function `tupleNames` to extract names from tuples. [#54881](https://github.com/ClickHouse/ClickHouse/pull/54881) ([Amos Bird](https://github.com/amosbird)).
* Change how deduplication for materialized views works. Fixed a lot of cases, such as: on the destination table, data split into 2 or more blocks was considered duplicate when those blocks were inserted in parallel; on the MV destination table, equal blocks were deduplicated, which happens when an MV often produces equal data for different input data due to aggregation; on the MV destination table, equal blocks coming from different MVs were deduplicated. [#61601](https://github.com/ClickHouse/ClickHouse/pull/61601) ([Sema Checherinda](https://github.com/CheSema)).
* Functions `bitShiftLeft` and `bitShiftRight` return an error for out-of-bounds shift positions. [#65838](https://github.com/ClickHouse/ClickHouse/pull/65838) ([Pablo Marcos](https://github.com/pamarcos)).
#### New Feature
* Add `ASOF JOIN` support for `full_sorting_join` algorithm. [#55051](https://github.com/ClickHouse/ClickHouse/pull/55051) ([vdimir](https://github.com/vdimir)).
@@ -439,7 +600,6 @@
* Functions `bitTest`, `bitTestAll`, and `bitTestAny` now return an error if the specified bit index is out-of-bounds [#65818](https://github.com/ClickHouse/ClickHouse/pull/65818) ([Pablo Marcos](https://github.com/pamarcos)).
* Setting `join_any_take_last_row` is supported in any query with hash join. [#65820](https://github.com/ClickHouse/ClickHouse/pull/65820) ([vdimir](https://github.com/vdimir)).
* Better handling of join conditions involving `IS NULL` checks (for example `ON (a = b AND (a IS NOT NULL) AND (b IS NOT NULL) ) OR ( (a IS NULL) AND (b IS NULL) )` is rewritten to `ON a <=> b`); fix incorrect optimization when conditions other than `IS NULL` are present. [#65835](https://github.com/ClickHouse/ClickHouse/pull/65835) ([vdimir](https://github.com/vdimir)).
* Functions `bitShiftLeft` and `bitShiftRight` return an error for out-of-bounds shift positions. [#65838](https://github.com/ClickHouse/ClickHouse/pull/65838) ([Pablo Marcos](https://github.com/pamarcos)).
* Fix growing memory usage in S3Queue. [#65839](https://github.com/ClickHouse/ClickHouse/pull/65839) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix tie handling in `arrayAUC` to match sklearn. [#65840](https://github.com/ClickHouse/ClickHouse/pull/65840) ([gabrielmcg44](https://github.com/gabrielmcg44)).
* Fix possible issues with MySQL server protocol TLS connections. [#65917](https://github.com/ClickHouse/ClickHouse/pull/65917) ([Azat Khuzhin](https://github.com/azat)).

CMakeLists.txt

@@ -88,6 +88,7 @@ string (TOUPPER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_UC)
list(REVERSE CMAKE_FIND_LIBRARY_SUFFIXES)
option (ENABLE_FUZZING "Fuzzy testing using libfuzzer" OFF)
option (ENABLE_FUZZER_TEST "Build testing fuzzers in order to test libFuzzer functionality" OFF)
if (ENABLE_FUZZING)
# Also set WITH_COVERAGE=1 for better fuzzing process

README.md

@@ -42,31 +42,19 @@ Keep an eye out for upcoming meetups and events around the world. Somewhere else
Upcoming meetups
* [Jakarta Meetup](https://www.meetup.com/clickhouse-indonesia-user-group/events/303191359/) - October 1
* [Singapore Meetup](https://www.meetup.com/clickhouse-singapore-meetup-group/events/303212064/) - October 3
* [Madrid Meetup](https://www.meetup.com/clickhouse-spain-user-group/events/303096564/) - October 22
* [Oslo Meetup](https://www.meetup.com/open-source-real-time-data-warehouse-real-time-analytics/events/302938622) - October 31
* [Barcelona Meetup](https://www.meetup.com/clickhouse-spain-user-group/events/303096876/) - November 12
* [Ghent Meetup](https://www.meetup.com/clickhouse-belgium-user-group/events/303049405/) - November 19
* [Dubai Meetup](https://www.meetup.com/clickhouse-dubai-meetup-group/events/303096989/) - November 21
* [Paris Meetup](https://www.meetup.com/clickhouse-france-user-group/events/303096434) - November 26
* [Amsterdam Meetup](https://www.meetup.com/clickhouse-netherlands-user-group/events/303638814) - December 3
* [New York Meetup](https://www.meetup.com/clickhouse-new-york-user-group/events/304268174) - December 9
* [San Francisco Meetup](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/304286951/) - December 12
Recently completed meetups
* [ClickHouse Guangzhou User Group Meetup](https://mp.weixin.qq.com/s/GSvo-7xUoVzCsuUvlLTpCw) - August 25
* [Seattle Meetup (Statsig)](https://www.meetup.com/clickhouse-seattle-user-group/events/302518075/) - August 27
* [Melbourne Meetup](https://www.meetup.com/clickhouse-australia-user-group/events/302732666/) - August 27
* [Sydney Meetup](https://www.meetup.com/clickhouse-australia-user-group/events/302862966/) - September 5
* [Zurich Meetup](https://www.meetup.com/clickhouse-switzerland-meetup-group/events/302267429/) - September 5
* [San Francisco Meetup (Cloudflare)](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/302540575) - September 5
* [Raleigh Meetup (Deutsche Bank)](https://www.meetup.com/triangletechtalks/events/302723486/) - September 9
* [New York Meetup (Rokt)](https://www.meetup.com/clickhouse-new-york-user-group/events/302575342) - September 10
* [Toronto Meetup (Shopify)](https://www.meetup.com/clickhouse-toronto-user-group/events/301490855/) - September 10
* [Chicago Meetup (Jump Capital)](https://lu.ma/43tvmrfw) - September 12
* [London Meetup](https://www.meetup.com/clickhouse-london-user-group/events/302977267) - September 17
* [Austin Meetup](https://www.meetup.com/clickhouse-austin-user-group/events/302558689/) - September 17
* [Bangalore Meetup](https://www.meetup.com/clickhouse-bangalore-user-group/events/303208274/) - September 18
* [Tel Aviv Meetup](https://www.meetup.com/clickhouse-meetup-israel/events/303095121) - September 22
* [Madrid Meetup](https://www.meetup.com/clickhouse-spain-user-group/events/303096564/) - October 22
* [Singapore Meetup](https://www.meetup.com/clickhouse-singapore-meetup-group/events/303212064/) - October 3
* [Jakarta Meetup](https://www.meetup.com/clickhouse-indonesia-user-group/events/303191359/) - October 1
## Recent Recordings
* **Recent Meetup Videos**: [Meetup Playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U) Whenever possible, recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Currently featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments"

SECURITY.md

@@ -14,9 +14,10 @@ The following versions of ClickHouse server are currently supported with securit
| Version | Supported |
|:-|:-|
| 24.10 | ✔️ |
| 24.9 | ✔️ |
| 24.8 | ✔️ |
| 24.7 | ✔️ |
| 24.7 | ❌ |
| 24.6 | ❌ |
| 24.5 | ❌ |
| 24.4 | ❌ |

base/base/StringRef.h

@@ -86,7 +86,7 @@ using StringRefs = std::vector<StringRef>;
* For more information, see hash_map_string_2.cpp
*/
inline bool compare8(const char * p1, const char * p2)
inline bool compare16(const char * p1, const char * p2)
{
return 0xFFFF == _mm_movemask_epi8(_mm_cmpeq_epi8(
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p1)),
@@ -115,7 +115,7 @@ inline bool compare64(const char * p1, const char * p2)
#elif defined(__aarch64__) && defined(__ARM_NEON)
inline bool compare8(const char * p1, const char * p2)
inline bool compare16(const char * p1, const char * p2)
{
uint64_t mask = getNibbleMask(vceqq_u8(
vld1q_u8(reinterpret_cast<const unsigned char *>(p1)), vld1q_u8(reinterpret_cast<const unsigned char *>(p2))));
@@ -185,13 +185,22 @@ inline bool memequalWide(const char * p1, const char * p2, size_t size)
switch (size / 16) // NOLINT(bugprone-switch-missing-default-case)
{
case 3: if (!compare8(p1 + 32, p2 + 32)) return false; [[fallthrough]];
case 2: if (!compare8(p1 + 16, p2 + 16)) return false; [[fallthrough]];
case 1: if (!compare8(p1, p2)) return false; [[fallthrough]];
case 3:
if (!compare16(p1 + 32, p2 + 32))
return false;
[[fallthrough]];
case 2:
if (!compare16(p1 + 16, p2 + 16))
return false;
[[fallthrough]];
case 1:
if (!compare16(p1, p2))
return false;
[[fallthrough]];
default: ;
}
return compare8(p1 + size - 16, p2 + size - 16);
return compare16(p1 + size - 16, p2 + size - 16);
}
#endif

base/base/chrono_io.h

@@ -4,6 +4,7 @@
#include <string>
#include <sstream>
#include <cctz/time_zone.h>
#include <fmt/core.h>
inline std::string to_string(const std::time_t & time)
@@ -11,18 +12,6 @@ inline std::string to_string(const std::time_t & time)
return cctz::format("%Y-%m-%d %H:%M:%S", std::chrono::system_clock::from_time_t(time), cctz::local_time_zone());
}
template <typename Clock, typename Duration = typename Clock::duration>
std::string to_string(const std::chrono::time_point<Clock, Duration> & tp)
{
// Don't use DateLUT because it shows weird characters for
// TimePoint::max(). I wish we could use C++20 format, but it's not
// there yet.
// return DateLUT::instance().timeToString(std::chrono::system_clock::to_time_t(tp));
auto in_time_t = std::chrono::system_clock::to_time_t(tp);
return to_string(in_time_t);
}
template <typename Rep, typename Period = std::ratio<1>>
std::string to_string(const std::chrono::duration<Rep, Period> & duration)
{
@@ -33,6 +22,20 @@ std::string to_string(const std::chrono::duration<Rep, Period> & duration)
return std::to_string(seconds_as_double.count()) + "s";
}
template <typename Clock, typename Duration = typename Clock::duration>
std::string to_string(const std::chrono::time_point<Clock, Duration> & tp)
{
// Don't use DateLUT because it shows weird characters for
// TimePoint::max(). I wish we could use C++20 format, but it's not
// there yet.
// return DateLUT::instance().timeToString(std::chrono::system_clock::to_time_t(tp));
if constexpr (std::is_same_v<Clock, std::chrono::system_clock>)
return to_string(std::chrono::system_clock::to_time_t(tp));
else
return to_string(tp.time_since_epoch());
}
template <typename Clock, typename Duration = typename Clock::duration>
std::ostream & operator<<(std::ostream & o, const std::chrono::time_point<Clock, Duration> & tp)
{
@@ -44,3 +47,23 @@ std::ostream & operator<<(std::ostream & o, const std::chrono::duration<Rep, Per
{
return o << to_string(duration);
}
template <typename Clock, typename Duration>
struct fmt::formatter<std::chrono::time_point<Clock, Duration>> : fmt::formatter<std::string>
{
template <typename FormatCtx>
auto format(const std::chrono::time_point<Clock, Duration> & tp, FormatCtx & ctx) const
{
return fmt::formatter<std::string>::format(::to_string(tp), ctx);
}
};
template <typename Rep, typename Period>
struct fmt::formatter<std::chrono::duration<Rep, Period>> : fmt::formatter<std::string>
{
template <typename FormatCtx>
auto format(const std::chrono::duration<Rep, Period> & duration, FormatCtx & ctx) const
{
return fmt::formatter<std::string>::format(::to_string(duration), ctx);
}
};

base/glibc-compatibility/musl/getauxval.c

@@ -25,9 +25,10 @@
// We don't have libc struct available here.
// Compute aux vector manually (from /proc/self/auxv).
//
// Right now there is only 51 AT_* constants,
// so 64 should be enough until this implementation will be replaced with musl.
static unsigned long __auxv_procfs[64];
// Right now there are 51 AT_* constants. Custom kernels have been encountered
// making use of up to 71. 128 should be enough until this implementation is
// replaced with musl.
static unsigned long __auxv_procfs[128];
static unsigned long __auxv_secure = 0;
// Common
static unsigned long * __auxv_environ = NULL;

contrib/SimSIMD vendored

@@ -1 +1 @@
Subproject commit ff51434d90c66f916e94ff05b24530b127aa4cff
Subproject commit ee3c9c9c00b51645f62a1a9e99611b78c0052a21

contrib/SimSIMD-cmake/CMakeLists.txt

@@ -1,4 +1,8 @@
set(SIMSIMD_PROJECT_DIR "${ClickHouse_SOURCE_DIR}/contrib/SimSIMD")
add_library(_simsimd INTERFACE)
target_include_directories(_simsimd SYSTEM INTERFACE "${SIMSIMD_PROJECT_DIR}/include")
# See contrib/usearch-cmake/CMakeLists.txt, why only enabled on x86
if (ARCH_AMD64)
set(SIMSIMD_PROJECT_DIR "${ClickHouse_SOURCE_DIR}/contrib/SimSIMD")
set(SIMSIMD_SRCS ${SIMSIMD_PROJECT_DIR}/c/lib.c)
add_library(_simsimd ${SIMSIMD_SRCS})
target_include_directories(_simsimd SYSTEM PUBLIC "${SIMSIMD_PROJECT_DIR}/include")
target_compile_definitions(_simsimd PUBLIC SIMSIMD_DYNAMIC_DISPATCH)
endif()

contrib/arrow vendored

@@ -1 +1 @@
Subproject commit 5cfccd8ea65f33d4517e7409815d761c7650b45d
Subproject commit 6e2574f5013a005c050c9a7787d341aef09d0063

contrib/arrow-cmake/CMakeLists.txt

@@ -213,13 +213,19 @@ target_include_directories(_orc SYSTEM PRIVATE
set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/src/arrow")
# arrow/cpp/src/arrow/CMakeLists.txt (ARROW_SRCS + ARROW_COMPUTE + ARROW_IPC)
# find . \( -iname \*.cc -o -iname \*.cpp -o -iname \*.c \) | sort | awk '{print "\"${LIBRARY_DIR}" substr($1,2) "\"" }' | grep -v 'test.cc' | grep -v 'json' | grep -v 'flight' \|
# grep -v 'csv' | grep -v 'acero' | grep -v 'dataset' | grep -v 'testing' | grep -v 'gpu' | grep -v 'engine' | grep -v 'filesystem' | grep -v 'benchmark.cc'
set(ARROW_SRCS
"${LIBRARY_DIR}/adapters/orc/adapter.cc"
"${LIBRARY_DIR}/adapters/orc/options.cc"
"${LIBRARY_DIR}/adapters/orc/util.cc"
"${LIBRARY_DIR}/array/array_base.cc"
"${LIBRARY_DIR}/array/array_binary.cc"
"${LIBRARY_DIR}/array/array_decimal.cc"
"${LIBRARY_DIR}/array/array_dict.cc"
"${LIBRARY_DIR}/array/array_nested.cc"
"${LIBRARY_DIR}/array/array_primitive.cc"
"${LIBRARY_DIR}/array/array_run_end.cc"
"${LIBRARY_DIR}/array/builder_adaptive.cc"
"${LIBRARY_DIR}/array/builder_base.cc"
"${LIBRARY_DIR}/array/builder_binary.cc"
@@ -227,124 +233,26 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/array/builder_dict.cc"
"${LIBRARY_DIR}/array/builder_nested.cc"
"${LIBRARY_DIR}/array/builder_primitive.cc"
"${LIBRARY_DIR}/array/builder_union.cc"
"${LIBRARY_DIR}/array/builder_run_end.cc"
"${LIBRARY_DIR}/array/array_run_end.cc"
"${LIBRARY_DIR}/array/builder_union.cc"
"${LIBRARY_DIR}/array/concatenate.cc"
"${LIBRARY_DIR}/array/data.cc"
"${LIBRARY_DIR}/array/diff.cc"
"${LIBRARY_DIR}/array/util.cc"
"${LIBRARY_DIR}/array/validate.cc"
"${LIBRARY_DIR}/builder.cc"
"${LIBRARY_DIR}/buffer.cc"
"${LIBRARY_DIR}/chunked_array.cc"
"${LIBRARY_DIR}/chunk_resolver.cc"
"${LIBRARY_DIR}/compare.cc"
"${LIBRARY_DIR}/config.cc"
"${LIBRARY_DIR}/datum.cc"
"${LIBRARY_DIR}/device.cc"
"${LIBRARY_DIR}/extension_type.cc"
"${LIBRARY_DIR}/memory_pool.cc"
"${LIBRARY_DIR}/pretty_print.cc"
"${LIBRARY_DIR}/record_batch.cc"
"${LIBRARY_DIR}/result.cc"
"${LIBRARY_DIR}/scalar.cc"
"${LIBRARY_DIR}/sparse_tensor.cc"
"${LIBRARY_DIR}/status.cc"
"${LIBRARY_DIR}/table.cc"
"${LIBRARY_DIR}/table_builder.cc"
"${LIBRARY_DIR}/tensor.cc"
"${LIBRARY_DIR}/tensor/coo_converter.cc"
"${LIBRARY_DIR}/tensor/csf_converter.cc"
"${LIBRARY_DIR}/tensor/csx_converter.cc"
"${LIBRARY_DIR}/type.cc"
"${LIBRARY_DIR}/visitor.cc"
"${LIBRARY_DIR}/builder.cc"
"${LIBRARY_DIR}/c/bridge.cc"
"${LIBRARY_DIR}/io/buffered.cc"
"${LIBRARY_DIR}/io/caching.cc"
"${LIBRARY_DIR}/io/compressed.cc"
"${LIBRARY_DIR}/io/file.cc"
"${LIBRARY_DIR}/io/hdfs.cc"
"${LIBRARY_DIR}/io/hdfs_internal.cc"
"${LIBRARY_DIR}/io/interfaces.cc"
"${LIBRARY_DIR}/io/memory.cc"
"${LIBRARY_DIR}/io/slow.cc"
"${LIBRARY_DIR}/io/stdio.cc"
"${LIBRARY_DIR}/io/transform.cc"
"${LIBRARY_DIR}/util/async_util.cc"
"${LIBRARY_DIR}/util/basic_decimal.cc"
"${LIBRARY_DIR}/util/bit_block_counter.cc"
"${LIBRARY_DIR}/util/bit_run_reader.cc"
"${LIBRARY_DIR}/util/bit_util.cc"
"${LIBRARY_DIR}/util/bitmap.cc"
"${LIBRARY_DIR}/util/bitmap_builders.cc"
"${LIBRARY_DIR}/util/bitmap_ops.cc"
"${LIBRARY_DIR}/util/bpacking.cc"
"${LIBRARY_DIR}/util/cancel.cc"
"${LIBRARY_DIR}/util/compression.cc"
"${LIBRARY_DIR}/util/counting_semaphore.cc"
"${LIBRARY_DIR}/util/cpu_info.cc"
"${LIBRARY_DIR}/util/decimal.cc"
"${LIBRARY_DIR}/util/delimiting.cc"
"${LIBRARY_DIR}/util/formatting.cc"
"${LIBRARY_DIR}/util/future.cc"
"${LIBRARY_DIR}/util/int_util.cc"
"${LIBRARY_DIR}/util/io_util.cc"
"${LIBRARY_DIR}/util/logging.cc"
"${LIBRARY_DIR}/util/key_value_metadata.cc"
"${LIBRARY_DIR}/util/memory.cc"
"${LIBRARY_DIR}/util/mutex.cc"
"${LIBRARY_DIR}/util/string.cc"
"${LIBRARY_DIR}/util/string_builder.cc"
"${LIBRARY_DIR}/util/task_group.cc"
"${LIBRARY_DIR}/util/tdigest.cc"
"${LIBRARY_DIR}/util/thread_pool.cc"
"${LIBRARY_DIR}/util/time.cc"
"${LIBRARY_DIR}/util/trie.cc"
"${LIBRARY_DIR}/util/unreachable.cc"
"${LIBRARY_DIR}/util/uri.cc"
"${LIBRARY_DIR}/util/utf8.cc"
"${LIBRARY_DIR}/util/value_parsing.cc"
"${LIBRARY_DIR}/util/byte_size.cc"
"${LIBRARY_DIR}/util/debug.cc"
"${LIBRARY_DIR}/util/tracing.cc"
"${LIBRARY_DIR}/util/atfork_internal.cc"
"${LIBRARY_DIR}/util/crc32.cc"
"${LIBRARY_DIR}/util/hashing.cc"
"${LIBRARY_DIR}/util/ree_util.cc"
"${LIBRARY_DIR}/util/union_util.cc"
"${LIBRARY_DIR}/vendored/base64.cpp"
"${LIBRARY_DIR}/vendored/datetime/tz.cpp"
"${LIBRARY_DIR}/vendored/musl/strptime.c"
"${LIBRARY_DIR}/vendored/uriparser/UriCommon.c"
"${LIBRARY_DIR}/vendored/uriparser/UriCompare.c"
"${LIBRARY_DIR}/vendored/uriparser/UriEscape.c"
"${LIBRARY_DIR}/vendored/uriparser/UriFile.c"
"${LIBRARY_DIR}/vendored/uriparser/UriIp4Base.c"
"${LIBRARY_DIR}/vendored/uriparser/UriIp4.c"
"${LIBRARY_DIR}/vendored/uriparser/UriMemory.c"
"${LIBRARY_DIR}/vendored/uriparser/UriNormalizeBase.c"
"${LIBRARY_DIR}/vendored/uriparser/UriNormalize.c"
"${LIBRARY_DIR}/vendored/uriparser/UriParseBase.c"
"${LIBRARY_DIR}/vendored/uriparser/UriParse.c"
"${LIBRARY_DIR}/vendored/uriparser/UriQuery.c"
"${LIBRARY_DIR}/vendored/uriparser/UriRecompose.c"
"${LIBRARY_DIR}/vendored/uriparser/UriResolve.c"
"${LIBRARY_DIR}/vendored/uriparser/UriShorten.c"
"${LIBRARY_DIR}/vendored/double-conversion/bignum.cc"
"${LIBRARY_DIR}/vendored/double-conversion/bignum-dtoa.cc"
"${LIBRARY_DIR}/vendored/double-conversion/cached-powers.cc"
"${LIBRARY_DIR}/vendored/double-conversion/double-to-string.cc"
"${LIBRARY_DIR}/vendored/double-conversion/fast-dtoa.cc"
"${LIBRARY_DIR}/vendored/double-conversion/fixed-dtoa.cc"
"${LIBRARY_DIR}/vendored/double-conversion/string-to-double.cc"
"${LIBRARY_DIR}/vendored/double-conversion/strtod.cc"
"${LIBRARY_DIR}/c/dlpack.cc"
"${LIBRARY_DIR}/chunk_resolver.cc"
"${LIBRARY_DIR}/chunked_array.cc"
"${LIBRARY_DIR}/compare.cc"
"${LIBRARY_DIR}/compute/api_aggregate.cc"
"${LIBRARY_DIR}/compute/api_scalar.cc"
"${LIBRARY_DIR}/compute/api_vector.cc"
"${LIBRARY_DIR}/compute/cast.cc"
"${LIBRARY_DIR}/compute/exec.cc"
"${LIBRARY_DIR}/compute/expression.cc"
"${LIBRARY_DIR}/compute/function.cc"
"${LIBRARY_DIR}/compute/function_internal.cc"
"${LIBRARY_DIR}/compute/kernel.cc"
@ -355,6 +263,7 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/compute/kernels/aggregate_var_std.cc"
"${LIBRARY_DIR}/compute/kernels/codegen_internal.cc"
"${LIBRARY_DIR}/compute/kernels/hash_aggregate.cc"
"${LIBRARY_DIR}/compute/kernels/ree_util_internal.cc"
"${LIBRARY_DIR}/compute/kernels/row_encoder.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_arithmetic.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_boolean.cc"
@ -382,30 +291,139 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/compute/kernels/vector_cumulative_ops.cc"
"${LIBRARY_DIR}/compute/kernels/vector_hash.cc"
"${LIBRARY_DIR}/compute/kernels/vector_nested.cc"
"${LIBRARY_DIR}/compute/kernels/vector_pairwise.cc"
"${LIBRARY_DIR}/compute/kernels/vector_rank.cc"
"${LIBRARY_DIR}/compute/kernels/vector_replace.cc"
"${LIBRARY_DIR}/compute/kernels/vector_run_end_encode.cc"
"${LIBRARY_DIR}/compute/kernels/vector_select_k.cc"
"${LIBRARY_DIR}/compute/kernels/vector_selection.cc"
"${LIBRARY_DIR}/compute/kernels/vector_sort.cc"
"${LIBRARY_DIR}/compute/kernels/vector_selection_internal.cc"
"${LIBRARY_DIR}/compute/kernels/vector_selection_filter_internal.cc"
"${LIBRARY_DIR}/compute/kernels/vector_selection_internal.cc"
"${LIBRARY_DIR}/compute/kernels/vector_selection_take_internal.cc"
"${LIBRARY_DIR}/compute/light_array.cc"
"${LIBRARY_DIR}/compute/registry.cc"
"${LIBRARY_DIR}/compute/expression.cc"
"${LIBRARY_DIR}/compute/kernels/vector_sort.cc"
"${LIBRARY_DIR}/compute/key_hash_internal.cc"
"${LIBRARY_DIR}/compute/key_map_internal.cc"
"${LIBRARY_DIR}/compute/light_array_internal.cc"
"${LIBRARY_DIR}/compute/ordering.cc"
"${LIBRARY_DIR}/compute/registry.cc"
"${LIBRARY_DIR}/compute/row/compare_internal.cc"
"${LIBRARY_DIR}/compute/row/encode_internal.cc"
"${LIBRARY_DIR}/compute/row/grouper.cc"
"${LIBRARY_DIR}/compute/row/row_internal.cc"
"${LIBRARY_DIR}/compute/util.cc"
"${LIBRARY_DIR}/config.cc"
"${LIBRARY_DIR}/datum.cc"
"${LIBRARY_DIR}/device.cc"
"${LIBRARY_DIR}/extension_type.cc"
"${LIBRARY_DIR}/integration/c_data_integration_internal.cc"
"${LIBRARY_DIR}/io/buffered.cc"
"${LIBRARY_DIR}/io/caching.cc"
"${LIBRARY_DIR}/io/compressed.cc"
"${LIBRARY_DIR}/io/file.cc"
"${LIBRARY_DIR}/io/hdfs.cc"
"${LIBRARY_DIR}/io/hdfs_internal.cc"
"${LIBRARY_DIR}/io/interfaces.cc"
"${LIBRARY_DIR}/io/memory.cc"
"${LIBRARY_DIR}/io/slow.cc"
"${LIBRARY_DIR}/io/stdio.cc"
"${LIBRARY_DIR}/io/transform.cc"
"${LIBRARY_DIR}/ipc/dictionary.cc"
"${LIBRARY_DIR}/ipc/feather.cc"
"${LIBRARY_DIR}/ipc/file_to_stream.cc"
"${LIBRARY_DIR}/ipc/message.cc"
"${LIBRARY_DIR}/ipc/metadata_internal.cc"
"${LIBRARY_DIR}/ipc/options.cc"
"${LIBRARY_DIR}/ipc/reader.cc"
"${LIBRARY_DIR}/ipc/stream_to_file.cc"
"${LIBRARY_DIR}/ipc/writer.cc"
"${LIBRARY_DIR}/memory_pool.cc"
"${LIBRARY_DIR}/pretty_print.cc"
"${LIBRARY_DIR}/record_batch.cc"
"${LIBRARY_DIR}/result.cc"
"${LIBRARY_DIR}/scalar.cc"
"${LIBRARY_DIR}/sparse_tensor.cc"
"${LIBRARY_DIR}/status.cc"
"${LIBRARY_DIR}/table.cc"
"${LIBRARY_DIR}/table_builder.cc"
"${LIBRARY_DIR}/tensor.cc"
"${LIBRARY_DIR}/tensor/coo_converter.cc"
"${LIBRARY_DIR}/tensor/csf_converter.cc"
"${LIBRARY_DIR}/tensor/csx_converter.cc"
"${LIBRARY_DIR}/type.cc"
"${LIBRARY_DIR}/type_traits.cc"
"${LIBRARY_DIR}/util/align_util.cc"
"${LIBRARY_DIR}/util/async_util.cc"
"${LIBRARY_DIR}/util/atfork_internal.cc"
"${LIBRARY_DIR}/util/basic_decimal.cc"
"${LIBRARY_DIR}/util/bit_block_counter.cc"
"${LIBRARY_DIR}/util/bit_run_reader.cc"
"${LIBRARY_DIR}/util/bit_util.cc"
"${LIBRARY_DIR}/util/bitmap.cc"
"${LIBRARY_DIR}/util/bitmap_builders.cc"
"${LIBRARY_DIR}/util/bitmap_ops.cc"
"${LIBRARY_DIR}/util/bpacking.cc"
"${LIBRARY_DIR}/util/byte_size.cc"
"${LIBRARY_DIR}/util/cancel.cc"
"${LIBRARY_DIR}/util/compression.cc"
"${LIBRARY_DIR}/util/counting_semaphore.cc"
"${LIBRARY_DIR}/util/cpu_info.cc"
"${LIBRARY_DIR}/util/crc32.cc"
"${LIBRARY_DIR}/util/debug.cc"
"${LIBRARY_DIR}/util/decimal.cc"
"${LIBRARY_DIR}/util/delimiting.cc"
"${LIBRARY_DIR}/util/dict_util.cc"
"${LIBRARY_DIR}/util/float16.cc"
"${LIBRARY_DIR}/util/formatting.cc"
"${LIBRARY_DIR}/util/future.cc"
"${LIBRARY_DIR}/util/hashing.cc"
"${LIBRARY_DIR}/util/int_util.cc"
"${LIBRARY_DIR}/util/io_util.cc"
"${LIBRARY_DIR}/util/key_value_metadata.cc"
"${LIBRARY_DIR}/util/list_util.cc"
"${LIBRARY_DIR}/util/logging.cc"
"${LIBRARY_DIR}/util/memory.cc"
"${LIBRARY_DIR}/util/mutex.cc"
"${LIBRARY_DIR}/util/ree_util.cc"
"${LIBRARY_DIR}/util/string.cc"
"${LIBRARY_DIR}/util/string_builder.cc"
"${LIBRARY_DIR}/util/task_group.cc"
"${LIBRARY_DIR}/util/tdigest.cc"
"${LIBRARY_DIR}/util/thread_pool.cc"
"${LIBRARY_DIR}/util/time.cc"
"${LIBRARY_DIR}/util/tracing.cc"
"${LIBRARY_DIR}/util/trie.cc"
"${LIBRARY_DIR}/util/union_util.cc"
"${LIBRARY_DIR}/util/unreachable.cc"
"${LIBRARY_DIR}/util/uri.cc"
"${LIBRARY_DIR}/util/utf8.cc"
"${LIBRARY_DIR}/util/value_parsing.cc"
"${LIBRARY_DIR}/vendored/base64.cpp"
"${LIBRARY_DIR}/vendored/datetime/tz.cpp"
"${LIBRARY_DIR}/vendored/double-conversion/bignum-dtoa.cc"
"${LIBRARY_DIR}/vendored/double-conversion/bignum.cc"
"${LIBRARY_DIR}/vendored/double-conversion/cached-powers.cc"
"${LIBRARY_DIR}/vendored/double-conversion/double-to-string.cc"
"${LIBRARY_DIR}/vendored/double-conversion/fast-dtoa.cc"
"${LIBRARY_DIR}/vendored/double-conversion/fixed-dtoa.cc"
"${LIBRARY_DIR}/vendored/double-conversion/string-to-double.cc"
"${LIBRARY_DIR}/vendored/double-conversion/strtod.cc"
"${LIBRARY_DIR}/vendored/musl/strptime.c"
"${LIBRARY_DIR}/vendored/uriparser/UriCommon.c"
"${LIBRARY_DIR}/vendored/uriparser/UriCompare.c"
"${LIBRARY_DIR}/vendored/uriparser/UriEscape.c"
"${LIBRARY_DIR}/vendored/uriparser/UriFile.c"
"${LIBRARY_DIR}/vendored/uriparser/UriIp4.c"
"${LIBRARY_DIR}/vendored/uriparser/UriIp4Base.c"
"${LIBRARY_DIR}/vendored/uriparser/UriMemory.c"
"${LIBRARY_DIR}/vendored/uriparser/UriNormalize.c"
"${LIBRARY_DIR}/vendored/uriparser/UriNormalizeBase.c"
"${LIBRARY_DIR}/vendored/uriparser/UriParse.c"
"${LIBRARY_DIR}/vendored/uriparser/UriParseBase.c"
"${LIBRARY_DIR}/vendored/uriparser/UriQuery.c"
"${LIBRARY_DIR}/vendored/uriparser/UriRecompose.c"
"${LIBRARY_DIR}/vendored/uriparser/UriResolve.c"
"${LIBRARY_DIR}/vendored/uriparser/UriShorten.c"
"${LIBRARY_DIR}/visitor.cc"
"${ARROW_SRC_DIR}/arrow/adapters/orc/adapter.cc"
"${ARROW_SRC_DIR}/arrow/adapters/orc/util.cc"
@ -465,22 +483,38 @@ set(PARQUET_SRCS
"${LIBRARY_DIR}/arrow/schema.cc"
"${LIBRARY_DIR}/arrow/schema_internal.cc"
"${LIBRARY_DIR}/arrow/writer.cc"
"${LIBRARY_DIR}/benchmark_util.cc"
"${LIBRARY_DIR}/bloom_filter.cc"
"${LIBRARY_DIR}/bloom_filter_reader.cc"
"${LIBRARY_DIR}/column_reader.cc"
"${LIBRARY_DIR}/column_scanner.cc"
"${LIBRARY_DIR}/column_writer.cc"
"${LIBRARY_DIR}/encoding.cc"
"${LIBRARY_DIR}/encryption/crypto_factory.cc"
"${LIBRARY_DIR}/encryption/encryption.cc"
"${LIBRARY_DIR}/encryption/encryption_internal.cc"
"${LIBRARY_DIR}/encryption/encryption_internal_nossl.cc"
"${LIBRARY_DIR}/encryption/file_key_unwrapper.cc"
"${LIBRARY_DIR}/encryption/file_key_wrapper.cc"
"${LIBRARY_DIR}/encryption/file_system_key_material_store.cc"
"${LIBRARY_DIR}/encryption/internal_file_decryptor.cc"
"${LIBRARY_DIR}/encryption/internal_file_encryptor.cc"
"${LIBRARY_DIR}/encryption/key_material.cc"
"${LIBRARY_DIR}/encryption/key_metadata.cc"
"${LIBRARY_DIR}/encryption/key_toolkit.cc"
"${LIBRARY_DIR}/encryption/key_toolkit_internal.cc"
"${LIBRARY_DIR}/encryption/kms_client.cc"
"${LIBRARY_DIR}/encryption/local_wrap_kms_client.cc"
"${LIBRARY_DIR}/encryption/openssl_internal.cc"
"${LIBRARY_DIR}/exception.cc"
"${LIBRARY_DIR}/file_reader.cc"
"${LIBRARY_DIR}/file_writer.cc"
"${LIBRARY_DIR}/page_index.cc"
"${LIBRARY_DIR}/level_conversion.cc"
"${LIBRARY_DIR}/level_comparison.cc"
"${LIBRARY_DIR}/level_comparison_avx2.cc"
"${LIBRARY_DIR}/level_conversion.cc"
"${LIBRARY_DIR}/level_conversion_bmi2.cc"
"${LIBRARY_DIR}/metadata.cc"
"${LIBRARY_DIR}/page_index.cc"
"${LIBRARY_DIR}/platform.cc"
"${LIBRARY_DIR}/printer.cc"
"${LIBRARY_DIR}/properties.cc"
@ -489,7 +523,6 @@ set(PARQUET_SRCS
"${LIBRARY_DIR}/stream_reader.cc"
"${LIBRARY_DIR}/stream_writer.cc"
"${LIBRARY_DIR}/types.cc"
"${LIBRARY_DIR}/bloom_filter_reader.cc"
"${LIBRARY_DIR}/xxhasher.cc"
"${GEN_LIBRARY_DIR}/parquet_constants.cpp"
@ -520,6 +553,9 @@ endif ()
add_definitions(-DPARQUET_THRIFT_VERSION_MAJOR=0)
add_definitions(-DPARQUET_THRIFT_VERSION_MINOR=16)
# As per https://github.com/apache/arrow/pull/35672 you need to enable it explicitly.
add_definitions(-DARROW_ENABLE_THREADING)
# === tools
set(TOOLS_DIR "${ClickHouse_SOURCE_DIR}/contrib/arrow/cpp/tools/parquet")

2
contrib/flatbuffers vendored

@ -1 +1 @@
Subproject commit eb3f827948241ce0e701516f16cd67324802bce9
Subproject commit 0100f6a5779831fa7a651e4b67ef389a8752bd9b

2
contrib/krb5 vendored

@ -1 +1 @@
Subproject commit 71b06c2276009ae649c7703019f3b4605f66fd3d
Subproject commit c5b4b994c18db86933255907a97eee5993fd18fe

2
contrib/numactl vendored

@ -1 +1 @@
Subproject commit 8d13d63a05f0c3cd88bf777cbb61541202b7da08
Subproject commit ff32c618d63ca7ac48cce366c5a04bb3563683a0

2
contrib/usearch vendored

@ -1 +1 @@
Subproject commit 1706420acafbd83d852c512dcf343af0a4059e48
Subproject commit 7efe8b710c9831bfe06573b1df0fad001b04a2b5

View File

@ -6,12 +6,63 @@ target_include_directories(_usearch SYSTEM INTERFACE ${USEARCH_PROJECT_DIR}/incl
target_link_libraries(_usearch INTERFACE _fp16)
target_compile_definitions(_usearch INTERFACE USEARCH_USE_FP16LIB)
# target_compile_definitions(_usearch INTERFACE USEARCH_USE_SIMSIMD)
# ^^ simsimd is not enabled at the moment. Reasons:
# - Vectorization is important for raw scans but not so much for HNSW. We use usearch only for HNSW.
# - Simsimd does compile-time dispatch (choice of SIMD kernels determined by capabilities of the build machine) or dynamic dispatch (SIMD
# kernels chosen at runtime based on cpuid instruction). Since current builds are limited to SSE 4.2 (x86) and NEON (ARM), the speedup of
# the former would be moderate compared to AVX-512 / SVE. The latter is at the moment too fragile with respect to portability across x86
# and ARM machines ... certain combinations of quantizations / distance functions / SIMD instructions are not implemented at the moment.
# Only x86 for now. On ARM, the linker goes down in flames. To make SimSIMD compile, I had to remove macro checks in SimSIMD
# for AVX512 (x86, worked nicely) and __ARM_BF16_FORMAT_ALTERNATIVE. It is probably because of that.
if (ARCH_AMD64)
target_link_libraries(_usearch INTERFACE _simsimd)
target_compile_definitions(_usearch INTERFACE USEARCH_USE_SIMSIMD)
target_compile_definitions(_usearch INTERFACE USEARCH_CAN_COMPILE_FLOAT16)
target_compile_definitions(_usearch INTERFACE USEARCH_CAN_COMPILE_BF16)
endif ()
add_library(ch_contrib::usearch ALIAS _usearch)
# Cf. https://github.com/llvm/llvm-project/issues/107810 (though it is not 100% the same stack)
#
# LLVM ERROR: Cannot select: 0x7996e7a73150: f32,ch = load<(load (s16) from %ir.22, !tbaa !54231), anyext from bf16> 0x79961cb737c0, 0x7996e7a1a500, undef:i64, ./contrib/SimSIMD/include/simsimd/dot.h:215:1
# 0x7996e7a1a500: i64 = add 0x79961e770d00, Constant:i64<-16>, ./contrib/SimSIMD/include/simsimd/dot.h:215:1
# 0x79961e770d00: i64,ch = CopyFromReg 0x79961cb737c0, Register:i64 %4, ./contrib/SimSIMD/include/simsimd/dot.h:215:1
# 0x7996e7a1ae10: i64 = Register %4
# 0x7996e7a1b5f0: i64 = Constant<-16>
# 0x7996e7a1a730: i64 = undef
# In function: _ZL23simsimd_dot_bf16_serialPKu6__bf16S0_yPd
# PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace.
# Stack dump:
# 0. Running pass 'Function Pass Manager' on module 'src/libdbms.a(MergeTreeIndexVectorSimilarity.cpp.o at 2312737440)'.
# 1. Running pass 'AArch64 Instruction Selection' on function '@_ZL23simsimd_dot_bf16_serialPKu6__bf16S0_yPd'
# #0 0x00007999e83a63bf llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xda63bf)
# #1 0x00007999e83a44f9 llvm::sys::RunSignalHandlers() (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xda44f9)
# #2 0x00007999e83a6b00 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xda6b00)
# #3 0x00007999e6e45320 (/lib/x86_64-linux-gnu/libc.so.6+0x45320)
# #4 0x00007999e6e9eb1c pthread_kill (/lib/x86_64-linux-gnu/libc.so.6+0x9eb1c)
# #5 0x00007999e6e4526e raise (/lib/x86_64-linux-gnu/libc.so.6+0x4526e)
# #6 0x00007999e6e288ff abort (/lib/x86_64-linux-gnu/libc.so.6+0x288ff)
# #7 0x00007999e82fe0c2 llvm::report_fatal_error(llvm::Twine const&, bool) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xcfe0c2)
# #8 0x00007999e8c2f8e3 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x162f8e3)
# #9 0x00007999e8c2ed76 llvm::SelectionDAGISel::SelectCodeCommon(llvm::SDNode*, unsigned char const*, unsigned int) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x162ed76)
# #10 0x00007999ea1adbcb (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x2badbcb)
# #11 0x00007999e8c2611f llvm::SelectionDAGISel::DoInstructionSelection() (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x162611f)
# #12 0x00007999e8c25790 llvm::SelectionDAGISel::CodeGenAndEmitDAG() (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x1625790)
# #13 0x00007999e8c248de llvm::SelectionDAGISel::SelectAllBasicBlocks(llvm::Function const&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x16248de)
# #14 0x00007999e8c22934 llvm::SelectionDAGISel::runOnMachineFunction(llvm::MachineFunction&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x1622934)
# #15 0x00007999e87826b9 llvm::MachineFunctionPass::runOnFunction(llvm::Function&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x11826b9)
# #16 0x00007999e84f7772 llvm::FPPassManager::runOnFunction(llvm::Function&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xef7772)
# #17 0x00007999e84fd2f4 llvm::FPPassManager::runOnModule(llvm::Module&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xefd2f4)
# #18 0x00007999e84f7e9f llvm::legacy::PassManagerImpl::run(llvm::Module&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xef7e9f)
# #19 0x00007999e99f7d61 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f7d61)
# #20 0x00007999e99f8c91 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f8c91)
# #21 0x00007999e99f8b10 llvm::lto::thinBackend(llvm::lto::Config const&, unsigned int, std::function<llvm::Expected<std::unique_ptr<llvm::CachedFileStream, std::default_delete<llvm::CachedFileStream>>> (unsigned int, llvm::Twine const&)>, llvm::Module&, llvm::ModuleSummaryIndex const&, llvm::DenseMap<llvm::StringRef, std::unordered_set<unsigned long, std::hash<unsigned long>, std::equal_to<unsigned long>, std::allocator<unsigned long>>, llvm::DenseMapInfo<llvm::StringRef, void
# >, llvm::detail::DenseMapPair<llvm::StringRef, std::unordered_set<unsigned long, std::hash<unsigned long>, std::equal_to<unsigned long>, std::allocator<unsigned long>>>> const&, llvm::DenseMap<unsigned long, llvm::GlobalValueSummary*, llvm::DenseMapInfo<unsigned long, void>, llvm::detail::DenseMapPair<unsigned long, llvm::GlobalValueSummary*>> const&, llvm::MapVector<llvm::StringRef, llvm::BitcodeModule, llvm::DenseMap<llvm::StringRef, unsigned int, llvm::DenseMapInfo<llvm::S
# tringRef, void>, llvm::detail::DenseMapPair<llvm::StringRef, unsigned int>>, llvm::SmallVector<std::pair<llvm::StringRef, llvm::BitcodeModule>, 0u>>*, std::vector<unsigned char, std::allocator<unsigned char>> const&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f8b10)
# #22 0x00007999e99f248d (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f248d)
# #23 0x00007999e99f1cd6 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f1cd6)
# #24 0x00007999e82c9beb (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xcc9beb)
# #25 0x00007999e834ebe3 llvm::ThreadPool::processTasks(llvm::ThreadPoolTaskGroup*) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xd4ebe3)
# #26 0x00007999e834f704 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xd4f704)
# #27 0x00007999e6e9ca94 (/lib/x86_64-linux-gnu/libc.so.6+0x9ca94)
# #28 0x00007999e6f29c3c (/lib/x86_64-linux-gnu/libc.so.6+0x129c3c)
# clang++-18: error: unable to execute command: Aborted (core dumped)
# clang++-18: error: linker command failed due to signal (use -v to see invocation)
# ^[[A^Cninja: build stopped: interrupted by user.

View File

@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="24.9.2.42"
ARG VERSION="24.10.1.2812"
ARG PACKAGES="clickhouse-keeper"
ARG DIRECT_DOWNLOAD_URLS=""

View File

@ -35,7 +35,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="24.9.2.42"
ARG VERSION="24.10.1.2812"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
ARG DIRECT_DOWNLOAD_URLS=""

View File

@ -28,7 +28,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
ARG VERSION="24.9.2.42"
ARG VERSION="24.10.1.2812"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
#docker-official-library:off

View File

@ -25,7 +25,7 @@ EXTRA_COLUMNS_EXPRESSION_TRACE_LOG="${EXTRA_COLUMNS_EXPRESSION}, arrayMap(x -> d
# coverage_log needs more columns for symbolization, but only symbol names (the line numbers are too heavy to calculate)
EXTRA_COLUMNS_COVERAGE_LOG="${EXTRA_COLUMNS} symbols Array(LowCardinality(String)), "
EXTRA_COLUMNS_EXPRESSION_COVERAGE_LOG="${EXTRA_COLUMNS_EXPRESSION}, arrayMap(x -> demangle(addressToSymbol(x)), coverage)::Array(LowCardinality(String)) AS symbols"
EXTRA_COLUMNS_EXPRESSION_COVERAGE_LOG="${EXTRA_COLUMNS_EXPRESSION}, arrayDistinct(arrayMap(x -> demangle(addressToSymbol(x)), coverage))::Array(LowCardinality(String)) AS symbols"
function __set_connection_args

View File

@ -33,8 +33,6 @@ RUN apt-get update \
COPY requirements.txt /
RUN pip3 install --no-cache-dir -r /requirements.txt
ENV FUZZER_ARGS="-max_total_time=60"
SHELL ["/bin/bash", "-c"]
# docker run --network=host --volume <workspace>:/workspace -e PR_TO_TEST=<> -e SHA_TO_TEST=<> clickhouse/libfuzzer

View File

@ -28,7 +28,7 @@ COPY requirements.txt /
RUN pip3 install --no-cache-dir -r requirements.txt
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
# Architecture of the image when BuildKit/buildx is used
ARG TARGETARCH

View File

@ -12,6 +12,7 @@ charset-normalizer==3.3.2
click==8.1.7
codespell==2.2.1
cryptography==43.0.1
datacompy==0.7.3
Deprecated==1.2.14
dill==0.3.8
flake8==4.0.1
@ -23,6 +24,7 @@ mccabe==0.6.1
multidict==6.0.5
mypy==1.8.0
mypy-extensions==1.0.0
pandas==2.2.3
packaging==24.1
pathspec==0.9.0
pip==24.1.1

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -4,9 +4,13 @@ sidebar_position: 50
sidebar_label: EmbeddedRocksDB
---
import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
# EmbeddedRocksDB Engine
This engine allows integrating ClickHouse with [rocksdb](http://rocksdb.org/).
<CloudNotSupportedBadge />
This engine allows integrating ClickHouse with [RocksDB](http://rocksdb.org/).
## Creating a Table {#creating-a-table}

View File

@ -290,6 +290,7 @@ The following settings can be specified in configuration file for given endpoint
- `expiration_window_seconds` — Grace period for checking if expiration-based credentials have expired. Optional, default value is `120`.
- `no_sign_request` - Ignore all the credentials so requests are not signed. Useful for accessing public buckets.
- `header` — Adds specified HTTP header to a request to given endpoint. Optional, can be specified multiple times.
- `access_header` - Adds specified HTTP header to a request to given endpoint, in cases where there are no other credentials from another source.
- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set. Optional.
- `server_side_encryption_kms_key_id` - If specified, required headers for accessing S3 objects with [SSE-KMS encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html) will be set. If an empty string is specified, the AWS managed S3 key will be used. Optional.
- `server_side_encryption_kms_encryption_context` - If specified alongside `server_side_encryption_kms_key_id`, the given encryption context header for SSE-KMS will be set. Optional.
@ -320,6 +321,32 @@ The following settings can be specified in configuration file for given endpoint
</s3>
```
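As a sketch, a per-endpoint entry using the new `access_header` setting might look as follows (the endpoint name, URL, and header value are illustrative):
``` xml
<s3>
    <my_endpoint>
        <endpoint>https://my-bucket.s3.amazonaws.com/</endpoint>
        <access_header>Authorization: Bearer MY_TOKEN</access_header>
    </my_endpoint>
</s3>
```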
## Working with archives
Suppose that we have several archive files with the following URIs on S3:
- 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-10.csv.zip'
- 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-11.csv.zip'
- 'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-12.csv.zip'
Extracting data from these archives is possible using `::`. Globs can be used both in the URL part and in the part after `::` (responsible for the name of a file inside the archive).
``` sql
SELECT *
FROM s3(
'https://s3-us-west-1.amazonaws.com/umbrella-static/top-1m-2018-01-1{0..2}.csv.zip :: *.csv'
);
```
:::note
ClickHouse supports three archive formats:
- ZIP
- TAR
- 7Z
While ZIP and TAR archives can be accessed from any supported storage location, 7Z archives can only be read from the local filesystem where ClickHouse is installed.
:::
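Given the restriction above, a 7Z archive would be read locally, for example with the `file` table function and the same `::` syntax (file names are illustrative):
``` sql
SELECT * FROM file('archive.7z :: data.csv', 'CSV');
```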
## Accessing public buckets
ClickHouse tries to fetch credentials from many different types of sources.

View File

@ -23,6 +23,7 @@ functions in ClickHouse. The sample datasets include:
- The [NYPD Complaint Data](../getting-started/example-datasets/nypd_complaint_data.md) demonstrates how to use data inference to simplify creating tables
- The ["What's on the Menu?" dataset](../getting-started/example-datasets/menus.md) has an example of denormalizing data
- The [Laion dataset](../getting-started/example-datasets/laion.md) has an example of [Approximate nearest neighbor search indexes](../engines/table-engines/mergetree-family/annindexes.md) usage
- The [TPC-H](../getting-started/example-datasets/tpch.md), [TPC-DS](../getting-started/example-datasets/tpcds.md), and [Star Schema (SSB)](../getting-started/example-datasets/star-schema.md) industry benchmarks for analytics databases
- [Getting Data Into ClickHouse - Part 1](https://clickhouse.com/blog/getting-data-into-clickhouse-part-1) provides examples of defining a schema and loading a small Hacker News dataset
- [Getting Data Into ClickHouse - Part 3 - Using S3](https://clickhouse.com/blog/getting-data-into-clickhouse-part-3-s3) has examples of loading data from s3
- [Generating random data in ClickHouse](https://clickhouse.com/blog/generating-random-test-distribution-data-for-clickhouse) shows how to generate random data if none of the above fit your needs.

View File

@ -190,6 +190,7 @@ You can pass parameters to `clickhouse-client` (all parameters have a default va
- `--config-file` The name of the configuration file.
- `--secure` If specified, will connect to server over secure connection (TLS). You might need to configure your CA certificates in the [configuration file](#configuration_files). The available configuration settings are the same as for [server-side TLS configuration](../operations/server-configuration-parameters/settings.md#openssl).
- `--history_file` — Path to a file containing command history.
- `--history_max_entries` — Maximum number of entries in the history file. Default value: 1000000.
- `--param_<name>` — Value for a [query with parameters](#cli-queries-with-parameters).
- `--hardware-utilization` — Print hardware utilization information in progress bar.
- `--print-profile-events` Print `ProfileEvents` packets.

View File

@ -9,7 +9,7 @@ sidebar_label: Prometheus protocols
## Exposing metrics {#expose}
:::note
ClickHouse Cloud does not currently support connecting to Prometheus. To be notified when this feature is supported, please contact support@clickhouse.com.
If you are using ClickHouse Cloud, you can expose metrics to Prometheus using the [Prometheus Integration](/en/integrations/prometheus).
:::
ClickHouse can expose its own metrics for scraping from Prometheus:

View File

@ -1975,6 +1975,22 @@ The default is `false`.
<async_load_databases>true</async_load_databases>
```
## async_load_system_database {#async_load_system_database}
Asynchronous loading of system tables. Helpful if there is a large number of log tables and parts in the `system` database. Independent of the `async_load_databases` setting.
If set to `true`, all system databases with the `Ordinary`, `Atomic`, and `Replicated` engines will be loaded asynchronously after the ClickHouse server starts. See the `system.asynchronous_loader` table and the `tables_loader_background_pool_size` and `tables_loader_foreground_pool_size` server settings. Any query that tries to access a system table that is not yet loaded will wait for exactly that table to be started up. A table that is waited for by at least one query is loaded with higher priority. Also consider setting the `max_waiting_queries` setting to limit the total number of waiting queries.
If `false`, the system database is loaded before the server starts.
The default is `false`.
**Example**
``` xml
<async_load_system_database>true</async_load_system_database>
```
## tables_loader_foreground_pool_size {#tables_loader_foreground_pool_size}
Sets the number of threads performing load jobs in the foreground pool. The foreground pool is used for loading tables synchronously before the server starts listening on a port, and for loading tables that are waited for. The foreground pool has a higher priority than the background pool: no job starts in the background pool while there are jobs running in the foreground pool.
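A configuration sketch (the value here is illustrative; by analogy with other pool-size settings, `0` presumably means the size is picked automatically):
``` xml
<tables_loader_foreground_pool_size>0</tables_loader_foreground_pool_size>
```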
@ -3208,6 +3224,34 @@ Default value: "default"
**See Also**
- [Workload Scheduling](/docs/en/operations/workload-scheduling.md)
## workload_path {#workload_path}
The directory used as storage for all `CREATE WORKLOAD` and `CREATE RESOURCE` queries. By default, the `/workload/` folder under the server working directory is used.
**Example**
``` xml
<workload_path>/var/lib/clickhouse/workload/</workload_path>
```
**See Also**
- [Workload Hierarchy](/docs/en/operations/workload-scheduling.md#workloads)
- [workload_zookeeper_path](#workload_zookeeper_path)
## workload_zookeeper_path {#workload_zookeeper_path}
The path to a ZooKeeper node, which is used as storage for all `CREATE WORKLOAD` and `CREATE RESOURCE` queries. For consistency, all SQL definitions are stored as the value of this single znode. By default, ZooKeeper is not used and definitions are stored on [disk](#workload_path).
**Example**
``` xml
<workload_zookeeper_path>/clickhouse/workload/definitions.sql</workload_zookeeper_path>
```
**See Also**
- [Workload Hierarchy](/docs/en/operations/workload-scheduling.md#workloads)
- [workload_path](#workload_path)
## max_authentication_methods_per_user {#max_authentication_methods_per_user}
The maximum number of authentication methods a user can be created with or altered to.
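A configuration sketch (the limit value is illustrative):
``` xml
<max_authentication_methods_per_user>4</max_authentication_methods_per_user>
```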

View File

@ -19,7 +19,7 @@ Columns:
- `column` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Name of a column to which access is granted.
- `is_partial_revoke` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Logical value. It shows whether some privileges have been revoked. Possible values:
- `0` — The row describes a partial revoke.
- `1` — The row describes a grant.
- `0` — The row describes a grant.
- `1` — The row describes a partial revoke.
- `grant_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Permission is granted `WITH GRANT OPTION`, see [GRANT](../../sql-reference/statements/grant.md#granting-privilege-syntax).

View File

@ -18,6 +18,11 @@ Columns:
- `1` — Current user can't change the setting.
- `type` ([String](../../sql-reference/data-types/string.md)) — Setting type (implementation specific string value).
- `is_obsolete` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) - Shows whether a setting is obsolete.
- `tier` ([Enum8](../../sql-reference/data-types/enum.md)) — Support level for this feature. ClickHouse features are organized in tiers, which vary depending on the current status of their development and the expectations one might have when using them. Values:
- `'Production'` — The feature is stable, safe to use, and does not have issues interacting with other **production** features.
- `'Beta'` — The feature is stable and safe. The outcome of using it together with other features is unknown and correctness is not guaranteed. Testing and reports are welcome.
- `'Experimental'` — The feature is under development. Only intended for developers and ClickHouse enthusiasts. The feature might or might not work and could be removed at any time.
- `'Obsolete'` — No longer supported. Either it is already removed or it will be removed in future releases.
**Example**
```sql

View File

@ -0,0 +1,37 @@
---
slug: /en/operations/system-tables/resources
---
# resources
Contains information for [resources](/docs/en/operations/workload-scheduling.md#workload_entity_storage) residing on the local server. The table contains a row for every resource.
Example:
``` sql
SELECT *
FROM system.resources
FORMAT Vertical
```
``` text
Row 1:
──────
name: io_read
read_disks: ['s3']
write_disks: []
create_query: CREATE RESOURCE io_read (READ DISK s3)
Row 2:
──────
name: io_write
read_disks: []
write_disks: ['s3']
create_query: CREATE RESOURCE io_write (WRITE DISK s3)
```
Columns:
- `name` (`String`) - Resource name.
- `read_disks` (`Array(String)`) - The array of disk names that use this resource for read operations.
- `write_disks` (`Array(String)`) - The array of disk names that use this resource for write operations.
- `create_query` (`String`) - The definition of the resource.

View File

@ -18,6 +18,11 @@ Columns:
- `1` — Current user can't change the setting.
- `default` ([String](../../sql-reference/data-types/string.md)) — Setting default value.
- `is_obsolete` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) - Shows whether a setting is obsolete.
- `tier` ([Enum8](../../sql-reference/data-types/enum.md)) — Support level for this feature. ClickHouse features are organized in tiers, which vary depending on the current status of their development and the expectations one might have when using them. Values:
- `'Production'` — The feature is stable, safe to use, and does not have issues interacting with other **production** features.
- `'Beta'` — The feature is stable and safe. The outcome of using it together with other features is unknown and correctness is not guaranteed. Testing and reports are welcome.
- `'Experimental'` — The feature is under development. Only intended for developers and ClickHouse enthusiasts. The feature might or might not work and could be removed at any time.
- `'Obsolete'` — No longer supported. Either it is already removed or it will be removed in future releases.
**Example**
@ -26,19 +31,99 @@ The following example shows how to get information about settings which name con
``` sql
SELECT *
FROM system.settings
WHERE name LIKE '%min_i%'
WHERE name LIKE '%min_insert_block_size_%'
FORMAT Vertical
```
``` text
┌─name───────────────────────────────────────────────┬─value─────┬─changed─┬─description───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─min──┬─max──┬─readonly─┬─type─────────┬─default───┬─alias_for─┬─is_obsolete─┐
│ min_insert_block_size_rows │ 1048449 │ 0 │ Squash blocks passed to INSERT query to specified size in rows, if blocks are not big enough. │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ UInt64 │ 1048449 │ │ 0 │
│ min_insert_block_size_bytes │ 268402944 │ 0 │ Squash blocks passed to INSERT query to specified size in bytes, if blocks are not big enough. │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ UInt64 │ 268402944 │ │ 0 │
│ min_insert_block_size_rows_for_materialized_views │ 0 │ 0 │ Like min_insert_block_size_rows, but applied only during pushing to MATERIALIZED VIEW (default: min_insert_block_size_rows) │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ UInt64 │ 0 │ │ 0 │
│ min_insert_block_size_bytes_for_materialized_views │ 0 │ 0 │ Like min_insert_block_size_bytes, but applied only during pushing to MATERIALIZED VIEW (default: min_insert_block_size_bytes) │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ UInt64 │ 0 │ │ 0 │
│ read_backoff_min_interval_between_events_ms │ 1000 │ 0 │ Settings to reduce the number of threads in case of slow reads. Do not pay attention to the event, if the previous one has passed less than a certain amount of time. │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ Milliseconds │ 1000 │ │ 0 │
└─────────────────────────────────────────────────────┴───────────┴─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────┴──────┴──────────┴──────────────┴───────────┴───────────┴─────────────┘
```
Row 1:
──────
name: min_insert_block_size_rows
value: 1048449
changed: 0
description: Sets the minimum number of rows in the block that can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones.
Possible values:
- Positive integer.
- 0 — Squashing disabled.
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
default: 1048449
alias_for:
is_obsolete: 0
tier: Production
Row 2:
──────
name: min_insert_block_size_bytes
value: 268402944
changed: 0
description: Sets the minimum number of bytes in the block which can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones.
Possible values:
- Positive integer.
- 0 — Squashing disabled.
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
default: 268402944
alias_for:
is_obsolete: 0
tier: Production
Row 3:
──────
name: min_insert_block_size_rows_for_materialized_views
value: 0
changed: 0
description: Sets the minimum number of rows in the block which can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones. This setting is applied only for blocks inserted into [materialized view](../../sql-reference/statements/create/view.md). By adjusting this setting, you control blocks squashing while pushing to materialized view and avoid excessive memory usage.
Possible values:
- Any positive integer.
- 0 — Squashing disabled.
**See Also**
- [min_insert_block_size_rows](#min-insert-block-size-rows)
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
default: 0
alias_for:
is_obsolete: 0
tier: Production
Row 4:
──────
name: min_insert_block_size_bytes_for_materialized_views
value: 0
changed: 0
description: Sets the minimum number of bytes in the block which can be inserted into a table by an `INSERT` query. Smaller-sized blocks are squashed into bigger ones. This setting is applied only for blocks inserted into [materialized view](../../sql-reference/statements/create/view.md). By adjusting this setting, you control blocks squashing while pushing to materialized view and avoid excessive memory usage.
Possible values:
- Any positive integer.
- 0 — Squashing disabled.
**See also**
- [min_insert_block_size_bytes](#min-insert-block-size-bytes)
min: ᴺᵁᴸᴸ
max: ᴺᵁᴸᴸ
readonly: 0
type: UInt64
default: 0
alias_for:
is_obsolete: 0
tier: Production
```
Using `WHERE changed` can be useful, for example, when you want to check:

View File

@ -0,0 +1,40 @@
---
slug: /en/operations/system-tables/workloads
---
# workloads
Contains information for [workloads](/docs/en/operations/workload-scheduling.md#workload_entity_storage) residing on the local server. The table contains a row for every workload.
Example:
``` sql
SELECT *
FROM system.workloads
FORMAT Vertical
```
``` text
Row 1:
──────
name: production
parent: all
create_query: CREATE WORKLOAD production IN `all` SETTINGS weight = 9
Row 2:
──────
name: development
parent: all
create_query: CREATE WORKLOAD development IN `all`
Row 3:
──────
name: all
parent:
create_query: CREATE WORKLOAD `all`
```
Columns:
- `name` (`String`) - Workload name.
- `parent` (`String`) - Parent workload name.
- `create_query` (`String`) - The definition of the workload.

View File

@ -43,6 +43,20 @@ Example:
</clickhouse>
```
An alternative way to express which disks are used by a resource is SQL syntax:
```sql
CREATE RESOURCE resource_name (WRITE DISK disk1, READ DISK disk2)
```
A resource can be used for any number of disks, for READ, for WRITE, or for both READ and WRITE. There is also a syntax allowing a resource to be used for all the disks:
```sql
CREATE RESOURCE all_io (READ ANY DISK, WRITE ANY DISK);
```
Note that server configuration options take priority over the SQL way to define resources.
## Workload markup {#workload_markup}
Queries can be marked with the setting `workload` to distinguish different workloads. If `workload` is not set, then the value "default" is used. Note that you are able to specify another value using settings profiles. Setting constraints can be used to make `workload` constant if you want all queries from a user to be marked with a fixed value of the `workload` setting.
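For instance, a query can be tagged as follows (the table and workload names are illustrative):
```sql
SELECT count() FROM my_table SETTINGS workload = 'production'
```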
@ -153,9 +167,48 @@ Example:
</clickhouse>
```
## Workload hierarchy (SQL only) {#workloads}
Defining resources and classifiers in XML could be challenging. ClickHouse provides SQL syntax that is much more convenient. All resources that were created with `CREATE RESOURCE` share the same structure of the hierarchy, but could differ in some aspects. Every workload created with `CREATE WORKLOAD` maintains a few automatically created scheduling nodes for every resource. A child workload can be created inside another parent workload. Here is an example that defines exactly the same hierarchy as the XML configuration above:
```sql
CREATE RESOURCE network_write (WRITE DISK s3)
CREATE RESOURCE network_read (READ DISK s3)
CREATE WORKLOAD all SETTINGS max_requests = 100
CREATE WORKLOAD development IN all
CREATE WORKLOAD production IN all SETTINGS weight = 3
```
The name of a leaf workload without children can be used in query settings: `SETTINGS workload = 'name'`. Note that workload classifiers are also created automatically when using SQL syntax.
To customize a workload, the following settings can be used:
* `priority` - sibling workloads are served according to static priority values (lower value means higher priority).
* `weight` - sibling workloads having the same static priority share resources according to weights.
* `max_requests` - the limit on the number of concurrent resource requests in this workload.
* `max_cost` - the limit on the total inflight bytes count of concurrent resource requests in this workload.
* `max_speed` - the limit on byte processing rate of this workload (the limit is independent for every resource).
* `max_burst` - maximum number of bytes that could be processed by the workload without being throttled (for every resource independently).
Note that workload settings are translated into a proper set of scheduling nodes. For more details, see the description of the scheduling node [types and options](#hierarchy).
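As a sketch of how several of these settings combine in one definition (the workload name and values are illustrative):
```sql
CREATE WORKLOAD analytics IN production SETTINGS priority = 1, weight = 2, max_requests = 50
```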
There is no way to specify different hierarchies of workloads for different resources. But there is a way to specify a different workload setting value for a specific resource:
```sql
CREATE OR REPLACE WORKLOAD all SETTINGS max_requests = 100, max_speed = 1000000 FOR network_read, max_speed = 2000000 FOR network_write
```
Also note that a workload or resource cannot be dropped if it is referenced from another workload. To update the definition of a workload, use the `CREATE OR REPLACE WORKLOAD` query.
## Workloads and resources storage {#workload_entity_storage}
Definitions of all workloads and resources in the form of `CREATE WORKLOAD` and `CREATE RESOURCE` queries are stored persistently either on disk at `workload_path` or in ZooKeeper at `workload_zookeeper_path`. ZooKeeper storage is recommended to achieve consistency between nodes. Alternatively, the `ON CLUSTER` clause can be used along with disk storage.
## See also
- [system.scheduler](/docs/en/operations/system-tables/scheduler.md)
- [system.workloads](/docs/en/operations/system-tables/workloads.md)
- [system.resources](/docs/en/operations/system-tables/resources.md)
- [merge_workload](/docs/en/operations/settings/merge-tree-settings.md#merge_workload) merge tree setting
- [merge_workload](/docs/en/operations/server-configuration-parameters/settings.md#merge_workload) global server setting
- [mutation_workload](/docs/en/operations/settings/merge-tree-settings.md#mutation_workload) merge tree setting
- [mutation_workload](/docs/en/operations/server-configuration-parameters/settings.md#mutation_workload) global server setting
- [workload_path](/docs/en/operations/server-configuration-parameters/settings.md#workload_path) global server setting
- [workload_zookeeper_path](/docs/en/operations/server-configuration-parameters/settings.md#workload_zookeeper_path) global server setting

View File

@ -17,7 +17,7 @@ anyLast(column) [RESPECT NULLS]
- `column`: The column name.
:::note
Supports the `RESPECT NULLS` modifier after the function name. Using this modifier will ensure the function selects the first value passed, regardless of whether it is `NULL` or not.
Supports the `RESPECT NULLS` modifier after the function name. Using this modifier will ensure the function selects the last value passed, regardless of whether it is `NULL` or not.
:::
**Returned value**
@ -40,4 +40,4 @@ SELECT anyLast(city) FROM any_last_nulls;
┌─anyLast(city)─┐
│ Valencia │
└───────────────┘
```
```

View File

@ -5,7 +5,7 @@ sidebar_label: JSON
keywords: [json, data type]
---
# JSON
# JSON Data Type
Stores JavaScript Object Notation (JSON) documents in a single column.
@ -58,10 +58,10 @@ SELECT json FROM test;
└───────────────────────────────────┘
```
Using CAST from 'String':
Using CAST from `String`:
```sql
SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::JSON as json;
SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::JSON AS json;
```
```text
@ -70,7 +70,47 @@ SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::JSON as json
└────────────────────────────────────────────────┘
```
CAST from `JSON`, named `Tuple`, `Map` and `Object('json')` to `JSON` type will be supported later.
Using CAST from `Tuple`:
```sql
SELECT (tuple(42 AS b) AS a, [1, 2, 3] AS c, 'Hello, World!' AS d)::JSON AS json;
```
```text
┌─json───────────────────────────────────────────┐
│ {"a":{"b":42},"c":[1,2,3],"d":"Hello, World!"} │
└────────────────────────────────────────────────┘
```
Using CAST from `Map`:
```sql
SELECT map('a', map('b', 42), 'c', [1,2,3], 'd', 'Hello, World!')::JSON AS json;
```
```text
┌─json───────────────────────────────────────────┐
│ {"a":{"b":42},"c":[1,2,3],"d":"Hello, World!"} │
└────────────────────────────────────────────────┘
```
Using CAST from deprecated `Object('json')`:
```sql
SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::Object('json')::JSON AS json;
```
```text
┌─json───────────────────────────────────────────┐
│ {"a":{"b":42},"c":[1,2,3],"d":"Hello, World!"} │
└────────────────────────────────────────────────┘
```
:::note
CAST from `Tuple`/`Map`/`Object('json')` to `JSON` is implemented by serializing the column into a `String` column containing JSON objects and deserializing it back to a `JSON` type column.
:::
CAST between `JSON` types with different arguments will be supported later.
## Reading JSON paths as subcolumns

View File

@ -12,7 +12,7 @@ Syntax:
``` sql
ALTER USER [IF EXISTS] name1 [RENAME TO new_name |, name2 [,...]]
[ON CLUSTER cluster_name]
[NOT IDENTIFIED | RESET AUTHENTICATION METHODS TO NEW | {IDENTIFIED | ADD IDENTIFIED} {[WITH {plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | WITH NO_PASSWORD | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'} | {WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa|...'} | {WITH http SERVER 'server_name' [SCHEME 'Basic']}
[NOT IDENTIFIED | RESET AUTHENTICATION METHODS TO NEW | {IDENTIFIED | ADD IDENTIFIED} {[WITH {plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | WITH NO_PASSWORD | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'} | {WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa|...'} | {WITH http SERVER 'server_name' [SCHEME 'Basic']} [VALID UNTIL datetime]
[, {[{plaintext_password | sha256_password | sha256_hash | ...}] BY {'password' | 'hash'}} | {ldap SERVER 'server_name'} | {...} | ... [,...]]]
[[ADD | DROP] HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[VALID UNTIL datetime]
@ -91,3 +91,15 @@ Reset authentication methods and keep the most recent added one:
``` sql
ALTER USER user1 RESET AUTHENTICATION METHODS TO NEW
```
## VALID UNTIL Clause
Allows you to specify the expiration date and, optionally, the time for an authentication method. It accepts a string as a parameter. It is recommended to use the `YYYY-MM-DD [hh:mm:ss] [timezone]` format for datetime. By default, this parameter equals `'infinity'`.
The `VALID UNTIL` clause can only be specified along with an authentication method, except for the case where no authentication method has been specified in the query. In this scenario, the `VALID UNTIL` clause will be applied to all existing authentication methods.
Examples:
- `ALTER USER name1 VALID UNTIL '2025-01-01'`
- `ALTER USER name1 VALID UNTIL '2025-01-01 12:00:00 UTC'`
- `ALTER USER name1 VALID UNTIL 'infinity'`
- `ALTER USER name1 IDENTIFIED WITH plaintext_password BY 'no_expiration', bcrypt_password BY 'expiration_set' VALID UNTIL '2025-01-01'`

View File

@ -11,7 +11,7 @@ Syntax:
``` sql
CREATE USER [IF NOT EXISTS | OR REPLACE] name1 [, name2 [,...]] [ON CLUSTER cluster_name]
[NOT IDENTIFIED | IDENTIFIED {[WITH {plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | WITH NO_PASSWORD | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'} | {WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa|...'} | {WITH http SERVER 'server_name' [SCHEME 'Basic']}
[NOT IDENTIFIED | IDENTIFIED {[WITH {plaintext_password | sha256_password | sha256_hash | double_sha1_password | double_sha1_hash}] BY {'password' | 'hash'}} | WITH NO_PASSWORD | {WITH ldap SERVER 'server_name'} | {WITH kerberos [REALM 'realm']} | {WITH ssl_certificate CN 'common_name' | SAN 'TYPE:subject_alt_name'} | {WITH ssh_key BY KEY 'public_key' TYPE 'ssh-rsa|...'} | {WITH http SERVER 'server_name' [SCHEME 'Basic']} [VALID UNTIL datetime]
[, {[{plaintext_password | sha256_password | sha256_hash | ...}] BY {'password' | 'hash'}} | {ldap SERVER 'server_name'} | {...} | ... [,...]]]
[HOST {LOCAL | NAME 'name' | REGEXP 'name_regexp' | IP 'address' | LIKE 'pattern'} [,...] | ANY | NONE]
[VALID UNTIL datetime]
@ -178,13 +178,16 @@ ClickHouse treats `user_name@'address'` as a username as a whole. Thus, technica
## VALID UNTIL Clause
Allows you to specify the expiration date and, optionally, the time for user credentials. It accepts a string as a parameter. It is recommended to use the `YYYY-MM-DD [hh:mm:ss] [timezone]` format for datetime. By default, this parameter equals `'infinity'`.
Allows you to specify the expiration date and, optionally, the time for an authentication method. It accepts a string as a parameter. It is recommended to use the `YYYY-MM-DD [hh:mm:ss] [timezone]` format for datetime. By default, this parameter equals `'infinity'`.
The `VALID UNTIL` clause can only be specified along with an authentication method, except for the case where no authentication method has been specified in the query. In this scenario, the `VALID UNTIL` clause will be applied to all existing authentication methods.
Examples:
- `CREATE USER name1 VALID UNTIL '2025-01-01'`
- `CREATE USER name1 VALID UNTIL '2025-01-01 12:00:00 UTC'`
- `CREATE USER name1 VALID UNTIL 'infinity'`
- ```CREATE USER name1 VALID UNTIL '2025-01-01 12:00:00 `Asia/Tokyo`'```
- `CREATE USER name1 IDENTIFIED WITH plaintext_password BY 'no_expiration', bcrypt_password BY 'expiration_set' VALID UNTIL '2025-01-01'`
## GRANTEES Clause

View File

@ -55,7 +55,7 @@ SELECT * FROM view(column1=value1, column2=value2 ...)
## Materialized View
``` sql
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] [TO[db.]name] [ENGINE = engine] [POPULATE]
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster_name] [TO[db.]name] [ENGINE = engine] [POPULATE]
[DEFINER = { user | CURRENT_USER }] [SQL SECURITY { DEFINER | INVOKER | NONE }]
AS SELECT ...
[COMMENT 'comment']

View File

@ -78,6 +78,10 @@ Specifying privileges you can use asterisk (`*`) instead of a table or a databas
Also, you can omit database name. In this case privileges are granted for current database.
For example, `GRANT SELECT ON * TO john` grants the privilege on all the tables in the current database, `GRANT SELECT ON mytable TO john` grants the privilege on the `mytable` table in the current database.
:::note
The feature described below is available starting from ClickHouse version 24.10.
:::
You can also put asterisks at the end of a table or a database name. This feature allows you to grant privileges on an abstract prefix of the table's path.
Example: `GRANT SELECT ON db.my_tables* TO john`. This query allows `john` to execute the `SELECT` query over all the tables in the `db` database whose names start with the prefix `my_tables`.
@ -113,6 +117,7 @@ GRANT SELECT ON db*.* TO john -- correct
GRANT SELECT ON *.my_table TO john -- wrong
GRANT SELECT ON foo*bar TO john -- wrong
GRANT SELECT ON *suffix TO john -- wrong
GRANT SELECT(foo) ON db.table* TO john -- wrong
```
## Privileges
@ -238,10 +243,13 @@ Hierarchy of privileges:
- `HDFS`
- `HIVE`
- `JDBC`
- `KAFKA`
- `MONGO`
- `MYSQL`
- `NATS`
- `ODBC`
- `POSTGRES`
- `RABBITMQ`
- `REDIS`
- `REMOTE`
- `S3`
@ -520,10 +528,13 @@ Allows using external data sources. Applies to [table engines](../../engines/tab
- `HDFS`. Level: `GLOBAL`
- `HIVE`. Level: `GLOBAL`
- `JDBC`. Level: `GLOBAL`
- `KAFKA`. Level: `GLOBAL`
- `MONGO`. Level: `GLOBAL`
- `MYSQL`. Level: `GLOBAL`
- `NATS`. Level: `GLOBAL`
- `ODBC`. Level: `GLOBAL`
- `POSTGRES`. Level: `GLOBAL`
- `RABBITMQ`. Level: `GLOBAL`
- `REDIS`. Level: `GLOBAL`
- `REMOTE`. Level: `GLOBAL`
- `S3`. Level: `GLOBAL`

View File

@ -83,7 +83,7 @@ The presence of long-running or incomplete mutations often indicates that a Clic
- Or manually kill some of these mutations by sending a `KILL` command.
``` sql
KILL MUTATION [ON CLUSTER cluster]
KILL MUTATION
WHERE <where expression to SELECT FROM system.mutations query>
[TEST]
[FORMAT format]
@ -135,7 +135,6 @@ KILL MUTATION WHERE database = 'default' AND table = 'table'
-- Cancel the specific mutation:
KILL MUTATION WHERE database = 'default' AND table = 'table' AND mutation_id = 'mutation_3.txt'
```
:::tip If you are killing a mutation in ClickHouse Cloud or in a self-managed cluster, then be sure to use the ```ON CLUSTER [cluster-name]``` option, in order to ensure the mutation is killed on all replicas:::
The query is useful when a mutation is stuck and cannot finish (e.g. if some function in the mutation query throws an exception when applied to the data contained in the table).

View File

@ -93,7 +93,6 @@ LIMIT 5;
ClickHouse can also determine the compression method of the file. For example, if the file was zipped up with a `.csv.gz` extension, ClickHouse would decompress the file automatically.
:::
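For example, a gzip-compressed CSV can be queried directly (the URL is illustrative):
```sql
SELECT * FROM s3('https://my-bucket.s3.amazonaws.com/data.csv.gz', 'CSVWithNames') LIMIT 5;
```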
## Usage
Suppose that we have several files with the following URIs on S3:
@ -248,6 +247,25 @@ FROM s3(
LIMIT 5;
```
## Using S3 credentials (ClickHouse Cloud)
For non-public buckets, users can pass an `aws_access_key_id` and `aws_secret_access_key` to the function. For example:
```sql
SELECT count() FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/mta/*.tsv', '<KEY>', '<SECRET>','TSVWithNames')
```
This is appropriate for one-off accesses or in cases where credentials can easily be rotated. However, this is not recommended as a long-term solution for repeated access or where credentials are sensitive. In this case, we recommend that users rely on role-based access.
Role-based access for S3 in ClickHouse Cloud is documented [here](/docs/en/cloud/security/secure-s3#access-your-s3-bucket-with-the-clickhouseaccess-role).
Once configured, a `roleARN` can be passed to the s3 function via an `extra_credentials` parameter. For example:
```sql
SELECT count() FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/mta/*.tsv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001'))
```
Further examples can be found [here](/docs/en/cloud/security/secure-s3#access-your-s3-bucket-with-the-clickhouseaccess-role).
## Working with archives
@ -266,6 +284,14 @@ FROM s3(
);
```
:::note
ClickHouse supports three archive formats:
- ZIP
- TAR
- 7Z
While ZIP and TAR archives can be accessed from any supported storage location, 7Z archives can only be read from the local filesystem where ClickHouse is installed.
:::
## Virtual Columns {#virtual-columns}

View File

@ -70,10 +70,15 @@ SELECT count(*) FROM s3Cluster(
)
```
## Accessing private and public buckets
Users can use the same approaches as documented for the s3 function [here](/docs/en/sql-reference/table-functions/s3#accessing-public-buckets).
## Optimizing performance
For details on optimizing the performance of the s3 function, see [our detailed guide](/docs/en/integrations/s3/performance).
**See Also**
- [S3 engine](../../engines/table-engines/integrations/s3.md)

View File

@ -138,6 +138,7 @@ CREATE TABLE table_with_asterisk (name String, value UInt32)
- `use_insecure_imds_request` — признак использования менее безопасного соединения при выполнении запроса к IMDS при получении учётных данных из метаданных Amazon EC2. Значение по умолчанию — `false`.
- `region` — название региона S3.
- `header` — добавляет указанный HTTP-заголовок к запросу на заданную точку приема запроса. Может быть определен несколько раз.
- `access_header` - добавляет указанный HTTP-заголовок к запросу на заданную точку приема запроса, в случая если не указаны другие способы авторизации.
- `server_side_encryption_customer_key_base64` — устанавливает необходимые заголовки для доступа к объектам S3 с шифрованием SSE-C.
- `single_read_retries` — Максимальное количество попыток запроса при единичном чтении. Значение по умолчанию — `4`.

View File

@ -95,7 +95,7 @@ sudo yum install -y clickhouse-server clickhouse-client
sudo systemctl enable clickhouse-server
sudo systemctl start clickhouse-server
sudo systemctl status clickhouse-server
clickhouse-client # илм "clickhouse-client --password" если установлен пароль
clickhouse-client # или "clickhouse-client --password" если установлен пароль
```
Для использования наиболее свежих версий нужно заменить `stable` на `testing` (рекомендуется для тестовых окружений). Также иногда доступен `prestable`.

View File

@ -39,7 +39,7 @@ SELECT a, b, c FROM (SELECT ...)
## Материализованные представления {#materialized}
``` sql
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] [TO[db.]name] [ENGINE = engine] [POPULATE]
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster_name] [TO[db.]name] [ENGINE = engine] [POPULATE]
[DEFINER = { user | CURRENT_USER }] [SQL SECURITY { DEFINER | INVOKER | NONE }]
AS SELECT ...
```

View File

@ -192,14 +192,23 @@ GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION
- `addressToSymbol`
- `demangle`
- [SOURCES](#grant-sources)
- `AZURE`
- `FILE`
- `URL`
- `REMOTE`
- `MYSQL`
- `ODBC`
- `JDBC`
- `HDFS`
- `HIVE`
- `JDBC`
- `KAFKA`
- `MONGO`
- `MYSQL`
- `NATS`
- `ODBC`
- `POSTGRES`
- `RABBITMQ`
- `REDIS`
- `REMOTE`
- `S3`
- `SQLITE`
- `URL`
- [dictGet](#grant-dictget)
Примеры того, как трактуется данная иерархия:
@ -461,14 +470,23 @@ GRANT INSERT(x,y) ON db.table TO john
Allows using external data sources. Applies to [table engines](../../engines/table-engines/index.md) and [table functions](../table-functions/index.md#table-functions).
- `SOURCES`. Level: `GROUP`
- `AZURE`. Level: `GLOBAL`
- `FILE`. Level: `GLOBAL`
- `URL`. Level: `GLOBAL`
- `REMOTE`. Level: `GLOBAL`
- `MYSQL`. Level: `GLOBAL`
- `ODBC`. Level: `GLOBAL`
- `JDBC`. Level: `GLOBAL`
- `HDFS`. Level: `GLOBAL`
- `HIVE`. Level: `GLOBAL`
- `JDBC`. Level: `GLOBAL`
- `KAFKA`. Level: `GLOBAL`
- `MONGO`. Level: `GLOBAL`
- `MYSQL`. Level: `GLOBAL`
- `NATS`. Level: `GLOBAL`
- `ODBC`. Level: `GLOBAL`
- `POSTGRES`. Level: `GLOBAL`
- `RABBITMQ`. Level: `GLOBAL`
- `REDIS`. Level: `GLOBAL`
- `REMOTE`. Level: `GLOBAL`
- `S3`. Level: `GLOBAL`
- `SQLITE`. Level: `GLOBAL`
- `URL`. Level: `GLOBAL`
The `SOURCES` privilege allows the use of all sources. You can also grant the privilege for each source individually. Using a source requires additional privileges.
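For example (the user name is hypothetical), the whole group or a single source can be granted:

``` sql
-- Grant all sources at once.
GRANT SOURCES ON *.* TO john;
-- Or grant a single source; using it still requires the matching
-- table or table-function privileges (e.g. CREATE TABLE, SELECT).
GRANT S3 ON *.* TO john;
```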

View File

@ -39,7 +39,7 @@ SELECT a, b, c FROM (SELECT ...)
## Materialized {#materialized}
``` sql
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] [TO[db.]name] [ENGINE = engine] [POPULATE] AS SELECT ...
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster_name] [TO[db.]name] [ENGINE = engine] [POPULATE] AS SELECT ...
```
The data in a materialized view is maintained by the corresponding [SELECT](../../../sql-reference/statements/select/index.md) query.

View File

@ -170,14 +170,23 @@ GRANT SELECT(x,y) ON db.table TO john WITH GRANT OPTION
- `addressToSymbol`
- `demangle`
- [SOURCES](#grant-sources)
- `AZURE`
- `FILE`
- `URL`
- `REMOTE`
- `YSQL`
- `ODBC`
- `JDBC`
- `HDFS`
- `HIVE`
- `JDBC`
- `KAFKA`
- `MONGO`
- `MYSQL`
- `NATS`
- `ODBC`
- `POSTGRES`
- `RABBITMQ`
- `REDIS`
- `REMOTE`
- `S3`
- `SQLITE`
- `URL`
- [dictGet](#grant-dictget)
Examples of how this hierarchy is treated:
@ -428,14 +437,23 @@ GRANT INSERT(x,y) ON db.table TO john
Allows using external data sources in [table engines](../../engines/table-engines/index.md) and [table functions](../../sql-reference/table-functions/index.md#table-functions).
- `SOURCES`. Level: `GROUP`
- `AZURE`. Level: `GLOBAL`
- `FILE`. Level: `GLOBAL`
- `URL`. Level: `GLOBAL`
- `REMOTE`. Level: `GLOBAL`
- `YSQL`. Level: `GLOBAL`
- `ODBC`. Level: `GLOBAL`
- `JDBC`. Level: `GLOBAL`
- `HDFS`. Level: `GLOBAL`
- `HIVE`. Level: `GLOBAL`
- `JDBC`. Level: `GLOBAL`
- `KAFKA`. Level: `GLOBAL`
- `MONGO`. Level: `GLOBAL`
- `MYSQL`. Level: `GLOBAL`
- `NATS`. Level: `GLOBAL`
- `ODBC`. Level: `GLOBAL`
- `POSTGRES`. Level: `GLOBAL`
- `RABBITMQ`. Level: `GLOBAL`
- `REDIS`. Level: `GLOBAL`
- `REMOTE`. Level: `GLOBAL`
- `S3`. Level: `GLOBAL`
- `SQLITE`. Level: `GLOBAL`
- `URL`. Level: `GLOBAL`
The `SOURCES` privilege allows the use of all data sources. You can also grant the privilege for each source individually. Using a source requires additional privileges.
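A hedged example with one of the newly listed sources (the user name is hypothetical):

``` sql
-- Allow reading from Kafka via the Kafka table engine or table function;
-- creating the table itself additionally requires CREATE TABLE.
GRANT KAFKA ON *.* TO etl_user;
```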

View File

@ -192,6 +192,10 @@ void Client::parseConnectionsCredentials(Poco::Util::AbstractConfiguration & con
history_file = home_path + "/" + history_file.substr(1);
config.setString("history_file", history_file);
}
if (config.has(prefix + ".history_max_entries"))
{
/// Propagate the per-connection value into the global config key.
config.setUInt("history_max_entries", config.getUInt(prefix + ".history_max_entries"));
}
if (config.has(prefix + ".accept-invalid-certificate"))
config.setBool("accept-invalid-certificate", config.getBool(prefix + ".accept-invalid-certificate"));
}

View File

@ -236,6 +236,7 @@ void DisksApp::runInteractiveReplxx()
ReplxxLineReader lr(
suggest,
history_file,
history_max_entries,
/* multiline= */ false,
/* ignore_shell_suspend= */ false,
query_extenders,
@ -398,6 +399,8 @@ void DisksApp::initializeHistoryFile()
throw;
}
}
history_max_entries = config().getUInt("history-max-entries", 1000000);
}
void DisksApp::init(const std::vector<String> & common_arguments)

View File

@ -62,6 +62,8 @@ private:
// Fields responsible for the REPL work
String history_file;
UInt32 history_max_entries = 0; /// Maximum number of entries in the history file. Initialized to 0 because there is no proper constructor; the actual value is set in initializeHistoryFile().
LineReader::Suggest suggest;
static LineReader::Patterns query_extenders;
static LineReader::Patterns query_delimiters;

View File

@ -243,6 +243,8 @@ void KeeperClient::initialize(Poco::Util::Application & /* self */)
}
}
history_max_entries = config().getUInt("history-max-entries", 1000000);
String default_log_level;
if (config().has("query"))
/// We don't want to see any information log in query mode, unless it was set explicitly
@ -319,6 +321,7 @@ void KeeperClient::runInteractiveReplxx()
ReplxxLineReader lr(
suggest,
history_file,
history_max_entries,
/* multiline= */ false,
/* ignore_shell_suspend= */ false,
query_extenders,

View File

@ -59,6 +59,8 @@ protected:
std::vector<String> getCompletions(const String & prefix) const;
String history_file;
UInt32 history_max_entries; /// Maximum number of entries in the history file.
LineReader::Suggest suggest;
zkutil::ZooKeeperArgs zk_args;

View File

@ -590,6 +590,7 @@ try
#if USE_SSL
CertificateReloader::instance().tryLoad(*config);
CertificateReloader::instance().tryLoadClient(*config);
#endif
});

View File

@ -821,11 +821,11 @@ void LocalServer::processConfig()
status.emplace(fs::path(path) / "status", StatusFile::write_full_info);
LOG_DEBUG(log, "Loading metadata from {}", path);
auto startup_system_tasks = loadMetadataSystem(global_context);
auto load_system_metadata_tasks = loadMetadataSystem(global_context);
attachSystemTablesServer(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::SYSTEM_DATABASE), false);
attachInformationSchema(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::INFORMATION_SCHEMA));
attachInformationSchema(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::INFORMATION_SCHEMA_UPPERCASE));
waitLoad(TablesLoaderForegroundPoolId, startup_system_tasks);
waitLoad(TablesLoaderForegroundPoolId, load_system_metadata_tasks);
if (!getClientConfiguration().has("only-system-tables"))
{

View File

@ -86,7 +86,7 @@
#include <Dictionaries/registerDictionaries.h>
#include <Disks/registerDisks.h>
#include <Common/Scheduler/Nodes/registerSchedulerNodes.h>
#include <Common/Scheduler/Nodes/registerResourceManagers.h>
#include <Common/Scheduler/Workload/IWorkloadEntityStorage.h>
#include <Common/Config/ConfigReloader.h>
#include <Server/HTTPHandlerFactory.h>
#include "MetricsTransmitter.h"
@ -168,9 +168,11 @@ namespace ServerSetting
{
extern const ServerSettingsUInt32 asynchronous_heavy_metrics_update_period_s;
extern const ServerSettingsUInt32 asynchronous_metrics_update_period_s;
extern const ServerSettingsBool asynchronous_metrics_enable_heavy_metrics;
extern const ServerSettingsBool async_insert_queue_flush_on_shutdown;
extern const ServerSettingsUInt64 async_insert_threads;
extern const ServerSettingsBool async_load_databases;
extern const ServerSettingsBool async_load_system_database;
extern const ServerSettingsUInt64 background_buffer_flush_schedule_pool_size;
extern const ServerSettingsUInt64 background_common_pool_size;
extern const ServerSettingsUInt64 background_distributed_schedule_pool_size;
@ -205,7 +207,6 @@ namespace ServerSetting
extern const ServerSettingsBool format_alter_operations_with_parentheses;
extern const ServerSettingsUInt64 global_profiler_cpu_time_period_ns;
extern const ServerSettingsUInt64 global_profiler_real_time_period_ns;
extern const ServerSettingsDouble gwp_asan_force_sample_probability;
extern const ServerSettingsUInt64 http_connections_soft_limit;
extern const ServerSettingsUInt64 http_connections_store_limit;
extern const ServerSettingsUInt64 http_connections_warn_limit;
@ -620,7 +621,7 @@ void sanityChecks(Server & server)
#if defined(OS_LINUX)
try
{
const std::unordered_set<std::string> fastClockSources = {
const std::unordered_set<std::string> fast_clock_sources = {
// ARM clock
"arch_sys_counter",
// KVM guest clock
@ -629,7 +630,7 @@ void sanityChecks(Server & server)
"tsc",
};
const char * filename = "/sys/devices/system/clocksource/clocksource0/current_clocksource";
if (!fastClockSources.contains(readLine(filename)))
if (!fast_clock_sources.contains(readLine(filename)))
server.context()->addWarningMessage("Linux is not using a fast clock source. Performance can be degraded. Check " + String(filename));
}
catch (...) // NOLINT(bugprone-empty-catch)
@ -919,7 +920,6 @@ try
registerFormats();
registerRemoteFileMetadatas();
registerSchedulerNodes();
registerResourceManagers();
CurrentMetrics::set(CurrentMetrics::Revision, ClickHouseRevision::getVersionRevision());
CurrentMetrics::set(CurrentMetrics::VersionInteger, ClickHouseRevision::getVersionInteger());
@ -1060,6 +1060,7 @@ try
ServerAsynchronousMetrics async_metrics(
global_context,
server_settings[ServerSetting::asynchronous_metrics_update_period_s],
server_settings[ServerSetting::asynchronous_metrics_enable_heavy_metrics],
server_settings[ServerSetting::asynchronous_heavy_metrics_update_period_s],
[&]() -> std::vector<ProtocolServerMetrics>
{
@ -1352,9 +1353,11 @@ try
}
FailPointInjection::enableFromGlobalConfig(config());
#endif
memory_worker.start();
#if defined(OS_LINUX)
int default_oom_score = 0;
#if !defined(NDEBUG)
@ -1927,10 +1930,6 @@ try
if (global_context->isServerCompletelyStarted())
CannotAllocateThreadFaultInjector::setFaultProbability(new_server_settings[ServerSetting::cannot_allocate_thread_fault_injection_probability]);
#if USE_GWP_ASAN
GWPAsan::setForceSampleProbability(new_server_settings[ServerSetting::gwp_asan_force_sample_probability]);
#endif
ProfileEvents::increment(ProfileEvents::MainConfigLoads);
/// Must be the last.
@ -2199,6 +2198,7 @@ try
LOG_INFO(log, "Loading metadata from {}", path_str);
LoadTaskPtrs load_system_metadata_tasks;
LoadTaskPtrs load_metadata_tasks;
// Make sure that if exception is thrown during startup async, new async loading jobs are not going to be called.
@ -2222,12 +2222,8 @@ try
auto & database_catalog = DatabaseCatalog::instance();
/// We load temporary database first, because projections need it.
database_catalog.initializeAndLoadTemporaryDatabase();
auto system_startup_tasks = loadMetadataSystem(global_context);
maybeConvertSystemDatabase(global_context, system_startup_tasks);
/// This has to be done before the initialization of system logs,
/// otherwise there is a race condition between the system database initialization
/// and creation of new tables in the database.
waitLoad(TablesLoaderForegroundPoolId, system_startup_tasks);
load_system_metadata_tasks = loadMetadataSystem(global_context, server_settings[ServerSetting::async_load_system_database]);
maybeConvertSystemDatabase(global_context, load_system_metadata_tasks);
/// Startup scripts can depend on the system log tables.
if (config().has("startup_scripts") && !server_settings[ServerSetting::prepare_system_log_tables_on_startup].changed)
@ -2258,6 +2254,8 @@ try
database_catalog.assertDatabaseExists(default_database);
/// Load user-defined SQL functions.
global_context->getUserDefinedSQLObjectsStorage().loadObjects();
/// Load WORKLOADs and RESOURCEs.
global_context->getWorkloadEntityStorage().loadEntities();
global_context->getRefreshSet().setRefreshesStopped(false);
}
@ -2271,10 +2269,19 @@ try
if (has_zookeeper && global_context->getMacros()->getMacroMap().contains("replica"))
{
auto zookeeper = global_context->getZooKeeper();
String stop_flag_path = "/clickhouse/stop_replicated_ddl_queries/{replica}";
stop_flag_path = global_context->getMacros()->expand(stop_flag_path);
found_stop_flag = zookeeper->exists(stop_flag_path);
try
{
auto zookeeper = global_context->getZooKeeper();
String stop_flag_path = "/clickhouse/stop_replicated_ddl_queries/{replica}";
stop_flag_path = global_context->getMacros()->expand(stop_flag_path);
found_stop_flag = zookeeper->exists(stop_flag_path);
}
catch (const Coordination::Exception & e)
{
if (e.code != Coordination::Error::ZCONNECTIONLOSS)
throw;
tryLogCurrentException(log);
}
}
if (found_stop_flag)
@ -2336,6 +2343,7 @@ try
#if USE_SSL
CertificateReloader::instance().tryLoad(config());
CertificateReloader::instance().tryLoadClient(config());
#endif
/// Must be done after initialization of `servers`, because async_metrics will access `servers` variable from its thread.
@ -2384,17 +2392,28 @@ try
if (has_zookeeper && config().has("distributed_ddl"))
{
/// DDL worker should be started after all tables were loaded
String ddl_zookeeper_path = config().getString("distributed_ddl.path", "/clickhouse/task_queue/ddl/");
String ddl_queue_path = config().getString("distributed_ddl.path", "/clickhouse/task_queue/ddl/");
String ddl_replicas_path = config().getString("distributed_ddl.replicas_path", "/clickhouse/task_queue/replicas/");
int pool_size = config().getInt("distributed_ddl.pool_size", 1);
if (pool_size < 1)
throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "distributed_ddl.pool_size should be greater than 0");
global_context->setDDLWorker(std::make_unique<DDLWorker>(pool_size, ddl_zookeeper_path, global_context, &config(),
"distributed_ddl", "DDLWorker",
&CurrentMetrics::MaxDDLEntryID, &CurrentMetrics::MaxPushedDDLEntryID),
load_metadata_tasks);
global_context->setDDLWorker(
std::make_unique<DDLWorker>(
pool_size,
ddl_queue_path,
ddl_replicas_path,
global_context,
&config(),
"distributed_ddl",
"DDLWorker",
&CurrentMetrics::MaxDDLEntryID,
&CurrentMetrics::MaxPushedDDLEntryID),
joinTasks(load_system_metadata_tasks, load_metadata_tasks));
}
/// Do not keep tasks in server, they should be kept inside databases. Used here to make dependent tasks only.
load_system_metadata_tasks.clear();
load_system_metadata_tasks.shrink_to_fit();
load_metadata_tasks.clear();
load_metadata_tasks.shrink_to_fit();
@ -2420,7 +2439,6 @@ try
#if USE_GWP_ASAN
GWPAsan::initFinished();
GWPAsan::setForceSampleProbability(server_settings[ServerSetting::gwp_asan_force_sample_probability]);
#endif
try
@ -3014,7 +3032,7 @@ void Server::updateServers(
for (auto * server : all_servers)
{
if (!server->isStopping())
if (server->supportsRuntimeReconfiguration() && !server->isStopping())
{
std::string port_name = server->getPortName();
bool has_host = false;

View File

@ -1399,6 +1399,10 @@
If not specified they will be stored locally. -->
<!-- <user_defined_zookeeper_path>/clickhouse/user_defined</user_defined_zookeeper_path> -->
<!-- Path in ZooKeeper to store workloads and resources created by the commands CREATE WORKLOAD and CREATE RESOURCE.
If not specified they will be stored locally. -->
<!-- <workload_zookeeper_path>/clickhouse/workload/definitions.sql</workload_zookeeper_path> -->
<!-- Uncomment if you want data to be compressed 30-100% better.
Don't do that if you just started using ClickHouse.
-->
@ -1450,6 +1454,8 @@
<distributed_ddl>
<!-- Path in ZooKeeper to queue with DDL queries -->
<path>/clickhouse/task_queue/ddl</path>
<!-- Path in ZooKeeper to store running DDL hosts -->
<replicas_path>/clickhouse/task_queue/replicas</replicas_path>
<!-- Settings from this profile will be used to execute DDL queries -->
<!-- <profile>default</profile> -->
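Any `ON CLUSTER` DDL statement travels through this queue; a sketch with a hypothetical cluster and table (the statement is enqueued under `distributed_ddl.path`, and per-host execution state lives under the new `distributed_ddl.replicas_path`):

``` sql
CREATE TABLE db.events ON CLUSTER my_cluster
(
    id UInt64,
    ts DateTime
)
ENGINE = ReplicatedMergeTree
ORDER BY id;
```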

View File

@ -608,7 +608,7 @@ AuthResult AccessControl::authenticate(const Credentials & credentials, const Po
}
catch (...)
{
tryLogCurrentException(getLogger(), "from: " + address.toString() + ", user: " + credentials.getUserName() + ": Authentication failed");
tryLogCurrentException(getLogger(), "from: " + address.toString() + ", user: " + credentials.getUserName() + ": Authentication failed", LogsLevel::information);
WriteBufferFromOwnString message;
message << credentials.getUserName() << ": Authentication failed: password is incorrect, or there is no user with such name.";
@ -622,8 +622,9 @@ AuthResult AccessControl::authenticate(const Credentials & credentials, const Po
<< "and deleting this file will reset the password.\n"
<< "See also /etc/clickhouse-server/users.xml on the server where ClickHouse is installed.\n\n";
/// We use the same message for all authentication failures because we don't want to give away any unnecessary information for security reasons,
/// only the log will show the exact reason.
/// We use the same message for all authentication failures because we don't want to give away any unnecessary information for security reasons.
/// Only the log ((*), above) will show the exact reason. Note that (*) logs at information level instead of the default error level as
/// authentication failures are not an unusual event.
throw Exception(PreformattedMessage{message.str(),
"{}: Authentication failed: password is incorrect, or there is no user with such name",
std::vector<std::string>{credentials.getUserName()}},

View File

@ -1,12 +1,16 @@
#include <Access/AccessControl.h>
#include <Access/AuthenticationData.h>
#include <Common/Exception.h>
#include <Interpreters/Access/getValidUntilFromAST.h>
#include <Interpreters/Context.h>
#include <Interpreters/evaluateConstantExpression.h>
#include <Parsers/ASTExpressionList.h>
#include <Parsers/ASTLiteral.h>
#include <Parsers/Access/ASTPublicSSHKey.h>
#include <Storages/checkAndGetLiteralArgument.h>
#include <IO/parseDateTimeBestEffort.h>
#include <IO/ReadHelpers.h>
#include <IO/ReadBufferFromString.h>
#include <Common/OpenSSLHelpers.h>
#include <Poco/SHA1Engine.h>
@ -113,7 +117,8 @@ bool operator ==(const AuthenticationData & lhs, const AuthenticationData & rhs)
&& (lhs.ssh_keys == rhs.ssh_keys)
#endif
&& (lhs.http_auth_scheme == rhs.http_auth_scheme)
&& (lhs.http_auth_server_name == rhs.http_auth_server_name);
&& (lhs.http_auth_server_name == rhs.http_auth_server_name)
&& (lhs.valid_until == rhs.valid_until);
}
@ -384,14 +389,34 @@ std::shared_ptr<ASTAuthenticationData> AuthenticationData::toAST() const
throw Exception(ErrorCodes::LOGICAL_ERROR, "AST: Unexpected authentication type {}", toString(auth_type));
}
if (valid_until)
{
WriteBufferFromOwnString out;
writeDateTimeText(valid_until, out);
node->valid_until = std::make_shared<ASTLiteral>(out.str());
}
return node;
}
AuthenticationData AuthenticationData::fromAST(const ASTAuthenticationData & query, ContextPtr context, bool validate)
{
time_t valid_until = 0;
if (query.valid_until)
{
valid_until = getValidUntilFromAST(query.valid_until, context);
}
if (query.type && query.type == AuthenticationType::NO_PASSWORD)
return AuthenticationData();
{
AuthenticationData auth_data;
auth_data.setValidUntil(valid_until);
return auth_data;
}
/// For this type of authentication we have ASTPublicSSHKey as children for ASTAuthenticationData
if (query.type && query.type == AuthenticationType::SSH_KEY)
@ -418,6 +443,7 @@ AuthenticationData AuthenticationData::fromAST(const ASTAuthenticationData & que
}
auth_data.setSSHKeys(std::move(keys));
auth_data.setValidUntil(valid_until);
return auth_data;
#else
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "SSH is disabled, because ClickHouse is built without libssh");
@ -451,6 +477,8 @@ AuthenticationData AuthenticationData::fromAST(const ASTAuthenticationData & que
AuthenticationData auth_data(current_type);
auth_data.setValidUntil(valid_until);
if (validate)
context->getAccessControl().checkPasswordComplexityRules(value);
@ -494,6 +522,7 @@ AuthenticationData AuthenticationData::fromAST(const ASTAuthenticationData & que
}
AuthenticationData auth_data(*query.type);
auth_data.setValidUntil(valid_until);
if (query.contains_hash)
{

View File

@ -74,6 +74,9 @@ public:
const String & getHTTPAuthenticationServerName() const { return http_auth_server_name; }
void setHTTPAuthenticationServerName(const String & name) { http_auth_server_name = name; }
time_t getValidUntil() const { return valid_until; }
void setValidUntil(time_t valid_until_) { valid_until = valid_until_; }
friend bool operator ==(const AuthenticationData & lhs, const AuthenticationData & rhs);
friend bool operator !=(const AuthenticationData & lhs, const AuthenticationData & rhs) { return !(lhs == rhs); }
@ -106,6 +109,7 @@ private:
/// HTTP authentication properties
String http_auth_server_name;
HTTPAuthenticationScheme http_auth_scheme = HTTPAuthenticationScheme::BASIC;
time_t valid_until = 0;
};
}
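With `valid_until` now carried inside `AuthenticationData`, an expiration can be attached to an authentication method at the SQL level; a hedged sketch (user name, password, and date are placeholders):

``` sql
-- The credential stops being accepted after the given timestamp.
CREATE USER jane IDENTIFIED WITH sha256_password BY 'secret'
VALID UNTIL '2025-01-01 00:00:00';
```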

View File

@ -99,6 +99,8 @@ enum class AccessType : uint8_t
M(CREATE_ARBITRARY_TEMPORARY_TABLE, "", GLOBAL, CREATE) /* allows to create and manipulate temporary tables
with arbitrary table engine */\
M(CREATE_FUNCTION, "", GLOBAL, CREATE) /* allows to execute CREATE FUNCTION */ \
M(CREATE_WORKLOAD, "", GLOBAL, CREATE) /* allows to execute CREATE WORKLOAD */ \
M(CREATE_RESOURCE, "", GLOBAL, CREATE) /* allows to execute CREATE RESOURCE */ \
M(CREATE_NAMED_COLLECTION, "", NAMED_COLLECTION, NAMED_COLLECTION_ADMIN) /* allows to execute CREATE NAMED COLLECTION */ \
M(CREATE, "", GROUP, ALL) /* allows to execute {CREATE|ATTACH} */ \
\
@ -108,6 +110,8 @@ enum class AccessType : uint8_t
implicitly enabled by the grant DROP_TABLE */\
M(DROP_DICTIONARY, "", DICTIONARY, DROP) /* allows to execute {DROP|DETACH} DICTIONARY */\
M(DROP_FUNCTION, "", GLOBAL, DROP) /* allows to execute DROP FUNCTION */\
M(DROP_WORKLOAD, "", GLOBAL, DROP) /* allows to execute DROP WORKLOAD */\
M(DROP_RESOURCE, "", GLOBAL, DROP) /* allows to execute DROP RESOURCE */\
M(DROP_NAMED_COLLECTION, "", NAMED_COLLECTION, NAMED_COLLECTION_ADMIN) /* allows to execute DROP NAMED COLLECTION */\
M(DROP, "", GROUP, ALL) /* allows to execute {DROP|DETACH} */\
\
@ -159,6 +163,7 @@ enum class AccessType : uint8_t
M(SYSTEM_SHUTDOWN, "SYSTEM KILL, SHUTDOWN", GLOBAL, SYSTEM) \
M(SYSTEM_DROP_DNS_CACHE, "SYSTEM DROP DNS, DROP DNS CACHE, DROP DNS", GLOBAL, SYSTEM_DROP_CACHE) \
M(SYSTEM_DROP_CONNECTIONS_CACHE, "SYSTEM DROP CONNECTIONS CACHE, DROP CONNECTIONS CACHE", GLOBAL, SYSTEM_DROP_CACHE) \
M(SYSTEM_PREWARM_MARK_CACHE, "SYSTEM PREWARM MARK, PREWARM MARK CACHE, PREWARM MARKS", GLOBAL, SYSTEM_DROP_CACHE) \
M(SYSTEM_DROP_MARK_CACHE, "SYSTEM DROP MARK, DROP MARK CACHE, DROP MARKS", GLOBAL, SYSTEM_DROP_CACHE) \
M(SYSTEM_DROP_UNCOMPRESSED_CACHE, "SYSTEM DROP UNCOMPRESSED, DROP UNCOMPRESSED CACHE, DROP UNCOMPRESSED", GLOBAL, SYSTEM_DROP_CACHE) \
M(SYSTEM_DROP_MMAP_CACHE, "SYSTEM DROP MMAP, DROP MMAP CACHE, DROP MMAP", GLOBAL, SYSTEM_DROP_CACHE) \
@ -238,6 +243,9 @@ enum class AccessType : uint8_t
M(S3, "", GLOBAL, SOURCES) \
M(HIVE, "", GLOBAL, SOURCES) \
M(AZURE, "", GLOBAL, SOURCES) \
M(KAFKA, "", GLOBAL, SOURCES) \
M(NATS, "", GLOBAL, SOURCES) \
M(RABBITMQ, "", GLOBAL, SOURCES) \
M(SOURCES, "", GROUP, ALL) \
\
M(CLUSTER, "", GLOBAL, ALL) /* ON CLUSTER queries */ \
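The new `CREATE_WORKLOAD` and `CREATE_RESOURCE` access types guard the corresponding DDL. A hedged sketch of the statements they protect, with hypothetical names and following the syntax as introduced in 24.10:

``` sql
-- Requires the CREATE RESOURCE privilege.
CREATE RESOURCE network_io (READ DISK s3, WRITE DISK s3);
-- Requires the CREATE WORKLOAD privilege.
CREATE WORKLOAD all;
CREATE WORKLOAD production IN all SETTINGS weight = 3;
```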

View File

@ -52,7 +52,10 @@ namespace
{AccessType::HDFS, "HDFS"},
{AccessType::S3, "S3"},
{AccessType::HIVE, "Hive"},
{AccessType::AZURE, "AzureBlobStorage"}
{AccessType::AZURE, "AzureBlobStorage"},
{AccessType::KAFKA, "Kafka"},
{AccessType::NATS, "NATS"},
{AccessType::RABBITMQ, "RabbitMQ"}
};
@ -701,15 +704,17 @@ bool ContextAccess::checkAccessImplHelper(const ContextPtr & context, AccessFlag
const AccessFlags dictionary_ddl = AccessType::CREATE_DICTIONARY | AccessType::DROP_DICTIONARY;
const AccessFlags function_ddl = AccessType::CREATE_FUNCTION | AccessType::DROP_FUNCTION;
const AccessFlags workload_ddl = AccessType::CREATE_WORKLOAD | AccessType::DROP_WORKLOAD;
const AccessFlags resource_ddl = AccessType::CREATE_RESOURCE | AccessType::DROP_RESOURCE;
const AccessFlags table_and_dictionary_ddl = table_ddl | dictionary_ddl;
const AccessFlags table_and_dictionary_and_function_ddl = table_ddl | dictionary_ddl | function_ddl;
const AccessFlags write_table_access = AccessType::INSERT | AccessType::OPTIMIZE;
const AccessFlags write_dcl_access = AccessType::ACCESS_MANAGEMENT - AccessType::SHOW_ACCESS;
const AccessFlags not_readonly_flags = write_table_access | table_and_dictionary_and_function_ddl | write_dcl_access | AccessType::SYSTEM | AccessType::KILL_QUERY;
const AccessFlags not_readonly_flags = write_table_access | table_and_dictionary_and_function_ddl | workload_ddl | resource_ddl | write_dcl_access | AccessType::SYSTEM | AccessType::KILL_QUERY;
const AccessFlags not_readonly_1_flags = AccessType::CREATE_TEMPORARY_TABLE;
const AccessFlags ddl_flags = table_ddl | dictionary_ddl | function_ddl;
const AccessFlags ddl_flags = table_ddl | dictionary_ddl | function_ddl | workload_ddl | resource_ddl;
const AccessFlags introspection_flags = AccessType::INTROSPECTION;
};
static const PrecalculatedFlags precalc;

View File

@ -15,6 +15,9 @@ public:
explicit Credentials() = default;
explicit Credentials(const String & user_name_);
Credentials(const Credentials &) = default;
Credentials(Credentials &&) = default;
virtual ~Credentials() = default;
const String & getUserName() const;

View File

@ -554,7 +554,7 @@ std::optional<AuthResult> IAccessStorage::authenticateImpl(
continue;
}
if (areCredentialsValid(user->getName(), user->valid_until, auth_method, credentials, external_authenticators, auth_result.settings))
if (areCredentialsValid(user->getName(), auth_method, credentials, external_authenticators, auth_result.settings))
{
auth_result.authentication_data = auth_method;
return auth_result;
@ -579,7 +579,6 @@ std::optional<AuthResult> IAccessStorage::authenticateImpl(
bool IAccessStorage::areCredentialsValid(
const std::string & user_name,
time_t valid_until,
const AuthenticationData & authentication_method,
const Credentials & credentials,
const ExternalAuthenticators & external_authenticators,
@ -591,6 +590,7 @@ bool IAccessStorage::areCredentialsValid(
if (credentials.getUserName() != user_name)
return false;
auto valid_until = authentication_method.getValidUntil();
if (valid_until)
{
const time_t now = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());

View File

@ -236,7 +236,6 @@ protected:
bool allow_plaintext_password) const;
virtual bool areCredentialsValid(
const std::string & user_name,
time_t valid_until,
const AuthenticationData & authentication_method,
const Credentials & credentials,
const ExternalAuthenticators & external_authenticators,

View File

@ -19,8 +19,7 @@ bool User::equal(const IAccessEntity & other) const
return (authentication_methods == other_user.authentication_methods)
&& (allowed_client_hosts == other_user.allowed_client_hosts)
&& (access == other_user.access) && (granted_roles == other_user.granted_roles) && (default_roles == other_user.default_roles)
&& (settings == other_user.settings) && (grantees == other_user.grantees) && (default_database == other_user.default_database)
&& (valid_until == other_user.valid_until);
&& (settings == other_user.settings) && (grantees == other_user.grantees) && (default_database == other_user.default_database);
}
void User::setName(const String & name_)
@ -88,7 +87,6 @@ void User::clearAllExceptDependencies()
access = {};
settings.removeSettingsKeepProfiles();
default_database = {};
valid_until = 0;
}
}

View File

@ -23,7 +23,6 @@ struct User : public IAccessEntity
SettingsProfileElements settings;
RolesOrUsersSet grantees = RolesOrUsersSet::AllTag{};
String default_database;
time_t valid_until = 0;
bool equal(const IAccessEntity & other) const override;
std::shared_ptr<IAccessEntity> clone() const override { return cloneImpl<User>(); }

View File

@ -59,13 +59,13 @@ constexpr size_t group_array_sorted_sort_strategy_max_elements_threshold = 10000
template <typename T, GroupArraySortedStrategy strategy>
struct GroupArraySortedData
{
static constexpr bool is_value_generic_field = std::is_same_v<T, Field>;
using Allocator = MixedAlignedArenaAllocator<alignof(T), 4096>;
using Array = PODArray<T, 32, Allocator>;
using Array = typename std::conditional_t<is_value_generic_field, std::vector<T>, PODArray<T, 32, Allocator>>;
static constexpr size_t partial_sort_max_elements_factor = 2;
static constexpr bool is_value_generic_field = std::is_same_v<T, Field>;
Array values;
static bool compare(const T & lhs, const T & rhs)
@ -144,7 +144,7 @@ struct GroupArraySortedData
}
if (values.size() > max_elements)
values.resize(max_elements, arena);
resize(max_elements, arena);
}
ALWAYS_INLINE void partialSortAndLimitIfNeeded(size_t max_elements, Arena * arena)
@ -153,7 +153,23 @@ struct GroupArraySortedData
return;
::nth_element(values.begin(), values.begin() + max_elements, values.end(), Comparator());
values.resize(max_elements, arena);
resize(max_elements, arena);
}
ALWAYS_INLINE void resize(size_t n, Arena * arena)
{
if constexpr (is_value_generic_field)
values.resize(n);
else
values.resize(n, arena);
}
ALWAYS_INLINE void push_back(T && element, Arena * arena)
{
if constexpr (is_value_generic_field)
values.push_back(element);
else
values.push_back(element, arena);
}
ALWAYS_INLINE void addElement(T && element, size_t max_elements, Arena * arena)
@ -171,12 +187,12 @@ struct GroupArraySortedData
return;
}
values.push_back(std::move(element), arena);
push_back(std::move(element), arena);
std::push_heap(values.begin(), values.end(), Comparator());
}
else
{
values.push_back(std::move(element), arena);
push_back(std::move(element), arena);
partialSortAndLimitIfNeeded(max_elements, arena);
}
}
@ -210,14 +226,6 @@ struct GroupArraySortedData
result_array_data[result_array_data_insert_begin + i] = values[i];
}
}
~GroupArraySortedData()
{
for (auto & value : values)
{
value.~T();
}
}
};
template <typename T>
@ -313,14 +321,12 @@ public:
throw Exception(ErrorCodes::TOO_LARGE_ARRAY_SIZE, "Too large array size, it should not exceed {}", max_elements);
auto & values = this->data(place).values;
values.resize_exact(size, arena);
if constexpr (std::is_same_v<T, Field>)
if constexpr (Data::is_value_generic_field)
{
values.resize(size);
for (Field & element : values)
{
/// We must initialize the Field type since some internal functions (like operator=) use them
new (&element) Field;
bool has_value = false;
readBinary(has_value, buf);
if (has_value)
@ -329,6 +335,7 @@ public:
}
else
{
values.resize_exact(size, arena);
if constexpr (std::endian::native == std::endian::little)
{
buf.readStrict(reinterpret_cast<char *>(values.data()), size * sizeof(values[0]));
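The `Field` branch above is exercised by generic argument types, while plain numerics keep the `PODArray` fast path; a hedged usage example (assuming tuple arguments are accepted):

``` sql
-- Numeric argument: PODArray storage.
SELECT groupArraySorted(5)(number) FROM numbers(100);
-- Tuple argument: values are stored as generic Fields (std::vector branch).
SELECT groupArraySorted(3)(tuple(number % 10, number)) FROM numbers(100);
```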

View File

@ -1,2 +1,2 @@
clickhouse_add_executable(aggregate_function_state_deserialization_fuzzer aggregate_function_state_deserialization_fuzzer.cpp ${SRCS})
target_link_libraries(aggregate_function_state_deserialization_fuzzer PRIVATE clickhouse_aggregate_functions)
target_link_libraries(aggregate_function_state_deserialization_fuzzer PRIVATE clickhouse_aggregate_functions dbms)

View File

@ -227,8 +227,13 @@ void QueryAnalyzer::resolveConstantExpression(QueryTreeNodePtr & node, const Que
scope.context = context;
auto node_type = node->getNodeType();
if (node_type == QueryTreeNodeType::QUERY || node_type == QueryTreeNodeType::UNION)
{
evaluateScalarSubqueryIfNeeded(node, scope);
return;
}
if (table_expression && node_type != QueryTreeNodeType::QUERY && node_type != QueryTreeNodeType::UNION)
if (table_expression)
{
scope.expression_join_tree_node = table_expression;
validateTableExpressionModifiers(scope.expression_join_tree_node, scope);

View File

@ -0,0 +1,135 @@
#include <Backups/BackupConcurrencyCheck.h>
#include <Common/Exception.h>
#include <Common/logger_useful.h>
namespace DB
{
namespace ErrorCodes
{
extern const int CONCURRENT_ACCESS_NOT_SUPPORTED;
}
BackupConcurrencyCheck::BackupConcurrencyCheck(
const UUID & backup_or_restore_uuid_,
bool is_restore_,
bool on_cluster_,
bool allow_concurrency_,
BackupConcurrencyCounters & counters_)
: is_restore(is_restore_), backup_or_restore_uuid(backup_or_restore_uuid_), on_cluster(on_cluster_), counters(counters_)
{
std::lock_guard lock{counters.mutex};
if (!allow_concurrency_)
{
bool found_concurrent_operation = false;
if (is_restore)
{
size_t num_local_restores = counters.local_restores;
size_t num_on_cluster_restores = counters.on_cluster_restores.size();
if (on_cluster)
{
if (!counters.on_cluster_restores.contains(backup_or_restore_uuid))
++num_on_cluster_restores;
}
else
{
++num_local_restores;
}
found_concurrent_operation = (num_local_restores + num_on_cluster_restores > 1);
}
else
{
size_t num_local_backups = counters.local_backups;
size_t num_on_cluster_backups = counters.on_cluster_backups.size();
if (on_cluster)
{
if (!counters.on_cluster_backups.contains(backup_or_restore_uuid))
++num_on_cluster_backups;
}
else
{
++num_local_backups;
}
found_concurrent_operation = (num_local_backups + num_on_cluster_backups > 1);
}
if (found_concurrent_operation)
throwConcurrentOperationNotAllowed(is_restore);
}
if (on_cluster)
{
if (is_restore)
++counters.on_cluster_restores[backup_or_restore_uuid];
else
++counters.on_cluster_backups[backup_or_restore_uuid];
}
else
{
if (is_restore)
++counters.local_restores;
else
++counters.local_backups;
}
}
BackupConcurrencyCheck::~BackupConcurrencyCheck()
{
std::lock_guard lock{counters.mutex};
if (on_cluster)
{
if (is_restore)
{
auto it = counters.on_cluster_restores.find(backup_or_restore_uuid);
if (it != counters.on_cluster_restores.end())
{
if (!--it->second)
counters.on_cluster_restores.erase(it);
}
}
else
{
auto it = counters.on_cluster_backups.find(backup_or_restore_uuid);
if (it != counters.on_cluster_backups.end())
{
if (!--it->second)
counters.on_cluster_backups.erase(it);
}
}
}
else
{
if (is_restore)
--counters.local_restores;
else
--counters.local_backups;
}
}
void BackupConcurrencyCheck::throwConcurrentOperationNotAllowed(bool is_restore)
{
throw Exception(
ErrorCodes::CONCURRENT_ACCESS_NOT_SUPPORTED,
"Concurrent {} are not allowed, turn on setting '{}'",
is_restore ? "restores" : "backups",
is_restore ? "allow_concurrent_restores" : "allow_concurrent_backups");
}
BackupConcurrencyCounters::BackupConcurrencyCounters() = default;
BackupConcurrencyCounters::~BackupConcurrencyCounters()
{
if (local_backups > 0 || local_restores > 0 || !on_cluster_backups.empty() || !on_cluster_restores.empty())
LOG_ERROR(getLogger(__PRETTY_FUNCTION__), "Some backups or restores are still in progress");
}
}
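When concurrency is disallowed, the second overlapping operation fails in the constructor above with `CONCURRENT_ACCESS_NOT_SUPPORTED`; a hedged SQL illustration (disk and file names are hypothetical):

``` sql
-- Session 1:
BACKUP DATABASE default TO Disk('backups', 'b1.zip');
-- Session 2, started while the first backup is still running and
-- allow_concurrent_backups = false, fails with CONCURRENT_ACCESS_NOT_SUPPORTED:
BACKUP DATABASE default TO Disk('backups', 'b2.zip');
```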

View File

@ -0,0 +1,55 @@
#pragma once
#include <Core/UUID.h>
#include <base/scope_guard.h>
#include <mutex>
#include <unordered_map>
namespace DB
{
class BackupConcurrencyCounters;
/// Local checker for concurrent BACKUP or RESTORE operations.
/// This class is used by implementations of IBackupCoordination and IRestoreCoordination
/// to throw an exception if concurrent backups or restores are not allowed.
class BackupConcurrencyCheck
{
public:
/// Checks concurrency of a BACKUP operation or a RESTORE operation.
/// Keep a constructed instance of BackupConcurrencyCheck until the operation is done.
BackupConcurrencyCheck(
const UUID & backup_or_restore_uuid_,
bool is_restore_,
bool on_cluster_,
bool allow_concurrency_,
BackupConcurrencyCounters & counters_);
~BackupConcurrencyCheck();
[[noreturn]] static void throwConcurrentOperationNotAllowed(bool is_restore);
private:
const bool is_restore;
const UUID backup_or_restore_uuid;
const bool on_cluster;
BackupConcurrencyCounters & counters;
};
class BackupConcurrencyCounters
{
public:
BackupConcurrencyCounters();
~BackupConcurrencyCounters();
private:
friend class BackupConcurrencyCheck;
size_t local_backups TSA_GUARDED_BY(mutex) = 0;
size_t local_restores TSA_GUARDED_BY(mutex) = 0;
std::unordered_map<UUID /* backup_uuid */, size_t /* num_refs */> on_cluster_backups TSA_GUARDED_BY(mutex);
std::unordered_map<UUID /* restore_uuid */, size_t /* num_refs */> on_cluster_restores TSA_GUARDED_BY(mutex);
std::mutex mutex;
};
}

View File

@ -0,0 +1,64 @@
#include <Backups/BackupCoordinationCleaner.h>
namespace DB
{
BackupCoordinationCleaner::BackupCoordinationCleaner(const String & zookeeper_path_, const WithRetries & with_retries_, LoggerPtr log_)
: zookeeper_path(zookeeper_path_), with_retries(with_retries_), log(log_)
{
}
void BackupCoordinationCleaner::cleanup()
{
tryRemoveAllNodes(/* throw_if_error = */ true, /* retries_kind = */ WithRetries::kNormal);
}
bool BackupCoordinationCleaner::tryCleanupAfterError() noexcept
{
return tryRemoveAllNodes(/* throw_if_error = */ false, /* retries_kind = */ WithRetries::kNormal);
}
bool BackupCoordinationCleaner::tryRemoveAllNodes(bool throw_if_error, WithRetries::Kind retries_kind)
{
{
std::lock_guard lock{mutex};
if (cleanup_result.succeeded)
return true;
if (cleanup_result.exception)
{
if (throw_if_error)
std::rethrow_exception(cleanup_result.exception);
return false;
}
}
try
{
LOG_TRACE(log, "Removing nodes from ZooKeeper");
auto holder = with_retries.createRetriesControlHolder("removeAllNodes", retries_kind);
holder.retries_ctl.retryLoop([&, &zookeeper = holder.faulty_zookeeper]()
{
with_retries.renewZooKeeper(zookeeper);
zookeeper->removeRecursive(zookeeper_path);
});
std::lock_guard lock{mutex};
cleanup_result.succeeded = true;
return true;
}
catch (...)
{
LOG_TRACE(log, "Caught exception while removing nodes from ZooKeeper for this restore: {}",
getCurrentExceptionMessage(/* with_stacktrace= */ false, /* check_embedded_stacktrace= */ true));
std::lock_guard lock{mutex};
cleanup_result.exception = std::current_exception();
if (throw_if_error)
throw;
return false;
}
}
}

View File

@ -0,0 +1,40 @@
#pragma once
#include <Backups/WithRetries.h>
namespace DB
{
/// Removes all the nodes from ZooKeeper used to coordinate a BACKUP ON CLUSTER operation or
/// a RESTORE ON CLUSTER operation (successful or not).
/// This class is used by BackupCoordinationOnCluster and RestoreCoordinationOnCluster to cleanup.
class BackupCoordinationCleaner
{
public:
BackupCoordinationCleaner(const String & zookeeper_path_, const WithRetries & with_retries_, LoggerPtr log_);
void cleanup();
bool tryCleanupAfterError() noexcept;
private:
bool tryRemoveAllNodes(bool throw_if_error, WithRetries::Kind retries_kind);
const String zookeeper_path;
/// A reference to a field of the parent object which is either BackupCoordinationOnCluster or RestoreCoordinationOnCluster.
const WithRetries & with_retries;
const LoggerPtr log;
struct CleanupResult
{
bool succeeded = false;
std::exception_ptr exception;
};
CleanupResult cleanup_result TSA_GUARDED_BY(mutex);
std::mutex mutex;
};
}

View File

@ -1,5 +1,7 @@
#include <Backups/BackupCoordinationLocal.h>
#include <Common/Exception.h>
#include <Common/ZooKeeper/ZooKeeperRetries.h>
#include <Common/logger_useful.h>
#include <Common/quoteString.h>
#include <fmt/format.h>
@ -8,27 +10,20 @@
namespace DB
{
BackupCoordinationLocal::BackupCoordinationLocal(bool plain_backup_)
: log(getLogger("BackupCoordinationLocal")), file_infos(plain_backup_)
BackupCoordinationLocal::BackupCoordinationLocal(
const UUID & backup_uuid_,
bool is_plain_backup_,
bool allow_concurrent_backup_,
BackupConcurrencyCounters & concurrency_counters_)
: log(getLogger("BackupCoordinationLocal"))
, concurrency_check(backup_uuid_, /* is_restore = */ false, /* on_cluster = */ false, allow_concurrent_backup_, concurrency_counters_)
, file_infos(is_plain_backup_)
{
}
BackupCoordinationLocal::~BackupCoordinationLocal() = default;
void BackupCoordinationLocal::setStage(const String &, const String &)
{
}
void BackupCoordinationLocal::setError(const Exception &)
{
}
Strings BackupCoordinationLocal::waitForStage(const String &)
{
return {};
}
Strings BackupCoordinationLocal::waitForStage(const String &, std::chrono::milliseconds)
ZooKeeperRetriesInfo BackupCoordinationLocal::getOnClusterInitializationKeeperRetriesInfo() const
{
return {};
}
@ -135,15 +130,4 @@ bool BackupCoordinationLocal::startWritingFile(size_t data_file_index)
return writing_files.emplace(data_file_index).second;
}
bool BackupCoordinationLocal::hasConcurrentBackups(const std::atomic<size_t> & num_active_backups) const
{
if (num_active_backups > 1)
{
LOG_WARNING(log, "Found concurrent backups: num_active_backups={}", num_active_backups);
return true;
}
return false;
}
}

View File

@ -1,6 +1,7 @@
#pragma once
#include <Backups/IBackupCoordination.h>
#include <Backups/BackupConcurrencyCheck.h>
#include <Backups/BackupCoordinationFileInfos.h>
#include <Backups/BackupCoordinationReplicatedAccess.h>
#include <Backups/BackupCoordinationReplicatedSQLObjects.h>
@ -21,13 +22,21 @@ namespace DB
class BackupCoordinationLocal : public IBackupCoordination
{
public:
explicit BackupCoordinationLocal(bool plain_backup_);
explicit BackupCoordinationLocal(
const UUID & backup_uuid_,
bool is_plain_backup_,
bool allow_concurrent_backup_,
BackupConcurrencyCounters & concurrency_counters_);
~BackupCoordinationLocal() override;
void setStage(const String & new_stage, const String & message) override;
void setError(const Exception & exception) override;
Strings waitForStage(const String & stage_to_wait) override;
Strings waitForStage(const String & stage_to_wait, std::chrono::milliseconds timeout) override;
Strings setStage(const String &, const String &, bool) override { return {}; }
void setBackupQueryWasSentToOtherHosts() override {}
bool trySetError(std::exception_ptr) override { return true; }
void finish() override {}
bool tryFinishAfterError() noexcept override { return true; }
void waitForOtherHostsToFinish() override {}
bool tryWaitForOtherHostsToFinishAfterError() noexcept override { return true; }
void addReplicatedPartNames(const String & table_zk_path, const String & table_name_for_logs, const String & replica_name,
const std::vector<PartNameAndChecksum> & part_names_and_checksums) override;
@ -54,17 +63,18 @@ public:
BackupFileInfos getFileInfosForAllHosts() const override;
bool startWritingFile(size_t data_file_index) override;
bool hasConcurrentBackups(const std::atomic<size_t> & num_active_backups) const override;
ZooKeeperRetriesInfo getOnClusterInitializationKeeperRetriesInfo() const override;
private:
LoggerPtr const log;
BackupConcurrencyCheck concurrency_check;
BackupCoordinationReplicatedTables TSA_GUARDED_BY(replicated_tables_mutex) replicated_tables;
BackupCoordinationReplicatedAccess TSA_GUARDED_BY(replicated_access_mutex) replicated_access;
BackupCoordinationReplicatedSQLObjects TSA_GUARDED_BY(replicated_sql_objects_mutex) replicated_sql_objects;
BackupCoordinationFileInfos TSA_GUARDED_BY(file_infos_mutex) file_infos;
BackupCoordinationReplicatedTables replicated_tables TSA_GUARDED_BY(replicated_tables_mutex);
BackupCoordinationReplicatedAccess replicated_access TSA_GUARDED_BY(replicated_access_mutex);
BackupCoordinationReplicatedSQLObjects replicated_sql_objects TSA_GUARDED_BY(replicated_sql_objects_mutex);
BackupCoordinationFileInfos file_infos TSA_GUARDED_BY(file_infos_mutex);
BackupCoordinationKeeperMapTables keeper_map_tables TSA_GUARDED_BY(keeper_map_tables_mutex);
std::unordered_set<size_t> TSA_GUARDED_BY(writing_files_mutex) writing_files;
std::unordered_set<size_t> writing_files TSA_GUARDED_BY(writing_files_mutex);
mutable std::mutex replicated_tables_mutex;
mutable std::mutex replicated_access_mutex;

View File

@ -1,7 +1,4 @@
#include <Backups/BackupCoordinationRemote.h>
#include <base/hex.h>
#include <boost/algorithm/string/split.hpp>
#include <Backups/BackupCoordinationOnCluster.h>
#include <Access/Common/AccessEntityType.h>
#include <Backups/BackupCoordinationReplicatedAccess.h>
@ -26,8 +23,6 @@ namespace ErrorCodes
extern const int LOGICAL_ERROR;
}
namespace Stage = BackupCoordinationStage;
namespace
{
using PartNameAndChecksum = IBackupCoordination::PartNameAndChecksum;
@ -149,144 +144,152 @@ namespace
};
}
size_t BackupCoordinationRemote::findCurrentHostIndex(const Strings & all_hosts, const String & current_host)
Strings BackupCoordinationOnCluster::excludeInitiator(const Strings & all_hosts)
{
Strings all_hosts_without_initiator = all_hosts;
bool has_initiator = (std::erase(all_hosts_without_initiator, kInitiator) > 0);
chassert(has_initiator);
return all_hosts_without_initiator;
}
size_t BackupCoordinationOnCluster::findCurrentHostIndex(const String & current_host, const Strings & all_hosts)
{
auto it = std::find(all_hosts.begin(), all_hosts.end(), current_host);
if (it == all_hosts.end())
return 0;
return all_hosts.size();
return it - all_hosts.begin();
}
BackupCoordinationRemote::BackupCoordinationRemote(
zkutil::GetZooKeeper get_zookeeper_,
BackupCoordinationOnCluster::BackupCoordinationOnCluster(
const UUID & backup_uuid_,
bool is_plain_backup_,
const String & root_zookeeper_path_,
zkutil::GetZooKeeper get_zookeeper_,
const BackupKeeperSettings & keeper_settings_,
const String & backup_uuid_,
const Strings & all_hosts_,
const String & current_host_,
bool plain_backup_,
bool is_internal_,
const Strings & all_hosts_,
bool allow_concurrent_backup_,
BackupConcurrencyCounters & concurrency_counters_,
ThreadPoolCallbackRunnerUnsafe<void> schedule_,
QueryStatusPtr process_list_element_)
: root_zookeeper_path(root_zookeeper_path_)
, zookeeper_path(root_zookeeper_path_ + "/backup-" + backup_uuid_)
, zookeeper_path(root_zookeeper_path_ + "/backup-" + toString(backup_uuid_))
, keeper_settings(keeper_settings_)
, backup_uuid(backup_uuid_)
, all_hosts(all_hosts_)
, all_hosts_without_initiator(excludeInitiator(all_hosts))
, current_host(current_host_)
, current_host_index(findCurrentHostIndex(all_hosts, current_host))
, plain_backup(plain_backup_)
, is_internal(is_internal_)
, log(getLogger("BackupCoordinationRemote"))
, with_retries(
log,
get_zookeeper_,
keeper_settings,
process_list_element_,
[my_zookeeper_path = zookeeper_path, my_current_host = current_host, my_is_internal = is_internal]
(WithRetries::FaultyKeeper & zk)
{
/// Recreate this ephemeral node to signal that we are alive.
if (my_is_internal)
{
String alive_node_path = my_zookeeper_path + "/stage/alive|" + my_current_host;
/// Delete the ephemeral node from the previous connection so we don't have to wait for keeper to do it automatically.
zk->tryRemove(alive_node_path);
zk->createAncestors(alive_node_path);
zk->create(alive_node_path, "", zkutil::CreateMode::Ephemeral);
}
})
, current_host_index(findCurrentHostIndex(current_host, all_hosts))
, plain_backup(is_plain_backup_)
, log(getLogger("BackupCoordinationOnCluster"))
, with_retries(log, get_zookeeper_, keeper_settings, process_list_element_, [root_zookeeper_path_](Coordination::ZooKeeperWithFaultInjection::Ptr zk) { zk->sync(root_zookeeper_path_); })
, concurrency_check(backup_uuid_, /* is_restore = */ false, /* on_cluster = */ true, allow_concurrent_backup_, concurrency_counters_)
, stage_sync(/* is_restore = */ false, fs::path{zookeeper_path} / "stage", current_host, all_hosts, allow_concurrent_backup_, with_retries, schedule_, process_list_element_, log)
, cleaner(zookeeper_path, with_retries, log)
{
createRootNodes();
stage_sync.emplace(
zookeeper_path,
with_retries,
log);
}
BackupCoordinationRemote::~BackupCoordinationRemote()
BackupCoordinationOnCluster::~BackupCoordinationOnCluster()
{
try
{
if (!is_internal)
removeAllNodes();
}
catch (...)
{
tryLogCurrentException(__PRETTY_FUNCTION__);
}
tryFinishImpl();
}
void BackupCoordinationRemote::createRootNodes()
void BackupCoordinationOnCluster::createRootNodes()
{
auto holder = with_retries.createRetriesControlHolder("createRootNodes");
auto holder = with_retries.createRetriesControlHolder("createRootNodes", WithRetries::kInitialization);
holder.retries_ctl.retryLoop(
[&, &zk = holder.faulty_zookeeper]()
{
with_retries.renewZooKeeper(zk);
zk->createAncestors(zookeeper_path);
Coordination::Requests ops;
Coordination::Responses responses;
ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path, "", zkutil::CreateMode::Persistent));
ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/repl_part_names", "", zkutil::CreateMode::Persistent));
ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/repl_mutations", "", zkutil::CreateMode::Persistent));
ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/repl_data_paths", "", zkutil::CreateMode::Persistent));
ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/repl_access", "", zkutil::CreateMode::Persistent));
ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/repl_sql_objects", "", zkutil::CreateMode::Persistent));
ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/keeper_map_tables", "", zkutil::CreateMode::Persistent));
ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/file_infos", "", zkutil::CreateMode::Persistent));
ops.emplace_back(zkutil::makeCreateRequest(zookeeper_path + "/writing_files", "", zkutil::CreateMode::Persistent));
zk->tryMulti(ops, responses);
zk->createIfNotExists(zookeeper_path, "");
zk->createIfNotExists(zookeeper_path + "/repl_part_names", "");
zk->createIfNotExists(zookeeper_path + "/repl_mutations", "");
zk->createIfNotExists(zookeeper_path + "/repl_data_paths", "");
zk->createIfNotExists(zookeeper_path + "/repl_access", "");
zk->createIfNotExists(zookeeper_path + "/repl_sql_objects", "");
zk->createIfNotExists(zookeeper_path + "/keeper_map_tables", "");
zk->createIfNotExists(zookeeper_path + "/file_infos", "");
zk->createIfNotExists(zookeeper_path + "/writing_files", "");
});
}
void BackupCoordinationRemote::removeAllNodes()
Strings BackupCoordinationOnCluster::setStage(const String & new_stage, const String & message, bool sync)
{
auto holder = with_retries.createRetriesControlHolder("removeAllNodes");
holder.retries_ctl.retryLoop(
[&, &zk = holder.faulty_zookeeper]()
stage_sync.setStage(new_stage, message);
if (!sync)
return {};
return stage_sync.waitForHostsToReachStage(new_stage, all_hosts_without_initiator);
}
void BackupCoordinationOnCluster::setBackupQueryWasSentToOtherHosts()
{
backup_query_was_sent_to_other_hosts = true;
}
bool BackupCoordinationOnCluster::trySetError(std::exception_ptr exception)
{
return stage_sync.trySetError(exception);
}
void BackupCoordinationOnCluster::finish()
{
bool other_hosts_also_finished = false;
stage_sync.finish(other_hosts_also_finished);
if ((current_host == kInitiator) && (other_hosts_also_finished || !backup_query_was_sent_to_other_hosts))
cleaner.cleanup();
}
bool BackupCoordinationOnCluster::tryFinishAfterError() noexcept
{
return tryFinishImpl();
}
bool BackupCoordinationOnCluster::tryFinishImpl() noexcept
{
bool other_hosts_also_finished = false;
if (!stage_sync.tryFinishAfterError(other_hosts_also_finished))
return false;
if ((current_host == kInitiator) && (other_hosts_also_finished || !backup_query_was_sent_to_other_hosts))
{
/// Usually this function is called by the initiator when a backup is complete so we don't need the coordination anymore.
///
/// However there can be a rare situation when this function is called after an error occurs on the initiator of a query
/// while some hosts are still making the backup. Removing all the nodes will remove the parent node of the backup coordination
/// at `zookeeper_path` which might cause such hosts to stop with exception "ZNONODE". Or such hosts might still do some useless part
/// of their backup work before that. Anyway in this case backup won't be finalized (because only an initiator can do that).
with_retries.renewZooKeeper(zk);
zk->removeRecursive(zookeeper_path);
});
if (!cleaner.tryCleanupAfterError())
return false;
}
return true;
}
void BackupCoordinationRemote::setStage(const String & new_stage, const String & message)
void BackupCoordinationOnCluster::waitForOtherHostsToFinish()
{
if (is_internal)
stage_sync->set(current_host, new_stage, message);
else
stage_sync->set(current_host, new_stage, /* message */ "", /* all_hosts */ true);
if ((current_host != kInitiator) || !backup_query_was_sent_to_other_hosts)
return;
stage_sync.waitForOtherHostsToFinish();
}
void BackupCoordinationRemote::setError(const Exception & exception)
bool BackupCoordinationOnCluster::tryWaitForOtherHostsToFinishAfterError() noexcept
{
stage_sync->setError(current_host, exception);
if (current_host != kInitiator)
return false;
if (!backup_query_was_sent_to_other_hosts)
return true;
return stage_sync.tryWaitForOtherHostsToFinishAfterError();
}
Strings BackupCoordinationRemote::waitForStage(const String & stage_to_wait)
ZooKeeperRetriesInfo BackupCoordinationOnCluster::getOnClusterInitializationKeeperRetriesInfo() const
{
return stage_sync->wait(all_hosts, stage_to_wait);
return ZooKeeperRetriesInfo{keeper_settings.max_retries_while_initializing,
static_cast<UInt64>(keeper_settings.retry_initial_backoff_ms.count()),
static_cast<UInt64>(keeper_settings.retry_max_backoff_ms.count())};
}
Strings BackupCoordinationRemote::waitForStage(const String & stage_to_wait, std::chrono::milliseconds timeout)
{
return stage_sync->waitFor(all_hosts, stage_to_wait, timeout);
}
void BackupCoordinationRemote::serializeToMultipleZooKeeperNodes(const String & path, const String & value, const String & logging_name)
void BackupCoordinationOnCluster::serializeToMultipleZooKeeperNodes(const String & path, const String & value, const String & logging_name)
{
{
auto holder = with_retries.createRetriesControlHolder(logging_name + "::create");
@ -301,7 +304,7 @@ void BackupCoordinationRemote::serializeToMultipleZooKeeperNodes(const String &
if (value.empty())
return;
size_t max_part_size = keeper_settings.keeper_value_max_size;
size_t max_part_size = keeper_settings.value_max_size;
if (!max_part_size)
max_part_size = value.size();
@ -324,7 +327,7 @@ void BackupCoordinationRemote::serializeToMultipleZooKeeperNodes(const String &
}
}
String BackupCoordinationRemote::deserializeFromMultipleZooKeeperNodes(const String & path, const String & logging_name) const
String BackupCoordinationOnCluster::deserializeFromMultipleZooKeeperNodes(const String & path, const String & logging_name) const
{
Strings part_names;
@ -357,7 +360,7 @@ String BackupCoordinationRemote::deserializeFromMultipleZooKeeperNodes(const Str
}
void BackupCoordinationRemote::addReplicatedPartNames(
void BackupCoordinationOnCluster::addReplicatedPartNames(
const String & table_zk_path,
const String & table_name_for_logs,
const String & replica_name,
@ -381,14 +384,14 @@ void BackupCoordinationRemote::addReplicatedPartNames(
});
}
Strings BackupCoordinationRemote::getReplicatedPartNames(const String & table_zk_path, const String & replica_name) const
Strings BackupCoordinationOnCluster::getReplicatedPartNames(const String & table_zk_path, const String & replica_name) const
{
std::lock_guard lock{replicated_tables_mutex};
prepareReplicatedTables();
return replicated_tables->getPartNames(table_zk_path, replica_name);
}
void BackupCoordinationRemote::addReplicatedMutations(
void BackupCoordinationOnCluster::addReplicatedMutations(
const String & table_zk_path,
const String & table_name_for_logs,
const String & replica_name,
@ -412,7 +415,7 @@ void BackupCoordinationRemote::addReplicatedMutations(
});
}
std::vector<IBackupCoordination::MutationInfo> BackupCoordinationRemote::getReplicatedMutations(const String & table_zk_path, const String & replica_name) const
std::vector<IBackupCoordination::MutationInfo> BackupCoordinationOnCluster::getReplicatedMutations(const String & table_zk_path, const String & replica_name) const
{
std::lock_guard lock{replicated_tables_mutex};
prepareReplicatedTables();
@ -420,7 +423,7 @@ std::vector<IBackupCoordination::MutationInfo> BackupCoordinationRemote::getRepl
}
void BackupCoordinationRemote::addReplicatedDataPath(
void BackupCoordinationOnCluster::addReplicatedDataPath(
const String & table_zk_path, const String & data_path)
{
{
@ -441,7 +444,7 @@ void BackupCoordinationRemote::addReplicatedDataPath(
});
}
Strings BackupCoordinationRemote::getReplicatedDataPaths(const String & table_zk_path) const
Strings BackupCoordinationOnCluster::getReplicatedDataPaths(const String & table_zk_path) const
{
std::lock_guard lock{replicated_tables_mutex};
prepareReplicatedTables();
@ -449,7 +452,7 @@ Strings BackupCoordinationRemote::getReplicatedDataPaths(const String & table_zk
}
void BackupCoordinationRemote::prepareReplicatedTables() const
void BackupCoordinationOnCluster::prepareReplicatedTables() const
{
if (replicated_tables)
return;
@ -536,7 +539,7 @@ void BackupCoordinationRemote::prepareReplicatedTables() const
replicated_tables->addDataPath(std::move(data_paths));
}
void BackupCoordinationRemote::addReplicatedAccessFilePath(const String & access_zk_path, AccessEntityType access_entity_type, const String & file_path)
void BackupCoordinationOnCluster::addReplicatedAccessFilePath(const String & access_zk_path, AccessEntityType access_entity_type, const String & file_path)
{
{
std::lock_guard lock{replicated_access_mutex};
@ -558,14 +561,14 @@ void BackupCoordinationRemote::addReplicatedAccessFilePath(const String & access
});
}
Strings BackupCoordinationRemote::getReplicatedAccessFilePaths(const String & access_zk_path, AccessEntityType access_entity_type) const
Strings BackupCoordinationOnCluster::getReplicatedAccessFilePaths(const String & access_zk_path, AccessEntityType access_entity_type) const
{
std::lock_guard lock{replicated_access_mutex};
prepareReplicatedAccess();
return replicated_access->getFilePaths(access_zk_path, access_entity_type, current_host);
}
void BackupCoordinationRemote::prepareReplicatedAccess() const
void BackupCoordinationOnCluster::prepareReplicatedAccess() const
{
if (replicated_access)
return;
@ -601,7 +604,7 @@ void BackupCoordinationRemote::prepareReplicatedAccess() const
replicated_access->addFilePath(std::move(file_path));
}
void BackupCoordinationRemote::addReplicatedSQLObjectsDir(const String & loader_zk_path, UserDefinedSQLObjectType object_type, const String & dir_path)
void BackupCoordinationOnCluster::addReplicatedSQLObjectsDir(const String & loader_zk_path, UserDefinedSQLObjectType object_type, const String & dir_path)
{
{
std::lock_guard lock{replicated_sql_objects_mutex};
@ -631,14 +634,14 @@ void BackupCoordinationRemote::addReplicatedSQLObjectsDir(const String & loader_
});
}
Strings BackupCoordinationRemote::getReplicatedSQLObjectsDirs(const String & loader_zk_path, UserDefinedSQLObjectType object_type) const
Strings BackupCoordinationOnCluster::getReplicatedSQLObjectsDirs(const String & loader_zk_path, UserDefinedSQLObjectType object_type) const
{
std::lock_guard lock{replicated_sql_objects_mutex};
prepareReplicatedSQLObjects();
return replicated_sql_objects->getDirectories(loader_zk_path, object_type, current_host);
}
void BackupCoordinationRemote::prepareReplicatedSQLObjects() const
void BackupCoordinationOnCluster::prepareReplicatedSQLObjects() const
{
if (replicated_sql_objects)
return;
@@ -674,7 +677,7 @@ void BackupCoordinationRemote::prepareReplicatedSQLObjects() const
replicated_sql_objects->addDirectory(std::move(directory));
}
void BackupCoordinationRemote::addKeeperMapTable(const String & table_zookeeper_root_path, const String & table_id, const String & data_path_in_backup)
void BackupCoordinationOnCluster::addKeeperMapTable(const String & table_zookeeper_root_path, const String & table_id, const String & data_path_in_backup)
{
{
std::lock_guard lock{keeper_map_tables_mutex};
@@ -695,7 +698,7 @@ void BackupCoordinationRemote::addKeeperMapTable(const String & table_zookeeper_
});
}
void BackupCoordinationRemote::prepareKeeperMapTables() const
void BackupCoordinationOnCluster::prepareKeeperMapTables() const
{
if (keeper_map_tables)
return;
@@ -740,7 +743,7 @@ void BackupCoordinationRemote::prepareKeeperMapTables() const
}
String BackupCoordinationRemote::getKeeperMapDataPath(const String & table_zookeeper_root_path) const
String BackupCoordinationOnCluster::getKeeperMapDataPath(const String & table_zookeeper_root_path) const
{
std::lock_guard lock(keeper_map_tables_mutex);
prepareKeeperMapTables();
@@ -748,7 +751,7 @@ String BackupCoordinationRemote::getKeeperMapDataPath(const String & table_zooke
}
void BackupCoordinationRemote::addFileInfos(BackupFileInfos && file_infos_)
void BackupCoordinationOnCluster::addFileInfos(BackupFileInfos && file_infos_)
{
{
std::lock_guard lock{file_infos_mutex};
@@ -761,21 +764,21 @@ void BackupCoordinationRemote::addFileInfos(BackupFileInfos && file_infos_)
serializeToMultipleZooKeeperNodes(zookeeper_path + "/file_infos/" + current_host, file_infos_str, "addFileInfos");
}
BackupFileInfos BackupCoordinationRemote::getFileInfos() const
BackupFileInfos BackupCoordinationOnCluster::getFileInfos() const
{
std::lock_guard lock{file_infos_mutex};
prepareFileInfos();
return file_infos->getFileInfos(current_host);
}
BackupFileInfos BackupCoordinationRemote::getFileInfosForAllHosts() const
BackupFileInfos BackupCoordinationOnCluster::getFileInfosForAllHosts() const
{
std::lock_guard lock{file_infos_mutex};
prepareFileInfos();
return file_infos->getFileInfosForAllHosts();
}
void BackupCoordinationRemote::prepareFileInfos() const
void BackupCoordinationOnCluster::prepareFileInfos() const
{
if (file_infos)
return;
@@ -801,7 +804,7 @@ void BackupCoordinationRemote::prepareFileInfos() const
}
}
bool BackupCoordinationRemote::startWritingFile(size_t data_file_index)
bool BackupCoordinationOnCluster::startWritingFile(size_t data_file_index)
{
{
/// Check if this host is already writing this file.
@@ -842,66 +845,4 @@ bool BackupCoordinationRemote::startWritingFile(size_t data_file_index)
}
}
bool BackupCoordinationRemote::hasConcurrentBackups(const std::atomic<size_t> &) const
{
/// If it's internal, concurrency will be checked for the base backup
if (is_internal)
return false;
std::string backup_stage_path = zookeeper_path + "/stage";
bool result = false;
auto holder = with_retries.createRetriesControlHolder("getAllArchiveSuffixes");
holder.retries_ctl.retryLoop(
[&, &zk = holder.faulty_zookeeper]()
{
with_retries.renewZooKeeper(zk);
if (!zk->exists(root_zookeeper_path))
zk->createAncestors(root_zookeeper_path);
for (size_t attempt = 0; attempt < MAX_ZOOKEEPER_ATTEMPTS; ++attempt)
{
Coordination::Stat stat;
zk->get(root_zookeeper_path, &stat);
Strings existing_backup_paths = zk->getChildren(root_zookeeper_path);
for (const auto & existing_backup_path : existing_backup_paths)
{
if (startsWith(existing_backup_path, "restore-"))
continue;
String existing_backup_uuid = existing_backup_path;
existing_backup_uuid.erase(0, String("backup-").size());
if (existing_backup_uuid == toString(backup_uuid))
continue;
String status;
if (zk->tryGet(root_zookeeper_path + "/" + existing_backup_path + "/stage", status))
{
/// Check if some other backup is in progress
if (status == Stage::SCHEDULED_TO_START)
{
LOG_WARNING(log, "Found a concurrent backup: {}, current backup: {}", existing_backup_uuid, toString(backup_uuid));
result = true;
return;
}
}
}
zk->createIfNotExists(backup_stage_path, "");
auto code = zk->trySet(backup_stage_path, Stage::SCHEDULED_TO_START, stat.version);
if (code == Coordination::Error::ZOK)
break;
bool is_last_attempt = (attempt == MAX_ZOOKEEPER_ATTEMPTS - 1);
if ((code != Coordination::Error::ZBADVERSION) || is_last_attempt)
throw zkutil::KeeperException::fromPath(code, backup_stage_path);
}
});
return result;
}
}
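
For context on the block removed above: the concurrency check relies on ZooKeeper's versioned compare-and-set. A host reads the stage node together with its Stat, attempts trySet() with the remembered version, and retries on ZBADVERSION, since that error means another writer updated the node in between. A minimal sketch of the same pattern, assuming a zkutil-style client named zk and placeholder path / new_value (both hypothetical here):

    for (size_t attempt = 0; attempt < MAX_ZOOKEEPER_ATTEMPTS; ++attempt)
    {
        Coordination::Stat stat;
        zk->get(path, &stat);                                   /// remember the current version
        auto code = zk->trySet(path, new_value, stat.version);  /// CAS against that version
        if (code == Coordination::Error::ZOK)
            break;                                              /// no concurrent writer interfered
        bool is_last_attempt = (attempt == MAX_ZOOKEEPER_ATTEMPTS - 1);
        if ((code != Coordination::Error::ZBADVERSION) || is_last_attempt)
            throw zkutil::KeeperException::fromPath(code, path);
        /// ZBADVERSION: another host modified the node meanwhile; re-read and retry.
    }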


@@ -1,6 +1,8 @@
#pragma once
#include <Backups/IBackupCoordination.h>
#include <Backups/BackupConcurrencyCheck.h>
#include <Backups/BackupCoordinationCleaner.h>
#include <Backups/BackupCoordinationFileInfos.h>
#include <Backups/BackupCoordinationReplicatedAccess.h>
#include <Backups/BackupCoordinationReplicatedSQLObjects.h>
@@ -13,32 +15,35 @@
namespace DB
{
/// We try to store data to zookeeper several times due to possible version conflicts.
constexpr size_t MAX_ZOOKEEPER_ATTEMPTS = 10;
/// Implementation of the IBackupCoordination interface performing coordination via ZooKeeper. It's necessary for "BACKUP ON CLUSTER".
class BackupCoordinationRemote : public IBackupCoordination
class BackupCoordinationOnCluster : public IBackupCoordination
{
public:
using BackupKeeperSettings = WithRetries::KeeperSettings;
/// Empty string as the current host is used to mark the initiator of a BACKUP ON CLUSTER query.
static const constexpr std::string_view kInitiator;
BackupCoordinationRemote(
zkutil::GetZooKeeper get_zookeeper_,
BackupCoordinationOnCluster(
const UUID & backup_uuid_,
bool is_plain_backup_,
const String & root_zookeeper_path_,
zkutil::GetZooKeeper get_zookeeper_,
const BackupKeeperSettings & keeper_settings_,
const String & backup_uuid_,
const Strings & all_hosts_,
const String & current_host_,
bool plain_backup_,
bool is_internal_,
const Strings & all_hosts_,
bool allow_concurrent_backup_,
BackupConcurrencyCounters & concurrency_counters_,
ThreadPoolCallbackRunnerUnsafe<void> schedule_,
QueryStatusPtr process_list_element_);
~BackupCoordinationRemote() override;
~BackupCoordinationOnCluster() override;
void setStage(const String & new_stage, const String & message) override;
void setError(const Exception & exception) override;
Strings waitForStage(const String & stage_to_wait) override;
Strings waitForStage(const String & stage_to_wait, std::chrono::milliseconds timeout) override;
Strings setStage(const String & new_stage, const String & message, bool sync) override;
void setBackupQueryWasSentToOtherHosts() override;
bool trySetError(std::exception_ptr exception) override;
void finish() override;
bool tryFinishAfterError() noexcept override;
void waitForOtherHostsToFinish() override;
bool tryWaitForOtherHostsToFinishAfterError() noexcept override;
void addReplicatedPartNames(
const String & table_zk_path,
@@ -73,13 +78,14 @@ public:
BackupFileInfos getFileInfosForAllHosts() const override;
bool startWritingFile(size_t data_file_index) override;
bool hasConcurrentBackups(const std::atomic<size_t> & num_active_backups) const override;
ZooKeeperRetriesInfo getOnClusterInitializationKeeperRetriesInfo() const override;
static size_t findCurrentHostIndex(const Strings & all_hosts, const String & current_host);
static Strings excludeInitiator(const Strings & all_hosts);
static size_t findCurrentHostIndex(const String & current_host, const Strings & all_hosts);
private:
void createRootNodes();
void removeAllNodes();
bool tryFinishImpl() noexcept;
void serializeToMultipleZooKeeperNodes(const String & path, const String & value, const String & logging_name);
String deserializeFromMultipleZooKeeperNodes(const String & path, const String & logging_name) const;
@@ -96,26 +102,27 @@ private:
const String root_zookeeper_path;
const String zookeeper_path;
const BackupKeeperSettings keeper_settings;
const String backup_uuid;
const UUID backup_uuid;
const Strings all_hosts;
const Strings all_hosts_without_initiator;
const String current_host;
const size_t current_host_index;
const bool plain_backup;
const bool is_internal;
LoggerPtr const log;
/// The order of these two fields matters, because stage_sync holds a reference to the with_retries object
mutable WithRetries with_retries;
std::optional<BackupCoordinationStageSync> stage_sync;
const WithRetries with_retries;
BackupConcurrencyCheck concurrency_check;
BackupCoordinationStageSync stage_sync;
BackupCoordinationCleaner cleaner;
std::atomic<bool> backup_query_was_sent_to_other_hosts = false;
mutable std::optional<BackupCoordinationReplicatedTables> TSA_GUARDED_BY(replicated_tables_mutex) replicated_tables;
mutable std::optional<BackupCoordinationReplicatedAccess> TSA_GUARDED_BY(replicated_access_mutex) replicated_access;
mutable std::optional<BackupCoordinationReplicatedSQLObjects> TSA_GUARDED_BY(replicated_sql_objects_mutex) replicated_sql_objects;
mutable std::optional<BackupCoordinationFileInfos> TSA_GUARDED_BY(file_infos_mutex) file_infos;
mutable std::optional<BackupCoordinationReplicatedTables> replicated_tables TSA_GUARDED_BY(replicated_tables_mutex);
mutable std::optional<BackupCoordinationReplicatedAccess> replicated_access TSA_GUARDED_BY(replicated_access_mutex);
mutable std::optional<BackupCoordinationReplicatedSQLObjects> replicated_sql_objects TSA_GUARDED_BY(replicated_sql_objects_mutex);
mutable std::optional<BackupCoordinationFileInfos> file_infos TSA_GUARDED_BY(file_infos_mutex);
mutable std::optional<BackupCoordinationKeeperMapTables> keeper_map_tables TSA_GUARDED_BY(keeper_map_tables_mutex);
std::unordered_set<size_t> TSA_GUARDED_BY(writing_files_mutex) writing_files;
std::unordered_set<size_t> writing_files TSA_GUARDED_BY(writing_files_mutex);
mutable std::mutex zookeeper_mutex;
mutable std::mutex replicated_tables_mutex;
mutable std::mutex replicated_access_mutex;
mutable std::mutex replicated_sql_objects_mutex;


@@ -8,10 +8,6 @@ namespace DB
namespace BackupCoordinationStage
{
/// This stage is set after the concurrency check to ensure we don't start other backups/restores
/// when concurrent backups/restores are not allowed
constexpr const char * SCHEDULED_TO_START = "scheduled to start";
/// Finding all tables and databases which we're going to put to the backup and collecting their metadata.
constexpr const char * GATHERING_METADATA = "gathering metadata";
@@ -46,10 +42,6 @@ namespace BackupCoordinationStage
/// Coordination stage meaning that a host finished its work.
constexpr const char * COMPLETED = "completed";
/// Coordination stage meaning that backup/restore has failed due to an error
/// Check '/error' for the error message
constexpr const char * ERROR = "error";
}
}

File diff suppressed because it is too large


@@ -10,33 +10,193 @@ class BackupCoordinationStageSync
{
public:
BackupCoordinationStageSync(
const String & root_zookeeper_path_,
WithRetries & with_retries_,
bool is_restore_, /// true if this is a RESTORE ON CLUSTER command, false if this is a BACKUP ON CLUSTER command
const String & zookeeper_path_, /// path to the "stage" folder in ZooKeeper
const String & current_host_, /// the current host, or an empty string if it's the initiator of the BACKUP/RESTORE ON CLUSTER command
const Strings & all_hosts_, /// all the hosts (including the initiator and the current host) performing the BACKUP/RESTORE ON CLUSTER command
bool allow_concurrency_, /// whether it's allowed to have concurrent backups or restores.
const WithRetries & with_retries_,
ThreadPoolCallbackRunnerUnsafe<void> schedule_,
QueryStatusPtr process_list_element_,
LoggerPtr log_);
~BackupCoordinationStageSync();
/// Sets the stage of the current host and signals other hosts if any were waiting for that.
void set(const String & current_host, const String & new_stage, const String & message, const bool & all_hosts = false);
void setError(const String & current_host, const Exception & exception);
void setStage(const String & stage, const String & stage_result = {});
/// Sets the stage of the current host and waits until all hosts come to the same stage.
/// The function returns the messages all hosts set when they come to the required stage.
Strings wait(const Strings & all_hosts, const String & stage_to_wait);
/// Waits until all the specified hosts come to the specified stage.
/// The function returns the results which specified hosts set when they came to the required stage.
/// If it doesn't happen before the timeout then the function will stop waiting and throw an exception.
Strings waitForHostsToReachStage(const String & stage_to_wait, const Strings & hosts, std::optional<std::chrono::milliseconds> timeout = {}) const;
/// Almost the same as setAndWait() but this one stops waiting and throws an exception after a specific amount of time.
Strings waitFor(const Strings & all_hosts, const String & stage_to_wait, std::chrono::milliseconds timeout);
/// Waits until all the other hosts finish their work.
/// Stops waiting and throws an exception if another host encounters an error or if some host gets cancelled.
void waitForOtherHostsToFinish() const;
/// Lets other hosts know that the current host has finished its work.
void finish(bool & other_hosts_also_finished);
/// Lets other hosts know that the current host has encountered an error.
bool trySetError(std::exception_ptr exception) noexcept;
/// Waits until all the other hosts finish their work (as part of the error-handling process).
/// Doesn't stop waiting if some host encounters an error or gets cancelled.
bool tryWaitForOtherHostsToFinishAfterError() const noexcept;
/// Lets other hosts know that the current host has finished its work (as part of the error-handling process).
bool tryFinishAfterError(bool & other_hosts_also_finished) noexcept;
/// Returns a printable name of a specific host. For empty host the function returns "initiator".
static String getHostDesc(const String & host);
static String getHostsDesc(const Strings & hosts);
private:
/// Initializes the original state. It will then be updated with readCurrentState().
void initializeState();
/// Creates the root node in ZooKeeper.
void createRootNodes();
struct State;
State readCurrentState(WithRetries::RetriesControlHolder & retries_control_holder, const Strings & zk_nodes, const Strings & all_hosts, const String & stage_to_wait) const;
/// Atomically creates both 'start' and 'alive' nodes and also checks that there is no concurrent backup or restore if `allow_concurrency` is false.
void createStartAndAliveNodes();
void createStartAndAliveNodes(Coordination::ZooKeeperWithFaultInjection::Ptr zookeeper);
Strings waitImpl(const Strings & all_hosts, const String & stage_to_wait, std::optional<std::chrono::milliseconds> timeout) const;
/// Deserializes the version stored in the 'start' node.
int parseStartNode(const String & start_node_contents, const String & host) const;
String zookeeper_path;
/// A reference to a field of the parent object - BackupCoordinationRemote or RestoreCoordinationRemote
WithRetries & with_retries;
LoggerPtr log;
/// Recreates the 'alive' node if it doesn't exist. It's an ephemeral node so it's removed automatically after disconnections.
void createAliveNode(Coordination::ZooKeeperWithFaultInjection::Ptr zookeeper);
/// Checks that there is no concurrent backup or restore if `allow_concurrency` is false.
void checkConcurrency(Coordination::ZooKeeperWithFaultInjection::Ptr zookeeper);
/// Watching thread periodically reads the current state from ZooKeeper and recreates the 'alive' node.
void startWatchingThread();
void stopWatchingThread();
void watchingThread();
/// Reads the current state from ZooKeeper without throwing exceptions.
void readCurrentState(Coordination::ZooKeeperWithFaultInjection::Ptr zookeeper);
String getStageNodePath(const String & stage) const;
/// Lets other hosts know that the current host has encountered an error.
bool trySetError(const Exception & exception);
void setError(const Exception & exception);
/// Deserializes an error stored in the error node.
static std::pair<std::exception_ptr, String> parseErrorNode(const String & error_node_contents);
/// Reset the `connected` flag for each host.
void resetConnectedFlag();
/// Checks if the current query is cancelled, and if so then the function sets the `cancelled` flag in the current state.
void checkIfQueryCancelled();
/// Checks if the current state contains an error, and if so then the function passes this error to the query status
/// to cancel the current BACKUP or RESTORE command.
void cancelQueryIfError();
/// Checks if some host was disconnected for too long, and if so then the function generates an error and passes it to the query status
/// to cancel the current BACKUP or RESTORE command.
void cancelQueryIfDisconnectedTooLong();
/// Used by waitForHostsToReachStage() to check if everything is ready to return.
bool checkIfHostsReachStage(const Strings & hosts, const String & stage_to_wait, bool time_is_out, std::optional<std::chrono::milliseconds> timeout, Strings & results) const TSA_REQUIRES(mutex);
/// Creates the 'finish' node.
bool tryFinishImpl();
bool tryFinishImpl(bool & other_hosts_also_finished, bool throw_if_error, WithRetries::Kind retries_kind);
void createFinishNodeAndRemoveAliveNode(Coordination::ZooKeeperWithFaultInjection::Ptr zookeeper);
/// Returns the version used by the initiator.
int getInitiatorVersion() const;
/// Waits until all the other hosts finish their work.
bool tryWaitForOtherHostsToFinishImpl(const String & reason, bool throw_if_error, std::optional<std::chrono::seconds> timeout) const;
bool checkIfOtherHostsFinish(const String & reason, bool throw_if_error, bool time_is_out, std::optional<std::chrono::milliseconds> timeout) const TSA_REQUIRES(mutex);
const bool is_restore;
const String operation_name;
const String current_host;
const String current_host_desc;
const Strings all_hosts;
const bool allow_concurrency;
/// A reference to a field of the parent object which is either BackupCoordinationOnCluster or RestoreCoordinationOnCluster.
const WithRetries & with_retries;
const ThreadPoolCallbackRunnerUnsafe<void> schedule;
const QueryStatusPtr process_list_element;
const LoggerPtr log;
const std::chrono::seconds failure_after_host_disconnected_for_seconds;
const std::chrono::seconds finish_timeout_after_error;
const std::chrono::milliseconds sync_period_ms;
const size_t max_attempts_after_bad_version;
/// Paths in ZooKeeper.
const std::filesystem::path zookeeper_path;
const String root_zookeeper_path;
const String operation_node_path;
const String operation_node_name;
const String stage_node_path;
const String start_node_path;
const String finish_node_path;
const String num_hosts_node_path;
const String alive_node_path;
const String alive_tracker_node_path;
const String error_node_path;
std::shared_ptr<Poco::Event> zk_nodes_changed;
/// We store a list of previously found ZooKeeper nodes to show better logging messages.
Strings zk_nodes;
/// Information about one host read from ZooKeeper.
struct HostInfo
{
String host;
bool started = false;
bool connected = false;
bool finished = false;
int version = 1;
std::map<String /* stage */, String /* result */> stages = {}; /// std::map because we need to compare states
std::exception_ptr exception = nullptr;
std::chrono::time_point<std::chrono::system_clock> last_connection_time = {};
std::chrono::time_point<std::chrono::steady_clock> last_connection_time_monotonic = {};
bool operator ==(const HostInfo & other) const;
bool operator !=(const HostInfo & other) const;
};
/// Information about all the hosts participating in the current BACKUP or RESTORE operation.
struct State
{
std::map<String /* host */, HostInfo> hosts; /// std::map because we need to compare states
std::optional<String> host_with_error;
bool cancelled = false;
bool operator ==(const State & other) const;
bool operator !=(const State & other) const;
};
State state TSA_GUARDED_BY(mutex);
mutable std::condition_variable state_changed;
std::future<void> watching_thread_future;
std::atomic<bool> should_stop_watching_thread = false;
struct FinishResult
{
bool succeeded = false;
std::exception_ptr exception;
bool other_hosts_also_finished = false;
};
FinishResult finish_result TSA_GUARDED_BY(mutex);
mutable std::mutex mutex;
};
}
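
The liveness tracking described by the comments above rests on ZooKeeper ephemeral nodes: each host maintains an 'alive' node that the server deletes automatically when the session drops, and a background watching thread periodically re-reads the shared state and recreates the host's own node. A rough sketch of that loop (how the client is obtained and how the thread sleeps are simplified assumptions, not taken from this diff):

    while (!should_stop_watching_thread)
    {
        auto zookeeper = get_zookeeper();  /// hypothetical; in reality the client comes via with_retries
        /// Ephemeral node: ZooKeeper removes it automatically if the session dies,
        /// which is how the other hosts detect that this host has disconnected.
        if (!zookeeper->exists(alive_node_path))
            zookeeper->create(alive_node_path, "", zkutil::CreateMode::Ephemeral);
        readCurrentState(zookeeper);       /// refresh this host's view of all hosts
        std::this_thread::sleep_for(sync_period_ms);
    }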


@@ -102,7 +102,6 @@ BackupEntriesCollector::BackupEntriesCollector(
, read_settings(read_settings_)
, context(context_)
, process_list_element(context->getProcessListElement())
, on_cluster_first_sync_timeout(context->getConfigRef().getUInt64("backups.on_cluster_first_sync_timeout", 180000))
, collect_metadata_timeout(context->getConfigRef().getUInt64(
"backups.collect_metadata_timeout", context->getConfigRef().getUInt64("backups.consistent_metadata_snapshot_timeout", 600000)))
, attempts_to_collect_metadata_before_sleep(context->getConfigRef().getUInt("backups.attempts_to_collect_metadata_before_sleep", 2))
@@ -176,21 +175,7 @@ Strings BackupEntriesCollector::setStage(const String & new_stage, const String
checkIsQueryCancelled();
current_stage = new_stage;
backup_coordination->setStage(new_stage, message);
if (new_stage == Stage::formatGatheringMetadata(0))
{
return backup_coordination->waitForStage(new_stage, on_cluster_first_sync_timeout);
}
if (new_stage.starts_with(Stage::GATHERING_METADATA))
{
auto current_time = std::chrono::steady_clock::now();
auto end_of_timeout = std::max(current_time, collect_metadata_end_time);
return backup_coordination->waitForStage(
new_stage, std::chrono::duration_cast<std::chrono::milliseconds>(end_of_timeout - current_time));
}
return backup_coordination->waitForStage(new_stage);
return backup_coordination->setStage(new_stage, message, /* sync = */ true);
}
void BackupEntriesCollector::checkIsQueryCancelled() const


@@ -111,10 +111,6 @@ private:
ContextPtr context;
QueryStatusPtr process_list_element;
/// The time a BACKUP ON CLUSTER or RESTORE ON CLUSTER command will wait until all the nodes receive the BACKUP (or RESTORE) query and start working.
/// This setting is similar to `distributed_ddl_task_timeout`.
const std::chrono::milliseconds on_cluster_first_sync_timeout;
/// The time a BACKUP command will try to collect the metadata of tables & databases.
const std::chrono::milliseconds collect_metadata_timeout;


@@ -5,6 +5,7 @@
namespace DB
{
class IDisk;
using DiskPtr = std::shared_ptr<IDisk>;
class SeekableReadBuffer;
@@ -63,9 +64,13 @@ public:
virtual void copyFile(const String & destination, const String & source, size_t size) = 0;
/// Removes a file written to the backup, if it still exists.
virtual void removeFile(const String & file_name) = 0;
virtual void removeFiles(const Strings & file_names) = 0;
/// Removes the backup folder if it's empty or contains empty subfolders.
virtual void removeEmptyDirectories() = 0;
virtual const ReadSettings & getReadSettings() const = 0;
virtual const WriteSettings & getWriteSettings() const = 0;
virtual size_t getWriteBufferSize() const = 0;


@@ -81,6 +81,7 @@ public:
void removeFile(const String & file_name) override;
void removeFiles(const Strings & file_names) override;
void removeEmptyDirectories() override {}
private:
std::unique_ptr<ReadBuffer> readFile(const String & file_name, size_t expected_file_size) override;


@@ -91,16 +91,36 @@ std::unique_ptr<WriteBuffer> BackupWriterDisk::writeFile(const String & file_nam
void BackupWriterDisk::removeFile(const String & file_name)
{
disk->removeFileIfExists(root_path / file_name);
if (disk->existsDirectory(root_path) && disk->isDirectoryEmpty(root_path))
disk->removeDirectory(root_path);
}
void BackupWriterDisk::removeFiles(const Strings & file_names)
{
for (const auto & file_name : file_names)
disk->removeFileIfExists(root_path / file_name);
if (disk->existsDirectory(root_path) && disk->isDirectoryEmpty(root_path))
disk->removeDirectory(root_path);
}
void BackupWriterDisk::removeEmptyDirectories()
{
removeEmptyDirectoriesImpl(root_path);
}
void BackupWriterDisk::removeEmptyDirectoriesImpl(const fs::path & current_dir)
{
if (!disk->existsDirectory(current_dir))
return;
if (disk->isDirectoryEmpty(current_dir))
{
disk->removeDirectory(current_dir);
return;
}
/// Backups are not too deep, so recursion is good enough here.
for (auto it = disk->iterateDirectory(current_dir); it->isValid(); it->next())
removeEmptyDirectoriesImpl(current_dir / it->name());
if (disk->isDirectoryEmpty(current_dir))
disk->removeDirectory(current_dir);
}
void BackupWriterDisk::copyFileFromDisk(const String & path_in_backup, DiskPtr src_disk, const String & src_path,


@@ -50,9 +50,11 @@ public:
void removeFile(const String & file_name) override;
void removeFiles(const Strings & file_names) override;
void removeEmptyDirectories() override;
private:
std::unique_ptr<ReadBuffer> readFile(const String & file_name, size_t expected_file_size) override;
void removeEmptyDirectoriesImpl(const std::filesystem::path & current_dir);
const DiskPtr disk;
const std::filesystem::path root_path;


@@ -106,16 +106,36 @@ std::unique_ptr<WriteBuffer> BackupWriterFile::writeFile(const String & file_nam
void BackupWriterFile::removeFile(const String & file_name)
{
(void)fs::remove(root_path / file_name);
if (fs::is_directory(root_path) && fs::is_empty(root_path))
(void)fs::remove(root_path);
}
void BackupWriterFile::removeFiles(const Strings & file_names)
{
for (const auto & file_name : file_names)
(void)fs::remove(root_path / file_name);
if (fs::is_directory(root_path) && fs::is_empty(root_path))
(void)fs::remove(root_path);
}
void BackupWriterFile::removeEmptyDirectories()
{
removeEmptyDirectoriesImpl(root_path);
}
void BackupWriterFile::removeEmptyDirectoriesImpl(const fs::path & current_dir)
{
if (!fs::is_directory(current_dir))
return;
if (fs::is_empty(current_dir))
{
(void)fs::remove(current_dir);
return;
}
/// Backups are not too deep, so recursion is good enough here.
for (const auto & it : std::filesystem::directory_iterator{current_dir})
removeEmptyDirectoriesImpl(it.path());
if (fs::is_empty(current_dir))
(void)fs::remove(current_dir);
}
void BackupWriterFile::copyFileFromDisk(const String & path_in_backup, DiskPtr src_disk, const String & src_path,


@@ -42,9 +42,11 @@ public:
void removeFile(const String & file_name) override;
void removeFiles(const Strings & file_names) override;
void removeEmptyDirectories() override;
private:
std::unique_ptr<ReadBuffer> readFile(const String & file_name, size_t expected_file_size) override;
void removeEmptyDirectoriesImpl(const std::filesystem::path & current_dir);
const std::filesystem::path root_path;
const DataSourceDescription data_source_description;


@@ -74,6 +74,7 @@ public:
void removeFile(const String & file_name) override;
void removeFiles(const Strings & file_names) override;
void removeEmptyDirectories() override {}
private:
std::unique_ptr<ReadBuffer> readFile(const String & file_name, size_t expected_file_size) override;


@@ -147,11 +147,11 @@ BackupImpl::BackupImpl(
BackupImpl::~BackupImpl()
{
if ((open_mode == OpenMode::WRITE) && !is_internal_backup && !writing_finalized && !std::uncaught_exceptions() && !std::current_exception())
if ((open_mode == OpenMode::WRITE) && !writing_finalized && !corrupted)
{
/// It is suspicious to destroy BackupImpl without finalization while writing a backup when there is no exception.
LOG_ERROR(log, "BackupImpl is not finalized when destructor is called. Stack trace: {}", StackTrace().toString());
chassert(false && "BackupImpl is not finalized when destructor is called.");
LOG_ERROR(log, "BackupImpl is not finalized or marked as corrupted when destructor is called. Stack trace: {}", StackTrace().toString());
chassert(false, "BackupImpl is not finalized or marked as corrupted when destructor is called.");
}
try
@@ -196,9 +196,6 @@ void BackupImpl::open()
if (open_mode == OpenMode::READ)
readBackupMetadata();
if ((open_mode == OpenMode::WRITE) && base_backup_info)
base_backup_uuid = getBaseBackupUnlocked()->getUUID();
}
void BackupImpl::close()
@@ -280,6 +277,8 @@ std::shared_ptr<const IBackup> BackupImpl::getBaseBackupUnlocked() const
toString(base_backup->getUUID()),
(base_backup_uuid ? toString(*base_backup_uuid) : ""));
}
base_backup_uuid = base_backup->getUUID();
}
return base_backup;
}
@@ -369,7 +368,7 @@ void BackupImpl::writeBackupMetadata()
if (base_backup_in_use)
{
*out << "<base_backup>" << xml << base_backup_info->toString() << "</base_backup>";
*out << "<base_backup_uuid>" << toString(*base_backup_uuid) << "</base_backup_uuid>";
*out << "<base_backup_uuid>" << getBaseBackupUnlocked()->getUUID() << "</base_backup_uuid>";
}
}
@@ -594,9 +593,6 @@ bool BackupImpl::checkLockFile(bool throw_if_failed) const
void BackupImpl::removeLockFile()
{
if (is_internal_backup)
return; /// Internal backup must not remove the lock file (it's still used by the initiator).
if (checkLockFile(false))
writer->removeFile(lock_file_name);
}
@@ -989,8 +985,11 @@ void BackupImpl::finalizeWriting()
if (open_mode != OpenMode::WRITE)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Backup is not opened for writing");
if (corrupted)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Backup can't be finalized after an error happened");
if (writing_finalized)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Backup is already finalized");
return;
if (!is_internal_backup)
{
@@ -1015,20 +1014,58 @@ void BackupImpl::setCompressedSize()
}
void BackupImpl::tryRemoveAllFiles()
bool BackupImpl::setIsCorrupted() noexcept
{
if (open_mode != OpenMode::WRITE)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Backup is not opened for writing");
if (is_internal_backup)
return;
try
{
LOG_INFO(log, "Removing all files of backup {}", backup_name_for_logging);
std::lock_guard lock{mutex};
if (open_mode != OpenMode::WRITE)
{
LOG_ERROR(log, "Backup is not opened for writing. Stack trace: {}", StackTrace().toString());
chassert(false, "Backup is not opened for writing when setIsCorrupted() is called");
return false;
}
if (writing_finalized)
{
LOG_WARNING(log, "An error happened after the backup was completed successfully, the backup must be correct!");
return false;
}
if (corrupted)
return true;
LOG_WARNING(log, "An error happened, the backup won't be completed");
closeArchive(/* finalize= */ false);
corrupted = true;
return true;
}
catch (...)
{
DB::tryLogCurrentException(log, "Caught exception while setting that the backup was corrupted");
return false;
}
}
bool BackupImpl::tryRemoveAllFiles() noexcept
{
try
{
std::lock_guard lock{mutex};
if (!corrupted)
{
LOG_ERROR(log, "Backup is not set as corrupted. Stack trace: {}", StackTrace().toString());
chassert(false, "Backup is not set as corrupted when tryRemoveAllFiles() is called");
return false;
}
LOG_INFO(log, "Removing all files of backup {}", backup_name_for_logging);
Strings files_to_remove;
if (use_archive)
{
files_to_remove.push_back(archive_params.archive_name);
@@ -1041,14 +1078,17 @@ void BackupImpl::tryRemoveAllFiles()
}
if (!checkLockFile(false))
return;
return false;
writer->removeFiles(files_to_remove);
removeLockFile();
writer->removeEmptyDirectories();
return true;
}
catch (...)
{
DB::tryLogCurrentException(__PRETTY_FUNCTION__);
DB::tryLogCurrentException(log, "Caught exception while removing files of a corrupted backup");
return false;
}
}
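
Taken together, setIsCorrupted() and tryRemoveAllFiles() above define the cleanup path for a failed backup. A hedged caller-side sketch (the actual caller is not part of this excerpt):

    /// Hypothetical error path: mark the backup corrupted first; only if that
    /// succeeded attempt the best-effort removal. Both methods are noexcept,
    /// so this is safe to run from exception-handling code.
    if (backup->setIsCorrupted())
        backup->tryRemoveAllFiles();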


@@ -86,7 +86,8 @@ public:
void writeFile(const BackupFileInfo & info, BackupEntryPtr entry) override;
bool supportsWritingInMultipleThreads() const override { return !use_archive; }
void finalizeWriting() override;
void tryRemoveAllFiles() override;
bool setIsCorrupted() noexcept override;
bool tryRemoveAllFiles() noexcept override;
private:
void open();
@@ -146,13 +147,14 @@ private:
int version;
mutable std::optional<BackupInfo> base_backup_info;
mutable std::shared_ptr<const IBackup> base_backup;
std::optional<UUID> base_backup_uuid;
mutable std::optional<UUID> base_backup_uuid;
std::shared_ptr<IArchiveReader> archive_reader;
std::shared_ptr<IArchiveWriter> archive_writer;
String lock_file_name;
std::atomic<bool> lock_file_before_first_file_checked = false;
bool writing_finalized = false;
bool corrupted = false;
bool deduplicate_files = true;
bool use_same_s3_credentials_for_base_backup = false;
bool use_same_password_for_base_backup = false;


@@ -0,0 +1,58 @@
#include <Backups/BackupKeeperSettings.h>
#include <Core/Settings.h>
#include <Interpreters/Context.h>
#include <Poco/Util/AbstractConfiguration.h>
namespace DB
{
namespace Setting
{
extern const SettingsUInt64 backup_restore_keeper_max_retries;
extern const SettingsUInt64 backup_restore_keeper_retry_initial_backoff_ms;
extern const SettingsUInt64 backup_restore_keeper_retry_max_backoff_ms;
extern const SettingsUInt64 backup_restore_failure_after_host_disconnected_for_seconds;
extern const SettingsUInt64 backup_restore_keeper_max_retries_while_initializing;
extern const SettingsUInt64 backup_restore_keeper_max_retries_while_handling_error;
extern const SettingsUInt64 backup_restore_finish_timeout_after_error_sec;
extern const SettingsUInt64 backup_restore_keeper_value_max_size;
extern const SettingsUInt64 backup_restore_batch_size_for_keeper_multi;
extern const SettingsUInt64 backup_restore_batch_size_for_keeper_multiread;
extern const SettingsFloat backup_restore_keeper_fault_injection_probability;
extern const SettingsUInt64 backup_restore_keeper_fault_injection_seed;
}
BackupKeeperSettings BackupKeeperSettings::fromContext(const ContextPtr & context)
{
BackupKeeperSettings keeper_settings;
const auto & settings = context->getSettingsRef();
const auto & config = context->getConfigRef();
keeper_settings.max_retries = settings[Setting::backup_restore_keeper_max_retries];
keeper_settings.retry_initial_backoff_ms = std::chrono::milliseconds{settings[Setting::backup_restore_keeper_retry_initial_backoff_ms]};
keeper_settings.retry_max_backoff_ms = std::chrono::milliseconds{settings[Setting::backup_restore_keeper_retry_max_backoff_ms]};
keeper_settings.failure_after_host_disconnected_for_seconds = std::chrono::seconds{settings[Setting::backup_restore_failure_after_host_disconnected_for_seconds]};
keeper_settings.max_retries_while_initializing = settings[Setting::backup_restore_keeper_max_retries_while_initializing];
keeper_settings.max_retries_while_handling_error = settings[Setting::backup_restore_keeper_max_retries_while_handling_error];
keeper_settings.finish_timeout_after_error = std::chrono::seconds(settings[Setting::backup_restore_finish_timeout_after_error_sec]);
if (config.has("backups.sync_period_ms"))
keeper_settings.sync_period_ms = std::chrono::milliseconds{config.getUInt64("backups.sync_period_ms")};
if (config.has("backups.max_attempts_after_bad_version"))
keeper_settings.max_attempts_after_bad_version = config.getUInt64("backups.max_attempts_after_bad_version");
keeper_settings.value_max_size = settings[Setting::backup_restore_keeper_value_max_size];
keeper_settings.batch_size_for_multi = settings[Setting::backup_restore_batch_size_for_keeper_multi];
keeper_settings.batch_size_for_multiread = settings[Setting::backup_restore_batch_size_for_keeper_multiread];
keeper_settings.fault_injection_probability = settings[Setting::backup_restore_keeper_fault_injection_probability];
keeper_settings.fault_injection_seed = settings[Setting::backup_restore_keeper_fault_injection_seed];
return keeper_settings;
}
}
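
As a usage sketch (the call site below is an assumption, not shown in this excerpt), the settings are read once from the query context when coordination starts and then drive every retry loop:

    /// Hypothetical call site: read the settings once per BACKUP/RESTORE query.
    BackupKeeperSettings keeper_settings = BackupKeeperSettings::fromContext(context);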


@@ -0,0 +1,64 @@
#pragma once
#include <Interpreters/Context_fwd.h>
namespace DB
{
/// Settings for [Zoo]Keeper-related works during BACKUP or RESTORE.
struct BackupKeeperSettings
{
/// Maximum number of retries in the middle of a BACKUP ON CLUSTER or RESTORE ON CLUSTER operation.
/// Should be big enough so the whole operation won't be cancelled in the middle of it because of a temporary ZooKeeper failure.
UInt64 max_retries{1000};
/// Initial backoff timeout for ZooKeeper operations during backup or restore.
std::chrono::milliseconds retry_initial_backoff_ms{100};
/// Max backoff timeout for ZooKeeper operations during backup or restore.
std::chrono::milliseconds retry_max_backoff_ms{5000};
/// If a host during BACKUP ON CLUSTER or RESTORE ON CLUSTER doesn't recreate its 'alive' node in ZooKeeper
/// for this amount of time then the whole backup or restore is considered as failed.
/// Should be bigger than any reasonable time for a host to reconnect to ZooKeeper after a failure.
/// Set to zero to disable (if it's zero and some host crashed then BACKUP ON CLUSTER or RESTORE ON CLUSTER will be waiting
/// for the crashed host forever until the operation is explicitly cancelled with KILL QUERY).
std::chrono::seconds failure_after_host_disconnected_for_seconds{3600};
/// Maximum number of retries during the initialization of a BACKUP ON CLUSTER or RESTORE ON CLUSTER operation.
/// Shouldn't be too big because if the operation is going to fail then it's better if it fails faster.
UInt64 max_retries_while_initializing{20};
/// Maximum number of retries while handling an error of a BACKUP ON CLUSTER or RESTORE ON CLUSTER operation.
/// Shouldn't be too big because those retries are just for cleanup after the operation has failed already.
UInt64 max_retries_while_handling_error{20};
/// How long the initiator should wait for other hosts to handle the 'error' node and finish their work.
std::chrono::seconds finish_timeout_after_error{180};
/// How often the "stage" folder in ZooKeeper must be scanned in a background thread to track changes done by other hosts.
std::chrono::milliseconds sync_period_ms{5000};
/// Number of attempts after getting error ZBADVERSION from ZooKeeper.
size_t max_attempts_after_bad_version{10};
/// Maximum size of data of a ZooKeeper's node during backup.
UInt64 value_max_size{1048576};
/// Maximum size of a batch for a multi request.
UInt64 batch_size_for_multi{1000};
/// Maximum size of a batch for a multiread request.
UInt64 batch_size_for_multiread{10000};
/// Approximate probability of failure for a keeper request during backup or restore. Valid values are in the interval [0.0f, 1.0f].
Float64 fault_injection_probability{0};
/// Seed for `fault_injection_probability`: 0 - random seed, otherwise the setting value.
UInt64 fault_injection_seed{0};
static BackupKeeperSettings fromContext(const ContextPtr & context);
};
}
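
Two of the values above (sync_period_ms and max_attempts_after_bad_version) are read from the server configuration rather than from query-level settings, per the fromContext() code earlier. An illustrative server config fragment using the defaults shown above (the snippet is an example, not taken from this diff):

    <clickhouse>
        <backups>
            <sync_period_ms>5000</sync_period_ms>
            <max_attempts_after_bad_version>10</max_attempts_after_bad_version>
        </backups>
    </clickhouse>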

Some files were not shown because too many files have changed in this diff.