Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-24 00:22:29 +00:00)
Merge remote-tracking branch 'origin/master' into rm-select
Commit d196973ab3
.github/workflows/pull_request.yml (vendored)
@@ -532,6 +532,11 @@ jobs:
          run_command: |
            cd "$REPO_COPY/tests/ci"

            mkdir -p "${REPORTS_PATH}/integration"
            mkdir -p "${REPORTS_PATH}/stateless"
            cp -r ${REPORTS_PATH}/changed_images* ${REPORTS_PATH}/integration
            cp -r ${REPORTS_PATH}/changed_images* ${REPORTS_PATH}/stateless

            TEMP_PATH="${TEMP_PATH}/integration" \
            REPORTS_PATH="${REPORTS_PATH}/integration" \
            python3 integration_test_check.py "Integration $CHECK_NAME" \
CHANGELOG.md
@@ -1,4 +1,5 @@
### Table of Contents
**[ClickHouse release v23.11, 2023-12-05](#2311)**<br/>
**[ClickHouse release v23.10, 2023-11-02](#2310)**<br/>
**[ClickHouse release v23.9, 2023-09-28](#239)**<br/>
**[ClickHouse release v23.8 LTS, 2023-08-31](#238)**<br/>
@@ -13,7 +14,222 @@
# 2023 Changelog
### ClickHouse release 23.10, 2023-11-02
### <a id="2311"></a> ClickHouse release 23.11, 2023-12-05
#### Backward Incompatible Change
* The default ClickHouse server configuration file has enabled `access_management` (user manipulation by SQL queries) and `named_collection_control` (manipulation of named collection by SQL queries) for the `default` user by default. This closes [#56482](https://github.com/ClickHouse/ClickHouse/issues/56482). [#56619](https://github.com/ClickHouse/ClickHouse/pull/56619) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Multiple improvements for `RESPECT NULLS`/`IGNORE NULLS` for window functions. If you use them as aggregate functions and store the states of aggregate functions with these modifiers, they might become incompatible. [#57189](https://github.com/ClickHouse/ClickHouse/pull/57189) ([Raúl Marín](https://github.com/Algunenano)).
* Remove optimization `optimize_move_functions_out_of_any`. [#57190](https://github.com/ClickHouse/ClickHouse/pull/57190) ([Raúl Marín](https://github.com/Algunenano)).
* Formatters `%l`/`%k`/`%c` in function `parseDateTime` are now able to parse hours/months without leading zeros, e.g. `select parseDateTime('2023-11-26 8:14', '%F %k:%i')` now works. Set `parsedatetime_parse_without_leading_zeros = 0` to restore the previous behavior which required two digits. Function `formatDateTime` is now also able to print hours/months without leading zeros. This is controlled by setting `formatdatetime_format_without_leading_zeros` but off by default to not break existing use cases. [#55872](https://github.com/ClickHouse/ClickHouse/pull/55872) ([Azat Khuzhin](https://github.com/azat)).
* You can no longer use the aggregate function `avgWeighted` with arguments of type `Decimal`. Workaround: convert arguments to `Float64`. This closes [#43928](https://github.com/ClickHouse/ClickHouse/issues/43928). This closes [#31768](https://github.com/ClickHouse/ClickHouse/issues/31768). This closes [#56435](https://github.com/ClickHouse/ClickHouse/issues/56435). If you have used this function inside materialized views or projections with `Decimal` arguments, contact support@clickhouse.com. Fixed error in aggregate function `sumMap` and made it slower around 1.5..2 times. It does not matter because the function is garbage anyway. This closes [#54955](https://github.com/ClickHouse/ClickHouse/issues/54955). This closes [#53134](https://github.com/ClickHouse/ClickHouse/issues/53134). This closes [#55148](https://github.com/ClickHouse/ClickHouse/issues/55148). Fix a bug in function `groupArraySample` - it used the same random seed in case more than one aggregate state is generated in a query. [#56350](https://github.com/ClickHouse/ClickHouse/pull/56350) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
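A minimal sketch of the workaround described in the entry above, assuming two hypothetical `Decimal` columns `price` and `qty`: cast the arguments to `Float64` before calling `avgWeighted`.

```sql
-- avgWeighted no longer accepts Decimal arguments, so cast them to Float64 first.
-- Column names and data are illustrative only.
SELECT avgWeighted(toFloat64(price), toFloat64(qty))
FROM values('price Decimal(10, 2), qty Decimal(10, 2)', (10.50, 2), (20.00, 1));
```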
#### New Feature
* Added server setting `async_load_databases` for asynchronous loading of databases and tables. Speeds up the server start time. Applies to databases with `Ordinary`, `Atomic` and `Replicated` engines. Their tables load metadata asynchronously. Query to a table increases the priority of the load job and waits for it to be done. Added a new table `system.asynchronous_loader` for introspection. [#49351](https://github.com/ClickHouse/ClickHouse/pull/49351) ([Sergei Trifonov](https://github.com/serxa)).
* Add system table `blob_storage_log`. It allows auditing all the data written to S3 and other object storages. [#52918](https://github.com/ClickHouse/ClickHouse/pull/52918) ([vdimir](https://github.com/vdimir)).
* Use statistics to order prewhere conditions better. [#53240](https://github.com/ClickHouse/ClickHouse/pull/53240) ([Han Fei](https://github.com/hanfei1991)).
* Added support for compression in the Keeper's protocol. It can be enabled on the ClickHouse side by using this flag `use_compression` inside `zookeeper` section. Keep in mind that only ClickHouse Keeper supports compression, while Apache ZooKeeper does not. Resolves [#49507](https://github.com/ClickHouse/ClickHouse/issues/49507). [#54957](https://github.com/ClickHouse/ClickHouse/pull/54957) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Introduce the feature `storage_metadata_write_full_object_key`. If it is set as `true` then metadata files are written with the new format. With that format ClickHouse stores full remote object key in the metadata file which allows better flexibility and optimization. [#55566](https://github.com/ClickHouse/ClickHouse/pull/55566) ([Sema Checherinda](https://github.com/CheSema)).
* Add new settings and syntax to protect named collections' fields from being overridden. This is meant to prevent a malicious user from obtaining unauthorized access to secrets. [#55782](https://github.com/ClickHouse/ClickHouse/pull/55782) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Add `hostname` column to all system log tables - it is useful if you make the system tables replicated, shared, or distributed. [#55894](https://github.com/ClickHouse/ClickHouse/pull/55894) ([Bharat Nallan](https://github.com/bharatnc)).
* Add `CHECK ALL TABLES` query. [#56022](https://github.com/ClickHouse/ClickHouse/pull/56022) ([vdimir](https://github.com/vdimir)).
* Added function `fromDaysSinceYearZero` which is similar to MySQL's `FROM_DAYS`. E.g. `SELECT fromDaysSinceYearZero(739136)` returns `2023-09-08`. [#56088](https://github.com/ClickHouse/ClickHouse/pull/56088) ([Joanna Hulboj](https://github.com/jh0x)).
* Implemented a function that detects the period of a series using the FFT method. [#56171](https://github.com/ClickHouse/ClickHouse/pull/56171) ([Bhavna Jindal](https://github.com/bhavnajindal)).
* Add an external Python tool to view backups and to extract information from them without using ClickHouse. [#56268](https://github.com/ClickHouse/ClickHouse/pull/56268) ([Vitaly Baranov](https://github.com/vitlibar)).
* Implement a new setting called `preferred_projection_name`. If it is set to a non-empty string, the specified projection would be used if possible instead of choosing from all the candidates. [#56309](https://github.com/ClickHouse/ClickHouse/pull/56309) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Add 4-letter command for yielding/resigning leadership (https://github.com/ClickHouse/ClickHouse/issues/56352). [#56354](https://github.com/ClickHouse/ClickHouse/pull/56354) ([Pradeep Chhetri](https://github.com/chhetripradeep)). [#56620](https://github.com/ClickHouse/ClickHouse/pull/56620) ([Pradeep Chhetri](https://github.com/chhetripradeep)).
* Added a new SQL function, `arrayRandomSample(arr, k)`, which returns a sample of k elements from the input array. Similar functionality could previously be achieved only with less convenient syntax, e.g. `SELECT arrayReduce('groupArraySample(3)', range(10))`. See the sketch below. [#56416](https://github.com/ClickHouse/ClickHouse/pull/56416) ([Robert Schulze](https://github.com/rschu1ze)).
* Added support for `Float16` type data to use in `.npy` files. Closes [#56344](https://github.com/ClickHouse/ClickHouse/issues/56344). [#56424](https://github.com/ClickHouse/ClickHouse/pull/56424) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Added a system view `information_schema.statistics` for better compatibility with Tableau Online. [#56425](https://github.com/ClickHouse/ClickHouse/pull/56425) ([Serge Klochkov](https://github.com/slvrtrn)).
* Add `system.symbols` table useful for introspection of the binary. [#56548](https://github.com/ClickHouse/ClickHouse/pull/56548) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Configurable dashboards. Queries for charts are now loaded using a query, which by default uses a new `system.dashboards` table. [#56771](https://github.com/ClickHouse/ClickHouse/pull/56771) ([Sergei Trifonov](https://github.com/serxa)).
* Introduce `fileCluster` table function - it is useful if you mount a shared filesystem (NFS and similar) into the `user_files` directory. [#56868](https://github.com/ClickHouse/ClickHouse/pull/56868) ([Andrey Zvonov](https://github.com/zvonand)).
* Add `_size` virtual column with file size in bytes to `s3/file/hdfs/url/azureBlobStorage` engines. [#57126](https://github.com/ClickHouse/ClickHouse/pull/57126) ([Kruglov Pavel](https://github.com/Avogar)).
* Expose the number of errors for each error code occurred on a server since last restart from the Prometheus endpoint. [#57209](https://github.com/ClickHouse/ClickHouse/pull/57209) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* ClickHouse keeper reports its running availability zone at `/keeper/availability-zone` path. This can be configured via `<availability_zone><value>us-west-1a</value></availability_zone>`. [#56715](https://github.com/ClickHouse/ClickHouse/pull/56715) ([Jianfei Hu](https://github.com/incfly)).
* Make ALTER materialized_view MODIFY QUERY non experimental and deprecate `allow_experimental_alter_materialized_view_structure` setting. Fixes [#15206](https://github.com/ClickHouse/ClickHouse/issues/15206). [#57311](https://github.com/ClickHouse/ClickHouse/pull/57311) ([alesapin](https://github.com/alesapin)).
* Setting `join_algorithm` respects specified order [#51745](https://github.com/ClickHouse/ClickHouse/pull/51745) ([vdimir](https://github.com/vdimir)).
* Add support for the [well-known Protobuf types](https://protobuf.dev/reference/protobuf/google.protobuf/) in the Protobuf format. [#56741](https://github.com/ClickHouse/ClickHouse/pull/56741) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
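A minimal sketch of the new `arrayRandomSample` function referenced above; the returned elements and their order vary per call because the sample is random.

```sql
-- Pick 3 random elements from [0 .. 9]; compare with the older arrayReduce-based workaround.
SELECT arrayRandomSample(range(10), 3) AS sample;
```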
#### Performance Improvement
* It is now possible to refer to ALIAS column in index (non-primary-key) definitions (issue [#55650](https://github.com/ClickHouse/ClickHouse/issues/55650)). Example: `CREATE TABLE tab(col UInt32, col_alias ALIAS col + 1, INDEX idx (col_alias) TYPE minmax) ENGINE = MergeTree ORDER BY col;`. [#57220](https://github.com/ClickHouse/ClickHouse/pull/57220) ([flynn](https://github.com/ucasfl)).
* Adaptive timeouts for interacting with S3. The first attempt is made with low send and receive timeouts. [#56314](https://github.com/ClickHouse/ClickHouse/pull/56314) ([Sema Checherinda](https://github.com/CheSema)).
* Increase the default value of `max_concurrent_queries` from 100 to 1000. This makes sense when there is a large number of connecting clients, which are slowly sending or receiving data, so the server is not limited by CPU, or when the number of CPU cores is larger than 100. Also, enable the concurrency control by default, and set the desired number of query processing threads in total as twice the number of CPU cores. It improves performance in scenarios with a very large number of concurrent queries. [#46927](https://github.com/ClickHouse/ClickHouse/pull/46927) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Support parallel evaluation of window functions. Fixes [#34688](https://github.com/ClickHouse/ClickHouse/issues/34688). [#39631](https://github.com/ClickHouse/ClickHouse/pull/39631) ([Dmitry Novik](https://github.com/novikd)).
* `Numbers` table engine (of the `system.numbers` table) now analyzes the condition to generate the needed subset of data, like a table index would (see the sketch below). [#50909](https://github.com/ClickHouse/ClickHouse/pull/50909) ([JackyWoo](https://github.com/JackyWoo)).
* Improved the performance of filtering by `IN (...)` condition for `Merge` table engine. [#54905](https://github.com/ClickHouse/ClickHouse/pull/54905) ([Nikita Taranov](https://github.com/nickitat)).
* An improvement which takes place when the filesystem cache is full and there are big reads. [#55158](https://github.com/ClickHouse/ClickHouse/pull/55158) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add ability to disable checksums for S3 to avoid excessive pass over the file (this is controlled by the setting `s3_disable_checksum`). [#55559](https://github.com/ClickHouse/ClickHouse/pull/55559) ([Azat Khuzhin](https://github.com/azat)).
* Now we read synchronously from remote tables when the data is in the page cache (as we do for local tables). It is faster, doesn't require synchronisation inside the thread pool, doesn't hesitate to do `seek`s on the local FS, and reduces CPU wait. [#55841](https://github.com/ClickHouse/ClickHouse/pull/55841) ([Nikita Taranov](https://github.com/nickitat)).
* Optimization for getting a value from `map` and for `arrayElement`: it brings about a 30% speedup by reducing the reserved memory and reducing `resize` calls. [#55957](https://github.com/ClickHouse/ClickHouse/pull/55957) ([lgbo](https://github.com/lgbo-ustc)).
* Optimization of multi-stage filtering with AVX-512. The performance experiments of the OnTime dataset on the ICX device (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads) show that this change could bring the improvements of 7.4%, 5.9%, 4.7%, 3.0%, and 4.6% to the QPS of the query Q2, Q3, Q4, Q5 and Q6 respectively while having no impact on others. [#56079](https://github.com/ClickHouse/ClickHouse/pull/56079) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
* Limit the number of threads busy inside the query profiler. If there are more - they will skip profiling. [#56105](https://github.com/ClickHouse/ClickHouse/pull/56105) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Decrease the amount of virtual function calls in window functions. [#56120](https://github.com/ClickHouse/ClickHouse/pull/56120) ([Maksim Kita](https://github.com/kitaisreal)).
* Allow recursive Tuple field pruning in the ORC data format to speed up scanning. [#56122](https://github.com/ClickHouse/ClickHouse/pull/56122) ([李扬](https://github.com/taiyang-li)).
* Trivial count optimization for the `Npy` data format: queries like `select count() from 'data.npy'` will run much faster thanks to caching the results. [#56304](https://github.com/ClickHouse/ClickHouse/pull/56304) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Queries with aggregation and a large number of streams will use less memory during the plan's construction. [#57074](https://github.com/ClickHouse/ClickHouse/pull/57074) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Improve performance of executing queries for use cases with many users and highly concurrent queries (>2000 QPS) by optimizing the access to ProcessList. [#57106](https://github.com/ClickHouse/ClickHouse/pull/57106) ([Andrej Hoos](https://github.com/adikus)).
* Trivial improvement on array join, reuse some intermediate results. [#57183](https://github.com/ClickHouse/ClickHouse/pull/57183) ([李扬](https://github.com/taiyang-li)).
* There are cases when stack unwinding was slow. Not anymore. [#57221](https://github.com/ClickHouse/ClickHouse/pull/57221) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Now we use default read pool for reading from external storage when `max_streams = 1`. It is beneficial when read prefetches are enabled. [#57334](https://github.com/ClickHouse/ClickHouse/pull/57334) ([Nikita Taranov](https://github.com/nickitat)).
* Keeper improvement: improve memory-usage during startup by delaying log preprocessing. [#55660](https://github.com/ClickHouse/ClickHouse/pull/55660) ([Antonio Andelic](https://github.com/antonio2368)).
* Improved performance of glob matching for `File` and `HDFS` storages. [#56141](https://github.com/ClickHouse/ClickHouse/pull/56141) ([Andrey Zvonov](https://github.com/zvonand)).
* Posting lists in experimental full text indexes are now compressed which reduces their size by 10-30%. [#56226](https://github.com/ClickHouse/ClickHouse/pull/56226) ([Harry Lee](https://github.com/HarryLeeIBM)).
* Parallelise `BackupEntriesCollector` in backups. [#56312](https://github.com/ClickHouse/ClickHouse/pull/56312) ([Kseniia Sumarokova](https://github.com/kssenii)).
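An illustrative query for the `Numbers` engine entry above: with 23.11 the engine can derive the needed range from the `WHERE` condition instead of generating all rows. The exact pruning is not shown here; the query simply demonstrates the shape of workload that benefits.

```sql
-- The Numbers engine analyzes the condition and generates only the needed subset of rows.
SELECT count() FROM numbers(1000000000) WHERE number BETWEEN 100000 AND 200000;
```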
#### Improvement
* Add a new `MergeTree` setting `add_implicit_sign_column_constraint_for_collapsing_engine` (disabled by default). When enabled, it adds an implicit CHECK constraint for `CollapsingMergeTree` tables that restricts the value of the `Sign` column to be only -1 or 1. [#56701](https://github.com/ClickHouse/ClickHouse/issues/56701). [#56986](https://github.com/ClickHouse/ClickHouse/pull/56986) ([Kevin Mingtarja](https://github.com/kevinmingtarja)).
* Enable adding new disk to storage configuration without restart. [#56367](https://github.com/ClickHouse/ClickHouse/pull/56367) ([Duc Canh Le](https://github.com/canhld94)).
* Support creating and materializing index in the same alter query, also support "modify TTL" and "materialize TTL" in the same query. Closes [#55651](https://github.com/ClickHouse/ClickHouse/issues/55651). [#56331](https://github.com/ClickHouse/ClickHouse/pull/56331) ([flynn](https://github.com/ucasfl)).
* Add a new table function named `fuzzJSON` with rows containing perturbed versions of the source JSON string with random variations. [#56490](https://github.com/ClickHouse/ClickHouse/pull/56490) ([Julia Kartseva](https://github.com/jkartseva)).
* Engine `Merge` filters the records according to the row policies of the underlying tables, so you don't have to create another row policy on a `Merge` table. [#50209](https://github.com/ClickHouse/ClickHouse/pull/50209) ([Ilya Golshtein](https://github.com/ilejn)).
* Add a setting `max_execution_time_leaf` to limit the execution time on shards for distributed queries, and `timeout_overflow_mode_leaf` to control the behaviour if the timeout happens (see the sketch below). [#51823](https://github.com/ClickHouse/ClickHouse/pull/51823) ([Duc Canh Le](https://github.com/canhld94)).
* Add ClickHouse setting to disable tunneling for HTTPS requests over HTTP proxy. [#55033](https://github.com/ClickHouse/ClickHouse/pull/55033) ([Arthur Passos](https://github.com/arthurpassos)).
* Set `background_fetches_pool_size` to 16 and `background_schedule_pool_size` to 512, which is better for production usage with frequent small insertions. [#54327](https://github.com/ClickHouse/ClickHouse/pull/54327) ([Denny Crane](https://github.com/den-crane)).
* When reading data from a CSV file in which a line ends with a bare `\r` not followed by `\n`, ClickHouse raises the exception `Cannot parse CSV format: found \r (CR) not followed by \n (LF). Line must end by \n (LF) or \r\n (CR LF) or \n\r.` In ClickHouse, a CSV line must end with `\n`, `\r\n` or `\n\r`, so a `\r` must be followed by `\n`; however, in some situations the CSV input is abnormal and, as above, a `\r` appears at the end of a line. [#54340](https://github.com/ClickHouse/ClickHouse/pull/54340) ([KevinyhZou](https://github.com/KevinyhZou)).
* Update Arrow library to release-13.0.0 that supports new encodings. Closes [#44505](https://github.com/ClickHouse/ClickHouse/issues/44505). [#54800](https://github.com/ClickHouse/ClickHouse/pull/54800) ([Kruglov Pavel](https://github.com/Avogar)).
* Improve performance of ON CLUSTER queries by removing heavy system calls to get all network interfaces when looking for the local IP address in the DDL entry hosts list. [#54909](https://github.com/ClickHouse/ClickHouse/pull/54909) ([Duc Canh Le](https://github.com/canhld94)).
* Fixed accounting of memory allocated before attaching a thread to a query or a user. [#56089](https://github.com/ClickHouse/ClickHouse/pull/56089) ([Nikita Taranov](https://github.com/nickitat)).
* Add support for `LARGE_LIST` in Apache Arrow formats. [#56118](https://github.com/ClickHouse/ClickHouse/pull/56118) ([edef](https://github.com/edef1c)).
* Allow manual compaction of `EmbeddedRocksDB` via `OPTIMIZE` query. [#56225](https://github.com/ClickHouse/ClickHouse/pull/56225) ([Azat Khuzhin](https://github.com/azat)).
* Add ability to specify BlockBasedTableOptions for `EmbeddedRocksDB` tables. [#56264](https://github.com/ClickHouse/ClickHouse/pull/56264) ([Azat Khuzhin](https://github.com/azat)).
* `SHOW COLUMNS` now displays MySQL's equivalent data type name when the connection was made through the MySQL protocol. Previously, this was the case when setting `use_mysql_types_in_show_columns = 1`. The setting is retained but made obsolete. [#56277](https://github.com/ClickHouse/ClickHouse/pull/56277) ([Robert Schulze](https://github.com/rschu1ze)).
* Fixed possible `The local set of parts of table doesn't look like the set of parts in ZooKeeper` error if server was restarted just after `TRUNCATE` or `DROP PARTITION`. [#56282](https://github.com/ClickHouse/ClickHouse/pull/56282) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fixed handling of non-const query strings in functions `formatQuery` / `formatQuerySingleLine`. Also added `OrNull` variants of both functions that return NULL when a query cannot be parsed instead of throwing an exception. [#56327](https://github.com/ClickHouse/ClickHouse/pull/56327) ([Robert Schulze](https://github.com/rschu1ze)).
* Allow backup of materialized view with dropped inner table instead of failing the backup. [#56387](https://github.com/ClickHouse/ClickHouse/pull/56387) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Queries to `system.replicas` initiate requests to ZooKeeper when certain columns are queried. When there are thousands of tables, these requests might produce a considerable load on ZooKeeper. If there are multiple simultaneous queries to `system.replicas`, they issue the same requests multiple times. The change is to deduplicate requests from concurrent queries. [#56420](https://github.com/ClickHouse/ClickHouse/pull/56420) ([Alexander Gololobov](https://github.com/davenger)).
* Fix translation to MySQL compatible query for querying external databases. [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
* Add support for backing up and restoring tables using `KeeperMap` engine. [#56460](https://github.com/ClickHouse/ClickHouse/pull/56460) ([Antonio Andelic](https://github.com/antonio2368)).
* A 404 response for CompleteMultipartUpload is now rechecked. The operation could have completed on the server even if the client got a timeout or another network error, in which case the next retry of CompleteMultipartUpload receives a 404 response. If the object key exists, the operation is considered successful. [#56475](https://github.com/ClickHouse/ClickHouse/pull/56475) ([Sema Checherinda](https://github.com/CheSema)).
* Enable the HTTP OPTIONS method by default - it simplifies requesting ClickHouse from a web browser. [#56483](https://github.com/ClickHouse/ClickHouse/pull/56483) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* The value for `dns_max_consecutive_failures` was changed by mistake in [#46550](https://github.com/ClickHouse/ClickHouse/issues/46550) - this is reverted and adjusted to a better value. Also, increased the HTTP keep-alive timeout to a reasonable value from production. [#56485](https://github.com/ClickHouse/ClickHouse/pull/56485) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Load base backups lazily (a base backup won't be loaded until it's needed). Also add some log message and profile events for backups. [#56516](https://github.com/ClickHouse/ClickHouse/pull/56516) ([Vitaly Baranov](https://github.com/vitlibar)).
* Setting `query_cache_store_results_of_queries_with_nondeterministic_functions` (with values `false` or `true`) was marked obsolete. It was replaced by setting `query_cache_nondeterministic_function_handling`, a three-valued enum that controls how the query cache handles queries with non-deterministic functions: a) throw an exception (default behavior), b) save the non-deterministic query result regardless, or c) ignore, i.e. don't throw an exception and don't cache the result. [#56519](https://github.com/ClickHouse/ClickHouse/pull/56519) ([Robert Schulze](https://github.com/rschu1ze)).
* Rewrite equality with `is null` check in JOIN ON section. Experimental *Analyzer only*. [#56538](https://github.com/ClickHouse/ClickHouse/pull/56538) ([vdimir](https://github.com/vdimir)).
* Function `concat` now supports arbitrary argument types (instead of only String and FixedString arguments). This makes it behave more similarly to MySQL's `concat` implementation. For example, `SELECT concat('ab', 42)` now returns `ab42`. [#56540](https://github.com/ClickHouse/ClickHouse/pull/56540) ([Serge Klochkov](https://github.com/slvrtrn)).
* Allow getting cache configuration from 'named_collection' section in config or from SQL created named collections. [#56541](https://github.com/ClickHouse/ClickHouse/pull/56541) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Update `query_masking_rules` when reloading the config ([#56449](https://github.com/ClickHouse/ClickHouse/issues/56449)). [#56573](https://github.com/ClickHouse/ClickHouse/pull/56573) ([Mikhail Koviazin](https://github.com/mkmkme)).
* PostgreSQL database engine: make the removal of outdated tables less aggressive when the PostgreSQL connection is unsuccessful. [#56609](https://github.com/ClickHouse/ClickHouse/pull/56609) ([jsc0218](https://github.com/jsc0218)).
* Connecting to PostgreSQL took too much time when the URL was not right, so the relevant query got stuck there until it was cancelled; this is now fixed. [#56648](https://github.com/ClickHouse/ClickHouse/pull/56648) ([jsc0218](https://github.com/jsc0218)).
* Do not allow tables on different replicas to have different aggregate functions in `SimpleAggregateFunction` columns. [#56724](https://github.com/ClickHouse/ClickHouse/pull/56724) ([Duc Canh Le](https://github.com/canhld94)).
* Keeper improvement: disable compressed logs by default in Keeper. [#56763](https://github.com/ClickHouse/ClickHouse/pull/56763) ([Antonio Andelic](https://github.com/antonio2368)).
* Add config setting `wait_dictionaries_load_at_startup`. [#56782](https://github.com/ClickHouse/ClickHouse/pull/56782) ([Vitaly Baranov](https://github.com/vitlibar)).
* There was a potential vulnerability in previous ClickHouse versions: if a user has connected and unsuccessfully tried to authenticate with the "interserver secret" method, the server didn't terminate the connection immediately but continued to receive and ignore the leftover packets from the client. While these packets are ignored, they are still parsed, and if they use a compression method with another known vulnerability, it will lead to exploitation of it without authentication. This issue was found with [ClickHouse Bug Bounty Program](https://github.com/ClickHouse/ClickHouse/issues/38986) by https://twitter.com/malacupa. [#56794](https://github.com/ClickHouse/ClickHouse/pull/56794) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fetching a part now waits until that part is fully committed on the remote replica. It is better not to send a part in the PreActive state; in the case of zero-copy replication this is a mandatory restriction. [#56808](https://github.com/ClickHouse/ClickHouse/pull/56808) ([Sema Checherinda](https://github.com/CheSema)).
* Fix possible postgresql logical replication conversion error when using experimental `MaterializedPostgreSQL`. [#53721](https://github.com/ClickHouse/ClickHouse/pull/53721) ([takakawa](https://github.com/takakawa)).
* Implement the user-level setting `alter_move_to_space_execute_async` which allows executing `ALTER TABLE ... MOVE PARTITION|PART TO DISK|VOLUME` queries asynchronously. The size of the pool for background executions is controlled by `background_move_pool_size`. The default behavior is synchronous execution. Fixes [#47643](https://github.com/ClickHouse/ClickHouse/issues/47643). [#56809](https://github.com/ClickHouse/ClickHouse/pull/56809) ([alesapin](https://github.com/alesapin)).
* Allow filtering by engine when scanning `system.tables`, avoiding unnecessary (potentially time-consuming) connections. [#56813](https://github.com/ClickHouse/ClickHouse/pull/56813) ([jsc0218](https://github.com/jsc0218)).
* Show `total_bytes` and `total_rows` in system tables for RocksDB storage. [#56816](https://github.com/ClickHouse/ClickHouse/pull/56816) ([Aleksandr Musorin](https://github.com/AVMusorin)).
* Allow basic commands in ALTER for TEMPORARY tables. [#56892](https://github.com/ClickHouse/ClickHouse/pull/56892) ([Sergey](https://github.com/icuken)).
* LZ4 compression: buffer the compressed block in the rare case when the output buffer's capacity is not enough for writing the compressed block directly into it. [#56938](https://github.com/ClickHouse/ClickHouse/pull/56938) ([Sema Checherinda](https://github.com/CheSema)).
* Add metrics for the number of queued jobs, which is useful for the IO thread pool. [#56958](https://github.com/ClickHouse/ClickHouse/pull/56958) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add a setting for the PostgreSQL table engine in the config file, along with a check for the setting and documentation around it. [#56959](https://github.com/ClickHouse/ClickHouse/pull/56959) ([Peignon Melvyn](https://github.com/melvynator)).
* Function `concat` can now be called with a single argument, e.g., `SELECT concat('abc')`. This makes its behavior more consistent with MySQL's concat implementation. [#57000](https://github.com/ClickHouse/ClickHouse/pull/57000) ([Serge Klochkov](https://github.com/slvrtrn)).
* Sign all `x-amz-*` headers as required by the AWS S3 docs. [#57001](https://github.com/ClickHouse/ClickHouse/pull/57001) ([Arthur Passos](https://github.com/arthurpassos)).
* Function `fromDaysSinceYearZero` (alias: `FROM_DAYS`) can now be used with unsigned and signed integer types (previously, it had to be an unsigned integer). This improves compatibility with 3rd party tools such as Tableau Online. [#57002](https://github.com/ClickHouse/ClickHouse/pull/57002) ([Serge Klochkov](https://github.com/slvrtrn)).
* Add `system.s3queue_log` to default config. [#57036](https://github.com/ClickHouse/ClickHouse/pull/57036) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Change the default for `wait_dictionaries_load_at_startup` to true, and use this setting only if `dictionaries_lazy_load` is false. [#57133](https://github.com/ClickHouse/ClickHouse/pull/57133) ([Vitaly Baranov](https://github.com/vitlibar)).
* Check dictionary source type on creation even if `dictionaries_lazy_load` is enabled. [#57134](https://github.com/ClickHouse/ClickHouse/pull/57134) ([Vitaly Baranov](https://github.com/vitlibar)).
* Plan-level optimizations can now be enabled/disabled individually. Previously, it was only possible to disable them all. The setting which previously did that (`query_plan_enable_optimizations`) is retained and can still be used to disable all optimizations. [#57152](https://github.com/ClickHouse/ClickHouse/pull/57152) ([Robert Schulze](https://github.com/rschu1ze)).
* The server's exit code will correspond to the exception code. For example, if the server cannot start due to memory limit, it will exit with the code 241 = MEMORY_LIMIT_EXCEEDED. In previous versions, the exit code for exceptions was always 70 = Poco::Util::ExitCode::EXIT_SOFTWARE. [#57153](https://github.com/ClickHouse/ClickHouse/pull/57153) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Do not demangle and symbolize stack frames from `functional` C++ header. [#57201](https://github.com/ClickHouse/ClickHouse/pull/57201) ([Mike Kot](https://github.com/myrrc)).
* HTTP server page `/dashboard` now supports charts with multiple lines. [#57236](https://github.com/ClickHouse/ClickHouse/pull/57236) ([Sergei Trifonov](https://github.com/serxa)).
* The `max_memory_usage_in_client` command line option supports a string value with a suffix (K, M, G, etc). Closes [#56879](https://github.com/ClickHouse/ClickHouse/issues/56879). [#57273](https://github.com/ClickHouse/ClickHouse/pull/57273) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Bumped Intel QPL (used by codec `DEFLATE_QPL`) from v1.2.0 to v1.3.1. Also fixed a bug for the case of BOF (Block On Fault) = 0: page faults are now handled by falling back to the software path. [#57291](https://github.com/ClickHouse/ClickHouse/pull/57291) ([jasperzhu](https://github.com/jinjunzh)).
* Increase default `replicated_deduplication_window` of MergeTree settings from 100 to 1k. [#57335](https://github.com/ClickHouse/ClickHouse/pull/57335) ([sichenzhao](https://github.com/sichenzhao)).
* Use `INCONSISTENT_METADATA_FOR_BACKUP` less often: if possible, prefer to continue scanning instead of stopping and restarting the scan for backup from the beginning. [#57385](https://github.com/ClickHouse/ClickHouse/pull/57385) ([Vitaly Baranov](https://github.com/vitlibar)).
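A hedged sketch for the `max_execution_time_leaf` entry above; the table name is hypothetical, and `'break'` is assumed to be accepted by `timeout_overflow_mode_leaf` by analogy with the existing `timeout_overflow_mode` setting.

```sql
-- Limit per-shard execution time of a distributed query to 10 seconds and stop (rather than throw) on timeout.
SELECT count()
FROM distributed_table   -- hypothetical Distributed table
SETTINGS max_execution_time_leaf = 10, timeout_overflow_mode_leaf = 'break';
```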
#### Build/Testing/Packaging Improvement
* Add SQLLogic test. [#56078](https://github.com/ClickHouse/ClickHouse/pull/56078) ([Han Fei](https://github.com/hanfei1991)).
* Make `clickhouse-local` and `clickhouse-client` available under short names (`ch`, `chl`, `chc`) for usability. [#56634](https://github.com/ClickHouse/ClickHouse/pull/56634) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Optimized build size further by removing unused code from external libraries. [#56786](https://github.com/ClickHouse/ClickHouse/pull/56786) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add automatic check that there are no large translation units. [#56559](https://github.com/ClickHouse/ClickHouse/pull/56559) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Lower the size of the single-binary distribution. This closes [#55181](https://github.com/ClickHouse/ClickHouse/issues/55181). [#56617](https://github.com/ClickHouse/ClickHouse/pull/56617) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Information about the sizes of every translation unit and binary file after each build will be sent to the CI database in ClickHouse Cloud. This closes [#56107](https://github.com/ClickHouse/ClickHouse/issues/56107). [#56636](https://github.com/ClickHouse/ClickHouse/pull/56636) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Certain files of "Apache Arrow" library (which we use only for non-essential things like parsing the arrow format) were rebuilt all the time regardless of the build cache. This is fixed. [#56657](https://github.com/ClickHouse/ClickHouse/pull/56657) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Avoid recompiling translation units depending on the autogenerated source file about version. [#56660](https://github.com/ClickHouse/ClickHouse/pull/56660) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Tracing data of the linker invocations will be sent to the CI database in ClickHouse Cloud. [#56725](https://github.com/ClickHouse/ClickHouse/pull/56725) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Use DWARF 5 debug symbols for the clickhouse binary (was DWARF 4 previously). [#56770](https://github.com/ClickHouse/ClickHouse/pull/56770) ([Michael Kolupaev](https://github.com/al13n321)).
* Add a new build option `SANITIZE_COVERAGE`. If it is enabled, the code is instrumented to track the coverage. The collected information is available inside ClickHouse with: (1) a new function `coverage` that returns an array of unique addresses in the code found after the previous coverage reset; (2) `SYSTEM RESET COVERAGE` query that resets the accumulated data. This allows us to compare the coverage of different tests, including differential code coverage (see the sketch below). Continuation of [#20539](https://github.com/ClickHouse/ClickHouse/issues/20539). [#56102](https://github.com/ClickHouse/ClickHouse/pull/56102) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Some of the stack frames might not be resolved when collecting stacks. In such cases the raw address might be helpful. [#56267](https://github.com/ClickHouse/ClickHouse/pull/56267) ([Alexander Gololobov](https://github.com/davenger)).
* Add an option to disable `libssh`. [#56333](https://github.com/ClickHouse/ClickHouse/pull/56333) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Enable temporary_data_in_cache in S3 tests in CI. [#48425](https://github.com/ClickHouse/ClickHouse/pull/48425) ([vdimir](https://github.com/vdimir)).
* Set the max memory usage for clickhouse-client (`1G`) in the CI. [#56873](https://github.com/ClickHouse/ClickHouse/pull/56873) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
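A sketch of the coverage introspection described in the `SANITIZE_COVERAGE` entry above; it assumes a server built with that option enabled.

```sql
-- Reset the accumulated coverage, run the workload under test, then inspect the hit addresses.
SYSTEM RESET COVERAGE;
-- ... run the queries being measured ...
SELECT length(coverage()) AS unique_addresses_hit;
```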
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix experimental Analyzer: insertion from a SELECT with a subquery referencing the insertion table should process only the insertion block. [#50857](https://github.com/ClickHouse/ClickHouse/pull/50857) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix a bug in `str_to_map` function. [#56423](https://github.com/ClickHouse/ClickHouse/pull/56423) ([Arthur Passos](https://github.com/arthurpassos)).
* Keeper `reconfig`: add timeout before yielding/taking leadership [#53481](https://github.com/ClickHouse/ClickHouse/pull/53481) ([Mike Kot](https://github.com/myrrc)).
* Fix incorrect header in grace hash join and filter pushdown [#53922](https://github.com/ClickHouse/ClickHouse/pull/53922) ([vdimir](https://github.com/vdimir)).
* Fix SELECT from system tables when the table is based on a table function. [#55540](https://github.com/ClickHouse/ClickHouse/pull/55540) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* RFC: Fix "Cannot find column X in source stream" for Distributed queries with LIMIT BY [#55836](https://github.com/ClickHouse/ClickHouse/pull/55836) ([Azat Khuzhin](https://github.com/azat)).
* Fix 'Cannot read from file:' while running the client in the background [#55976](https://github.com/ClickHouse/ClickHouse/pull/55976) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix clickhouse-local exit on bad send_logs_level setting [#55994](https://github.com/ClickHouse/ClickHouse/pull/55994) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix EXPLAIN AST with a parameterized view [#56004](https://github.com/ClickHouse/ClickHouse/pull/56004) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix a crash during table loading on startup [#56232](https://github.com/ClickHouse/ClickHouse/pull/56232) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix ClickHouse-sourced dictionaries with an explicit query [#56236](https://github.com/ClickHouse/ClickHouse/pull/56236) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix segfault in signal handler for Keeper [#56266](https://github.com/ClickHouse/ClickHouse/pull/56266) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix incomplete query result for UNION in view() function. [#56274](https://github.com/ClickHouse/ClickHouse/pull/56274) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix inconsistency of "cast('0' as DateTime64(3))" and "cast('0' as Nullable(DateTime64(3)))" [#56286](https://github.com/ClickHouse/ClickHouse/pull/56286) ([李扬](https://github.com/taiyang-li)).
* Fix rare race condition related to Memory allocation failure [#56303](https://github.com/ClickHouse/ClickHouse/pull/56303) ([alesapin](https://github.com/alesapin)).
* Fix restore from backup with `flatten_nested` and `data_type_default_nullable` [#56306](https://github.com/ClickHouse/ClickHouse/pull/56306) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix crash in case of adding a column with type Object(JSON) [#56307](https://github.com/ClickHouse/ClickHouse/pull/56307) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix crash in filterPushDown [#56380](https://github.com/ClickHouse/ClickHouse/pull/56380) ([vdimir](https://github.com/vdimir)).
* Fix restore from backup with mat view and dropped source table [#56383](https://github.com/ClickHouse/ClickHouse/pull/56383) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix segfault during Kerberos initialization [#56401](https://github.com/ClickHouse/ClickHouse/pull/56401) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix buffer overflow in T64 [#56434](https://github.com/ClickHouse/ClickHouse/pull/56434) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix nullable primary key in final (2) [#56452](https://github.com/ClickHouse/ClickHouse/pull/56452) ([Amos Bird](https://github.com/amosbird)).
* Fix ON CLUSTER queries without database on initial node [#56484](https://github.com/ClickHouse/ClickHouse/pull/56484) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix startup failure due to TTL dependency [#56489](https://github.com/ClickHouse/ClickHouse/pull/56489) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix ALTER COMMENT queries ON CLUSTER [#56491](https://github.com/ClickHouse/ClickHouse/pull/56491) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix empty NAMED COLLECTIONs [#56494](https://github.com/ClickHouse/ClickHouse/pull/56494) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix two cases of projection analysis. [#56502](https://github.com/ClickHouse/ClickHouse/pull/56502) ([Amos Bird](https://github.com/amosbird)).
* Fix handling of aliases in query cache [#56545](https://github.com/ClickHouse/ClickHouse/pull/56545) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix conversion from `Nullable(Enum)` to `Nullable(String)` [#56644](https://github.com/ClickHouse/ClickHouse/pull/56644) ([Nikolay Degterinsky](https://github.com/evillique)).
* More reliable log handling in Keeper [#56670](https://github.com/ClickHouse/ClickHouse/pull/56670) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix configuration merge for nodes with substitution attributes [#56694](https://github.com/ClickHouse/ClickHouse/pull/56694) ([Konstantin Bogdanov](https://github.com/thevar1able)).
* Fix duplicate usage of table function input(). [#56695](https://github.com/ClickHouse/ClickHouse/pull/56695) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix: RabbitMQ OpenSSL dynamic loading issue [#56703](https://github.com/ClickHouse/ClickHouse/pull/56703) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix crash in GCD codec in case when zeros present in data [#56704](https://github.com/ClickHouse/ClickHouse/pull/56704) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Fix 'mutex lock failed: Invalid argument' in clickhouse-local during insert into function [#56710](https://github.com/ClickHouse/ClickHouse/pull/56710) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix Date text parsing in optimistic path [#56765](https://github.com/ClickHouse/ClickHouse/pull/56765) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix crash in FPC codec [#56795](https://github.com/ClickHouse/ClickHouse/pull/56795) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* DatabaseReplicated: fix DDL query timeout after recovering a replica [#56796](https://github.com/ClickHouse/ClickHouse/pull/56796) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix incorrect nullable columns reporting in MySQL binary protocol [#56799](https://github.com/ClickHouse/ClickHouse/pull/56799) ([Serge Klochkov](https://github.com/slvrtrn)).
* Support Iceberg metadata files for metastore tables [#56810](https://github.com/ClickHouse/ClickHouse/pull/56810) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix TSAN report under transform [#56817](https://github.com/ClickHouse/ClickHouse/pull/56817) ([Raúl Marín](https://github.com/Algunenano)).
* Fix SET query and SETTINGS formatting [#56825](https://github.com/ClickHouse/ClickHouse/pull/56825) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix failure to start due to table dependency in joinGet [#56828](https://github.com/ClickHouse/ClickHouse/pull/56828) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix flattening existing Nested columns during ADD COLUMN [#56830](https://github.com/ClickHouse/ClickHouse/pull/56830) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix allowing CR (`\r`) end-of-line for CSV [#56901](https://github.com/ClickHouse/ClickHouse/pull/56901) ([KevinyhZou](https://github.com/KevinyhZou)).
* Fix `tryBase64Decode` with invalid input [#56913](https://github.com/ClickHouse/ClickHouse/pull/56913) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix generating deep nested columns in CapnProto/Protobuf schemas [#56941](https://github.com/ClickHouse/ClickHouse/pull/56941) ([Kruglov Pavel](https://github.com/Avogar)).
* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
* Fix sqlite file path validation [#56984](https://github.com/ClickHouse/ClickHouse/pull/56984) ([San](https://github.com/santrancisco)).
* S3Queue: fix metadata reference increment [#56990](https://github.com/ClickHouse/ClickHouse/pull/56990) ([Kseniia Sumarokova](https://github.com/kssenii)).
* S3Queue minor fix [#56999](https://github.com/ClickHouse/ClickHouse/pull/56999) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix file path validation for DatabaseFileSystem [#57029](https://github.com/ClickHouse/ClickHouse/pull/57029) ([San](https://github.com/santrancisco)).
* Fix `fuzzBits` with `ARRAY JOIN` [#57033](https://github.com/ClickHouse/ClickHouse/pull/57033) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Nullptr dereference in partial merge join with joined_subquery_re… [#57048](https://github.com/ClickHouse/ClickHouse/pull/57048) ([vdimir](https://github.com/vdimir)).
* Fix race condition in RemoteSource [#57052](https://github.com/ClickHouse/ClickHouse/pull/57052) ([Raúl Marín](https://github.com/Algunenano)).
* Implement `bitHammingDistance` for big integers (see the sketch below) [#57073](https://github.com/ClickHouse/ClickHouse/pull/57073) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* S3-style links bug fix [#57075](https://github.com/ClickHouse/ClickHouse/pull/57075) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Fix JSON_QUERY function with multiple numeric paths [#57096](https://github.com/ClickHouse/ClickHouse/pull/57096) ([KevinyhZou](https://github.com/KevinyhZou)).
* Fix buffer overflow in Gorilla codec [#57107](https://github.com/ClickHouse/ClickHouse/pull/57107) ([Nikolay Degterinsky](https://github.com/evillique)).
* Close interserver connection on any exception before authentication [#57142](https://github.com/ClickHouse/ClickHouse/pull/57142) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
* Ignore comments when comparing column descriptions [#57259](https://github.com/ClickHouse/ClickHouse/pull/57259) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)).
* Background merges correctly use temporary data storage in the cache [#57275](https://github.com/ClickHouse/ClickHouse/pull/57275) ([vdimir](https://github.com/vdimir)).
* Keeper fix for changelog and snapshots [#57299](https://github.com/ClickHouse/ClickHouse/pull/57299) ([Antonio Andelic](https://github.com/antonio2368)).
* Ignore finished ON CLUSTER tasks if hostname changed [#57339](https://github.com/ClickHouse/ClickHouse/pull/57339) ([Alexander Tokmakov](https://github.com/tavplubix)).
* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)).
* FS cache: add a limit for background download [#57424](https://github.com/ClickHouse/ClickHouse/pull/57424) ([Kseniia Sumarokova](https://github.com/kssenii)).
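A small illustration for the `bitHammingDistance` fix noted above: with 23.11 it also works for big-integer types.

```sql
-- 0 and 1 differ in exactly one bit, so the distance is 1 even for 256-bit integers.
SELECT bitHammingDistance(toUInt256(0), toUInt256(1));
```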
### <a id="2310"></a> ClickHouse release 23.10, 2023-11-02
#### Backward Incompatible Change
* There is no longer an option to automatically remove broken data parts. This closes [#55174](https://github.com/ClickHouse/ClickHouse/issues/55174). [#55184](https://github.com/ClickHouse/ClickHouse/pull/55184) ([Alexey Milovidov](https://github.com/alexey-milovidov)). [#55557](https://github.com/ClickHouse/ClickHouse/pull/55557) ([Jihyuk Bok](https://github.com/tomahawk28)).
@@ -39,7 +255,7 @@
* Allow to drop cache for Protobuf format with `SYSTEM DROP SCHEMA FORMAT CACHE [FOR Protobuf]`. [#55064](https://github.com/ClickHouse/ClickHouse/pull/55064) ([Aleksandr Musorin](https://github.com/AVMusorin)).
* Add external HTTP Basic authenticator. [#55199](https://github.com/ClickHouse/ClickHouse/pull/55199) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Added function `byteSwap` which reverses the bytes of unsigned integers. This is particularly useful for reversing values of types which are represented as unsigned integers internally such as IPv4. [#55211](https://github.com/ClickHouse/ClickHouse/pull/55211) ([Priyansh Agrawal](https://github.com/Priyansh121096)).
* Added function `formatQuery()` which returns a formatted version (possibly spanning multiple lines) of a SQL query string. Also added function `formatQuerySingleLine()` which does the same but the returned string will not contain linebreaks. [#55239](https://github.com/ClickHouse/ClickHouse/pull/55239) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Added function `formatQuery` which returns a formatted version (possibly spanning multiple lines) of a SQL query string. Also added function `formatQuerySingleLine` which does the same but the returned string will not contain linebreaks. See the sketch below. [#55239](https://github.com/ClickHouse/ClickHouse/pull/55239) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Added `DWARF` input format that reads debug symbols from an ELF executable/library/object file. [#55450](https://github.com/ClickHouse/ClickHouse/pull/55450) ([Michael Kolupaev](https://github.com/al13n321)).
* Allow to save unparsed records and errors in RabbitMQ, NATS and FileLog engines. Add virtual columns `_error` and `_raw_message` (for NATS and RabbitMQ), `_raw_record` (for FileLog) that are filled when ClickHouse fails to parse a new record. The behaviour is controlled by the storage settings `nats_handle_error_mode` for NATS, `rabbitmq_handle_error_mode` for RabbitMQ, and `handle_error_mode` for FileLog, similar to `kafka_handle_error_mode`. If it's set to `default`, an exception will be thrown when ClickHouse fails to parse a record; if it's set to `stream`, the error and raw record will be saved into the virtual columns. Closes [#36035](https://github.com/ClickHouse/ClickHouse/issues/36035). [#55477](https://github.com/ClickHouse/ClickHouse/pull/55477) ([Kruglov Pavel](https://github.com/Avogar)).
* Keeper client improvement: add the `get_all_children_number` command that returns the number of all child nodes under a specific path. [#55485](https://github.com/ClickHouse/ClickHouse/pull/55485) ([guoxiaolong](https://github.com/guoxiaolongzte)).
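A minimal sketch of the `formatQuery` / `formatQuerySingleLine` functions referenced above; the exact output layout is determined by ClickHouse's standard query formatter.

```sql
-- Returns the query normalized onto a single line, e.g. 'SELECT 1, 2, 3'.
SELECT formatQuerySingleLine('select  1,2 ,   3');
-- Returns a multi-line formatted version of the same query.
SELECT formatQuery('select  1,2 ,   3');
```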
@@ -74,11 +290,11 @@
* Reduced memory consumption during loading of hierarchical dictionaries. [#55838](https://github.com/ClickHouse/ClickHouse/pull/55838) ([Nikita Taranov](https://github.com/nickitat)).
* All dictionaries support setting `dictionary_use_async_executor`. [#55839](https://github.com/ClickHouse/ClickHouse/pull/55839) ([vdimir](https://github.com/vdimir)).
* Prevent excessive memory usage when deserializing AggregateFunctionTopKGenericData. [#55947](https://github.com/ClickHouse/ClickHouse/pull/55947) ([Raúl Marín](https://github.com/Algunenano)).
* On a Keeper with lots of watches, AsyncMetrics threads can consume 100% of CPU for a noticeable time in `DB::KeeperStorage::getSessionsWithWatchesCount()`. The fix is to avoid traversing the heavy `watches` and `list_watches` sets. [#56054](https://github.com/ClickHouse/ClickHouse/pull/56054) ([Alexander Gololobov](https://github.com/davenger)).
* Add setting `optimize_trivial_approximate_count_query` to use `count()` approximation for storage EmbeddedRocksDB. Enable trivial count for StorageJoin. [#55806](https://github.com/ClickHouse/ClickHouse/pull/55806) ([Duc Canh Le](https://github.com/canhld94)).
* On a Keeper with lots of watches, AsyncMetrics threads can consume 100% of CPU for a noticeable time in `DB::KeeperStorage::getSessionsWithWatchesCount`. The fix is to avoid traversing the heavy `watches` and `list_watches` sets. [#56054](https://github.com/ClickHouse/ClickHouse/pull/56054) ([Alexander Gololobov](https://github.com/davenger)).
* Add setting `optimize_trivial_approximate_count_query` to use `count` approximation for storage EmbeddedRocksDB. Enable trivial count for StorageJoin. [#55806](https://github.com/ClickHouse/ClickHouse/pull/55806) ([Duc Canh Le](https://github.com/canhld94)).
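A hedged sketch for the entry above; `rocksdb_table` is a hypothetical table using the `EmbeddedRocksDB` engine.

```sql
-- Let count() use the engine's approximate row count instead of scanning all rows.
SELECT count() FROM rocksdb_table
SETTINGS optimize_trivial_approximate_count_query = 1;
```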
#### Improvement
* Functions `toDayOfWeek()` (MySQL alias: `DAYOFWEEK()`), `toYearWeek()` (`YEARWEEK()`) and `toWeek()` (`WEEK()`) now support `String` arguments. This makes their behavior consistent with MySQL's behavior. [#55589](https://github.com/ClickHouse/ClickHouse/pull/55589) ([Robert Schulze](https://github.com/rschu1ze)).
* Functions `toDayOfWeek` (MySQL alias: `DAYOFWEEK`), `toYearWeek` (`YEARWEEK`) and `toWeek` (`WEEK`) now support `String` arguments. This makes their behavior consistent with MySQL's behavior. [#55589](https://github.com/ClickHouse/ClickHouse/pull/55589) ([Robert Schulze](https://github.com/rschu1ze)).
* Introduced setting `date_time_overflow_behavior` with possible values `ignore`, `throw`, `saturate` that controls the overflow behavior when converting from Date, Date32, DateTime64, Integer or Float to Date, Date32, DateTime or DateTime64 (see the sketch below). [#55696](https://github.com/ClickHouse/ClickHouse/pull/55696) ([Andrey Zvonov](https://github.com/zvonand)).
* Implement query parameters support for `ALTER TABLE ... ACTION PARTITION [ID] {parameter_name:ParameterType}`. Merges [#49516](https://github.com/ClickHouse/ClickHouse/issues/49516). Closes [#49449](https://github.com/ClickHouse/ClickHouse/issues/49449). [#55604](https://github.com/ClickHouse/ClickHouse/pull/55604) ([alesapin](https://github.com/alesapin)).
* Print processor ids in a prettier manner in EXPLAIN. [#48852](https://github.com/ClickHouse/ClickHouse/pull/48852) ([Vlad Seliverstov](https://github.com/behebot)).
|
||||
@ -112,7 +328,7 @@
|
||||
* Functions `(add|subtract)(Year|Quarter|Month|Week|Day|Hour|Minute|Second|Millisecond|Microsecond|Nanosecond)` now support string-encoded date arguments, e.g. `SELECT addDays('2023-10-22', 1)`. This increases compatibility with MySQL and is needed by Tableau Online. [#55869](https://github.com/ClickHouse/ClickHouse/pull/55869) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
* When disabled, the setting `apply_deleted_mask` allows reading rows that were marked as deleted by lightweight DELETE queries. This is useful for debugging. [#55952](https://github.com/ClickHouse/ClickHouse/pull/55952) ([Alexander Gololobov](https://github.com/davenger)).
|
||||
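A sketch of the debugging workflow this setting enables; `t` is a hypothetical MergeTree table with rows removed by a lightweight `DELETE`:

```sql
SELECT count() FROM t;                                  -- rows deleted by lightweight DELETE are hidden
SELECT count() FROM t SETTINGS apply_deleted_mask = 0;  -- the deleted rows become visible again
```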
* Allow skipping `null` values when serializing Tuple to JSON objects, which makes it possible to keep compatibility with Spark's `to_json` function and is also useful for Gluten. [#55956](https://github.com/ClickHouse/ClickHouse/pull/55956) ([李扬](https://github.com/taiyang-li)).
|
||||
* Functions `(add|sub)Date()` now support string-encoded date arguments, e.g. `SELECT addDate('2023-10-22 11:12:13', INTERVAL 5 MINUTE)`. The same support for string-encoded date arguments is added to the plus and minus operators, e.g. `SELECT '2023-10-23' + INTERVAL 1 DAY`. This increases compatibility with MySQL and is needed by Tableau Online. [#55960](https://github.com/ClickHouse/ClickHouse/pull/55960) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
* Functions `(add|sub)Date` now support string-encoded date arguments, e.g. `SELECT addDate('2023-10-22 11:12:13', INTERVAL 5 MINUTE)`. The same support for string-encoded date arguments is added to the plus and minus operators, e.g. `SELECT '2023-10-23' + INTERVAL 1 DAY`. This increases compatibility with MySQL and is needed by Tableau Online. [#55960](https://github.com/ClickHouse/ClickHouse/pull/55960) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
* Allow unquoted strings with CR (`\r`) in CSV format. Closes [#39930](https://github.com/ClickHouse/ClickHouse/issues/39930). [#56046](https://github.com/ClickHouse/ClickHouse/pull/56046) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Allow to run `clickhouse-keeper` using embedded config. [#56086](https://github.com/ClickHouse/ClickHouse/pull/56086) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||
* Set a limit on the maximum configuration value for `queued.min.messages` to avoid a problem with starting to fetch data from Kafka. [#56121](https://github.com/ClickHouse/ClickHouse/pull/56121) ([Stas Morozov](https://github.com/r3b-fish)).
|
||||
@ -133,7 +349,7 @@
|
||||
* Fixed a bug where the `match` function (regex) with a pattern containing alternation produced an incorrect key condition. Closes #53222. [#54696](https://github.com/ClickHouse/ClickHouse/pull/54696) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
|
||||
* Fix 'Cannot find column' in read-in-order optimization with ARRAY JOIN [#51746](https://github.com/ClickHouse/ClickHouse/pull/51746) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Support previously missed experimental `Object(Nullable(json))` subcolumns in queries. [#54052](https://github.com/ClickHouse/ClickHouse/pull/54052) ([zps](https://github.com/VanDarkholme7)).
|
||||
* Re-add fix for `accurateCastOrNull()` [#54629](https://github.com/ClickHouse/ClickHouse/pull/54629) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
|
||||
* Re-add fix for `accurateCastOrNull` [#54629](https://github.com/ClickHouse/ClickHouse/pull/54629) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
|
||||
* Fix detecting `DEFAULT` for columns of a Distributed table created without AS [#55060](https://github.com/ClickHouse/ClickHouse/pull/55060) ([Vitaly Baranov](https://github.com/vitlibar)).
|
||||
* Proper cleanup in case of exception in ctor of ShellCommandSource [#55103](https://github.com/ClickHouse/ClickHouse/pull/55103) ([Alexander Gololobov](https://github.com/davenger)).
|
||||
* Fix deadlock in LDAP assigned role update [#55119](https://github.com/ClickHouse/ClickHouse/pull/55119) ([Julian Maicher](https://github.com/jmaicher)).
|
||||
@ -191,7 +407,7 @@
|
||||
* Add error handler to odbc-bridge [#56185](https://github.com/ClickHouse/ClickHouse/pull/56185) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
|
||||
|
||||
|
||||
### ClickHouse release 23.9, 2023-09-28
|
||||
### <a id="239"></a> ClickHouse release 23.9, 2023-09-28
|
||||
|
||||
#### Backward Incompatible Change
|
||||
* Remove the `status_info` configuration option and dictionaries status from the default Prometheus handler. [#54090](https://github.com/ClickHouse/ClickHouse/pull/54090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
@ -213,7 +429,7 @@
|
||||
* Add function `decodeHTMLComponent`. [#54097](https://github.com/ClickHouse/ClickHouse/pull/54097) ([Bharat Nallan](https://github.com/bharatnc)).
|
||||
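A minimal usage sketch of the new function (the exact set of supported entities is not spelled out here):

```sql
-- HTML entities are decoded back to their characters.
SELECT decodeHTMLComponent('Hello &amp; &lt;World&gt;');  -- expected: 'Hello & <World>'
```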
* Added `peak_threads_usage` to query_log table. [#54335](https://github.com/ClickHouse/ClickHouse/pull/54335) ([Alexey Gerasimchuck](https://github.com/Demilivor)).
|
||||
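A sketch of inspecting the new column for recently finished queries:

```sql
SELECT query_id, peak_threads_usage
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 5;
```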
* Add `SHOW FUNCTIONS` support to clickhouse-client. [#54337](https://github.com/ClickHouse/ClickHouse/pull/54337) ([Julia Kartseva](https://github.com/wat-ze-hex)).
|
||||
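A quick sketch of the new statement in `clickhouse-client` (the `LIKE` filter is assumed to be supported, as in other `SHOW` statements):

```sql
-- List registered functions whose names start with 'toDay'.
SHOW FUNCTIONS LIKE 'toDay%';
```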
* Added function `toDaysSinceYearZero` with alias `TO_DAYS` (for compatibility with MySQL) which returns the number of days passed since `0001-01-01` (in Proleptic Gregorian Calendar). [#54479](https://github.com/ClickHouse/ClickHouse/pull/54479) ([Robert Schulze](https://github.com/rschu1ze)). Function `toDaysSinceYearZero()` now supports arguments of type `DateTime` and `DateTime64`. [#54856](https://github.com/ClickHouse/ClickHouse/pull/54856) ([Serge Klochkov](https://github.com/slvrtrn)).
|
||||
* Added function `toDaysSinceYearZero` with alias `TO_DAYS` (for compatibility with MySQL) which returns the number of days passed since `0001-01-01` (in Proleptic Gregorian Calendar). [#54479](https://github.com/ClickHouse/ClickHouse/pull/54479) ([Robert Schulze](https://github.com/rschu1ze)). Function `toDaysSinceYearZero` now supports arguments of type `DateTime` and `DateTime64`. [#54856](https://github.com/ClickHouse/ClickHouse/pull/54856) ([Serge Klochkov](https://github.com/slvrtrn)).
|
||||
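A minimal sketch combining the original `Date` support with the newly added `DateTime`/`DateTime64` arguments:

```sql
SELECT
    toDaysSinceYearZero(toDate('2023-09-08')) AS from_date,
    toDaysSinceYearZero(now())                AS from_datetime;
```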
* Added functions `YYYYMMDDtoDate`, `YYYYMMDDtoDate32`, `YYYYMMDDhhmmssToDateTime` and `YYYYMMDDhhmmssToDateTime64`. They convert a date or date with time encoded as integer (e.g. 20230911) into a native date or date with time. As such, they provide the opposite functionality of existing functions `YYYYMMDDToDate`, `YYYYMMDDToDateTime`, `YYYYMMDDhhmmddToDateTime`, `YYYYMMDDhhmmddToDateTime64`. [#54509](https://github.com/ClickHouse/ClickHouse/pull/54509) ([Quanfa Fu](https://github.com/dentiscalprum)) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
* Add several string distance functions, including `byteHammingDistance`, `editDistance`. [#54935](https://github.com/ClickHouse/ClickHouse/pull/54935) ([flynn](https://github.com/ucasfl)).
|
||||
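A small sketch of the new string distance functions:

```sql
-- byteHammingDistance compares byte by byte; editDistance is the Levenshtein distance.
SELECT
    byteHammingDistance('abcd', 'abce'),       -- expected: 1
    editDistance('clickhouse', 'clockhouse');  -- expected: 1
```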
* Allow specifying the expiration date and, optionally, the time for user credentials with `VALID UNTIL datetime` clause. [#51261](https://github.com/ClickHouse/ClickHouse/pull/51261) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
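A sketch of the clause; the user name and password are hypothetical:

```sql
-- The credentials stop working after the given point in time.
CREATE USER analyst IDENTIFIED BY 'secret' VALID UNTIL '2024-06-30 23:59:59';
```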
@ -229,7 +445,7 @@
|
||||
* An optimization to rewrite `COUNT(DISTINCT ...)` and various `uniq` variants to `count` if it is selected from a subquery with GROUP BY. [#52082](https://github.com/ClickHouse/ClickHouse/pull/52082) [#52645](https://github.com/ClickHouse/ClickHouse/pull/52645) ([JackyWoo](https://github.com/JackyWoo)).
|
||||
* Remove manual calls to `mmap/mremap/munmap` and delegate all this work to `jemalloc` - and it slightly improves performance. [#52792](https://github.com/ClickHouse/ClickHouse/pull/52792) ([Nikita Taranov](https://github.com/nickitat)).
|
||||
* Fixed high CPU consumption when working with NATS. [#54399](https://github.com/ClickHouse/ClickHouse/pull/54399) ([Vasilev Pyotr](https://github.com/vahpetr)).
|
||||
* Since we use separate instructions for executing `toString()` with datetime argument, it is possible to improve performance a bit for non-datetime arguments and have some parts of the code cleaner. Follows up [#53680](https://github.com/ClickHouse/ClickHouse/issues/53680). [#54443](https://github.com/ClickHouse/ClickHouse/pull/54443) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
|
||||
* Since we use separate instructions for executing `toString` with datetime argument, it is possible to improve performance a bit for non-datetime arguments and have some parts of the code cleaner. Follows up [#53680](https://github.com/ClickHouse/ClickHouse/issues/53680). [#54443](https://github.com/ClickHouse/ClickHouse/pull/54443) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
|
||||
* Instead of serializing JSON elements into a `std::stringstream`, this PR tries to put the serialization result into `ColumnString` directly. [#54613](https://github.com/ClickHouse/ClickHouse/pull/54613) ([lgbo](https://github.com/lgbo-ustc)).
|
||||
* Enable ORDER BY optimization for reading data in corresponding order from a MergeTree table in case that the table is behind a view. [#54628](https://github.com/ClickHouse/ClickHouse/pull/54628) ([Vitaly Baranov](https://github.com/vitlibar)).
|
||||
* Improve JSON SQL functions by reusing `GeneratorJSONPath` and removing several shared pointers. [#54735](https://github.com/ClickHouse/ClickHouse/pull/54735) ([lgbo](https://github.com/lgbo-ustc)).
|
||||
@ -479,7 +695,7 @@
|
||||
* The `domainRFC` function now supports IPv6 in square brackets. [#53506](https://github.com/ClickHouse/ClickHouse/pull/53506) ([Chen768959](https://github.com/Chen768959)).
|
||||
* Use longer timeout for S3 CopyObject requests, which are used in backups. [#53533](https://github.com/ClickHouse/ClickHouse/pull/53533) ([Michael Kolupaev](https://github.com/al13n321)).
|
||||
* Added server setting `aggregate_function_group_array_max_element_size`. This setting is used to limit array size for `groupArray` function at serialization. The default value is `16777215`. [#53550](https://github.com/ClickHouse/ClickHouse/pull/53550) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* `SCHEMA()` was added as alias for `DATABASE()` to improve MySQL compatibility. [#53587](https://github.com/ClickHouse/ClickHouse/pull/53587) ([Daniël van Eeden](https://github.com/dveeden)).
|
||||
* `SCHEMA` was added as alias for `DATABASE` to improve MySQL compatibility. [#53587](https://github.com/ClickHouse/ClickHouse/pull/53587) ([Daniël van Eeden](https://github.com/dveeden)).
|
||||
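A minimal check of the alias, assuming it behaves exactly like `DATABASE()` / `currentDatabase()`:

```sql
SELECT SCHEMA(), DATABASE();  -- both should return the name of the current database, e.g. 'default'
```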
* Add asynchronous metrics about tables in the system database. For example, `TotalBytesOfMergeTreeTablesSystem`. This closes [#53603](https://github.com/ClickHouse/ClickHouse/issues/53603). [#53604](https://github.com/ClickHouse/ClickHouse/pull/53604) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* SQL editor in the Play UI and Dashboard will not use Grammarly. [#53614](https://github.com/ClickHouse/ClickHouse/pull/53614) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* As expert-level settings, it is now possible to (1) configure the size_ratio (i.e. the relative size of the protected queue) of the [index] mark/uncompressed caches, (2) configure the cache policy of the index mark and index uncompressed caches. [#53657](https://github.com/ClickHouse/ClickHouse/pull/53657) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
@ -741,7 +957,7 @@
|
||||
* Disable expression templates for time intervals [#52335](https://github.com/ClickHouse/ClickHouse/pull/52335) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* Fix `apply_snapshot` in Keeper [#52358](https://github.com/ClickHouse/ClickHouse/pull/52358) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Update build-osx.md [#52377](https://github.com/ClickHouse/ClickHouse/pull/52377) ([AlexBykovski](https://github.com/AlexBykovski)).
|
||||
* Fix `countSubstrings()` hang with empty needle and a column haystack [#52409](https://github.com/ClickHouse/ClickHouse/pull/52409) ([Sergei Trifonov](https://github.com/serxa)).
|
||||
* Fix `countSubstrings` hang with empty needle and a column haystack [#52409](https://github.com/ClickHouse/ClickHouse/pull/52409) ([Sergei Trifonov](https://github.com/serxa)).
|
||||
* Fix normal projection with merge table [#52432](https://github.com/ClickHouse/ClickHouse/pull/52432) ([Amos Bird](https://github.com/amosbird)).
|
||||
* Fix possible double-free in Aggregator [#52439](https://github.com/ClickHouse/ClickHouse/pull/52439) ([Nikita Taranov](https://github.com/nickitat)).
|
||||
* Fixed inserting into Buffer engine [#52440](https://github.com/ClickHouse/ClickHouse/pull/52440) ([Vasily Nemkov](https://github.com/Enmk)).
|
||||
@ -1585,7 +1801,7 @@
|
||||
* A couple of segfaults have been reported around `c-ares`. They were introduced in my previous pull requests. I have fixed them with the help of Alexander Tokmakov. [#45629](https://github.com/ClickHouse/ClickHouse/pull/45629) ([Arthur Passos](https://github.com/arthurpassos)).
|
||||
* Fix key description when encountering duplicate primary keys. This can happen in projections. See [#45590](https://github.com/ClickHouse/ClickHouse/issues/45590) for details. [#45686](https://github.com/ClickHouse/ClickHouse/pull/45686) ([Amos Bird](https://github.com/amosbird)).
|
||||
* Set compression method and level for backup. Closes [#45690](https://github.com/ClickHouse/ClickHouse/issues/45690). [#45737](https://github.com/ClickHouse/ClickHouse/pull/45737) ([Pradeep Chhetri](https://github.com/chhetripradeep)).
|
||||
* Should use `select_query_typed.limitByOffset()` instead of `select_query_typed.limitOffset()`. [#45817](https://github.com/ClickHouse/ClickHouse/pull/45817) ([刘陶峰](https://github.com/taofengliu)).
|
||||
* Should use `select_query_typed.limitByOffset` instead of `select_query_typed.limitOffset`. [#45817](https://github.com/ClickHouse/ClickHouse/pull/45817) ([刘陶峰](https://github.com/taofengliu)).
|
||||
* When using the experimental analyzer, queries like `SELECT number FROM numbers(100) LIMIT 10 OFFSET 10;` got wrong results (an empty result for this SQL). That was caused by an unnecessary offset step added by the planner. [#45822](https://github.com/ClickHouse/ClickHouse/pull/45822) ([刘陶峰](https://github.com/taofengliu)).
|
||||
* Backward compatibility - allow implicit narrowing conversion from UInt64 to IPv4 - required for "INSERT ... VALUES ..." expression. [#45865](https://github.com/ClickHouse/ClickHouse/pull/45865) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
|
||||
* Fix the IPv6 parser for mixed IPv4 addresses with a missing first octet (like `::.1.2.3`). [#45871](https://github.com/ClickHouse/ClickHouse/pull/45871) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
|
||||
|
@ -35,6 +35,7 @@ curl https://clickhouse.com/ | sh
|
||||
|
||||
* [**ClickHouse Meetup in Berlin**](https://www.meetup.com/clickhouse-berlin-user-group/events/296488501/) - Nov 30
|
||||
* [**ClickHouse Meetup in NYC**](https://www.meetup.com/clickhouse-new-york-user-group/events/296488779/) - Dec 11
|
||||
* [**ClickHouse Meetup in Sydney**](https://www.meetup.com/clickhouse-sydney-user-group/events/297638812/) - Dec 12
|
||||
* [**ClickHouse Meetup in Boston**](https://www.meetup.com/clickhouse-boston-user-group/events/296488840/) - Dec 12
|
||||
|
||||
Also, keep an eye out for upcoming meetups around the world. Somewhere else you want us to be? Please feel free to reach out to tyler <at> clickhouse <dot> com.
|
||||
|
@ -20,7 +20,8 @@ RUN apt-get update --yes \
|
||||
RUN pip3 install \
|
||||
numpy \
|
||||
pyodbc \
|
||||
deepdiff
|
||||
deepdiff \
|
||||
sqlglot
|
||||
|
||||
ARG odbc_repo="https://github.com/ClickHouse/clickhouse-odbc.git"
|
||||
|
||||
@ -35,7 +36,7 @@ RUN git clone --recursive ${odbc_repo} \
|
||||
&& odbcinst -i -s -l -f /clickhouse-odbc/packaging/odbc.ini.sample
|
||||
|
||||
ENV TZ=Europe/Amsterdam
|
||||
ENV MAX_RUN_TIME=900
|
||||
ENV MAX_RUN_TIME=9000
|
||||
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
|
||||
|
||||
ARG sqllogic_test_repo="https://github.com/gregrahn/sqllogictest.git"
|
||||
|
@ -75,6 +75,20 @@ function run_tests()
|
||||
cat /test_output/statements-test/check_status.tsv >> /test_output/check_status.tsv
|
||||
cat /test_output/statements-test/test_results.tsv >> /test_output/test_results.tsv
|
||||
tar -zcvf statements-check.tar.gz statements-test 1>/dev/null
|
||||
|
||||
mkdir -p /test_output/complete-test
|
||||
/clickhouse-tests/sqllogic/runner.py \
|
||||
--log-file /test_output/runner-complete-test.log \
|
||||
--log-level info \
|
||||
complete-test \
|
||||
--input-dir /sqllogictest \
|
||||
--out-dir /test_output/complete-test \
|
||||
2>&1 \
|
||||
| ts '%Y-%m-%d %H:%M:%S'
|
||||
|
||||
cat /test_output/complete-test/check_status.tsv >> /test_output/check_status.tsv
|
||||
cat /test_output/complete-test/test_results.tsv >> /test_output/test_results.tsv
|
||||
tar -zcvf complete-check.tar.gz complete-test 1>/dev/null
|
||||
fi
|
||||
}
|
||||
|
||||
|
@ -19,10 +19,14 @@ dpkg -i package_folder/clickhouse-common-static-dbg_*.deb
|
||||
dpkg -i package_folder/clickhouse-server_*.deb
|
||||
dpkg -i package_folder/clickhouse-client_*.deb
|
||||
|
||||
echo "$BUGFIX_VALIDATE_CHECK"
|
||||
|
||||
# Check that the tools are available under short names
|
||||
ch --query "SELECT 1" || exit 1
|
||||
chl --query "SELECT 1" || exit 1
|
||||
chc --version || exit 1
|
||||
if [[ -z "$BUGFIX_VALIDATE_CHECK" ]]; then
|
||||
ch --query "SELECT 1" || exit 1
|
||||
chl --query "SELECT 1" || exit 1
|
||||
chc --version || exit 1
|
||||
fi
|
||||
|
||||
ln -s /usr/share/clickhouse-test/clickhouse-test /usr/bin/clickhouse-test
|
||||
|
||||
@ -46,6 +50,16 @@ fi
|
||||
|
||||
config_logs_export_cluster /etc/clickhouse-server/config.d/system_logs_export.yaml
|
||||
|
||||
if [[ -n "$BUGFIX_VALIDATE_CHECK" ]] && [[ "$BUGFIX_VALIDATE_CHECK" -eq 1 ]]; then
|
||||
sudo cat /etc/clickhouse-server/config.d/zookeeper.xml \
|
||||
| sed "/<use_compression>1<\/use_compression>/d" \
|
||||
> /etc/clickhouse-server/config.d/zookeeper.xml.tmp
|
||||
sudo mv /etc/clickhouse-server/config.d/zookeeper.xml.tmp /etc/clickhouse-server/config.d/zookeeper.xml
|
||||
|
||||
# it contains some new settings, but we can safely remove it
|
||||
rm /etc/clickhouse-server/users.d/s3_cache_new.xml
|
||||
fi
|
||||
|
||||
# For flaky check we also enable thread fuzzer
|
||||
if [ "$NUM_TRIES" -gt "1" ]; then
|
||||
export THREAD_FUZZER_CPU_TIME_PERIOD_US=1000
|
||||
|
@ -191,6 +191,12 @@ sudo cat /etc/clickhouse-server/config.d/logger_trace.xml \
|
||||
> /etc/clickhouse-server/config.d/logger_trace.xml.tmp
|
||||
mv /etc/clickhouse-server/config.d/logger_trace.xml.tmp /etc/clickhouse-server/config.d/logger_trace.xml
|
||||
|
||||
# Randomize async_load_databases
|
||||
if [ $(( $(date +%-d) % 2 )) -eq 1 ]; then
|
||||
sudo echo "<clickhouse><async_load_databases>true</async_load_databases></clickhouse>" \
|
||||
> /etc/clickhouse-server/config.d/enable_async_load_databases.xml
|
||||
fi
|
||||
|
||||
start
|
||||
|
||||
stress --hung-check --drop-databases --output-folder test_output --skip-func-tests "$SKIP_TESTS_OPTION" --global-time-limit 1200 \
|
||||
|
@ -79,6 +79,7 @@ rm /etc/clickhouse-server/config.d/merge_tree.xml
|
||||
rm /etc/clickhouse-server/config.d/enable_wait_for_shutdown_replicated_tables.xml
|
||||
rm /etc/clickhouse-server/users.d/nonconst_timezone.xml
|
||||
rm /etc/clickhouse-server/users.d/s3_cache_new.xml
|
||||
rm /etc/clickhouse-server/users.d/replicated_ddl_entry.xml
|
||||
|
||||
start
|
||||
stop
|
||||
@ -116,6 +117,7 @@ rm /etc/clickhouse-server/config.d/merge_tree.xml
|
||||
rm /etc/clickhouse-server/config.d/enable_wait_for_shutdown_replicated_tables.xml
|
||||
rm /etc/clickhouse-server/users.d/nonconst_timezone.xml
|
||||
rm /etc/clickhouse-server/users.d/s3_cache_new.xml
|
||||
rm /etc/clickhouse-server/users.d/replicated_ddl_entry.xml
|
||||
|
||||
start
|
||||
|
||||
|
@ -15,6 +15,27 @@ You can monitor:
|
||||
- Utilization of hardware resources.
|
||||
- ClickHouse server metrics.
|
||||
|
||||
## Built-in observability dashboard
|
||||
|
||||
<img width="400" alt="Screenshot 2023-11-12 at 6 08 58 PM" src="https://github.com/ClickHouse/ClickHouse/assets/3936029/2bd10011-4a47-4b94-b836-d44557c7fdc1" />
|
||||
|
||||
ClickHouse comes with a built-in observability dashboard feature which can be accessed by `$HOST:$PORT/dashboard` (requires user and password) that shows the following metrics:
|
||||
- Queries/second
|
||||
- CPU usage (cores)
|
||||
- Queries running
|
||||
- Merges running
|
||||
- Selected bytes/second
|
||||
- IO wait
|
||||
- CPU wait
|
||||
- OS CPU Usage (userspace)
|
||||
- OS CPU Usage (kernel)
|
||||
- Read from disk
|
||||
- Read from filesystem
|
||||
- Memory (tracked)
|
||||
- Inserted rows/second
|
||||
- Total MergeTree parts
|
||||
- Max parts for partition
|
||||
|
||||
## Resource Utilization {#resource-utilization}
|
||||
|
||||
ClickHouse also monitors the state of hardware resources by itself such as:
|
||||
|
@ -1646,6 +1646,45 @@ Default value: `0.5`.
|
||||
|
||||
|
||||
|
||||
## async_load_databases {#async_load_databases}
|
||||
|
||||
Asynchronous loading of databases and tables.
|
||||
|
||||
If `true` all non-system databases with `Ordinary`, `Atomic` and `Replicated` engine will be loaded asynchronously after the ClickHouse server start up. See `system.async_loader` table, `tables_loader_background_pool_size` and `tables_loader_foreground_pool_size` server settings. Any query that tries to access a table, that is not yet loaded, will wait for exactly this table to be started up. If load job fails, query will rethrow an error (instead of shutting down the whole server in case of `async_load_databases = false`). The table that is waited for by at least one query will be loaded with higher priority. DDL queries on a database will wait for exactly that database to be started up.
|
||||
|
||||
If `false`, all databases are loaded when the server starts.
|
||||
|
||||
The default is `false`.
|
||||
|
||||
**Example**
|
||||
|
||||
``` xml
|
||||
<async_load_databases>true</async_load_databases>
|
||||
```
|
||||
|
||||
## tables_loader_foreground_pool_size {#tables_loader_foreground_pool_size}
|
||||
|
||||
Sets the number of threads performing load jobs in the foreground pool. The foreground pool is used for loading tables synchronously before the server starts listening on a port and for loading tables that are waited for. The foreground pool has higher priority than the background pool, meaning that no job starts in the background pool while there are jobs running in the foreground pool.
|
||||
|
||||
Possible values:
|
||||
|
||||
- Any positive integer.
|
||||
- Zero. Use all available CPUs.
|
||||
|
||||
Default value: 0.
|
||||
|
||||
|
||||
## tables_loader_background_pool_size {#tables_loader_background_pool_size}
|
||||
|
||||
Sets the number of threads performing asynchronous load jobs in the background pool. The background pool is used for loading tables asynchronously after the server starts, in case there are no queries waiting for the table. It could be beneficial to keep a low number of threads in the background pool if there are a lot of tables, as this reserves CPU resources for concurrent query execution.
|
||||
|
||||
Possible values:
|
||||
|
||||
- Any positive integer.
|
||||
- Zero. Use all available CPUs.
|
||||
|
||||
Default value: 0.
|
||||
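A sketch of how the effect of these pool sizes can be observed at runtime, assuming the `TablesLoader*` current metrics (documented elsewhere in this changeset) are exposed via `system.metrics`:

```sql
SELECT metric, value
FROM system.metrics
WHERE metric LIKE 'TablesLoader%';
```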
|
||||
|
||||
## merge_tree {#merge_tree}
|
||||
|
||||
|
54
docs/en/operations/system-tables/async_loader.md
Normal file
54
docs/en/operations/system-tables/async_loader.md
Normal file
@ -0,0 +1,54 @@
|
||||
---
|
||||
slug: /en/operations/system-tables/async_loader
|
||||
---
|
||||
# async_loader
|
||||
|
||||
Contains information about and the status of recent asynchronous jobs (e.g. for table loading). The table contains a row for every job. There is a tool, `utils/async_loader_graph`, for visualizing information from this table.
|
||||
|
||||
Example:
|
||||
|
||||
``` sql
|
||||
SELECT *
|
||||
FROM system.async_loader
|
||||
FORMAT Vertical
|
||||
LIMIT 1
|
||||
```
|
||||
|
||||
``` text
|
||||
```
|
||||
|
||||
Columns:
|
||||
|
||||
- `job` (`String`) - Job name (may be not unique).
|
||||
- `job_id` (`UInt64`) - Unique ID of the job.
|
||||
- `dependencies` (`Array(UInt64)`) - List of IDs of jobs that should be done before this job.
|
||||
- `dependencies_left` (`UInt64`) - Current number of dependencies left to be done.
|
||||
- `status` (`Enum`) - Current load status of a job:
|
||||
`PENDING`: Load job is not started yet.
|
||||
`OK`: Load job executed and was successful.
|
||||
`FAILED`: Load job executed and failed.
|
||||
`CANCELED`: Load job is not going to be executed due to removal or dependency failure.
|
||||
|
||||
A pending job might be in one of the following states:
|
||||
- `is_executing` (`UInt8`) - The job is currently being executed by a worker.
|
||||
- `is_blocked` (`UInt8`) - The job waits for its dependencies to be done.
|
||||
- `is_ready` (`UInt8`) - The job is ready to be executed and waits for a worker.
|
||||
- `elapsed` (`Float64`) - Seconds elapsed since start of execution. Zero if job is not started. Total execution time if job finished.
|
||||
|
||||
Every job has a pool associated with it and is started in this pool. Each pool has a constant priority and a mutable maximum number of workers. Higher-priority (lower `priority` value) jobs are run first. No job with a lower priority is started while there is at least one higher-priority job ready or executing. Job priority can be elevated (but cannot be lowered) by prioritizing it. For example, jobs for a table's loading and startup will be prioritized if an incoming query requires this table. It is possible to prioritize a job during its execution, but the job is not moved from its `execution_pool` to the newly assigned `pool`. The job uses `pool` for creating new jobs to avoid priority inversion. Already started jobs are not preempted by higher-priority jobs and always run to completion after start.
|
||||
- `pool_id` (`UInt64`) - ID of a pool currently assigned to the job.
|
||||
- `pool` (`String`) - Name of `pool_id` pool.
|
||||
- `priority` (`Int64`) - Priority of `pool_id` pool.
|
||||
- `execution_pool_id` (`UInt64`) - ID of a pool the job is executed in. Equals initially assigned pool before execution starts.
|
||||
- `execution_pool` (`String`) - Name of `execution_pool_id` pool.
|
||||
- `execution_priority` (`Int64`) - Priority of `execution_pool_id` pool.
|
||||
|
||||
- `ready_seqno` (`Nullable(UInt64)`) - Not null for ready jobs. A worker pulls the next job to be executed from the ready queue of its pool. If there are multiple ready jobs, the job with the lowest value of `ready_seqno` is picked.
|
||||
- `waiters` (`UInt64`) - The number of threads waiting on this job.
|
||||
- `exception` (`Nullable(String)`) - Not null for failed and canceled jobs. Holds error message raised during query execution or error leading to cancelling of this job along with dependency failure chain of job names.
|
||||
|
||||
Time instants during job lifetime:
|
||||
- `schedule_time` (`DateTime64`) - Time when the job was created and scheduled to be executed (usually together with all its dependencies).
|
||||
- `enqueue_time` (`Nullable(DateTime64)`) - Time when the job became ready and was enqueued into the ready queue of its pool. Null if the job is not ready yet.
|
||||
- `start_time` (`Nullable(DateTime64)`) - Time when a worker dequeued the job from the ready queue and started its execution. Null if the job is not started yet.
|
||||
- `finish_time` (`Nullable(DateTime64)`) - Time when the job execution finished. Null if the job is not finished yet.
|
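A minimal query sketch using only the columns documented above, e.g. to see how many jobs sit in each pool and status:

```sql
SELECT pool, status, count() AS jobs
FROM system.async_loader
GROUP BY pool, status
ORDER BY pool, status;
```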
@ -13,6 +13,7 @@ ClickHouse does not delete data from the table automatically. See [Introduction]
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — The date when the async insert happened.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the async insert finished execution.
|
||||
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — The date and time when the async insert finished execution with microseconds precision.
|
||||
@ -42,6 +43,7 @@ SELECT * FROM system.asynchronous_insert_log LIMIT 1 \G;
|
||||
Result:
|
||||
|
||||
``` text
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2023-06-08
|
||||
event_time: 2023-06-08 10:08:53
|
||||
event_time_microseconds: 2023-06-08 10:08:53.199516
|
||||
|
@ -7,6 +7,7 @@ Contains the historical values for `system.asynchronous_metrics`, which are save
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Event date.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Event time.
|
||||
- `name` ([String](../../sql-reference/data-types/string.md)) — Metric name.
|
||||
@ -15,22 +16,33 @@ Columns:
|
||||
**Example**
|
||||
|
||||
``` sql
|
||||
SELECT * FROM system.asynchronous_metric_log LIMIT 10
|
||||
SELECT * FROM system.asynchronous_metric_log LIMIT 3 \G
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─event_date─┬──────────event_time─┬─name─────────────────────────────────────┬─────value─┐
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ CPUFrequencyMHz_0 │ 2120.9 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ jemalloc.arenas.all.pmuzzy │ 743 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ jemalloc.arenas.all.pdirty │ 26288 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ jemalloc.background_thread.run_intervals │ 0 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ jemalloc.background_thread.num_runs │ 0 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ jemalloc.retained │ 60694528 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ jemalloc.mapped │ 303161344 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ jemalloc.resident │ 260931584 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ jemalloc.metadata │ 12079488 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ jemalloc.allocated │ 133756128 │
|
||||
└────────────┴─────────────────────┴──────────────────────────────────────────┴───────────┘
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2023-11-14
|
||||
event_time: 2023-11-14 14:39:07
|
||||
metric: AsynchronousHeavyMetricsCalculationTimeSpent
|
||||
value: 0.001
|
||||
|
||||
Row 2:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2023-11-14
|
||||
event_time: 2023-11-14 14:39:08
|
||||
metric: AsynchronousHeavyMetricsCalculationTimeSpent
|
||||
value: 0
|
||||
|
||||
Row 3:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2023-11-14
|
||||
event_time: 2023-11-14 14:39:09
|
||||
metric: AsynchronousHeavyMetricsCalculationTimeSpent
|
||||
value: 0
|
||||
```
|
||||
|
||||
**See Also**
|
||||
|
@ -7,6 +7,7 @@ Contains logging entries with the information about `BACKUP` and `RESTORE` opera
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Date of the entry.
|
||||
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Time of the entry with microseconds precision.
|
||||
- `id` ([String](../../sql-reference/data-types/string.md)) — Identifier of the backup or restore operation.
|
||||
@ -45,6 +46,7 @@ SELECT * FROM system.backup_log WHERE id = 'e5b74ecb-f6f1-426a-80be-872f90043885
|
||||
```response
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2023-08-19
|
||||
event_time_microseconds: 2023-08-19 11:05:21.998566
|
||||
id: e5b74ecb-f6f1-426a-80be-872f90043885
|
||||
@ -63,6 +65,7 @@ bytes_read: 0
|
||||
|
||||
Row 2:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2023-08-19
|
||||
event_time_microseconds: 2023-08-19 11:08:56.916192
|
||||
id: e5b74ecb-f6f1-426a-80be-872f90043885
|
||||
@ -93,6 +96,7 @@ SELECT * FROM system.backup_log WHERE id = 'cdf1f731-52ef-42da-bc65-2e1bfcd4ce90
|
||||
```response
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2023-08-19
|
||||
event_time_microseconds: 2023-08-19 11:09:19.718077
|
||||
id: cdf1f731-52ef-42da-bc65-2e1bfcd4ce90
|
||||
@ -111,6 +115,7 @@ bytes_read: 0
|
||||
|
||||
Row 2:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2023-08-19
|
||||
event_time_microseconds: 2023-08-19 11:09:29.334234
|
||||
id: cdf1f731-52ef-42da-bc65-2e1bfcd4ce90
|
||||
|
@ -7,6 +7,7 @@ Contains information about stack traces for fatal errors. The table does not exi
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([DateTime](../../sql-reference/data-types/datetime.md)) — Date of the event.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Time of the event.
|
||||
- `timestamp_ns` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Timestamp of the event with nanoseconds.
|
||||
@ -32,6 +33,7 @@ Result (not full):
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2020-10-14
|
||||
event_time: 2020-10-14 15:47:40
|
||||
timestamp_ns: 1602679660271312710
|
||||
|
@ -6,6 +6,7 @@ slug: /en/operations/system-tables/metric_log
|
||||
Contains history of metrics values from tables `system.metrics` and `system.events`, periodically flushed to disk.
|
||||
|
||||
Columns:
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Event date.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Event time.
|
||||
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Event time with microseconds resolution.
|
||||
@ -19,6 +20,7 @@ SELECT * FROM system.metric_log LIMIT 1 FORMAT Vertical;
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2020-09-05
|
||||
event_time: 2020-09-05 16:22:33
|
||||
event_time_microseconds: 2020-09-05 16:22:33.196807
|
||||
|
@ -45,6 +45,22 @@ Number of threads in the Aggregator thread pool.
|
||||
|
||||
Number of threads in the Aggregator thread pool running a task.
|
||||
|
||||
### TablesLoaderForegroundThreads
|
||||
|
||||
Number of threads in the async loader foreground thread pool.
|
||||
|
||||
### TablesLoaderForegroundThreadsActive
|
||||
|
||||
Number of threads in the async loader foreground thread pool running a task.
|
||||
|
||||
### TablesLoaderBackgroundThreads
|
||||
|
||||
Number of threads in the async loader background thread pool.
|
||||
|
||||
### TablesLoaderBackgroundThreadsActive
|
||||
|
||||
Number of threads in the async loader background thread pool running a task.
|
||||
|
||||
### AsyncInsertCacheSize
|
||||
|
||||
Number of async insert hash id in cache
|
||||
@ -197,14 +213,6 @@ Number of threads in the DatabaseOnDisk thread pool.
|
||||
|
||||
Number of threads in the DatabaseOnDisk thread pool running a task.
|
||||
|
||||
### DatabaseOrdinaryThreads
|
||||
|
||||
Number of threads in the Ordinary database thread pool.
|
||||
|
||||
### DatabaseOrdinaryThreadsActive
|
||||
|
||||
Number of threads in the Ordinary database thread pool running a task.
|
||||
|
||||
### DelayedInserts
|
||||
|
||||
Number of INSERT queries that are throttled due to high number of active data parts for partition in a MergeTree table.
|
||||
@ -625,14 +633,6 @@ Number of connections that are sending data for external tables to remote server
|
||||
|
||||
Number of connections that are sending data for scalars to remote servers.
|
||||
|
||||
### StartupSystemTablesThreads
|
||||
|
||||
Number of threads in the StartupSystemTables thread pool.
|
||||
|
||||
### StartupSystemTablesThreadsActive
|
||||
|
||||
Number of threads in the StartupSystemTables thread pool running a task.
|
||||
|
||||
### StorageBufferBytes
|
||||
|
||||
Number of bytes in buffers of Buffer tables
|
||||
@ -677,14 +677,6 @@ Number of threads in the system.replicas thread pool running a task.
|
||||
|
||||
Number of connections to TCP server (clients with native interface), also included server-server distributed query connections
|
||||
|
||||
### TablesLoaderThreads
|
||||
|
||||
Number of threads in the tables loader thread pool.
|
||||
|
||||
### TablesLoaderThreadsActive
|
||||
|
||||
Number of threads in the tables loader thread pool running a task.
|
||||
|
||||
### TablesToDropQueueSize
|
||||
|
||||
Number of dropped tables that are waiting for background data removal.
|
||||
|
@ -8,28 +8,19 @@ Contains information about [trace spans](https://opentracing.io/docs/overview/sp
|
||||
Columns:
|
||||
|
||||
- `trace_id` ([UUID](../../sql-reference/data-types/uuid.md)) — ID of the trace for executed query.
|
||||
|
||||
- `span_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — ID of the `trace span`.
|
||||
|
||||
- `parent_span_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — ID of the parent `trace span`.
|
||||
|
||||
- `operation_name` ([String](../../sql-reference/data-types/string.md)) — The name of the operation.
|
||||
|
||||
- `kind` ([Enum8](../../sql-reference/data-types/enum.md)) — The [SpanKind](https://opentelemetry.io/docs/reference/specification/trace/api/#spankind) of the span.
|
||||
- `INTERNAL` — Indicates that the span represents an internal operation within an application.
|
||||
- `SERVER` — Indicates that the span covers server-side handling of a synchronous RPC or other remote request.
|
||||
- `CLIENT` — Indicates that the span describes a request to some remote service.
|
||||
- `PRODUCER` — Indicates that the span describes the initiators of an asynchronous request. This parent span will often end before the corresponding child CONSUMER span, possibly even before the child span starts.
|
||||
- `CONSUMER` - Indicates that the span describes a child of an asynchronous PRODUCER request.
|
||||
|
||||
- `start_time_us` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The start time of the `trace span` (in microseconds).
|
||||
|
||||
- `finish_time_us` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The finish time of the `trace span` (in microseconds).
|
||||
|
||||
- `finish_date` ([Date](../../sql-reference/data-types/date.md)) — The finish date of the `trace span`.
|
||||
|
||||
- `attribute.names` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — [Attribute](https://opentelemetry.io/docs/go/instrumentation/#attributes) names depending on the `trace span`. They are filled in according to the recommendations in the [OpenTelemetry](https://opentelemetry.io/) standard.
|
||||
|
||||
- `attribute.values` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — Attribute values depending on the `trace span`. They are filled in according to the recommendations in the `OpenTelemetry` standard.
|
||||
|
||||
**Example**
|
||||
|
@ -9,6 +9,7 @@ This table contains information about events that occurred with [data parts](../
|
||||
|
||||
The `system.part_log` table contains the following columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `query_id` ([String](../../sql-reference/data-types/string.md)) — Identifier of the `INSERT` query that created this data part.
|
||||
- `event_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Type of the event that occurred with the data part. Can have one of the following values:
|
||||
- `NewPart` — Inserting of a new data part.
|
||||
@ -56,13 +57,14 @@ SELECT * FROM system.part_log LIMIT 1 FORMAT Vertical;
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
query_id: 983ad9c7-28d5-4ae1-844e-603116b7de31
|
||||
event_type: NewPart
|
||||
merge_reason: NotAMerge
|
||||
merge_algorithm: Undecided
|
||||
event_date: 2021-02-02
|
||||
event_time: 2021-02-02 11:14:28
|
||||
event_time_microseconds: 2021-02-02 11:14:28.861919
|
||||
event_time_microseconds: 2021-02-02 11:14:28.861919
|
||||
duration_ms: 35
|
||||
database: default
|
||||
table: log_mt_2
|
||||
|
@ -4,6 +4,7 @@ This table contains profiling on processors level (that you can find in [`EXPLAI
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — The date when the event happened.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the event happened.
|
||||
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — The date and time with microseconds precision when the event happened.
|
||||
|
@ -34,6 +34,7 @@ You can use the [log_formatted_queries](../../operations/settings/settings.md#se
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `type` ([Enum8](../../sql-reference/data-types/enum.md)) — Type of an event that occurred when executing the query. Values:
|
||||
- `'QueryStart' = 1` — Successful start of query execution.
|
||||
- `'QueryFinish' = 2` — Successful end of query execution.
|
||||
@ -127,6 +128,7 @@ SELECT * FROM system.query_log WHERE type = 'QueryFinish' ORDER BY query_start_t
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
type: QueryFinish
|
||||
event_date: 2021-11-03
|
||||
event_time: 2021-11-03 16:13:54
|
||||
@ -167,7 +169,7 @@ initial_query_start_time: 2021-11-03 16:13:54
|
||||
initial_query_start_time_microseconds: 2021-11-03 16:13:54.952325
|
||||
interface: 1
|
||||
os_user: sevirov
|
||||
client_hostname: clickhouse.ru-central1.internal
|
||||
client_hostname: clickhouse.eu-central1.internal
|
||||
client_name: ClickHouse
|
||||
client_revision: 54449
|
||||
client_version_major: 21
|
||||
|
@ -18,6 +18,7 @@ You can use the [log_queries_probability](../../operations/settings/settings.md#
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — The date when the thread has finished execution of the query.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the thread has finished execution of the query.
|
||||
- `event_time_microsecinds` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the thread has finished execution of the query with microseconds precision.
|
||||
@ -74,6 +75,7 @@ Columns:
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2020-09-11
|
||||
event_time: 2020-09-11 10:08:17
|
||||
event_time_microseconds: 2020-09-11 10:08:17.134042
|
||||
|
@ -18,6 +18,7 @@ You can use the [log_queries_probability](../../operations/settings/settings.md#
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — The date when the last event of the view happened.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the view finished execution.
|
||||
- `event_time_microseconds` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the view finished execution with microseconds precision.
|
||||
@ -59,6 +60,7 @@ Result:
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2021-06-22
|
||||
event_time: 2021-06-22 13:23:07
|
||||
event_time_microseconds: 2021-06-22 13:23:07.738221
|
||||
|
@ -7,6 +7,7 @@ Contains information about all successful and failed login and logout events.
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `type` ([Enum8](../../sql-reference/data-types/enum.md)) — Login/logout result. Possible values:
|
||||
- `LoginFailure` — Login error.
|
||||
- `LoginSuccess` — Successful login.
|
||||
@ -57,6 +58,7 @@ Result:
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
type: LoginSuccess
|
||||
auth_id: 45e6bd83-b4aa-4a23-85e6-bd83b4aa1a23
|
||||
session_id:
|
||||
|
@ -7,6 +7,7 @@ Contains logging entries. The logging level which goes to this table can be limi
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` (Date) — Date of the entry.
|
||||
- `event_time` (DateTime) — Time of the entry.
|
||||
- `event_time_microseconds` (DateTime) — Time of the entry with microseconds precision.
|
||||
@ -39,6 +40,7 @@ SELECT * FROM system.text_log LIMIT 1 \G
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2020-09-10
|
||||
event_time: 2020-09-10 11:23:07
|
||||
event_time_microseconds: 2020-09-10 11:23:07.871397
|
||||
|
@ -12,37 +12,27 @@ To analyze logs, use the `addressToLine`, `addressToLineWithInlines`, `addressTo
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Date of sampling moment.
|
||||
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Timestamp of the sampling moment.
|
||||
|
||||
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Timestamp of the sampling moment with microseconds precision.
|
||||
|
||||
- `timestamp_ns` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Timestamp of the sampling moment in nanoseconds.
|
||||
|
||||
- `revision` ([UInt32](../../sql-reference/data-types/int-uint.md)) — ClickHouse server build revision.
|
||||
|
||||
When connecting to the server by `clickhouse-client`, you see a string similar to `Connected to ClickHouse server version 19.18.1.`. This field contains the `revision`, but not the `version` of the server.
|
||||
|
||||
- `trace_type` ([Enum8](../../sql-reference/data-types/enum.md)) — Trace type:
|
||||
|
||||
- `Real` represents collecting stack traces by wall-clock time.
|
||||
- `CPU` represents collecting stack traces by CPU time.
|
||||
- `Memory` represents collecting allocations and deallocations when memory allocation exceeds the subsequent watermark.
|
||||
- `MemorySample` represents collecting random allocations and deallocations.
|
||||
- `MemoryPeak` represents collecting updates of peak memory usage.
|
||||
- `ProfileEvent` represents collecting of increments of profile events.
|
||||
|
||||
- `thread_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Thread identifier.
|
||||
|
||||
- `query_id` ([String](../../sql-reference/data-types/string.md)) — Query identifier that can be used to get details about a query that was running from the [query_log](#system_tables-query_log) system table.
|
||||
|
||||
- `trace` ([Array(UInt64)](../../sql-reference/data-types/array.md)) — Stack trace at the moment of sampling. Each element is a virtual memory address inside ClickHouse server process.
|
||||
|
||||
- `size` ([Int64](../../sql-reference/data-types/int-uint.md)) - For trace types `Memory`, `MemorySample` or `MemoryPeak` is the amount of memory allocated, for other trace types is 0.
|
||||
|
||||
- `event` ([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md)) - For trace type `ProfileEvent` is the name of updated profile event, for other trace types is an empty string.
|
||||
|
||||
- `increment` ([UInt64](../../sql-reference/data-types/int-uint.md)) - For trace type `ProfileEvent` is the amount of increment of profile event, for other trace types is 0.
|
||||
|
||||
**Example**
|
||||
@ -54,6 +44,7 @@ SELECT * FROM system.trace_log LIMIT 1 \G
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2020-09-10
|
||||
event_time: 2020-09-10 11:23:09
|
||||
event_time_microseconds: 2020-09-10 11:23:09.872924
|
||||
|
@ -9,6 +9,7 @@ For requests, only columns with request parameters are filled in, and the remain
|
||||
|
||||
Columns with request parameters:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `type` ([Enum](../../sql-reference/data-types/enum.md)) — Event type in the ZooKeeper client. Can have one of the following values:
|
||||
- `Request` — The request has been sent.
|
||||
- `Response` — The response was received.
|
||||
@ -63,6 +64,7 @@ Result:
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
type: Request
|
||||
event_date: 2021-08-09
|
||||
event_time: 2021-08-09 21:38:30.291792
|
||||
|
@ -487,24 +487,23 @@ Where:
|
||||
|
||||
## uniqUpTo(N)(x)
|
||||
|
||||
Calculates the number of different argument values if it is less than or equal to N. If the number of different argument values is greater than N, it returns N + 1.
|
||||
Calculates the number of different values of the argument up to a specified limit, `N`. If the number of different argument values is greater than `N`, this function returns `N` + 1, otherwise it calculates the exact value.
|
||||
|
||||
Recommended for use with small Ns, up to 10. The maximum value of N is 100.
|
||||
Recommended for use with small `N`s, up to 10. The maximum value of `N` is 100.
|
||||
|
||||
For the state of an aggregate function, it uses the amount of memory equal to 1 + N \* the size of one value of bytes.
|
||||
For strings, it stores a non-cryptographic hash of 8 bytes. That is, the calculation is approximated for strings.
|
||||
For the state of an aggregate function, this function uses an amount of memory equal to 1 + `N` \* the size of one value, in bytes.
|
||||
When dealing with strings, this function stores a non-cryptographic hash of 8 bytes; the calculation is approximated for strings.
|
||||
|
||||
The function also works for several arguments.
|
||||
For example, if you had a table that logs every search query made by users on your website. Each row in the table represents a single search query, with columns for the user ID, the search query, and the timestamp of the query. You can use `uniqUpTo` to generate a report that shows only the keywords that produced at least 5 unique users.
|
||||
|
||||
It works as fast as possible, except for cases when a large N value is used and the number of unique values is slightly less than N.
|
||||
|
||||
Usage example:
|
||||
|
||||
``` text
|
||||
Problem: Generate a report that shows only keywords that produced at least 5 unique users.
|
||||
Solution: Write in the GROUP BY query SearchPhrase HAVING uniqUpTo(4)(UserID) >= 5
|
||||
```sql
|
||||
SELECT SearchPhrase
|
||||
FROM SearchLog
|
||||
GROUP BY SearchPhrase
|
||||
HAVING uniqUpTo(4)(UserID) >= 5
|
||||
```
|
||||
|
||||
`uniqUpTo(4)(UserID)` calculates the number of unique `UserID` values for each `SearchPhrase`, but it only counts up to 4 unique values. If there are more than 4 unique `UserID` values for a `SearchPhrase`, the function returns 5 (4 + 1). The `HAVING` clause then filters out the `SearchPhrase` values for which the number of unique `UserID` values is less than 5. This will give you a list of search keywords that were used by at least 5 unique users.
|
||||
|
||||
## sumMapFiltered(keys_to_keep)(keys, values)
|
||||
|
||||
|
@ -1,48 +0,0 @@
|
||||
---
|
||||
toc_priority: 112
|
||||
---
|
||||
|
||||
# groupArraySorted {#groupArraySorted}
|
||||
|
||||
Returns an array with the first N items in ascending order.
|
||||
|
||||
``` sql
|
||||
groupArraySorted(N)(column)
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `N` – The number of elements to return.
|
||||
|
||||
If the parameter is omitted, the default value is the size of the input.
|
||||
|
||||
- `column` – The value (Integer, String, Float and other Generic types).
|
||||
|
||||
**Example**
|
||||
|
||||
Gets the first 10 numbers:
|
||||
|
||||
``` sql
|
||||
SELECT groupArraySorted(10)(number) FROM numbers(100)
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─groupArraySorted(10)(number)─┐
|
||||
│ [0,1,2,3,4,5,6,7,8,9] │
|
||||
└──────────────────────────────┘
|
||||
```
|
||||
|
||||
|
||||
Gets the String representations of all numbers in the column:
|
||||
|
||||
``` sql
|
||||
SELECT groupArraySorted(str) FROM (SELECT toString(number) as str FROM numbers(5));
|
||||
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─groupArraySorted(str)────────┐
|
||||
│ ['0','1','2','3','4'] │
|
||||
└──────────────────────────────┘
|
||||
```
|
||||
|
@ -54,7 +54,6 @@ ClickHouse-specific aggregate functions:
|
||||
- [groupArrayMovingAvg](/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingavg.md)
|
||||
- [groupArrayMovingSum](/docs/en/sql-reference/aggregate-functions/reference/grouparraymovingsum.md)
|
||||
- [groupArraySample](./grouparraysample.md)
|
||||
- [groupArraySorted](/docs/en/sql-reference/aggregate-functions/reference/grouparraysorted.md)
|
||||
- [groupBitAnd](/docs/en/sql-reference/aggregate-functions/reference/groupbitand.md)
|
||||
- [groupBitOr](/docs/en/sql-reference/aggregate-functions/reference/groupbitor.md)
|
||||
- [groupBitXor](/docs/en/sql-reference/aggregate-functions/reference/groupbitxor.md)
|
||||
|
@ -67,45 +67,7 @@ WHERE macro = 'test';
|
||||
│ test │ Value │
|
||||
└───────┴──────────────┘
|
||||
```
|
||||
|
||||
## getClientHTTPHeader
|
||||
Returns the value of the specified HTTP header. If there is no such header or the request method is not HTTP, it will throw an exception.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
getClientHTTPHeader(name);
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `name` — HTTP header name. [String](../../sql-reference/data-types/string.md#string).
|
||||
|
||||
**Returned value**
|
||||
|
||||
Value of the specified header.
|
||||
Type: [String](../../sql-reference/data-types/string.md#string).
|
||||
|
||||
|
||||
When we use `clickhouse-client` to execute this function, we'll always get an empty string, because the client doesn't use the HTTP protocol.
|
||||
```sql
|
||||
SELECT getClientHTTPHeader('test')
|
||||
```
|
||||
Result:
|
||||
|
||||
```text
|
||||
┌─getClientHTTPHeader('test')─┐
|
||||
│ │
|
||||
└─────────────────────────────┘
|
||||
```
|
||||
Try it with an HTTP request:
|
||||
```shell
|
||||
echo "select getClientHTTPHeader('X-Clickhouse-User')" | curl -H 'X-ClickHouse-User: default' -H 'X-ClickHouse-Key: ' 'http://localhost:8123/' -d @-
|
||||
|
||||
#result
|
||||
default
|
||||
```
|
||||
|
||||
## FQDN
|
||||
|
||||
Returns the fully qualified domain name of the ClickHouse server.
|
||||
|
@ -1,4 +1,4 @@
|
||||
--
|
||||
---
|
||||
slug: /en/sql-reference/table-functions/file
|
||||
sidebar_position: 60
|
||||
sidebar_label: file
|
||||
|
@ -23,6 +23,7 @@
|
||||
#include <Common/scope_guard_safe.h>
|
||||
#include <Interpreters/Session.h>
|
||||
#include <Access/AccessControl.h>
|
||||
#include <Common/PoolId.h>
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/Macros.h>
|
||||
#include <Common/Config/ConfigProcessor.h>
|
||||
@ -742,16 +743,16 @@ void LocalServer::processConfig()
|
||||
status.emplace(fs::path(path) / "status", StatusFile::write_full_info);
|
||||
|
||||
LOG_DEBUG(log, "Loading metadata from {}", path);
|
||||
loadMetadataSystem(global_context);
|
||||
auto startup_system_tasks = loadMetadataSystem(global_context);
|
||||
attachSystemTablesLocal(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::SYSTEM_DATABASE));
|
||||
attachInformationSchema(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::INFORMATION_SCHEMA));
|
||||
attachInformationSchema(global_context, *createMemoryDatabaseIfNotExists(global_context, DatabaseCatalog::INFORMATION_SCHEMA_UPPERCASE));
|
||||
startupSystemTables();
|
||||
waitLoad(TablesLoaderForegroundPoolId, startup_system_tasks);
|
||||
|
||||
if (!config().has("only-system-tables"))
|
||||
{
|
||||
DatabaseCatalog::instance().createBackgroundTasks();
|
||||
loadMetadata(global_context);
|
||||
waitLoad(loadMetadata(global_context));
|
||||
DatabaseCatalog::instance().startupBackgroundTasks();
|
||||
}
|
||||
|
||||
|
@ -20,6 +20,7 @@
|
||||
#include <base/coverage.h>
|
||||
#include <base/getFQDNOrHostName.h>
|
||||
#include <base/safeExit.h>
|
||||
#include <Common/PoolId.h>
|
||||
#include <Common/MemoryTracker.h>
|
||||
#include <Common/ClickHouseRevision.h>
|
||||
#include <Common/DNSResolver.h>
|
||||
@ -1279,8 +1280,6 @@ try
|
||||
global_context->setHTTPHeaderFilter(*config);
|
||||
|
||||
global_context->setMaxTableSizeToDrop(server_settings_.max_table_size_to_drop);
|
||||
global_context->setClientHTTPHeaderForbiddenHeaders(server_settings_.get_client_http_header_forbidden_headers);
|
||||
global_context->setAllowGetHTTPHeaderFunction(server_settings_.allow_get_client_http_header);
|
||||
global_context->setMaxPartitionSizeToDrop(server_settings_.max_partition_size_to_drop);
|
||||
|
||||
ConcurrencyControl::SlotCount concurrent_threads_soft_limit = ConcurrencyControl::Unlimited;
|
||||
@ -1336,6 +1335,10 @@ try
|
||||
global_context->getMessageBrokerSchedulePool().increaseThreadsCount(server_settings_.background_message_broker_schedule_pool_size);
|
||||
global_context->getDistributedSchedulePool().increaseThreadsCount(server_settings_.background_distributed_schedule_pool_size);
|
||||
|
||||
global_context->getAsyncLoader().setMaxThreads(TablesLoaderForegroundPoolId, server_settings_.tables_loader_foreground_pool_size);
|
||||
global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundLoadPoolId, server_settings_.tables_loader_background_pool_size);
|
||||
global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundStartupPoolId, server_settings_.tables_loader_background_pool_size);
|
||||
|
||||
getIOThreadPool().reloadConfiguration(
|
||||
server_settings.max_io_thread_pool_size,
|
||||
server_settings.max_io_thread_pool_free_size,
|
||||
@ -1676,17 +1679,18 @@ try
|
||||
|
||||
LOG_INFO(log, "Loading metadata from {}", path_str);
|
||||
|
||||
LoadTaskPtrs load_metadata_tasks;
|
||||
try
|
||||
{
|
||||
auto & database_catalog = DatabaseCatalog::instance();
|
||||
/// We load temporary database first, because projections need it.
|
||||
database_catalog.initializeAndLoadTemporaryDatabase();
|
||||
loadMetadataSystem(global_context);
|
||||
maybeConvertSystemDatabase(global_context);
|
||||
auto system_startup_tasks = loadMetadataSystem(global_context);
|
||||
maybeConvertSystemDatabase(global_context, system_startup_tasks);
|
||||
/// This has to be done before the initialization of system logs,
|
||||
/// otherwise there is a race condition between the system database initialization
|
||||
/// and creation of new tables in the database.
|
||||
startupSystemTables();
|
||||
waitLoad(TablesLoaderForegroundPoolId, system_startup_tasks);
|
||||
/// After attaching system databases we can initialize system log.
|
||||
global_context->initializeSystemLogs();
|
||||
global_context->setSystemZooKeeperLogAfterInitializationIfNeeded();
|
||||
@ -1702,9 +1706,10 @@ try
|
||||
/// and so loadMarkedAsDroppedTables() will find it and try to add, and UUID will overlap.
|
||||
database_catalog.loadMarkedAsDroppedTables();
|
||||
database_catalog.createBackgroundTasks();
|
||||
/// Then, load remaining databases
|
||||
loadMetadata(global_context, default_database);
|
||||
convertDatabasesEnginesIfNeed(global_context);
|
||||
/// Then, load remaining databases (some of them maybe be loaded asynchronously)
|
||||
load_metadata_tasks = loadMetadata(global_context, default_database, server_settings.async_load_databases);
|
||||
/// If we need to convert database engines, disable async tables loading
|
||||
convertDatabasesEnginesIfNeed(load_metadata_tasks, global_context);
|
||||
database_catalog.startupBackgroundTasks();
|
||||
/// After loading validate that default database exists
|
||||
database_catalog.assertDatabaseExists(default_database);
|
||||
@ -1716,6 +1721,7 @@ try
|
||||
tryLogCurrentException(log, "Caught exception while loading metadata");
|
||||
throw;
|
||||
}
|
||||
|
||||
LOG_DEBUG(log, "Loaded metadata.");
|
||||
|
||||
/// Init trace collector only after trace_log system table was created
|
||||
@ -1871,9 +1877,14 @@ try
|
||||
throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "distributed_ddl.pool_size should be greater then 0");
|
||||
global_context->setDDLWorker(std::make_unique<DDLWorker>(pool_size, ddl_zookeeper_path, global_context, &config(),
|
||||
"distributed_ddl", "DDLWorker",
|
||||
&CurrentMetrics::MaxDDLEntryID, &CurrentMetrics::MaxPushedDDLEntryID));
|
||||
&CurrentMetrics::MaxDDLEntryID, &CurrentMetrics::MaxPushedDDLEntryID),
|
||||
load_metadata_tasks);
|
||||
}
|
||||
|
||||
/// Do not keep tasks in server, they should be kept inside databases. Used here to make dependent tasks only.
|
||||
load_metadata_tasks.clear();
|
||||
load_metadata_tasks.shrink_to_fit();
|
||||
|
||||
{
|
||||
std::lock_guard lock(servers_lock);
|
||||
for (auto & server : servers)
|
||||
|
@ -364,8 +364,15 @@
<background_schedule_pool_size>128</background_schedule_pool_size>
<background_message_broker_schedule_pool_size>16</background_message_broker_schedule_pool_size>
<background_distributed_schedule_pool_size>16</background_distributed_schedule_pool_size>
<tables_loader_foreground_pool_size>0</tables_loader_foreground_pool_size>
<tables_loader_background_pool_size>0</tables_loader_background_pool_size>
-->

<!-- Enables asynchronous loading of databases and tables to speed up server startup.
     Queries to not yet loaded entities will be blocked until loading is finished.
-->
<!-- <async_load_databases>true</async_load_databases> -->

<!-- On memory constrained environments you may have to set this to a value larger than 1.
-->
<max_server_memory_usage_to_ram_ratio>0.9</max_server_memory_usage_to_ram_ratio>
@ -108,7 +108,7 @@
|
||||
filter: blur(1px);
|
||||
}
|
||||
|
||||
.chart div { position: absolute; }
|
||||
.chart > div { position: absolute; }
|
||||
|
||||
.inputs {
|
||||
height: auto;
|
||||
@ -215,8 +215,6 @@
|
||||
color: var(--text-color);
|
||||
}
|
||||
|
||||
.u-legend th { display: none; }
|
||||
|
||||
.themes {
|
||||
float: right;
|
||||
font-size: 20pt;
|
||||
@ -433,6 +431,16 @@
|
||||
display: none;
|
||||
}
|
||||
|
||||
.u-series {
|
||||
line-height: 0.8;
|
||||
}
|
||||
|
||||
.u-series.footer {
|
||||
font-size: 8px;
|
||||
padding-top: 0;
|
||||
margin-top: 0;
|
||||
}
|
||||
|
||||
/* Source: https://cdn.jsdelivr.net/npm/uplot@1.6.21/dist/uPlot.min.css
|
||||
* It is copy-pasted to lower the number of requests.
|
||||
*/
|
||||
@ -478,7 +486,6 @@
|
||||
* - compress the state for URL's #hash;
|
||||
* - footer with "about" or a link to source code;
|
||||
* - allow to configure a table on a server to save the dashboards;
|
||||
* - multiple lines on chart;
|
||||
* - if a query returned one value, display this value instead of a diagram;
|
||||
* - if a query returned something unusual, display the table;
|
||||
*/
|
||||
@ -520,10 +527,54 @@ let queries = [];
|
||||
/// Query parameters with predefined default values.
|
||||
/// All other parameters will be automatically found in the queries.
|
||||
let params = {
|
||||
"rounding": "60",
|
||||
"seconds": "86400"
|
||||
'rounding': '60',
|
||||
'seconds': '86400'
|
||||
};
|
||||
|
||||
/// Palette generation for charts
|
||||
function generatePalette(baseColor, numColors) {
|
||||
const baseHSL = hexToHsl(baseColor);
|
||||
const hueStep = 360 / numColors;
|
||||
const palette = [];
|
||||
for (let i = 0; i < numColors; i++) {
|
||||
const hue = Math.round((baseHSL.h + i * hueStep) % 360);
|
||||
const color = `hsl(${hue}, ${baseHSL.s}%, ${baseHSL.l}%)`;
|
||||
palette.push(color);
|
||||
}
|
||||
return palette;
|
||||
}
|
||||
|
||||
/// Helper function to convert hex color to HSL
|
||||
function hexToHsl(hex) {
|
||||
hex = hex.replace(/^#/, '');
|
||||
const bigint = parseInt(hex, 16);
|
||||
const r = (bigint >> 16) & 255;
|
||||
const g = (bigint >> 8) & 255;
|
||||
const b = bigint & 255;
|
||||
const r_norm = r / 255;
|
||||
const g_norm = g / 255;
|
||||
const b_norm = b / 255;
|
||||
const max = Math.max(r_norm, g_norm, b_norm);
|
||||
const min = Math.min(r_norm, g_norm, b_norm);
|
||||
const l = (max + min) / 2;
|
||||
let s = 0;
|
||||
if (max !== min) {
|
||||
s = l > 0.5 ? (max - min) / (2 - max - min) : (max - min) / (max + min);
|
||||
}
|
||||
let h = 0;
|
||||
if (max !== min) {
|
||||
if (max === r_norm) {
|
||||
h = (g_norm - b_norm) / (max - min) + (g_norm < b_norm ? 6 : 0);
|
||||
} else if (max === g_norm) {
|
||||
h = (b_norm - r_norm) / (max - min) + 2;
|
||||
} else {
|
||||
h = (r_norm - g_norm) / (max - min) + 4;
|
||||
}
|
||||
}
|
||||
h = Math.round(h * 60);
|
||||
return { h, s: Math.round(s * 100), l: Math.round(l * 100) };
|
||||
}
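// Illustrative sketch (not part of the original file): hexToHsl('#ff8888') evaluates to { h: 0, s: 100, l: 77 },
// so generatePalette('#ff8888', 3) yields ['hsl(0, 100%, 77%)', 'hsl(120, 100%, 77%)', 'hsl(240, 100%, 77%)'].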
|
||||
|
||||
let theme = 'light';
|
||||
|
||||
function setTheme(new_theme) {
|
||||
@ -913,6 +964,8 @@ document.getElementById('mass-editor-textarea').addEventListener('input', e => {
|
||||
|
||||
function legendAsTooltipPlugin({ className, style = { background: "var(--legend-background)" } } = {}) {
|
||||
let legendEl;
|
||||
let showTop = false;
|
||||
const showLimit = 5;
|
||||
|
||||
function init(u, opts) {
|
||||
legendEl = u.root.querySelector(".u-legend");
|
||||
@ -932,13 +985,28 @@ function legendAsTooltipPlugin({ className, style = { background: "var(--legend-
|
||||
...style
|
||||
});
|
||||
|
||||
// hide series color markers
|
||||
const idents = legendEl.querySelectorAll(".u-marker");
|
||||
if (opts.series.length == 2) {
|
||||
const nodes = legendEl.querySelectorAll("th");
|
||||
for (let i = 0; i < nodes.length; i++)
|
||||
nodes[i].style.display = "none";
|
||||
} else {
|
||||
legendEl.querySelector("th").remove();
|
||||
legendEl.querySelector("td").setAttribute('colspan', '2');
|
||||
legendEl.querySelector("td").style.textAlign = 'center';
|
||||
}
|
||||
|
||||
for (let i = 0; i < idents.length; i++)
|
||||
idents[i].style.display = "none";
|
||||
if (opts.series.length - 1 > showLimit) {
|
||||
showTop = true;
|
||||
let footer = legendEl.insertRow().insertCell();
|
||||
footer.setAttribute('colspan', '2');
|
||||
footer.style.textAlign = 'center';
|
||||
footer.classList.add('u-value');
|
||||
footer.parentNode.classList.add('u-series','footer');
|
||||
footer.textContent = ". . .";
|
||||
}
|
||||
|
||||
const overEl = u.over;
|
||||
overEl.style.overflow = "visible";
|
||||
|
||||
overEl.appendChild(legendEl);
|
||||
|
||||
@ -946,11 +1014,28 @@ function legendAsTooltipPlugin({ className, style = { background: "var(--legend-
|
||||
overEl.addEventListener("mouseleave", () => {legendEl.style.display = "none";});
|
||||
}
|
||||
|
||||
function nodeListToArray(nodeList) {
|
||||
return Array.prototype.slice.call(nodeList);
|
||||
}
|
||||
|
||||
function update(u) {
|
||||
let { left, top } = u.cursor;
|
||||
left -= legendEl.clientWidth / 2;
|
||||
top -= legendEl.clientHeight / 2;
|
||||
legendEl.style.transform = "translate(" + left + "px, " + top + "px)";
|
||||
if (showTop) {
|
||||
let nodes = nodeListToArray(legendEl.querySelectorAll("tr"));
|
||||
let header = nodes.shift();
|
||||
let footer = nodes.pop();
|
||||
nodes.forEach(function (node) { node._sort_key = +node.querySelector("td").textContent; });
|
||||
nodes.sort((a, b) => +b._sort_key - +a._sort_key);
|
||||
nodes.forEach(function (node) { node.parentNode.appendChild(node); });
|
||||
for (let i = 0; i < nodes.length; i++) {
|
||||
nodes[i].style.display = i < showLimit ? null : "none";
|
||||
delete nodes[i]._sort_key;
|
||||
}
|
||||
footer.parentNode.appendChild(footer);
|
||||
}
|
||||
}
|
||||
|
||||
return {
|
||||
@ -961,12 +1046,13 @@ function legendAsTooltipPlugin({ className, style = { background: "var(--legend-
|
||||
};
|
||||
}
|
||||
|
||||
|
||||
async function doFetch(query, url_params = '') {
|
||||
host = document.getElementById('url').value || host;
|
||||
user = document.getElementById('user').value;
|
||||
password = document.getElementById('password').value;
|
||||
|
||||
let url = `${host}?default_format=JSONCompactColumns&enable_http_compression=1`
|
||||
let url = `${host}?default_format=JSONColumnsWithMetadata&enable_http_compression=1`
|
||||
|
||||
if (add_http_cors_header) {
|
||||
// For debug purposes, you may set add_http_cors_header from a browser console
|
||||
@ -980,14 +1066,17 @@ async function doFetch(query, url_params = '') {
|
||||
url += `&password=${encodeURIComponent(password)}`;
|
||||
}
|
||||
|
||||
let response, data, error;
|
||||
let response, reply, error;
|
||||
try {
|
||||
response = await fetch(url + url_params, { method: "POST", body: query });
|
||||
data = await response.text();
|
||||
reply = await response.text();
|
||||
if (response.ok) {
|
||||
data = JSON.parse(data);
|
||||
reply = JSON.parse(reply);
|
||||
if (reply.exception) {
|
||||
error = reply.exception;
|
||||
}
|
||||
} else {
|
||||
error = data;
|
||||
error = reply;
|
||||
}
|
||||
} catch (e) {
|
||||
console.log(e);
|
||||
@ -1006,7 +1095,7 @@ async function doFetch(query, url_params = '') {
|
||||
}
|
||||
}
|
||||
|
||||
return {data, error};
|
||||
return {reply, error};
|
||||
}
|
||||
|
||||
async function draw(idx, chart, url_params, query) {
|
||||
@ -1015,17 +1104,76 @@ async function draw(idx, chart, url_params, query) {
|
||||
plots[idx] = null;
|
||||
}
|
||||
|
||||
let {data, error} = await doFetch(query, url_params);
|
||||
let {reply, error} = await doFetch(query, url_params);
|
||||
if (!error) {
|
||||
if (reply.rows.length == 0) {
|
||||
error = "Query returned empty result.";
|
||||
} else if (reply.meta.length < 2) {
|
||||
error = "Query should return at least two columns: unix timestamp and value.";
|
||||
} else {
|
||||
for (let i = 0; i < reply.meta.length; i++) {
|
||||
let label = reply.meta[i].name;
|
||||
let column = reply.data[label];
|
||||
if (!Array.isArray(column) || column.length != reply.data[reply.meta[0].name].length) {
|
||||
error = "Wrong data format of the query.";
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Transform string-labeled data to multi-column data
|
||||
function transformToColumns() {
|
||||
const x = reply.meta[0].name; // time; must be ordered
|
||||
const l = reply.meta[1].name; // string label column to distinguish series; must be ordered
|
||||
const y = reply.meta[2].name; // values; must have single value for (x, l) pair
|
||||
const labels = [...new Set(reply.data[l])].sort((a, b) => a - b);
|
||||
if (labels.includes('__time__')) {
|
||||
error = "The second column is not allowed to contain '__time__' values.";
|
||||
return;
|
||||
}
|
||||
const times = [...new Set(reply.data[x])].sort((a, b) => a - b);
|
||||
let new_meta = [{ name: '__time__', type: reply.meta[0].type }];
|
||||
let new_data = { __time__: [] };
|
||||
for (let label of labels) {
|
||||
new_meta.push({ name: label, type: reply.meta[2].type });
|
||||
new_data[label] = [];
|
||||
}
|
||||
let new_rows = 0;
|
||||
function row_done(row_time) {
|
||||
new_rows++;
|
||||
new_data.__time__.push(row_time);
|
||||
for (let label of labels) {
|
||||
if (new_data[label].length < new_rows) {
|
||||
new_data[label].push(null);
|
||||
}
|
||||
}
|
||||
}
|
||||
let prev_time = reply.data[x][0];
|
||||
const old_rows = reply.data[x].length;
|
||||
for (let i = 0; i < old_rows; i++) {
|
||||
const time = reply.data[x][i];
|
||||
const label = reply.data[l][i];
|
||||
const value = reply.data[y][i];
|
||||
if (prev_time != time) {
|
||||
row_done(prev_time);
|
||||
prev_time = time;
|
||||
}
|
||||
new_data[label].push(value);
|
||||
}
|
||||
row_done(prev_time);
|
||||
reply.meta = new_meta;
|
||||
reply.data = new_data;
|
||||
reply.rows = new_rows;
|
||||
}
|
||||
|
||||
function isStringColumn(type) {
|
||||
return type === 'String' || type === 'LowCardinality(String)';
|
||||
}
|
||||
|
||||
if (!error) {
|
||||
if (!Array.isArray(data)) {
|
||||
error = "Query should return an array.";
|
||||
} else if (data.length == 0) {
|
||||
error = "Query returned empty result.";
|
||||
} else if (data.length != 2) {
|
||||
error = "Query should return exactly two columns: unix timestamp and value.";
|
||||
} else if (!Array.isArray(data[0]) || !Array.isArray(data[1]) || data[0].length != data[1].length) {
|
||||
error = "Wrong data format of the query.";
|
||||
if (reply.meta.length == 3 && isStringColumn(reply.meta[1].type)) {
|
||||
transformToColumns();
|
||||
}
|
||||
}
|
||||
|
||||
@ -1043,24 +1191,38 @@ async function draw(idx, chart, url_params, query) {
|
||||
}
|
||||
|
||||
const [line_color, fill_color, grid_color, axes_color] = theme != 'dark'
|
||||
? ["#F88", "#FEE", "#EED", "#2c3235"]
|
||||
: ["#864", "#045", "#2c3235", "#c7d0d9"];
|
||||
? ["#ff8888", "#ffeeee", "#eeeedd", "#2c3235"]
|
||||
: ["#886644", "#004455", "#2c3235", "#c7d0d9"];
|
||||
|
||||
let sync = uPlot.sync("sync");
|
||||
|
||||
const max_value = Math.max(...data[1]);
|
||||
let axis = {
|
||||
stroke: axes_color,
|
||||
grid: { width: 1 / devicePixelRatio, stroke: grid_color },
|
||||
ticks: { width: 1 / devicePixelRatio, stroke: grid_color }
|
||||
};
|
||||
|
||||
let axes = [axis, axis];
|
||||
let series = [{ label: "x" }];
|
||||
let data = [reply.data[reply.meta[0].name]];
|
||||
|
||||
// Treat every column as series
|
||||
const series_count = reply.meta.length;
|
||||
const fill = series_count == 2 ? fill_color : undefined;
|
||||
const palette = generatePalette(line_color, series_count);
|
||||
let max_value = Number.NEGATIVE_INFINITY;
|
||||
for (let i = 1; i < series_count; i++) {
|
||||
let label = reply.meta[i].name;
|
||||
series.push({ label, stroke: palette[i - 1], fill });
|
||||
data.push(reply.data[label]);
|
||||
max_value = Math.max(max_value, ...reply.data[label]);
|
||||
}
|
||||
|
||||
const opts = {
|
||||
width: chart.clientWidth,
|
||||
height: chart.clientHeight,
|
||||
axes: [ { stroke: axes_color,
|
||||
grid: { width: 1 / devicePixelRatio, stroke: grid_color },
|
||||
ticks: { width: 1 / devicePixelRatio, stroke: grid_color } },
|
||||
{ stroke: axes_color,
|
||||
grid: { width: 1 / devicePixelRatio, stroke: grid_color },
|
||||
ticks: { width: 1 / devicePixelRatio, stroke: grid_color } } ],
|
||||
series: [ { label: "x" },
|
||||
{ label: "y", stroke: line_color, fill: fill_color } ],
|
||||
axes,
|
||||
series,
|
||||
padding: [ null, null, null, (Math.round(max_value * 100) / 100).toString().length * 6 - 10 ],
|
||||
plugins: [ legendAsTooltipPlugin() ],
|
||||
cursor: {
|
||||
@ -1216,22 +1378,21 @@ function saveState() {
|
||||
}
|
||||
|
||||
async function searchQueries() {
|
||||
let {data, error} = await doFetch(search_query);
|
||||
let {reply, error} = await doFetch(search_query);
|
||||
if (error) {
|
||||
throw new Error(error);
|
||||
}
|
||||
if (!Array.isArray(data)) {
|
||||
throw new Error("Search query should return an array.");
|
||||
} else if (data.length == 0) {
|
||||
let data = reply.data;
|
||||
if (reply.rows == 0) {
|
||||
throw new Error("Search query returned empty result.");
|
||||
} else if (data.length != 2) {
|
||||
} else if (reply.meta.length != 2 || reply.meta[0].name != "title" || reply.meta[1].name != "query") {
|
||||
throw new Error("Search query should return exactly two columns: title and query.");
|
||||
} else if (!Array.isArray(data[0]) || !Array.isArray(data[1]) || data[0].length != data[1].length) {
|
||||
} else if (!Array.isArray(data.title) || !Array.isArray(data.query) || data.title.length != data.query.length) {
|
||||
throw new Error("Wrong data format of the search query.");
|
||||
}
|
||||
|
||||
for (let i = 0; i < data[0].length; i++) {
|
||||
queries.push({title: data[0][i], query: data[1][i]});
|
||||
for (let i = 0; i < data.title.length; i++) {
|
||||
queries.push({title: data.title[i], query: data.query[i]});
|
||||
}
|
||||
|
||||
regenerate();
|
||||
|
@ -1,82 +0,0 @@
|
||||
#include <AggregateFunctions/AggregateFunctionFactory.h>
|
||||
#include <AggregateFunctions/AggregateFunctionGroupArraySorted.h>
|
||||
#include <AggregateFunctions/Helpers.h>
|
||||
#include <AggregateFunctions/FactoryHelpers.h>
|
||||
#include <DataTypes/DataTypeDate.h>
|
||||
#include <DataTypes/DataTypeDateTime.h>
|
||||
#include <Common/Exception.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
struct Settings;
|
||||
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
|
||||
extern const int BAD_ARGUMENTS;
|
||||
}
|
||||
|
||||
namespace
|
||||
{
|
||||
|
||||
template <template <typename> class AggregateFunctionTemplate, typename ... TArgs>
|
||||
AggregateFunctionPtr createWithNumericOrTimeType(const IDataType & argument_type, TArgs && ... args)
|
||||
{
|
||||
WhichDataType which(argument_type);
|
||||
if (which.idx == TypeIndex::Date) return std::make_shared<AggregateFunctionTemplate<UInt16>>(std::forward<TArgs>(args)...);
|
||||
if (which.idx == TypeIndex::DateTime) return std::make_shared<AggregateFunctionTemplate<UInt32>>(std::forward<TArgs>(args)...);
|
||||
if (which.idx == TypeIndex::IPv4) return std::make_shared<AggregateFunctionTemplate<IPv4>>(std::forward<TArgs>(args)...);
|
||||
return AggregateFunctionPtr(createWithNumericType<AggregateFunctionTemplate, TArgs...>(argument_type, std::forward<TArgs>(args)...));
|
||||
}
|
||||
|
||||
template <typename ... TArgs>
|
||||
inline AggregateFunctionPtr createAggregateFunctionGroupArraySortedImpl(const DataTypePtr & argument_type, const Array & parameters, TArgs ... args)
|
||||
{
|
||||
if (auto res = createWithNumericOrTimeType<GroupArraySortedNumericImpl>(*argument_type, argument_type, parameters, std::forward<TArgs>(args)...))
|
||||
return AggregateFunctionPtr(res);
|
||||
|
||||
WhichDataType which(argument_type);
|
||||
return std::make_shared<GroupArraySortedGeneralImpl<GroupArraySortedNodeGeneral>>(argument_type, parameters, std::forward<TArgs>(args)...);
|
||||
}
|
||||
|
||||
AggregateFunctionPtr createAggregateFunctionGroupArraySorted(
|
||||
const std::string & name, const DataTypes & argument_types, const Array & parameters, const Settings *)
|
||||
{
|
||||
assertUnary(name, argument_types);
|
||||
|
||||
UInt64 max_elems = std::numeric_limits<UInt64>::max();
|
||||
|
||||
if (parameters.empty())
|
||||
{
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Parameter for aggregate function {} should have limit argument", name);
|
||||
}
|
||||
else if (parameters.size() == 1)
|
||||
{
|
||||
auto type = parameters[0].getType();
|
||||
if (type != Field::Types::Int64 && type != Field::Types::UInt64)
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Parameter for aggregate function {} should be positive number", name);
|
||||
|
||||
if ((type == Field::Types::Int64 && parameters[0].get<Int64>() < 0) ||
|
||||
(type == Field::Types::UInt64 && parameters[0].get<UInt64>() == 0))
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Parameter for aggregate function {} should be positive number", name);
|
||||
|
||||
max_elems = parameters[0].get<UInt64>();
|
||||
}
|
||||
else
|
||||
throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
|
||||
"Function {} does not support this number of arguments", name);
|
||||
|
||||
return createAggregateFunctionGroupArraySortedImpl(argument_types[0], parameters, max_elems);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
||||
void registerAggregateFunctionGroupArraySorted(AggregateFunctionFactory & factory)
|
||||
{
|
||||
AggregateFunctionProperties properties = { .returns_default_when_only_null = false, .is_order_dependent = false };
|
||||
|
||||
factory.registerFunction("groupArraySorted", { createAggregateFunctionGroupArraySorted, properties });
|
||||
}
|
||||
|
||||
}
|
@ -1,355 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <IO/ReadHelpers.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include <IO/ReadBufferFromString.h>
|
||||
#include <IO/WriteBufferFromString.h>
|
||||
#include <IO/Operators.h>
|
||||
|
||||
#include <DataTypes/DataTypeArray.h>
|
||||
#include <DataTypes/DataTypeString.h>
|
||||
#include <DataTypes/DataTypesNumber.h>
|
||||
|
||||
#include <Columns/ColumnArray.h>
|
||||
#include <Columns/ColumnString.h>
|
||||
#include <Columns/ColumnVector.h>
|
||||
#include <Functions/array/arraySort.h>
|
||||
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/ArenaAllocator.h>
|
||||
#include <Common/assert_cast.h>
|
||||
#include <Columns/ColumnConst.h>
|
||||
#include <DataTypes/IDataType.h>
|
||||
#include <base/sort.h>
|
||||
#include <Columns/IColumn.h>
|
||||
|
||||
#include <AggregateFunctions/IAggregateFunction.h>
|
||||
|
||||
#include <Common/RadixSort.h>
|
||||
#include <algorithm>
|
||||
#include <type_traits>
|
||||
#include <utility>
|
||||
|
||||
#define AGGREGATE_FUNCTION_GROUP_ARRAY_MAX_ELEMENT_SIZE 0xFFFFFF
|
||||
|
||||
namespace DB
|
||||
{
|
||||
struct Settings;
|
||||
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int TOO_LARGE_ARRAY_SIZE;
|
||||
}
|
||||
|
||||
template <typename T>
|
||||
struct GroupArraySortedData;
|
||||
|
||||
template <typename T>
|
||||
struct GroupArraySortedData
|
||||
{
|
||||
/// For easy serialization.
|
||||
static_assert(std::has_unique_object_representations_v<T> || std::is_floating_point_v<T>);
|
||||
|
||||
// Switch to ordinary Allocator after 4096 bytes to avoid fragmentation and trash in Arena
|
||||
using Allocator = MixedAlignedArenaAllocator<alignof(T), 4096>;
|
||||
using Array = PODArray<T, 32, Allocator>;
|
||||
|
||||
Array value;
|
||||
};
|
||||
|
||||
template <typename T>
|
||||
class GroupArraySortedNumericImpl final
|
||||
: public IAggregateFunctionDataHelper<GroupArraySortedData<T>, GroupArraySortedNumericImpl<T>>
|
||||
{
|
||||
using Data = GroupArraySortedData<T>;
|
||||
UInt64 max_elems;
|
||||
SerializationPtr serialization;
|
||||
|
||||
public:
|
||||
explicit GroupArraySortedNumericImpl(
|
||||
const DataTypePtr & data_type_, const Array & parameters_, UInt64 max_elems_ = std::numeric_limits<UInt64>::max())
|
||||
: IAggregateFunctionDataHelper<GroupArraySortedData<T>, GroupArraySortedNumericImpl<T>>(
|
||||
{data_type_}, parameters_, std::make_shared<DataTypeArray>(data_type_))
|
||||
, max_elems(max_elems_)
|
||||
, serialization(data_type_->getDefaultSerialization())
|
||||
{
|
||||
}
|
||||
|
||||
String getName() const override { return "groupArraySorted"; }
|
||||
|
||||
void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
|
||||
{
|
||||
const auto & row_value = assert_cast<const ColumnVector<T> &>(*columns[0]).getData()[row_num];
|
||||
auto & cur_elems = this->data(place);
|
||||
|
||||
cur_elems.value.push_back(row_value, arena);
|
||||
|
||||
/// To optimize, we sort (2 * max_size) elements of input array over and over again
|
||||
/// and after each loop we delete the last half of sorted array
|
||||
if (cur_elems.value.size() >= max_elems * 2)
|
||||
{
|
||||
RadixSort<RadixSortNumTraits<T>>::executeLSD(cur_elems.value.data(), cur_elems.value.size());
|
||||
cur_elems.value.resize(max_elems, arena);
|
||||
}
|
||||
}
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
|
||||
{
|
||||
auto & cur_elems = this->data(place);
|
||||
auto & rhs_elems = this->data(rhs);
|
||||
|
||||
if (rhs_elems.value.empty())
|
||||
return;
|
||||
|
||||
if (rhs_elems.value.size())
|
||||
cur_elems.value.insertByOffsets(rhs_elems.value, 0, rhs_elems.value.size(), arena);
|
||||
|
||||
RadixSort<RadixSortNumTraits<T>>::executeLSD(cur_elems.value.data(), cur_elems.value.size());
|
||||
|
||||
size_t elems_size = cur_elems.value.size() < max_elems ? cur_elems.value.size() : max_elems;
|
||||
cur_elems.value.resize(elems_size, arena);
|
||||
}
|
||||
|
||||
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
|
||||
{
|
||||
auto & value = this->data(place).value;
|
||||
size_t size = value.size();
|
||||
writeVarUInt(size, buf);
|
||||
|
||||
for (const auto & elem : value)
|
||||
writeBinaryLittleEndian(elem, buf);
|
||||
}
|
||||
|
||||
void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, std::optional<size_t> /* version */, Arena * arena) const override
|
||||
{
|
||||
size_t size = 0;
|
||||
readVarUInt(size, buf);
|
||||
|
||||
if (unlikely(size > max_elems))
|
||||
throw Exception(ErrorCodes::TOO_LARGE_ARRAY_SIZE, "Too large array size, it should not exceed {}", max_elems);
|
||||
|
||||
auto & value = this->data(place).value;
|
||||
|
||||
value.resize(size, arena);
|
||||
for (auto & element : value)
|
||||
readBinaryLittleEndian(element, buf);
|
||||
}
|
||||
|
||||
static void checkArraySize(size_t elems, size_t max_elems)
|
||||
{
|
||||
if (unlikely(elems > max_elems))
|
||||
throw Exception(ErrorCodes::TOO_LARGE_ARRAY_SIZE,
|
||||
"Too large array size {} (maximum: {})", elems, max_elems);
|
||||
}
|
||||
|
||||
void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
|
||||
{
|
||||
auto& value = this->data(place).value;
|
||||
|
||||
RadixSort<RadixSortNumTraits<T>>::executeLSD(value.data(), value.size());
|
||||
size_t elems_size = value.size() < max_elems ? value.size() : max_elems;
|
||||
value.resize(elems_size, arena);
|
||||
size_t size = value.size();
|
||||
|
||||
ColumnArray & arr_to = assert_cast<ColumnArray &>(to);
|
||||
ColumnArray::Offsets & offsets_to = arr_to.getOffsets();
|
||||
|
||||
offsets_to.push_back(offsets_to.back() + size);
|
||||
|
||||
if (size)
|
||||
{
|
||||
typename ColumnVector<T>::Container & data_to = assert_cast<ColumnVector<T> &>(arr_to.getData()).getData();
|
||||
data_to.insert(this->data(place).value.begin(), this->data(place).value.end());
|
||||
RadixSort<RadixSortNumTraits<T>>::executeLSD(value.data(), value.size());
|
||||
value.resize(elems_size, arena);
|
||||
}
|
||||
}
|
||||
|
||||
bool allocatesMemoryInArena() const override { return true; }
|
||||
};
|
||||
|
||||
|
||||
template <typename Node, bool has_sampler>
|
||||
struct GroupArraySortedGeneralData;
|
||||
|
||||
template <typename Node>
|
||||
struct GroupArraySortedGeneralData<Node, false>
|
||||
{
|
||||
// Switch to ordinary Allocator after 4096 bytes to avoid fragmentation and trash in Arena
|
||||
using Allocator = MixedAlignedArenaAllocator<alignof(Node *), 4096>;
|
||||
using Array = PODArray<Field, 32, Allocator>;
|
||||
|
||||
Array value;
|
||||
};
|
||||
|
||||
template <typename Node>
|
||||
struct GroupArraySortedNodeBase
|
||||
{
|
||||
UInt64 size; // size of payload
|
||||
|
||||
/// Returns pointer to actual payload
|
||||
char * data() { return reinterpret_cast<char *>(this) + sizeof(Node); }
|
||||
|
||||
const char * data() const { return reinterpret_cast<const char *>(this) + sizeof(Node); }
|
||||
};
|
||||
|
||||
struct GroupArraySortedNodeString : public GroupArraySortedNodeBase<GroupArraySortedNodeString>
|
||||
{
|
||||
using Node = GroupArraySortedNodeString;
|
||||
|
||||
};
|
||||
|
||||
struct GroupArraySortedNodeGeneral : public GroupArraySortedNodeBase<GroupArraySortedNodeGeneral>
|
||||
{
|
||||
using Node = GroupArraySortedNodeGeneral;
|
||||
|
||||
};
|
||||
|
||||
/// Implementation of groupArraySorted for Generic data via Array
|
||||
template <typename Node>
|
||||
class GroupArraySortedGeneralImpl final
|
||||
: public IAggregateFunctionDataHelper<GroupArraySortedGeneralData<Node, false>, GroupArraySortedGeneralImpl<Node>>
|
||||
{
|
||||
using Data = GroupArraySortedGeneralData<Node, false>;
|
||||
static Data & data(AggregateDataPtr __restrict place) { return *reinterpret_cast<Data *>(place); }
|
||||
static const Data & data(ConstAggregateDataPtr __restrict place) { return *reinterpret_cast<const Data *>(place); }
|
||||
|
||||
DataTypePtr & data_type;
|
||||
UInt64 max_elems;
|
||||
SerializationPtr serialization;
|
||||
|
||||
|
||||
public:
|
||||
GroupArraySortedGeneralImpl(const DataTypePtr & data_type_, const Array & parameters_, UInt64 max_elems_ = std::numeric_limits<UInt64>::max())
|
||||
: IAggregateFunctionDataHelper<GroupArraySortedGeneralData<Node, false>, GroupArraySortedGeneralImpl<Node>>(
|
||||
{data_type_}, parameters_, std::make_shared<DataTypeArray>(data_type_))
|
||||
, data_type(this->argument_types[0])
|
||||
, max_elems(max_elems_)
|
||||
, serialization(data_type->getDefaultSerialization())
|
||||
{
|
||||
}
|
||||
|
||||
String getName() const override { return "groupArraySorted"; }
|
||||
|
||||
void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena * arena) const override
|
||||
{
|
||||
auto & cur_elems = data(place);
|
||||
|
||||
cur_elems.value.push_back(columns[0][0][row_num], arena);
|
||||
|
||||
/// To optimize, we sort (2 * max_size) elements of input array over and over again and
|
||||
/// after each loop we delete the last half of sorted array
|
||||
|
||||
if (cur_elems.value.size() >= max_elems * 2)
|
||||
{
|
||||
std::sort(cur_elems.value.begin(), cur_elems.value.begin() + (max_elems * 2));
|
||||
cur_elems.value.erase(cur_elems.value.begin() + max_elems, cur_elems.value.begin() + (max_elems * 2));
|
||||
}
|
||||
}
|
||||
|
||||
void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena * arena) const override
|
||||
{
|
||||
auto & cur_elems = data(place);
|
||||
auto & rhs_elems = data(rhs);
|
||||
|
||||
if (rhs_elems.value.empty())
|
||||
return;
|
||||
|
||||
UInt64 new_elems = rhs_elems.value.size();
|
||||
|
||||
for (UInt64 i = 0; i < new_elems; ++i)
|
||||
cur_elems.value.push_back(rhs_elems.value[i], arena);
|
||||
|
||||
checkArraySize(cur_elems.value.size(), AGGREGATE_FUNCTION_GROUP_ARRAY_MAX_ELEMENT_SIZE);
|
||||
|
||||
if (!cur_elems.value.empty())
|
||||
{
|
||||
std::sort(cur_elems.value.begin(), cur_elems.value.end());
|
||||
|
||||
if (cur_elems.value.size() > max_elems)
|
||||
cur_elems.value.resize(max_elems, arena);
|
||||
}
|
||||
}
|
||||
|
||||
static void checkArraySize(size_t elems, size_t max_elems)
|
||||
{
|
||||
if (unlikely(elems > max_elems))
|
||||
throw Exception(ErrorCodes::TOO_LARGE_ARRAY_SIZE,
|
||||
"Too large array size {} (maximum: {})", elems, max_elems);
|
||||
}
|
||||
|
||||
void serialize(ConstAggregateDataPtr __restrict place, WriteBuffer & buf, std::optional<size_t> /* version */) const override
|
||||
{
|
||||
auto & value = data(place).value;
|
||||
size_t size = value.size();
|
||||
checkArraySize(size, AGGREGATE_FUNCTION_GROUP_ARRAY_MAX_ELEMENT_SIZE);
|
||||
writeVarUInt(size, buf);
|
||||
|
||||
for (const Field & elem : value)
|
||||
{
|
||||
if (elem.isNull())
|
||||
{
|
||||
writeBinary(false, buf);
|
||||
}
|
||||
else
|
||||
{
|
||||
writeBinary(true, buf);
|
||||
serialization->serializeBinary(elem, buf, {});
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
void deserialize(AggregateDataPtr __restrict place, ReadBuffer & buf, std::optional<size_t> /* version */, Arena * arena) const override
|
||||
{
|
||||
size_t size = 0;
|
||||
readVarUInt(size, buf);
|
||||
|
||||
if (unlikely(size > max_elems))
|
||||
throw Exception(ErrorCodes::TOO_LARGE_ARRAY_SIZE, "Too large array size, it should not exceed {}", max_elems);
|
||||
|
||||
checkArraySize(size, AGGREGATE_FUNCTION_GROUP_ARRAY_MAX_ELEMENT_SIZE);
|
||||
auto & value = data(place).value;
|
||||
|
||||
value.resize(size, arena);
|
||||
for (Field & elem : value)
|
||||
{
|
||||
UInt8 is_null = 0;
|
||||
readBinary(is_null, buf);
|
||||
if (!is_null)
|
||||
serialization->deserializeBinary(elem, buf, {});
|
||||
}
|
||||
}
|
||||
|
||||
void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena * arena) const override
|
||||
{
|
||||
auto & column_array = assert_cast<ColumnArray &>(to);
|
||||
auto & value = data(place).value;
|
||||
|
||||
if (!value.empty())
|
||||
{
|
||||
std::sort(value.begin(), value.end());
|
||||
|
||||
if (value.size() > max_elems)
|
||||
value.resize_exact(max_elems, arena);
|
||||
}
|
||||
auto & offsets = column_array.getOffsets();
|
||||
offsets.push_back(offsets.back() + value.size());
|
||||
|
||||
auto & column_data = column_array.getData();
|
||||
|
||||
if (std::is_same_v<Node, GroupArraySortedNodeString>)
|
||||
{
|
||||
auto & string_offsets = assert_cast<ColumnString &>(column_data).getOffsets();
|
||||
string_offsets.reserve(string_offsets.size() + value.size());
|
||||
}
|
||||
|
||||
for (const Field& field : value)
|
||||
column_data.insert(field);
|
||||
}
|
||||
|
||||
bool allocatesMemoryInArena() const override { return true; }
|
||||
};
|
||||
|
||||
#undef AGGREGATE_FUNCTION_GROUP_ARRAY_MAX_ARRAY_SIZE
|
||||
|
||||
}
|
@ -15,7 +15,6 @@ void registerAggregateFunctionCount(AggregateFunctionFactory &);
|
||||
void registerAggregateFunctionDeltaSum(AggregateFunctionFactory &);
|
||||
void registerAggregateFunctionDeltaSumTimestamp(AggregateFunctionFactory &);
|
||||
void registerAggregateFunctionGroupArray(AggregateFunctionFactory &);
|
||||
void registerAggregateFunctionGroupArraySorted(AggregateFunctionFactory & factory);
|
||||
void registerAggregateFunctionGroupUniqArray(AggregateFunctionFactory &);
|
||||
void registerAggregateFunctionGroupArrayInsertAt(AggregateFunctionFactory &);
|
||||
void registerAggregateFunctionsQuantile(AggregateFunctionFactory &);
|
||||
@ -112,7 +111,6 @@ void registerAggregateFunctions()
|
||||
registerAggregateFunctionDeltaSum(factory);
|
||||
registerAggregateFunctionDeltaSumTimestamp(factory);
|
||||
registerAggregateFunctionGroupArray(factory);
|
||||
registerAggregateFunctionGroupArraySorted(factory);
|
||||
registerAggregateFunctionGroupUniqArray(factory);
|
||||
registerAggregateFunctionGroupArrayInsertAt(factory);
|
||||
registerAggregateFunctionsQuantile(factory);
|
||||
|
@ -451,17 +451,25 @@ void BackupEntriesCollector::gatherDatabaseMetadata(
|
||||
}
|
||||
catch (...)
|
||||
{
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP, "Couldn't get a create query for database {}", database_name);
|
||||
/// Probably the database has been just removed.
|
||||
if (throw_if_database_not_found)
|
||||
throw;
|
||||
LOG_WARNING(log, "Couldn't get a create query for database {}", backQuoteIfNeed(database_name));
|
||||
return;
|
||||
}
|
||||
|
||||
auto * create = create_database_query->as<ASTCreateQuery>();
|
||||
if (create->getDatabase() != database_name)
|
||||
{
|
||||
/// Probably the database has been just renamed. Use the older name for backup to keep the backup consistent.
|
||||
LOG_WARNING(log, "Got a create query with unexpected name {} for database {}",
|
||||
backQuoteIfNeed(create->getDatabase()), backQuoteIfNeed(database_name));
|
||||
create_database_query = create_database_query->clone();
|
||||
create = create_database_query->as<ASTCreateQuery>();
|
||||
create->setDatabase(database_name);
|
||||
}
|
||||
|
||||
database_info.create_database_query = create_database_query;
|
||||
const auto & create = create_database_query->as<const ASTCreateQuery &>();
|
||||
|
||||
if (create.getDatabase() != database_name)
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP,
|
||||
"Got a create query with unexpected name {} for database {}",
|
||||
backQuoteIfNeed(create.getDatabase()), backQuoteIfNeed(database_name));
|
||||
|
||||
String new_database_name = renaming_map.getNewDatabaseName(database_name);
|
||||
database_info.metadata_path_in_backup = root_path_in_backup / "metadata" / (escapeForFileName(new_database_name) + ".sql");
|
||||
}
|
||||
@ -582,26 +590,34 @@ std::vector<std::pair<ASTPtr, StoragePtr>> BackupEntriesCollector::findTablesInD
|
||||
}
|
||||
|
||||
std::unordered_set<String> found_table_names;
|
||||
for (const auto & db_table : db_tables)
|
||||
for (auto & db_table : db_tables)
|
||||
{
|
||||
const auto & create_table_query = db_table.first;
|
||||
const auto & create = create_table_query->as<const ASTCreateQuery &>();
|
||||
found_table_names.emplace(create.getTable());
|
||||
auto create_table_query = db_table.first;
|
||||
auto * create = create_table_query->as<ASTCreateQuery>();
|
||||
found_table_names.emplace(create->getTable());
|
||||
|
||||
if (database_name == DatabaseCatalog::TEMPORARY_DATABASE)
|
||||
{
|
||||
if (!create.temporary)
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP,
|
||||
if (!create->temporary)
|
||||
{
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR,
|
||||
"Got a non-temporary create query for {}",
|
||||
tableNameWithTypeToString(database_name, create.getTable(), false));
|
||||
tableNameWithTypeToString(database_name, create->getTable(), false));
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
if (create.getDatabase() != database_name)
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP,
|
||||
"Got a create query with unexpected database name {} for {}",
|
||||
backQuoteIfNeed(create.getDatabase()),
|
||||
tableNameWithTypeToString(database_name, create.getTable(), false));
|
||||
if (create->getDatabase() != database_name)
|
||||
{
|
||||
/// Probably the table has been just renamed. Use the older name for backup to keep the backup consistent.
|
||||
LOG_WARNING(log, "Got a create query with unexpected database name {} for {}",
|
||||
backQuoteIfNeed(create->getDatabase()),
|
||||
tableNameWithTypeToString(database_name, create->getTable(), false));
|
||||
create_table_query = create_table_query->clone();
|
||||
create = create_table_query->as<ASTCreateQuery>();
|
||||
create->setDatabase(database_name);
|
||||
db_table.first = create_table_query;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -48,20 +48,22 @@ namespace
|
||||
}
|
||||
|
||||
const auto & request_settings = settings.request_settings;
|
||||
const Settings & global_settings = context->getGlobalContext()->getSettingsRef();
|
||||
const Settings & local_settings = context->getSettingsRef();
|
||||
|
||||
S3::PocoHTTPClientConfiguration client_configuration = S3::ClientFactory::instance().createClientConfiguration(
|
||||
settings.auth_settings.region,
|
||||
context->getRemoteHostFilter(),
|
||||
static_cast<unsigned>(context->getGlobalContext()->getSettingsRef().s3_max_redirects),
|
||||
static_cast<unsigned>(context->getGlobalContext()->getSettingsRef().s3_retry_attempts),
|
||||
context->getGlobalContext()->getSettingsRef().enable_s3_requests_logging,
|
||||
static_cast<unsigned>(global_settings.s3_max_redirects),
|
||||
static_cast<unsigned>(global_settings.s3_retry_attempts),
|
||||
global_settings.enable_s3_requests_logging,
|
||||
/* for_disk_s3 = */ false,
|
||||
request_settings.get_request_throttler,
|
||||
request_settings.put_request_throttler,
|
||||
s3_uri.uri.getScheme());
|
||||
|
||||
client_configuration.endpointOverride = s3_uri.endpoint;
|
||||
client_configuration.maxConnections = static_cast<unsigned>(context->getSettingsRef().s3_max_connections);
|
||||
client_configuration.maxConnections = static_cast<unsigned>(global_settings.s3_max_connections);
|
||||
/// Increase connect timeout
|
||||
client_configuration.connectTimeoutMs = 10 * 1000;
|
||||
/// Requests in backups can be extremely long, set to one hour
|
||||
@ -71,6 +73,7 @@ namespace
|
||||
return S3::ClientFactory::instance().create(
|
||||
client_configuration,
|
||||
s3_uri.is_virtual_hosted_style,
|
||||
local_settings.s3_disable_checksum,
|
||||
credentials.GetAWSAccessKeyId(),
|
||||
credentials.GetAWSSecretKey(),
|
||||
settings.auth_settings.server_side_encryption_customer_key_base64,
|
||||
|
@ -1,12 +1,24 @@
|
||||
#include <Common/AsyncLoader.h>
|
||||
|
||||
#include <limits>
|
||||
#include <optional>
|
||||
#include <base/defines.h>
|
||||
#include <base/scope_guard.h>
|
||||
#include <Common/ErrorCodes.h>
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/noexcept_scope.h>
|
||||
#include <Common/setThreadName.h>
|
||||
#include <Common/logger_useful.h>
|
||||
#include <Common/ThreadPool.h>
|
||||
#include <Common/getNumberOfPhysicalCPUCores.h>
|
||||
#include <Common/ProfileEvents.h>
|
||||
#include <Common/Stopwatch.h>
|
||||
|
||||
|
||||
namespace ProfileEvents
|
||||
{
|
||||
extern const Event AsyncLoaderWaitMicroseconds;
|
||||
}
|
||||
|
||||
namespace DB
|
||||
{
|
||||
@ -16,6 +28,7 @@ namespace ErrorCodes
|
||||
extern const int ASYNC_LOAD_CYCLE;
|
||||
extern const int ASYNC_LOAD_FAILED;
|
||||
extern const int ASYNC_LOAD_CANCELED;
|
||||
extern const int LOGICAL_ERROR;
|
||||
}
|
||||
|
||||
static constexpr size_t PRINT_MESSAGE_EACH_N_OBJECTS = 256;
|
||||
@ -52,63 +65,48 @@ size_t LoadJob::pool() const
|
||||
return pool_id;
|
||||
}
|
||||
|
||||
void LoadJob::wait() const
|
||||
{
|
||||
std::unique_lock lock{mutex};
|
||||
waiters++;
|
||||
finished.wait(lock, [this] { return load_status != LoadStatus::PENDING; });
|
||||
waiters--;
|
||||
if (load_exception)
|
||||
std::rethrow_exception(load_exception);
|
||||
}
|
||||
|
||||
void LoadJob::waitNoThrow() const noexcept
|
||||
{
|
||||
std::unique_lock lock{mutex};
|
||||
waiters++;
|
||||
finished.wait(lock, [this] { return load_status != LoadStatus::PENDING; });
|
||||
waiters--;
|
||||
}
|
||||
|
||||
size_t LoadJob::waitersCount() const
|
||||
{
|
||||
std::unique_lock lock{mutex};
|
||||
return waiters;
|
||||
}
|
||||
|
||||
void LoadJob::ok()
|
||||
size_t LoadJob::ok()
|
||||
{
|
||||
std::unique_lock lock{mutex};
|
||||
load_status = LoadStatus::OK;
|
||||
finish();
|
||||
return finish();
|
||||
}
|
||||
|
||||
void LoadJob::failed(const std::exception_ptr & ptr)
|
||||
size_t LoadJob::failed(const std::exception_ptr & ptr)
|
||||
{
|
||||
std::unique_lock lock{mutex};
|
||||
load_status = LoadStatus::FAILED;
|
||||
load_exception = ptr;
|
||||
finish();
|
||||
return finish();
|
||||
}
|
||||
|
||||
void LoadJob::canceled(const std::exception_ptr & ptr)
|
||||
size_t LoadJob::canceled(const std::exception_ptr & ptr)
|
||||
{
|
||||
std::unique_lock lock{mutex};
|
||||
load_status = LoadStatus::CANCELED;
|
||||
load_exception = ptr;
|
||||
finish();
|
||||
return finish();
|
||||
}
|
||||
|
||||
void LoadJob::finish()
|
||||
size_t LoadJob::finish()
|
||||
{
|
||||
func = {}; // To ensure job function is destructed before `AsyncLoader::wait()` and `LoadJob::wait()` return
|
||||
func = {}; // To ensure job function is destructed before `AsyncLoader::wait()` return
|
||||
finish_time = std::chrono::system_clock::now();
|
||||
if (waiters > 0)
|
||||
finished.notify_all();
|
||||
return std::exchange(suspended_waiters, 0);
|
||||
}
|
||||
|
||||
void LoadJob::scheduled()
|
||||
void LoadJob::scheduled(UInt64 job_id_)
|
||||
{
|
||||
chassert(job_id == 0); // Job cannot be scheduled twice
|
||||
job_id = job_id_;
|
||||
schedule_time = std::chrono::system_clock::now();
|
||||
}
|
||||
|
||||
@ -118,11 +116,11 @@ void LoadJob::enqueued()
|
||||
enqueue_time = std::chrono::system_clock::now();
|
||||
}
|
||||
|
||||
void LoadJob::execute(size_t pool, const LoadJobPtr & self)
|
||||
void LoadJob::execute(AsyncLoader & loader, size_t pool, const LoadJobPtr & self)
|
||||
{
|
||||
execution_pool_id = pool;
|
||||
start_time = std::chrono::system_clock::now();
|
||||
func(self);
|
||||
func(loader, self);
|
||||
}
|
||||
|
||||
|
||||
@ -180,11 +178,11 @@ AsyncLoader::AsyncLoader(std::vector<PoolInitializer> pool_initializers, bool lo
|
||||
init.metric_threads,
|
||||
init.metric_active_threads,
|
||||
init.metric_scheduled_threads,
|
||||
init.max_threads,
|
||||
/* max_free_threads = */ 0,
|
||||
init.max_threads),
|
||||
/* max_threads = */ std::numeric_limits<size_t>::max(), // Unlimited number of threads, we do worker management ourselves
|
||||
/* max_free_threads = */ 0, // We do not require free threads
|
||||
/* queue_size = */0), // Unlimited queue to avoid blocking during worker spawning
|
||||
.ready_queue = {},
|
||||
.max_threads = init.max_threads
|
||||
.max_threads = init.max_threads > 0 ? init.max_threads : getNumberOfPhysicalCPUCores()
|
||||
});
|
||||
}
|
||||
|
||||
@ -228,16 +226,16 @@ void AsyncLoader::stop()
|
||||
void AsyncLoader::schedule(LoadTask & task)
|
||||
{
|
||||
chassert(this == &task.loader);
|
||||
scheduleImpl(task.jobs);
|
||||
schedule(task.jobs);
|
||||
}
|
||||
|
||||
void AsyncLoader::schedule(const LoadTaskPtr & task)
|
||||
{
|
||||
chassert(this == &task->loader);
|
||||
scheduleImpl(task->jobs);
|
||||
schedule(task->jobs);
|
||||
}
|
||||
|
||||
void AsyncLoader::schedule(const std::vector<LoadTaskPtr> & tasks)
|
||||
void AsyncLoader::schedule(const LoadTaskPtrs & tasks)
|
||||
{
|
||||
LoadJobSet all_jobs;
|
||||
for (const auto & task : tasks)
|
||||
@ -245,10 +243,10 @@ void AsyncLoader::schedule(const std::vector<LoadTaskPtr> & tasks)
|
||||
chassert(this == &task->loader);
|
||||
all_jobs.insert(task->jobs.begin(), task->jobs.end());
|
||||
}
|
||||
scheduleImpl(all_jobs);
|
||||
schedule(all_jobs);
|
||||
}
|
||||
|
||||
void AsyncLoader::scheduleImpl(const LoadJobSet & input_jobs)
|
||||
void AsyncLoader::schedule(const LoadJobSet & jobs_to_schedule)
|
||||
{
|
||||
std::unique_lock lock{mutex};
|
||||
|
||||
@ -264,7 +262,7 @@ void AsyncLoader::scheduleImpl(const LoadJobSet & input_jobs)
|
||||
// 1) exclude already scheduled or finished jobs
|
||||
// 2) include assigned job dependencies (that are not yet scheduled)
|
||||
LoadJobSet jobs;
|
||||
for (const auto & job : input_jobs)
|
||||
for (const auto & job : jobs_to_schedule)
|
||||
gatherNotScheduled(job, jobs, lock);
|
||||
|
||||
// Ensure scheduled_jobs graph will have no cycles. The only way to get a cycle is to add a cycle, assuming old jobs cannot reference new ones.
|
||||
@ -280,7 +278,7 @@ void AsyncLoader::scheduleImpl(const LoadJobSet & input_jobs)
|
||||
NOEXCEPT_SCOPE({
|
||||
ALLOW_ALLOCATIONS_IN_SCOPE;
|
||||
scheduled_jobs.try_emplace(job);
|
||||
job->scheduled();
|
||||
job->scheduled(++last_job_id);
|
||||
});
|
||||
}
|
||||
|
||||
@ -365,11 +363,20 @@ void AsyncLoader::prioritize(const LoadJobPtr & job, size_t new_pool)
|
||||
if (!job)
|
||||
return;
|
||||
chassert(new_pool < pools.size());
|
||||
|
||||
DENY_ALLOCATIONS_IN_SCOPE;
|
||||
std::unique_lock lock{mutex};
|
||||
prioritize(job, new_pool, lock);
|
||||
}
|
||||
|
||||
void AsyncLoader::wait(const LoadJobPtr & job, bool no_throw)
|
||||
{
|
||||
std::unique_lock job_lock{job->mutex};
|
||||
wait(job_lock, job);
|
||||
if (!no_throw && job->load_exception)
|
||||
std::rethrow_exception(job->load_exception);
|
||||
}
|
||||
|
||||
void AsyncLoader::remove(const LoadJobSet & jobs)
|
||||
{
|
||||
DENY_ALLOCATIONS_IN_SCOPE;
|
||||
@ -397,9 +404,10 @@ void AsyncLoader::remove(const LoadJobSet & jobs)
|
||||
if (auto info = scheduled_jobs.find(job); info != scheduled_jobs.end())
|
||||
{
|
||||
// Job is currently executing
|
||||
ALLOW_ALLOCATIONS_IN_SCOPE;
|
||||
chassert(info->second.isExecuting());
|
||||
lock.unlock();
|
||||
job->waitNoThrow(); // Wait for job to finish
|
||||
wait(job, /* no_throw = */ true); // Wait for job to finish
|
||||
lock.lock();
|
||||
}
|
||||
}
|
||||
@ -415,10 +423,12 @@ void AsyncLoader::remove(const LoadJobSet & jobs)
|
||||
|
||||
void AsyncLoader::setMaxThreads(size_t pool, size_t value)
|
||||
{
|
||||
if (value == 0)
|
||||
value = getNumberOfPhysicalCPUCores();
|
||||
std::unique_lock lock{mutex};
|
||||
auto & p = pools[pool];
|
||||
p.thread_pool->setMaxThreads(value);
|
||||
p.thread_pool->setQueueSize(value); // Keep queue size equal max threads count to avoid blocking during spawning
|
||||
// Note that underlying `ThreadPool` always has unlimited `queue_size` and `max_threads`.
|
||||
// Worker management is done by `AsyncLoader` based on `Pool::max_threads + Pool::suspended_workers` instead.
|
||||
p.max_threads = value;
|
||||
if (!is_running)
|
||||
return;
|
||||
@ -442,7 +452,6 @@ Priority AsyncLoader::getPoolPriority(size_t pool) const
|
||||
return pools[pool].priority; // NOTE: lock is not needed because `priority` is const and `pools` are immutable
|
||||
}
|
||||
|
||||
|
||||
size_t AsyncLoader::getScheduledJobCount() const
|
||||
{
|
||||
std::unique_lock lock{mutex};
|
||||
@ -479,11 +488,11 @@ void AsyncLoader::checkCycle(const LoadJobSet & jobs, std::unique_lock<std::mute
|
||||
while (!left.empty())
|
||||
{
|
||||
LoadJobPtr job = *left.begin();
|
||||
checkCycleImpl(job, left, visited, lock);
|
||||
checkCycle(job, left, visited, lock);
|
||||
}
|
||||
}
|
||||
|
||||
String AsyncLoader::checkCycleImpl(const LoadJobPtr & job, LoadJobSet & left, LoadJobSet & visited, std::unique_lock<std::mutex> & lock)
|
||||
String AsyncLoader::checkCycle(const LoadJobPtr & job, LoadJobSet & left, LoadJobSet & visited, std::unique_lock<std::mutex> & lock)
|
||||
{
|
||||
if (!left.contains(job))
|
||||
return {}; // Do not consider external dependencies and already processed jobs
|
||||
@ -494,7 +503,7 @@ String AsyncLoader::checkCycleImpl(const LoadJobPtr & job, LoadJobSet & left, Lo
|
||||
}
|
||||
for (const auto & dep : job->dependencies)
|
||||
{
|
||||
if (auto chain = checkCycleImpl(dep, left, visited, lock); !chain.empty())
|
||||
if (auto chain = checkCycle(dep, left, visited, lock); !chain.empty())
|
||||
{
|
||||
if (!visited.contains(job)) // Check for cycle end
|
||||
throw Exception(ErrorCodes::ASYNC_LOAD_CYCLE, "Load job dependency cycle detected: {} -> {}", job->name, chain);
|
||||
@ -509,10 +518,11 @@ String AsyncLoader::checkCycleImpl(const LoadJobPtr & job, LoadJobSet & left, Lo
|
||||
void AsyncLoader::finish(const LoadJobPtr & job, LoadStatus status, std::exception_ptr exception_from_job, std::unique_lock<std::mutex> & lock)
|
||||
{
|
||||
chassert(scheduled_jobs.contains(job)); // Job was pending
|
||||
size_t resumed_workers = 0; // Number of workers resumed in the execution pool of the job
|
||||
if (status == LoadStatus::OK)
|
||||
{
|
||||
// Notify waiters
|
||||
job->ok();
|
||||
resumed_workers += job->ok();
|
||||
|
||||
// Update dependent jobs and enqueue if ready
|
||||
for (const auto & dep : scheduled_jobs[job].dependent_jobs)
|
||||
@ -528,9 +538,9 @@ void AsyncLoader::finish(const LoadJobPtr & job, LoadStatus status, std::excepti
|
||||
{
|
||||
// Notify waiters
|
||||
if (status == LoadStatus::FAILED)
|
||||
job->failed(exception_from_job);
|
||||
resumed_workers += job->failed(exception_from_job);
|
||||
else if (status == LoadStatus::CANCELED)
|
||||
job->canceled(exception_from_job);
|
||||
resumed_workers += job->canceled(exception_from_job);
|
||||
|
||||
Info & info = scheduled_jobs[job];
|
||||
if (info.isReady())
|
||||
@ -572,35 +582,40 @@ void AsyncLoader::finish(const LoadJobPtr & job, LoadStatus status, std::excepti
|
||||
if (log_progress)
|
||||
logAboutProgress(log, finished_jobs.size() - old_jobs, finished_jobs.size() + scheduled_jobs.size() - old_jobs, stopwatch);
|
||||
});
|
||||
|
||||
if (resumed_workers)
|
||||
{
|
||||
Pool & pool = pools[job->executionPool()];
|
||||
pool.suspended_workers -= resumed_workers;
|
||||
}
|
||||
}
|
||||
|
||||
void AsyncLoader::prioritize(const LoadJobPtr & job, size_t new_pool_id, std::unique_lock<std::mutex> & lock)
|
||||
{
|
||||
Pool & old_pool = pools[job->pool_id];
|
||||
Pool & new_pool = pools[new_pool_id];
|
||||
if (old_pool.priority <= new_pool.priority)
|
||||
return; // Never lower priority or change pool leaving the same priority
|
||||
|
||||
// Note that there is no point in prioritizing finished jobs, but because we do not lock `job.mutex` here (due to recursion),
|
||||
// Races are inevitable, so we prioritize all job unconditionally: both finished and pending.
|
||||
|
||||
if (auto info = scheduled_jobs.find(job); info != scheduled_jobs.end())
|
||||
{
|
||||
Pool & old_pool = pools[job->pool_id];
|
||||
Pool & new_pool = pools[new_pool_id];
|
||||
if (old_pool.priority <= new_pool.priority)
|
||||
return; // Never lower priority or change pool leaving the same priority
|
||||
|
||||
// Update priority and push job forward through ready queue if needed
|
||||
UInt64 ready_seqno = info->second.ready_seqno;
|
||||
|
||||
// Requeue job into the new pool queue without allocations
|
||||
if (ready_seqno)
|
||||
if (UInt64 ready_seqno = info->second.ready_seqno)
|
||||
{
|
||||
new_pool.ready_queue.insert(old_pool.ready_queue.extract(ready_seqno));
|
||||
if (canSpawnWorker(new_pool, lock))
|
||||
spawn(new_pool, lock);
|
||||
}
|
||||
|
||||
// Set user-facing pool (may affect executing jobs)
|
||||
job->pool_id.store(new_pool_id);
|
||||
|
||||
// Recurse into dependencies
|
||||
for (const auto & dep : job->dependencies)
|
||||
prioritize(dep, new_pool_id, lock);
|
||||
}
|
||||
|
||||
job->pool_id.store(new_pool_id);
|
||||
|
||||
// Recurse into dependencies
|
||||
for (const auto & dep : job->dependencies)
|
||||
prioritize(dep, new_pool_id, lock);
|
||||
}
|
||||
|
||||
void AsyncLoader::enqueue(Info & info, const LoadJobPtr & job, std::unique_lock<std::mutex> & lock)
|
||||
@ -620,11 +635,102 @@ void AsyncLoader::enqueue(Info & info, const LoadJobPtr & job, std::unique_lock<
|
||||
spawn(pool, lock);
|
||||
}
|
||||
|
||||
// Keep track of currently executing load jobs to be able to:
// 1) Detect "wait dependent" deadlocks -- throw LOGICAL_ERROR
//    (when job A function waits for job B that depends on job A)
// 2) Detect "wait not scheduled" deadlocks -- throw LOGICAL_ERROR
//    (thread T is waiting on an assigned job A, but job A is not yet scheduled)
// 3) Resolve "priority inversion" deadlocks -- apply priority inheritance
//    (when high-priority job A function waits for a lower-priority job B, and B never starts due to its priority)
// 4) Resolve "blocked pool" deadlocks -- spawn more workers
//    (when job A in pool P waits for another ready job B in P, but B never starts because there are no free workers in P)
thread_local LoadJob * current_load_job = nullptr;

size_t currentPoolOr(size_t pool)
{
    return current_load_job ? current_load_job->executionPool() : pool;
}

bool detectWaitDependentDeadlock(const LoadJobPtr & waited)
{
    if (waited.get() == current_load_job)
        return true;
    for (const auto & dep : waited->dependencies)
    {
        if (detectWaitDependentDeadlock(dep))
            return true;
    }
    return false;
}

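// A minimal sketch (job names and scheduling boilerplate are illustrative, not from this patch) of
// the "wait dependent" case that detectWaitDependentDeadlock() is meant to catch: job B depends on
// job A, while A's function waits for B. B can never start before A finishes, so without this check
// the wait would hang forever; with it, AsyncLoader::wait() throws LOGICAL_ERROR instead:
//
//     LoadJobPtr job_b;
//     auto job_a = makeLoadJob({}, "A", [&] (AsyncLoader & loader, const LoadJobPtr &)
//     {
//         loader.wait(job_b); // The waited job (transitively) depends on the current job => throws
//     });
//     job_b = makeLoadJob({ job_a }, "B", [] (AsyncLoader &, const LoadJobPtr &) {});
//     // ... both jobs are then added to a task and scheduled; the error fires once A executes.
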
void AsyncLoader::wait(std::unique_lock<std::mutex> & job_lock, const LoadJobPtr & job)
{
    // Ensure the job we are going to wait for was scheduled, to avoid "wait not scheduled" deadlocks
    if (job->job_id == 0)
        throw Exception(ErrorCodes::LOGICAL_ERROR, "Load job '{}' waits for not scheduled load job '{}'", current_load_job->name, job->name);

    // Deadlock detection and resolution
    if (current_load_job && job->load_status == LoadStatus::PENDING)
    {
        if (detectWaitDependentDeadlock(job))
            throw Exception(ErrorCodes::LOGICAL_ERROR, "Load job '{}' waits for dependent load job '{}'", current_load_job->name, job->name);

        auto worker_pool = current_load_job->executionPool();
        auto worker_priority = getPoolPriority(worker_pool);
        auto job_priority = getPoolPriority(job->pool_id);

        // Waiting for a lower-priority job ("priority inversion" deadlock) is resolved using priority inheritance.
        if (worker_priority < job_priority)
        {
            job_lock.unlock(); // Avoid reverse locking order
            prioritize(job, worker_pool);
            job_lock.lock();
        }

        // Spawn more workers to avoid exhaustion of the worker pool ("blocked pool" deadlock)
        if (worker_pool == job->pool_id)
        {
            job_lock.unlock(); // Avoid reverse locking order
            workerIsSuspendedByWait(worker_pool, job);
            job_lock.lock();
        }
    }

    Stopwatch watch;
    job->waiters++;
    job->finished.wait(job_lock, [&] { return job->load_status != LoadStatus::PENDING; });
    job->waiters--;
    ProfileEvents::increment(ProfileEvents::AsyncLoaderWaitMicroseconds, watch.elapsedMicroseconds());
}

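// A hypothetical illustration of the priority-inheritance branch above (pool ids and their relative
// priorities are assumptions of this sketch): a job running in high-priority pool 0 waits for a job
// sitting in lower-priority pool 1, so wait() first pulls the waited job into pool 0 via prioritize():
//
//     auto slow = makeLoadJob({}, /* pool_id = */ 1, "slow", [] (AsyncLoader &, const LoadJobPtr &) {});
//     auto fast = makeLoadJob({}, /* pool_id = */ 0, "fast", [&] (AsyncLoader & loader, const LoadJobPtr &)
//     {
//         loader.wait(slow); // "slow" is moved to pool 0 before blocking, so it cannot be starved
//     });
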
void AsyncLoader::workerIsSuspendedByWait(size_t pool_id, const LoadJobPtr & job)
{
    std::unique_lock lock{mutex};
    std::unique_lock job_lock{job->mutex};

    if (job->load_status != LoadStatus::PENDING)
        return; // Job is already done, worker can continue execution

    // To resolve "blocked pool" deadlocks we spawn a new worker for every suspended worker, if required.
    // This can lead to a visible excess of `max_threads` specified for a pool,
    // but the number of workers that are NOT suspended may exceed `max_threads` only transiently.
    Pool & pool = pools[pool_id];
    pool.suspended_workers++;
    job->suspended_waiters++;
    if (canSpawnWorker(pool, lock))
        spawn(pool, lock);

    // TODO(serxa): it is a good idea to propagate `job` and all its dependencies in `pool.ready_queue` by introducing
    // key {suspended_waiters, ready_seqno} instead of plain `ready_seqno`, to force newly spawned workers to work on jobs
    // that are being waited for. But it doesn't affect correctness. So let's not complicate it for the time being.
}

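// A minimal sketch of the "blocked pool" case this function resolves (a single-threaded pool is
// assumed for illustration): a job waits for another ready job of the same pool; with max_threads = 1
// the waited job could never start, because the only worker is busy waiting. Suspending that worker
// and spawning a replacement lets "inner" run and unblocks "outer":
//
//     auto inner = makeLoadJob({}, "inner", [] (AsyncLoader &, const LoadJobPtr &) {});
//     auto outer = makeLoadJob({}, "outer", [&] (AsyncLoader & loader, const LoadJobPtr &)
//     {
//         loader.wait(inner); // Same pool => wait() calls workerIsSuspendedByWait() and an extra worker is spawned
//     });
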
bool AsyncLoader::canSpawnWorker(Pool & pool, std::unique_lock<std::mutex> &)
|
||||
{
|
||||
// TODO(serxa): optimization: we should not spawn new worker on the first enqueue during `finish()` because current worker will take this job.
|
||||
return is_running
|
||||
&& !pool.ready_queue.empty()
|
||||
&& pool.workers < pool.max_threads
|
||||
&& pool.workers < pool.max_threads + pool.suspended_workers
|
||||
&& (!current_priority || *current_priority >= pool.priority);
|
||||
}
|
||||
|
||||
@ -632,7 +738,7 @@ bool AsyncLoader::canWorkerLive(Pool & pool, std::unique_lock<std::mutex> &)
|
||||
{
|
||||
return is_running
|
||||
&& !pool.ready_queue.empty()
|
||||
&& pool.workers <= pool.max_threads
|
||||
&& pool.workers <= pool.max_threads + pool.suspended_workers
|
||||
&& (!current_priority || *current_priority >= pool.priority);
|
||||
}
|
||||
|
||||
@ -705,7 +811,9 @@ void AsyncLoader::worker(Pool & pool)
|
||||
|
||||
try
|
||||
{
|
||||
job->execute(pool_id, job);
|
||||
current_load_job = job.get();
|
||||
SCOPE_EXIT({ current_load_job = nullptr; }); // Note that recursive job execution is not supported
|
||||
job->execute(*this, pool_id, job);
|
||||
exception_from_job = {};
|
||||
}
|
||||
catch (...)
|
||||
|
@ -21,6 +21,16 @@ namespace Poco { class Logger; }
|
||||
namespace DB
|
||||
{
|
||||
|
||||
// TERMINOLOGY:
// Job (`LoadJob`) - The smallest part of the loading process, executed by a worker. A job can depend on other jobs. Jobs are grouped into tasks.
// Task (`LoadTask`) - Owning holder of a set of jobs. Should be held during the whole job lifetime. Cancels all jobs on destruction.
// Goal jobs (goals) - A subset of "final" jobs of a task (usually no job in the task depends on a goal job).
//     By default all jobs in a task are included in its goal jobs.
//     Goals should be used if you need to create a job that depends on a task (to avoid placing all jobs of the task in dependencies).
// Pool (worker pool) - A set of workers with a specific priority. Every job is assigned to a pool. A job can change its pool dynamically.
// Priority (pool priority) - Constant integer value showing the relative priority of a pool. Lower value means higher priority.
// AsyncLoader - Scheduling system responsible for job dependency tracking and worker management respecting pool priorities.

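// A short sketch (job and loader names are illustrative) of why goals exist: to make a new job depend
// on a whole task without listing every job of that task, the new job takes the task's goal set as
// its dependencies:
//
//     auto task = makeLoadTask(async_loader, { job1, job2, final_job }, /* goals = */ { final_job });
//     auto next_job = makeLoadJob(task->goals(), "next", job_func); // depends only on final_job
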
class LoadJob;
|
||||
using LoadJobPtr = std::shared_ptr<LoadJob>;
|
||||
using LoadJobSet = std::unordered_set<LoadJobPtr>;
|
||||
@ -43,6 +53,7 @@ enum class LoadStatus
|
||||
// Smallest indivisible part of a loading process. A load job can have multiple dependencies, thus jobs constitute a directed acyclic graph (DAG).
// A job encapsulates a function to be executed by `AsyncLoader` as soon as the job functions of all its dependencies are successfully executed.
// A job can be waited for by an arbitrary number of threads. See `AsyncLoader` class description for more details.
// WARNING: Jobs are usually held with ownership by tasks (see `LoadTask`). You are encouraged to add jobs into a task as soon as they are created.
class LoadJob : private boost::noncopyable
|
||||
{
|
||||
public:
|
||||
@ -50,6 +61,7 @@ public:
|
||||
LoadJob(LoadJobSetType && dependencies_, String name_, size_t pool_id_, Func && func_)
|
||||
: dependencies(std::forward<LoadJobSetType>(dependencies_))
|
||||
, name(std::move(name_))
|
||||
, execution_pool_id(pool_id_)
|
||||
, pool_id(pool_id_)
|
||||
, func(std::forward<Func>(func_))
|
||||
{}
|
||||
@ -67,18 +79,12 @@ public:
|
||||
// Value may change during job execution by `prioritize()`.
|
||||
size_t pool() const;
|
||||
|
||||
// Sync wait for a pending job to be finished: OK, FAILED or CANCELED status.
|
||||
// Throws if job is FAILED or CANCELED. Returns or throws immediately if called on non-pending job.
|
||||
void wait() const;
|
||||
|
||||
// Wait for a job to reach any non PENDING status.
|
||||
void waitNoThrow() const noexcept;
|
||||
|
||||
// Returns number of threads blocked by `wait()` or `waitNoThrow()` calls.
|
||||
// Returns number of threads blocked by `wait()` calls.
|
||||
size_t waitersCount() const;
|
||||
|
||||
// Introspection
|
||||
using TimePoint = std::chrono::system_clock::time_point;
|
||||
UInt64 jobId() const { return job_id; }
|
||||
TimePoint scheduleTime() const { return schedule_time; }
|
||||
TimePoint enqueueTime() const { return enqueue_time; }
|
||||
TimePoint startTime() const { return start_time; }
|
||||
@ -90,22 +96,24 @@ public:
|
||||
private:
|
||||
friend class AsyncLoader;
|
||||
|
||||
void ok();
|
||||
void failed(const std::exception_ptr & ptr);
|
||||
void canceled(const std::exception_ptr & ptr);
|
||||
void finish();
|
||||
[[nodiscard]] size_t ok();
|
||||
[[nodiscard]] size_t failed(const std::exception_ptr & ptr);
|
||||
[[nodiscard]] size_t canceled(const std::exception_ptr & ptr);
|
||||
[[nodiscard]] size_t finish();
|
||||
|
||||
void scheduled();
|
||||
void scheduled(UInt64 job_id_);
|
||||
void enqueued();
|
||||
void execute(size_t pool, const LoadJobPtr & self);
|
||||
void execute(AsyncLoader & loader, size_t pool, const LoadJobPtr & self);
|
||||
|
||||
std::atomic<UInt64> job_id{0};
|
||||
std::atomic<size_t> execution_pool_id;
|
||||
std::atomic<size_t> pool_id;
|
||||
std::function<void(const LoadJobPtr & self)> func;
|
||||
std::function<void(AsyncLoader & loader, const LoadJobPtr & self)> func;
|
||||
|
||||
mutable std::mutex mutex;
|
||||
mutable std::condition_variable finished;
|
||||
mutable size_t waiters = 0;
|
||||
mutable size_t waiters = 0; // All waiters, including suspended
|
||||
mutable size_t suspended_waiters = 0;
|
||||
LoadStatus load_status{LoadStatus::PENDING};
|
||||
std::exception_ptr load_exception;
|
||||
|
||||
@ -117,7 +125,7 @@ private:
|
||||
|
||||
struct EmptyJobFunc
|
||||
{
|
||||
void operator()(const LoadJobPtr &) {}
|
||||
void operator()(AsyncLoader &, const LoadJobPtr &) {}
|
||||
};
|
||||
|
||||
template <class Func = EmptyJobFunc>
|
||||
@ -144,6 +152,7 @@ LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, size_t pool_id, String n
|
||||
return std::make_shared<LoadJob>(dependencies, std::move(name), pool_id, std::forward<Func>(func));
|
||||
}
|
||||
|
||||
|
||||
// Represents a logically connected set of LoadJobs required to achieve some goals (final LoadJob in the set).
|
||||
class LoadTask : private boost::noncopyable
|
||||
{
|
||||
@ -168,10 +177,11 @@ public:
|
||||
// auto load_task = loadSomethingAsync(async_loader, load_after_task.goals(), something);
|
||||
const LoadJobSet & goals() const { return goal_jobs.empty() ? jobs : goal_jobs; }
|
||||
|
||||
AsyncLoader & loader;
|
||||
|
||||
private:
|
||||
friend class AsyncLoader;
|
||||
|
||||
AsyncLoader & loader;
|
||||
LoadJobSet jobs;
|
||||
LoadJobSet goal_jobs;
|
||||
};
|
||||
@ -181,91 +191,6 @@ inline LoadTaskPtr makeLoadTask(AsyncLoader & loader, LoadJobSet && jobs, LoadJo
|
||||
return std::make_shared<LoadTask>(loader, std::move(jobs), std::move(goals));
|
||||
}
|
||||
|
||||
inline void scheduleLoad(const LoadTaskPtr & task)
|
||||
{
|
||||
task->schedule();
|
||||
}
|
||||
|
||||
inline void scheduleLoad(const LoadTaskPtrs & tasks)
|
||||
{
|
||||
for (const auto & task : tasks)
|
||||
task->schedule();
|
||||
}
|
||||
|
||||
template <class... Args>
|
||||
inline void scheduleLoadAll(Args && ... args)
|
||||
{
|
||||
(scheduleLoad(std::forward<Args>(args)), ...);
|
||||
}
|
||||
|
||||
inline void waitLoad(const LoadJobSet & jobs)
|
||||
{
|
||||
for (const auto & job : jobs)
|
||||
job->wait();
|
||||
}
|
||||
|
||||
inline void waitLoad(const LoadTaskPtr & task)
|
||||
{
|
||||
waitLoad(task->goals());
|
||||
}
|
||||
|
||||
inline void waitLoad(const LoadTaskPtrs & tasks)
|
||||
{
|
||||
for (const auto & task : tasks)
|
||||
waitLoad(task->goals());
|
||||
}
|
||||
|
||||
template <class... Args>
|
||||
inline void waitLoadAll(Args && ... args)
|
||||
{
|
||||
(waitLoad(std::forward<Args>(args)), ...);
|
||||
}
|
||||
|
||||
template <class... Args>
|
||||
inline void scheduleAndWaitLoadAll(Args && ... args)
|
||||
{
|
||||
scheduleLoadAll(std::forward<Args>(args)...);
|
||||
waitLoadAll(std::forward<Args>(args)...);
|
||||
}
|
||||
|
||||
inline LoadJobSet getGoals(const LoadTaskPtrs & tasks)
|
||||
{
|
||||
LoadJobSet result;
|
||||
for (const auto & task : tasks)
|
||||
result.insert(task->goals().begin(), task->goals().end());
|
||||
return result;
|
||||
}
|
||||
|
||||
inline LoadJobSet getGoalsOr(const LoadTaskPtrs & tasks, const LoadJobSet & alternative)
|
||||
{
|
||||
LoadJobSet result;
|
||||
for (const auto & task : tasks)
|
||||
result.insert(task->goals().begin(), task->goals().end());
|
||||
return result.empty() ? alternative : result;
|
||||
}
|
||||
|
||||
inline LoadJobSet joinJobs(const LoadJobSet & jobs1, const LoadJobSet & jobs2)
|
||||
{
|
||||
LoadJobSet result;
|
||||
if (!jobs1.empty())
|
||||
result.insert(jobs1.begin(), jobs1.end());
|
||||
if (!jobs2.empty())
|
||||
result.insert(jobs2.begin(), jobs2.end());
|
||||
return result;
|
||||
}
|
||||
|
||||
inline LoadTaskPtrs joinTasks(const LoadTaskPtrs & tasks1, const LoadTaskPtrs & tasks2)
|
||||
{
|
||||
if (tasks1.empty())
|
||||
return tasks2;
|
||||
if (tasks2.empty())
|
||||
return tasks1;
|
||||
LoadTaskPtrs result;
|
||||
result.reserve(tasks1.size() + tasks2.size());
|
||||
result.insert(result.end(), tasks1.begin(), tasks1.end());
|
||||
result.insert(result.end(), tasks2.begin(), tasks2.end());
|
||||
return result;
|
||||
}
|
||||
|
||||
// `AsyncLoader` is a scheduler for DAG of `LoadJob`s. It tracks job dependencies and priorities.
|
||||
// Basic usage example:
|
||||
@ -277,8 +202,8 @@ inline LoadTaskPtrs joinTasks(const LoadTaskPtrs & tasks1, const LoadTaskPtrs &
|
||||
//
|
||||
// // Create and schedule a task consisting of three jobs. Job1 has no dependencies and is run first.
|
||||
// // Job2 and job3 depend on job1 and are run only after job1 completion.
|
||||
// auto job_func = [&] (const LoadJobPtr & self) {
|
||||
// LOG_TRACE(log, "Executing load job '{}' in pool '{}'", self->name, async_loader->getPoolName(self->pool()));
|
||||
// auto job_func = [&] (AsyncLoader & loader, const LoadJobPtr & self) {
|
||||
// LOG_TRACE(log, "Executing load job '{}' in pool '{}'", self->name, loader->getPoolName(self->pool()));
|
||||
// };
|
||||
// auto job1 = makeLoadJob({}, "job1", /* pool_id = */ 1, job_func);
|
||||
// auto job2 = makeLoadJob({ job1 }, "job2", /* pool_id = */ 1, job_func);
|
||||
@ -287,8 +212,8 @@ inline LoadTaskPtrs joinTasks(const LoadTaskPtrs & tasks1, const LoadTaskPtrs &
|
||||
// task.schedule();
|
||||
//
|
||||
// // Another thread may prioritize a job by changing its pool and wait for it:
|
||||
// async_loader->prioritize(job3, /* pool_id = */ 0); // Increase priority: 1 -> 0 (lower is better)
|
||||
// job3->wait(); // Blocks until job completion or cancellation and rethrow an exception (if any)
|
||||
// async_loader.prioritize(job3, /* pool_id = */ 0); // Increase priority: 1 -> 0 (lower is better)
|
||||
// async_loader.wait(job3); // Blocks until job completion or cancellation and rethrow an exception (if any)
|
||||
//
|
||||
// Every job has a pool associated with it. AsyncLoader starts every job in its thread pool.
|
||||
// Each pool has a constant priority and a mutable maximum number of threads.
|
||||
@ -341,7 +266,8 @@ private:
|
||||
std::unique_ptr<ThreadPool> thread_pool; // NOTE: we avoid using a `ThreadPool` queue to be able to move jobs between pools.
|
||||
std::map<UInt64, LoadJobPtr> ready_queue; // FIFO queue of jobs to be executed in this pool. Map is used for faster erasing. Key is `ready_seqno`
|
||||
size_t max_threads; // Max number of workers to be spawn
|
||||
size_t workers = 0; // Number of currently execution workers
|
||||
size_t workers = 0; // Number of currently executing workers
|
||||
size_t suspended_workers = 0; // Number of workers that are blocked by `wait()` call on a job executing in the same pool (for deadlock resolution)
|
||||
|
||||
bool isActive() const { return workers > 0 || !ready_queue.empty(); }
|
||||
};
|
||||
@ -369,7 +295,7 @@ public:
|
||||
Metric metric_threads;
|
||||
Metric metric_active_threads;
|
||||
Metric metric_scheduled_threads;
|
||||
size_t max_threads;
|
||||
size_t max_threads; // Zero means use all CPU cores
|
||||
Priority priority;
|
||||
};
|
||||
|
||||
@ -399,6 +325,7 @@ public:
|
||||
// and are removed from AsyncLoader, so it is thread-safe to destroy them.
|
||||
void schedule(LoadTask & task);
|
||||
void schedule(const LoadTaskPtr & task);
|
||||
void schedule(const LoadJobSet & jobs_to_schedule);
|
||||
|
||||
// Schedule all tasks atomically. To ensure only highest priority jobs among all tasks are run first.
|
||||
void schedule(const LoadTaskPtrs & tasks);
|
||||
@ -407,6 +334,11 @@ public:
|
||||
// Jobs from higher (than `new_pool`) priority pools are not changed.
|
||||
void prioritize(const LoadJobPtr & job, size_t new_pool);
|
||||
|
||||
// Sync wait for a pending job to be finished: OK, FAILED or CANCELED status.
|
||||
// Throws if job is FAILED or CANCELED unless `no_throw` is set. Returns or throws immediately if called on non-pending job.
|
||||
// If job was not scheduled, it will be implicitly scheduled before the wait (deadlock auto-resolution).
|
||||
void wait(const LoadJobPtr & job, bool no_throw = false);
|
||||
|
||||
// Remove finished jobs, cancel scheduled jobs, wait for executing jobs to finish and remove them.
|
||||
void remove(const LoadJobSet & jobs);
|
||||
|
||||
@ -430,23 +362,26 @@ public:
|
||||
bool is_executing = false;
|
||||
};
|
||||
|
||||
// For introspection and debug only, see `system.async_loader` table
|
||||
// For introspection and debug only, see `system.async_loader` table.
|
||||
std::vector<JobState> getJobStates() const;
|
||||
|
||||
// For deadlock resolution. Should not be used directly.
|
||||
void workerIsSuspendedByWait(size_t pool_id, const LoadJobPtr & job);
|
||||
|
||||
private:
|
||||
void checkCycle(const LoadJobSet & jobs, std::unique_lock<std::mutex> & lock);
|
||||
String checkCycleImpl(const LoadJobPtr & job, LoadJobSet & left, LoadJobSet & visited, std::unique_lock<std::mutex> & lock);
|
||||
String checkCycle(const LoadJobPtr & job, LoadJobSet & left, LoadJobSet & visited, std::unique_lock<std::mutex> & lock);
|
||||
void finish(const LoadJobPtr & job, LoadStatus status, std::exception_ptr exception_from_job, std::unique_lock<std::mutex> & lock);
|
||||
void scheduleImpl(const LoadJobSet & input_jobs);
|
||||
void gatherNotScheduled(const LoadJobPtr & job, LoadJobSet & jobs, std::unique_lock<std::mutex> & lock);
|
||||
void prioritize(const LoadJobPtr & job, size_t new_pool_id, std::unique_lock<std::mutex> & lock);
|
||||
void enqueue(Info & info, const LoadJobPtr & job, std::unique_lock<std::mutex> & lock);
|
||||
bool canSpawnWorker(Pool & pool, std::unique_lock<std::mutex> &);
|
||||
bool canWorkerLive(Pool & pool, std::unique_lock<std::mutex> &);
|
||||
void updateCurrentPriorityAndSpawn(std::unique_lock<std::mutex> &);
|
||||
void spawn(Pool & pool, std::unique_lock<std::mutex> &);
|
||||
void wait(std::unique_lock<std::mutex> & job_lock, const LoadJobPtr & job);
|
||||
bool canSpawnWorker(Pool & pool, std::unique_lock<std::mutex> & lock);
|
||||
bool canWorkerLive(Pool & pool, std::unique_lock<std::mutex> & lock);
|
||||
void updateCurrentPriorityAndSpawn(std::unique_lock<std::mutex> & lock);
|
||||
void spawn(Pool & pool, std::unique_lock<std::mutex> & lock);
|
||||
void worker(Pool & pool);
|
||||
bool hasWorker(std::unique_lock<std::mutex> &) const;
|
||||
bool hasWorker(std::unique_lock<std::mutex> & lock) const;
|
||||
|
||||
// Logging
|
||||
const bool log_failures; // Worker should log all exceptions caught from job functions.
|
||||
@ -457,6 +392,7 @@ private:
|
||||
bool is_running = true;
|
||||
std::optional<Priority> current_priority; // highest priority among active pools
|
||||
UInt64 last_ready_seqno = 0; // Increasing counter for ready queue keys.
|
||||
UInt64 last_job_id = 0; // Increasing counter for job IDs
|
||||
std::unordered_map<LoadJobPtr, Info> scheduled_jobs; // Full set of scheduled pending jobs along with scheduling info.
|
||||
std::vector<Pool> pools; // Thread pools for job execution and ready queues
|
||||
LoadJobSet finished_jobs; // Set of finished jobs (for introspection only, until jobs are removed).
|
||||
@ -465,4 +401,136 @@ private:
|
||||
std::chrono::system_clock::time_point busy_period_start_time;
|
||||
};
|
||||
|
||||
// === HELPER FUNCTIONS ===
// There are three types of helper functions:
// scheduleLoad([loader], {jobs|task|tasks}):
//     Just schedule jobs for async loading.
//     Note that normally a function `doSomethingAsync()` returns you a task which is NOT scheduled.
//     This is done to allow you to:
//     (1) construct a complex dependency graph offline;
//     (2) schedule tasks simultaneously to respect their relative priorities;
//     (3) do prioritization independently, before scheduling.
// prioritizeLoad([loader], pool_id, {jobs|task|tasks}):
//     Prioritize jobs w/o waiting for them.
//     Note that prioritization may be done
//     (1) before scheduling (to ensure all jobs are started in the correct pools), or
//     (2) after scheduling (for dynamic prioritization, e.g. when a new query arrives).
// waitLoad([loader], pool_id, {jobs|task|tasks}, [no_throw]):
//     Prioritize and wait for jobs.
//     Note that to avoid deadlocks it implicitly schedules all the jobs before waiting for them.
//     Also, to avoid priority inversion you should never wait for a job that has a lower priority.
//     So it prioritizes all jobs, then schedules all jobs, and waits for every job.
//     IMPORTANT: Any load error will be rethrown, unless `no_throw` is set.
//     Common usage pattern is:
//         waitLoad(currentPoolOr(foreground_pool_id), tasks);

// Returns the current execution pool if called from a load job, or `pool` otherwise.
// It should be used when waiting for other load jobs in places that can be executed from load jobs.
size_t currentPoolOr(size_t pool);

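// A hedged usage sketch of the helpers declared below (the `loadTablesAsync()` / `loadDictionariesAsync()`
// producers are placeholders): tasks from several components are scheduled together so that their relative
// pool priorities are respected, and then waited for from the caller's own pool if this code itself runs
// inside a load job:
//
//     LoadTaskPtrs tasks = joinTasks(loadTablesAsync(loader), loadDictionariesAsync(loader));
//     scheduleLoad(tasks);                                          // schedule all tasks atomically
//     waitLoad(currentPoolOr(TablesLoaderForegroundPoolId), tasks); // prioritize, then wait; rethrows errors
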
inline void scheduleLoad(AsyncLoader & loader, const LoadJobSet & jobs)
|
||||
{
|
||||
loader.schedule(jobs);
|
||||
}
|
||||
|
||||
inline void scheduleLoad(const LoadTaskPtr & task)
|
||||
{
|
||||
task->schedule();
|
||||
}
|
||||
|
||||
inline void scheduleLoad(const LoadTaskPtrs & tasks)
|
||||
{
|
||||
if (tasks.empty())
|
||||
return;
|
||||
// NOTE: it is assumed that all tasks use the same `AsyncLoader`
|
||||
AsyncLoader & loader = tasks.front()->loader;
|
||||
loader.schedule(tasks);
|
||||
}
|
||||
|
||||
inline void waitLoad(AsyncLoader & loader, const LoadJobSet & jobs, bool no_throw = false)
|
||||
{
|
||||
scheduleLoad(loader, jobs);
|
||||
for (const auto & job : jobs)
|
||||
loader.wait(job, no_throw);
|
||||
}
|
||||
|
||||
inline void waitLoad(const LoadTaskPtr & task, bool no_throw = false)
|
||||
{
|
||||
scheduleLoad(task);
|
||||
waitLoad(task->loader, task->goals(), no_throw);
|
||||
}
|
||||
|
||||
inline void waitLoad(const LoadTaskPtrs & tasks, bool no_throw = false)
|
||||
{
|
||||
scheduleLoad(tasks);
|
||||
for (const auto & task : tasks)
|
||||
waitLoad(task->loader, task->goals(), no_throw);
|
||||
}
|
||||
|
||||
inline void prioritizeLoad(AsyncLoader & loader, size_t pool_id, const LoadJobSet & jobs)
|
||||
{
|
||||
for (const auto & job : jobs)
|
||||
loader.prioritize(job, pool_id);
|
||||
}
|
||||
|
||||
inline void prioritizeLoad(size_t pool_id, const LoadTaskPtr & task)
|
||||
{
|
||||
prioritizeLoad(task->loader, pool_id, task->goals());
|
||||
}
|
||||
|
||||
inline void prioritizeLoad(size_t pool_id, const LoadTaskPtrs & tasks)
|
||||
{
|
||||
for (const auto & task : tasks)
|
||||
prioritizeLoad(task->loader, pool_id, task->goals());
|
||||
}
|
||||
|
||||
inline void waitLoad(AsyncLoader & loader, size_t pool_id, const LoadJobSet & jobs, bool no_throw = false)
|
||||
{
|
||||
prioritizeLoad(loader, pool_id, jobs);
|
||||
waitLoad(loader, jobs, no_throw);
|
||||
}
|
||||
|
||||
inline void waitLoad(size_t pool_id, const LoadTaskPtr & task, bool no_throw = false)
|
||||
{
|
||||
prioritizeLoad(task->loader, pool_id, task->goals());
|
||||
waitLoad(task->loader, task->goals(), no_throw);
|
||||
}
|
||||
|
||||
inline void waitLoad(size_t pool_id, const LoadTaskPtrs & tasks, bool no_throw = false)
|
||||
{
|
||||
prioritizeLoad(pool_id, tasks);
|
||||
waitLoad(tasks, no_throw);
|
||||
}
|
||||
|
||||
inline LoadJobSet getGoals(const LoadTaskPtrs & tasks, const LoadJobSet & alternative = {})
|
||||
{
|
||||
LoadJobSet result;
|
||||
for (const auto & task : tasks)
|
||||
result.insert(task->goals().begin(), task->goals().end());
|
||||
return result.empty() ? alternative : result;
|
||||
}
|
||||
|
||||
inline LoadJobSet joinJobs(const LoadJobSet & jobs1, const LoadJobSet & jobs2)
|
||||
{
|
||||
LoadJobSet result;
|
||||
if (!jobs1.empty())
|
||||
result.insert(jobs1.begin(), jobs1.end());
|
||||
if (!jobs2.empty())
|
||||
result.insert(jobs2.begin(), jobs2.end());
|
||||
return result;
|
||||
}
|
||||
|
||||
inline LoadTaskPtrs joinTasks(const LoadTaskPtrs & tasks1, const LoadTaskPtrs & tasks2)
|
||||
{
|
||||
if (tasks1.empty())
|
||||
return tasks2;
|
||||
if (tasks2.empty())
|
||||
return tasks1;
|
||||
LoadTaskPtrs result;
|
||||
result.reserve(tasks1.size() + tasks2.size());
|
||||
result.insert(result.end(), tasks1.begin(), tasks1.end());
|
||||
result.insert(result.end(), tasks2.begin(), tasks2.end());
|
||||
return result;
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -110,12 +110,12 @@
|
||||
M(StorageHiveThreads, "Number of threads in the StorageHive thread pool.") \
|
||||
M(StorageHiveThreadsActive, "Number of threads in the StorageHive thread pool running a task.") \
|
||||
M(StorageHiveThreadsScheduled, "Number of queued or active jobs in the StorageHive thread pool.") \
|
||||
M(TablesLoaderThreads, "Number of threads in the tables loader thread pool.") \
|
||||
M(TablesLoaderThreadsActive, "Number of threads in the tables loader thread pool running a task.") \
|
||||
M(TablesLoaderThreadsScheduled, "Number of queued or active jobs in the tables loader thread pool.") \
|
||||
M(DatabaseOrdinaryThreads, "Number of threads in the Ordinary database thread pool.") \
|
||||
M(DatabaseOrdinaryThreadsActive, "Number of threads in the Ordinary database thread pool running a task.") \
|
||||
M(DatabaseOrdinaryThreadsScheduled, "Number of queued or active jobs in the Ordinary database thread pool.") \
|
||||
M(TablesLoaderBackgroundThreads, "Number of threads in the tables loader background thread pool.") \
|
||||
M(TablesLoaderBackgroundThreadsActive, "Number of threads in the tables loader background thread pool running a task.") \
|
||||
M(TablesLoaderBackgroundThreadsScheduled, "Number of queued or active jobs in the tables loader background thread pool.") \
|
||||
M(TablesLoaderForegroundThreads, "Number of threads in the tables loader foreground thread pool.") \
|
||||
M(TablesLoaderForegroundThreadsActive, "Number of threads in the tables loader foreground thread pool running a task.") \
|
||||
M(TablesLoaderForegroundThreadsScheduled, "Number of queued or active jobs in the tables loader foreground thread pool.") \
|
||||
M(DatabaseOnDiskThreads, "Number of threads in the DatabaseOnDisk thread pool.") \
|
||||
M(DatabaseOnDiskThreadsActive, "Number of threads in the DatabaseOnDisk thread pool running a task.") \
|
||||
M(DatabaseOnDiskThreadsScheduled, "Number of queued or active jobs in the DatabaseOnDisk thread pool.") \
|
||||
|
@ -588,6 +588,7 @@
|
||||
M(706, LIBSSH_ERROR) \
|
||||
M(707, GCP_ERROR) \
|
||||
M(708, ILLEGAL_STATISTIC) \
|
||||
M(709, CANNOT_GET_REPLICATED_DATABASE_SNAPSHOT) \
|
||||
\
|
||||
M(999, KEEPER_EXCEPTION) \
|
||||
M(1000, POCO_EXCEPTION) \
|
||||
|
32
src/Common/PoolId.h
Normal file
@ -0,0 +1,32 @@
|
||||
#pragma once

#include <Common/Priority.h>

namespace DB
{

/// Indices and priorities of `AsyncLoader` pools.

/// The most important difference from regular ThreadPools is the priorities of pools:
///  * Pools that have different priorities do NOT run jobs simultaneously (with a small exception due to dynamic prioritization).
///  * Pools with lower priority wait for all jobs in higher priority pools to be done.

/// Note that pools also have different configurable sizes not listed here. See `Context::getAsyncLoader()` for details.

/// WARNING: `*PoolId` values must be unique and sequential w/o gaps.

/// Used for executing load jobs that are waited for by queries or in case of synchronous table loading.
constexpr size_t TablesLoaderForegroundPoolId = 0;
constexpr Priority TablesLoaderForegroundPriority{0};

/// Has lower priority and is used by table load jobs.
constexpr size_t TablesLoaderBackgroundLoadPoolId = 1;
constexpr Priority TablesLoaderBackgroundLoadPriority{1};

/// Has even lower priority and is used by startup jobs.
/// NOTE: A separate pool is required so that table startup begins only after all tables are loaded.
/// NOTE: This is needed to prevent heavy merges/mutations from consuming all the resources and slowing table loading down.
constexpr size_t TablesLoaderBackgroundStartupPoolId = 2;
constexpr Priority TablesLoaderBackgroundStartupPriority{2};

}
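
/// A brief, non-normative example of how these constants are combined with the `AsyncLoader` helpers
/// (the `startup_task` variable is illustrative): a query that needs a table right away pulls the
/// corresponding jobs into the foreground pool and blocks, overriding their background priority:
///
///     waitLoad(TablesLoaderForegroundPoolId, startup_task); // prioritize to pool 0, schedule if needed, wait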
|
@ -574,6 +574,8 @@ The server successfully detected this situation and will download merged part fr
|
||||
\
|
||||
M(ConnectionPoolIsFullMicroseconds, "Total time spent waiting for a slot in connection pool.") \
|
||||
\
|
||||
M(AsyncLoaderWaitMicroseconds, "Total time a query was waiting for async loader jobs.") \
|
||||
\
|
||||
M(LogTest, "Number of log messages with level Test") \
|
||||
M(LogTrace, "Number of log messages with level Trace") \
|
||||
M(LogDebug, "Number of log messages with level Debug") \
|
||||
|
@ -1,8 +1,12 @@
|
||||
#include <boost/core/noncopyable.hpp>
|
||||
#include <gtest/gtest.h>
|
||||
|
||||
#include <array>
|
||||
#include <list>
|
||||
#include <barrier>
|
||||
#include <chrono>
|
||||
#include <mutex>
|
||||
#include <shared_mutex>
|
||||
#include <stdexcept>
|
||||
#include <string_view>
|
||||
#include <vector>
|
||||
@ -19,9 +23,9 @@ using namespace DB;
|
||||
|
||||
namespace CurrentMetrics
|
||||
{
|
||||
extern const Metric TablesLoaderThreads;
|
||||
extern const Metric TablesLoaderThreadsActive;
|
||||
extern const Metric TablesLoaderThreadsScheduled;
|
||||
extern const Metric TablesLoaderBackgroundThreads;
|
||||
extern const Metric TablesLoaderBackgroundThreadsActive;
|
||||
extern const Metric TablesLoaderBackgroundThreadsScheduled;
|
||||
}
|
||||
|
||||
namespace DB::ErrorCodes
|
||||
@ -61,9 +65,9 @@ struct AsyncLoaderTest
|
||||
{
|
||||
result.push_back({
|
||||
.name = fmt::format("Pool{}", pool_id),
|
||||
.metric_threads = CurrentMetrics::TablesLoaderThreads,
|
||||
.metric_active_threads = CurrentMetrics::TablesLoaderThreadsActive,
|
||||
.metric_scheduled_threads = CurrentMetrics::TablesLoaderThreadsScheduled,
|
||||
.metric_threads = CurrentMetrics::TablesLoaderBackgroundThreads,
|
||||
.metric_active_threads = CurrentMetrics::TablesLoaderBackgroundThreadsActive,
|
||||
.metric_scheduled_threads = CurrentMetrics::TablesLoaderBackgroundThreadsScheduled,
|
||||
.max_threads = desc.max_threads,
|
||||
.priority = desc.priority
|
||||
});
|
||||
@ -155,7 +159,7 @@ TEST(AsyncLoader, Smoke)
|
||||
std::atomic<size_t> jobs_done{0};
|
||||
std::atomic<size_t> low_priority_jobs_done{0};
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr & self) {
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr & self) {
|
||||
jobs_done++;
|
||||
if (self->pool() == low_priority_pool)
|
||||
low_priority_jobs_done++;
|
||||
@ -172,13 +176,13 @@ TEST(AsyncLoader, Smoke)
|
||||
auto job5 = makeLoadJob({ job3, job4 }, low_priority_pool, "job5", job_func);
|
||||
task2->merge(t.schedule({ job5 }));
|
||||
|
||||
std::thread waiter_thread([=] { job5->wait(); });
|
||||
std::thread waiter_thread([&t, job5] { t.loader.wait(job5); });
|
||||
|
||||
t.loader.start();
|
||||
|
||||
job3->wait();
|
||||
t.loader.wait(job3);
|
||||
t.loader.wait();
|
||||
job4->wait();
|
||||
t.loader.wait(job4);
|
||||
|
||||
waiter_thread.join();
|
||||
|
||||
@ -196,7 +200,7 @@ TEST(AsyncLoader, CycleDetection)
|
||||
{
|
||||
AsyncLoaderTest t;
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr &) {};
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &) {};
|
||||
|
||||
LoadJobPtr cycle_breaker; // To avoid memleak we introduce with a cycle
|
||||
|
||||
@ -241,7 +245,7 @@ TEST(AsyncLoader, CancelPendingJob)
|
||||
{
|
||||
AsyncLoaderTest t;
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr &) {};
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &) {};
|
||||
|
||||
auto job = makeLoadJob({}, "job", job_func);
|
||||
auto task = t.schedule({ job });
|
||||
@ -251,7 +255,7 @@ TEST(AsyncLoader, CancelPendingJob)
|
||||
ASSERT_EQ(job->status(), LoadStatus::CANCELED);
|
||||
try
|
||||
{
|
||||
job->wait();
|
||||
t.loader.wait(job);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -264,7 +268,7 @@ TEST(AsyncLoader, CancelPendingTask)
|
||||
{
|
||||
AsyncLoaderTest t;
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr &) {};
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &) {};
|
||||
|
||||
auto job1 = makeLoadJob({}, "job1", job_func);
|
||||
auto job2 = makeLoadJob({ job1 }, "job2", job_func);
|
||||
@ -277,7 +281,7 @@ TEST(AsyncLoader, CancelPendingTask)
|
||||
|
||||
try
|
||||
{
|
||||
job1->wait();
|
||||
t.loader.wait(job1);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -287,7 +291,7 @@ TEST(AsyncLoader, CancelPendingTask)
|
||||
|
||||
try
|
||||
{
|
||||
job2->wait();
|
||||
t.loader.wait(job2);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -300,7 +304,7 @@ TEST(AsyncLoader, CancelPendingDependency)
|
||||
{
|
||||
AsyncLoaderTest t;
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr &) {};
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &) {};
|
||||
|
||||
auto job1 = makeLoadJob({}, "job1", job_func);
|
||||
auto job2 = makeLoadJob({ job1 }, "job2", job_func);
|
||||
@ -314,7 +318,7 @@ TEST(AsyncLoader, CancelPendingDependency)
|
||||
|
||||
try
|
||||
{
|
||||
job1->wait();
|
||||
t.loader.wait(job1);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -324,7 +328,7 @@ TEST(AsyncLoader, CancelPendingDependency)
|
||||
|
||||
try
|
||||
{
|
||||
job2->wait();
|
||||
t.loader.wait(job2);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -340,7 +344,7 @@ TEST(AsyncLoader, CancelExecutingJob)
|
||||
|
||||
std::barrier sync(2);
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr &)
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
sync.arrive_and_wait(); // (A) sync with main thread
|
||||
sync.arrive_and_wait(); // (B) wait for waiter
|
||||
@ -362,7 +366,7 @@ TEST(AsyncLoader, CancelExecutingJob)
|
||||
canceler.join();
|
||||
|
||||
ASSERT_EQ(job->status(), LoadStatus::OK);
|
||||
job->wait();
|
||||
t.loader.wait(job);
|
||||
}
|
||||
|
||||
TEST(AsyncLoader, CancelExecutingTask)
|
||||
@ -371,19 +375,19 @@ TEST(AsyncLoader, CancelExecutingTask)
|
||||
t.loader.start();
|
||||
std::barrier sync(2);
|
||||
|
||||
auto blocker_job_func = [&] (const LoadJobPtr &)
|
||||
auto blocker_job_func = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
sync.arrive_and_wait(); // (A) sync with main thread
|
||||
sync.arrive_and_wait(); // (B) wait for waiter
|
||||
// signals (C)
|
||||
};
|
||||
|
||||
auto job_to_cancel_func = [&] (const LoadJobPtr &)
|
||||
auto job_to_cancel_func = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
FAIL(); // this job should be canceled
|
||||
};
|
||||
|
||||
auto job_to_succeed_func = [&] (const LoadJobPtr &)
|
||||
auto job_to_succeed_func = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
};
|
||||
|
||||
@ -430,7 +434,7 @@ TEST(AsyncLoader, DISABLED_JobFailure)
|
||||
|
||||
std::string error_message = "test job failure";
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr &) {
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &) {
|
||||
throw std::runtime_error(error_message);
|
||||
};
|
||||
|
||||
@ -442,7 +446,7 @@ TEST(AsyncLoader, DISABLED_JobFailure)
|
||||
ASSERT_EQ(job->status(), LoadStatus::FAILED);
|
||||
try
|
||||
{
|
||||
job->wait();
|
||||
t.loader.wait(job);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -459,7 +463,7 @@ TEST(AsyncLoader, ScheduleJobWithFailedDependencies)
|
||||
|
||||
std::string_view error_message = "test job failure";
|
||||
|
||||
auto failed_job_func = [&] (const LoadJobPtr &) {
|
||||
auto failed_job_func = [&] (AsyncLoader &, const LoadJobPtr &) {
|
||||
throw Exception(ErrorCodes::ASYNC_LOAD_FAILED, "{}", error_message);
|
||||
};
|
||||
|
||||
@ -468,7 +472,7 @@ TEST(AsyncLoader, ScheduleJobWithFailedDependencies)
|
||||
|
||||
t.loader.wait();
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr &) {};
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &) {};
|
||||
|
||||
auto job1 = makeLoadJob({ failed_job }, "job1", job_func);
|
||||
auto job2 = makeLoadJob({ job1 }, "job2", job_func);
|
||||
@ -480,7 +484,7 @@ TEST(AsyncLoader, ScheduleJobWithFailedDependencies)
|
||||
ASSERT_EQ(job2->status(), LoadStatus::CANCELED);
|
||||
try
|
||||
{
|
||||
job1->wait();
|
||||
t.loader.wait(job1);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -490,7 +494,7 @@ TEST(AsyncLoader, ScheduleJobWithFailedDependencies)
|
||||
}
|
||||
try
|
||||
{
|
||||
job2->wait();
|
||||
t.loader.wait(job2);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -504,14 +508,14 @@ TEST(AsyncLoader, ScheduleJobWithCanceledDependencies)
|
||||
{
|
||||
AsyncLoaderTest t;
|
||||
|
||||
auto canceled_job_func = [&] (const LoadJobPtr &) {};
|
||||
auto canceled_job_func = [&] (AsyncLoader &, const LoadJobPtr &) {};
|
||||
auto canceled_job = makeLoadJob({}, "canceled_job", canceled_job_func);
|
||||
auto canceled_task = t.schedule({ canceled_job });
|
||||
canceled_task->remove();
|
||||
|
||||
t.loader.start();
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr &) {};
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &) {};
|
||||
auto job1 = makeLoadJob({ canceled_job }, "job1", job_func);
|
||||
auto job2 = makeLoadJob({ job1 }, "job2", job_func);
|
||||
auto task = t.schedule({ job1, job2 });
|
||||
@ -522,7 +526,7 @@ TEST(AsyncLoader, ScheduleJobWithCanceledDependencies)
|
||||
ASSERT_EQ(job2->status(), LoadStatus::CANCELED);
|
||||
try
|
||||
{
|
||||
job1->wait();
|
||||
t.loader.wait(job1);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -531,7 +535,7 @@ TEST(AsyncLoader, ScheduleJobWithCanceledDependencies)
|
||||
}
|
||||
try
|
||||
{
|
||||
job2->wait();
|
||||
t.loader.wait(job2);
|
||||
FAIL();
|
||||
}
|
||||
catch (Exception & e)
|
||||
@ -550,7 +554,7 @@ TEST(AsyncLoader, TestConcurrency)
|
||||
std::barrier sync(concurrency);
|
||||
|
||||
std::atomic<int> executing{0};
|
||||
auto job_func = [&] (const LoadJobPtr &)
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
executing++;
|
||||
ASSERT_LE(executing, concurrency);
|
||||
@ -577,7 +581,7 @@ TEST(AsyncLoader, TestOverload)
|
||||
|
||||
for (int concurrency = 4; concurrency <= 8; concurrency++)
|
||||
{
|
||||
auto job_func = [&] (const LoadJobPtr &)
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
executing++;
|
||||
t.randomSleepUs(100, 200, 100);
|
||||
@ -613,7 +617,7 @@ TEST(AsyncLoader, StaticPriorities)
|
||||
|
||||
std::string schedule;
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr & self)
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr & self)
|
||||
{
|
||||
schedule += fmt::format("{}{}", self->name, self->pool());
|
||||
};
|
||||
@ -656,18 +660,18 @@ TEST(AsyncLoader, SimplePrioritization)
|
||||
std::atomic<int> executed{0}; // Number of previously executed jobs (to test execution order)
|
||||
LoadJobPtr job_to_prioritize;
|
||||
|
||||
auto job_func_A_booster = [&] (const LoadJobPtr &)
|
||||
auto job_func_A_booster = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
ASSERT_EQ(executed++, 0);
|
||||
t.loader.prioritize(job_to_prioritize, 2);
|
||||
};
|
||||
|
||||
auto job_func_B_tester = [&] (const LoadJobPtr &)
|
||||
auto job_func_B_tester = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
ASSERT_EQ(executed++, 2);
|
||||
};
|
||||
|
||||
auto job_func_C_boosted = [&] (const LoadJobPtr &)
|
||||
auto job_func_C_boosted = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
ASSERT_EQ(executed++, 1);
|
||||
};
|
||||
@ -680,7 +684,8 @@ TEST(AsyncLoader, SimplePrioritization)
|
||||
|
||||
job_to_prioritize = jobs[2]; // C
|
||||
|
||||
scheduleAndWaitLoadAll(task);
|
||||
scheduleLoad(task);
|
||||
waitLoad(task);
|
||||
}
|
||||
|
||||
TEST(AsyncLoader, DynamicPriorities)
|
||||
@ -714,7 +719,7 @@ TEST(AsyncLoader, DynamicPriorities)
|
||||
UInt64 ready_seqno_D = 0;
|
||||
UInt64 ready_seqno_E = 0;
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr & self)
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr & self)
|
||||
{
|
||||
{
|
||||
std::unique_lock lock{schedule_mutex};
|
||||
@ -791,7 +796,7 @@ TEST(AsyncLoader, RandomIndependentTasks)
|
||||
AsyncLoaderTest t(16);
|
||||
t.loader.start();
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr & self)
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr & self)
|
||||
{
|
||||
for (const auto & dep : self->dependencies)
|
||||
ASSERT_EQ(dep->status(), LoadStatus::OK);
|
||||
@ -818,7 +823,7 @@ TEST(AsyncLoader, RandomDependentTasks)
|
||||
std::vector<LoadTaskPtr> tasks;
|
||||
std::vector<LoadJobPtr> all_jobs;
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr & self)
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr & self)
|
||||
{
|
||||
for (const auto & dep : self->dependencies)
|
||||
ASSERT_EQ(dep->status(), LoadStatus::OK);
|
||||
@ -860,7 +865,7 @@ TEST(AsyncLoader, SetMaxThreads)
|
||||
syncs.push_back(std::make_unique<std::barrier<>>(max_threads + 1));
|
||||
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr &)
|
||||
auto job_func = [&] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
int idx = sync_index;
|
||||
if (idx < syncs.size())
|
||||
@ -914,10 +919,11 @@ TEST(AsyncLoader, DynamicPools)
|
||||
{
|
||||
std::atomic<bool> boosted{false}; // Visible concurrency was increased
|
||||
std::atomic<int> left{concurrency * jobs_in_chain / 2}; // Number of jobs to start before `prioritize()` call
|
||||
std::shared_mutex prioritization_mutex; // To slow down job execution during prioritization to avoid race condition
|
||||
|
||||
LoadJobSet jobs_to_prioritize;
|
||||
|
||||
auto job_func = [&] (const LoadJobPtr & self)
|
||||
auto job_func = [&] (AsyncLoader & loader, const LoadJobPtr & self)
|
||||
{
|
||||
auto pool_id = self->executionPool();
|
||||
executing[pool_id]++;
|
||||
@ -928,10 +934,12 @@ TEST(AsyncLoader, DynamicPools)
|
||||
// Dynamic prioritization
|
||||
if (--left == 0)
|
||||
{
|
||||
std::unique_lock lock{prioritization_mutex};
|
||||
for (const auto & job : jobs_to_prioritize)
|
||||
t.loader.prioritize(job, 1);
|
||||
loader.prioritize(job, 1);
|
||||
}
|
||||
|
||||
std::shared_lock lock{prioritization_mutex};
|
||||
t.randomSleepUs(100, 200, 100);
|
||||
|
||||
ASSERT_LE(executing[pool_id], max_threads[pool_id]);
|
||||
@ -941,9 +949,10 @@ TEST(AsyncLoader, DynamicPools)
|
||||
std::vector<LoadTaskPtr> tasks;
|
||||
tasks.reserve(concurrency);
|
||||
for (int i = 0; i < concurrency; i++)
|
||||
tasks.push_back(makeLoadTask(t.loader, t.chainJobSet(jobs_in_chain, job_func)));
|
||||
tasks.push_back(makeLoadTask(t.loader, t.chainJobSet(jobs_in_chain, job_func, fmt::format("c{}-j", i))));
|
||||
jobs_to_prioritize = getGoals(tasks); // All jobs
|
||||
scheduleAndWaitLoadAll(tasks);
|
||||
scheduleLoad(tasks);
|
||||
waitLoad(tasks);
|
||||
|
||||
ASSERT_EQ(executing[0], 0);
|
||||
ASSERT_EQ(executing[1], 0);
|
||||
@ -952,3 +961,136 @@ TEST(AsyncLoader, DynamicPools)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
TEST(AsyncLoader, SubJobs)
|
||||
{
|
||||
AsyncLoaderTest t(1);
|
||||
t.loader.start();
|
||||
|
||||
// An example of component with an asynchronous loading interface
|
||||
class MyComponent : boost::noncopyable {
|
||||
public:
|
||||
MyComponent(AsyncLoader & loader_, int jobs)
|
||||
: loader(loader_)
|
||||
, jobs_left(jobs)
|
||||
{}
|
||||
|
||||
[[nodiscard]] LoadTaskPtr loadAsync()
|
||||
{
|
||||
auto job_func = [this] (AsyncLoader &, const LoadJobPtr &) {
|
||||
auto sub_job_func = [this] (AsyncLoader &, const LoadJobPtr &) {
|
||||
--jobs_left;
|
||||
};
|
||||
LoadJobSet jobs;
|
||||
for (size_t j = 0; j < jobs_left; j++)
|
||||
jobs.insert(makeLoadJob({}, fmt::format("sub job {}", j), sub_job_func));
|
||||
waitLoad(makeLoadTask(loader, std::move(jobs)));
|
||||
};
|
||||
auto job = makeLoadJob({}, "main job", job_func);
|
||||
return load_task = makeLoadTask(loader, { job });
|
||||
}
|
||||
|
||||
bool isLoaded() const
|
||||
{
|
||||
return jobs_left == 0;
|
||||
}
|
||||
|
||||
private:
|
||||
AsyncLoader & loader;
|
||||
std::atomic<int> jobs_left;
|
||||
// It is a good practice to keep load task inside the component:
|
||||
// 1) to make sure it outlives its load jobs;
|
||||
// 2) to avoid removing load jobs from `system.async_loader` while we use the component
|
||||
LoadTaskPtr load_task;
|
||||
};
|
||||
|
||||
for (double jobs_per_thread : std::array{0.5, 1.0, 2.0})
|
||||
{
|
||||
for (size_t threads = 1; threads <= 32; threads *= 2)
|
||||
{
|
||||
t.loader.setMaxThreads(0, threads);
|
||||
std::list<MyComponent> components;
|
||||
LoadTaskPtrs tasks;
|
||||
size_t size = static_cast<size_t>(jobs_per_thread * threads);
|
||||
tasks.reserve(size);
|
||||
for (size_t j = 0; j < size; j++)
|
||||
{
|
||||
components.emplace_back(t.loader, 5);
|
||||
tasks.emplace_back(components.back().loadAsync());
|
||||
}
|
||||
waitLoad(tasks);
|
||||
for (const auto & component: components)
|
||||
ASSERT_TRUE(component.isLoaded());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
TEST(AsyncLoader, RecursiveJob)
|
||||
{
|
||||
AsyncLoaderTest t(1);
|
||||
t.loader.start();
|
||||
|
||||
// An example of component with an asynchronous loading interface (a complicated one)
|
||||
class MyComponent : boost::noncopyable {
|
||||
public:
|
||||
MyComponent(AsyncLoader & loader_, int jobs)
|
||||
: loader(loader_)
|
||||
, jobs_left(jobs)
|
||||
{}
|
||||
|
||||
[[nodiscard]] LoadTaskPtr loadAsync()
|
||||
{
|
||||
return load_task = loadAsyncImpl(jobs_left);
|
||||
}
|
||||
|
||||
bool isLoaded() const
|
||||
{
|
||||
return jobs_left == 0;
|
||||
}
|
||||
|
||||
private:
|
||||
[[nodiscard]] LoadTaskPtr loadAsyncImpl(int id)
|
||||
{
|
||||
auto job_func = [this] (AsyncLoader &, const LoadJobPtr & self) {
|
||||
jobFunction(self);
|
||||
};
|
||||
auto job = makeLoadJob({}, fmt::format("job{}", id), job_func);
|
||||
auto task = makeLoadTask(loader, { job });
|
||||
return task;
|
||||
}
|
||||
|
||||
void jobFunction(const LoadJobPtr & self)
|
||||
{
|
||||
int next = --jobs_left;
|
||||
if (next > 0)
|
||||
waitLoad(self->pool(), loadAsyncImpl(next));
|
||||
}
|
||||
|
||||
AsyncLoader & loader;
|
||||
std::atomic<int> jobs_left;
|
||||
// It is a good practice to keep load task inside the component:
|
||||
// 1) to make sure it outlives its load jobs;
|
||||
// 2) to avoid removing load jobs from `system.async_loader` while we use the component
|
||||
LoadTaskPtr load_task;
|
||||
};
|
||||
|
||||
for (double jobs_per_thread : std::array{0.5, 1.0, 2.0})
|
||||
{
|
||||
for (size_t threads = 1; threads <= 32; threads *= 2)
|
||||
{
|
||||
t.loader.setMaxThreads(0, threads);
|
||||
std::list<MyComponent> components;
|
||||
LoadTaskPtrs tasks;
|
||||
size_t size = static_cast<size_t>(jobs_per_thread * threads);
|
||||
tasks.reserve(size);
|
||||
for (size_t j = 0; j < size; j++)
|
||||
{
|
||||
components.emplace_back(t.loader, 5);
|
||||
tasks.emplace_back(components.back().loadAsync());
|
||||
}
|
||||
waitLoad(tasks);
|
||||
for (const auto & component: components)
|
||||
ASSERT_TRUE(component.isLoaded());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -101,6 +101,7 @@ void KeeperSnapshotManagerS3::updateS3Configuration(const Poco::Util::AbstractCo
|
||||
auto client = S3::ClientFactory::instance().create(
|
||||
client_configuration,
|
||||
new_uri.is_virtual_hosted_style,
|
||||
/* disable_checksum= */ false,
|
||||
credentials.GetAWSAccessKeyId(),
|
||||
credentials.GetAWSSecretKey(),
|
||||
auth_settings.server_side_encryption_customer_key_base64,
|
||||
|
@ -92,14 +92,15 @@ namespace DB
|
||||
M(UInt64, background_schedule_pool_size, 512, "The maximum number of threads that will be used for constantly executing some lightweight periodic operations.", 0) \
|
||||
M(UInt64, background_message_broker_schedule_pool_size, 16, "The maximum number of threads that will be used for executing background operations for message streaming.", 0) \
|
||||
M(UInt64, background_distributed_schedule_pool_size, 16, "The maximum number of threads that will be used for executing distributed sends.", 0) \
|
||||
M(UInt64, tables_loader_foreground_pool_size, 0, "The maximum number of threads that will be used for foreground (that is being waited for by a query) loading of tables. Also used for synchronous loading of tables before the server start. Zero means use all CPUs.", 0) \
|
||||
M(UInt64, tables_loader_background_pool_size, 0, "The maximum number of threads that will be used for background async loading of tables. Zero means use all CPUs.", 0) \
|
||||
M(Bool, async_load_databases, false, "Enable asynchronous loading of databases and tables to speedup server startup. Queries to not yet loaded entity will be blocked until load is finished.", 0) \
|
||||
M(Bool, display_secrets_in_show_and_select, false, "Allow showing secrets in SHOW and SELECT queries via a format setting and a grant", 0) \
|
||||
\
|
||||
M(UInt64, total_memory_profiler_step, 0, "Whenever server memory usage becomes larger than every next step in number of bytes the memory profiler will collect the allocating stack trace. Zero means disabled memory profiler. Values lower than a few megabytes will slow down server.", 0) \
|
||||
M(Double, total_memory_tracker_sample_probability, 0, "Collect random allocations and deallocations and write them into system.trace_log with 'MemorySample' trace_type. The probability is for every alloc/free regardless to the size of the allocation (can be changed with `memory_profiler_sample_min_allocation_size` and `memory_profiler_sample_max_allocation_size`). Note that sampling happens only when the amount of untracked memory exceeds 'max_untracked_memory'. You may want to set 'max_untracked_memory' to 0 for extra fine grained sampling.", 0) \
|
||||
M(UInt64, total_memory_profiler_sample_min_allocation_size, 0, "Collect random allocations of size greater or equal than specified value with probability equal to `total_memory_profiler_sample_probability`. 0 means disabled. You may want to set 'max_untracked_memory' to 0 to make this threshold to work as expected.", 0) \
|
||||
M(UInt64, total_memory_profiler_sample_max_allocation_size, 0, "Collect random allocations of size less or equal than specified value with probability equal to `total_memory_profiler_sample_probability`. 0 means disabled. You may want to set 'max_untracked_memory' to 0 to make this threshold to work as expected.", 0) \
|
||||
M(String, get_client_http_header_forbidden_headers, "", "Comma separated list of http header names that will not be returned by function getClientHTTPHeader.", 0) \
|
||||
M(Bool, allow_get_client_http_header, false, "Allow function getClientHTTPHeader", 0) \
|
||||
M(Bool, validate_tcp_client_information, false, "Validate client_information in the query packet over the native TCP protocol.", 0) \
|
||||
M(Bool, storage_metadata_write_full_object_key, false, "Write disk metadata files with VERSION_FULL_OBJECT_KEY format", 0) \
|
||||
|
||||
|
@ -104,9 +104,10 @@ class IColumn;
|
||||
M(Bool, s3_check_objects_after_upload, false, "Check each uploaded object to s3 with head request to be sure that upload was successful", 0) \
|
||||
M(Bool, s3_allow_parallel_part_upload, true, "Use multiple threads for s3 multipart upload. It may lead to slightly higher memory usage", 0) \
|
||||
M(Bool, s3_throw_on_zero_files_match, false, "Throw an error, when ListObjects request cannot match any files", 0) \
|
||||
M(Bool, s3_disable_checksum, false, "Do not calculate a checksum when sending a file to S3. This speeds up writes by avoiding excessive processing passes on a file. It is mostly safe as the data of MergeTree tables is checksummed by ClickHouse anyway, and when S3 is accessed with HTTPS, the TLS layer already provides integrity while transferring through the network. While additional checksums on S3 give defense in depth.", 0) \
|
||||
M(UInt64, s3_retry_attempts, 100, "Setting for Aws::Client::RetryStrategy, Aws::Client does retries itself, 0 means no retries", 0) \
|
||||
M(UInt64, s3_request_timeout_ms, 30000, "Idleness timeout for sending and receiving data to/from S3. Fail if a single TCP read or write call blocks for this long.", 0) \
|
||||
M(UInt64, s3_http_connection_pool_size, 1000, "How many reusable open connections to keep per S3 endpoint. Only applies to the S3 table engine and table function, not to S3 disks (for disks, use disk config instead). Global setting, can only be set in config, overriding it per session or per query has no effect.", 0) \
|
||||
M(UInt64, s3_http_connection_pool_size, 1000, "How many reusable open connections to keep per S3 endpoint. This only applies to the S3 table engine and table function, not to S3 disks (for disks, use disk config instead). Global setting, can only be set in config, overriding it per session or per query has no effect.", 0) \
|
||||
M(Bool, enable_s3_requests_logging, false, "Enable very explicit logging of S3 requests. Makes sense for debug only.", 0) \
|
||||
M(String, s3queue_default_zookeeper_path, "/clickhouse/s3queue/", "Default zookeeper path prefix for S3Queue engine", 0) \
|
||||
M(Bool, s3queue_enable_logging_to_s3queue_log, false, "Enable writing to system.s3queue_log. The value can be overwritten per table with table settings", 0) \
|
||||
@ -122,10 +123,10 @@ class IColumn;
|
||||
M(UInt64, max_remote_write_network_bandwidth, 0, "The maximum speed of data exchange over the network in bytes per second for write.", 0) \
|
||||
M(UInt64, max_local_read_bandwidth, 0, "The maximum speed of local reads in bytes per second.", 0) \
|
||||
M(UInt64, max_local_write_bandwidth, 0, "The maximum speed of local writes in bytes per second.", 0) \
|
||||
M(Bool, stream_like_engine_allow_direct_select, false, "Allow direct SELECT query for Kafka, RabbitMQ, FileLog, Redis Streams and NATS engines. In case there are attached materialized views, SELECT query is not allowed even if this setting is enabled.", 0) \
M(Bool, stream_like_engine_allow_direct_select, false, "Allow direct SELECT query for Kafka, RabbitMQ, FileLog, Redis Streams, and NATS engines. In case there are attached materialized views, SELECT query is not allowed even if this setting is enabled.", 0) \
M(String, stream_like_engine_insert_queue, "", "When stream like engine reads from multiple queues, user will need to select one queue to insert into when writing. Used by Redis Streams and NATS.", 0) \
\
M(Bool, distributed_foreground_insert, false, "If setting is enabled, insert query into distributed waits until data will be sent to all nodes in cluster. \n\nEnables or disables synchronous data insertion into a `Distributed` table.\n\nBy default, when inserting data into a Distributed table, the ClickHouse server sends data to cluster nodes in background. When `distributed_foreground_insert` = 1, the data is processed synchronously, and the `INSERT` operation succeeds only after all the data is saved on all shards (at least one replica for each shard if `internal_replication` is true).", 0) ALIAS(insert_distributed_sync) \
M(Bool, distributed_foreground_insert, false, "If setting is enabled, insert query into distributed waits until data are sent to all nodes in a cluster. \n\nEnables or disables synchronous data insertion into a `Distributed` table.\n\nBy default, when inserting data into a Distributed table, the ClickHouse server sends data to cluster nodes in the background. When `distributed_foreground_insert` = 1, the data is processed synchronously, and the `INSERT` operation succeeds only after all the data is saved on all shards (at least one replica for each shard if `internal_replication` is true).", 0) ALIAS(insert_distributed_sync) \
M(UInt64, distributed_background_insert_timeout, 0, "Timeout for insert query into distributed. Setting is used only with insert_distributed_sync enabled. Zero value means no timeout.", 0) ALIAS(insert_distributed_timeout) \
M(Milliseconds, distributed_background_insert_sleep_time_ms, 100, "Sleep time for background INSERTs into Distributed, in case of any errors delay grows exponentially.", 0) ALIAS(distributed_directory_monitor_sleep_time_ms) \
M(Milliseconds, distributed_background_insert_max_sleep_time_ms, 30000, "Maximum sleep time for background INSERTs into Distributed, it limits exponential growth too.", 0) ALIAS(distributed_directory_monitor_max_sleep_time_ms) \
@ -749,7 +750,7 @@ class IColumn;
M(UInt64, prefetch_buffer_size, DBMS_DEFAULT_BUFFER_SIZE, "The maximum size of the prefetch buffer to read from the filesystem.", 0) \
M(UInt64, filesystem_prefetch_step_bytes, 0, "Prefetch step in bytes. Zero means `auto` - approximately the best prefetch step will be auto deduced, but might not be 100% the best. The actual value might be different because of setting filesystem_prefetch_min_bytes_for_single_read_task", 0) \
M(UInt64, filesystem_prefetch_step_marks, 0, "Prefetch step in marks. Zero means `auto` - approximately the best prefetch step will be auto deduced, but might not be 100% the best. The actual value might be different because of setting filesystem_prefetch_min_bytes_for_single_read_task", 0) \
M(UInt64, filesystem_prefetch_min_bytes_for_single_read_task, "8Mi", "Do not parallelize within one file read less than this amount of bytes. E.g. one reader will not receive a read task of size less than this amount. This setting is recommended to avoid spikes of time for aws getObject requests to aws", 0) \
M(UInt64, filesystem_prefetch_min_bytes_for_single_read_task, "2Mi", "Do not parallelize within one file read less than this amount of bytes. E.g. one reader will not receive a read task of size less than this amount. This setting is recommended to avoid spikes of time for aws getObject requests to aws", 0) \
M(UInt64, filesystem_prefetch_max_memory_usage, "1Gi", "Maximum memory usage for prefetches.", 0) \
M(UInt64, filesystem_prefetches_limit, 200, "Maximum number of prefetches. Zero means unlimited. A setting `filesystem_prefetches_max_memory_usage` is more recommended if you want to limit the number of prefetches", 0) \
\
@ -5,6 +5,7
#include <IO/WriteHelpers.h>
#include <IO/ReadBufferFromFile.h>
#include <Parsers/formatAST.h>
#include <Common/PoolId.h>
#include <Common/atomicRename.h>
#include <Common/filesystemHelpers.h>
#include <Storages/StorageMaterializedView.h>
@ -74,6 +75,7 @@ String DatabaseAtomic::getTableDataPath(const ASTCreateQuery & query) const

void DatabaseAtomic::drop(ContextPtr)
{
    waitDatabaseStarted(false);
    assert(TSA_SUPPRESS_WARNING_FOR_READ(tables).empty());
    try
    {
@ -112,6 +114,7 @@ StoragePtr DatabaseAtomic::detachTable(ContextPtr /* context */, const String &

void DatabaseAtomic::dropTable(ContextPtr local_context, const String & table_name, bool sync)
{
    waitDatabaseStarted(false);
    auto table = tryGetTable(table_name, local_context);
    /// Remove the inner table (if any) to avoid deadlock
    /// (due to attempt to execute DROP from the worker thread)
@ -175,6 +178,8 @@ void DatabaseAtomic::renameTable(ContextPtr local_context, const String & table_
    if (exchange && !supportsAtomicRename())
        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "RENAME EXCHANGE is not supported");

    waitDatabaseStarted(false);

    auto & other_db = dynamic_cast<DatabaseAtomic &>(to_database);
    bool inside_database = this == &other_db;
@ -412,7 +417,7 @@ void DatabaseAtomic::assertCanBeDetached(bool cleanup)
DatabaseTablesIteratorPtr
DatabaseAtomic::getTablesIterator(ContextPtr local_context, const IDatabase::FilterByNameFunction & filter_by_table_name) const
{
    auto base_iter = DatabaseWithOwnTablesBase::getTablesIterator(local_context, filter_by_table_name);
    auto base_iter = DatabaseOrdinary::getTablesIterator(local_context, filter_by_table_name);
    return std::make_unique<AtomicDatabaseTablesSnapshotIterator>(std::move(typeid_cast<DatabaseTablesSnapshotIterator &>(*base_iter)));
}
@ -441,28 +446,34 @@ void DatabaseAtomic::beforeLoadingMetadata(ContextMutablePtr /*context*/, Loadin
    }
}

void DatabaseAtomic::loadStoredObjects(ContextMutablePtr local_context, LoadingStrictnessLevel mode)
LoadTaskPtr DatabaseAtomic::startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode)
{
    beforeLoadingMetadata(local_context, mode);
    DatabaseOrdinary::loadStoredObjects(local_context, mode);
    auto base = DatabaseOrdinary::startupDatabaseAsync(async_loader, std::move(startup_after), mode);
    auto job = makeLoadJob(
        base->goals(),
        TablesLoaderBackgroundStartupPoolId,
        fmt::format("startup Atomic database {}", getDatabaseName()),
        [this, mode] (AsyncLoader &, const LoadJobPtr &)
        {
            if (mode < LoadingStrictnessLevel::FORCE_RESTORE)
                return;
            NameToPathMap table_names;
            {
                std::lock_guard lock{mutex};
                table_names = table_name_to_path;
            }

            fs::create_directories(path_to_table_symlinks);
            for (const auto & table : table_names)
                tryCreateSymlink(table.first, table.second, true);
        });
    return startup_atomic_database_task = makeLoadTask(async_loader, {job});
}

void DatabaseAtomic::startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel mode)
void DatabaseAtomic::waitDatabaseStarted(bool no_throw) const
{
    DatabaseOrdinary::startupTables(thread_pool, mode);

    if (mode < LoadingStrictnessLevel::FORCE_RESTORE)
        return;

    NameToPathMap table_names;
    {
        std::lock_guard lock{mutex};
        table_names = table_name_to_path;
    }

    fs::create_directories(path_to_table_symlinks);
    for (const auto & table : table_names)
        tryCreateSymlink(table.first, table.second, true);
    if (startup_atomic_database_task)
        waitLoad(currentPoolOr(TablesLoaderForegroundPoolId), startup_atomic_database_task, no_throw);
}

void DatabaseAtomic::tryCreateSymlink(const String & table_name, const String & actual_data_path, bool if_data_path_exist)
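The hunk above is the template this commit applies to every database engine: the synchronous loadStoredObjects/startupTables overrides give way to a startup LoadTask chained after the base class task, and waitDatabaseStarted prioritizes that task into the foreground pool before any operation that needs a fully started database. A minimal sketch of the same pattern for a hypothetical engine follows; DatabaseFoo and startup_foo_task are illustrative names, while the loader helpers are exactly the ones used in the hunk.

LoadTaskPtr DatabaseFoo::startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode)
{
    /// Chain after the base class so its startup work finishes first.
    auto base = DatabaseOrdinary::startupDatabaseAsync(async_loader, std::move(startup_after), mode);
    auto job = makeLoadJob(
        base->goals(),
        TablesLoaderBackgroundStartupPoolId,
        fmt::format("startup Foo database {}", getDatabaseName()),
        [this] (AsyncLoader &, const LoadJobPtr &)
        {
            /// Engine-specific startup work goes here (symlinks, replication workers, ...).
        });
    return startup_foo_task = makeLoadTask(async_loader, {job});
}

void DatabaseFoo::waitDatabaseStarted(bool no_throw) const
{
    /// Boost the task into the foreground pool and wait for it synchronously.
    if (startup_foo_task)
        waitLoad(currentPoolOr(TablesLoaderForegroundPoolId), startup_foo_task, no_throw);
}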
@ -532,6 +543,8 @@ void DatabaseAtomic::renameDatabase(ContextPtr query_context, const String & new
{
    /// CREATE, ATTACH, DROP, DETACH and RENAME DATABASE must hold DDLGuard

    waitDatabaseStarted(false);

    bool check_ref_deps = query_context->getSettingsRef().check_referential_table_dependencies;
    bool check_loading_deps = !check_ref_deps && query_context->getSettingsRef().check_table_dependencies;
    if (check_ref_deps || check_loading_deps)
@ -48,11 +48,10 @@ public:

    DatabaseTablesIteratorPtr getTablesIterator(ContextPtr context, const FilterByNameFunction & filter_by_table_name) const override;

    void loadStoredObjects(ContextMutablePtr context, LoadingStrictnessLevel mode) override;

    void beforeLoadingMetadata(ContextMutablePtr context, LoadingStrictnessLevel mode) override;

    void startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel mode) override;
    LoadTaskPtr startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode) override;
    void waitDatabaseStarted(bool no_throw) const override;

    /// Atomic database cannot be detached if there is detached table which still in use
    void assertCanBeDetached(bool cleanup) override;
@ -87,6 +86,8 @@ protected:
    String path_to_table_symlinks;
    String path_to_metadata_symlink;
    const UUID db_uuid;

    LoadTaskPtr startup_atomic_database_task;
};

}
@ -20,7 +20,6 @@ namespace ErrorCodes
|
||||
{
|
||||
extern const int UNKNOWN_TABLE;
|
||||
extern const int LOGICAL_ERROR;
|
||||
extern const int INCONSISTENT_METADATA_FOR_BACKUP;
|
||||
}
|
||||
|
||||
DatabaseMemory::DatabaseMemory(const String & name_, ContextPtr context_)
|
||||
@ -177,21 +176,30 @@ std::vector<std::pair<ASTPtr, StoragePtr>> DatabaseMemory::getTablesForBackup(co
|
||||
|
||||
auto storage_id = local_context->tryResolveStorageID(StorageID{"", table_name}, Context::ResolveExternal);
|
||||
if (!storage_id)
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP,
|
||||
"Couldn't resolve the name of temporary table {}", backQuoteIfNeed(table_name));
|
||||
{
|
||||
LOG_WARNING(log, "Couldn't resolve the name of temporary table {}", backQuoteIfNeed(table_name));
|
||||
continue;
|
||||
}
|
||||
|
||||
/// Here `storage_id.table_name` looks like looks like "_tmp_ab9b15a3-fb43-4670-abec-14a0e9eb70f1"
|
||||
/// it's not the real name of the table.
|
||||
auto create_table_query = tryGetCreateTableQuery(storage_id.table_name, local_context);
|
||||
if (!create_table_query)
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP,
|
||||
"Couldn't get a create query for temporary table {}", backQuoteIfNeed(table_name));
|
||||
{
|
||||
LOG_WARNING(log, "Couldn't get a create query for temporary table {}", backQuoteIfNeed(table_name));
|
||||
continue;
|
||||
}
|
||||
|
||||
const auto & create = create_table_query->as<const ASTCreateQuery &>();
|
||||
if (create.getTable() != table_name)
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP,
|
||||
"Got a create query with unexpected name {} for temporary table {}",
|
||||
backQuoteIfNeed(create.getTable()), backQuoteIfNeed(table_name));
|
||||
auto * create = create_table_query->as<ASTCreateQuery>();
|
||||
if (create->getTable() != table_name)
|
||||
{
|
||||
/// Probably the database has been just renamed. Use the older name for backup to keep the backup consistent.
|
||||
LOG_WARNING(log, "Got a create query with unexpected name {} for temporary table {}",
|
||||
backQuoteIfNeed(create->getTable()), backQuoteIfNeed(table_name));
|
||||
create_table_query = create_table_query->clone();
|
||||
create = create_table_query->as<ASTCreateQuery>();
|
||||
create->setTable(table_name);
|
||||
}
|
||||
|
||||
chassert(storage);
|
||||
storage->adjustCreateQueryForBackup(create_table_query);
|
||||
|
@ -163,6 +163,13 @@ DatabaseOnDisk::DatabaseOnDisk(
|
||||
}
|
||||
|
||||
|
||||
void DatabaseOnDisk::shutdown()
|
||||
{
|
||||
waitDatabaseStarted(/* no_throw = */ true);
|
||||
DatabaseWithOwnTablesBase::shutdown();
|
||||
}
|
||||
|
||||
|
||||
void DatabaseOnDisk::createTable(
|
||||
ContextPtr local_context,
|
||||
const String & table_name,
|
||||
@ -189,6 +196,8 @@ void DatabaseOnDisk::createTable(
|
||||
throw Exception(
|
||||
ErrorCodes::TABLE_ALREADY_EXISTS, "Table {}.{} already exists", backQuote(getDatabaseName()), backQuote(table_name));
|
||||
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
String table_metadata_path = getObjectMetadataPath(table_name);
|
||||
|
||||
if (create.attach_short_syntax)
|
||||
@ -278,6 +287,8 @@ void DatabaseOnDisk::commitCreateTable(const ASTCreateQuery & query, const Stora
|
||||
|
||||
void DatabaseOnDisk::detachTablePermanently(ContextPtr query_context, const String & table_name)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
auto table = detachTable(query_context, table_name);
|
||||
|
||||
fs::path detached_permanently_flag(getObjectMetadataPath(table_name) + detached_suffix);
|
||||
@ -294,6 +305,8 @@ void DatabaseOnDisk::detachTablePermanently(ContextPtr query_context, const Stri
|
||||
|
||||
void DatabaseOnDisk::dropTable(ContextPtr local_context, const String & table_name, bool /*sync*/)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
String table_metadata_path = getObjectMetadataPath(table_name);
|
||||
String table_metadata_path_drop = table_metadata_path + drop_suffix;
|
||||
String table_data_path_relative = getTableDataPath(table_name);
|
||||
@ -378,6 +391,8 @@ void DatabaseOnDisk::renameTable(
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Moving tables between databases of different engines is not supported");
|
||||
}
|
||||
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
auto table_data_relative_path = getTableDataPath(table_name);
|
||||
TableExclusiveLockHolder table_lock;
|
||||
String table_metadata_path;
|
||||
@ -519,6 +534,8 @@ ASTPtr DatabaseOnDisk::getCreateDatabaseQuery() const
|
||||
|
||||
void DatabaseOnDisk::drop(ContextPtr local_context)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
assert(TSA_SUPPRESS_WARNING_FOR_READ(tables).empty());
|
||||
if (local_context->getSettingsRef().force_remove_data_recursively_on_drop)
|
||||
{
|
||||
|
@ -32,6 +32,8 @@ class DatabaseOnDisk : public DatabaseWithOwnTablesBase
|
||||
public:
|
||||
DatabaseOnDisk(const String & name, const String & metadata_path_, const String & data_path_, const String & logger, ContextPtr context);
|
||||
|
||||
void shutdown() override;
|
||||
|
||||
void createTable(
|
||||
ContextPtr context,
|
||||
const String & table_name,
|
||||
|
@ -22,6 +22,7 @@
|
||||
#include <Parsers/queryToString.h>
|
||||
#include <Common/Stopwatch.h>
|
||||
#include <Common/ThreadPool.h>
|
||||
#include <Common/PoolId.h>
|
||||
#include <Common/escapeForFileName.h>
|
||||
#include <Common/quoteString.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
@ -30,13 +31,6 @@
|
||||
|
||||
namespace fs = std::filesystem;
|
||||
|
||||
namespace CurrentMetrics
|
||||
{
|
||||
extern const Metric DatabaseOrdinaryThreads;
|
||||
extern const Metric DatabaseOrdinaryThreadsActive;
|
||||
extern const Metric DatabaseOrdinaryThreadsScheduled;
|
||||
}
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
@ -47,38 +41,6 @@ namespace ErrorCodes
|
||||
|
||||
static constexpr size_t METADATA_FILE_BUFFER_SIZE = 32768;
|
||||
|
||||
namespace
|
||||
{
|
||||
void tryAttachTable(
|
||||
ContextMutablePtr context,
|
||||
const ASTCreateQuery & query,
|
||||
DatabaseOrdinary & database,
|
||||
const String & database_name,
|
||||
const String & metadata_path,
|
||||
bool force_restore)
|
||||
{
|
||||
try
|
||||
{
|
||||
auto [table_name, table] = createTableFromAST(
|
||||
query,
|
||||
database_name,
|
||||
database.getTableDataPath(query),
|
||||
context,
|
||||
force_restore);
|
||||
|
||||
database.attachTable(context, table_name, table, database.getTableDataPath(query));
|
||||
}
|
||||
catch (Exception & e)
|
||||
{
|
||||
e.addMessage(
|
||||
"Cannot attach table " + backQuote(database_name) + "." + backQuote(query.getTable()) + " from metadata file " + metadata_path
|
||||
+ " from query " + serializeAST(query));
|
||||
throw;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
DatabaseOrdinary::DatabaseOrdinary(const String & name_, const String & metadata_path_, ContextPtr context_)
|
||||
: DatabaseOrdinary(name_, metadata_path_, "data/" + escapeForFileName(name_) + "/", "DatabaseOrdinary (" + name_ + ")", context_)
|
||||
{
|
||||
@ -90,75 +52,10 @@ DatabaseOrdinary::DatabaseOrdinary(
|
||||
{
|
||||
}
|
||||
|
||||
void DatabaseOrdinary::loadStoredObjects(ContextMutablePtr local_context, LoadingStrictnessLevel mode)
|
||||
void DatabaseOrdinary::loadStoredObjects(ContextMutablePtr, LoadingStrictnessLevel)
|
||||
{
|
||||
/** Tables load faster if they are loaded in sorted (by name) order.
|
||||
* Otherwise (for the ext4 filesystem), `DirectoryIterator` iterates through them in some order,
|
||||
* which does not correspond to order tables creation and does not correspond to order of their location on disk.
|
||||
*/
|
||||
|
||||
ParsedTablesMetadata metadata;
|
||||
bool force_attach = LoadingStrictnessLevel::FORCE_ATTACH <= mode;
|
||||
loadTablesMetadata(local_context, metadata, force_attach);
|
||||
|
||||
size_t total_tables = metadata.parsed_tables.size() - metadata.total_dictionaries;
|
||||
|
||||
AtomicStopwatch watch;
|
||||
std::atomic<size_t> dictionaries_processed{0};
|
||||
std::atomic<size_t> tables_processed{0};
|
||||
|
||||
ThreadPool pool(CurrentMetrics::DatabaseOrdinaryThreads, CurrentMetrics::DatabaseOrdinaryThreadsActive, CurrentMetrics::DatabaseOrdinaryThreadsScheduled);
|
||||
|
||||
/// We must attach dictionaries before attaching tables
|
||||
/// because while we're attaching tables we may need to have some dictionaries attached
|
||||
/// (for example, dictionaries can be used in the default expressions for some tables).
|
||||
/// On the other hand we can attach any dictionary (even sourced from ClickHouse table)
|
||||
/// without having any tables attached. It is so because attaching of a dictionary means
|
||||
/// loading of its config only, it doesn't involve loading the dictionary itself.
|
||||
|
||||
/// Attach dictionaries.
|
||||
for (const auto & name_with_path_and_query : metadata.parsed_tables)
|
||||
{
|
||||
const auto & name = name_with_path_and_query.first;
|
||||
const auto & path = name_with_path_and_query.second.path;
|
||||
const auto & ast = name_with_path_and_query.second.ast;
|
||||
const auto & create_query = ast->as<const ASTCreateQuery &>();
|
||||
|
||||
if (create_query.is_dictionary)
|
||||
{
|
||||
pool.scheduleOrThrowOnError([&]()
|
||||
{
|
||||
loadTableFromMetadata(local_context, path, name, ast, mode);
|
||||
|
||||
/// Messages, so that it's not boring to wait for the server to load for a long time.
|
||||
logAboutProgress(log, ++dictionaries_processed, metadata.total_dictionaries, watch);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
pool.wait();
|
||||
|
||||
/// Attach tables.
|
||||
for (const auto & name_with_path_and_query : metadata.parsed_tables)
|
||||
{
|
||||
const auto & name = name_with_path_and_query.first;
|
||||
const auto & path = name_with_path_and_query.second.path;
|
||||
const auto & ast = name_with_path_and_query.second.ast;
|
||||
const auto & create_query = ast->as<const ASTCreateQuery &>();
|
||||
|
||||
if (!create_query.is_dictionary)
|
||||
{
|
||||
pool.scheduleOrThrowOnError([&]()
|
||||
{
|
||||
loadTableFromMetadata(local_context, path, name, ast, mode);
|
||||
|
||||
/// Messages, so that it's not boring to wait for the server to load for a long time.
|
||||
logAboutProgress(log, ++tables_processed, total_tables, watch);
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
pool.wait();
|
||||
// Because it supportsLoadingInTopologicalOrder, we don't need this loading method.
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented");
|
||||
}
|
||||
|
||||
void DatabaseOrdinary::loadTablesMetadata(ContextPtr local_context, ParsedTablesMetadata & metadata, bool is_startup)
|
||||
@ -232,59 +129,143 @@ void DatabaseOrdinary::loadTablesMetadata(ContextPtr local_context, ParsedTables
|
||||
TSA_SUPPRESS_WARNING_FOR_READ(database_name), tables_in_database, dictionaries_in_database);
|
||||
}
|
||||
|
||||
void DatabaseOrdinary::loadTableFromMetadata(ContextMutablePtr local_context, const String & file_path, const QualifiedTableName & name, const ASTPtr & ast,
|
||||
void DatabaseOrdinary::loadTableFromMetadata(
|
||||
ContextMutablePtr local_context,
|
||||
const String & file_path,
|
||||
const QualifiedTableName & name,
|
||||
const ASTPtr & ast,
|
||||
LoadingStrictnessLevel mode)
|
||||
{
|
||||
assert(name.database == TSA_SUPPRESS_WARNING_FOR_READ(database_name));
|
||||
const auto & create_query = ast->as<const ASTCreateQuery &>();
|
||||
|
||||
tryAttachTable(
|
||||
local_context,
|
||||
create_query,
|
||||
*this,
|
||||
name.database,
|
||||
file_path, LoadingStrictnessLevel::FORCE_RESTORE <= mode);
|
||||
}
|
||||
|
||||
void DatabaseOrdinary::startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel /*mode*/)
|
||||
{
|
||||
LOG_INFO(log, "Starting up tables.");
|
||||
|
||||
/// NOTE No concurrent writes are possible during database loading
|
||||
const size_t total_tables = TSA_SUPPRESS_WARNING_FOR_READ(tables).size();
|
||||
if (!total_tables)
|
||||
return;
|
||||
|
||||
AtomicStopwatch watch;
|
||||
std::atomic<size_t> tables_processed{0};
|
||||
|
||||
auto startup_one_table = [&](const StoragePtr & table)
|
||||
{
|
||||
/// Since startup() method can use physical paths on disk we don't allow any exclusive actions (rename, drop so on)
|
||||
/// until startup finished.
|
||||
auto table_lock_holder = table->lockForShare(RWLockImpl::NO_QUERY, getContext()->getSettingsRef().lock_acquire_timeout);
|
||||
table->startup();
|
||||
logAboutProgress(log, ++tables_processed, total_tables, watch);
|
||||
};
|
||||
|
||||
const auto & query = ast->as<const ASTCreateQuery &>();
|
||||
|
||||
try
|
||||
{
|
||||
for (const auto & table : TSA_SUPPRESS_WARNING_FOR_READ(tables))
|
||||
thread_pool.scheduleOrThrowOnError([&]() { startup_one_table(table.second); });
|
||||
auto [table_name, table] = createTableFromAST(
|
||||
query,
|
||||
name.database,
|
||||
getTableDataPath(query),
|
||||
local_context,
|
||||
LoadingStrictnessLevel::FORCE_RESTORE <= mode);
|
||||
|
||||
attachTable(local_context, table_name, table, getTableDataPath(query));
|
||||
}
|
||||
catch (...)
|
||||
catch (Exception & e)
|
||||
{
|
||||
/// We have to wait for jobs to finish here, because job function has reference to variables on the stack of current thread.
|
||||
thread_pool.wait();
|
||||
e.addMessage(
|
||||
"Cannot attach table " + backQuote(name.database) + "." + backQuote(query.getTable()) + " from metadata file " + file_path
|
||||
+ " from query " + serializeAST(query));
|
||||
throw;
|
||||
}
|
||||
thread_pool.wait();
|
||||
}
|
||||
|
||||
LoadTaskPtr DatabaseOrdinary::loadTableFromMetadataAsync(
|
||||
AsyncLoader & async_loader,
|
||||
LoadJobSet load_after,
|
||||
ContextMutablePtr local_context,
|
||||
const String & file_path,
|
||||
const QualifiedTableName & name,
|
||||
const ASTPtr & ast,
|
||||
LoadingStrictnessLevel mode)
|
||||
{
|
||||
std::scoped_lock lock(mutex);
|
||||
auto job = makeLoadJob(
|
||||
std::move(load_after),
|
||||
TablesLoaderBackgroundLoadPoolId,
|
||||
fmt::format("load table {}", name.getFullName()),
|
||||
[this, local_context, file_path, name, ast, mode] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
loadTableFromMetadata(local_context, file_path, name, ast, mode);
|
||||
});
|
||||
|
||||
return load_table[name.table] = makeLoadTask(async_loader, {job});
|
||||
}
|
||||
|
||||
LoadTaskPtr DatabaseOrdinary::startupTableAsync(
|
||||
AsyncLoader & async_loader,
|
||||
LoadJobSet startup_after,
|
||||
const QualifiedTableName & name,
|
||||
LoadingStrictnessLevel /*mode*/)
|
||||
{
|
||||
std::scoped_lock lock(mutex);
|
||||
|
||||
/// Initialize progress indication on the first call
|
||||
if (total_tables_to_startup == 0)
|
||||
{
|
||||
total_tables_to_startup = tables.size();
|
||||
startup_watch.restart();
|
||||
}
|
||||
|
||||
auto job = makeLoadJob(
|
||||
std::move(startup_after),
|
||||
TablesLoaderBackgroundStartupPoolId,
|
||||
fmt::format("startup table {}", name.getFullName()),
|
||||
[this, name] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
if (auto table = tryGetTableNoWait(name.table))
|
||||
{
|
||||
/// Since startup() method can use physical paths on disk we don't allow any exclusive actions (rename, drop so on)
|
||||
/// until startup finished.
|
||||
auto table_lock_holder = table->lockForShare(RWLockImpl::NO_QUERY, getContext()->getSettingsRef().lock_acquire_timeout);
|
||||
table->startup();
|
||||
logAboutProgress(log, ++tables_started, total_tables_to_startup, startup_watch);
|
||||
}
|
||||
else
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Table {}.{} doesn't exist during startup",
|
||||
backQuote(name.database), backQuote(name.table));
|
||||
});
|
||||
|
||||
return startup_table[name.table] = makeLoadTask(async_loader, {job});
|
||||
}
|
||||
|
||||
LoadTaskPtr DatabaseOrdinary::startupDatabaseAsync(
|
||||
AsyncLoader & async_loader,
|
||||
LoadJobSet startup_after,
|
||||
LoadingStrictnessLevel /*mode*/)
|
||||
{
|
||||
// NOTE: this task is empty, but it is required for correct dependency handling (startup should be done after tables loading)
|
||||
auto job = makeLoadJob(
|
||||
std::move(startup_after),
|
||||
TablesLoaderBackgroundStartupPoolId,
|
||||
fmt::format("startup Ordinary database {}", getDatabaseName()));
|
||||
return startup_database_task = makeLoadTask(async_loader, {job});
|
||||
}
|
||||
|
||||
void DatabaseOrdinary::waitTableStarted(const String & name) const
|
||||
{
|
||||
/// Prioritize jobs (load and startup the table) to be executed in foreground pool and wait for them synchronously
|
||||
LoadTaskPtr task;
|
||||
{
|
||||
std::scoped_lock lock(mutex);
|
||||
if (auto it = startup_table.find(name); it != startup_table.end())
|
||||
task = it->second;
|
||||
}
|
||||
|
||||
if (task)
|
||||
waitLoad(currentPoolOr(TablesLoaderForegroundPoolId), task);
|
||||
}
|
||||
|
||||
void DatabaseOrdinary::waitDatabaseStarted(bool no_throw) const
|
||||
{
|
||||
/// Prioritize load and startup of all tables and database itself and wait for them synchronously
|
||||
if (startup_database_task)
|
||||
waitLoad(currentPoolOr(TablesLoaderForegroundPoolId), startup_database_task, no_throw);
|
||||
}
|
||||
|
||||
DatabaseTablesIteratorPtr DatabaseOrdinary::getTablesIterator(ContextPtr local_context, const DatabaseOnDisk::FilterByNameFunction & filter_by_table_name) const
|
||||
{
|
||||
auto result = DatabaseWithOwnTablesBase::getTablesIterator(local_context, filter_by_table_name);
|
||||
std::scoped_lock lock(mutex);
|
||||
typeid_cast<DatabaseTablesSnapshotIterator &>(*result).setLoadTasks(startup_table);
|
||||
return result;
|
||||
}
|
||||
|
||||
void DatabaseOrdinary::alterTable(ContextPtr local_context, const StorageID & table_id, const StorageInMemoryMetadata & metadata)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
String table_name = table_id.table_name;
|
||||
|
||||
/// Read the definition of the table and replace the necessary parts with new ones.
|
||||
String table_metadata_path = getObjectMetadataPath(table_name);
|
||||
String table_metadata_tmp_path = table_metadata_path + ".tmp";
|
||||
|
@ -27,10 +27,35 @@ public:
|
||||
|
||||
void loadTablesMetadata(ContextPtr context, ParsedTablesMetadata & metadata, bool is_startup) override;
|
||||
|
||||
void loadTableFromMetadata(ContextMutablePtr local_context, const String & file_path, const QualifiedTableName & name, const ASTPtr & ast,
|
||||
void loadTableFromMetadata(
|
||||
ContextMutablePtr local_context,
|
||||
const String & file_path,
|
||||
const QualifiedTableName & name,
|
||||
const ASTPtr & ast,
|
||||
LoadingStrictnessLevel mode) override;
|
||||
|
||||
void startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel mode) override;
|
||||
LoadTaskPtr loadTableFromMetadataAsync(
|
||||
AsyncLoader & async_loader,
|
||||
LoadJobSet load_after,
|
||||
ContextMutablePtr local_context,
|
||||
const String & file_path,
|
||||
const QualifiedTableName & name,
|
||||
const ASTPtr & ast,
|
||||
LoadingStrictnessLevel mode) override;
|
||||
|
||||
LoadTaskPtr startupTableAsync(
|
||||
AsyncLoader & async_loader,
|
||||
LoadJobSet startup_after,
|
||||
const QualifiedTableName & name,
|
||||
LoadingStrictnessLevel mode) override;
|
||||
|
||||
void waitTableStarted(const String & name) const override;
|
||||
|
||||
void waitDatabaseStarted(bool no_throw) const override;
|
||||
|
||||
LoadTaskPtr startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode) override;
|
||||
|
||||
DatabaseTablesIteratorPtr getTablesIterator(ContextPtr local_context, const DatabaseOnDisk::FilterByNameFunction & filter_by_table_name) const override;
|
||||
|
||||
void alterTable(
|
||||
ContextPtr context,
|
||||
@ -48,6 +73,13 @@ protected:
|
||||
ContextPtr query_context);
|
||||
|
||||
Strings permanently_detached_tables;
|
||||
|
||||
std::unordered_map<String, LoadTaskPtr> load_table TSA_GUARDED_BY(mutex);
|
||||
std::unordered_map<String, LoadTaskPtr> startup_table TSA_GUARDED_BY(mutex);
|
||||
LoadTaskPtr startup_database_task;
|
||||
std::atomic<size_t> total_tables_to_startup{0};
|
||||
std::atomic<size_t> tables_started{0};
|
||||
AtomicStopwatch startup_watch;
|
||||
};
|
||||
|
||||
}
|
||||
|
@ -12,6 +12,7 @@
|
||||
#include <Common/ZooKeeper/KeeperException.h>
|
||||
#include <Common/ZooKeeper/Types.h>
|
||||
#include <Common/ZooKeeper/ZooKeeper.h>
|
||||
#include <Common/PoolId.h>
|
||||
#include <Databases/DatabaseReplicated.h>
|
||||
#include <Databases/DatabaseReplicatedWorker.h>
|
||||
#include <Databases/DDLDependencyVisitor.h>
|
||||
@ -53,7 +54,7 @@ namespace ErrorCodes
|
||||
extern const int INCORRECT_QUERY;
|
||||
extern const int ALL_CONNECTION_TRIES_FAILED;
|
||||
extern const int NO_ACTIVE_REPLICAS;
|
||||
extern const int INCONSISTENT_METADATA_FOR_BACKUP;
|
||||
extern const int CANNOT_GET_REPLICATED_DATABASE_SNAPSHOT;
|
||||
extern const int CANNOT_RESTORE_TABLE;
|
||||
}
|
||||
|
||||
@ -533,41 +534,54 @@ void DatabaseReplicated::createReplicaNodesInZooKeeper(const zkutil::ZooKeeperPt
|
||||
createEmptyLogEntry(current_zookeeper);
|
||||
}
|
||||
|
||||
void DatabaseReplicated::beforeLoadingMetadata(ContextMutablePtr /*context*/, LoadingStrictnessLevel mode)
|
||||
void DatabaseReplicated::beforeLoadingMetadata(ContextMutablePtr context_, LoadingStrictnessLevel mode)
|
||||
{
|
||||
DatabaseAtomic::beforeLoadingMetadata(context_, mode);
|
||||
tryConnectToZooKeeperAndInitDatabase(mode);
|
||||
}
|
||||
|
||||
void DatabaseReplicated::loadStoredObjects(ContextMutablePtr local_context, LoadingStrictnessLevel mode)
|
||||
{
|
||||
beforeLoadingMetadata(local_context, mode);
|
||||
DatabaseAtomic::loadStoredObjects(local_context, mode);
|
||||
}
|
||||
|
||||
UInt64 DatabaseReplicated::getMetadataHash(const String & table_name) const
|
||||
{
|
||||
return DB::getMetadataHash(table_name, readMetadataFile(table_name));
|
||||
}
|
||||
|
||||
void DatabaseReplicated::startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel mode)
|
||||
LoadTaskPtr DatabaseReplicated::startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode)
|
||||
{
|
||||
DatabaseAtomic::startupTables(thread_pool, mode);
|
||||
auto base = DatabaseAtomic::startupDatabaseAsync(async_loader, std::move(startup_after), mode);
|
||||
auto job = makeLoadJob(
|
||||
base->goals(),
|
||||
TablesLoaderBackgroundStartupPoolId,
|
||||
fmt::format("startup Replicated database {}", getDatabaseName()),
|
||||
[this] (AsyncLoader &, const LoadJobPtr &)
|
||||
{
|
||||
UInt64 digest = 0;
|
||||
{
|
||||
std::lock_guard lock{mutex};
|
||||
for (const auto & table : tables)
|
||||
digest += getMetadataHash(table.first);
|
||||
LOG_DEBUG(log, "Calculated metadata digest of {} tables: {}", tables.size(), digest);
|
||||
}
|
||||
|
||||
/// TSA: No concurrent writes are possible during loading
|
||||
UInt64 digest = 0;
|
||||
for (const auto & table : TSA_SUPPRESS_WARNING_FOR_READ(tables))
|
||||
digest += getMetadataHash(table.first);
|
||||
{
|
||||
std::lock_guard lock{metadata_mutex};
|
||||
chassert(!tables_metadata_digest);
|
||||
tables_metadata_digest = digest;
|
||||
}
|
||||
|
||||
LOG_DEBUG(log, "Calculated metadata digest of {} tables: {}", TSA_SUPPRESS_WARNING_FOR_READ(tables).size(), digest);
|
||||
chassert(!TSA_SUPPRESS_WARNING_FOR_READ(tables_metadata_digest));
|
||||
TSA_SUPPRESS_WARNING_FOR_WRITE(tables_metadata_digest) = digest;
|
||||
if (is_probably_dropped)
|
||||
return;
|
||||
|
||||
if (is_probably_dropped)
|
||||
return;
|
||||
ddl_worker = std::make_unique<DatabaseReplicatedDDLWorker>(this, getContext());
|
||||
ddl_worker->startup();
|
||||
ddl_worker_initialized = true;
|
||||
});
|
||||
return startup_replicated_database_task = makeLoadTask(async_loader, {job});
|
||||
}
|
||||
|
||||
ddl_worker = std::make_unique<DatabaseReplicatedDDLWorker>(this, getContext());
|
||||
ddl_worker->startup();
|
||||
ddl_worker_initialized = true;
|
||||
void DatabaseReplicated::waitDatabaseStarted(bool no_throw) const
|
||||
{
|
||||
if (startup_replicated_database_task)
|
||||
waitLoad(currentPoolOr(TablesLoaderForegroundPoolId), startup_replicated_database_task, no_throw);
|
||||
}
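As a side note on the digest handling in the hunk above: the replicated database keeps a running sum of per-table metadata hashes, recomputed during startup and later cross-checked by checkDigestValid. A simplified sketch of that bookkeeping, with locking and TSA annotations omitted; tables and tables_metadata_digest are the members shown in the hunk.

UInt64 digest = 0;
for (const auto & table : tables)             /// name -> StoragePtr map
    digest += getMetadataHash(table.first);   /// hash of the table's metadata file
tables_metadata_digest = digest;              /// later verified by checkDigestValid()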
bool DatabaseReplicated::checkDigestValid(const ContextPtr & local_context, bool debug_check /* = true */) const
|
||||
@ -728,6 +742,7 @@ void DatabaseReplicated::checkQueryValid(const ASTPtr & query, ContextPtr query_
|
||||
|
||||
BlockIO DatabaseReplicated::tryEnqueueReplicatedDDL(const ASTPtr & query, ContextPtr query_context, QueryFlags flags)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
if (query_context->getCurrentTransaction() && query_context->getSettingsRef().throw_on_unsupported_query_inside_transaction)
|
||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Distributed DDL queries inside transactions are not supported");
|
||||
@ -791,6 +806,8 @@ static UUID getTableUUIDIfReplicated(const String & metadata, ContextPtr context
|
||||
|
||||
void DatabaseReplicated::recoverLostReplica(const ZooKeeperPtr & current_zookeeper, UInt32 our_log_ptr, UInt32 & max_log_ptr)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
is_recovering = true;
|
||||
SCOPE_EXIT({ is_recovering = false; });
|
||||
|
||||
@ -1107,31 +1124,43 @@ void DatabaseReplicated::recoverLostReplica(const ZooKeeperPtr & current_zookeep
|
||||
}
|
||||
|
||||
std::map<String, String> DatabaseReplicated::tryGetConsistentMetadataSnapshot(const ZooKeeperPtr & zookeeper, UInt32 & max_log_ptr)
|
||||
{
|
||||
return getConsistentMetadataSnapshotImpl(zookeeper, {}, /* max_retries= */ 10, max_log_ptr);
|
||||
}
|
||||
|
||||
std::map<String, String> DatabaseReplicated::getConsistentMetadataSnapshotImpl(
|
||||
const ZooKeeperPtr & zookeeper,
|
||||
const FilterByNameFunction & filter_by_table_name,
|
||||
size_t max_retries,
|
||||
UInt32 & max_log_ptr) const
|
||||
{
|
||||
std::map<String, String> table_name_to_metadata;
|
||||
constexpr int max_retries = 10;
|
||||
int iteration = 0;
|
||||
size_t iteration = 0;
|
||||
while (++iteration <= max_retries)
|
||||
{
|
||||
table_name_to_metadata.clear();
|
||||
LOG_DEBUG(log, "Trying to get consistent metadata snapshot for log pointer {}", max_log_ptr);
|
||||
Strings table_names = zookeeper->getChildren(zookeeper_path + "/metadata");
|
||||
|
||||
Strings escaped_table_names;
|
||||
escaped_table_names = zookeeper->getChildren(zookeeper_path + "/metadata");
|
||||
if (filter_by_table_name)
|
||||
std::erase_if(escaped_table_names, [&](const String & table) { return !filter_by_table_name(unescapeForFileName(table)); });
|
||||
|
||||
std::vector<zkutil::ZooKeeper::FutureGet> futures;
|
||||
futures.reserve(table_names.size());
|
||||
for (const auto & table : table_names)
|
||||
futures.reserve(escaped_table_names.size());
|
||||
for (const auto & table : escaped_table_names)
|
||||
futures.emplace_back(zookeeper->asyncTryGet(zookeeper_path + "/metadata/" + table));
|
||||
|
||||
for (size_t i = 0; i < table_names.size(); ++i)
|
||||
for (size_t i = 0; i < escaped_table_names.size(); ++i)
|
||||
{
|
||||
auto res = futures[i].get();
|
||||
if (res.error != Coordination::Error::ZOK)
|
||||
break;
|
||||
table_name_to_metadata.emplace(unescapeForFileName(table_names[i]), res.data);
|
||||
table_name_to_metadata.emplace(unescapeForFileName(escaped_table_names[i]), res.data);
|
||||
}
|
||||
|
||||
UInt32 new_max_log_ptr = parse<UInt32>(zookeeper->get(zookeeper_path + "/max_log_ptr"));
|
||||
if (new_max_log_ptr == max_log_ptr && table_names.size() == table_name_to_metadata.size())
|
||||
if (new_max_log_ptr == max_log_ptr && escaped_table_names.size() == table_name_to_metadata.size())
|
||||
break;
|
||||
|
||||
if (max_log_ptr < new_max_log_ptr)
|
||||
@ -1142,13 +1171,13 @@ std::map<String, String> DatabaseReplicated::tryGetConsistentMetadataSnapshot(co
|
||||
else
|
||||
{
|
||||
chassert(max_log_ptr == new_max_log_ptr);
|
||||
chassert(table_names.size() != table_name_to_metadata.size());
|
||||
chassert(escaped_table_names.size() != table_name_to_metadata.size());
|
||||
LOG_DEBUG(log, "Cannot get metadata of some tables due to ZooKeeper error, will retry");
|
||||
}
|
||||
}
|
||||
|
||||
if (max_retries < iteration)
|
||||
throw Exception(ErrorCodes::DATABASE_REPLICATION_FAILED, "Cannot get consistent metadata snapshot");
|
||||
throw Exception(ErrorCodes::CANNOT_GET_REPLICATED_DATABASE_SNAPSHOT, "Cannot get consistent metadata snapshot");
|
||||
|
||||
LOG_DEBUG(log, "Got consistent metadata snapshot for log pointer {}", max_log_ptr);
|
||||
|
||||
@ -1221,6 +1250,8 @@ void DatabaseReplicated::drop(ContextPtr context_)
|
||||
return;
|
||||
}
|
||||
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
auto current_zookeeper = getZooKeeper();
|
||||
current_zookeeper->set(replica_path, DROPPED_MARK, -1);
|
||||
createEmptyLogEntry(current_zookeeper);
|
||||
@ -1238,6 +1269,7 @@ void DatabaseReplicated::drop(ContextPtr context_)
|
||||
|
||||
void DatabaseReplicated::stopReplication()
|
||||
{
|
||||
waitDatabaseStarted(/* no_throw = */ true);
|
||||
if (ddl_worker)
|
||||
ddl_worker->shutdown();
|
||||
}
|
||||
@ -1253,6 +1285,8 @@ void DatabaseReplicated::shutdown()
|
||||
|
||||
void DatabaseReplicated::dropTable(ContextPtr local_context, const String & table_name, bool sync)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
auto txn = local_context->getZooKeeperMetadataTransaction();
|
||||
assert(!ddl_worker || !ddl_worker->isCurrentlyActive() || txn || startsWith(table_name, ".inner_id."));
|
||||
if (txn && txn->isInitialQuery() && !txn->isCreateOrReplaceQuery())
|
||||
@ -1295,6 +1329,8 @@ void DatabaseReplicated::renameTable(ContextPtr local_context, const String & ta
|
||||
if (exchange && !to_database.isTableExist(to_table_name, local_context))
|
||||
throw Exception(ErrorCodes::UNKNOWN_TABLE, "Table {} does not exist", to_table_name);
|
||||
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
String statement = readMetadataFile(table_name);
|
||||
String statement_to;
|
||||
if (exchange)
|
||||
@ -1395,6 +1431,8 @@ bool DatabaseReplicated::canExecuteReplicatedMetadataAlter() const
|
||||
|
||||
void DatabaseReplicated::detachTablePermanently(ContextPtr local_context, const String & table_name)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
auto txn = local_context->getZooKeeperMetadataTransaction();
|
||||
assert(!ddl_worker->isCurrentlyActive() || txn);
|
||||
if (txn && txn->isInitialQuery())
|
||||
@ -1418,6 +1456,8 @@ void DatabaseReplicated::detachTablePermanently(ContextPtr local_context, const
|
||||
|
||||
void DatabaseReplicated::removeDetachedPermanentlyFlag(ContextPtr local_context, const String & table_name, const String & table_metadata_path, bool attach)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
auto txn = local_context->getZooKeeperMetadataTransaction();
|
||||
assert(!ddl_worker->isCurrentlyActive() || txn);
|
||||
if (txn && txn->isInitialQuery() && attach)
|
||||
@ -1454,23 +1494,19 @@ String DatabaseReplicated::readMetadataFile(const String & table_name) const
|
||||
std::vector<std::pair<ASTPtr, StoragePtr>>
|
||||
DatabaseReplicated::getTablesForBackup(const FilterByNameFunction & filter, const ContextPtr &) const
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
/// Here we read metadata from ZooKeeper. We could do that by simple call of DatabaseAtomic::getTablesForBackup() however
|
||||
/// reading from ZooKeeper is better because thus we won't be dependent on how fast the replication queue of this database is.
|
||||
std::vector<std::pair<ASTPtr, StoragePtr>> res;
|
||||
auto zookeeper = getContext()->getZooKeeper();
|
||||
auto escaped_table_names = zookeeper->getChildren(zookeeper_path + "/metadata");
|
||||
for (const auto & escaped_table_name : escaped_table_names)
|
||||
UInt32 snapshot_version = parse<UInt32>(zookeeper->get(zookeeper_path + "/max_log_ptr"));
|
||||
auto snapshot = getConsistentMetadataSnapshotImpl(zookeeper, filter, /* max_retries= */ 20, snapshot_version);
|
||||
|
||||
std::vector<std::pair<ASTPtr, StoragePtr>> res;
|
||||
for (const auto & [table_name, metadata] : snapshot)
|
||||
{
|
||||
String table_name = unescapeForFileName(escaped_table_name);
|
||||
if (!filter(table_name))
|
||||
continue;
|
||||
|
||||
String zk_metadata;
|
||||
if (!zookeeper->tryGet(zookeeper_path + "/metadata/" + escaped_table_name, zk_metadata))
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP, "Metadata for table {} was not found in ZooKeeper", table_name);
|
||||
|
||||
ParserCreateQuery parser;
|
||||
auto create_table_query = parseQuery(parser, zk_metadata, 0, getContext()->getSettingsRef().max_parser_depth);
|
||||
auto create_table_query = parseQuery(parser, metadata, 0, getContext()->getSettingsRef().max_parser_depth);
|
||||
|
||||
auto & create = create_table_query->as<ASTCreateQuery &>();
|
||||
create.attach = false;
|
||||
@ -1501,6 +1537,8 @@ void DatabaseReplicated::createTableRestoredFromBackup(
|
||||
std::shared_ptr<IRestoreCoordination> restore_coordination,
|
||||
UInt64 timeout_ms)
|
||||
{
|
||||
waitDatabaseStarted(false);
|
||||
|
||||
/// Because of the replication multiple nodes can try to restore the same tables again and failed with "Table already exists"
|
||||
/// because of some table could be restored already on other node and then replicated to this node.
|
||||
/// To solve this problem we use the restore coordination: the first node calls
|
||||
|
@ -68,11 +68,9 @@ public:
|
||||
|
||||
void drop(ContextPtr /*context*/) override;
|
||||
|
||||
void loadStoredObjects(ContextMutablePtr context, LoadingStrictnessLevel mode) override;
|
||||
void beforeLoadingMetadata(ContextMutablePtr context_, LoadingStrictnessLevel mode) override;
|
||||
|
||||
void beforeLoadingMetadata(ContextMutablePtr context, LoadingStrictnessLevel mode) override;
|
||||
|
||||
void startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel mode) override;
|
||||
LoadTaskPtr startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode) override;
|
||||
|
||||
void shutdown() override;
|
||||
|
||||
@ -106,8 +104,12 @@ private:
|
||||
void checkQueryValid(const ASTPtr & query, ContextPtr query_context) const;
|
||||
|
||||
void recoverLostReplica(const ZooKeeperPtr & current_zookeeper, UInt32 our_log_ptr, UInt32 & max_log_ptr);
|
||||
|
||||
std::map<String, String> tryGetConsistentMetadataSnapshot(const ZooKeeperPtr & zookeeper, UInt32 & max_log_ptr);
|
||||
|
||||
std::map<String, String> getConsistentMetadataSnapshotImpl(const ZooKeeperPtr & zookeeper, const FilterByNameFunction & filter_by_table_name,
|
||||
size_t max_retries, UInt32 & max_log_ptr) const;
|
||||
|
||||
ASTPtr parseQueryFromMetadataInZooKeeper(const String & node_name, const String & query);
|
||||
String readMetadataFile(const String & table_name) const;
|
||||
|
||||
@ -124,6 +126,8 @@ private:
|
||||
UInt64 getMetadataHash(const String & table_name) const;
|
||||
bool checkDigestValid(const ContextPtr & local_context, bool debug_check = true) const TSA_REQUIRES(metadata_mutex);
|
||||
|
||||
void waitDatabaseStarted(bool no_throw) const override;
|
||||
|
||||
String zookeeper_path;
|
||||
String shard_name;
|
||||
String replica_name;
|
||||
@ -150,6 +154,8 @@ private:
|
||||
UInt64 tables_metadata_digest TSA_GUARDED_BY(metadata_mutex);
|
||||
|
||||
mutable ClusterPtr cluster;
|
||||
|
||||
LoadTaskPtr startup_replicated_database_task;
|
||||
};
|
||||
|
||||
}
|
||||
|
@ -25,7 +25,6 @@ namespace ErrorCodes
|
||||
extern const int NOT_IMPLEMENTED;
|
||||
extern const int LOGICAL_ERROR;
|
||||
extern const int CANNOT_GET_CREATE_TABLE_QUERY;
|
||||
extern const int INCONSISTENT_METADATA_FOR_BACKUP;
|
||||
}
|
||||
|
||||
void applyMetadataChangesToCreateQuery(const ASTPtr & query, const StorageInMemoryMetadata & metadata)
|
||||
@ -199,11 +198,8 @@ bool DatabaseWithOwnTablesBase::isTableExist(const String & table_name, ContextP
|
||||
|
||||
StoragePtr DatabaseWithOwnTablesBase::tryGetTable(const String & table_name, ContextPtr) const
|
||||
{
|
||||
std::lock_guard lock(mutex);
|
||||
auto it = tables.find(table_name);
|
||||
if (it != tables.end())
|
||||
return it->second;
|
||||
return {};
|
||||
waitTableStarted(table_name);
|
||||
return tryGetTableNoWait(table_name);
|
||||
}
|
||||
|
||||
DatabaseTablesIteratorPtr DatabaseWithOwnTablesBase::getTablesIterator(ContextPtr, const FilterByNameFunction & filter_by_table_name) const
|
||||
@ -349,16 +345,22 @@ std::vector<std::pair<ASTPtr, StoragePtr>> DatabaseWithOwnTablesBase::getTablesF
|
||||
|
||||
auto create_table_query = tryGetCreateTableQuery(it->name(), local_context);
|
||||
if (!create_table_query)
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP,
|
||||
"Couldn't get a create query for table {}.{}",
|
||||
backQuoteIfNeed(getDatabaseName()), backQuoteIfNeed(it->name()));
|
||||
{
|
||||
LOG_WARNING(log, "Couldn't get a create query for table {}.{}",
|
||||
backQuoteIfNeed(getDatabaseName()), backQuoteIfNeed(it->name()));
|
||||
continue;
|
||||
}
|
||||
|
||||
const auto & create = create_table_query->as<const ASTCreateQuery &>();
|
||||
if (create.getTable() != it->name())
|
||||
throw Exception(ErrorCodes::INCONSISTENT_METADATA_FOR_BACKUP,
|
||||
"Got a create query with unexpected name {} for table {}.{}",
|
||||
backQuoteIfNeed(create.getTable()),
|
||||
backQuoteIfNeed(getDatabaseName()), backQuoteIfNeed(it->name()));
|
||||
auto * create = create_table_query->as<ASTCreateQuery>();
|
||||
if (create->getTable() != it->name())
|
||||
{
|
||||
/// Probably the database has been just renamed. Use the older name for backup to keep the backup consistent.
|
||||
LOG_WARNING(log, "Got a create query with unexpected name {} for table {}.{}",
|
||||
backQuoteIfNeed(create->getTable()), backQuoteIfNeed(getDatabaseName()), backQuoteIfNeed(it->name()));
|
||||
create_table_query = create_table_query->clone();
|
||||
create = create_table_query->as<ASTCreateQuery>();
|
||||
create->setTable(it->name());
|
||||
}
|
||||
|
||||
storage->adjustCreateQueryForBackup(create_table_query);
|
||||
res.emplace_back(create_table_query, storage);
|
||||
@ -376,4 +378,13 @@ void DatabaseWithOwnTablesBase::createTableRestoredFromBackup(const ASTPtr & cre
|
||||
interpreter.execute();
|
||||
}
|
||||
|
||||
StoragePtr DatabaseWithOwnTablesBase::tryGetTableNoWait(const String & table_name) const
|
||||
{
|
||||
std::lock_guard lock(mutex);
|
||||
auto it = tables.find(table_name);
|
||||
if (it != tables.end())
|
||||
return it->second;
|
||||
return {};
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -52,6 +52,7 @@ protected:
|
||||
void attachTableUnlocked(const String & table_name, const StoragePtr & table) TSA_REQUIRES(mutex);
|
||||
StoragePtr detachTableUnlocked(const String & table_name) TSA_REQUIRES(mutex);
|
||||
StoragePtr getTableUnlocked(const String & table_name) const TSA_REQUIRES(mutex);
|
||||
StoragePtr tryGetTableNoWait(const String & table_name) const;
|
||||
};
|
||||
|
||||
}
|
||||
|
@ -8,6 +8,8 @@
|
||||
#include <Storages/IStorage_fwd.h>
|
||||
#include <base/types.h>
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/AsyncLoader.h>
|
||||
#include <Common/PoolId.h>
|
||||
#include <Common/ThreadPool_fwd.h>
|
||||
#include <QueryPipeline/BlockIO.h>
|
||||
|
||||
@ -75,12 +77,17 @@ private:
|
||||
Tables tables;
|
||||
Tables::iterator it;
|
||||
|
||||
// Tasks to wait before returning a table
|
||||
using Tasks = std::unordered_map<String, LoadTaskPtr>;
|
||||
Tasks tasks;
|
||||
|
||||
protected:
|
||||
DatabaseTablesSnapshotIterator(DatabaseTablesSnapshotIterator && other) noexcept
|
||||
: IDatabaseTablesIterator(std::move(other.database_name))
|
||||
{
|
||||
size_t idx = std::distance(other.tables.begin(), other.it);
|
||||
std::swap(tables, other.tables);
|
||||
std::swap(tasks, other.tasks);
|
||||
other.it = other.tables.end();
|
||||
it = tables.begin();
|
||||
std::advance(it, idx);
|
||||
@ -103,7 +110,17 @@ public:
|
||||
|
||||
const String & name() const override { return it->first; }
|
||||
|
||||
const StoragePtr & table() const override { return it->second; }
|
||||
const StoragePtr & table() const override
|
||||
{
|
||||
if (auto task = tasks.find(it->first); task != tasks.end())
|
||||
waitLoad(currentPoolOr(TablesLoaderForegroundPoolId), task->second);
|
||||
return it->second;
|
||||
}
|
||||
|
||||
void setLoadTasks(const Tasks & tasks_)
|
||||
{
|
||||
tasks = tasks_;
|
||||
}
|
||||
};
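With the iterator change above, code that walks a database through getTablesIterator() no longer has to care about startup ordering: table() blocks on the corresponding startup task, if one was registered via setLoadTasks(). A hedged usage sketch, where process() is a placeholder for the caller's own logic and isValid()/next() are the usual iterator methods:

for (auto it = database->getTablesIterator(context, {}); it->isValid(); it->next())
{
    StoragePtr table = it->table();   /// may wait for the table's startup LoadTask to finish
    process(it->name(), table);       /// placeholder
}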
using DatabaseTablesIteratorPtr = std::unique_ptr<IDatabaseTablesIterator>;
|
||||
@ -151,13 +168,59 @@ public:
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented");
|
||||
}
|
||||
|
||||
virtual void loadTableFromMetadata(ContextMutablePtr /*local_context*/, const String & /*file_path*/, const QualifiedTableName & /*name*/, const ASTPtr & /*ast*/,
|
||||
virtual void loadTableFromMetadata(
|
||||
ContextMutablePtr /*local_context*/,
|
||||
const String & /*file_path*/,
|
||||
const QualifiedTableName & /*name*/,
|
||||
const ASTPtr & /*ast*/,
|
||||
LoadingStrictnessLevel /*mode*/)
|
||||
{
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented");
|
||||
}
|
||||
|
||||
virtual void startupTables(ThreadPool & /*thread_pool*/, LoadingStrictnessLevel /*mode*/) {}
|
||||
/// Create a task to load table `name` after specified dependencies `startup_after` using `async_loader`.
|
||||
/// `load_after` must contain the tasks returned by `loadTableFromMetadataAsync()` for dependent tables (see TablesLoader).
|
||||
/// The returned task is also stored inside the database for cancellation on destruction.
|
||||
virtual LoadTaskPtr loadTableFromMetadataAsync(
|
||||
AsyncLoader & /*async_loader*/,
|
||||
LoadJobSet /*load_after*/,
|
||||
ContextMutablePtr /*local_context*/,
|
||||
const String & /*file_path*/,
|
||||
const QualifiedTableName & /*name*/,
|
||||
const ASTPtr & /*ast*/,
|
||||
LoadingStrictnessLevel /*mode*/)
|
||||
{
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented");
|
||||
}
|
||||
|
||||
/// Create a task to startup table `name` after specified dependencies `startup_after` using `async_loader`.
|
||||
/// The returned task is also stored inside the database for cancellation on destruction.
|
||||
[[nodiscard]] virtual LoadTaskPtr startupTableAsync(
|
||||
AsyncLoader & /*async_loader*/,
|
||||
LoadJobSet /*startup_after*/,
|
||||
const QualifiedTableName & /*name*/,
|
||||
LoadingStrictnessLevel /*mode*/)
|
||||
{
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented");
|
||||
}
|
||||
|
||||
/// Create a task to startup database after specified dependencies `startup_after` using `async_loader`.
|
||||
/// `startup_after` must contain all the tasks returned by `startupTableAsync()` for every table (see TablesLoader).
|
||||
/// The returned task is also stored inside the database for cancellation on destruction.
|
||||
[[nodiscard]] virtual LoadTaskPtr startupDatabaseAsync(
|
||||
AsyncLoader & /*async_loader*/,
|
||||
LoadJobSet /*startup_after*/,
|
||||
LoadingStrictnessLevel /*mode*/)
|
||||
{
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Not implemented");
|
||||
}
|
||||
|
||||
/// Waits for specific table to be started up, i.e. task returned by `startupTableAsync()` is done
|
||||
virtual void waitTableStarted(const String & /*name*/) const {}
|
||||
|
||||
/// Waits for the database to be started up, i.e. task returned by `startupDatabaseAsync()` is done
|
||||
/// NOTE: `no_throw` wait should be used during shutdown to (1) prevent race with startup and (2) avoid exceptions if startup failed
|
||||
virtual void waitDatabaseStarted(bool /*no_throw*/) const {}
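Taken together, the contracts documented above describe a per-table task graph: metadata loading runs after the loading of referenced tables, startup runs after loading, and the database-level startup runs after every table's startup. A hedged sketch of how a driver (such as the TablesLoader referenced in the comments) could wire a single table through this interface; db, async_loader, context, metadata_file, table_name, ast and mode are assumed to be in scope.

auto load = db->loadTableFromMetadataAsync(
    async_loader, /* load_after = */ {}, context, metadata_file, table_name, ast, mode);
auto startup = db->startupTableAsync(async_loader, load->goals(), table_name, mode);
auto db_startup = db->startupDatabaseAsync(async_loader, startup->goals(), mode);

/// A query that needs the table before background startup has finished
/// forces it into the foreground pool:
db->waitTableStarted(table_name.table);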
/// Check the existence of the table in memory (attached).
|
||||
virtual bool isTableExist(const String & name, ContextPtr context) const = 0;
|
||||
|
@ -10,6 +10,7 @@
# include <Parsers/ASTCreateQuery.h>
# include <Storages/StorageMaterializedMySQL.h>
# include <Common/setThreadName.h>
# include <Common/PoolId.h>
# include <filesystem>

namespace fs = std::filesystem;
@ -63,16 +64,29 @@ void DatabaseMaterializedMySQL::setException(const std::exception_ptr & exceptio
exception = exception_;
}

void DatabaseMaterializedMySQL::startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel mode)
LoadTaskPtr DatabaseMaterializedMySQL::startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode)
{
LOG_TRACE(log, "Starting MaterializeMySQL tables");
DatabaseAtomic::startupTables(thread_pool, mode);
auto base = DatabaseAtomic::startupDatabaseAsync(async_loader, std::move(startup_after), mode);
auto job = makeLoadJob(
base->goals(),
TablesLoaderBackgroundStartupPoolId,
fmt::format("startup MaterializedMySQL database {}", getDatabaseName()),
[this, mode] (AsyncLoader &, const LoadJobPtr &)
{
LOG_TRACE(log, "Starting MaterializeMySQL database");
if (mode < LoadingStrictnessLevel::FORCE_ATTACH)
materialize_thread.assertMySQLAvailable();

if (mode < LoadingStrictnessLevel::FORCE_ATTACH)
materialize_thread.assertMySQLAvailable();
materialize_thread.startSynchronization();
started_up = true;
});
return startup_mysql_database_task = makeLoadTask(async_loader, {job});
}

materialize_thread.startSynchronization();
started_up = true;
void DatabaseMaterializedMySQL::waitDatabaseStarted(bool no_throw) const
{
if (startup_mysql_database_task)
waitLoad(currentPoolOr(TablesLoaderForegroundPoolId), startup_mysql_database_task, no_throw);
}

void DatabaseMaterializedMySQL::createTable(ContextPtr context_, const String & name, const StoragePtr & table, const ASTPtr & query)
@ -160,6 +174,7 @@ void DatabaseMaterializedMySQL::checkIsInternalQuery(ContextPtr context_, const

void DatabaseMaterializedMySQL::stopReplication()
{
waitDatabaseStarted(/* no_throw = */ true);
materialize_thread.stopSynchronization();
started_up = false;
}

@ -46,10 +46,13 @@ protected:

std::atomic_bool started_up{false};

LoadTaskPtr startup_mysql_database_task;

public:
String getEngineName() const override { return "MaterializedMySQL"; }

void startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel mode) override;
LoadTaskPtr startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode) override;
void waitDatabaseStarted(bool no_throw) const override;

void createTable(ContextPtr context_, const String & name, const StoragePtr & table, const ASTPtr & query) override;

@ -407,7 +407,6 @@ String DatabaseMySQL::getMetadataPath() const

void DatabaseMySQL::loadStoredObjects(ContextMutablePtr, LoadingStrictnessLevel /*mode*/)
{

std::lock_guard lock{mutex};
fs::directory_iterator iter(getMetadataPath());

@ -7,6 +7,7 @@

#include <Common/logger_useful.h>
#include <Common/Macros.h>
#include <Common/PoolId.h>
#include <Core/UUID.h>
#include <DataTypes/DataTypeNullable.h>
#include <DataTypes/DataTypeArray.h>
@ -138,12 +139,25 @@ void DatabaseMaterializedPostgreSQL::startSynchronization()
}


void DatabaseMaterializedPostgreSQL::startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel mode)
LoadTaskPtr DatabaseMaterializedPostgreSQL::startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode)
{
DatabaseAtomic::startupTables(thread_pool, mode);
startup_task->activateAndSchedule();
auto base = DatabaseAtomic::startupDatabaseAsync(async_loader, std::move(startup_after), mode);
auto job = makeLoadJob(
base->goals(),
TablesLoaderBackgroundStartupPoolId,
fmt::format("startup MaterializedMySQL database {}", getDatabaseName()),
[this] (AsyncLoader &, const LoadJobPtr &)
{
startup_task->activateAndSchedule();
});
return startup_postgresql_database_task = makeLoadTask(async_loader, {job});
}

void DatabaseMaterializedPostgreSQL::waitDatabaseStarted(bool no_throw) const
{
if (startup_postgresql_database_task)
waitLoad(currentPoolOr(TablesLoaderForegroundPoolId), startup_postgresql_database_task, no_throw);
}

void DatabaseMaterializedPostgreSQL::applySettingsChanges(const SettingsChanges & settings_changes, ContextPtr query_context)
{
@ -182,9 +196,9 @@ void DatabaseMaterializedPostgreSQL::applySettingsChanges(const SettingsChanges

StoragePtr DatabaseMaterializedPostgreSQL::tryGetTable(const String & name, ContextPtr local_context) const
{
/// In otder to define which table access is needed - to MaterializedPostgreSQL table (only in case of SELECT queries) or
/// to its nested ReplacingMergeTree table (in all other cases), the context of a query os modified.
/// Also if materialzied_tables set is empty - it means all access is done to ReplacingMergeTree tables - it is a case after
/// In order to define which table access is needed - to MaterializedPostgreSQL table (only in case of SELECT queries) or
/// to its nested ReplacingMergeTree table (in all other cases), the context of a query is modified.
/// Also if materialized_tables set is empty - it means all access is done to ReplacingMergeTree tables - it is a case after
/// replication_handler was shutdown.
if (local_context->isInternalQuery() || materialized_tables.empty())
{
@ -422,6 +436,8 @@ void DatabaseMaterializedPostgreSQL::shutdown()

void DatabaseMaterializedPostgreSQL::stopReplication()
{
waitDatabaseStarted(/* no_throw = */ true);

std::lock_guard lock(handler_mutex);
if (replication_handler)
replication_handler->shutdown();

@ -40,7 +40,8 @@ public:

String getMetadataPath() const override { return metadata_path; }

void startupTables(ThreadPool & thread_pool, LoadingStrictnessLevel mode) override;
LoadTaskPtr startupDatabaseAsync(AsyncLoader & async_loader, LoadJobSet startup_after, LoadingStrictnessLevel mode) override;
void waitDatabaseStarted(bool no_throw) const override;

DatabaseTablesIteratorPtr
getTablesIterator(ContextPtr context, const DatabaseOnDisk::FilterByNameFunction & filter_by_table_name) const override;
@ -92,6 +93,8 @@ private:

BackgroundSchedulePool::TaskHolder startup_task;
bool shutdown_called = false;

LoadTaskPtr startup_postgresql_database_task;
};

}

@ -699,22 +699,6 @@ std::vector<StorageID> TablesDependencyGraph::getTablesSortedByDependency() cons
}


std::vector<std::vector<StorageID>> TablesDependencyGraph::getTablesSortedByDependencyForParallel() const
{
std::vector<std::vector<StorageID>> res;
std::optional<size_t> last_level;
for (const auto * node : getNodesSortedByLevel())
{
if (node->level != last_level)
res.emplace_back();
auto & table_ids = res.back();
table_ids.emplace_back(node->storage_id);
last_level = node->level;
}
return res;
}


void TablesDependencyGraph::log() const
{
if (nodes.empty())

@ -107,9 +107,6 @@ public:
/// tables which depend on the tables which depend on the tables without dependencies, and so on.
std::vector<StorageID> getTablesSortedByDependency() const;

/// The same as getTablesSortedByDependency() but make a list for parallel processing.
std::vector<std::vector<StorageID>> getTablesSortedByDependencyForParallel() const;

/// Outputs information about this graph as a bunch of logging messages.
void log() const;

@ -7,16 +7,9 @@
#include <Interpreters/ExternalDictionariesLoader.h>
#include <Poco/Util/AbstractConfiguration.h>
#include <Common/logger_useful.h>
#include <Common/ThreadPool.h>
#include <Common/CurrentMetrics.h>
#include <numeric>

namespace CurrentMetrics
{
extern const Metric TablesLoaderThreads;
extern const Metric TablesLoaderThreadsActive;
extern const Metric TablesLoaderThreadsScheduled;
}

namespace DB
{
@ -33,18 +26,18 @@ TablesLoader::TablesLoader(ContextMutablePtr global_context_, Databases database
, referential_dependencies("ReferentialDeps")
, loading_dependencies("LoadingDeps")
, all_loading_dependencies("LoadingDeps")
, pool(CurrentMetrics::TablesLoaderThreads, CurrentMetrics::TablesLoaderThreadsActive, CurrentMetrics::TablesLoaderThreadsScheduled)
, async_loader(global_context->getAsyncLoader())
{
metadata.default_database = global_context->getCurrentDatabase();
log = &Poco::Logger::get("TablesLoader");
}


void TablesLoader::loadTables()
LoadTaskPtrs TablesLoader::loadTablesAsync(LoadJobSet load_after)
{
bool need_resolve_dependencies = !global_context->getConfigRef().has("ignore_table_dependencies_on_metadata_loading");

/// Load all Lazy, MySQl, PostgreSQL, SQLite, etc databases first.
/// Load all Lazy, MySQL, PostgreSQL, SQLite, etc databases first.
/// Note that this loading is NOT async because it should be fast and it cannot have any dependencies
for (auto & database : databases)
{
if (need_resolve_dependencies && database.second->supportsLoadingInTopologicalOrder())
@ -54,7 +47,9 @@ void TablesLoader::loadTables()
}

if (databases_to_load.empty())
return;
return {};

LoadTaskPtrs result;

/// Read and parse metadata from Ordinary, Atomic, Materialized*, Replicated, etc databases. Build dependency graph.
for (auto & database_name : databases_to_load)
@ -77,17 +72,66 @@ void TablesLoader::loadTables()
/// Remove tables that do not exist
removeUnresolvableDependencies();

loadTablesInTopologicalOrder(pool);
/// Compatibility setting which should be enabled by default on attach
/// Otherwise server will be unable to start for some old-format of IPv6/IPv4 types of columns
ContextMutablePtr load_context = Context::createCopy(global_context);
load_context->setSetting("cast_ipv4_ipv6_default_on_conversion_error", 1);

for (const auto & table_id : all_loading_dependencies.getTablesSortedByDependency())
{
/// Gather tasks to load before this table
LoadTaskPtrs load_dependency_tasks;
for (const StorageID & dependency_id : all_loading_dependencies.getDependencies(table_id))
load_dependency_tasks.push_back(load_table[dependency_id.getFullTableName()]);

// Make load table task
auto table_name = table_id.getQualifiedName();
const auto & path_and_query = metadata.parsed_tables[table_name];
auto task = databases[table_name.database]->loadTableFromMetadataAsync(
async_loader,
getGoals(load_dependency_tasks, load_after),
load_context,
path_and_query.path,
table_name,
path_and_query.ast,
strictness_mode);
load_table[table_id.getFullTableName()] = task;
result.push_back(task);
}

return result;
}


void TablesLoader::startupTables()
LoadTaskPtrs TablesLoader::startupTablesAsync(LoadJobSet startup_after)
{
/// Startup tables after all tables are loaded. Background tasks (merges, mutations, etc) may slow down data parts loading.
for (auto & database : databases)
database.second->startupTables(pool, strictness_mode);
}
LoadTaskPtrs result;
std::unordered_map<String, LoadTaskPtrs> startup_database; /// database name -> all its tables startup tasks

for (const auto & table_id : all_loading_dependencies.getTables())
{
// Make startup table task
auto table_name = table_id.getQualifiedName();
auto task = databases[table_name.database]->startupTableAsync(
async_loader,
joinJobs(load_table[table_id.getFullTableName()]->goals(), startup_after),
table_name,
strictness_mode);
startup_database[table_name.database].push_back(task);
result.push_back(task);
}

/// Make startup database tasks
for (auto & database_name : databases_to_load)
{
auto task = databases[database_name]->startupDatabaseAsync(
async_loader,
getGoals(startup_database[database_name], startup_after),
strictness_mode);
result.push_back(task);
}

return result;
}

void TablesLoader::buildDependencyGraph()
{
@ -111,7 +155,6 @@ void TablesLoader::buildDependencyGraph()
all_loading_dependencies.log();
}


void TablesLoader::removeUnresolvableDependencies()
{
auto need_exclude_dependency = [this](const StorageID & table_id)
@ -165,39 +208,4 @@ void TablesLoader::removeUnresolvableDependencies()
all_loading_dependencies.checkNoCyclicDependencies();
}


void TablesLoader::loadTablesInTopologicalOrder(ThreadPool & pool_)
{
/// Compatibility setting which should be enabled by default on attach
/// Otherwise server will be unable to start for some old-format of IPv6/IPv4 types of columns
ContextMutablePtr load_context = Context::createCopy(global_context);
load_context->setSetting("cast_ipv4_ipv6_default_on_conversion_error", 1);

/// Load tables in parallel.
auto tables_to_load = all_loading_dependencies.getTablesSortedByDependencyForParallel();

for (size_t level = 0; level != tables_to_load.size(); ++level)
{
startLoadingTables(pool_, load_context, tables_to_load[level], level);
pool_.wait();
}
}

void TablesLoader::startLoadingTables(ThreadPool & pool_, ContextMutablePtr load_context, const std::vector<StorageID> & tables_to_load, size_t level)
{
size_t total_tables = metadata.parsed_tables.size();

LOG_INFO(log, "Loading {} tables with dependency level {}", tables_to_load.size(), level);

for (const auto & table_id : tables_to_load)
{
pool_.scheduleOrThrowOnError([this, load_context, total_tables, table_name = table_id.getQualifiedName()]()
{
const auto & path_and_query = metadata.parsed_tables[table_name];
databases[table_name.database]->loadTableFromMetadata(load_context, path_and_query.path, table_name, path_and_query.ast, strictness_mode);
logAboutProgress(log, ++tables_processed, total_tables, stopwatch);
});
}
}

}

@ -10,7 +10,8 @@
#include <Interpreters/Context_fwd.h>
#include <Parsers/IAST_fwd.h>
#include <Common/Stopwatch.h>
#include <Common/ThreadPool.h>
#include <Common/AsyncLoader.h>


namespace Poco
{
@ -22,9 +23,6 @@ class AtomicStopwatch;
namespace DB
{

void logAboutProgress(Poco::Logger * log, size_t processed, size_t total, AtomicStopwatch & watch);


class IDatabase;
using DatabasePtr = std::shared_ptr<IDatabase>;

@ -57,8 +55,13 @@ public:
TablesLoader(ContextMutablePtr global_context_, Databases databases_, LoadingStrictnessLevel strictness_mode_);
TablesLoader() = delete;

void loadTables();
void startupTables();
/// Create tasks for async loading of all tables in `databases` after specified jobs `load_after`.
[[nodiscard]] LoadTaskPtrs loadTablesAsync(LoadJobSet load_after = {});

/// Create tasks for async startup of all tables in `databases` after specified jobs `startup_after`.
/// Note that for every table startup an extra dependency on that table loading will be added along with `startup_after`.
/// Must be called only after `loadTablesAsync()`.
[[nodiscard]] LoadTaskPtrs startupTablesAsync(LoadJobSet startup_after = {});

private:
ContextMutablePtr global_context;
@ -74,12 +77,13 @@ private:
std::atomic<size_t> tables_processed{0};
AtomicStopwatch stopwatch;

ThreadPool pool;
AsyncLoader & async_loader;
std::unordered_map<String, LoadTaskPtr> load_table; /// table_id -> load task

void buildDependencyGraph();
void removeUnresolvableDependencies();
void loadTablesInTopologicalOrder(ThreadPool & pool);
void startLoadingTables(ThreadPool & pool, ContextMutablePtr load_context, const std::vector<StorageID> & tables_to_load, size_t level);
void loadTablesInTopologicalOrder();
void startLoadingTables(ContextMutablePtr load_context, const std::vector<StorageID> & tables_to_load, size_t level);
};

}

@ -22,7 +22,7 @@ WriteBufferFromTemporaryFile::WriteBufferFromTemporaryFile(TemporaryFileOnDiskHo
|
||||
class ReadBufferFromTemporaryWriteBuffer : public ReadBufferFromFile
|
||||
{
|
||||
public:
|
||||
static ReadBufferPtr createFrom(WriteBufferFromTemporaryFile * origin)
|
||||
static std::unique_ptr<ReadBufferFromTemporaryWriteBuffer> createFrom(WriteBufferFromTemporaryFile * origin)
|
||||
{
|
||||
int fd = origin->getFD();
|
||||
std::string file_name = origin->getFileName();
|
||||
@ -32,7 +32,7 @@ public:
|
||||
throwFromErrnoWithPath("Cannot reread temporary file " + file_name, file_name,
|
||||
ErrorCodes::CANNOT_SEEK_THROUGH_FILE);
|
||||
|
||||
return std::make_shared<ReadBufferFromTemporaryWriteBuffer>(fd, file_name, std::move(origin->tmp_file));
|
||||
return std::make_unique<ReadBufferFromTemporaryWriteBuffer>(fd, file_name, std::move(origin->tmp_file));
|
||||
}
|
||||
|
||||
ReadBufferFromTemporaryWriteBuffer(int fd_, const std::string & file_name_, TemporaryFileOnDiskHolder && tmp_file_)
|
||||
@ -43,7 +43,7 @@ public:
|
||||
};
|
||||
|
||||
|
||||
ReadBufferPtr WriteBufferFromTemporaryFile::getReadBufferImpl()
|
||||
std::unique_ptr<ReadBuffer> WriteBufferFromTemporaryFile::getReadBufferImpl()
|
||||
{
|
||||
/// ignore buffer, write all data to file and reread it
|
||||
finalize();
|
||||
|
@ -21,7 +21,7 @@ public:
|
||||
~WriteBufferFromTemporaryFile() override;
|
||||
|
||||
private:
|
||||
std::shared_ptr<ReadBuffer> getReadBufferImpl() override;
|
||||
std::unique_ptr<ReadBuffer> getReadBufferImpl() override;
|
||||
|
||||
TemporaryFileOnDiskHolder tmp_file;
|
||||
|
||||
|
@ -43,6 +43,9 @@ std::unique_ptr<S3::Client> getClient(
|
||||
ContextPtr context,
|
||||
const S3ObjectStorageSettings & settings)
|
||||
{
|
||||
const Settings & global_settings = context->getGlobalContext()->getSettingsRef();
|
||||
const Settings & local_settings = context->getSettingsRef();
|
||||
|
||||
String endpoint = context->getMacros()->expand(config.getString(config_prefix + ".endpoint"));
|
||||
S3::URI uri(endpoint);
|
||||
if (!uri.key.ends_with('/'))
|
||||
@ -51,17 +54,17 @@ std::unique_ptr<S3::Client> getClient(
|
||||
S3::PocoHTTPClientConfiguration client_configuration = S3::ClientFactory::instance().createClientConfiguration(
|
||||
config.getString(config_prefix + ".region", ""),
|
||||
context->getRemoteHostFilter(),
|
||||
static_cast<int>(context->getGlobalContext()->getSettingsRef().s3_max_redirects),
|
||||
static_cast<int>(context->getGlobalContext()->getSettingsRef().s3_retry_attempts),
|
||||
context->getGlobalContext()->getSettingsRef().enable_s3_requests_logging,
|
||||
static_cast<int>(global_settings.s3_max_redirects),
|
||||
static_cast<int>(global_settings.s3_retry_attempts),
|
||||
global_settings.enable_s3_requests_logging,
|
||||
/* for_disk_s3 = */ true,
|
||||
settings.request_settings.get_request_throttler,
|
||||
settings.request_settings.put_request_throttler,
|
||||
uri.uri.getScheme());
|
||||
|
||||
client_configuration.connectTimeoutMs = config.getUInt(config_prefix + ".connect_timeout_ms", 1000);
|
||||
client_configuration.requestTimeoutMs = config.getUInt(config_prefix + ".request_timeout_ms", 30000);
|
||||
client_configuration.maxConnections = config.getUInt(config_prefix + ".max_connections", 100);
|
||||
client_configuration.connectTimeoutMs = config.getUInt(config_prefix + ".connect_timeout_ms", S3::DEFAULT_CONNECT_TIMEOUT_MS);
|
||||
client_configuration.requestTimeoutMs = config.getUInt(config_prefix + ".request_timeout_ms", S3::DEFAULT_REQUEST_TIMEOUT_MS);
|
||||
client_configuration.maxConnections = config.getUInt(config_prefix + ".max_connections", S3::DEFAULT_MAX_CONNECTIONS);
|
||||
client_configuration.endpointOverride = uri.endpoint;
|
||||
client_configuration.http_keep_alive_timeout_ms = config.getUInt(
|
||||
config_prefix + ".http_keep_alive_timeout_ms", DEFAULT_HTTP_KEEP_ALIVE_TIMEOUT * 1000);
|
||||
@ -96,6 +99,7 @@ std::unique_ptr<S3::Client> getClient(
|
||||
return S3::ClientFactory::instance().create(
|
||||
client_configuration,
|
||||
uri.is_virtual_hosted_style,
|
||||
local_settings.s3_disable_checksum,
|
||||
config.getString(config_prefix + ".access_key_id", ""),
|
||||
config.getString(config_prefix + ".secret_access_key", ""),
|
||||
config.getString(config_prefix + ".server_side_encryption_customer_key_base64", ""),
|
||||
|
@ -69,7 +69,7 @@ static void testCascadeBufferRedability(
|
||||
auto rbuf = wbuf_readable.tryGetReadBuffer();
|
||||
ASSERT_FALSE(!rbuf);
|
||||
|
||||
concat.appendBuffer(wrapReadBufferPointer(rbuf));
|
||||
concat.appendBuffer(wrapReadBufferPointer(std::move(rbuf)));
|
||||
}
|
||||
|
||||
std::string decoded_data;
|
||||
|
@ -1,116 +0,0 @@
|
||||
#include <Functions/IFunction.h>
|
||||
#include <Functions/FunctionFactory.h>
|
||||
#include <Functions/FunctionHelpers.h>
|
||||
#include <DataTypes/DataTypeString.h>
|
||||
#include <Columns/ColumnString.h>
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Common/CurrentThread.h>
|
||||
#include "Disks/DiskType.h"
|
||||
#include "Interpreters/Context_fwd.h"
|
||||
#include <Core/Field.h>
|
||||
#include <Poco/Net/NameValueCollection.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
|
||||
extern const int ILLEGAL_COLUMN;
|
||||
extern const int FUNCTION_NOT_ALLOWED;
|
||||
extern const int BAD_ARGUMENTS;
|
||||
}
|
||||
|
||||
namespace
|
||||
{
|
||||
|
||||
/** Get the value of parameter in http headers.
|
||||
* If there no such parameter or the method of request is not
|
||||
* http, the function will throw an exception.
|
||||
*/
|
||||
class FunctionGetClientHTTPHeader : public IFunction, WithContext
|
||||
{
|
||||
private:
|
||||
|
||||
public:
|
||||
explicit FunctionGetClientHTTPHeader(ContextPtr context_): WithContext(context_) {}
|
||||
|
||||
static constexpr auto name = "getClientHTTPHeader";
|
||||
|
||||
static FunctionPtr create(ContextPtr context_)
|
||||
{
|
||||
return std::make_shared<FunctionGetClientHTTPHeader>(context_);
|
||||
}
|
||||
|
||||
bool useDefaultImplementationForConstants() const override { return true; }
|
||||
|
||||
String getName() const override { return name; }
|
||||
|
||||
bool isDeterministic() const override { return false; }
|
||||
|
||||
bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return true; }
|
||||
|
||||
|
||||
size_t getNumberOfArguments() const override
|
||||
{
|
||||
return 1;
|
||||
}
|
||||
|
||||
DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
|
||||
{
|
||||
if (!getContext()->allowGetHTTPHeaderFunction())
|
||||
throw Exception(ErrorCodes::FUNCTION_NOT_ALLOWED, "The function {} is not enabled, you can set allow_get_client_http_header in config file.", getName());
|
||||
|
||||
if (!isString(arguments[0]))
|
||||
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "The argument of function {} must have String type", getName());
|
||||
return std::make_shared<DataTypeString>();
|
||||
}
|
||||
|
||||
ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count) const override
|
||||
{
|
||||
const auto & client_info = getContext()->getClientInfo();
|
||||
const auto & method = client_info.http_method;
|
||||
const auto & headers = client_info.headers;
|
||||
const IColumn * arg_column = arguments[0].column.get();
|
||||
const ColumnString * arg_string = checkAndGetColumn<ColumnString>(arg_column);
|
||||
|
||||
if (!arg_string)
|
||||
throw Exception(ErrorCodes::ILLEGAL_COLUMN, "The argument of function {} must be constant String", getName());
|
||||
|
||||
if (method != ClientInfo::HTTPMethod::GET && method != ClientInfo::HTTPMethod::POST)
|
||||
return result_type->createColumnConstWithDefaultValue(input_rows_count);
|
||||
|
||||
auto result_column = ColumnString::create();
|
||||
|
||||
const String default_value;
|
||||
const std::unordered_set<String> & forbidden_header_list = getContext()->getClientHTTPHeaderForbiddenHeaders();
|
||||
|
||||
for (size_t row = 0; row < input_rows_count; ++row)
|
||||
{
|
||||
auto header_name = arg_string->getDataAt(row).toString();
|
||||
|
||||
if (!headers.has(header_name))
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "{} is not in HTTP request headers.", header_name);
|
||||
else
|
||||
{
|
||||
auto it = forbidden_header_list.find(header_name);
|
||||
if (it != forbidden_header_list.end())
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "The header {} is in get_client_http_header_forbidden_headers, you can config it in config file.", header_name);
|
||||
|
||||
const String & value = headers[header_name];
|
||||
result_column->insertData(value.data(), value.size());
|
||||
}
|
||||
}
|
||||
|
||||
return result_column;
|
||||
}
|
||||
};
|
||||
|
||||
}
|
||||
|
||||
REGISTER_FUNCTION(GetHttpHeader)
|
||||
{
|
||||
factory.registerFunction<FunctionGetClientHTTPHeader>();
|
||||
}
|
||||
|
||||
}
|
@ -8,7 +8,7 @@ namespace DB
|
||||
struct IReadableWriteBuffer
|
||||
{
|
||||
/// At the first time returns getReadBufferImpl(). Next calls return nullptr.
|
||||
inline std::shared_ptr<ReadBuffer> tryGetReadBuffer()
|
||||
inline std::unique_ptr<ReadBuffer> tryGetReadBuffer()
|
||||
{
|
||||
if (!can_reread)
|
||||
return nullptr;
|
||||
@ -24,7 +24,7 @@ protected:
|
||||
/// Creates read buffer from current write buffer.
|
||||
/// Returned buffer points to the first byte of original buffer.
|
||||
/// Original stream becomes invalid.
|
||||
virtual std::shared_ptr<ReadBuffer> getReadBufferImpl() = 0;
|
||||
virtual std::unique_ptr<ReadBuffer> getReadBufferImpl() = 0;
|
||||
|
||||
bool can_reread = true;
|
||||
};
|
||||
|
@ -124,11 +124,11 @@ void MemoryWriteBuffer::addChunk()
|
||||
}
|
||||
|
||||
|
||||
std::shared_ptr<ReadBuffer> MemoryWriteBuffer::getReadBufferImpl()
|
||||
std::unique_ptr<ReadBuffer> MemoryWriteBuffer::getReadBufferImpl()
|
||||
{
|
||||
finalize();
|
||||
|
||||
auto res = std::make_shared<ReadBufferFromMemoryWriteBuffer>(std::move(*this));
|
||||
auto res = std::make_unique<ReadBufferFromMemoryWriteBuffer>(std::move(*this));
|
||||
|
||||
/// invalidate members
|
||||
chunk_list.clear();
|
||||
|
@ -38,7 +38,7 @@ protected:
|
||||
|
||||
void finalizeImpl() override { /* no op */ }
|
||||
|
||||
std::shared_ptr<ReadBuffer> getReadBufferImpl() override;
|
||||
std::unique_ptr<ReadBuffer> getReadBufferImpl() override;
|
||||
|
||||
const size_t max_total_size;
|
||||
const size_t initial_chunk_size;
|
||||
|
@ -125,11 +125,12 @@ std::unique_ptr<Client> Client::create(
|
||||
const std::shared_ptr<Aws::Auth::AWSCredentialsProvider> & credentials_provider,
|
||||
const PocoHTTPClientConfiguration & client_configuration,
|
||||
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy sign_payloads,
|
||||
bool use_virtual_addressing)
|
||||
bool use_virtual_addressing,
|
||||
bool disable_checksum)
|
||||
{
|
||||
verifyClientConfiguration(client_configuration);
|
||||
return std::unique_ptr<Client>(
|
||||
new Client(max_redirects_, std::move(sse_kms_config_), credentials_provider, client_configuration, sign_payloads, use_virtual_addressing));
|
||||
new Client(max_redirects_, std::move(sse_kms_config_), credentials_provider, client_configuration, sign_payloads, use_virtual_addressing, disable_checksum));
|
||||
}
|
||||
|
||||
std::unique_ptr<Client> Client::clone() const
|
||||
@ -159,12 +160,14 @@ Client::Client(
|
||||
const std::shared_ptr<Aws::Auth::AWSCredentialsProvider> & credentials_provider_,
|
||||
const PocoHTTPClientConfiguration & client_configuration_,
|
||||
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy sign_payloads_,
|
||||
bool use_virtual_addressing_)
|
||||
bool use_virtual_addressing_,
|
||||
bool disable_checksum_)
|
||||
: Aws::S3::S3Client(credentials_provider_, client_configuration_, sign_payloads_, use_virtual_addressing_)
|
||||
, credentials_provider(credentials_provider_)
|
||||
, client_configuration(client_configuration_)
|
||||
, sign_payloads(sign_payloads_)
|
||||
, use_virtual_addressing(use_virtual_addressing_)
|
||||
, disable_checksum(disable_checksum_)
|
||||
, max_redirects(max_redirects_)
|
||||
, sse_kms_config(std::move(sse_kms_config_))
|
||||
, log(&Poco::Logger::get("S3Client"))
|
||||
@ -210,6 +213,7 @@ Client::Client(
|
||||
, client_configuration(client_configuration_)
|
||||
, sign_payloads(other.sign_payloads)
|
||||
, use_virtual_addressing(other.use_virtual_addressing)
|
||||
, disable_checksum(other.disable_checksum)
|
||||
, explicit_region(other.explicit_region)
|
||||
, detect_region(other.detect_region)
|
||||
, provider_type(other.provider_type)
|
||||
@ -511,6 +515,8 @@ Client::doRequest(RequestType & request, RequestFn request_fn) const
|
||||
addAdditionalAMZHeadersToCanonicalHeadersList(request, client_configuration.extra_headers);
|
||||
const auto & bucket = request.GetBucket();
|
||||
request.setApiMode(api_mode);
|
||||
if (disable_checksum)
|
||||
request.disableChecksum();
|
||||
|
||||
if (auto region = getRegionForBucket(bucket); !region.empty())
|
||||
{
|
||||
@ -844,6 +850,7 @@ ClientFactory & ClientFactory::instance()
|
||||
std::unique_ptr<S3::Client> ClientFactory::create( // NOLINT
|
||||
const PocoHTTPClientConfiguration & cfg_,
|
||||
bool is_virtual_hosted_style,
|
||||
bool disable_checksum,
|
||||
const String & access_key_id,
|
||||
const String & secret_access_key,
|
||||
const String & server_side_encryption_customer_key_base64,
|
||||
@ -888,7 +895,8 @@ std::unique_ptr<S3::Client> ClientFactory::create( // NOLINT
|
||||
credentials_provider,
|
||||
client_configuration, // Client configuration.
|
||||
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never,
|
||||
is_virtual_hosted_style || client_configuration.endpointOverride.empty() /// Use virtual addressing if endpoint is not specified.
|
||||
is_virtual_hosted_style || client_configuration.endpointOverride.empty(), /// Use virtual addressing if endpoint is not specified.
|
||||
disable_checksum
|
||||
);
|
||||
}
|
||||
|
||||
|
@ -116,7 +116,8 @@ public:
|
||||
const std::shared_ptr<Aws::Auth::AWSCredentialsProvider> & credentials_provider,
|
||||
const PocoHTTPClientConfiguration & client_configuration,
|
||||
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy sign_payloads,
|
||||
bool use_virtual_addressing);
|
||||
bool use_virtual_addressing,
|
||||
bool disable_checksum);
|
||||
|
||||
std::unique_ptr<Client> clone() const;
|
||||
|
||||
@ -211,7 +212,8 @@ private:
|
||||
const std::shared_ptr<Aws::Auth::AWSCredentialsProvider> & credentials_provider_,
|
||||
const PocoHTTPClientConfiguration & client_configuration,
|
||||
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy sign_payloads,
|
||||
bool use_virtual_addressing);
|
||||
bool use_virtual_addressing,
|
||||
bool disable_checksum_);
|
||||
|
||||
Client(
|
||||
const Client & other, const PocoHTTPClientConfiguration & client_configuration);
|
||||
@ -257,6 +259,7 @@ private:
|
||||
PocoHTTPClientConfiguration client_configuration;
|
||||
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy sign_payloads;
|
||||
bool use_virtual_addressing;
|
||||
bool disable_checksum;
|
||||
|
||||
std::string explicit_region;
|
||||
mutable bool detect_region = true;
|
||||
@ -287,6 +290,7 @@ public:
|
||||
std::unique_ptr<S3::Client> create(
|
||||
const PocoHTTPClientConfiguration & cfg,
|
||||
bool is_virtual_hosted_style,
|
||||
bool disable_checksum,
|
||||
const String & access_key_id,
|
||||
const String & secret_access_key,
|
||||
const String & server_side_encryption_customer_key_base64,
|
||||
|
@ -19,6 +19,9 @@ namespace DB::S3
|
||||
{
|
||||
|
||||
inline static constexpr uint64_t DEFAULT_EXPIRATION_WINDOW_SECONDS = 120;
|
||||
inline static constexpr uint64_t DEFAULT_CONNECT_TIMEOUT_MS = 1000;
|
||||
inline static constexpr uint64_t DEFAULT_REQUEST_TIMEOUT_MS = 30000;
|
||||
inline static constexpr uint64_t DEFAULT_MAX_CONNECTIONS = 100;
|
||||
|
||||
/// In GCP metadata service can be accessed via DNS regardless of IPv4 or IPv6.
|
||||
static inline constexpr char GCP_METADATA_SERVICE_ENDPOINT[] = "http://metadata.google.internal";
|
||||
|
@ -47,6 +47,17 @@ public:
|
||||
return params;
|
||||
}
|
||||
|
||||
Aws::String GetChecksumAlgorithmName() const override
|
||||
{
|
||||
/// Return empty string is enough to disable checksums (see
|
||||
/// AWSClient::AddChecksumToRequest [1] for more details).
|
||||
///
|
||||
/// [1]: https://github.com/aws/aws-sdk-cpp/blob/b0ee1c0d336dbb371c34358b68fba6c56aae2c92/src/aws-cpp-sdk-core/source/client/AWSClient.cpp#L783-L839
|
||||
if (!checksum)
|
||||
return "";
|
||||
return BaseRequest::GetChecksumAlgorithmName();
|
||||
}
|
||||
|
||||
void overrideRegion(std::string region) const
|
||||
{
|
||||
region_override = std::move(region);
|
||||
@ -67,10 +78,17 @@ public:
|
||||
api_mode = api_mode_;
|
||||
}
|
||||
|
||||
/// Disable checksum to avoid extra read of the input stream
|
||||
void disableChecksum() const
|
||||
{
|
||||
checksum = false;
|
||||
}
|
||||
|
||||
protected:
|
||||
mutable std::string region_override;
|
||||
mutable std::optional<S3::URI> uri_override;
|
||||
mutable ApiMode api_mode{ApiMode::AWS};
|
||||
mutable bool checksum = true;
|
||||
};
|
||||
|
||||
class CopyObjectRequest : public ExtendedRequest<Model::CopyObjectRequest>
|
||||
|
@ -107,6 +107,7 @@ using RequestFn = std::function<void(std::shared_ptr<const DB::S3::Client>, cons
|
||||
|
||||
void testServerSideEncryption(
|
||||
RequestFn do_request,
|
||||
bool disable_checksum,
|
||||
String server_side_encryption_customer_key_base64,
|
||||
DB::S3::ServerSideEncryptionKMSConfig sse_kms_config,
|
||||
String expected_headers)
|
||||
@ -142,6 +143,7 @@ void testServerSideEncryption(
|
||||
std::shared_ptr<DB::S3::Client> client = DB::S3::ClientFactory::instance().create(
|
||||
client_configuration,
|
||||
uri.is_virtual_hosted_style,
|
||||
disable_checksum,
|
||||
access_key_id,
|
||||
secret_access_key,
|
||||
server_side_encryption_customer_key_base64,
|
||||
@ -166,6 +168,7 @@ TEST(IOTestAwsS3Client, AppendExtraSSECHeadersRead)
|
||||
/// See https://github.com/ClickHouse/ClickHouse/pull/19748
|
||||
testServerSideEncryption(
|
||||
doReadRequest,
|
||||
/* disable_checksum= */ false,
|
||||
"Kv/gDqdWVGIT4iDqg+btQvV3lc1idlm4WI+MMOyHOAw=",
|
||||
{},
|
||||
"authorization: ... SignedHeaders="
|
||||
@ -190,6 +193,7 @@ TEST(IOTestAwsS3Client, AppendExtraSSECHeadersWrite)
|
||||
/// See https://github.com/ClickHouse/ClickHouse/pull/19748
|
||||
testServerSideEncryption(
|
||||
doWriteRequest,
|
||||
/* disable_checksum= */ false,
|
||||
"Kv/gDqdWVGIT4iDqg+btQvV3lc1idlm4WI+MMOyHOAw=",
|
||||
{},
|
||||
"authorization: ... SignedHeaders="
|
||||
@ -209,6 +213,30 @@ TEST(IOTestAwsS3Client, AppendExtraSSECHeadersWrite)
|
||||
"x-amz-server-side-encryption-customer-key-md5: fMNuOw6OLU5GG2vc6RTA+g==\n");
|
||||
}
|
||||
|
||||
TEST(IOTestAwsS3Client, AppendExtraSSECHeadersWriteDisableChecksum)
|
||||
{
|
||||
/// See https://github.com/ClickHouse/ClickHouse/pull/19748
|
||||
testServerSideEncryption(
|
||||
doWriteRequest,
|
||||
/* disable_checksum= */ true,
|
||||
"Kv/gDqdWVGIT4iDqg+btQvV3lc1idlm4WI+MMOyHOAw=",
|
||||
{},
|
||||
"authorization: ... SignedHeaders="
|
||||
"amz-sdk-invocation-id;"
|
||||
"amz-sdk-request;"
|
||||
"content-length;"
|
||||
"content-type;"
|
||||
"host;"
|
||||
"x-amz-content-sha256;"
|
||||
"x-amz-date;"
|
||||
"x-amz-server-side-encryption-customer-algorithm;"
|
||||
"x-amz-server-side-encryption-customer-key;"
|
||||
"x-amz-server-side-encryption-customer-key-md5, ...\n"
|
||||
"x-amz-server-side-encryption-customer-algorithm: AES256\n"
|
||||
"x-amz-server-side-encryption-customer-key: Kv/gDqdWVGIT4iDqg+btQvV3lc1idlm4WI+MMOyHOAw=\n"
|
||||
"x-amz-server-side-encryption-customer-key-md5: fMNuOw6OLU5GG2vc6RTA+g==\n");
|
||||
}
|
||||
|
||||
TEST(IOTestAwsS3Client, AppendExtraSSEKMSHeadersRead)
|
||||
{
|
||||
DB::S3::ServerSideEncryptionKMSConfig sse_kms_config;
|
||||
@ -218,6 +246,7 @@ TEST(IOTestAwsS3Client, AppendExtraSSEKMSHeadersRead)
|
||||
// KMS headers shouldn't be set on a read request
|
||||
testServerSideEncryption(
|
||||
doReadRequest,
|
||||
/* disable_checksum= */ false,
|
||||
"",
|
||||
sse_kms_config,
|
||||
"authorization: ... SignedHeaders="
|
||||
@ -239,6 +268,7 @@ TEST(IOTestAwsS3Client, AppendExtraSSEKMSHeadersWrite)
|
||||
sse_kms_config.bucket_key_enabled = true;
|
||||
testServerSideEncryption(
|
||||
doWriteRequest,
|
||||
/* disable_checksum= */ false,
|
||||
"",
|
||||
sse_kms_config,
|
||||
"authorization: ... SignedHeaders="
|
||||
|
@ -210,7 +210,8 @@ struct Client : DB::S3::Client
|
||||
std::make_shared<Aws::Auth::SimpleAWSCredentialsProvider>("", ""),
|
||||
GetClientConfiguration(),
|
||||
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never,
|
||||
/* use_virtual_addressing = */ true)
|
||||
/* use_virtual_addressing = */ true,
|
||||
/* disable_checksum_= */ false)
|
||||
, store(mock_s3_store)
|
||||
{ }
|
||||
|
||||
|
@ -1,5 +1,6 @@
|
||||
#include <Interpreters/AsynchronousInsertLog.h>
|
||||
|
||||
#include <base/getFQDNOrHostName.h>
|
||||
#include <DataTypes/DataTypeDate.h>
|
||||
#include <DataTypes/DataTypeDateTime.h>
|
||||
#include <DataTypes/DataTypeDateTime64.h>
|
||||
@ -33,6 +34,7 @@ NamesAndTypesList AsynchronousInsertLogElement::getNamesAndTypes()
|
||||
|
||||
return
|
||||
{
|
||||
{"hostname", std::make_shared<DataTypeLowCardinality>(std::make_shared<DataTypeString>())},
|
||||
{"event_date", std::make_shared<DataTypeDate>()},
|
||||
{"event_time", std::make_shared<DataTypeDateTime>()},
|
||||
{"event_time_microseconds", std::make_shared<DataTypeDateTime64>(6)},
|
||||
@ -58,6 +60,7 @@ void AsynchronousInsertLogElement::appendToBlock(MutableColumns & columns) const
|
||||
{
|
||||
size_t i = 0;
|
||||
|
||||
columns[i++]->insert(getFQDNOrHostName());
|
||||
auto event_date = DateLUT::instance().toDayNum(event_time).toUnderType();
|
||||
columns[i++]->insert(event_date);
|
||||
columns[i++]->insert(event_time);
|
||||
|
@ -1,3 +1,4 @@
|
||||
#include <base/getFQDNOrHostName.h>
|
||||
#include <DataTypes/DataTypeDate.h>
|
||||
#include <DataTypes/DataTypeDateTime.h>
|
||||
#include <DataTypes/DataTypeDateTime64.h>
|
||||
@ -15,6 +16,7 @@ NamesAndTypesList AsynchronousMetricLogElement::getNamesAndTypes()
|
||||
{
|
||||
return
|
||||
{
|
||||
{"hostname", std::make_shared<DataTypeLowCardinality>(std::make_shared<DataTypeString>())},
|
||||
{"event_date", std::make_shared<DataTypeDate>()},
|
||||
{"event_time", std::make_shared<DataTypeDateTime>()},
|
||||
{"metric", std::make_shared<DataTypeLowCardinality>(std::make_shared<DataTypeString>())},
|
||||
@ -26,6 +28,7 @@ void AsynchronousMetricLogElement::appendToBlock(MutableColumns & columns) const
|
||||
{
|
||||
size_t column_idx = 0;
|
||||
|
||||
columns[column_idx++]->insert(getFQDNOrHostName());
|
||||
columns[column_idx++]->insert(event_date);
|
||||
columns[column_idx++]->insert(event_time);
|
||||
columns[column_idx++]->insert(metric_name);
|
||||
|
@ -34,7 +34,8 @@ struct AsynchronousMetricLogElement
|
||||
/// Otherwise the list will be constructed from LogElement::getNamesAndTypes and LogElement::getNamesAndAliases.
|
||||
static const char * getCustomColumnList()
|
||||
{
|
||||
return "event_date Date CODEC(Delta(2), ZSTD(1)), "
|
||||
return "hostname LowCardinality(String) CODEC(ZSTD(1)), "
|
||||
"event_date Date CODEC(Delta(2), ZSTD(1)), "
|
||||
"event_time DateTime CODEC(Delta(4), ZSTD(1)), "
|
||||
"metric LowCardinality(String) CODEC(ZSTD(1)), "
|
||||
"value Float64 CODEC(ZSTD(3))";
|
||||
|
@ -1,8 +1,10 @@
|
||||
#include <Interpreters/BackupLog.h>
|
||||
|
||||
#include <base/getFQDNOrHostName.h>
|
||||
#include <DataTypes/DataTypeDate.h>
|
||||
#include <DataTypes/DataTypeDateTime64.h>
|
||||
#include <DataTypes/DataTypeEnum.h>
|
||||
#include <DataTypes/DataTypeLowCardinality.h>
|
||||
#include <DataTypes/DataTypeString.h>
|
||||
#include <DataTypes/DataTypesNumber.h>
|
||||
|
||||
@ -20,6 +22,7 @@ NamesAndTypesList BackupLogElement::getNamesAndTypes()
|
||||
{
|
||||
return
|
||||
{
|
||||
{"hostname", std::make_shared<DataTypeLowCardinality>(std::make_shared<DataTypeString>())},
|
||||
{"event_date", std::make_shared<DataTypeDate>()},
|
||||
{"event_time_microseconds", std::make_shared<DataTypeDateTime64>(6)},
|
||||
{"id", std::make_shared<DataTypeString>()},
|
||||
@ -41,6 +44,7 @@ NamesAndTypesList BackupLogElement::getNamesAndTypes()
|
||||
void BackupLogElement::appendToBlock(MutableColumns & columns) const
|
||||
{
|
||||
size_t i = 0;
|
||||
columns[i++]->insert(getFQDNOrHostName());
|
||||
columns[i++]->insert(DateLUT::instance().toDayNum(std::chrono::system_clock::to_time_t(event_time)).toUnderType());
|
||||
columns[i++]->insert(event_time_usec);
|
||||
columns[i++]->insert(info.id);
|
||||
|
@ -60,7 +60,7 @@ FileCache::FileCache(const std::string & cache_name, const FileCacheSettings & s
|
||||
, background_download_threads(settings.background_download_threads)
|
||||
, metadata_download_threads(settings.load_metadata_threads)
|
||||
, log(&Poco::Logger::get("FileCache(" + cache_name + ")"))
|
||||
, metadata(settings.base_path)
|
||||
, metadata(settings.base_path, settings.background_download_queue_size_limit)
|
||||
{
|
||||
main_priority = std::make_unique<LRUFileCachePriority>(settings.max_size, settings.max_elements);
|
||||
|
||||
@ -1269,39 +1269,39 @@ void FileCache::deactivateBackgroundOperations()
|
||||
cleanup_thread->join();
|
||||
}
|
||||
|
||||
FileSegments FileCache::getSnapshot()
|
||||
std::vector<FileSegment::Info> FileCache::getFileSegmentInfos()
|
||||
{
|
||||
assertInitialized();
|
||||
#ifndef NDEBUG
|
||||
assertCacheCorrectness();
|
||||
#endif
|
||||
|
||||
FileSegments file_segments;
|
||||
std::vector<FileSegment::Info> file_segments;
|
||||
metadata.iterate([&](const LockedKey & locked_key)
|
||||
{
|
||||
for (const auto & [_, file_segment_metadata] : locked_key)
|
||||
file_segments.push_back(FileSegment::getSnapshot(file_segment_metadata->file_segment));
|
||||
file_segments.push_back(FileSegment::getInfo(file_segment_metadata->file_segment, *this));
|
||||
});
|
||||
return file_segments;
|
||||
}
|
||||
|
||||
FileSegments FileCache::getSnapshot(const Key & key)
|
||||
std::vector<FileSegment::Info> FileCache::getFileSegmentInfos(const Key & key)
|
||||
{
|
||||
FileSegments file_segments;
|
||||
std::vector<FileSegment::Info> file_segments;
|
||||
auto locked_key = metadata.lockKeyMetadata(key, CacheMetadata::KeyNotFoundPolicy::THROW_LOGICAL);
|
||||
for (const auto & [_, file_segment_metadata] : *locked_key)
|
||||
file_segments.push_back(FileSegment::getSnapshot(file_segment_metadata->file_segment));
|
||||
file_segments.push_back(FileSegment::getInfo(file_segment_metadata->file_segment, *this));
|
||||
return file_segments;
|
||||
}
|
||||
|
||||
FileSegments FileCache::dumpQueue()
|
||||
std::vector<FileSegment::Info> FileCache::dumpQueue()
|
||||
{
|
||||
assertInitialized();
|
||||
|
||||
FileSegments file_segments;
|
||||
std::vector<FileSegment::Info> file_segments;
|
||||
main_priority->iterate([&](LockedKey &, const FileSegmentMetadataPtr & segment_metadata)
|
||||
{
|
||||
file_segments.push_back(FileSegment::getSnapshot(segment_metadata->file_segment));
|
||||
file_segments.push_back(FileSegment::getInfo(segment_metadata->file_segment, *this));
|
||||
return PriorityIterationResult::CONTINUE;
|
||||
}, lockCache());
|
||||
|
||||
@ -1381,12 +1381,12 @@ FileCache::QueryContextHolderPtr FileCache::getQueryContextHolder(
|
||||
return std::make_unique<QueryContextHolder>(query_id, this, std::move(context));
|
||||
}
|
||||
|
||||
FileSegments FileCache::sync()
|
||||
std::vector<FileSegment::Info> FileCache::sync()
|
||||
{
|
||||
FileSegments file_segments;
|
||||
std::vector<FileSegment::Info> file_segments;
|
||||
metadata.iterate([&](LockedKey & locked_key)
|
||||
{
|
||||
auto broken = locked_key.sync();
|
||||
auto broken = locked_key.sync(*this);
|
||||
file_segments.insert(file_segments.end(), broken.begin(), broken.end());
|
||||
});
|
||||
return file_segments;
|
||||
|
@ -126,11 +126,11 @@ public:
|
||||
|
||||
bool tryReserve(FileSegment & file_segment, size_t size, FileCacheReserveStat & stat);
|
||||
|
||||
FileSegments getSnapshot();
|
||||
std::vector<FileSegment::Info> getFileSegmentInfos();
|
||||
|
||||
FileSegments getSnapshot(const Key & key);
|
||||
std::vector<FileSegment::Info> getFileSegmentInfos(const Key & key);
|
||||
|
||||
FileSegments dumpQueue();
|
||||
std::vector<FileSegment::Info> dumpQueue();
|
||||
|
||||
void deactivateBackgroundOperations();
|
||||
|
||||
@ -152,7 +152,7 @@ public:
|
||||
|
||||
CacheGuard::Lock lockCache() const;
|
||||
|
||||
FileSegments sync();
|
||||
std::vector<FileSegment::Info> sync();
|
||||
|
||||
private:
|
||||
using KeyAndOffset = FileCacheKeyAndOffset;
|
||||
|
@ -37,6 +37,21 @@ FileCachePtr FileCacheFactory::getOrCreate(
|
||||
return it->second->cache;
|
||||
}
|
||||
|
||||
FileCachePtr FileCacheFactory::create(const std::string & cache_name, const FileCacheSettings & file_cache_settings)
|
||||
{
|
||||
std::lock_guard lock(mutex);
|
||||
|
||||
auto it = caches_by_name.find(cache_name);
|
||||
if (it != caches_by_name.end())
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Cache with name {} already exists", cache_name);
|
||||
|
||||
auto cache = std::make_shared<FileCache>(cache_name, file_cache_settings);
|
||||
it = caches_by_name.emplace(
|
||||
cache_name, std::make_unique<FileCacheData>(cache, file_cache_settings)).first;
|
||||
|
||||
return it->second->cache;
|
||||
}
|
||||
|
||||
FileCacheFactory::FileCacheData FileCacheFactory::getByName(const std::string & cache_name)
|
||||
{
|
||||
std::lock_guard lock(mutex);
|
||||
|
@ -32,6 +32,8 @@ public:
|
||||
|
||||
FileCachePtr getOrCreate(const std::string & cache_name, const FileCacheSettings & file_cache_settings);
|
||||
|
||||
FileCachePtr create(const std::string & cache_name, const FileCacheSettings & file_cache_settings);
|
||||
|
||||
CacheByName getAll();
|
||||
|
||||
FileCacheData getByName(const std::string & cache_name);
|
||||
|
@ -56,6 +56,9 @@ void FileCacheSettings::loadImpl(FuncHas has, FuncGetUInt get_uint, FuncGetStrin
|
||||
if (has("background_download_threads"))
|
||||
background_download_threads = get_uint("background_download_threads");
|
||||
|
||||
if (has("background_download_queue_size_limit"))
|
||||
background_download_queue_size_limit = get_uint("background_download_queue_size_limit");
|
||||
|
||||
if (has("load_metadata_threads"))
|
||||
load_metadata_threads = get_uint("load_metadata_threads");
|
||||
|
||||
|
@ -28,6 +28,7 @@ struct FileCacheSettings
|
||||
|
||||
size_t boundary_alignment = FILECACHE_DEFAULT_FILE_SEGMENT_ALIGNMENT;
|
||||
size_t background_download_threads = FILECACHE_DEFAULT_BACKGROUND_DOWNLOAD_THREADS;
|
||||
size_t background_download_queue_size_limit = FILECACHE_DEFAULT_BACKGROUND_DOWNLOAD_QUEUE_SIZE_LIMIT;
|
||||
|
||||
size_t load_metadata_threads = FILECACHE_DEFAULT_LOAD_METADATA_THREADS;
|
||||
|
||||
|
@ -6,7 +6,8 @@ namespace DB
|
||||
|
||||
static constexpr int FILECACHE_DEFAULT_MAX_FILE_SEGMENT_SIZE = 32 * 1024 * 1024; /// 32Mi
|
||||
static constexpr int FILECACHE_DEFAULT_FILE_SEGMENT_ALIGNMENT = 4 * 1024 * 1024; /// 4Mi
|
||||
static constexpr int FILECACHE_DEFAULT_BACKGROUND_DOWNLOAD_THREADS = 2;
|
||||
static constexpr int FILECACHE_DEFAULT_BACKGROUND_DOWNLOAD_THREADS = 5;
|
||||
static constexpr int FILECACHE_DEFAULT_BACKGROUND_DOWNLOAD_QUEUE_SIZE_LIMIT = 5000;
|
||||
static constexpr int FILECACHE_DEFAULT_LOAD_METADATA_THREADS = 1;
|
||||
static constexpr int FILECACHE_DEFAULT_MAX_ELEMENTS = 10000000;
|
||||
static constexpr int FILECACHE_DEFAULT_HITS_THRESHOLD = 0;
|
||||
|
@ -668,15 +668,13 @@ void FileSegment::complete()
|
||||
|
||||
if (is_last_holder)
|
||||
{
|
||||
bool added_to_download_queue = false;
|
||||
if (background_download_enabled && remote_file_reader)
|
||||
{
|
||||
LOG_TEST(
|
||||
log, "Submitting file segment for background download "
|
||||
"(having {}/{})", downloaded_size, range().size());
|
||||
|
||||
locked_key->addToDownloadQueue(offset(), segment_lock); /// Finish download in background.
|
||||
added_to_download_queue = locked_key->addToDownloadQueue(offset(), segment_lock); /// Finish download in background.
|
||||
}
|
||||
else
|
||||
|
||||
if (!added_to_download_queue)
|
||||
{
|
||||
locked_key->shrinkFileSegmentToDownloadedSize(offset(), segment_lock);
|
||||
setDetachedState(segment_lock); /// See comment below.
|
||||
@ -835,23 +833,23 @@ void FileSegment::assertNotDetachedUnlocked(const FileSegmentGuard::Lock & lock)
|
||||
}
|
||||
}
|
||||
|
||||
FileSegmentPtr FileSegment::getSnapshot(const FileSegmentPtr & file_segment)
|
||||
FileSegment::Info FileSegment::getInfo(const FileSegmentPtr & file_segment, FileCache & cache)
|
||||
{
|
||||
auto lock = file_segment->lockFileSegment();
|
||||
|
||||
auto snapshot = std::make_shared<FileSegment>(
|
||||
file_segment->key(),
|
||||
file_segment->offset(),
|
||||
file_segment->range().size(),
|
||||
State::DETACHED,
|
||||
CreateFileSegmentSettings(file_segment->getKind(), file_segment->is_unbound));
|
||||
|
||||
snapshot->hits_count = file_segment->getHitsCount();
|
||||
snapshot->downloaded_size = file_segment->getDownloadedSize();
|
||||
snapshot->download_state = file_segment->download_state.load();
|
||||
snapshot->ref_count = file_segment.use_count();
|
||||
|
||||
return snapshot;
|
||||
return Info{
|
||||
.key = file_segment->key(),
|
||||
.offset = file_segment->offset(),
|
||||
.path = cache.getPathInLocalCache(file_segment->key(), file_segment->offset(), file_segment->segment_kind),
|
||||
.range_left = file_segment->range().left,
|
||||
.range_right = file_segment->range().right,
|
||||
.kind = file_segment->segment_kind,
|
||||
.state = file_segment->download_state,
|
||||
.size = file_segment->range().size(),
|
||||
.downloaded_size = file_segment->downloaded_size,
|
||||
.cache_hits = file_segment->hits_count,
|
||||
.references = static_cast<uint64_t>(file_segment.use_count()),
|
||||
.is_unbound = file_segment->is_unbound,
|
||||
};
|
||||
}
|
||||
|
||||
bool FileSegment::isDetached() const
|
||||
|
@ -205,7 +205,22 @@ public:
|
||||
/// exception.
|
||||
void detach(const FileSegmentGuard::Lock &, const LockedKey &);
|
||||
|
||||
static FileSegmentPtr getSnapshot(const FileSegmentPtr & file_segment);
|
||||
struct Info
|
||||
{
|
||||
FileSegment::Key key;
|
||||
size_t offset;
|
||||
std::string path;
|
||||
uint64_t range_left;
|
||||
uint64_t range_right;
|
||||
FileSegmentKind kind;
|
||||
State state;
|
||||
uint64_t size;
|
||||
uint64_t downloaded_size;
|
||||
uint64_t cache_hits;
|
||||
uint64_t references;
|
||||
bool is_unbound;
|
||||
};
|
||||
static Info getInfo(const FileSegmentPtr & file_segment, FileCache & cache);
|
||||
|
||||
bool isDetached() const;
|
||||
|
||||
@ -341,8 +356,10 @@ struct FileSegmentsHolder : private boost::noncopyable
|
||||
void popFront() { completeAndPopFrontImpl(); }
|
||||
|
||||
FileSegment & front() { return *file_segments.front(); }
|
||||
const FileSegment & front() const { return *file_segments.front(); }
|
||||
|
||||
FileSegment & back() { return *file_segments.back(); }
|
||||
const FileSegment & back() const { return *file_segments.back(); }
|
||||
|
||||
FileSegment & add(FileSegmentPtr && file_segment);
|
||||
|
||||
|
@ -134,10 +134,10 @@ std::string KeyMetadata::getFileSegmentPath(const FileSegment & file_segment) co
|
||||
/ CacheMetadata::getFileNameForFileSegment(file_segment.offset(), file_segment.getKind());
|
||||
}
|
||||
|
||||
CacheMetadata::CacheMetadata(const std::string & path_)
|
||||
CacheMetadata::CacheMetadata(const std::string & path_, size_t background_download_queue_size_limit_)
|
||||
: path(path_)
|
||||
, cleanup_queue(std::make_shared<CleanupQueue>())
|
||||
, download_queue(std::make_shared<DownloadQueue>())
|
||||
, download_queue(std::make_shared<DownloadQueue>(background_download_queue_size_limit_))
|
||||
, log(&Poco::Logger::get("CacheMetadata"))
|
||||
{
|
||||
}
|
||||
@ -467,17 +467,20 @@ class DownloadQueue
|
||||
{
|
||||
friend struct CacheMetadata;
|
||||
public:
|
||||
void add(FileSegmentPtr file_segment)
|
||||
explicit DownloadQueue(size_t queue_size_limit_) : queue_size_limit(queue_size_limit_) {}
|
||||
|
||||
bool add(FileSegmentPtr file_segment)
|
||||
{
|
||||
{
|
||||
std::lock_guard lock(mutex);
|
||||
if (cancelled)
|
||||
return;
|
||||
if (cancelled || (queue_size_limit && queue.size() == queue_size_limit))
|
||||
return false;
|
||||
queue.push(DownloadInfo{file_segment->key(), file_segment->offset(), file_segment});
|
||||
}
|
||||
|
||||
CurrentMetrics::add(CurrentMetrics::FilesystemCacheDownloadQueueElements);
|
||||
cv.notify_one();
|
||||
return true;
|
||||
}
|
||||
|
||||
private:
|
||||
@ -490,6 +493,7 @@ private:
|
||||
cv.notify_all();
|
||||
}
|
||||
|
||||
const size_t queue_size_limit;
|
||||
std::mutex mutex;
|
||||
std::condition_variable cv;
|
||||
bool cancelled = false;
|
||||
@ -507,6 +511,7 @@ private:
|
||||
/// before we actually started background download.
|
||||
std::weak_ptr<FileSegment> file_segment;
|
||||
};
|
||||
|
||||
std::queue<DownloadInfo> queue;
|
||||
};
|
||||
|
||||
@ -847,12 +852,12 @@ void LockedKey::shrinkFileSegmentToDownloadedSize(
|
||||
chassert(file_segment->assertCorrectnessUnlocked(segment_lock));
|
||||
}
|
||||
|
||||
void LockedKey::addToDownloadQueue(size_t offset, const FileSegmentGuard::Lock &)
|
||||
bool LockedKey::addToDownloadQueue(size_t offset, const FileSegmentGuard::Lock &)
|
||||
{
|
||||
auto it = key_metadata->find(offset);
|
||||
if (it == key_metadata->end())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "There is not offset {}", offset);
|
||||
key_metadata->download_queue->add(it->second->file_segment);
|
||||
return key_metadata->download_queue->add(it->second->file_segment);
|
||||
}
|
||||
|
||||
std::optional<FileSegment::Range> LockedKey::hasIntersectingRange(const FileSegment::Range & range) const
|
||||
@ -922,9 +927,10 @@ std::string LockedKey::toString() const
|
||||
return result;
|
||||
}
|
||||
|
||||
FileSegments LockedKey::sync()
|
||||
|
||||
std::vector<FileSegment::Info> LockedKey::sync(FileCache & cache)
|
||||
{
|
||||
FileSegments broken;
|
||||
std::vector<FileSegment::Info> broken;
|
||||
for (auto it = key_metadata->begin(); it != key_metadata->end();)
|
||||
{
|
||||
if (it->second->evicting() || !it->second->releasable())
|
||||
@ -955,7 +961,7 @@ FileSegments LockedKey::sync()
|
||||
"File segment has DOWNLOADED state, but file does not exist ({})",
|
||||
file_segment->getInfoForLog());
|
||||
|
||||
broken.push_back(FileSegment::getSnapshot(file_segment));
|
||||
broken.push_back(FileSegment::getInfo(file_segment, cache));
|
||||
it = removeFileSegment(file_segment->offset(), file_segment->lock(), /* can_be_broken */true);
|
||||
continue;
|
||||
}
|
||||
@ -974,7 +980,7 @@ FileSegments LockedKey::sync()
|
||||
"File segment has unexpected size. Having {}, expected {} ({})",
|
||||
actual_size, expected_size, file_segment->getInfoForLog());
|
||||
|
||||
broken.push_back(FileSegment::getSnapshot(file_segment));
|
||||
broken.push_back(FileSegment::getInfo(file_segment, cache));
|
||||
it = removeFileSegment(file_segment->offset(), file_segment->lock(), /* can_be_broken */false);
|
||||
}
|
||||
return broken;
|
||||
|
Some files were not shown because too many files have changed in this diff