diff --git a/.github/ISSUE_TEMPLATE/10_question.md b/.github/ISSUE_TEMPLATE/10_question.md index 0992bf06217..08a05a844e0 100644 --- a/.github/ISSUE_TEMPLATE/10_question.md +++ b/.github/ISSUE_TEMPLATE/10_question.md @@ -10,3 +10,11 @@ assignees: '' > Make sure to check documentation https://clickhouse.com/docs/en/ first. If the question is concise and probably has a short answer, asking it in [community Slack](https://join.slack.com/t/clickhousedb/shared_invite/zt-1gh9ds7f4-PgDhJAaF8ad5RbWBAAjzFg) is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse > If you still prefer GitHub issues, remove all this text and ask your question here. + +**Company or project name** + +Put your company name or project description here + +**Question** + +Your question diff --git a/.github/ISSUE_TEMPLATE/20_feature-request.md b/.github/ISSUE_TEMPLATE/20_feature-request.md index f59dbc2c40f..cf5ac000a23 100644 --- a/.github/ISSUE_TEMPLATE/20_feature-request.md +++ b/.github/ISSUE_TEMPLATE/20_feature-request.md @@ -9,6 +9,10 @@ assignees: '' > (you don't have to strictly follow this form) +**Company or project name** + +> Put your company name or project description here + **Use case** > A clear and concise description of what is the intended usage scenario is. diff --git a/.github/ISSUE_TEMPLATE/30_unexpected-behaviour.md b/.github/ISSUE_TEMPLATE/30_unexpected-behaviour.md index 3630d95ba33..73c861886e6 100644 --- a/.github/ISSUE_TEMPLATE/30_unexpected-behaviour.md +++ b/.github/ISSUE_TEMPLATE/30_unexpected-behaviour.md @@ -9,6 +9,10 @@ assignees: '' (you don't have to strictly follow this form) +**Company or project name** + +Put your company name or project description here + **Describe the unexpected behaviour** A clear and concise description of what works not as it is supposed to. diff --git a/.github/ISSUE_TEMPLATE/35_incomplete_implementation.md b/.github/ISSUE_TEMPLATE/35_incomplete_implementation.md index 6a014ce3c29..45f752b53ef 100644 --- a/.github/ISSUE_TEMPLATE/35_incomplete_implementation.md +++ b/.github/ISSUE_TEMPLATE/35_incomplete_implementation.md @@ -9,6 +9,10 @@ assignees: '' (you don't have to strictly follow this form) +**Company or project name** + +Put your company name or project description here + **Describe the unexpected behaviour** A clear and concise description of what works not as it is supposed to. diff --git a/.github/ISSUE_TEMPLATE/45_usability-issue.md b/.github/ISSUE_TEMPLATE/45_usability-issue.md index b03b11606c1..79f23fe0a14 100644 --- a/.github/ISSUE_TEMPLATE/45_usability-issue.md +++ b/.github/ISSUE_TEMPLATE/45_usability-issue.md @@ -9,6 +9,9 @@ assignees: '' (you don't have to strictly follow this form) +**Company or project name** +Put your company name or project description here + **Describe the issue** A clear and concise description of what works not as it is supposed to. diff --git a/.github/ISSUE_TEMPLATE/50_build-issue.md b/.github/ISSUE_TEMPLATE/50_build-issue.md index 9b05fbbdd13..5a58add9ad8 100644 --- a/.github/ISSUE_TEMPLATE/50_build-issue.md +++ b/.github/ISSUE_TEMPLATE/50_build-issue.md @@ -9,6 +9,10 @@ assignees: '' > Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. 
Just in case, official build instructions are published here: https://clickhouse.com/docs/en/development/build/ +**Company or project name** + +> Put your company name or project description here + **Operating system** > OS kind or distribution, specific version/release, non-standard kernel if any. If you are trying to build inside virtual machine, please mention it too. diff --git a/.github/ISSUE_TEMPLATE/60_documentation-issue.md b/.github/ISSUE_TEMPLATE/60_documentation-issue.md index 557e5ea43c9..5a941977dac 100644 --- a/.github/ISSUE_TEMPLATE/60_documentation-issue.md +++ b/.github/ISSUE_TEMPLATE/60_documentation-issue.md @@ -8,6 +8,9 @@ labels: comp-documentation (you don't have to strictly follow this form) +**Company or project name** +Put your company name or project description here + **Describe the issue** A clear and concise description of what's wrong in documentation. diff --git a/.github/ISSUE_TEMPLATE/70_performance-issue.md b/.github/ISSUE_TEMPLATE/70_performance-issue.md index d0e549039a6..21eba3f5af1 100644 --- a/.github/ISSUE_TEMPLATE/70_performance-issue.md +++ b/.github/ISSUE_TEMPLATE/70_performance-issue.md @@ -9,6 +9,9 @@ assignees: '' (you don't have to strictly follow this form) +**Company or project name** +Put your company name or project description here + **Describe the situation** What exactly works slower than expected? diff --git a/.github/ISSUE_TEMPLATE/80_backward-compatibility.md b/.github/ISSUE_TEMPLATE/80_backward-compatibility.md index a13e9508f70..8058f5bcc53 100644 --- a/.github/ISSUE_TEMPLATE/80_backward-compatibility.md +++ b/.github/ISSUE_TEMPLATE/80_backward-compatibility.md @@ -9,6 +9,9 @@ assignees: '' (you don't have to strictly follow this form) +**Company or project name** +Put your company name or project description here + **Describe the issue** A clear and concise description of what works not as it is supposed to. diff --git a/.github/ISSUE_TEMPLATE/85_bug-report.md b/.github/ISSUE_TEMPLATE/85_bug-report.md index 6bf265260ac..c43473d63ad 100644 --- a/.github/ISSUE_TEMPLATE/85_bug-report.md +++ b/.github/ISSUE_TEMPLATE/85_bug-report.md @@ -11,6 +11,10 @@ assignees: '' > You have to provide the following information whenever possible. +**Company or project name** + +> Put your company name or project description here + **Describe what's wrong** > A clear and concise description of what works not as it is supposed to. diff --git a/.github/ISSUE_TEMPLATE/96_installation-issues.md b/.github/ISSUE_TEMPLATE/96_installation-issues.md index e4be8af86b6..5f1b6cfd640 100644 --- a/.github/ISSUE_TEMPLATE/96_installation-issues.md +++ b/.github/ISSUE_TEMPLATE/96_installation-issues.md @@ -7,6 +7,10 @@ assignees: '' --- +**Company or project name** + +Put your company name or project description here + **I have tried the following solutions**: https://clickhouse.com/docs/en/faq/troubleshooting/#troubleshooting-installation-errors **Installation type** diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index 3d7c34af551..51a1a6e2df8 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -42,40 +42,27 @@ At a minimum, the following information should be added (but add more as needed) > Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/ -
- CI Settings - -**NOTE:** If your merge the PR with modified CI you **MUST KNOW** what you are doing -**NOTE:** Checked options will be applied if set before CI RunConfig/PrepareRunConfig step -- [ ] Allow: Integration Tests +#### CI Settings (Only check the boxes if you know what you are doing): +- [ ] Allow: All Required Checks - [ ] Allow: Stateless tests - [ ] Allow: Stateful tests -- [ ] Allow: Unit tests +- [ ] Allow: Integration Tests - [ ] Allow: Performance tests -- [ ] Allow: All with aarch64 -- [ ] Allow: All with ASAN -- [ ] Allow: All with TSAN -- [ ] Allow: All with Analyzer -- [ ] Allow: All with Azure -- [ ] Allow: Add your option here +- [ ] Allow: All NOT Required Checks +- [ ] Allow: batch 1, 2 for multi-batch jobs +- [ ] Allow: batch 3, 4, 5, 6 for multi-batch jobs --- +- [ ] Exclude: Style check - [ ] Exclude: Fast test - [ ] Exclude: Integration Tests - [ ] Exclude: Stateless tests - [ ] Exclude: Stateful tests - [ ] Exclude: Performance tests - [ ] Exclude: All with ASAN -- [ ] Exclude: All with TSAN -- [ ] Exclude: All with MSAN -- [ ] Exclude: All with UBSAN -- [ ] Exclude: All with Coverage - [ ] Exclude: All with Aarch64 +- [ ] Exclude: All with TSAN, MSAN, UBSAN, Coverage --- -- [ ] do not test (only style check) -- [ ] upload all binary artifacts from build jobs -- [ ] disable merge-commit (no merge from master before tests) -- [ ] disable CI cache (job reuse) -- [ ] allow: batch 1, 2 for multi-batch jobs -- [ ] allow: batch 3, 4 -- [ ] allow: batch 5, 6 -
+- [ ] Do not test +- [ ] Upload binaries for special builds +- [ ] Disable merge-commit +- [ ] Disable CI cache diff --git a/.github/workflows/merge_queue.yml b/.github/workflows/merge_queue.yml index d1b03198485..c8b2452829b 100644 --- a/.github/workflows/merge_queue.yml +++ b/.github/workflows/merge_queue.yml @@ -80,11 +80,27 @@ jobs: run_command: | python3 fast_test_check.py + Builds_1: + needs: [RunConfig, BuildDockers] + if: ${{ !failure() && !cancelled() && contains(fromJson(needs.RunConfig.outputs.data).stages_data.stages_to_do, 'Builds_1') }} + # using callable wf (reusable_stage.yml) allows grouping all nested jobs under a tab + uses: ./.github/workflows/reusable_build_stage.yml + with: + stage: Builds_1 + data: ${{ needs.RunConfig.outputs.data }} + Tests_1: + needs: [RunConfig, Builds_1] + if: ${{ !failure() && !cancelled() && contains(fromJson(needs.RunConfig.outputs.data).stages_data.stages_to_do, 'Tests_1') }} + uses: ./.github/workflows/reusable_test_stage.yml + with: + stage: Tests_1 + data: ${{ needs.RunConfig.outputs.data }} + ################################# Stage Final ################################# # FinishCheck: if: ${{ !failure() && !cancelled() }} - needs: [RunConfig, BuildDockers, StyleCheck, FastTest] + needs: [RunConfig, BuildDockers, StyleCheck, FastTest, Builds_1, Tests_1] runs-on: [self-hosted, style-checker-aarch64] steps: - name: Check out repository code diff --git a/CHANGELOG.md b/CHANGELOG.md index b10f521b8ac..a089e9e7491 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -12,9 +12,8 @@ #### Backward Incompatible Change * Renamed "inverted indexes" to "full-text indexes" which is a less technical / more user-friendly name. This also changes internal table metadata and breaks tables with existing (experimental) inverted indexes. Please make to drop such indexes before upgrade and re-create them after upgrade. [#62884](https://github.com/ClickHouse/ClickHouse/pull/62884) ([Robert Schulze](https://github.com/rschu1ze)). -* Usage of functions `neighbor`, `runningAccumulate`, `runningDifferenceStartingWithFirstValue`, `runningDifference` deprecated (because it is error-prone). Proper window functions should be used instead. To enable them back, set `allow_deprecated_functions = 1` or set `compatibility = '24.4'` or lower. [#63132](https://github.com/ClickHouse/ClickHouse/pull/63132) ([Nikita Taranov](https://github.com/nickitat)). +* Usage of functions `neighbor`, `runningAccumulate`, `runningDifferenceStartingWithFirstValue`, `runningDifference` deprecated (because it is error-prone). Proper window functions should be used instead. To enable them back, set `allow_deprecated_error_prone_window_functions = 1` or set `compatibility = '24.4'` or lower. [#63132](https://github.com/ClickHouse/ClickHouse/pull/63132) ([Nikita Taranov](https://github.com/nickitat)). * Queries from `system.columns` will work faster if there is a large number of columns, but many databases or tables are not granted for `SHOW TABLES`. Note that in previous versions, if you grant `SHOW COLUMNS` to individual columns without granting `SHOW TABLES` to the corresponding tables, the `system.columns` table will show these columns, but in a new version, it will skip the table entirely. Remove trace log messages "Access granted" and "Access denied" that slowed down queries. [#63439](https://github.com/ClickHouse/ClickHouse/pull/63439) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Setting `replace_long_file_name_to_hash` is enabled by default for `MergeTree` tables. 
[#64457](https://github.com/ClickHouse/ClickHouse/pull/64457) ([Anton Popov](https://github.com/CurtizJ)). The data written with this setting can be read by server versions since 23.9. After you use ClickHouse with this setting enabled, you cannot downgrade to versions 23.8 and earlier. #### New Feature * Adds the `Form` format to read/write a single record in the `application/x-www-form-urlencoded` format. [#60199](https://github.com/ClickHouse/ClickHouse/pull/60199) ([Shaun Struwig](https://github.com/Blargian)). @@ -29,7 +28,6 @@ * Support for conditional function `clamp`. [#62377](https://github.com/ClickHouse/ClickHouse/pull/62377) ([skyoct](https://github.com/skyoct)). * Add `NPy` output format. [#62430](https://github.com/ClickHouse/ClickHouse/pull/62430) ([豪肥肥](https://github.com/HowePa)). * `Raw` format as a synonym for `TSVRaw`. [#63394](https://github.com/ClickHouse/ClickHouse/pull/63394) ([Unalian](https://github.com/Unalian)). -* Added a new SQL function `generateSnowflakeID` for generating Twitter-style Snowflake IDs. [#63577](https://github.com/ClickHouse/ClickHouse/pull/63577) ([Danila Puzov](https://github.com/kazalika)). * Added a new SQL function `generateUUIDv7` to generate version 7 UUIDs aka. timestamp-based UUIDs with random component. Also added a new function `UUIDToNum` to extract bytes from a UUID and a new function `UUIDv7ToDateTime` to extract timestamp component from a UUID version 7. [#62852](https://github.com/ClickHouse/ClickHouse/pull/62852) ([Alexey Petrunyaka](https://github.com/pet74alex)). * On Linux and MacOS, if the program has stdout redirected to a file with a compression extension, use the corresponding compression method instead of nothing (making it behave similarly to `INTO OUTFILE`). [#63662](https://github.com/ClickHouse/ClickHouse/pull/63662) ([v01dXYZ](https://github.com/v01dXYZ)). * Change warning on high number of attached tables to differentiate tables, views and dictionaries. [#64180](https://github.com/ClickHouse/ClickHouse/pull/64180) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)). @@ -43,26 +41,20 @@ * Account failed files in `s3queue_tracked_file_ttl_sec` and `s3queue_traked_files_limit` for `StorageS3Queue`. [#63638](https://github.com/ClickHouse/ClickHouse/pull/63638) ([Kseniia Sumarokova](https://github.com/kssenii)). #### Performance Improvement -* A native parquet reader, which can read parquet binary to ClickHouse columns directly. Now this feature can be activated by setting `input_format_parquet_use_native_reader` to true. [#60361](https://github.com/ClickHouse/ClickHouse/pull/60361) ([ZhiHong Zhang](https://github.com/copperybean)). * Less contention in filesystem cache (part 4). Allow to keep filesystem cache not filled to the limit by doing additional eviction in the background (controlled by `keep_free_space_size(elements)_ratio`). This allows to release pressure from space reservation for queries (on `tryReserve` method). Also this is done in a lock free way as much as possible, e.g. should not block normal cache usage. [#61250](https://github.com/ClickHouse/ClickHouse/pull/61250) ([Kseniia Sumarokova](https://github.com/kssenii)). * Skip merging of newly created projection blocks during `INSERT`-s. [#59405](https://github.com/ClickHouse/ClickHouse/pull/59405) ([Nikita Taranov](https://github.com/nickitat)). * Process string functions `...UTF8` 'asciily' if input strings are all ascii chars. Inspired by https://github.com/apache/doris/pull/29799. Overall speed up by 1.07x~1.62x. 
Notice that peak memory usage had been decreased in some cases. [#61632](https://github.com/ClickHouse/ClickHouse/pull/61632) ([李扬](https://github.com/taiyang-li)). * Improved performance of selection (`{}`) globs in StorageS3. [#62120](https://github.com/ClickHouse/ClickHouse/pull/62120) ([Andrey Zvonov](https://github.com/zvonand)). * HostResolver has each IP address several times. If remote host has several IPs and by some reason (firewall rules for example) access on some IPs allowed and on others forbidden, than only first record of forbidden IPs marked as failed, and in each try these IPs have a chance to be chosen (and failed again). Even if fix this, every 120 seconds DNS cache dropped, and IPs can be chosen again. [#62652](https://github.com/ClickHouse/ClickHouse/pull/62652) ([Anton Ivashkin](https://github.com/ianton-ru)). -* Function `splitByRegexp` is now faster when the regular expression argument is a single-character, trivial regular expression (in this case, it now falls back internally to `splitByChar`). [#62696](https://github.com/ClickHouse/ClickHouse/pull/62696) ([Robert Schulze](https://github.com/rschu1ze)). -* Aggregation with 8-bit and 16-bit keys became faster: added min/max in FixedHashTable to limit the array index and reduce the `isZero()` calls during iteration. [#62746](https://github.com/ClickHouse/ClickHouse/pull/62746) ([Jiebin Sun](https://github.com/jiebinn)). * Add a new configuration`prefer_merge_sort_block_bytes` to control the memory usage and speed up sorting 2 times when merging when there are many columns. [#62904](https://github.com/ClickHouse/ClickHouse/pull/62904) ([LiuNeng](https://github.com/liuneng1994)). * `clickhouse-local` will start faster. In previous versions, it was not deleting temporary directories by mistake. Now it will. This closes [#62941](https://github.com/ClickHouse/ClickHouse/issues/62941). [#63074](https://github.com/ClickHouse/ClickHouse/pull/63074) ([Alexey Milovidov](https://github.com/alexey-milovidov)). * Micro-optimizations for the new analyzer. [#63429](https://github.com/ClickHouse/ClickHouse/pull/63429) ([Raúl Marín](https://github.com/Algunenano)). * Index analysis will work if `DateTime` is compared to `DateTime64`. This closes [#63441](https://github.com/ClickHouse/ClickHouse/issues/63441). [#63443](https://github.com/ClickHouse/ClickHouse/pull/63443) [#63532](https://github.com/ClickHouse/ClickHouse/pull/63532) ([Alexey Milovidov](https://github.com/alexey-milovidov)). * Speed up indices of type `set` a little (around 1.5 times) by removing garbage. [#64098](https://github.com/ClickHouse/ClickHouse/pull/64098) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Optimized vertical merges in tables with sparse columns. [#64311](https://github.com/ClickHouse/ClickHouse/pull/64311) ([Anton Popov](https://github.com/CurtizJ)). -* Improve filtering of sparse columns: reduce redundant calls of `ColumnSparse::filter` to improve performance. [#64426](https://github.com/ClickHouse/ClickHouse/pull/64426) ([Jiebin Sun](https://github.com/jiebinn)). * Remove copying data when writing to the filesystem cache. [#63401](https://github.com/ClickHouse/ClickHouse/pull/63401) ([Kseniia Sumarokova](https://github.com/kssenii)). * Now backups with azure blob storage will use multicopy. [#64116](https://github.com/ClickHouse/ClickHouse/pull/64116) ([alesapin](https://github.com/alesapin)). * Allow to use native copy for azure even with different containers. 
[#64154](https://github.com/ClickHouse/ClickHouse/pull/64154) ([alesapin](https://github.com/alesapin)). * Finally enable native copy for azure. [#64182](https://github.com/ClickHouse/ClickHouse/pull/64182) ([alesapin](https://github.com/alesapin)). -* Improve the iteration over sparse columns to reduce call of `size`. [#64497](https://github.com/ClickHouse/ClickHouse/pull/64497) ([Jiebin Sun](https://github.com/jiebinn)). #### Improvement * Allow using `clickhouse-local` and its shortcuts `clickhouse` and `ch` with a query or queries file as a positional argument. Examples: `ch "SELECT 1"`, `ch --param_test Hello "SELECT {test:String}"`, `ch query.sql`. This closes [#62361](https://github.com/ClickHouse/ClickHouse/issues/62361). [#63081](https://github.com/ClickHouse/ClickHouse/pull/63081) ([Alexey Milovidov](https://github.com/alexey-milovidov)). @@ -92,14 +84,8 @@ * Exception handling now works when ClickHouse is used inside AWS Lambda. Author: [Alexey Coolnev](https://github.com/acoolnev). [#64014](https://github.com/ClickHouse/ClickHouse/pull/64014) ([Alexey Milovidov](https://github.com/alexey-milovidov)). * Throw `CANNOT_DECOMPRESS` instread of `CORRUPTED_DATA` on invalid compressed data passed via HTTP. [#64036](https://github.com/ClickHouse/ClickHouse/pull/64036) ([vdimir](https://github.com/vdimir)). * A tip for a single large number in Pretty formats now works for Nullable and LowCardinality. This closes [#61993](https://github.com/ClickHouse/ClickHouse/issues/61993). [#64084](https://github.com/ClickHouse/ClickHouse/pull/64084) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Added knob `metadata_storage_type` to keep free space on metadata storage disk. [#64128](https://github.com/ClickHouse/ClickHouse/pull/64128) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). * Add metrics, logs, and thread names around parts filtering with indices. [#64130](https://github.com/ClickHouse/ClickHouse/pull/64130) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Metrics to track the number of directories created and removed by the `plain_rewritable` metadata storage, and the number of entries in the local-to-remote in-memory map. [#64175](https://github.com/ClickHouse/ClickHouse/pull/64175) ([Julia Kartseva](https://github.com/jkartseva)). * Ignore `allow_suspicious_primary_key` on `ATTACH` and verify on `ALTER`. [#64202](https://github.com/ClickHouse/ClickHouse/pull/64202) ([Azat Khuzhin](https://github.com/azat)). -* The query cache now considers identical queries with different settings as different. This increases robustness in cases where different settings (e.g. `limit` or `additional_table_filters`) would affect the query result. [#64205](https://github.com/ClickHouse/ClickHouse/pull/64205) ([Robert Schulze](https://github.com/rschu1ze)). -* Test that a non standard error code `QPSLimitExceeded` is supported and it is retryable error. [#64225](https://github.com/ClickHouse/ClickHouse/pull/64225) ([Sema Checherinda](https://github.com/CheSema)). -* Settings from the user config doesn't affect merges and mutations for MergeTree on top of object storage. [#64456](https://github.com/ClickHouse/ClickHouse/pull/64456) ([alesapin](https://github.com/alesapin)). -* Test that `totalqpslimitexceeded` is a retriable s3 error. [#64520](https://github.com/ClickHouse/ClickHouse/pull/64520) ([Sema Checherinda](https://github.com/CheSema)). #### Build/Testing/Packaging Improvement * ClickHouse is built with clang-18. 
A lot of new checks from clang-tidy-18 have been enabled. [#60469](https://github.com/ClickHouse/ClickHouse/pull/60469) ([Alexey Milovidov](https://github.com/alexey-milovidov)). @@ -162,7 +148,6 @@ * Fix analyzer: there's turtles all the way down... [#63930](https://github.com/ClickHouse/ClickHouse/pull/63930) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). * Allow certain ALTER TABLE commands for `plain_rewritable` disk [#63933](https://github.com/ClickHouse/ClickHouse/pull/63933) ([Julia Kartseva](https://github.com/jkartseva)). * Recursive CTE distributed fix [#63939](https://github.com/ClickHouse/ClickHouse/pull/63939) ([Maksim Kita](https://github.com/kitaisreal)). -* Fix reading of columns of type `Tuple(Map(LowCardinality(...)))` [#63956](https://github.com/ClickHouse/ClickHouse/pull/63956) ([Anton Popov](https://github.com/CurtizJ)). * Analyzer: Fix COLUMNS resolve [#63962](https://github.com/ClickHouse/ClickHouse/pull/63962) ([Dmitry Novik](https://github.com/novikd)). * LIMIT BY and skip_unused_shards with analyzer [#63983](https://github.com/ClickHouse/ClickHouse/pull/63983) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * A fix for some trash (experimental Kusto) [#63992](https://github.com/ClickHouse/ClickHouse/pull/63992) ([Yong Wang](https://github.com/kashwy)). @@ -176,8 +161,6 @@ * Prevent LOGICAL_ERROR on CREATE TABLE as Materialized View [#64174](https://github.com/ClickHouse/ClickHouse/pull/64174) ([Raúl Marín](https://github.com/Algunenano)). * Query Cache: Consider identical queries against different databases as different [#64199](https://github.com/ClickHouse/ClickHouse/pull/64199) ([Robert Schulze](https://github.com/rschu1ze)). * Ignore `text_log` for Keeper [#64218](https://github.com/ClickHouse/ClickHouse/pull/64218) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix ARRAY JOIN with Distributed. [#64226](https://github.com/ClickHouse/ClickHouse/pull/64226) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix: CNF with mutually exclusive atoms reduction [#64256](https://github.com/ClickHouse/ClickHouse/pull/64256) ([Eduard Karacharov](https://github.com/korowa)). * Fix Logical error: Bad cast for Buffer table with prewhere. [#64388](https://github.com/ClickHouse/ClickHouse/pull/64388) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). @@ -680,7 +663,7 @@ * Improve the operation of `sumMapFiltered` with NaN values. NaN values are now placed at the end (instead of randomly) and considered different from any values. `-0` is now also treated as equal to `0`; since 0 values are discarded, `-0` values are discarded too. [#58959](https://github.com/ClickHouse/ClickHouse/pull/58959) ([Raúl Marín](https://github.com/Algunenano)). * The function `visibleWidth` will behave according to the docs. In previous versions, it simply counted code points after string serialization, like the `lengthUTF8` function, but didn't consider zero-width and combining characters, full-width characters, tabs, and deletes. Now the behavior is changed accordingly. If you want to keep the old behavior, set `function_visible_width_behavior` to `0`, or set `compatibility` to `23.12` or lower. [#59022](https://github.com/ClickHouse/ClickHouse/pull/59022) ([Alexey Milovidov](https://github.com/alexey-milovidov)). * `Kusto` dialect is disabled until these two bugs will be fixed: [#59037](https://github.com/ClickHouse/ClickHouse/issues/59037) and [#59036](https://github.com/ClickHouse/ClickHouse/issues/59036). 
[#59305](https://github.com/ClickHouse/ClickHouse/pull/59305) ([Alexey Milovidov](https://github.com/alexey-milovidov)). Any attempt to use `Kusto` will result in exception. -* More efficient implementation of the `FINAL` modifier no longer guarantees preserving the order even if `max_threads = 1`. If you counted on the previous behavior, set `enable_vertical_final` to 0 or `compatibility` to `23.12`. +* More efficient implementation of the `FINAL` modifier no longer guarantees preserving the order even if `max_threads = 1`. If you counted on the previous behavior, set `enable_vertical_final` to 0 or `compatibility` to `23.12`. #### New Feature * Implement Variant data type that represents a union of other data types. Type `Variant(T1, T2, ..., TN)` means that each row of this type has a value of either type `T1` or `T2` or ... or `TN` or none of them (`NULL` value). Variant type is available under a setting `allow_experimental_variant_type`. Reference: [#54864](https://github.com/ClickHouse/ClickHouse/issues/54864). [#58047](https://github.com/ClickHouse/ClickHouse/pull/58047) ([Kruglov Pavel](https://github.com/Avogar)). diff --git a/SECURITY.md b/SECURITY.md index 14c39129db9..f8511fb42d6 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -2,20 +2,22 @@ the file is autogenerated by utils/security-generator/generate_security.py --> -# Security Policy +# ClickHouse Security Vulnerability Response Policy -## Security Announcements -Security fixes will be announced by posting them in the [security changelog](https://clickhouse.com/docs/en/whats-new/security-changelog/). +## Security Change Log and Support -## Scope and Supported Versions +Details regarding security fixes are publicly reported in our [security changelog](https://clickhouse.com/docs/en/whats-new/security-changelog/). A summary of known security vulnerabilities is shown at the bottom of this page. -The following versions of ClickHouse server are currently being supported with security updates: +Vulnerability notifications pre-release or during embargo periods are available to open source users and support customers registered for vulnerability alerts. Refer to our [Embargo Policy](#embargo-policy) below. + +The following versions of ClickHouse server are currently supported with security updates: | Version | Supported | |:-|:-| +| 24.5 | ✔️ | | 24.4 | ✔️ | | 24.3 | ✔️ | -| 24.2 | ✔️ | +| 24.2 | ❌ | | 24.1 | ❌ | | 23.* | ❌ | | 23.8 | ✔️ | @@ -37,7 +39,7 @@ The following versions of ClickHouse server are currently being supported with s We're extremely grateful for security researchers and users that report vulnerabilities to the ClickHouse Open Source Community. All reports are thoroughly investigated by developers. -To report a potential vulnerability in ClickHouse please send the details about it to [security@clickhouse.com](mailto:security@clickhouse.com). We do not offer any financial rewards for reporting issues to us using this method. Alternatively, you can also submit your findings through our public bug bounty program hosted by [Bugcrowd](https://bugcrowd.com/clickhouse) and be rewarded for it as per the program scope and rules of engagement. +To report a potential vulnerability in ClickHouse please send the details about it through our public bug bounty program hosted by [Bugcrowd](https://bugcrowd.com/clickhouse) and be rewarded for it as per the program scope and rules of engagement. ### When Should I Report a Vulnerability? 
@@ -59,3 +61,21 @@ As the security issue moves from triage, to identified fix, to release planning A public disclosure date is negotiated by the ClickHouse maintainers and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to 90 days. For a vulnerability with a straightforward mitigation, we expect the report date to disclosure date to be on the order of 7 days. +## Embargo Policy + +Open source users and support customers may subscribe to receive alerts during the embargo period by visiting [https://trust.clickhouse.com/?product=clickhouseoss](https://trust.clickhouse.com/?product=clickhouseoss), requesting access and subscribing for alerts. Subscribers agree not to make these notifications public, issue communications, share this information with others, or issue public patches before the disclosure date. Accidental disclosures must be reported immediately to trust@clickhouse.com. Failure to follow this policy or repeated leaks may result in removal from the subscriber list. + +Participation criteria: +1. Be a current open source user or support customer with a valid corporate email domain (no @gmail.com, @azure.com, etc.). +1. Sign up to the ClickHouse OSS Trust Center at [https://trust.clickhouse.com](https://trust.clickhouse.com). +1. Accept the ClickHouse Security Vulnerability Response Policy as outlined above. +1. Subscribe to ClickHouse OSS Trust Center alerts. + +Removal criteria: +1. Members may be removed for failure to follow this policy or repeated leaks. +1. Members may be removed for bounced messages (mail delivery failure). +1. Members may unsubscribe at any time. + +Notification process: +ClickHouse will post notifications within our OSS Trust Center and notify subscribers. Subscribers must log in to the Trust Center to download the notification. The notification will include the timeframe for public disclosure. 
+ diff --git a/docker/keeper/Dockerfile b/docker/keeper/Dockerfile index 413ad2dfaed..b3271d94184 100644 --- a/docker/keeper/Dockerfile +++ b/docker/keeper/Dockerfile @@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \ # lts / testing / prestable / etc ARG REPO_CHANNEL="stable" ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}" -ARG VERSION="24.4.1.2088" +ARG VERSION="24.5.1.1763" ARG PACKAGES="clickhouse-keeper" ARG DIRECT_DOWNLOAD_URLS="" diff --git a/docker/server/Dockerfile.alpine b/docker/server/Dockerfile.alpine index 5e224b16764..3f3b880c8f3 100644 --- a/docker/server/Dockerfile.alpine +++ b/docker/server/Dockerfile.alpine @@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \ # lts / testing / prestable / etc ARG REPO_CHANNEL="stable" ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}" -ARG VERSION="24.4.1.2088" +ARG VERSION="24.5.1.1763" ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static" ARG DIRECT_DOWNLOAD_URLS="" diff --git a/docker/server/Dockerfile.ubuntu b/docker/server/Dockerfile.ubuntu index d82be0e63f6..5fd22ee9b51 100644 --- a/docker/server/Dockerfile.ubuntu +++ b/docker/server/Dockerfile.ubuntu @@ -28,7 +28,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list ARG REPO_CHANNEL="stable" ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main" -ARG VERSION="24.4.1.2088" +ARG VERSION="24.5.1.1763" ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static" #docker-official-library:off diff --git a/docker/test/upgrade/run.sh b/docker/test/upgrade/run.sh index 29174cc87e6..1f2cc9903b2 100644 --- a/docker/test/upgrade/run.sh +++ b/docker/test/upgrade/run.sh @@ -65,46 +65,22 @@ function save_settings_clean() script -q -c "clickhouse-local -q \"select * from system.settings into outfile '$out'\"" --log-out /dev/null } +# We save the (numeric) version of the old server to compare setting changes between the 2 +# We do this since we are testing against the latest release, not taking into account release candidates, so we might +# be testing current master (24.6) against the latest stable release (24.4) +function save_major_version() +{ + local out=$1 && shift + clickhouse-local -q "SELECT a[1]::UInt64 * 100 + a[2]::UInt64 as v FROM (Select splitByChar('.', version()) as a) into outfile '$out'" +} + save_settings_clean 'old_settings.native' +save_major_version 'old_version.native' # Initial run without S3 to create system.*_log on local file system to make it # available for dump via clickhouse-local configure -function remove_keeper_config() -{ - sudo sed -i "/<$1>$2<\/$1>/d" /etc/clickhouse-server/config.d/keeper_port.xml -} - -# async_replication setting doesn't exist on some older versions -remove_keeper_config "async_replication" "1" - -# create_if_not_exists feature flag doesn't exist on some older versions -remove_keeper_config "create_if_not_exists" "[01]" - -#todo: remove these after 24.3 released. -sudo sed -i "s|azure<|azure_blob_storage<|" /etc/clickhouse-server/config.d/azure_storage_conf.xml - -#todo: remove these after 24.3 released. 
-sudo sed -i "s|local<|local_blob_storage<|" /etc/clickhouse-server/config.d/storage_conf.xml - -# latest_logs_cache_size_threshold setting doesn't exist on some older versions -remove_keeper_config "latest_logs_cache_size_threshold" "[[:digit:]]\+" - -# commit_logs_cache_size_threshold setting doesn't exist on some older versions -remove_keeper_config "commit_logs_cache_size_threshold" "[[:digit:]]\+" - -# it contains some new settings, but we can safely remove it -rm /etc/clickhouse-server/config.d/merge_tree.xml -rm /etc/clickhouse-server/config.d/enable_wait_for_shutdown_replicated_tables.xml -rm /etc/clickhouse-server/config.d/zero_copy_destructive_operations.xml -rm /etc/clickhouse-server/config.d/storage_conf_02963.xml -rm /etc/clickhouse-server/config.d/backoff_failed_mutation.xml -rm /etc/clickhouse-server/config.d/handlers.yaml -rm /etc/clickhouse-server/users.d/nonconst_timezone.xml -rm /etc/clickhouse-server/users.d/s3_cache_new.xml -rm /etc/clickhouse-server/users.d/replicated_ddl_entry.xml - start stop mv /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/clickhouse-server.initial.log @@ -116,44 +92,11 @@ export USE_S3_STORAGE_FOR_MERGE_TREE=1 export ZOOKEEPER_FAULT_INJECTION=0 configure -# force_sync=false doesn't work correctly on some older versions -sudo sed -i "s|false|true|" /etc/clickhouse-server/config.d/keeper_port.xml - -#todo: remove these after 24.3 released. -sudo sed -i "s|azure<|azure_blob_storage<|" /etc/clickhouse-server/config.d/azure_storage_conf.xml - -#todo: remove these after 24.3 released. -sudo sed -i "s|local<|local_blob_storage<|" /etc/clickhouse-server/config.d/storage_conf.xml - -# async_replication setting doesn't exist on some older versions -remove_keeper_config "async_replication" "1" - -# create_if_not_exists feature flag doesn't exist on some older versions -remove_keeper_config "create_if_not_exists" "[01]" - -# latest_logs_cache_size_threshold setting doesn't exist on some older versions -remove_keeper_config "latest_logs_cache_size_threshold" "[[:digit:]]\+" - -# commit_logs_cache_size_threshold setting doesn't exist on some older versions -remove_keeper_config "commit_logs_cache_size_threshold" "[[:digit:]]\+" - # But we still need default disk because some tables loaded only into it sudo sed -i "s|
<main><disk>s3</disk></main>|<main><disk>s3</disk></main><default><disk>default</disk></default>|" /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml sudo chown clickhouse /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml sudo chgrp clickhouse /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml -# it contains some new settings, but we can safely remove it -rm /etc/clickhouse-server/config.d/merge_tree.xml -rm /etc/clickhouse-server/config.d/enable_wait_for_shutdown_replicated_tables.xml -rm /etc/clickhouse-server/config.d/zero_copy_destructive_operations.xml -rm /etc/clickhouse-server/config.d/storage_conf_02963.xml -rm /etc/clickhouse-server/config.d/backoff_failed_mutation.xml -rm /etc/clickhouse-server/config.d/handlers.yaml -rm /etc/clickhouse-server/config.d/block_number.xml -rm /etc/clickhouse-server/users.d/nonconst_timezone.xml -rm /etc/clickhouse-server/users.d/s3_cache_new.xml -rm /etc/clickhouse-server/users.d/replicated_ddl_entry.xml - start clickhouse-client --query="SELECT 'Server version: ', version()" @@ -192,6 +135,7 @@ then save_settings_clean 'new_settings.native' clickhouse-local -nmq " CREATE TABLE old_settings AS file('old_settings.native'); + CREATE TABLE old_version AS file('old_version.native'); CREATE TABLE new_settings AS file('new_settings.native'); SELECT @@ -202,8 +146,11 @@ then LEFT JOIN old_settings ON new_settings.name = old_settings.name WHERE (new_settings.value != old_settings.value) AND (name NOT IN ( SELECT arrayJoin(tupleElement(changes, 'name')) - FROM system.settings_changes - WHERE version = extract(version(), '^(?:\\d+\\.\\d+)') + FROM + ( + SELECT *, splitByChar('.', version) AS version_array FROM system.settings_changes + ) + WHERE (version_array[1]::UInt64 * 100 + version_array[2]::UInt64) > (SELECT v FROM old_version LIMIT 1) )) SETTINGS join_use_nulls = 1 INTO OUTFILE 'changed_settings.txt' @@ -216,8 +163,11 @@ then FROM old_settings )) AND (name NOT IN ( SELECT arrayJoin(tupleElement(changes, 'name')) - FROM system.settings_changes - WHERE version = extract(version(), '^(?:\\d+\\.\\d+)') + FROM + ( + SELECT *, splitByChar('.', version) AS version_array FROM system.settings_changes + ) + WHERE (version_array[1]::UInt64 * 100 + version_array[2]::UInt64) > (SELECT v FROM old_version LIMIT 1) )) INTO OUTFILE 'new_settings.txt' FORMAT PrettyCompactNoEscapes; diff --git a/docs/changelogs/v24.5.1.1763-stable.md b/docs/changelogs/v24.5.1.1763-stable.md new file mode 100644 index 00000000000..384e0395c4d --- /dev/null +++ b/docs/changelogs/v24.5.1.1763-stable.md @@ -0,0 +1,366 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v24.5.1.1763-stable (647c154a94d) FIXME as compared to v24.4.1.2088-stable (6d4b31322d1)
+
+#### Backward Incompatible Change
+* Renamed "inverted indexes" to "full-text indexes", which is a less technical / more user-friendly name. This also changes internal table metadata and breaks tables with existing (experimental) inverted indexes. Please make sure to drop such indexes before upgrading and re-create them after the upgrade. [#62884](https://github.com/ClickHouse/ClickHouse/pull/62884) ([Robert Schulze](https://github.com/rschu1ze)).
+* Usage of the functions `neighbor`, `runningAccumulate`, `runningDifferenceStartingWithFirstValue`, and `runningDifference` is deprecated (because it is error-prone). Proper window functions should be used instead; see the sketch below. To enable them back, set `allow_deprecated_error_prone_window_functions=1`. [#63132](https://github.com/ClickHouse/ClickHouse/pull/63132) ([Nikita Taranov](https://github.com/nickitat)).
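For the deprecation entry above, here is a minimal sketch of replacing `runningDifference` with a proper window function. The table and columns (`events`, `ts`, `value`) are hypothetical and only for illustration:

```sql
-- Hypothetical table: events(ts DateTime, value UInt64).
-- Deprecated, block-order-dependent form:
--   SELECT runningDifference(value) FROM events;
-- Window-function replacement with explicit, deterministic ordering:
SELECT
    ts,
    value - lagInFrame(value, 1, value) OVER (
        ORDER BY ts ASC ROWS BETWEEN 1 PRECEDING AND CURRENT ROW
    ) AS delta
FROM events
ORDER BY ts;

-- Temporary escape hatch while migrating (per the entry above):
SET allow_deprecated_error_prone_window_functions = 1;
```

Unlike `runningDifference`, the window form does not depend on how rows are split into blocks.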
+* Queries from `system.columns` will work faster if there is a large number of columns, but many databases or tables are not granted for `SHOW TABLES`. Note that in previous versions, if you grant `SHOW COLUMNS` to individual columns without granting `SHOW TABLES` to the corresponding tables, the `system.columns` table will show these columns, but in the new version, it will skip the table entirely. Remove trace log messages "Access granted" and "Access denied" that slowed down queries. [#63439](https://github.com/ClickHouse/ClickHouse/pull/63439) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+
+#### New Feature
+* Provide support for the AzureBlobStorage function in ClickHouse server to use Azure Workload Identity to authenticate against Azure blob storage. If the `use_workload_identity` parameter is set in the config, [workload identity](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/identity/azure-identity#authenticate-azure-hosted-applications) is used for authentication. [#57881](https://github.com/ClickHouse/ClickHouse/pull/57881) ([Vinay Suryadevara](https://github.com/vinay92-ch)).
+* Introduce bulk loading to StorageEmbeddedRocksDB by creating and ingesting SST files instead of relying on RocksDB's built-in memtable. This helps to increase importing speed, especially for long-running insert queries to StorageEmbeddedRocksDB tables. Also, introduce `StorageEmbeddedRocksDB` table settings. [#59163](https://github.com/ClickHouse/ClickHouse/pull/59163) ([Duc Canh Le](https://github.com/canhld94)).
+* Users can now parse CRLF with the TSV format using the setting `input_format_tsv_crlf_end_of_line`. Closes [#56257](https://github.com/ClickHouse/ClickHouse/issues/56257). [#59747](https://github.com/ClickHouse/ClickHouse/pull/59747) ([Shaun Struwig](https://github.com/Blargian)).
+* Adds the `Form` format to read/write a single record in the `application/x-www-form-urlencoded` format. [#60199](https://github.com/ClickHouse/ClickHouse/pull/60199) ([Shaun Struwig](https://github.com/Blargian)).
+* Added the possibility to compress in CROSS JOIN. [#60459](https://github.com/ClickHouse/ClickHouse/pull/60459) ([p1rattttt](https://github.com/p1rattttt)).
+* New setting `input_format_force_null_for_omitted_fields` that forces NULL values for omitted fields. [#60887](https://github.com/ClickHouse/ClickHouse/pull/60887) ([Constantine Peresypkin](https://github.com/pkit)).
+* Support joins with inequality conditions which involve columns from both the left and right table, e.g. `t1.y < t2.y`. To enable, `SET allow_experimental_join_condition = 1`. [#60920](https://github.com/ClickHouse/ClickHouse/pull/60920) ([lgbo](https://github.com/lgbo-ustc)).
+* Earlier, our S3 storage and the `s3` table function didn't support selecting from archive files. Added a solution that allows iterating over files inside archives in S3. [#62259](https://github.com/ClickHouse/ClickHouse/pull/62259) ([Daniil Ivanik](https://github.com/divanik)).
+* Support for the conditional function `clamp`. [#62377](https://github.com/ClickHouse/ClickHouse/pull/62377) ([skyoct](https://github.com/skyoct)).
+* Add `NPy` output format. [#62430](https://github.com/ClickHouse/ClickHouse/pull/62430) ([豪肥肥](https://github.com/HowePa)).
+* Added SQL functions `generateUUIDv7`, `generateUUIDv7ThreadMonotonic`, `generateUUIDv7NonMonotonic` (with different monotonicity/performance trade-offs) to generate version 7 UUIDs, i.e. timestamp-based UUIDs with a random component. Also added a new function `UUIDToNum` to extract bytes from a UUID and a new function `UUIDv7ToDateTime` to extract the timestamp component from a version 7 UUID. [#62852](https://github.com/ClickHouse/ClickHouse/pull/62852) ([Alexey Petrunyaka](https://github.com/pet74alex)).
+* Backported in [#64307](https://github.com/ClickHouse/ClickHouse/issues/64307): Implement the Dynamic data type that allows storing values of any type inside it without knowing all of them in advance. The Dynamic type is available under the setting `allow_experimental_dynamic_type`. Reference: [#54864](https://github.com/ClickHouse/ClickHouse/issues/54864). [#63058](https://github.com/ClickHouse/ClickHouse/pull/63058) ([Kruglov Pavel](https://github.com/Avogar)).
+* Introduce bulk loading to StorageEmbeddedRocksDB by creating and ingesting SST files instead of relying on RocksDB's built-in memtable. This helps to increase importing speed, especially for long-running insert queries to StorageEmbeddedRocksDB tables. Also, introduce StorageEmbeddedRocksDB table settings. [#63324](https://github.com/ClickHouse/ClickHouse/pull/63324) ([Duc Canh Le](https://github.com/canhld94)).
+* `Raw` format as a synonym for `TSVRaw`. [#63394](https://github.com/ClickHouse/ClickHouse/pull/63394) ([Unalian](https://github.com/Unalian)).
+* Added the possibility to do a cross join in a temporary file if its size exceeds limits. [#63432](https://github.com/ClickHouse/ClickHouse/pull/63432) ([p1rattttt](https://github.com/p1rattttt)).
+* On Linux and MacOS, if the program has STDOUT redirected to a file with a compression extension, use the corresponding compression method instead of nothing (making it behave similarly to `INTO OUTFILE`). [#63662](https://github.com/ClickHouse/ClickHouse/pull/63662) ([v01dXYZ](https://github.com/v01dXYZ)).
+* Change the warning on a high number of attached tables to differentiate tables, views and dictionaries. [#64180](https://github.com/ClickHouse/ClickHouse/pull/64180) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
+
+#### Performance Improvement
+* Skip merging of newly created projection blocks during `INSERT`-s. [#59405](https://github.com/ClickHouse/ClickHouse/pull/59405) ([Nikita Taranov](https://github.com/nickitat)).
+* Process string functions `XXXUTF8` 'asciily' if input strings are all-ASCII chars. Inspired by https://github.com/apache/doris/pull/29799. Overall speedup of 1.07x~1.62x. Notice that peak memory usage decreased in some cases. [#61632](https://github.com/ClickHouse/ClickHouse/pull/61632) ([李扬](https://github.com/taiyang-li)).
+* Improved performance of selection (`{}`) globs in StorageS3. [#62120](https://github.com/ClickHouse/ClickHouse/pull/62120) ([Andrey Zvonov](https://github.com/zvonand)).
+* HostResolver kept each IP address several times. If a remote host has several IPs and, for some reason (firewall rules, for example), access is allowed on some IPs and forbidden on others, then only the first record of the forbidden IPs was marked as failed, and on each try these IPs had a chance to be chosen (and fail again). Even with that fixed, the DNS cache is dropped every 120 seconds, and the IPs can be chosen again. [#62652](https://github.com/ClickHouse/ClickHouse/pull/62652) ([Anton Ivashkin](https://github.com/ianton-ru)).
+* Add a new configuration `prefer_merge_sort_block_bytes` to control the memory usage and speed up sorting 2 times when merging when there are many columns. [#62904](https://github.com/ClickHouse/ClickHouse/pull/62904) ([LiuNeng](https://github.com/liuneng1994)).
+* `clickhouse-local` will start faster.
In previous versions, it was not deleting temporary directories by mistake. Now it will. This closes [#62941](https://github.com/ClickHouse/ClickHouse/issues/62941). [#63074](https://github.com/ClickHouse/ClickHouse/pull/63074) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Micro-optimizations for the new analyzer. [#63429](https://github.com/ClickHouse/ClickHouse/pull/63429) ([Raúl Marín](https://github.com/Algunenano)). +* Index analysis will work if `DateTime` is compared to `DateTime64`. This closes [#63441](https://github.com/ClickHouse/ClickHouse/issues/63441). [#63443](https://github.com/ClickHouse/ClickHouse/pull/63443) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Index analysis will work if `DateTime` is compared to `DateTime64`. This closes [#63441](https://github.com/ClickHouse/ClickHouse/issues/63441). [#63532](https://github.com/ClickHouse/ClickHouse/pull/63532) ([Raúl Marín](https://github.com/Algunenano)). +* Speed up indices of type `set` a little (around 1.5 times) by removing garbage. [#64098](https://github.com/ClickHouse/ClickHouse/pull/64098) ([Alexey Milovidov](https://github.com/alexey-milovidov)). + +#### Improvement +* Maps can now have `Float32`, `Float64`, `Array(T)`, `Map(K,V)` and `Tuple(T1, T2, ...)` as keys. Closes [#54537](https://github.com/ClickHouse/ClickHouse/issues/54537). [#59318](https://github.com/ClickHouse/ClickHouse/pull/59318) ([李扬](https://github.com/taiyang-li)). +* Multiline strings with border preservation and column width change. [#59940](https://github.com/ClickHouse/ClickHouse/pull/59940) ([Volodyachan](https://github.com/Volodyachan)). +* Make rabbitmq nack broken messages. Closes [#45350](https://github.com/ClickHouse/ClickHouse/issues/45350). [#60312](https://github.com/ClickHouse/ClickHouse/pull/60312) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix a crash in asynchronous stack unwinding (such as when using the sampling query profiler) while interpreting debug info. This closes [#60460](https://github.com/ClickHouse/ClickHouse/issues/60460). [#60468](https://github.com/ClickHouse/ClickHouse/pull/60468) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Distinct messages for s3 error 'no key' for cases disk and storage. [#61108](https://github.com/ClickHouse/ClickHouse/pull/61108) ([Sema Checherinda](https://github.com/CheSema)). +* Less contention in filesystem cache (part 4). Allow to keep filesystem cache not filled to the limit by doing additional eviction in the background (controlled by `keep_free_space_size(elements)_ratio`). This allows to release pressure from space reservation for queries (on `tryReserve` method). Also this is done in a lock free way as much as possible, e.g. should not block normal cache usage. [#61250](https://github.com/ClickHouse/ClickHouse/pull/61250) ([Kseniia Sumarokova](https://github.com/kssenii)). +* The progress bar will work for trivial queries with LIMIT from `system.zeros`, `system.zeros_mt` (it already works for `system.numbers` and `system.numbers_mt`), and the `generateRandom` table function. As a bonus, if the total number of records is greater than the `max_rows_to_read` limit, it will throw an exception earlier. This closes [#58183](https://github.com/ClickHouse/ClickHouse/issues/58183). [#61823](https://github.com/ClickHouse/ClickHouse/pull/61823) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* YAML Merge Key support. [#62685](https://github.com/ClickHouse/ClickHouse/pull/62685) ([Azat Khuzhin](https://github.com/azat)). 
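The first entry of this Improvement section extends the allowed `Map` key types. A small sketch under invented names (`map_keys_demo` and its columns are hypothetical):

```sql
-- Map keys are no longer limited to string/integer-like types.
CREATE TABLE map_keys_demo
(
    by_float Map(Float64, String),
    by_tuple Map(Tuple(UInt8, UInt8), String)
)
ENGINE = Memory;

INSERT INTO map_keys_demo VALUES (map(3.14, 'pi'), map(tuple(1, 2), 'point'));

SELECT by_float[3.14] AS f, by_tuple[tuple(1, 2)] AS t FROM map_keys_demo;
```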
+* Enhance the error message when a non-deterministic function is used with a Replicated source. [#62896](https://github.com/ClickHouse/ClickHouse/pull/62896) ([Grégoire Pineau](https://github.com/lyrixx)).
+* Fix interserver secret for Distributed over Distributed from `remote`. [#63013](https://github.com/ClickHouse/ClickHouse/pull/63013) ([Azat Khuzhin](https://github.com/azat)).
+* Allow using `clickhouse-local` and its shortcuts `clickhouse` and `ch` with a query or queries file as a positional argument. Examples: `ch "SELECT 1"`, `ch --param_test Hello "SELECT {test:String}"`, `ch query.sql`. This closes [#62361](https://github.com/ClickHouse/ClickHouse/issues/62361). [#63081](https://github.com/ClickHouse/ClickHouse/pull/63081) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Support configuration substitutions from YAML files. [#63106](https://github.com/ClickHouse/ClickHouse/pull/63106) ([Eduard Karacharov](https://github.com/korowa)).
+* Add TTL information in the system `parts_columns` table. [#63200](https://github.com/ClickHouse/ClickHouse/pull/63200) ([litlig](https://github.com/litlig)).
+* Keep previous data in the terminal after picking from skim suggestions. [#63261](https://github.com/ClickHouse/ClickHouse/pull/63261) ([FlameFactory](https://github.com/FlameFactory)).
+* Width of fields is now correctly calculated, ignoring ANSI escape sequences. [#63270](https://github.com/ClickHouse/ClickHouse/pull/63270) ([Shaun Struwig](https://github.com/Blargian)).
+* Enable `plain_rewritable` metadata for local and Azure (`azure_blob_storage`) object storages. [#63365](https://github.com/ClickHouse/ClickHouse/pull/63365) ([Julia Kartseva](https://github.com/jkartseva)).
+* Support English-style Unicode quotes, e.g. “Hello”, ‘world’ (see the sketch below). This is questionable in general but helpful when you type your query in a word processor, such as Google Docs. This closes [#58634](https://github.com/ClickHouse/ClickHouse/issues/58634). [#63381](https://github.com/ClickHouse/ClickHouse/pull/63381) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Allowed creating a MaterializedMySQL database without a connection to MySQL. [#63397](https://github.com/ClickHouse/ClickHouse/pull/63397) ([Kirill](https://github.com/kirillgarbar)).
+* Remove copying data when writing to the filesystem cache. [#63401](https://github.com/ClickHouse/ClickHouse/pull/63401) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Update the usage of error code `NUMBER_OF_ARGUMENTS_DOESNT_MATCH` by more accurate error codes when appropriate. [#63406](https://github.com/ClickHouse/ClickHouse/pull/63406) ([Yohann Jardin](https://github.com/yohannj)).
+* `os_user` and `client_hostname` are now correctly set up for queries for command line suggestions in clickhouse-client. This closes [#63430](https://github.com/ClickHouse/ClickHouse/issues/63430). [#63433](https://github.com/ClickHouse/ClickHouse/pull/63433) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Fixed tabulation from line numbering, correct handling of length when moving a line if the value has a tab, added tests. [#63493](https://github.com/ClickHouse/ClickHouse/pull/63493) ([Volodyachan](https://github.com/Volodyachan)).
+* Added the `aggregate_function_group_array_has_limit_size` setting to support discarding data in some scenarios. [#63516](https://github.com/ClickHouse/ClickHouse/pull/63516) ([zhongyuankai](https://github.com/zhongyuankai)).
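A minimal sketch of the Unicode-quotes entry above; the assumption is that the curly quotes parse exactly like their ASCII counterparts:

```sql
-- ‘...’ now behaves like '...' (and “...” like "..."), so queries pasted
-- from a word processor keep working:
SELECT ‘world’ AS greeting;  -- equivalent to: SELECT 'world' AS greeting;
```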
+* Automatically mark a replica of a Replicated database as lost and start recovery if some DDL task fails more than `max_retries_before_automatic_recovery` (100 by default) times in a row with the same error. Also, fixed a bug that could cause skipping DDL entries when an exception is thrown during an early stage of entry execution. [#63549](https://github.com/ClickHouse/ClickHouse/pull/63549) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Automatically correct `max_block_size=0` to the default value. [#63587](https://github.com/ClickHouse/ClickHouse/pull/63587) ([Antonio Andelic](https://github.com/antonio2368)).
+* Account failed files in `s3queue_tracked_file_ttl_sec` and `s3queue_traked_files_limit` for `StorageS3Queue`. [#63638](https://github.com/ClickHouse/ClickHouse/pull/63638) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Add a `build_id` ALIAS column to `trace_log` to facilitate auto renaming upon detecting binary changes. This is to address [#52086](https://github.com/ClickHouse/ClickHouse/issues/52086). [#63656](https://github.com/ClickHouse/ClickHouse/pull/63656) ([Zimu Li](https://github.com/woodlzm)).
+* Enable the truncate operation for object storage disks. [#63693](https://github.com/ClickHouse/ClickHouse/pull/63693) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
+* The loading of the keywords list is now dependent on the server revision and will be disabled for old versions of ClickHouse server. CC @azat. [#63786](https://github.com/ClickHouse/ClickHouse/pull/63786) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Allow trailing commas in the columns list in the INSERT query. For example, `INSERT INTO test (a, b, c, ) VALUES ...`. [#63803](https://github.com/ClickHouse/ClickHouse/pull/63803) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Better exception messages for the `Regexp` format. [#63804](https://github.com/ClickHouse/ClickHouse/pull/63804) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Allow trailing commas in the `Values` format. For example, this query is allowed: `INSERT INTO test (a, b, c) VALUES (4, 5, 6,);`. [#63810](https://github.com/ClickHouse/ClickHouse/pull/63810) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* ClickHouse disks now read a server setting to obtain the actual metadata format version. [#63831](https://github.com/ClickHouse/ClickHouse/pull/63831) ([Sema Checherinda](https://github.com/CheSema)).
+* Disable pretty format restrictions (`output_format_pretty_max_rows`/`output_format_pretty_max_value_width`) when stdout is not a TTY. [#63942](https://github.com/ClickHouse/ClickHouse/pull/63942) ([Azat Khuzhin](https://github.com/azat)).
+* Exception handling now works when ClickHouse is used inside AWS Lambda. Author: [Alexey Coolnev](https://github.com/acoolnev). [#64014](https://github.com/ClickHouse/ClickHouse/pull/64014) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Throw `CANNOT_DECOMPRESS` instead of `CORRUPTED_DATA` on invalid compressed data passed via HTTP. [#64036](https://github.com/ClickHouse/ClickHouse/pull/64036) ([vdimir](https://github.com/vdimir)).
+* A tip for a single large number in Pretty formats now works for Nullable and LowCardinality. This closes [#61993](https://github.com/ClickHouse/ClickHouse/issues/61993). [#64084](https://github.com/ClickHouse/ClickHouse/pull/64084) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Now backups with azure blob storage will use multicopy.
[#64116](https://github.com/ClickHouse/ClickHouse/pull/64116) ([alesapin](https://github.com/alesapin)). +* Add metrics, logs, and thread names around parts filtering with indices. [#64130](https://github.com/ClickHouse/ClickHouse/pull/64130) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Allow using native copy for Azure even with different containers. [#64154](https://github.com/ClickHouse/ClickHouse/pull/64154) ([alesapin](https://github.com/alesapin)). +* Finally enable native copy for Azure. [#64182](https://github.com/ClickHouse/ClickHouse/pull/64182) ([alesapin](https://github.com/alesapin)). +* Ignore `allow_suspicious_primary_key` on `ATTACH` and verify on `ALTER`. [#64202](https://github.com/ClickHouse/ClickHouse/pull/64202) ([Azat Khuzhin](https://github.com/azat)). + +#### Build/Testing/Packaging Improvement +* ClickHouse is built with clang-18. A lot of new checks from clang-tidy-18 have been enabled. [#60469](https://github.com/ClickHouse/ClickHouse/pull/60469) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Re-enable the broken s390x build in CI. [#63135](https://github.com/ClickHouse/ClickHouse/pull/63135) ([Harry Lee](https://github.com/HarryLeeIBM)). +* The Dockerfile was reviewed by the Docker official library in https://github.com/docker-library/official-images/pull/15846. [#63400](https://github.com/ClickHouse/ClickHouse/pull/63400) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Information about every symbol in every translation unit will be collected in the CI database for every build in the CI. This closes [#63494](https://github.com/ClickHouse/ClickHouse/issues/63494). [#63495](https://github.com/ClickHouse/ClickHouse/pull/63495) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Experimentally support loongarch64 as a new platform for ClickHouse. [#63733](https://github.com/ClickHouse/ClickHouse/pull/63733) ([qiangxuhui](https://github.com/qiangxuhui)). +* Update the Apache Datasketches library. It resolves [#63858](https://github.com/ClickHouse/ClickHouse/issues/63858). [#63923](https://github.com/ClickHouse/ClickHouse/pull/63923) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Enable gRPC support for aarch64 Linux when cross-compiling the binary. [#64072](https://github.com/ClickHouse/ClickHouse/pull/64072) ([alesapin](https://github.com/alesapin)). + +#### Bug Fix (user-visible misbehavior in an official stable release) + +* Fix making backup when multiple shards are used. This PR fixes [#56566](https://github.com/ClickHouse/ClickHouse/issues/56566). [#57684](https://github.com/ClickHouse/ClickHouse/pull/57684) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix passing projections/indexes from the CREATE query into the inner table of a materialized view. [#59183](https://github.com/ClickHouse/ClickHouse/pull/59183) ([Azat Khuzhin](https://github.com/azat)). +* Fix incorrect merge of `boundRatio`. [#60532](https://github.com/ClickHouse/ClickHouse/pull/60532) ([Tao Wang](https://github.com/wangtZJU)). +* Fix crash when using some functions with low-cardinality columns. [#61966](https://github.com/ClickHouse/ClickHouse/pull/61966) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix queries with FINAL giving wrong results when the table does not use adaptive granularity. [#62432](https://github.com/ClickHouse/ClickHouse/pull/62432) ([Duc Canh Le](https://github.com/canhld94)). +* Improve the detection of the cgroups v2 memory controller in unusual locations. 
This fixes a warning that the cgroup memory observer was disabled because no cgroups v1 or v2 current memory file could be found. [#62903](https://github.com/ClickHouse/ClickHouse/pull/62903) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix subsequent use of external tables in client. [#62964](https://github.com/ClickHouse/ClickHouse/pull/62964) ([Azat Khuzhin](https://github.com/azat)). +* Fix crash with untuple and unresolved lambda. [#63131](https://github.com/ClickHouse/ClickHouse/pull/63131) ([Raúl Marín](https://github.com/Algunenano)). +* Fix a bug which could lead to the server accepting connections before it is actually loaded. [#63181](https://github.com/ClickHouse/ClickHouse/pull/63181) ([alesapin](https://github.com/alesapin)). +* Fix intersecting parts when restarting after a drop range. [#63202](https://github.com/ClickHouse/ClickHouse/pull/63202) ([Han Fei](https://github.com/hanfei1991)). +* Fix a misbehavior when SQL security defaults don't load for old tables during server startup. [#63209](https://github.com/ClickHouse/ClickHouse/pull/63209) ([pufit](https://github.com/pufit)). +* Fix JOIN filter push-down for filled JOIN. Closes [#63228](https://github.com/ClickHouse/ClickHouse/issues/63228). [#63234](https://github.com/ClickHouse/ClickHouse/pull/63234) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix infinite loop while listing objects in Azure blob storage. [#63257](https://github.com/ClickHouse/ClickHouse/pull/63257) ([Julia Kartseva](https://github.com/jkartseva)). +* CROSS JOIN can be executed with any value of the `join_algorithm` setting, close [#62431](https://github.com/ClickHouse/ClickHouse/issues/62431). [#63273](https://github.com/ClickHouse/ClickHouse/pull/63273) ([vdimir](https://github.com/vdimir)). +* Fixed a potential crash caused by a `no space left` error when temporary data in the cache is used. [#63346](https://github.com/ClickHouse/ClickHouse/pull/63346) ([vdimir](https://github.com/vdimir)). +* Fix a bug which could potentially lead to a rare LOGICAL_ERROR during a SELECT query with the message: `Unexpected return type from materialize. Expected type_XXX. Got type_YYY.` Introduced in [#59379](https://github.com/ClickHouse/ClickHouse/issues/59379). [#63353](https://github.com/ClickHouse/ClickHouse/pull/63353) ([alesapin](https://github.com/alesapin)). +* Fix `X-ClickHouse-Timezone` header returning wrong timezone when using `session_timezone` as query level setting. [#63377](https://github.com/ClickHouse/ClickHouse/pull/63377) ([Andrey Zvonov](https://github.com/zvonand)). +* Fix debug assert when using grouping WITH ROLLUP and LowCardinality types. [#63398](https://github.com/ClickHouse/ClickHouse/pull/63398) ([Raúl Marín](https://github.com/Algunenano)). +* Fix logical errors in queries with `GROUPING SETS` and `WHERE` and `group_by_use_nulls = true`, close [#60538](https://github.com/ClickHouse/ClickHouse/issues/60538). [#63405](https://github.com/ClickHouse/ClickHouse/pull/63405) ([vdimir](https://github.com/vdimir)). +* Fix backup of a projection part in case the projection was removed from the table metadata but the part still has the projection. [#63426](https://github.com/ClickHouse/ClickHouse/pull/63426) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix 'Every derived table must have its own alias' error for the MySQL dictionary source, close [#63341](https://github.com/ClickHouse/ClickHouse/issues/63341). [#63481](https://github.com/ClickHouse/ClickHouse/pull/63481) ([vdimir](https://github.com/vdimir)). +* Insert QueryFinish on AsyncInsertFlush with no data. 
[#63483](https://github.com/ClickHouse/ClickHouse/pull/63483) ([Raúl Marín](https://github.com/Algunenano)). +* Fix `system.query_log.used_dictionaries` logging. [#63487](https://github.com/ClickHouse/ClickHouse/pull/63487) ([Eduard Karacharov](https://github.com/korowa)). +* Avoid segfault in `MergeTreePrefetchedReadPool` while fetching projection parts. [#63513](https://github.com/ClickHouse/ClickHouse/pull/63513) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix RabbitMQ heap-use-after-free found by clang-18, which can happen if an error is thrown from RabbitMQ during initialization of exchange and queues. [#63515](https://github.com/ClickHouse/ClickHouse/pull/63515) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix crash on exit with sentry enabled (due to openssl destroyed before sentry). [#63548](https://github.com/ClickHouse/ClickHouse/pull/63548) ([Azat Khuzhin](https://github.com/azat)). +* Fix support for Array and Map with Keyed hashing functions and materialized keys. [#63628](https://github.com/ClickHouse/ClickHouse/pull/63628) ([Salvatore Mesoraca](https://github.com/aiven-sal)). +* Fixed Parquet filter pushdown not working with Analyzer. [#63642](https://github.com/ClickHouse/ClickHouse/pull/63642) ([Michael Kolupaev](https://github.com/al13n321)). +* It is forbidden to convert MergeTree to replicated if the ZooKeeper path for this table already exists. [#63670](https://github.com/ClickHouse/ClickHouse/pull/63670) ([Kirill](https://github.com/kirillgarbar)). +* Read only the necessary columns from VIEW (new analyzer). Closes [#62594](https://github.com/ClickHouse/ClickHouse/issues/62594). [#63688](https://github.com/ClickHouse/ClickHouse/pull/63688) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix a rare case with missing data in the result of a distributed query. [#63691](https://github.com/ClickHouse/ClickHouse/pull/63691) ([vdimir](https://github.com/vdimir)). +* Fix [#63539](https://github.com/ClickHouse/ClickHouse/issues/63539). Forbid WINDOW redefinition in the new analyzer. [#63694](https://github.com/ClickHouse/ClickHouse/pull/63694) ([Dmitry Novik](https://github.com/novikd)). +* Fix `flatten_nested` being broken with the Replicated database. [#63695](https://github.com/ClickHouse/ClickHouse/pull/63695) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix `SIZES_OF_COLUMNS_DOESNT_MATCH` error for queries with `arrayJoin` function in `WHERE`. Fixes [#63653](https://github.com/ClickHouse/ClickHouse/issues/63653). [#63722](https://github.com/ClickHouse/ClickHouse/pull/63722) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix `Not found column` and `CAST AS Map from array requires nested tuple of 2 elements` exceptions for distributed queries which use `Map(Nothing, Nothing)` type. Fixes [#63637](https://github.com/ClickHouse/ClickHouse/issues/63637). [#63753](https://github.com/ClickHouse/ClickHouse/pull/63753) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix possible `ILLEGAL_COLUMN` error in `partial_merge` join, close [#37928](https://github.com/ClickHouse/ClickHouse/issues/37928). [#63755](https://github.com/ClickHouse/ClickHouse/pull/63755) ([vdimir](https://github.com/vdimir)). +* Fix `query_plan_remove_redundant_distinct` breaking queries with window functions (when `allow_experimental_analyzer` is on). Fixes [#62820](https://github.com/ClickHouse/ClickHouse/issues/62820). [#63776](https://github.com/ClickHouse/ClickHouse/pull/63776) ([Igor Nikonov](https://github.com/devcrafter)). 
+* Fix possible crash with SYSTEM UNLOAD PRIMARY KEY. [#63778](https://github.com/ClickHouse/ClickHouse/pull/63778) ([Raúl Marín](https://github.com/Algunenano)). +* Fix a query with a duplicating cyclic alias. Fixes [#63320](https://github.com/ClickHouse/ClickHouse/issues/63320). [#63791](https://github.com/ClickHouse/ClickHouse/pull/63791) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fixed performance degradation of parsing data formats in INSERT query. This closes [#62918](https://github.com/ClickHouse/ClickHouse/issues/62918). This partially reverts [#42284](https://github.com/ClickHouse/ClickHouse/issues/42284), which breaks the original design and introduces more problems. [#63801](https://github.com/ClickHouse/ClickHouse/pull/63801) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Add 'endpoint_subpath' S3 URI setting to allow plain_rewritable disks to share the same endpoint. [#63806](https://github.com/ClickHouse/ClickHouse/pull/63806) ([Julia Kartseva](https://github.com/jkartseva)). +* Fix queries using parallel read buffer (e.g. with max_download_thread > 0) getting stuck when threads cannot be allocated. [#63814](https://github.com/ClickHouse/ClickHouse/pull/63814) ([Antonio Andelic](https://github.com/antonio2368)). +* Allow JOIN filter push-down to both streams if only a single equivalent column is used in the query. Closes [#63799](https://github.com/ClickHouse/ClickHouse/issues/63799). [#63819](https://github.com/ClickHouse/ClickHouse/pull/63819) ([Maksim Kita](https://github.com/kitaisreal)). +* Remove the data from all disks after DROP with the Lazy database engines. Without these changes, orphaned data would remain on the disks. [#63848](https://github.com/ClickHouse/ClickHouse/pull/63848) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* Fix incorrect select query result when parallel replicas were used to read from a Materialized View. [#63861](https://github.com/ClickHouse/ClickHouse/pull/63861) ([Nikita Taranov](https://github.com/nickitat)). +* Fixes in the `find_super_nodes` and `find_big_family` commands of keeper-client: do not fail on ZNONODE errors, find super nodes inside super nodes, and properly calculate the subtree node count. [#63862](https://github.com/ClickHouse/ClickHouse/pull/63862) ([Alexander Gololobov](https://github.com/davenger)). +* Fix an error `Database name is empty` for remote queries with lambdas over the cluster with a modified default database. Fixes [#63471](https://github.com/ClickHouse/ClickHouse/issues/63471). [#63864](https://github.com/ClickHouse/ClickHouse/pull/63864) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix SIGSEGV due to the CPU/real-time (`query_profiler_real_time_period_ns`/`query_profiler_cpu_time_period_ns`) profiler (an issue since 2022 that led to periodic server crashes, especially if you were using the Distributed engine). [#63865](https://github.com/ClickHouse/ClickHouse/pull/63865) ([Azat Khuzhin](https://github.com/azat)). +* Fixed `EXPLAIN CURRENT TRANSACTION` query. [#63926](https://github.com/ClickHouse/ClickHouse/pull/63926) ([Anton Popov](https://github.com/CurtizJ)). +* Fix analyzer: make the IN function with arbitrarily deep sub-selects in a materialized view use the insertion block. [#63930](https://github.com/ClickHouse/ClickHouse/pull/63930) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Allow `ALTER TABLE .. MODIFY|RESET SETTING` and `ALTER TABLE .. MODIFY COMMENT` for plain_rewritable disk. 
[#63933](https://github.com/ClickHouse/ClickHouse/pull/63933) ([Julia Kartseva](https://github.com/jkartseva)). +* Fix Recursive CTE with distributed queries. Closes [#63790](https://github.com/ClickHouse/ClickHouse/issues/63790). [#63939](https://github.com/ClickHouse/ClickHouse/pull/63939) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix resolution of the unqualified COLUMNS matcher. Preserve the input column order and forbid the usage of unknown identifiers. [#63962](https://github.com/ClickHouse/ClickHouse/pull/63962) ([Dmitry Novik](https://github.com/novikd)). +* Fix the `Not found column` error for queries with `skip_unused_shards = 1`, `LIMIT BY`, and the new analyzer. Fixes [#63943](https://github.com/ClickHouse/ClickHouse/issues/63943). [#63983](https://github.com/ClickHouse/ClickHouse/pull/63983) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix a client abort when using the KQL table function in interactive mode (low-quality third-party Kusto Query Language support). [#63992](https://github.com/ClickHouse/ClickHouse/pull/63992) ([Yong Wang](https://github.com/kashwy)). +* Backported in [#64356](https://github.com/ClickHouse/ClickHouse/issues/64356): Fix a `Cyclic aliases` error for cyclic aliases of different types (expression and function). Fixes [#63205](https://github.com/ClickHouse/ClickHouse/issues/63205). [#63993](https://github.com/ClickHouse/ClickHouse/pull/63993) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Deserialize untrusted binary inputs in a safer way. [#64024](https://github.com/ClickHouse/ClickHouse/pull/64024) ([Robert Schulze](https://github.com/rschu1ze)). +* Do not throw the `Storage doesn't support FINAL` error for remote queries over non-MergeTree tables with `final = true` and the new analyzer. Fixes [#63960](https://github.com/ClickHouse/ClickHouse/issues/63960). [#64037](https://github.com/ClickHouse/ClickHouse/pull/64037) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Add missing settings to recoverLostReplica. [#64040](https://github.com/ClickHouse/ClickHouse/pull/64040) ([Raúl Marín](https://github.com/Algunenano)). +* Fix unwind on SIGSEGV on aarch64 (due to a small stack for the signal). [#64058](https://github.com/ClickHouse/ClickHouse/pull/64058) ([Azat Khuzhin](https://github.com/azat)). +* Backported in [#64324](https://github.com/ClickHouse/ClickHouse/issues/64324): This fix uses a properly redefined context with the correct definer for each individual view in the query pipeline. Closes [#63777](https://github.com/ClickHouse/ClickHouse/issues/63777). [#64079](https://github.com/ClickHouse/ClickHouse/pull/64079) ([pufit](https://github.com/pufit)). +* Backported in [#64384](https://github.com/ClickHouse/ClickHouse/issues/64384): Fix analyzer: the "Not found column" error when using INTERPOLATE. [#64096](https://github.com/ClickHouse/ClickHouse/pull/64096) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Fix Azure backup writing multipart blocks of 1 MB (the read buffer size) instead of `max_upload_part_size`. [#64117](https://github.com/ClickHouse/ClickHouse/pull/64117) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Backported in [#64541](https://github.com/ClickHouse/ClickHouse/issues/64541): Fix creating backups to S3 buckets with different credentials from the disk containing the file. [#64153](https://github.com/ClickHouse/ClickHouse/pull/64153) ([Antonio Andelic](https://github.com/antonio2368)). +* Prevent LOGICAL_ERROR on CREATE TABLE as MaterializedView. 
[#64174](https://github.com/ClickHouse/ClickHouse/pull/64174) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#64332](https://github.com/ClickHouse/ClickHouse/issues/64332): The query cache now considers two identical queries against different databases as different. The previous behavior could be used to bypass missing privileges to read from a table. [#64199](https://github.com/ClickHouse/ClickHouse/pull/64199) ([Robert Schulze](https://github.com/rschu1ze)). +* Ignore `text_log` config when using Keeper. [#64218](https://github.com/ClickHouse/ClickHouse/pull/64218) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#64692](https://github.com/ClickHouse/ClickHouse/issues/64692): Fix Query Tree size validation. Closes [#63701](https://github.com/ClickHouse/ClickHouse/issues/63701). [#64377](https://github.com/ClickHouse/ClickHouse/pull/64377) ([Dmitry Novik](https://github.com/novikd)). +* Backported in [#64411](https://github.com/ClickHouse/ClickHouse/issues/64411): Fix `Logical error: Bad cast` for `Buffer` table with `PREWHERE`. Fixes [#64172](https://github.com/ClickHouse/ClickHouse/issues/64172). [#64388](https://github.com/ClickHouse/ClickHouse/pull/64388) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Backported in [#64625](https://github.com/ClickHouse/ClickHouse/issues/64625): Fix an error `Cannot find column` in distributed queries with constant CTE in the `GROUP BY` key. [#64519](https://github.com/ClickHouse/ClickHouse/pull/64519) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Backported in [#64682](https://github.com/ClickHouse/ClickHouse/issues/64682): Fix [#64612](https://github.com/ClickHouse/ClickHouse/issues/64612). Do not rewrite aggregation if the `-If` combinator is already used. [#64638](https://github.com/ClickHouse/ClickHouse/pull/64638) ([Dmitry Novik](https://github.com/novikd)). + +#### CI Fix or Improvement (changelog entry is not required) + +* Implement cumulative A Sync status. [#61464](https://github.com/ClickHouse/ClickHouse/pull/61464) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Add ability to run Azure tests in PR with label. [#63196](https://github.com/ClickHouse/ClickHouse/pull/63196) ([alesapin](https://github.com/alesapin)). +* Add Azure run with MSan. [#63238](https://github.com/ClickHouse/ClickHouse/pull/63238) ([alesapin](https://github.com/alesapin)). +* Improve cloud backport script. [#63282](https://github.com/ClickHouse/ClickHouse/pull/63282) ([Raúl Marín](https://github.com/Algunenano)). +* Use `/commit/` to have the URLs in [reports](https://play.clickhouse.com/play?user=play#c2VsZWN0IGRpc3RpbmN0IGNvbW1pdF91cmwgZnJvbSBjaGVja3Mgd2hlcmUgY2hlY2tfc3RhcnRfdGltZSA+PSBub3coKSAtIGludGVydmFsIDEgbW9udGggYW5kIHB1bGxfcmVxdWVzdF9udW1iZXI9NjA1MzI=) like https://github.com/ClickHouse/ClickHouse/commit/44f8bc5308b53797bec8cccc3bd29fab8a00235d and not like https://github.com/ClickHouse/ClickHouse/commits/44f8bc5308b53797bec8cccc3bd29fab8a00235d. [#63331](https://github.com/ClickHouse/ClickHouse/pull/63331) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Extra constraints for stress and fuzzer tests. [#63470](https://github.com/ClickHouse/ClickHouse/pull/63470) ([Raúl Marín](https://github.com/Algunenano)). +* Fix flaky test 02362_part_log_merge_algorithm. [#63635](https://github.com/ClickHouse/ClickHouse/pull/63635) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)). +* Fix test_odbc_interaction on aarch64 [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). 
[#63787](https://github.com/ClickHouse/ClickHouse/pull/63787) ([alesapin](https://github.com/alesapin)). +* Fix test `test_catboost_evaluate` for aarch64. [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). [#63789](https://github.com/ClickHouse/ClickHouse/pull/63789) ([alesapin](https://github.com/alesapin)). +* Remove HDFS from disks config for one integration test for arm. [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). [#63832](https://github.com/ClickHouse/ClickHouse/pull/63832) ([alesapin](https://github.com/alesapin)). +* Bump version for old image in test_short_strings_aggregation to make it work on arm. [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). [#63836](https://github.com/ClickHouse/ClickHouse/pull/63836) ([alesapin](https://github.com/alesapin)). +* Disable test `test_non_default_compression/test.py::test_preconfigured_deflateqpl_codec` on arm. [#61457](https://github.com/ClickHouse/ClickHouse/issues/61457). [#63839](https://github.com/ClickHouse/ClickHouse/pull/63839) ([alesapin](https://github.com/alesapin)). +* Include checks like `Stateless tests (asan, distributed cache, meta storage in keeper, s3 storage) [2/3]` in `Mergeable Check` and `A Sync`. [#63945](https://github.com/ClickHouse/ClickHouse/pull/63945) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Fix 02124_insert_deduplication_token_multiple_blocks. [#63950](https://github.com/ClickHouse/ClickHouse/pull/63950) ([Han Fei](https://github.com/hanfei1991)). +* Add `ClickHouseVersion.copy` method. Create a release branch in advance without spinning out the release, to increase stability. [#64039](https://github.com/ClickHouse/ClickHouse/pull/64039) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* The MIME type is not 100% reliable for Python and shell scripts without shebangs; add a check for the file extension. [#64062](https://github.com/ClickHouse/ClickHouse/pull/64062) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Add retries in git submodule update. [#64125](https://github.com/ClickHouse/ClickHouse/pull/64125) ([Alexey Milovidov](https://github.com/alexey-milovidov)). + +#### Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC) + +* Backported in [#64591](https://github.com/ClickHouse/ClickHouse/issues/64591): Disabled the `enable_vertical_final` setting by default. This feature should not be used because it has a bug: [#64543](https://github.com/ClickHouse/ClickHouse/issues/64543). [#64544](https://github.com/ClickHouse/ClickHouse/pull/64544) ([Alexander Tokmakov](https://github.com/tavplubix)). + +#### NO CL ENTRY + +* NO CL ENTRY: 'Revert "Do not remove server constants from GROUP BY key for secondary query."'. [#63297](https://github.com/ClickHouse/ClickHouse/pull/63297) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* NO CL ENTRY: 'Revert "Introduce bulk loading to StorageEmbeddedRocksDB"'. [#63316](https://github.com/ClickHouse/ClickHouse/pull/63316) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* NO CL ENTRY: 'Add tags for the test 03000_traverse_shadow_system_data_paths.sql to make it stable'. [#63366](https://github.com/ClickHouse/ClickHouse/pull/63366) ([Aleksei Filatov](https://github.com/aalexfvk)). +* NO CL ENTRY: 'Revert "Revert "Do not remove server constants from GROUP BY key for secondary query.""'. [#63415](https://github.com/ClickHouse/ClickHouse/pull/63415) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* NO CL ENTRY: 'Revert "Fix index analysis for `DateTime64`"'. 
[#63525](https://github.com/ClickHouse/ClickHouse/pull/63525) ([Raúl Marín](https://github.com/Algunenano)). +* NO CL ENTRY: 'Add `jwcrypto` to integration tests runner'. [#63551](https://github.com/ClickHouse/ClickHouse/pull/63551) ([Konstantin Bogdanov](https://github.com/thevar1able)). +* NO CL ENTRY: 'Follow-up for the `binary_symbols` table in CI'. [#63802](https://github.com/ClickHouse/ClickHouse/pull/63802) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* NO CL ENTRY: 'chore(ci-workers): remove reusable from tailscale key'. [#63999](https://github.com/ClickHouse/ClickHouse/pull/63999) ([Gabriel Martinez](https://github.com/GMartinez-Sisti)). +* NO CL ENTRY: 'Revert "Update gui.md - Add ch-ui to open-source available tools."'. [#64064](https://github.com/ClickHouse/ClickHouse/pull/64064) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* NO CL ENTRY: 'Prevent stack overflow in Fuzzer and Stress test'. [#64082](https://github.com/ClickHouse/ClickHouse/pull/64082) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* NO CL ENTRY: 'Revert "Prevent conversion to Replicated if zookeeper path already exists"'. [#64214](https://github.com/ClickHouse/ClickHouse/pull/64214) ([Sergei Trifonov](https://github.com/serxa)). + +#### NOT FOR CHANGELOG / INSIGNIFICANT + +* Remove http_max_chunk_size setting (too internal) [#60852](https://github.com/ClickHouse/ClickHouse/pull/60852) ([Azat Khuzhin](https://github.com/azat)). +* Fix race in refreshable materialized views causing SELECT to fail sometimes [#60883](https://github.com/ClickHouse/ClickHouse/pull/60883) ([Michael Kolupaev](https://github.com/al13n321)). +* Parallel replicas: table check failover [#61935](https://github.com/ClickHouse/ClickHouse/pull/61935) ([Igor Nikonov](https://github.com/devcrafter)). +* Avoid crashing on column type mismatch in a few dozen places [#62087](https://github.com/ClickHouse/ClickHouse/pull/62087) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix optimize_if_chain_to_multiif const NULL handling [#62104](https://github.com/ClickHouse/ClickHouse/pull/62104) ([Michael Kolupaev](https://github.com/al13n321)). +* Use intrusive lists for `ResourceRequest` instead of deque [#62165](https://github.com/ClickHouse/ClickHouse/pull/62165) ([Sergei Trifonov](https://github.com/serxa)). +* Analyzer: Fix validateAggregates for tables with different aliases [#62346](https://github.com/ClickHouse/ClickHouse/pull/62346) ([vdimir](https://github.com/vdimir)). +* Improve code and tests of `DROP` of multiple tables [#62359](https://github.com/ClickHouse/ClickHouse/pull/62359) ([zhongyuankai](https://github.com/zhongyuankai)). +* Fix exception message during writing to partitioned s3/hdfs/azure path with globs [#62423](https://github.com/ClickHouse/ClickHouse/pull/62423) ([Kruglov Pavel](https://github.com/Avogar)). +* Support UBSan on Clang-19 (master) [#62466](https://github.com/ClickHouse/ClickHouse/pull/62466) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Save the stacktrace of thread waiting on failing AsyncLoader job [#62719](https://github.com/ClickHouse/ClickHouse/pull/62719) ([Sergei Trifonov](https://github.com/serxa)). +* group_by_use_nulls strikes back [#62922](https://github.com/ClickHouse/ClickHouse/pull/62922) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Analyzer: prefer column name to alias from array join [#62995](https://github.com/ClickHouse/ClickHouse/pull/62995) ([vdimir](https://github.com/vdimir)). 
+* CI: try separate the workflows file for GitHub's Merge Queue [#63123](https://github.com/ClickHouse/ClickHouse/pull/63123) ([Max K.](https://github.com/maxknv)). +* Try to fix coverage tests [#63130](https://github.com/ClickHouse/ClickHouse/pull/63130) ([Raúl Marín](https://github.com/Algunenano)). +* Fix azure backup flaky test [#63158](https://github.com/ClickHouse/ClickHouse/pull/63158) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Merging [#60920](https://github.com/ClickHouse/ClickHouse/issues/60920) [#63159](https://github.com/ClickHouse/ClickHouse/pull/63159) ([vdimir](https://github.com/vdimir)). +* QueryAnalysisPass improve QUALIFY validation [#63162](https://github.com/ClickHouse/ClickHouse/pull/63162) ([Maksim Kita](https://github.com/kitaisreal)). +* Add numpy tests for different endianness [#63189](https://github.com/ClickHouse/ClickHouse/pull/63189) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). +* Fallback action-runner to autoupdate when it's unable to start [#63195](https://github.com/ClickHouse/ClickHouse/pull/63195) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Fix possible endless loop while reading from azure [#63197](https://github.com/ClickHouse/ClickHouse/pull/63197) ([Anton Popov](https://github.com/CurtizJ)). +* Add information about materialized view security bug fix into the changelog [#63204](https://github.com/ClickHouse/ClickHouse/pull/63204) ([pufit](https://github.com/pufit)). +* Disable one query from 02994_sanity_check_settings [#63208](https://github.com/ClickHouse/ClickHouse/pull/63208) ([Raúl Marín](https://github.com/Algunenano)). +* Enable custom parquet encoder by default, attempt 2 [#63210](https://github.com/ClickHouse/ClickHouse/pull/63210) ([Michael Kolupaev](https://github.com/al13n321)). +* Update version after release [#63215](https://github.com/ClickHouse/ClickHouse/pull/63215) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Update version_date.tsv and changelogs after v24.4.1.2088-stable [#63217](https://github.com/ClickHouse/ClickHouse/pull/63217) ([robot-clickhouse](https://github.com/robot-clickhouse)). +* Update version_date.tsv and changelogs after v24.3.3.102-lts [#63226](https://github.com/ClickHouse/ClickHouse/pull/63226) ([robot-clickhouse](https://github.com/robot-clickhouse)). +* Update version_date.tsv and changelogs after v24.2.3.70-stable [#63227](https://github.com/ClickHouse/ClickHouse/pull/63227) ([robot-clickhouse](https://github.com/robot-clickhouse)). +* Return back [#61551](https://github.com/ClickHouse/ClickHouse/issues/61551) (More optimal loading of marks) [#63233](https://github.com/ClickHouse/ClickHouse/pull/63233) ([Anton Popov](https://github.com/CurtizJ)). +* Hide CI options under a spoiler [#63237](https://github.com/ClickHouse/ClickHouse/pull/63237) ([Konstantin Bogdanov](https://github.com/thevar1able)). +* Add `FROM` keyword to `TRUNCATE ALL TABLES` [#63241](https://github.com/ClickHouse/ClickHouse/pull/63241) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). +* Minor follow-up to a renaming PR [#63260](https://github.com/ClickHouse/ClickHouse/pull/63260) ([Robert Schulze](https://github.com/rschu1ze)). +* More checks for concurrently deleted files and dirs in system.remote_data_paths [#63274](https://github.com/ClickHouse/ClickHouse/pull/63274) ([Alexander Gololobov](https://github.com/davenger)). 
+* Fix SettingsChangesHistory.h for allow_experimental_join_condition [#63278](https://github.com/ClickHouse/ClickHouse/pull/63278) ([Raúl Marín](https://github.com/Algunenano)). +* Update version_date.tsv and changelogs after v23.8.14.6-lts [#63285](https://github.com/ClickHouse/ClickHouse/pull/63285) ([robot-clickhouse](https://github.com/robot-clickhouse)). +* Fix azure flaky test [#63286](https://github.com/ClickHouse/ClickHouse/pull/63286) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Fix deadlock in `CacheDictionaryUpdateQueue` in case of exception in constructor [#63287](https://github.com/ClickHouse/ClickHouse/pull/63287) ([Nikita Taranov](https://github.com/nickitat)). +* DiskApp: fix 'list --recursive /' and crash on invalid arguments [#63296](https://github.com/ClickHouse/ClickHouse/pull/63296) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix terminate because of unhandled exception in `MergeTreeDeduplicationLog::shutdown` [#63298](https://github.com/ClickHouse/ClickHouse/pull/63298) ([Nikita Taranov](https://github.com/nickitat)). +* Move s3_plain_rewritable unit test to shell [#63317](https://github.com/ClickHouse/ClickHouse/pull/63317) ([Julia Kartseva](https://github.com/jkartseva)). +* Add tests for [#63264](https://github.com/ClickHouse/ClickHouse/issues/63264) [#63321](https://github.com/ClickHouse/ClickHouse/pull/63321) ([Raúl Marín](https://github.com/Algunenano)). +* Try fix segfault in `MergeTreeReadPoolBase::createTask` [#63323](https://github.com/ClickHouse/ClickHouse/pull/63323) ([Antonio Andelic](https://github.com/antonio2368)). +* Update README.md [#63326](https://github.com/ClickHouse/ClickHouse/pull/63326) ([Tyler Hannan](https://github.com/tylerhannan)). +* Skip unaccessible table dirs in system.remote_data_paths [#63330](https://github.com/ClickHouse/ClickHouse/pull/63330) ([Alexander Gololobov](https://github.com/davenger)). +* Add test for [#56287](https://github.com/ClickHouse/ClickHouse/issues/56287) [#63340](https://github.com/ClickHouse/ClickHouse/pull/63340) ([Raúl Marín](https://github.com/Algunenano)). +* Update README.md [#63350](https://github.com/ClickHouse/ClickHouse/pull/63350) ([Tyler Hannan](https://github.com/tylerhannan)). +* Add test for [#48049](https://github.com/ClickHouse/ClickHouse/issues/48049) [#63351](https://github.com/ClickHouse/ClickHouse/pull/63351) ([Raúl Marín](https://github.com/Algunenano)). +* Add option `query_id_prefix` to `clickhouse-benchmark` [#63352](https://github.com/ClickHouse/ClickHouse/pull/63352) ([Anton Popov](https://github.com/CurtizJ)). +* Rollback azurite to working version [#63354](https://github.com/ClickHouse/ClickHouse/pull/63354) ([alesapin](https://github.com/alesapin)). +* Randomize setting `enable_block_offset_column` in stress tests [#63355](https://github.com/ClickHouse/ClickHouse/pull/63355) ([Anton Popov](https://github.com/CurtizJ)). +* Fix AST parsing of invalid type names [#63357](https://github.com/ClickHouse/ClickHouse/pull/63357) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix some 00002_log_and_exception_messages_formatting flakiness [#63358](https://github.com/ClickHouse/ClickHouse/pull/63358) ([Michael Kolupaev](https://github.com/al13n321)). +* Add a test for [#55655](https://github.com/ClickHouse/ClickHouse/issues/55655) [#63380](https://github.com/ClickHouse/ClickHouse/pull/63380) ([Alexey Milovidov](https://github.com/alexey-milovidov)). 
+* Fix data race in `reportBrokenPart` [#63396](https://github.com/ClickHouse/ClickHouse/pull/63396) ([Antonio Andelic](https://github.com/antonio2368)). +* Workaround for `oklch()` inside canvas bug for Firefox [#63404](https://github.com/ClickHouse/ClickHouse/pull/63404) ([Sergei Trifonov](https://github.com/serxa)). +* Add test for issue [#47862](https://github.com/ClickHouse/ClickHouse/issues/47862) [#63424](https://github.com/ClickHouse/ClickHouse/pull/63424) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix parsing of `CREATE INDEX` query [#63425](https://github.com/ClickHouse/ClickHouse/pull/63425) ([Anton Popov](https://github.com/CurtizJ)). +* We are using Shared Catalog in the CI Logs cluster [#63442](https://github.com/ClickHouse/ClickHouse/pull/63442) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix collection of coverage data in the CI Logs cluster [#63453](https://github.com/ClickHouse/ClickHouse/pull/63453) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix flaky test for rocksdb bulk sink [#63457](https://github.com/ClickHouse/ClickHouse/pull/63457) ([Duc Canh Le](https://github.com/canhld94)). +* io_uring: refactor get reader from context [#63475](https://github.com/ClickHouse/ClickHouse/pull/63475) ([Tomer Shafir](https://github.com/tomershafir)). +* Analyzer setting max_streams_to_max_threads_ratio overflow fix [#63478](https://github.com/ClickHouse/ClickHouse/pull/63478) ([Maksim Kita](https://github.com/kitaisreal)). +* Add setting for better rendering of multiline string for pretty format [#63479](https://github.com/ClickHouse/ClickHouse/pull/63479) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). +* Fix logical error when reloading config with a custom-created web disk broken after [#56367](https://github.com/ClickHouse/ClickHouse/issues/56367) [#63484](https://github.com/ClickHouse/ClickHouse/pull/63484) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Add test for [#49307](https://github.com/ClickHouse/ClickHouse/issues/49307) [#63486](https://github.com/ClickHouse/ClickHouse/pull/63486) ([Anton Popov](https://github.com/CurtizJ)). +* Remove leftovers of GCC support in cmake rules [#63488](https://github.com/ClickHouse/ClickHouse/pull/63488) ([Azat Khuzhin](https://github.com/azat)). +* Fix ProfileEventTimeIncrement code [#63489](https://github.com/ClickHouse/ClickHouse/pull/63489) ([Azat Khuzhin](https://github.com/azat)). +* MergeTreePrefetchedReadPool: Print parent name when logging projection parts [#63522](https://github.com/ClickHouse/ClickHouse/pull/63522) ([Raúl Marín](https://github.com/Algunenano)). +* Correctly stop `asyncCopy` tasks in all cases [#63523](https://github.com/ClickHouse/ClickHouse/pull/63523) ([Antonio Andelic](https://github.com/antonio2368)). +* Almost everything should work on AArch64 (Part of [#58061](https://github.com/ClickHouse/ClickHouse/issues/58061)) [#63527](https://github.com/ClickHouse/ClickHouse/pull/63527) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Update randomization of `old_parts_lifetime` [#63530](https://github.com/ClickHouse/ClickHouse/pull/63530) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Update 02240_system_filesystem_cache_table.sh [#63531](https://github.com/ClickHouse/ClickHouse/pull/63531) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix data race in `DistributedSink` [#63538](https://github.com/ClickHouse/ClickHouse/pull/63538) ([Antonio Andelic](https://github.com/antonio2368)). 
+* Fix azure tests run on master [#63540](https://github.com/ClickHouse/ClickHouse/pull/63540) ([alesapin](https://github.com/alesapin)). +* Find a proper commit for cumulative `A Sync` status [#63543](https://github.com/ClickHouse/ClickHouse/pull/63543) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Add `no-s3-storage` tag to local_plain_rewritable ut [#63546](https://github.com/ClickHouse/ClickHouse/pull/63546) ([Julia Kartseva](https://github.com/jkartseva)). +* Go back to upstream lz4 submodule [#63574](https://github.com/ClickHouse/ClickHouse/pull/63574) ([Raúl Marín](https://github.com/Algunenano)). +* Fix logical error in ColumnTuple::tryInsert() [#63583](https://github.com/ClickHouse/ClickHouse/pull/63583) ([Michael Kolupaev](https://github.com/al13n321)). +* harmonize sumMap error messages on ILLEGAL_TYPE_OF_ARGUMENT [#63619](https://github.com/ClickHouse/ClickHouse/pull/63619) ([Yohann Jardin](https://github.com/yohannj)). +* Update README.md [#63631](https://github.com/ClickHouse/ClickHouse/pull/63631) ([Tyler Hannan](https://github.com/tylerhannan)). +* Ignore global profiler if system.trace_log is not enabled, and really disable it for the keeper standalone build [#63632](https://github.com/ClickHouse/ClickHouse/pull/63632) ([Azat Khuzhin](https://github.com/azat)). +* Fixes for 00002_log_and_exception_messages_formatting [#63634](https://github.com/ClickHouse/ClickHouse/pull/63634) ([Azat Khuzhin](https://github.com/azat)). +* Fix tests flakiness due to long SYSTEM FLUSH LOGS (explicitly specify old_parts_lifetime) [#63639](https://github.com/ClickHouse/ClickHouse/pull/63639) ([Azat Khuzhin](https://github.com/azat)). +* Update clickhouse-test help section [#63663](https://github.com/ClickHouse/ClickHouse/pull/63663) ([Ali](https://github.com/xogoodnow)). +* Fix bad test `02950_part_log_bytes_uncompressed` [#63672](https://github.com/ClickHouse/ClickHouse/pull/63672) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Remove leftovers of `optimize_monotonous_functions_in_order_by` [#63674](https://github.com/ClickHouse/ClickHouse/pull/63674) ([Nikita Taranov](https://github.com/nickitat)). +* tests: attempt to fix 02340_parts_refcnt_mergetree flakiness [#63684](https://github.com/ClickHouse/ClickHouse/pull/63684) ([Azat Khuzhin](https://github.com/azat)). +* Parallel replicas: simple cleanup [#63685](https://github.com/ClickHouse/ClickHouse/pull/63685) ([Igor Nikonov](https://github.com/devcrafter)). +* Cancel S3 reads properly when parallel reads are used [#63687](https://github.com/ClickHouse/ClickHouse/pull/63687) ([Antonio Andelic](https://github.com/antonio2368)). +* Explain map insertion order [#63690](https://github.com/ClickHouse/ClickHouse/pull/63690) ([Mark Needham](https://github.com/mneedham)). +* selectRangesToRead() simple cleanup [#63692](https://github.com/ClickHouse/ClickHouse/pull/63692) ([Igor Nikonov](https://github.com/devcrafter)). +* Fix fuzzed analyzer_join_with_constant query [#63702](https://github.com/ClickHouse/ClickHouse/pull/63702) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Add missing explicit instantiations of ColumnUnique [#63718](https://github.com/ClickHouse/ClickHouse/pull/63718) ([Raúl Marín](https://github.com/Algunenano)). +* Better asserts in ColumnString.h [#63719](https://github.com/ClickHouse/ClickHouse/pull/63719) ([Raúl Marín](https://github.com/Algunenano)). 
+* Don't randomize some settings in 02941_variant_type_* tests to avoid timeouts [#63721](https://github.com/ClickHouse/ClickHouse/pull/63721) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix flaky 03145_non_loaded_projection_backup.sh [#63728](https://github.com/ClickHouse/ClickHouse/pull/63728) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Userspace page cache: don't collect stats if cache is unused [#63730](https://github.com/ClickHouse/ClickHouse/pull/63730) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix insignificant UBSAN error in QueryAnalyzer::replaceNodesWithPositionalArguments() [#63734](https://github.com/ClickHouse/ClickHouse/pull/63734) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix a bug in resolving matcher inside lambda inside ARRAY JOIN [#63744](https://github.com/ClickHouse/ClickHouse/pull/63744) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Remove unused CaresPTRResolver::cancel_requests method [#63754](https://github.com/ClickHouse/ClickHouse/pull/63754) ([Arthur Passos](https://github.com/arthurpassos)). +* Do not hide disk name [#63756](https://github.com/ClickHouse/ClickHouse/pull/63756) ([Kseniia Sumarokova](https://github.com/kssenii)). +* CI: remove Cancel and Debug workflows as redundant [#63757](https://github.com/ClickHouse/ClickHouse/pull/63757) ([Max K.](https://github.com/maxknv)). +* Security Policy: Add notification process [#63773](https://github.com/ClickHouse/ClickHouse/pull/63773) ([Leticia Webb](https://github.com/leticiawebb)). +* Fix typo [#63774](https://github.com/ClickHouse/ClickHouse/pull/63774) ([Anton Popov](https://github.com/CurtizJ)). +* Fix fuzzer when only explicit faults are used [#63775](https://github.com/ClickHouse/ClickHouse/pull/63775) ([Raúl Marín](https://github.com/Algunenano)). +* Settings typo [#63782](https://github.com/ClickHouse/ClickHouse/pull/63782) ([Rory Crispin](https://github.com/RoryCrispin)). +* Changed the previous value of `output_format_pretty_preserve_border_for_multiline_string` setting [#63783](https://github.com/ClickHouse/ClickHouse/pull/63783) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). +* fix antlr insertStmt for issue 63657 [#63811](https://github.com/ClickHouse/ClickHouse/pull/63811) ([GG Bond](https://github.com/zzyReal666)). +* Fix race in `ReplicatedMergeTreeLogEntryData` [#63816](https://github.com/ClickHouse/ClickHouse/pull/63816) ([Antonio Andelic](https://github.com/antonio2368)). +* Allow allocation during job destructor in `ThreadPool` [#63829](https://github.com/ClickHouse/ClickHouse/pull/63829) ([Antonio Andelic](https://github.com/antonio2368)). +* io_uring: add basic io_uring clickhouse perf test [#63835](https://github.com/ClickHouse/ClickHouse/pull/63835) ([Tomer Shafir](https://github.com/tomershafir)). +* fix typo [#63838](https://github.com/ClickHouse/ClickHouse/pull/63838) ([Alexander Gololobov](https://github.com/davenger)). +* Remove unnecessary logging statements in MergeJoinTransform.cpp [#63860](https://github.com/ClickHouse/ClickHouse/pull/63860) ([vdimir](https://github.com/vdimir)). +* CI: disable ARM integration test cases with libunwind crash [#63867](https://github.com/ClickHouse/ClickHouse/pull/63867) ([Max K.](https://github.com/maxknv)). +* Fix some settings values in 02455_one_row_from_csv_memory_usage test to make it less flaky [#63874](https://github.com/ClickHouse/ClickHouse/pull/63874) ([Kruglov Pavel](https://github.com/Avogar)). 
+* Randomise `allow_experimental_parallel_reading_from_replicas` in stress tests [#63899](https://github.com/ClickHouse/ClickHouse/pull/63899) ([Nikita Taranov](https://github.com/nickitat)). +* Fix logs test for binary data by converting it to a valid UTF8 string. [#63909](https://github.com/ClickHouse/ClickHouse/pull/63909) ([Alexey Katsman](https://github.com/alexkats)). +* More sanity checks for parallel replicas [#63910](https://github.com/ClickHouse/ClickHouse/pull/63910) ([Nikita Taranov](https://github.com/nickitat)). +* Insignificant libunwind build fixes [#63946](https://github.com/ClickHouse/ClickHouse/pull/63946) ([Azat Khuzhin](https://github.com/azat)). +* Revert multiline pretty changes due to performance problems [#63947](https://github.com/ClickHouse/ClickHouse/pull/63947) ([Raúl Marín](https://github.com/Algunenano)). +* Some usability improvements for c++expr script [#63948](https://github.com/ClickHouse/ClickHouse/pull/63948) ([Azat Khuzhin](https://github.com/azat)). +* CI: aarch64: disable arm integration tests with kerberaized kafka [#63961](https://github.com/ClickHouse/ClickHouse/pull/63961) ([Max K.](https://github.com/maxknv)). +* Slightly better setting `force_optimize_projection_name` [#63997](https://github.com/ClickHouse/ClickHouse/pull/63997) ([Anton Popov](https://github.com/CurtizJ)). +* Better script to collect symbols statistics [#64013](https://github.com/ClickHouse/ClickHouse/pull/64013) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix a typo in Analyzer [#64022](https://github.com/ClickHouse/ClickHouse/pull/64022) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix libbcrypt for FreeBSD build [#64023](https://github.com/ClickHouse/ClickHouse/pull/64023) ([Azat Khuzhin](https://github.com/azat)). +* Fix searching for libclang_rt.builtins.*.a on FreeBSD [#64051](https://github.com/ClickHouse/ClickHouse/pull/64051) ([Azat Khuzhin](https://github.com/azat)). +* Fix waiting for mutations with retriable errors [#64063](https://github.com/ClickHouse/ClickHouse/pull/64063) ([Alexander Tokmakov](https://github.com/tavplubix)). +* harmonize h3PointDist* error messages [#64080](https://github.com/ClickHouse/ClickHouse/pull/64080) ([Yohann Jardin](https://github.com/yohannj)). +* This log message is better in Trace [#64081](https://github.com/ClickHouse/ClickHouse/pull/64081) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* tests: fix expected error for 03036_reading_s3_archives (fixes CI) [#64089](https://github.com/ClickHouse/ClickHouse/pull/64089) ([Azat Khuzhin](https://github.com/azat)). +* Fix sanitizers [#64090](https://github.com/ClickHouse/ClickHouse/pull/64090) ([Azat Khuzhin](https://github.com/azat)). +* Update llvm/clang to 18.1.6 [#64091](https://github.com/ClickHouse/ClickHouse/pull/64091) ([Azat Khuzhin](https://github.com/azat)). +* CI: mergeable check redesign [#64093](https://github.com/ClickHouse/ClickHouse/pull/64093) ([Max K.](https://github.com/maxknv)). +* Move `isAllASCII` from UTFHelper to StringUtils [#64108](https://github.com/ClickHouse/ClickHouse/pull/64108) ([Robert Schulze](https://github.com/rschu1ze)). +* Clean up .clang-tidy after transition to Clang 18 [#64111](https://github.com/ClickHouse/ClickHouse/pull/64111) ([Robert Schulze](https://github.com/rschu1ze)). +* Ignore exception when checking for cgroupsv2 [#64118](https://github.com/ClickHouse/ClickHouse/pull/64118) ([Robert Schulze](https://github.com/rschu1ze)). 
+* Fix UBSan error in negative positional arguments [#64127](https://github.com/ClickHouse/ClickHouse/pull/64127) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Syncing code [#64135](https://github.com/ClickHouse/ClickHouse/pull/64135) ([Antonio Andelic](https://github.com/antonio2368)). +* Loosen build resource limits for unusual architectures [#64152](https://github.com/ClickHouse/ClickHouse/pull/64152) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* fix clang tidy [#64179](https://github.com/ClickHouse/ClickHouse/pull/64179) ([Han Fei](https://github.com/hanfei1991)). +* Fix global query profiler [#64187](https://github.com/ClickHouse/ClickHouse/pull/64187) ([Azat Khuzhin](https://github.com/azat)). +* CI: cancel running PR wf after adding to MQ [#64188](https://github.com/ClickHouse/ClickHouse/pull/64188) ([Max K.](https://github.com/maxknv)). +* Add debug logging to EmbeddedRocksDBBulkSink [#64203](https://github.com/ClickHouse/ClickHouse/pull/64203) ([vdimir](https://github.com/vdimir)). +* Fix special builds (due to excessive resource usage - memory/CPU) [#64204](https://github.com/ClickHouse/ClickHouse/pull/64204) ([Azat Khuzhin](https://github.com/azat)). +* Add gh to style-check dockerfile [#64227](https://github.com/ClickHouse/ClickHouse/pull/64227) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Followup for [#63691](https://github.com/ClickHouse/ClickHouse/issues/63691) [#64285](https://github.com/ClickHouse/ClickHouse/pull/64285) ([vdimir](https://github.com/vdimir)). +* Rename allow_deprecated_functions to allow_deprecated_error_prone_win… [#64358](https://github.com/ClickHouse/ClickHouse/pull/64358) ([Raúl Marín](https://github.com/Algunenano)). +* Update description for settings `cross_join_min_rows_to_compress` and `cross_join_min_bytes_to_compress` [#64360](https://github.com/ClickHouse/ClickHouse/pull/64360) ([Nikita Fomichev](https://github.com/fm4v)). +* Rename aggregate_function_group_array_has_limit_size [#64362](https://github.com/ClickHouse/ClickHouse/pull/64362) ([Raúl Marín](https://github.com/Algunenano)). +* Split tests 03039_dynamic_all_merge_algorithms to avoid timeouts [#64363](https://github.com/ClickHouse/ClickHouse/pull/64363) ([Kruglov Pavel](https://github.com/Avogar)). +* Clean settings in 02943_variant_read_subcolumns test [#64437](https://github.com/ClickHouse/ClickHouse/pull/64437) ([Kruglov Pavel](https://github.com/Avogar)). +* CI: Critical bugfix category in PR template [#64480](https://github.com/ClickHouse/ClickHouse/pull/64480) ([Max K.](https://github.com/maxknv)). + diff --git a/docs/en/operations/configuration-files.md b/docs/en/operations/configuration-files.md index 0675e0edcb6..57fea3cca3a 100644 --- a/docs/en/operations/configuration-files.md +++ b/docs/en/operations/configuration-files.md @@ -7,6 +7,8 @@ sidebar_label: Configuration Files # Configuration Files The ClickHouse server can be configured with configuration files in XML or YAML syntax. In most installation types, the ClickHouse server runs with `/etc/clickhouse-server/config.xml` as default configuration file, but it is also possible to specify the location of the configuration file manually at server startup using command line option `--config-file=` or `-C`. Additional configuration files may be placed into directory `config.d/` relative to the main configuration file, for example into directory `/etc/clickhouse-server/config.d/`. 
Files in this directory and the main configuration are merged in a preprocessing step before the configuration is applied in ClickHouse server. Configuration files are merged in alphabetical order. To simplify updates and improve modularization, it is best practice to keep the default `config.xml` file unmodified and place additional customization into `config.d/`. +(The ClickHouse Keeper configuration lives in `/etc/clickhouse-keeper/keeper_config.xml`, so additional files need to be placed in `/etc/clickhouse-keeper/keeper_config.d/`.) + It is possible to mix XML and YAML configuration files, for example you could have a main configuration file `config.xml` and additional configuration files `config.d/network.xml`, `config.d/timezone.yaml` and `config.d/keeper.yaml`. Mixing XML and YAML within a single configuration file is not supported. XML configuration files should use `<clickhouse>...</clickhouse>` as the top-level tag. In YAML configuration files, `clickhouse:` is optional, the parser inserts it implicitly if absent. diff --git a/docs/en/operations/settings/settings.md b/docs/en/operations/settings/settings.md index 252b041ef6f..0b905df21d4 100644 --- a/docs/en/operations/settings/settings.md +++ b/docs/en/operations/settings/settings.md @@ -1956,7 +1956,7 @@ Possible values: - Positive integer. - 0 — Asynchronous insertions are disabled. -Default value: `1000000`. +Default value: `10485760`. ### async_insert_max_query_number {#async-insert-max-query-number} diff --git a/docs/en/sql-reference/functions/encoding-functions.md b/docs/en/sql-reference/functions/encoding-functions.md index 408b605727d..24a95b0398b 100644 --- a/docs/en/sql-reference/functions/encoding-functions.md +++ b/docs/en/sql-reference/functions/encoding-functions.md @@ -167,7 +167,7 @@ Performs the opposite operation of [hex](#hex). It interprets each pair of hexad If you want to convert the result to a number, you can use the [reverse](../../sql-reference/functions/string-functions.md#reverse) and [reinterpretAs<Type>](../../sql-reference/functions/type-conversion-functions.md#type-conversion-functions) functions. -:::note +:::note If `unhex` is invoked from within the `clickhouse-client`, binary strings display using UTF-8. ::: @@ -322,11 +322,11 @@ Alias: `UNBIN`. For a numeric argument `unbin()` does not return the inverse of `bin()`. If you want to convert the result to a number, you can use the [reverse](../../sql-reference/functions/string-functions.md#reverse) and [reinterpretAs<Type>](../../sql-reference/functions/type-conversion-functions.md#reinterpretasuint8163264) functions. -:::note +:::note If `unbin` is invoked from within the `clickhouse-client`, binary strings are displayed using UTF-8. ::: -Supports binary digits `0` and `1`. The number of binary digits does not have to be multiples of eight. If the argument string contains anything other than binary digits, some implementation-defined result is returned (an exception isn’t thrown). +Supports binary digits `0` and `1`. The number of binary digits does not have to be a multiple of eight. If the argument string contains anything other than binary digits, some implementation-defined result is returned (an exception isn’t thrown). @@ -482,7 +482,7 @@ mortonEncode(range_mask, args) - `range_mask`: 1-8. - `args`: up to 8 [unsigned integers](../data-types/int-uint.md) or columns of the aforementioned type. -Note: when using columns for `args` the provided `range_mask` tuple should still be a constant. 
+Note: when using columns for `args` the provided `range_mask` tuple should still be a constant. **Returned value** @@ -626,7 +626,7 @@ Result: Accepts a range mask (tuple) as a first argument and the code as the second argument. Each number in the mask configures the amount of range shrink:<br/>
1 - no shrink<br/>
-2 - 2x shrink<br/>
+2 - 2x shrink<br/>
3 - 3x shrink<br/>
...<br/>
Up to 8x shrink.<br/>
@@ -701,6 +701,267 @@ Result: 1 2 3 4 5 6 7 8 ``` +## hilbertEncode + +Calculates a code for the Hilbert Curve for a list of unsigned integers. + +The function has two modes of operation: +- Simple +- Expanded + +### Simple mode + +Accepts up to 2 unsigned integers as arguments and produces a UInt64 code. + +**Syntax** + +```sql +hilbertEncode(args) +``` + +**Parameters** + +- `args`: up to 2 [unsigned integers](../../sql-reference/data-types/int-uint.md) or columns of the aforementioned type. + +**Returned value** + +- A UInt64 code + +Type: [UInt64](../../sql-reference/data-types/int-uint.md) + +**Example** + +Query: + +```sql +SELECT hilbertEncode(3, 4); +``` +Result: + +```response +31 +``` + +### Expanded mode + +Accepts a range mask ([tuple](../../sql-reference/data-types/tuple.md)) as a first argument and up to 2 [unsigned integers](../../sql-reference/data-types/int-uint.md) as other arguments. + +Each number in the mask configures the number of bits by which the corresponding argument will be shifted left, effectively scaling the argument within its range. + +**Syntax** + +```sql +hilbertEncode(range_mask, args) +``` + +**Parameters** +- `range_mask`: ([tuple](../../sql-reference/data-types/tuple.md)) +- `args`: up to 2 [unsigned integers](../../sql-reference/data-types/int-uint.md) or columns of the aforementioned type. + +Note: when using columns for `args` the provided `range_mask` tuple should still be a constant. + +**Returned value** + +- A UInt64 code + +Type: [UInt64](../../sql-reference/data-types/int-uint.md) +**Example** +Range expansion can be beneficial when you need a similar distribution for arguments with wildly different ranges (or cardinality). +For example: 'IP Address' (0...FFFFFFFF) and 'Country code' (0...FF). + +Query: + +```sql +SELECT hilbertEncode((10,6), 1024, 16); +``` + +Result: + +```response +4031541586602 +``` + +Note: tuple size must be equal to the number of the other arguments. + +**Example** + +For a single argument without a tuple, the function returns the argument itself as the Hilbert index, since no dimensional mapping is needed. + +Query: + +```sql +SELECT hilbertEncode(1); +``` + +Result: + +```response +1 +``` + +**Example** + +If a single argument is provided with a tuple specifying bit shifts, the function shifts the argument left by the specified number of bits. + +Query: + +```sql +SELECT hilbertEncode(tuple(2), 128); +``` + +Result: + +```response +512 +``` + +**Example** + +The function also accepts columns as arguments: + +Query: + +First create the table and insert some data. + +```sql +create table hilbert_numbers( + n1 UInt32, + n2 UInt32 +) +Engine=MergeTree() +ORDER BY n1 SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi'; +insert into hilbert_numbers (*) values(1,2); +``` +Use column names instead of constants as function arguments to `hilbertEncode`. + +Query: + +```sql +SELECT hilbertEncode(n1, n2) FROM hilbert_numbers; +``` + +Result: + +```response +13 +``` + +**Implementation details** + +Please note that you can fit only as many bits of information into a Hilbert code as [UInt64](../../sql-reference/data-types/int-uint.md) has. Two arguments have a maximum range of 2^32 (64/2) each. All overflow is clamped to zero. + +## hilbertDecode + +Decodes a Hilbert curve index back into a tuple of unsigned integers, representing coordinates in multi-dimensional space. 
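As a minimal illustration of how the two functions relate (a sketch reusing the values from the examples in this section; the `hilbertDecode(tuple_size, code)` form used here is described below), decoding the code produced by `hilbertEncode` restores the original coordinates.

Query:

```sql
-- hilbertEncode(3, 4) returns 31 (see the example above); decoding it
-- with a tuple size of 2 yields the original pair of coordinates.
SELECT hilbertDecode(2, hilbertEncode(3, 4));
```

Result:

```response
["3", "4"]
```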
+ +As with the `hilbertEncode` function, this function has two modes of operation: +- Simple +- Expanded + +### Simple mode + +Accepts the desired tuple size as the first argument and the code as the second argument, producing a tuple of up to 2 unsigned integers. + +**Syntax** + +```sql +hilbertDecode(tuple_size, code) +``` + +**Parameters** +- `tuple_size`: an integer value no greater than 2. +- `code`: [UInt64](../../sql-reference/data-types/int-uint.md) code. + +**Returned value** + +- [tuple](../../sql-reference/data-types/tuple.md) of the specified size. + +Type: [tuple](../../sql-reference/data-types/tuple.md) of [UInt64](../../sql-reference/data-types/int-uint.md) values + +**Example** + +Query: + +```sql +SELECT hilbertDecode(2, 31); +``` + +Result: + +```response +["3", "4"] +``` + +### Expanded mode + +Accepts a range mask ([tuple](../../sql-reference/data-types/tuple.md)) as the first argument and the code as the second argument. +Each number in the mask configures the number of bits by which the corresponding decoded component will be shifted right, effectively scaling the value back down into its original range. + +Range expansion can be beneficial when you need a similar distribution for arguments with wildly different ranges (or cardinality). +For example: 'IP Address' (0...FFFFFFFF) and 'Country code' (0...FF). +As with the encode function, this is limited to 2 numbers at most. + +**Example** + +The Hilbert code for a single argument is always the argument itself (returned as a one-element tuple). + +Query: + +```sql +SELECT hilbertDecode(1, 1); +``` + +Result: + +```response +["1"] +``` + +**Example** + +A single argument with a tuple specifying bit shifts will be right-shifted accordingly. + +Query: + +```sql +SELECT hilbertDecode(tuple(2), 32768); +``` + +Result: + +```response +["128"] +``` + +**Example** + +The function accepts a column of codes as a second argument. + +First create the table and insert some data: + +```sql +CREATE TABLE hilbert_numbers( + n1 UInt32, + n2 UInt32 +) +ENGINE = MergeTree() +ORDER BY n1 SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi'; +INSERT INTO hilbert_numbers (*) VALUES (1, 2); +``` + +Then use column names instead of constants as function arguments to `hilbertDecode`. + +Query: + +```sql +SELECT untuple(hilbertDecode(2, hilbertEncode(n1, n2))) FROM hilbert_numbers; +``` + +Result: + +```response +1 2 +``` diff --git a/docs/en/sql-reference/statements/create/table.md b/docs/en/sql-reference/statements/create/table.md index 0edf158e981..16918102f02 100644 --- a/docs/en/sql-reference/statements/create/table.md +++ b/docs/en/sql-reference/statements/create/table.md @@ -410,6 +410,10 @@ High compression levels are useful for asymmetric scenarios, like compress once, - For compression, ZSTD_QAT tries to use an Intel® QAT offloading device ([QuickAssist Technology](https://www.intel.com/content/www/us/en/developer/topic-technology/open/quick-assist-technology/overview.html)). If no such device was found, it will fall back to ZSTD compression in software. - Decompression is always performed in software. +:::note +ZSTD_QAT is not available in ClickHouse Cloud. +::: + #### DEFLATE_QPL `DEFLATE_QPL` — [Deflate compression algorithm](https://github.com/intel/qpl) implemented by Intel® Query Processing Library.
Some limitations apply: diff --git a/docs/en/sql-reference/statements/create/view.md index b526c94e508..1bdf22b35b0 100644 --- a/docs/en/sql-reference/statements/create/view.md +++ b/docs/en/sql-reference/statements/create/view.md @@ -85,6 +85,14 @@ Also note, that `materialized_views_ignore_errors` set to `true` by default for If you specify `POPULATE`, the existing table data is inserted into the view when creating it, as if making a `CREATE TABLE ... AS SELECT ...` . Otherwise, the query contains only the data inserted in the table after creating the view. We **do not recommend** using `POPULATE`, since data inserted in the table during the view creation will not be inserted in it. +:::note +Given that `POPULATE` works like `CREATE TABLE ... AS SELECT ...`, it has limitations: +- It is not supported with the Replicated database engine +- It is not supported in ClickHouse Cloud + +Instead, a separate `INSERT ... SELECT` can be used. +::: + A `SELECT` query can contain `DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`. Note that the corresponding conversions are performed independently on each block of inserted data. For example, if `GROUP BY` is set, data is aggregated during insertion, but only within a single packet of inserted data. The data won’t be further aggregated. The exception is when using an `ENGINE` that independently performs data aggregation, such as `SummingMergeTree`. The execution of [ALTER](/docs/en/sql-reference/statements/alter/view.md) queries on materialized views has limitations, for example, you can not update the `SELECT` query, so this might be inconvenient. If the materialized view uses the construction `TO [db.]name`, you can `DETACH` the view, run `ALTER` for the target table, and then `ATTACH` the previously detached (`DETACH`) view. diff --git a/programs/bash-completion/completions/clickhouse-bootstrap b/programs/bash-completion/completions/clickhouse-bootstrap index 2862140b528..73e2ef07477 100644 --- a/programs/bash-completion/completions/clickhouse-bootstrap +++ b/programs/bash-completion/completions/clickhouse-bootstrap @@ -154,7 +154,8 @@ function _clickhouse_quote() # Extract every option (everything that starts with "-") from the --help dialog.
function _clickhouse_get_options() { - "$@" --help 2>&1 | awk -F '[ ,=<>.]' '{ for (i=1; i <= NF; ++i) { if (substr($i, 1, 1) == "-" && length($i) > 1) print $i; } }' | sort -u + # By default --help does not print all settings; they are printed only with --verbose + "$@" --help --verbose 2>&1 | awk -F '[ ,=<>.]' '{ for (i=1; i <= NF; ++i) { if (substr($i, 1, 1) == "-" && length($i) > 1) print $i; } }' | sort -u } function _complete_for_clickhouse_generic_bin_impl() diff --git a/programs/keeper-client/Commands.cpp b/programs/keeper-client/Commands.cpp index 860840a2d06..df9da8e9613 100644 --- a/programs/keeper-client/Commands.cpp +++ b/programs/keeper-client/Commands.cpp @@ -11,7 +11,6 @@ namespace DB namespace ErrorCodes { extern const int LOGICAL_ERROR; - extern const int KEEPER_EXCEPTION; } bool LSCommand::parse(IParser::Pos & pos, std::shared_ptr<ASTKeeperQuery> & node, Expected & expected) const @@ -214,6 +213,143 @@ void GetStatCommand::execute(const ASTKeeperQuery * query, KeeperClient * client std::cout << "numChildren = " << stat.numChildren << "\n"; } +namespace +{ + +/// Helper class for parallelized tree traversal +template <typename UserCtx> +struct TraversalTask : public std::enable_shared_from_this<TraversalTask<UserCtx>> +{ + using TraversalTaskPtr = std::shared_ptr<TraversalTask<UserCtx>>; + + struct Ctx + { + std::deque<TraversalTaskPtr> new_tasks; /// Tasks for newly discovered children that haven't been started yet + std::deque<std::function<void(Ctx &)>> in_flight_list_requests; /// In-flight getChildren requests + std::deque<std::function<void(Ctx &)>> finish_callbacks; /// Callbacks to be called + KeeperClient * client; + UserCtx & user_ctx; + + Ctx(KeeperClient * client_, UserCtx & user_ctx_) : client(client_), user_ctx(user_ctx_) {} + }; + +private: + const fs::path path; + const TraversalTaskPtr parent; + + Int64 child_tasks = 0; + Int64 nodes_in_subtree = 1; + +public: + TraversalTask(const fs::path & path_, TraversalTaskPtr parent_) + : path(path_) , parent(parent_) + { + } + + /// Start traversing the subtree + void onStart(Ctx & ctx) + { + /// tryGetChildren doesn't throw if the node is not found (was deleted in the meantime) + std::shared_ptr<std::future<Coordination::ListResponse>> list_request = + std::make_shared<std::future<Coordination::ListResponse>>(ctx.client->zookeeper->asyncTryGetChildren(path)); + ctx.in_flight_list_requests.push_back([task = this->shared_from_this(), list_request](Ctx & ctx_) mutable + { + task->onGetChildren(ctx_, list_request->get()); + }); + } + + /// Called when getChildren request returns + void onGetChildren(Ctx & ctx, const Coordination::ListResponse & response) + { + const bool traverse_children = ctx.user_ctx.onListChildren(path, response.names); + + if (traverse_children) + { + /// Schedule traversal of each child + for (const auto & child : response.names) + { + auto task = std::make_shared<TraversalTask>(path / child, this->shared_from_this()); + ctx.new_tasks.push_back(task); + } + child_tasks = response.names.size(); + } + + if (child_tasks == 0) + finish(ctx); + } + + /// Called when a child subtree has been traversed + void onChildTraversalFinished(Ctx & ctx, Int64 child_nodes_in_subtree) + { + nodes_in_subtree += child_nodes_in_subtree; + + --child_tasks; + + /// Finish if all children have been traversed + if (child_tasks == 0) + finish(ctx); + } + +private: + /// This node and all its children have been traversed + void finish(Ctx & ctx) + { + ctx.user_ctx.onFinishChildrenTraversal(path, nodes_in_subtree); + + if (!parent) + return; + + /// Notify the parent that we have finished traversing the subtree + ctx.finish_callbacks.push_back([p = this->parent, child_nodes_in_subtree = this->nodes_in_subtree](Ctx & ctx_) + { + p->onChildTraversalFinished(ctx_,
child_nodes_in_subtree); + }); + } +}; + +/// Traverses the tree in parallel and calls user callbacks +/// Parallelization is achieved by sending multiple async getChildren requests to Keeper, but all processing is done in a single thread +template <typename UserCtx> +void parallelized_traverse(const fs::path & path, KeeperClient * client, size_t max_in_flight_requests, UserCtx & ctx_) +{ + typename TraversalTask<UserCtx>::Ctx ctx(client, ctx_); + + auto root_task = std::make_shared<TraversalTask<UserCtx>>(path, nullptr); + + ctx.new_tasks.push_back(root_task); + + /// Loop while there is still something to do + while (!ctx.new_tasks.empty() || !ctx.in_flight_list_requests.empty() || !ctx.finish_callbacks.empty()) + { + /// First process all finish callbacks; they don't wait for anything and allow memory to be freed + while (!ctx.finish_callbacks.empty()) + { + auto callback = std::move(ctx.finish_callbacks.front()); + ctx.finish_callbacks.pop_front(); + callback(ctx); + } + + /// Make new requests while there are fewer than the maximum in flight + while (!ctx.new_tasks.empty() && ctx.in_flight_list_requests.size() < max_in_flight_requests) + { + auto task = std::move(ctx.new_tasks.front()); + ctx.new_tasks.pop_front(); + task->onStart(ctx); + } + + /// Wait for the first request in the queue to finish + if (!ctx.in_flight_list_requests.empty()) + { + auto request = std::move(ctx.in_flight_list_requests.front()); + ctx.in_flight_list_requests.pop_front(); + request(ctx); + } + } +} + +} /// anonymous namespace + bool FindSuperNodes::parse(IParser::Pos & pos, std::shared_ptr<ASTKeeperQuery> & node, Expected & expected) const { ASTPtr threshold; @@ -237,27 +373,21 @@ void FindSuperNodes::execute(const ASTKeeperQuery * query, KeeperClient * client auto threshold = query->args[0].safeGet<UInt64>(); auto path = client->getAbsolutePath(query->args[1].safeGet<String>()); - Coordination::Stat stat; - if (!client->zookeeper->exists(path, &stat)) - return; /// It is ok if node was deleted meanwhile - - if (stat.numChildren >= static_cast<Int64>(threshold)) - std::cout << static_cast<String>(path) << "\t" << stat.numChildren << "\n"; - - Strings children; - auto status = client->zookeeper->tryGetChildren(path, children); - if (status == Coordination::Error::ZNONODE) - return; /// It is ok if node was deleted meanwhile - else if (status != Coordination::Error::ZOK) - throw DB::Exception(DB::ErrorCodes::KEEPER_EXCEPTION, "Error {} while getting children of {}", status, path.string()); - - std::sort(children.begin(), children.end()); - auto next_query = *query; - for (const auto & child : children) + struct { - next_query.args[1] = DB::Field(path / child); - execute(&next_query, client); - } + bool onListChildren(const fs::path & path, const Strings & children) const + { + if (children.size() >= threshold) + std::cout << static_cast<String>(path) << "\t" << children.size() << "\n"; + return true; + } + + void onFinishChildrenTraversal(const fs::path &, Int64) const {} + + size_t threshold; + } ctx {.threshold = threshold }; + + parallelized_traverse(path, client, /* max_in_flight_requests */ 50, ctx); } bool DeleteStaleBackups::parse(IParser::Pos & /* pos */, std::shared_ptr<ASTKeeperQuery> & /* node */, Expected & /* expected */) const @@ -322,38 +452,28 @@ bool FindBigFamily::parse(IParser::Pos & pos, std::shared_ptr<ASTKeeperQuery> & return true; } -/// DFS the subtree and return the number of nodes in the subtree -static Int64 traverse(const fs::path & path, KeeperClient * client, std::vector<std::tuple<Int64, String>> & result) -{ - Int64 nodes_in_subtree = 1; - - Strings children; - auto status = client->zookeeper->tryGetChildren(path, children); - if (status ==
Coordination::Error::ZNONODE) - return 0; - else if (status != Coordination::Error::ZOK) - throw DB::Exception(DB::ErrorCodes::KEEPER_EXCEPTION, "Error {} while getting children of {}", status, path.string()); - - for (auto & child : children) - nodes_in_subtree += traverse(path / child, client, result); - - result.emplace_back(nodes_in_subtree, path.string()); - - return nodes_in_subtree; -} - void FindBigFamily::execute(const ASTKeeperQuery * query, KeeperClient * client) const { auto path = client->getAbsolutePath(query->args[0].safeGet<String>()); auto n = query->args[1].safeGet<UInt64>(); - std::vector<std::tuple<Int64, String>> result; + struct + { + std::vector<std::tuple<Int64, String>> result; - traverse(path, client, result); + bool onListChildren(const fs::path &, const Strings &) const { return true; } - std::sort(result.begin(), result.end(), std::greater()); - for (UInt64 i = 0; i < std::min(result.size(), static_cast<size_t>(n)); ++i) - std::cout << std::get<1>(result[i]) << "\t" << std::get<0>(result[i]) << "\n"; + void onFinishChildrenTraversal(const fs::path & path, Int64 nodes_in_subtree) + { + result.emplace_back(nodes_in_subtree, path.string()); + } + } ctx; + + parallelized_traverse(path, client, /* max_in_flight_requests */ 50, ctx); + + std::sort(ctx.result.begin(), ctx.result.end(), std::greater()); + for (UInt64 i = 0; i < std::min(ctx.result.size(), static_cast<size_t>(n)); ++i) + std::cout << std::get<1>(ctx.result[i]) << "\t" << std::get<0>(ctx.result[i]) << "\n"; } bool RMCommand::parse(IParser::Pos & pos, std::shared_ptr<ASTKeeperQuery> & node, Expected & expected) const diff --git a/programs/keeper/CMakeLists.txt b/programs/keeper/CMakeLists.txt index af360e44ff4..52aa601b1a2 100644 --- a/programs/keeper/CMakeLists.txt +++ b/programs/keeper/CMakeLists.txt @@ -9,8 +9,6 @@ set (CLICKHOUSE_KEEPER_LINK clickhouse_common_zookeeper daemon dbms - - ${LINK_RESOURCE_LIB} ) clickhouse_program_add(keeper) @@ -210,8 +208,6 @@ if (BUILD_STANDALONE_KEEPER) loggers_no_text_log clickhouse_common_io clickhouse_parsers # Otherwise compression will not be built. FIXME. - - ${LINK_RESOURCE_LIB_STANDALONE_KEEPER} ) set_target_properties(clickhouse-keeper PROPERTIES RUNTIME_OUTPUT_DIRECTORY ../) diff --git a/programs/server/CMakeLists.txt b/programs/server/CMakeLists.txt index 76d201cc924..be696ff2afe 100644 --- a/programs/server/CMakeLists.txt +++ b/programs/server/CMakeLists.txt @@ -14,8 +14,6 @@ set (CLICKHOUSE_SERVER_LINK clickhouse_storages_system clickhouse_table_functions - ${LINK_RESOURCE_LIB} - PUBLIC daemon ) diff --git a/programs/server/config.xml b/programs/server/config.xml index 27ed5952fc9..4b3248d9d1c 100644 --- a/programs/server/config.xml +++ b/programs/server/config.xml @@ -715,7 +715,7 @@ + By default, this setting is true.
--> true Exclude: All with TSAN, MSAN, UBSAN, Coverage + pattern = r"(#|- \[x\] + Integration tests +- [x] Non required - [ ] Integration tests (arm64) - [x] Integration tests - [x] Integration tests @@ -33,7 +33,7 @@ _TEST_BODY_2 = """ - [x] MUST include azure - [x] no action must be applied - [ ] no action must be applied -- [x] MUST exclude tsan +- [x] MUST exclude tsan - [x] MUST exclude aarch64 - [x] MUST exclude test with analazer - [ ] no action applied @@ -138,7 +138,7 @@ class TestCIOptions(unittest.TestCase): self.assertFalse(ci_options.do_not_test) self.assertFalse(ci_options.no_ci_cache) self.assertTrue(ci_options.no_merge_commit) - self.assertEqual(ci_options.ci_sets, ["ci_set_integration"]) + self.assertEqual(ci_options.ci_sets, ["ci_set_non_required"]) self.assertCountEqual(ci_options.include_keywords, ["foo", "foo_bar"]) self.assertCountEqual(ci_options.exclude_keywords, ["foo", "foo_bar"]) @@ -153,7 +153,7 @@ class TestCIOptions(unittest.TestCase): ) self.assertCountEqual( ci_options.exclude_keywords, - ["tsan", "aarch64", "analyzer", "s3_storage", "coverage"], + ["tsan", "foobar", "aarch64", "analyzer", "s3_storage", "coverage"], ) jobs_to_do = list(_TEST_JOB_LIST) jobs_to_skip = [] diff --git a/tests/integration/test_disk_over_web_server/test.py b/tests/integration/test_disk_over_web_server/test.py index dd5163082ef..9f43ab73fa3 100644 --- a/tests/integration/test_disk_over_web_server/test.py +++ b/tests/integration/test_disk_over_web_server/test.py @@ -358,7 +358,6 @@ def test_page_cache(cluster): node.query("SYSTEM FLUSH LOGS") def get_profile_events(query_name): - print(f"asdqwe {query_name}") text = node.query( f"SELECT ProfileEvents.Names, ProfileEvents.Values FROM system.query_log ARRAY JOIN ProfileEvents WHERE query LIKE '% -- {query_name}' AND type = 'QueryFinish'" ) @@ -367,7 +366,6 @@ def test_page_cache(cluster): if line == "": continue name, value = line.split("\t") - print(f"asdqwe {name} = {int(value)}") res[name] = int(value) return res diff --git a/tests/integration/test_keeper_client/test.py b/tests/integration/test_keeper_client/test.py index fbfc38ca35c..ca22c119281 100644 --- a/tests/integration/test_keeper_client/test.py +++ b/tests/integration/test_keeper_client/test.py @@ -61,7 +61,6 @@ def test_big_family(client: KeeperClient): ) response = client.find_big_family("/test_big_family", 2) - assert response == TSV( [ ["/test_big_family", "11"], @@ -87,7 +86,12 @@ def test_find_super_nodes(client: KeeperClient): client.cd("/test_find_super_nodes") response = client.find_super_nodes(4) - assert response == TSV( + + # The order of the response is not guaranteed, so we need to sort it + normalized_response = response.strip().split("\n") + normalized_response.sort() + + assert TSV(normalized_response) == TSV( [ ["/test_find_super_nodes/1", "5"], ["/test_find_super_nodes/2", "4"], diff --git a/tests/integration/test_move_ttl_broken_compatibility/__init__.py b/tests/integration/test_move_ttl_broken_compatibility/__init__.py new file mode 100644 index 00000000000..e69de29bb2d diff --git a/tests/integration/test_move_ttl_broken_compatibility/configs/storage_conf.xml b/tests/integration/test_move_ttl_broken_compatibility/configs/storage_conf.xml new file mode 100644 index 00000000000..1b2177d0392 --- /dev/null +++ b/tests/integration/test_move_ttl_broken_compatibility/configs/storage_conf.xml @@ -0,0 +1,36 @@ + + + test + + + + + + s3 + http://minio1:9001/root/data/ + minio + minio123 + + + + + + default + + + + + + default + False + +
+ s3 + False +
+
+ 0.0 +
+
+
+
diff --git a/tests/integration/test_move_ttl_broken_compatibility/test.py b/tests/integration/test_move_ttl_broken_compatibility/test.py new file mode 100644 index 00000000000..f9eab8b5ebb --- /dev/null +++ b/tests/integration/test_move_ttl_broken_compatibility/test.py @@ -0,0 +1,105 @@ +#!/usr/bin/env python3 + +import logging +import random +import string +import time + +import pytest +from helpers.cluster import ClickHouseCluster +import minio + + +cluster = ClickHouseCluster(__file__) + + +@pytest.fixture(scope="module") +def started_cluster(): + try: + cluster.add_instance( + "node1", + main_configs=["configs/storage_conf.xml"], + image="clickhouse/clickhouse-server", + with_minio=True, + tag="24.1", + stay_alive=True, + with_installed_binary=True, + ) + cluster.start() + + yield cluster + finally: + cluster.shutdown() + + +def test_bc_compatibility(started_cluster): + node1 = cluster.instances["node1"] + node1.query( + """ + CREATE TABLE test_ttl_table ( + generation UInt64, + date_key DateTime, + number UInt64, + text String, + expired DateTime DEFAULT now() + ) + ENGINE=MergeTree + ORDER BY (generation, date_key) + PARTITION BY toMonth(date_key) + TTL expired + INTERVAL 20 SECONDS TO DISK 's3' + SETTINGS storage_policy = 's3'; + """ + ) + + node1.query( + """ + INSERT INTO test_ttl_table ( + generation, + date_key, + number, + text + ) + SELECT + 1, + toDateTime('2000-01-01 00:00:00') + rand(number) % 365 * 86400, + number, + toString(number) + FROM numbers(10000); + """ + ) + + disks = ( + node1.query( + """ + SELECT distinct disk_name + FROM system.parts + WHERE table = 'test_ttl_table' + """ + ) + .strip() + .split("\n") + ) + print("Disks before", disks) + + assert len(disks) == 1 + assert disks[0] == "default" + + node1.restart_with_latest_version() + + for _ in range(60): + disks = ( + node1.query( + """ + SELECT distinct disk_name + FROM system.parts + WHERE table = 'test_ttl_table' + """ + ) + .strip() + .split("\n") + ) + print("Disks after", disks) + if "s3" in disks: + break + time.sleep(1) + assert "s3" in disks diff --git a/tests/queries/0_stateless/00453_cast_enum.sql b/tests/queries/0_stateless/00453_cast_enum.sql index 023e7233acf..5fb62bd492d 100644 --- a/tests/queries/0_stateless/00453_cast_enum.sql +++ b/tests/queries/0_stateless/00453_cast_enum.sql @@ -12,6 +12,6 @@ INSERT INTO cast_enums SELECT 2 AS type, toDate('2017-01-01') AS date, number AS SELECT type, date, id FROM cast_enums ORDER BY type, id; -INSERT INTO cast_enums VALUES ('wrong_value', '2017-01-02', 7); -- { clientError 691 } +INSERT INTO cast_enums VALUES ('wrong_value', '2017-01-02', 7); -- { clientError UNKNOWN_ELEMENT_OF_ENUM } DROP TABLE IF EXISTS cast_enums; diff --git a/tests/queries/0_stateless/00700_decimal_bounds.sql b/tests/queries/0_stateless/00700_decimal_bounds.sql index 2fa1360eeae..9c78ed04a16 100644 --- a/tests/queries/0_stateless/00700_decimal_bounds.sql +++ b/tests/queries/0_stateless/00700_decimal_bounds.sql @@ -18,26 +18,26 @@ CREATE TABLE IF NOT EXISTS decimal j DECIMAL(1,0) ) ENGINE = Memory; -INSERT INTO decimal (a) VALUES (1000000000); -- { clientError 69 } -INSERT INTO decimal (a) VALUES (-1000000000); -- { clientError 69 } -INSERT INTO decimal (b) VALUES (1000000000000000000); -- { clientError 69 } -INSERT INTO decimal (b) VALUES (-1000000000000000000); -- { clientError 69 } -INSERT INTO decimal (c) VALUES (100000000000000000000000000000000000000); -- { clientError 69 } -INSERT INTO decimal (c) VALUES (-100000000000000000000000000000000000000); -- { clientError 69 } -INSERT 
INTO decimal (d) VALUES (1); -- { clientError 69 } -INSERT INTO decimal (d) VALUES (-1); -- { clientError 69 } -INSERT INTO decimal (e) VALUES (1000000000000000000); -- { clientError 69 } -INSERT INTO decimal (e) VALUES (-1000000000000000000); -- { clientError 69 } -INSERT INTO decimal (f) VALUES (1); -- { clientError 69 } -INSERT INTO decimal (f) VALUES (-1); -- { clientError 69 } -INSERT INTO decimal (g) VALUES (10000); -- { clientError 69 } -INSERT INTO decimal (g) VALUES (-10000); -- { clientError 69 } -INSERT INTO decimal (h) VALUES (1000000000); -- { clientError 69 } -INSERT INTO decimal (h) VALUES (-1000000000); -- { clientError 69 } -INSERT INTO decimal (i) VALUES (100000000000000000000); -- { clientError 69 } -INSERT INTO decimal (i) VALUES (-100000000000000000000); -- { clientError 69 } -INSERT INTO decimal (j) VALUES (10); -- { clientError 69 } -INSERT INTO decimal (j) VALUES (-10); -- { clientError 69 } +INSERT INTO decimal (a) VALUES (1000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (a) VALUES (-1000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (b) VALUES (1000000000000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (b) VALUES (-1000000000000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (c) VALUES (100000000000000000000000000000000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (c) VALUES (-100000000000000000000000000000000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (d) VALUES (1); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (d) VALUES (-1); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (e) VALUES (1000000000000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (e) VALUES (-1000000000000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (f) VALUES (1); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (f) VALUES (-1); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (g) VALUES (10000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (g) VALUES (-10000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (h) VALUES (1000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (h) VALUES (-1000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (i) VALUES (100000000000000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (i) VALUES (-100000000000000000000); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (j) VALUES (10); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (j) VALUES (-10); -- { clientError ARGUMENT_OUT_OF_BOUND } INSERT INTO decimal (a) VALUES (0.1); INSERT INTO decimal (a) VALUES (-0.1); @@ -84,14 +84,14 @@ INSERT INTO decimal (a, b, c, d, e, f, g, h, i, j) VALUES (0.0, 0.0, 0.0, 0.0, 0 INSERT INTO decimal (a, b, c, d, e, f, g, h, i, j) VALUES (-0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0); INSERT INTO decimal (a, b, g) VALUES ('42.00000', 42.0000000000000000000000000000000, '0.999990'); -INSERT INTO decimal (a) VALUES ('-9x'); -- { clientError 6 } -INSERT INTO decimal (a) VALUES ('0x1'); -- { clientError 6 } +INSERT INTO decimal (a) VALUES ('-9x'); -- { clientError CANNOT_PARSE_TEXT } +INSERT INTO decimal (a) VALUES ('0x1'); -- { clientError CANNOT_PARSE_TEXT } INSERT INTO decimal (a, b, c, d, e, f) VALUES ('0.9e9', '0.9e18', '0.9e38', '9e-9', '9e-18', '9e-38'); INSERT INTO decimal (a, b, c, d, e, f) 
VALUES ('-0.9e9', '-0.9e18', '-0.9e38', '-9e-9', '-9e-18', '-9e-38'); -INSERT INTO decimal (a, b, c, d, e, f) VALUES ('1e9', '1e18', '1e38', '1e-10', '1e-19', '1e-39'); -- { clientError 69 } -INSERT INTO decimal (a, b, c, d, e, f) VALUES ('-1e9', '-1e18', '-1e38', '-1e-10', '-1e-19', '-1e-39'); -- { clientError 69 } +INSERT INTO decimal (a, b, c, d, e, f) VALUES ('1e9', '1e18', '1e38', '1e-10', '1e-19', '1e-39'); -- { clientError ARGUMENT_OUT_OF_BOUND } +INSERT INTO decimal (a, b, c, d, e, f) VALUES ('-1e9', '-1e18', '-1e38', '-1e-10', '-1e-19', '-1e-39'); -- { clientError ARGUMENT_OUT_OF_BOUND } SELECT * FROM decimal ORDER BY a, b, c, d, e, f, g, h, i, j; DROP TABLE IF EXISTS decimal; diff --git a/tests/queries/0_stateless/00748_insert_array_with_null.sql b/tests/queries/0_stateless/00748_insert_array_with_null.sql index ac55d4e9d8c..5e0256ef6c8 100644 --- a/tests/queries/0_stateless/00748_insert_array_with_null.sql +++ b/tests/queries/0_stateless/00748_insert_array_with_null.sql @@ -5,7 +5,7 @@ set input_format_null_as_default=0; CREATE TABLE arraytest ( created_date Date DEFAULT toDate(created_at), created_at DateTime DEFAULT now(), strings Array(String) DEFAULT emptyArrayString()) ENGINE = MergeTree(created_date, cityHash64(created_at), (created_date, cityHash64(created_at)), 8192); INSERT INTO arraytest (created_at, strings) VALUES (now(), ['aaaaa', 'bbbbb', 'ccccc']); -INSERT INTO arraytest (created_at, strings) VALUES (now(), ['aaaaa', 'bbbbb', null]); -- { clientError 349 } +INSERT INTO arraytest (created_at, strings) VALUES (now(), ['aaaaa', 'bbbbb', null]); -- { clientError CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN } SELECT strings from arraytest; diff --git a/tests/queries/0_stateless/00948_values_interpreter_template.sql b/tests/queries/0_stateless/00948_values_interpreter_template.sql index a3d2ffd7452..918a051d621 100644 --- a/tests/queries/0_stateless/00948_values_interpreter_template.sql +++ b/tests/queries/0_stateless/00948_values_interpreter_template.sql @@ -23,9 +23,9 @@ INSERT INTO values_template VALUES ((1), lower(replaceAll('Hella', 'a', 'o')), 1 INSERT INTO values_template_nullable VALUES ((1), lower(replaceAll('Hella', 'a', 'o')), 1 + 2 + 3, arraySort(x -> assumeNotNull(x), [null, NULL::Nullable(UInt8)])), ((2), lower(replaceAll('Warld', 'b', 'o')), 4 - 5 + 6, arraySort(x -> assumeNotNull(x), [+1, -1, Null])), ((3), lower(replaceAll('Test', 'c', 'o')), 3 + 2 - 1, arraySort(x -> assumeNotNull(x), [1, nUlL, 3.14])), ((4), lower(replaceAll(null, 'c', 'o')), 6 + 5 - null, arraySort(x -> assumeNotNull(x), [3, 2, 1])); -INSERT INTO values_template_fallback VALUES (1 + x); -- { clientError 62 } -INSERT INTO values_template_fallback VALUES (abs(functionThatDoesNotExists(42))); -- { clientError 46 } -INSERT INTO values_template_fallback VALUES ([1]); -- { clientError 43 } +INSERT INTO values_template_fallback VALUES (1 + x); -- { clientError SYNTAX_ERROR } +INSERT INTO values_template_fallback VALUES (abs(functionThatDoesNotExists(42))); -- { clientError UNKNOWN_FUNCTION } +INSERT INTO values_template_fallback VALUES ([1]); -- { clientError ILLEGAL_TYPE_OF_ARGUMENT } INSERT INTO values_template_fallback VALUES (CAST(1, 'UInt8')), (CAST('2', 'UInt8')); SET input_format_values_accurate_types_of_literals = 0; diff --git a/tests/queries/0_stateless/01070_template_empty_file.sql b/tests/queries/0_stateless/01070_template_empty_file.sql index 46a8f38f80b..bbc67584ff7 100644 --- a/tests/queries/0_stateless/01070_template_empty_file.sql +++ 
b/tests/queries/0_stateless/01070_template_empty_file.sql @@ -1,2 +1,2 @@ -select 1 format Template settings format_template_row='01070_nonexistent_file.txt'; -- { clientError 107 } -select 1 format Template settings format_template_row='/dev/null'; -- { clientError 474 } +select 1 format Template settings format_template_row='01070_nonexistent_file.txt'; -- { clientError FILE_DOESNT_EXIST } +select 1 format Template settings format_template_row='/dev/null'; -- { clientError INVALID_TEMPLATE_FORMAT } diff --git a/tests/queries/0_stateless/01165_lost_part_empty_partition.sql b/tests/queries/0_stateless/01165_lost_part_empty_partition.sql index 84bee466365..b8998adbc52 100644 --- a/tests/queries/0_stateless/01165_lost_part_empty_partition.sql +++ b/tests/queries/0_stateless/01165_lost_part_empty_partition.sql @@ -4,7 +4,7 @@ create table rmt1 (d DateTime, n int) engine=ReplicatedMergeTree('/test/01165/{d create table rmt2 (d DateTime, n int) engine=ReplicatedMergeTree('/test/01165/{database}/rmt', '2') order by n partition by toYYYYMMDD(d); system stop replicated sends rmt1; -insert into rmt1 values (now(), arrayJoin([1, 2])); -- { clientError 36 } +insert into rmt1 values (now(), arrayJoin([1, 2])); -- { clientError BAD_ARGUMENTS } insert into rmt1(n) select * from system.numbers limit arrayJoin([1, 2]); -- { serverError BAD_ARGUMENTS, INVALID_LIMIT_EXPRESSION } insert into rmt1 values (now(), rand()); drop table rmt1; diff --git a/tests/queries/0_stateless/01173_transaction_control_queries.sql b/tests/queries/0_stateless/01173_transaction_control_queries.sql index 9d3f56f8f6b..a59abf30947 100644 --- a/tests/queries/0_stateless/01173_transaction_control_queries.sql +++ b/tests/queries/0_stateless/01173_transaction_control_queries.sql @@ -54,7 +54,7 @@ begin transaction; insert into mt1 values (6); insert into mt2 values (60); select 'on session close', arraySort(groupArray(n)) from (select n from mt1 union all select * from mt2); -insert into mt1 values ([1]); -- { clientError 43 } +insert into mt1 values ([1]); -- { clientError ILLEGAL_TYPE_OF_ARGUMENT } -- INSERT failures does not produce client reconnect anymore, so rollback can be done rollback; diff --git a/tests/queries/0_stateless/01188_attach_table_from_path.sql b/tests/queries/0_stateless/01188_attach_table_from_path.sql index 39ec643f623..d1b9493b6c2 100644 --- a/tests/queries/0_stateless/01188_attach_table_from_path.sql +++ b/tests/queries/0_stateless/01188_attach_table_from_path.sql @@ -7,7 +7,7 @@ drop table if exists mt; attach table test from 'some/path' (n UInt8) engine=Memory; -- { serverError NOT_IMPLEMENTED } attach table test from '/etc/passwd' (s String) engine=File(TSVRaw); -- { serverError PATH_ACCESS_DENIED } attach table test from '../../../../../../../../../etc/passwd' (s String) engine=File(TSVRaw); -- { serverError PATH_ACCESS_DENIED } -attach table test from 42 (s String) engine=File(TSVRaw); -- { clientError 62 } +attach table test from 42 (s String) engine=File(TSVRaw); -- { clientError SYNTAX_ERROR } insert into table function file('01188_attach/file/data.TSV', 'TSV', 's String, n UInt8') values ('file', 42); attach table file from '01188_attach/file' (s String, n UInt8) engine=File(TSV); diff --git a/tests/queries/0_stateless/01297_create_quota.sql b/tests/queries/0_stateless/01297_create_quota.sql index febdc7be6f5..ab84cbe86a5 100644 --- a/tests/queries/0_stateless/01297_create_quota.sql +++ b/tests/queries/0_stateless/01297_create_quota.sql @@ -156,8 +156,8 @@ CREATE QUOTA q13_01297 FOR INTERVAL 1 MINUTE 
MAX execution_time = '12G'; CREATE QUOTA q14_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '12Gi'; CREATE QUOTA q15_01297 FOR INTERVAL 1 MINUTE MAX query_selects = 1.5; CREATE QUOTA q16_01297 FOR INTERVAL 1 MINUTE MAX execution_time = 1.5; -CREATE QUOTA q17_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '1.5'; -- { clientError 27 } -CREATE QUOTA q18_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '1.5'; -- { clientError 27 } +CREATE QUOTA q17_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '1.5'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } +CREATE QUOTA q18_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '1.5'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } SHOW CREATE QUOTA q1_01297; SHOW CREATE QUOTA q2_01297; SHOW CREATE QUOTA q3_01297; @@ -205,8 +205,8 @@ SHOW CREATE QUOTA q2_01297; DROP QUOTA IF EXISTS q1_01297; DROP QUOTA IF EXISTS q2_01297; SELECT '-- underflow test'; -CREATE QUOTA q1_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '-1'; -- { clientError 72 } -CREATE QUOTA q2_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '-1'; -- { clientError 72 } +CREATE QUOTA q1_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '-1'; -- { clientError CANNOT_PARSE_NUMBER } +CREATE QUOTA q2_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '-1'; -- { clientError CANNOT_PARSE_NUMBER } SELECT '-- syntax test'; CREATE QUOTA q1_01297 FOR INTERVAL 1 MINUTE MAX query_selects = ' 12 '; CREATE QUOTA q2_01297 FOR INTERVAL 1 MINUTE MAX execution_time = ' 12 '; @@ -239,11 +239,11 @@ DROP QUOTA IF EXISTS q8_01297; DROP QUOTA IF EXISTS q9_01297; DROP QUOTA IF EXISTS q10_01297; SELECT '-- bad syntax test'; -CREATE QUOTA q1_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '1 1'; -- { clientError 27 } -CREATE QUOTA q2_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '1 1'; -- { clientError 27 } -CREATE QUOTA q3_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '1K 1'; -- { clientError 27 } -CREATE QUOTA q4_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '1K 1'; -- { clientError 27 } -CREATE QUOTA q5_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '1K1'; -- { clientError 27 } -CREATE QUOTA q6_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '1K1'; -- { clientError 27 } -CREATE QUOTA q7_01297 FOR INTERVAL 1 MINUTE MAX query_selects = 'foo'; -- { clientError 27 } -CREATE QUOTA q8_01297 FOR INTERVAL 1 MINUTE MAX execution_time = 'bar'; -- { clientError 27 } +CREATE QUOTA q1_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '1 1'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } +CREATE QUOTA q2_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '1 1'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } +CREATE QUOTA q3_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '1K 1'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } +CREATE QUOTA q4_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '1K 1'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } +CREATE QUOTA q5_01297 FOR INTERVAL 1 MINUTE MAX query_selects = '1K1'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } +CREATE QUOTA q6_01297 FOR INTERVAL 1 MINUTE MAX execution_time = '1K1'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } +CREATE QUOTA q7_01297 FOR INTERVAL 1 MINUTE MAX query_selects = 'foo'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } +CREATE QUOTA q8_01297 FOR INTERVAL 1 MINUTE MAX execution_time = 'bar'; -- { clientError CANNOT_PARSE_INPUT_ASSERTION_FAILED } diff --git a/tests/queries/0_stateless/01564_test_hint_woes.reference b/tests/queries/0_stateless/01564_test_hint_woes.reference index 
d1c938deb58..adb4cc61816 100644 --- a/tests/queries/0_stateless/01564_test_hint_woes.reference +++ b/tests/queries/0_stateless/01564_test_hint_woes.reference @@ -3,11 +3,11 @@ create table values_01564( a int, constraint c1 check a < 10) engine Memory; -- client error hint after broken insert values -insert into values_01564 values ('f'); -- { clientError 6 } -insert into values_01564 values ('f'); -- { clientError 6 } +insert into values_01564 values ('f'); -- { clientError CANNOT_PARSE_TEXT } +insert into values_01564 values ('f'); -- { clientError CANNOT_PARSE_TEXT } select 1; 1 -insert into values_01564 values ('f'); -- { clientError 6 } +insert into values_01564 values ('f'); -- { clientError CANNOT_PARSE_TEXT } select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER } select 1; 1 @@ -25,7 +25,7 @@ select 1; 1 -- a failing insert and then a normal insert (#https://github.com/ClickHouse/ClickHouse/issues/19353) CREATE TABLE t0 (c0 String, c1 Int32) ENGINE = Memory() ; -INSERT INTO t0(c0, c1) VALUES ("1",1) ; -- { clientError 47 } +INSERT INTO t0(c0, c1) VALUES ("1",1) ; -- { clientError UNKNOWN_IDENTIFIER } INSERT INTO t0(c0, c1) VALUES ('1', 1) ; -- the return code must be zero after the final query has failed with expected error insert into values_01564 values (11); -- { serverError VIOLATED_CONSTRAINT } diff --git a/tests/queries/0_stateless/01564_test_hint_woes.sql b/tests/queries/0_stateless/01564_test_hint_woes.sql index dd2c1accd4a..9864898b6b9 100644 --- a/tests/queries/0_stateless/01564_test_hint_woes.sql +++ b/tests/queries/0_stateless/01564_test_hint_woes.sql @@ -4,21 +4,21 @@ create table values_01564( constraint c1 check a < 10) engine Memory; -- client error hint after broken insert values -insert into values_01564 values ('f'); -- { clientError 6 } +insert into values_01564 values ('f'); -- { clientError CANNOT_PARSE_TEXT } -insert into values_01564 values ('f'); -- { clientError 6 } +insert into values_01564 values ('f'); -- { clientError CANNOT_PARSE_TEXT } select 1; -insert into values_01564 values ('f'); -- { clientError 6 } +insert into values_01564 values ('f'); -- { clientError CANNOT_PARSE_TEXT } select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER } -- syntax error hint after broken insert values -insert into values_01564 this is bad syntax values ('f'); -- { clientError 62 } +insert into values_01564 this is bad syntax values ('f'); -- { clientError SYNTAX_ERROR } -insert into values_01564 this is bad syntax values ('f'); -- { clientError 62 } +insert into values_01564 this is bad syntax values ('f'); -- { clientError SYNTAX_ERROR } select 1; -insert into values_01564 this is bad syntax values ('f'); -- { clientError 62 } +insert into values_01564 this is bad syntax values ('f'); -- { clientError SYNTAX_ERROR } select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER } -- server error hint after broken insert values (violated constraint) @@ -37,14 +37,14 @@ insert into values_01564 values (1); select 1; -- insert into values_01564 values (11) /*{ serverError VIOLATED_CONSTRAINT }*/; select 1; -- syntax error, where the last token we can parse is long before the semicolon. 
-select this is too many words for an alias; -- { clientError 62 } -OPTIMIZE TABLE values_01564 DEDUPLICATE BY; -- { clientError 62 } -OPTIMIZE TABLE values_01564 DEDUPLICATE BY a EXCEPT a; -- { clientError 62 } -select 'a' || distinct one || 'c' from system.one; -- { clientError 62 } +select this is too many words for an alias; -- { clientError SYNTAX_ERROR } +OPTIMIZE TABLE values_01564 DEDUPLICATE BY; -- { clientError SYNTAX_ERROR } +OPTIMIZE TABLE values_01564 DEDUPLICATE BY a EXCEPT a; -- { clientError SYNTAX_ERROR } +select 'a' || distinct one || 'c' from system.one; -- { clientError SYNTAX_ERROR } -- a failing insert and then a normal insert (#https://github.com/ClickHouse/ClickHouse/issues/19353) CREATE TABLE t0 (c0 String, c1 Int32) ENGINE = Memory() ; -INSERT INTO t0(c0, c1) VALUES ("1",1) ; -- { clientError 47 } +INSERT INTO t0(c0, c1) VALUES ("1",1) ; -- { clientError UNKNOWN_IDENTIFIER } INSERT INTO t0(c0, c1) VALUES ('1', 1) ; -- the return code must be zero after the final query has failed with expected error diff --git a/tests/queries/0_stateless/01581_deduplicate_by_columns_local.sql b/tests/queries/0_stateless/01581_deduplicate_by_columns_local.sql index 594a2f71162..5102820367c 100644 --- a/tests/queries/0_stateless/01581_deduplicate_by_columns_local.sql +++ b/tests/queries/0_stateless/01581_deduplicate_by_columns_local.sql @@ -32,10 +32,10 @@ OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(pk); -- { serverError THE OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(sk); -- { serverError THERE_IS_NO_COLUMN } -- sorting key column is missing [1] OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(partition_key); -- { serverError THERE_IS_NO_COLUMN } -- partitioning column is missing [1] -OPTIMIZE TABLE full_duplicates DEDUPLICATE BY; -- { clientError 62 } -- empty list is a syntax error -OPTIMIZE TABLE partial_duplicates DEDUPLICATE BY pk,sk,val,mat EXCEPT mat; -- { clientError 62 } -- invalid syntax -OPTIMIZE TABLE partial_duplicates DEDUPLICATE BY pk APPLY(pk + 1); -- { clientError 62 } -- APPLY column transformer is not supported -OPTIMIZE TABLE partial_duplicates DEDUPLICATE BY pk REPLACE(pk + 1); -- { clientError 62 } -- REPLACE column transformer is not supported +OPTIMIZE TABLE full_duplicates DEDUPLICATE BY; -- { clientError SYNTAX_ERROR } -- empty list is a syntax error +OPTIMIZE TABLE partial_duplicates DEDUPLICATE BY pk,sk,val,mat EXCEPT mat; -- { clientError SYNTAX_ERROR } -- invalid syntax +OPTIMIZE TABLE partial_duplicates DEDUPLICATE BY pk APPLY(pk + 1); -- { clientError SYNTAX_ERROR } -- APPLY column transformer is not supported +OPTIMIZE TABLE partial_duplicates DEDUPLICATE BY pk REPLACE(pk + 1); -- { clientError SYNTAX_ERROR } -- REPLACE column transformer is not supported -- Valid cases -- NOTE: here and below we need FINAL to force deduplication in such a small set of data in only 1 part. 
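All of the test edits in this part of the diff follow one convention: the numeric codes in `-- { clientError N }` hints are replaced with their symbolic names, which the test runner resolves against the server's error-code table. A hypothetical illustration of the hint syntax (the failing statement here is made up; only the hint format is taken from this diff):

```sql
-- The runner expects this statement to fail client-side with the named error;
-- SYNTAX_ERROR replaces the old magic number 62.
SELECT (1; -- { clientError SYNTAX_ERROR }
```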
diff --git a/tests/queries/0_stateless/01602_show_create_view.sql b/tests/queries/0_stateless/01602_show_create_view.sql index 0aaabc2fa49..066242a046e 100644 --- a/tests/queries/0_stateless/01602_show_create_view.sql +++ b/tests/queries/0_stateless/01602_show_create_view.sql @@ -28,13 +28,13 @@ SHOW CREATE VIEW test_1602.tbl; -- { serverError BAD_ARGUMENTS } SHOW CREATE TEMPORARY VIEW; -- { serverError UNKNOWN_TABLE } -SHOW CREATE VIEW; -- { clientError 62 } +SHOW CREATE VIEW; -- { clientError SYNTAX_ERROR } -SHOW CREATE DATABASE; -- { clientError 62 } +SHOW CREATE DATABASE; -- { clientError SYNTAX_ERROR } -SHOW CREATE DICTIONARY; -- { clientError 62 } +SHOW CREATE DICTIONARY; -- { clientError SYNTAX_ERROR } -SHOW CREATE TABLE; -- { clientError 62 } +SHOW CREATE TABLE; -- { clientError SYNTAX_ERROR } SHOW CREATE test_1602.VIEW; diff --git a/tests/queries/0_stateless/01604_explain_ast_of_nonselect_query.sql b/tests/queries/0_stateless/01604_explain_ast_of_nonselect_query.sql index 6a4c065fd2c..a70785ccec5 100644 --- a/tests/queries/0_stateless/01604_explain_ast_of_nonselect_query.sql +++ b/tests/queries/0_stateless/01604_explain_ast_of_nonselect_query.sql @@ -1,3 +1,3 @@ -explain ast; -- { clientError 62 } +explain ast; -- { clientError SYNTAX_ERROR } explain ast alter table t1 delete where date = today(); explain ast create function double AS (n) -> 2*n; diff --git a/tests/queries/0_stateless/01715_table_function_view_fix.sql b/tests/queries/0_stateless/01715_table_function_view_fix.sql index 5c24131b438..7407e6b0d37 100644 --- a/tests/queries/0_stateless/01715_table_function_view_fix.sql +++ b/tests/queries/0_stateless/01715_table_function_view_fix.sql @@ -1,3 +1,3 @@ -SELECT view(SELECT 1); -- { clientError 62 } +SELECT view(SELECT 1); -- { clientError SYNTAX_ERROR } SELECT sumIf(dummy, dummy) FROM remote('127.0.0.{1,2}', numbers(2, 100), view(SELECT CAST(NULL, 'Nullable(UInt8)') AS dummy FROM system.one)); -- { serverError UNKNOWN_FUNCTION } diff --git a/tests/queries/0_stateless/01715_tuple_insert_null_as_default.sql b/tests/queries/0_stateless/01715_tuple_insert_null_as_default.sql index 69fa8cd7f13..6bfcea84424 100644 --- a/tests/queries/0_stateless/01715_tuple_insert_null_as_default.sql +++ b/tests/queries/0_stateless/01715_tuple_insert_null_as_default.sql @@ -8,7 +8,7 @@ INSERT INTO test_tuple VALUES ((NULL, 1)); SELECT * FROM test_tuple; SET input_format_null_as_default = 0; -INSERT INTO test_tuple VALUES ((NULL, 2)); -- { clientError 53 } +INSERT INTO test_tuple VALUES ((NULL, 2)); -- { clientError TYPE_MISMATCH } SELECT * FROM test_tuple; DROP TABLE test_tuple; @@ -23,7 +23,7 @@ INSERT INTO test_tuple_nested_in_array VALUES ([(NULL, 2), (3, NULL), (NULL, 4)] SELECT * FROM test_tuple_nested_in_array; SET input_format_null_as_default = 0; -INSERT INTO test_tuple_nested_in_array VALUES ([(NULL, 1)]); -- { clientError 53 } +INSERT INTO test_tuple_nested_in_array VALUES ([(NULL, 1)]); -- { clientError TYPE_MISMATCH } SELECT * FROM test_tuple_nested_in_array; DROP TABLE test_tuple_nested_in_array; @@ -38,7 +38,7 @@ INSERT INTO test_tuple_nested_in_array_nested_in_tuple VALUES ( (NULL, [(NULL, 2 SELECT * FROM test_tuple_nested_in_array_nested_in_tuple; SET input_format_null_as_default = 0; -INSERT INTO test_tuple_nested_in_array_nested_in_tuple VALUES ( (NULL, [(NULL, 1)]) ); -- { clientError 53 } +INSERT INTO test_tuple_nested_in_array_nested_in_tuple VALUES ( (NULL, [(NULL, 1)]) ); -- { clientError TYPE_MISMATCH } SELECT * FROM test_tuple_nested_in_array_nested_in_tuple; DROP TABLE 
test_tuple_nested_in_array_nested_in_tuple; @@ -54,7 +54,7 @@ INSERT INTO test_tuple_nested_in_map VALUES (map('test', (NULL, 1))); SELECT * FROM test_tuple_nested_in_map; SET input_format_null_as_default = 0; -INSERT INTO test_tuple_nested_in_map VALUES (map('test', (NULL, 1))); -- { clientError 53 } +INSERT INTO test_tuple_nested_in_map VALUES (map('test', (NULL, 1))); -- { clientError TYPE_MISMATCH } SELECT * FROM test_tuple_nested_in_map; DROP TABLE test_tuple_nested_in_map; @@ -69,7 +69,7 @@ INSERT INTO test_tuple_nested_in_map_nested_in_tuple VALUES ( (NULL, map('test', SELECT * FROM test_tuple_nested_in_map_nested_in_tuple; SET input_format_null_as_default = 0; -INSERT INTO test_tuple_nested_in_map_nested_in_tuple VALUES ( (NULL, map('test', (NULL, 1))) ); -- { clientError 53 } +INSERT INTO test_tuple_nested_in_map_nested_in_tuple VALUES ( (NULL, map('test', (NULL, 1))) ); -- { clientError TYPE_MISMATCH } SELECT * FROM test_tuple_nested_in_map_nested_in_tuple; DROP TABLE test_tuple_nested_in_map_nested_in_tuple; diff --git a/tests/queries/0_stateless/01825_type_json_field.sql b/tests/queries/0_stateless/01825_type_json_field.sql index 6c906023cef..15fd7b3c250 100644 --- a/tests/queries/0_stateless/01825_type_json_field.sql +++ b/tests/queries/0_stateless/01825_type_json_field.sql @@ -22,7 +22,7 @@ INSERT INTO t_json_field VALUES (4, map('a', 30, 'b', 400)), (5, map('s', 'qqq', SELECT id, data.a, data.s, data.b, data.t FROM t_json_field ORDER BY id; SELECT DISTINCT toTypeName(data) FROM t_json_field; -INSERT INTO t_json_field VALUES (6, map(1, 2, 3, 4)); -- { clientError 53 } -INSERT INTO t_json_field VALUES (6, (1, 2, 3)); -- { clientError 53 } +INSERT INTO t_json_field VALUES (6, map(1, 2, 3, 4)); -- { clientError TYPE_MISMATCH } +INSERT INTO t_json_field VALUES (6, (1, 2, 3)); -- { clientError TYPE_MISMATCH } DROP TABLE t_json_field; diff --git a/tests/queries/0_stateless/01917_distinct_on.sql b/tests/queries/0_stateless/01917_distinct_on.sql index fe202184f07..93f7566036f 100644 --- a/tests/queries/0_stateless/01917_distinct_on.sql +++ b/tests/queries/0_stateless/01917_distinct_on.sql @@ -8,16 +8,16 @@ SELECT DISTINCT ON (a, b) * FROM t1; SELECT DISTINCT ON (a) * FROM t1; -- fuzzer will fail, enable when fixed --- SELECT DISTINCT ON (a, b) a, b, c FROM t1 LIMIT 1 BY a, b; -- { clientError 62 } +-- SELECT DISTINCT ON (a, b) a, b, c FROM t1 LIMIT 1 BY a, b; -- { clientError SYNTAX_ERROR } --- SELECT DISTINCT ON a, b a, b FROM t1; -- { clientError 62 } --- SELECT DISTINCT ON a a, b FROM t1; -- { clientError 62 } +-- SELECT DISTINCT ON a, b a, b FROM t1; -- { clientError SYNTAX_ERROR } +-- SELECT DISTINCT ON a a, b FROM t1; -- { clientError SYNTAX_ERROR } -- "Code: 47. 
DB::Exception: Missing columns: 'DISTINCT'" - error can be better -- SELECT DISTINCT ON (a, b) DISTINCT a, b FROM t1; -- { serverError UNKNOWN_IDENTIFIER } --- SELECT DISTINCT DISTINCT ON (a, b) a, b FROM t1; -- { clientError 62 } +-- SELECT DISTINCT DISTINCT ON (a, b) a, b FROM t1; -- { clientError SYNTAX_ERROR } --- SELECT ALL DISTINCT ON (a, b) a, b FROM t1; -- { clientError 62 } --- SELECT DISTINCT ON (a, b) ALL a, b FROM t1; -- { clientError 62 } +-- SELECT ALL DISTINCT ON (a, b) a, b FROM t1; -- { clientError SYNTAX_ERROR } +-- SELECT DISTINCT ON (a, b) ALL a, b FROM t1; -- { clientError SYNTAX_ERROR } DROP TABLE IF EXISTS t1; diff --git a/tests/queries/0_stateless/02126_alter_table_alter_column.sql b/tests/queries/0_stateless/02126_alter_table_alter_column.sql index 149c7fa6852..f86d1575efd 100644 --- a/tests/queries/0_stateless/02126_alter_table_alter_column.sql +++ b/tests/queries/0_stateless/02126_alter_table_alter_column.sql @@ -5,5 +5,5 @@ ALTER TABLE alter_column_02126 ALTER COLUMN x TYPE Float32; SHOW CREATE TABLE alter_column_02126; ALTER TABLE alter_column_02126 ALTER COLUMN x TYPE Float64, MODIFY COLUMN y Float32; SHOW CREATE TABLE alter_column_02126; -ALTER TABLE alter_column_02126 MODIFY COLUMN y TYPE Float32; -- { clientError 62 } -ALTER TABLE alter_column_02126 ALTER COLUMN y Float32; -- { clientError 62 } +ALTER TABLE alter_column_02126 MODIFY COLUMN y TYPE Float32; -- { clientError SYNTAX_ERROR } +ALTER TABLE alter_column_02126 ALTER COLUMN y Float32; -- { clientError SYNTAX_ERROR } diff --git a/tests/queries/0_stateless/02155_create_table_w_timezone.sql b/tests/queries/0_stateless/02155_create_table_w_timezone.sql index 0b72122ce39..015efe3b6ba 100644 --- a/tests/queries/0_stateless/02155_create_table_w_timezone.sql +++ b/tests/queries/0_stateless/02155_create_table_w_timezone.sql @@ -1,5 +1,5 @@ -create table t02155_t64_tz ( a DateTime64(9, America/Chicago)) Engine = Memory; -- { clientError 62 } -create table t02155_t_tz ( a DateTime(America/Chicago)) Engine = Memory; -- { clientError 62 } +create table t02155_t64_tz ( a DateTime64(9, America/Chicago)) Engine = Memory; -- { clientError SYNTAX_ERROR } +create table t02155_t_tz ( a DateTime(America/Chicago)) Engine = Memory; -- { clientError SYNTAX_ERROR } create table t02155_t64_tz ( a DateTime64(9, 'America/Chicago')) Engine = Memory; create table t02155_t_tz ( a DateTime('America/Chicago')) Engine = Memory; diff --git a/tests/queries/0_stateless/02184_default_table_engine.sql b/tests/queries/0_stateless/02184_default_table_engine.sql index 2c7ffbbced3..bce939b4e94 100644 --- a/tests/queries/0_stateless/02184_default_table_engine.sql +++ b/tests/queries/0_stateless/02184_default_table_engine.sql @@ -69,9 +69,9 @@ DROP TABLE t2; CREATE DATABASE test_02184 ORDER BY kek; -- {serverError INCORRECT_QUERY} CREATE DATABASE test_02184 SETTINGS x=1; -- {serverError UNKNOWN_SETTING} -CREATE TABLE table_02184 (x UInt8, y int, PRIMARY KEY (x)) ENGINE=MergeTree PRIMARY KEY y; -- {clientError 36} +CREATE TABLE table_02184 (x UInt8, y int, PRIMARY KEY (x)) ENGINE=MergeTree PRIMARY KEY y; -- {clientError BAD_ARGUMENTS} SET default_table_engine = 'MergeTree'; -CREATE TABLE table_02184 (x UInt8, y int, PRIMARY KEY (x)) PRIMARY KEY y; -- {clientError 36} +CREATE TABLE table_02184 (x UInt8, y int, PRIMARY KEY (x)) PRIMARY KEY y; -- {clientError BAD_ARGUMENTS} CREATE TABLE mt (a UInt64, b Nullable(String), PRIMARY KEY (a, coalesce(b, 'test')), INDEX b_index b TYPE set(123) GRANULARITY 1); SHOW CREATE TABLE mt; diff --git 
a/tests/queries/0_stateless/02267_insert_empty_data.sql b/tests/queries/0_stateless/02267_insert_empty_data.sql index 9c92fc2a3f7..b39bd807844 100644 --- a/tests/queries/0_stateless/02267_insert_empty_data.sql +++ b/tests/queries/0_stateless/02267_insert_empty_data.sql @@ -2,7 +2,7 @@ DROP TABLE IF EXISTS t; CREATE TABLE t (n UInt32) ENGINE=Memory; -INSERT INTO t VALUES; -- { clientError 108 } +INSERT INTO t VALUES; -- { clientError NO_DATA_TO_INSERT } set throw_if_no_data_to_insert = 0; diff --git a/tests/queries/0_stateless/02294_decimal_second_errors.sql b/tests/queries/0_stateless/02294_decimal_second_errors.sql index 52d2279be41..b9b6d0a6223 100644 --- a/tests/queries/0_stateless/02294_decimal_second_errors.sql +++ b/tests/queries/0_stateless/02294_decimal_second_errors.sql @@ -1,6 +1,6 @@ -SELECT 1 SETTINGS max_execution_time=NaN; -- { clientError 72 } -SELECT 1 SETTINGS max_execution_time=Infinity; -- { clientError 72 }; -SELECT 1 SETTINGS max_execution_time=-Infinity; -- { clientError 72 }; +SELECT 1 SETTINGS max_execution_time=NaN; -- { clientError CANNOT_PARSE_NUMBER } +SELECT 1 SETTINGS max_execution_time=Infinity; -- { clientError CANNOT_PARSE_NUMBER }; +SELECT 1 SETTINGS max_execution_time=-Infinity; -- { clientError CANNOT_PARSE_NUMBER }; -- Ok values SELECT 1 SETTINGS max_execution_time=-0.5; diff --git a/tests/queries/0_stateless/02366_kql_summarize.sql b/tests/queries/0_stateless/02366_kql_summarize.sql index 861811711f0..ca16bc3a755 100644 --- a/tests/queries/0_stateless/02366_kql_summarize.sql +++ b/tests/queries/0_stateless/02366_kql_summarize.sql @@ -54,7 +54,7 @@ Customers | summarize dcount(Education); Customers | summarize dcountif(Education, Occupation=='Professional'); Customers | summarize count_ = count() by bin(Age, 10) | order by count_ asc; Customers | summarize job_count = count() by Occupation | where job_count > 0 | order by Occupation; -Customers | summarize 'Edu Count'=count() by Education | sort by 'Edu Count' desc; -- { clientError 62 } +Customers | summarize 'Edu Count'=count() by Education | sort by 'Edu Count' desc; -- { clientError SYNTAX_ERROR } print '-- make_list() --'; Customers | summarize f_list = make_list(Education) by Occupation | sort by Occupation; diff --git a/tests/queries/0_stateless/02469_fix_aliases_parser.sql b/tests/queries/0_stateless/02469_fix_aliases_parser.sql index 227d8becdb6..65eea8e9cd8 100644 --- a/tests/queries/0_stateless/02469_fix_aliases_parser.sql +++ b/tests/queries/0_stateless/02469_fix_aliases_parser.sql @@ -1,9 +1,9 @@ -SELECT sum(number number number) FROM numbers(10); -- { clientError 62 } -SELECT sum(number number) FROM numbers(10); -- { clientError 62 } +SELECT sum(number number number) FROM numbers(10); -- { clientError SYNTAX_ERROR } +SELECT sum(number number) FROM numbers(10); -- { clientError SYNTAX_ERROR } SELECT sum(number AS number) FROM numbers(10); -SELECT [number number number] FROM numbers(1); -- { clientError 62 } -SELECT [number number] FROM numbers(1); -- { clientError 62 } +SELECT [number number number] FROM numbers(1); -- { clientError SYNTAX_ERROR } +SELECT [number number] FROM numbers(1); -- { clientError SYNTAX_ERROR } SELECT [number AS number] FROM numbers(1); -SELECT cast('1234' lhs lhs, 'UInt32'), lhs; -- { clientError 62 } \ No newline at end of file +SELECT cast('1234' lhs lhs, 'UInt32'), lhs; -- { clientError SYNTAX_ERROR } \ No newline at end of file diff --git a/tests/queries/0_stateless/02472_segfault_expression_parser.sql 
b/tests/queries/0_stateless/02472_segfault_expression_parser.sql index 285de80a64a..4994da5dd85 100644 --- a/tests/queries/0_stateless/02472_segfault_expression_parser.sql +++ b/tests/queries/0_stateless/02472_segfault_expression_parser.sql @@ -1 +1 @@ -SELECT TIMESTAMP_SUB (SELECT ILIKE INTO OUTFILE , accurateCast ) FROM TIMESTAMP_SUB ( MINUTE , ) GROUP BY accurateCast; -- { clientError 62 } +SELECT TIMESTAMP_SUB (SELECT ILIKE INTO OUTFILE , accurateCast ) FROM TIMESTAMP_SUB ( MINUTE , ) GROUP BY accurateCast; -- { clientError SYNTAX_ERROR } diff --git a/tests/queries/0_stateless/02482_load_parts_refcounts.sh b/tests/queries/0_stateless/02482_load_parts_refcounts.sh index fe3cee1359e..5303824d97c 100755 --- a/tests/queries/0_stateless/02482_load_parts_refcounts.sh +++ b/tests/queries/0_stateless/02482_load_parts_refcounts.sh @@ -10,7 +10,7 @@ $CLICKHOUSE_CLIENT -n --query " CREATE TABLE load_parts_refcounts (id UInt32) ENGINE = ReplicatedMergeTree('/test/02482_load_parts_refcounts/{database}/{table}', '1') - ORDER BY id; + ORDER BY id SETTINGS old_parts_lifetime=100500; SYSTEM STOP MERGES load_parts_refcounts; diff --git a/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.reference b/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.reference index 5d1bfd22195..6d6e5a0cc03 100644 --- a/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.reference +++ b/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.reference @@ -147,3 +147,5 @@ with '2018-01-12 22:33:44.55' as s, toDateTime64(s, 6) as datetime64 SELECT form 550000000 with '2018-01-12 22:33:44.55' as s, toDateTime64(s, 6) as datetime64 SELECT formatDateTimeInJodaSyntax(datetime64, 'SSSSSSSSSS'); 550000000 +150 +300 diff --git a/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.sql b/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.sql index 89021e8561f..b2b29cc55a4 100644 --- a/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.sql +++ b/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.sql @@ -93,3 +93,7 @@ SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'Z'); -- { se SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'b'); -- { serverError NOT_IMPLEMENTED } SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), '\'aaaa\'\''); -- { serverError BAD_ARGUMENTS } + +-- Bug #64613 +select formatDateTimeInJodaSyntax(toDate('2012-05-29'), 'D'); +select formatDateTimeInJodaSyntax(toDateTime('2010-10-27 13:41:27'), 'D'); diff --git a/tests/queries/0_stateless/02554_invalid_create_view_syntax.sql b/tests/queries/0_stateless/02554_invalid_create_view_syntax.sql index bf16d635312..ad6c83cdeb6 100644 --- a/tests/queries/0_stateless/02554_invalid_create_view_syntax.sql +++ b/tests/queries/0_stateless/02554_invalid_create_view_syntax.sql @@ -1 +1 @@ -CREATE VIEW X TO Y AS SELECT 1; -- { clientError 62 } +CREATE VIEW X TO Y AS SELECT 1; -- { clientError SYNTAX_ERROR } diff --git a/tests/queries/0_stateless/02560_agg_state_deserialization_hash_table_crash.sql b/tests/queries/0_stateless/02560_agg_state_deserialization_hash_table_crash.sql index d85cacc70be..9f832f02840 100644 --- a/tests/queries/0_stateless/02560_agg_state_deserialization_hash_table_crash.sql +++ b/tests/queries/0_stateless/02560_agg_state_deserialization_hash_table_crash.sql @@ -1,4 +1,4 @@ DROP TABLE IF EXISTS tab; create table tab (d Int64, s AggregateFunction(groupUniqArrayArray, Array(UInt64)), c 
diff --git a/tests/queries/0_stateless/02554_invalid_create_view_syntax.sql b/tests/queries/0_stateless/02554_invalid_create_view_syntax.sql
index bf16d635312..ad6c83cdeb6 100644
--- a/tests/queries/0_stateless/02554_invalid_create_view_syntax.sql
+++ b/tests/queries/0_stateless/02554_invalid_create_view_syntax.sql
@@ -1 +1 @@
-CREATE VIEW X TO Y AS SELECT 1; -- { clientError 62 }
+CREATE VIEW X TO Y AS SELECT 1; -- { clientError SYNTAX_ERROR }
diff --git a/tests/queries/0_stateless/02560_agg_state_deserialization_hash_table_crash.sql b/tests/queries/0_stateless/02560_agg_state_deserialization_hash_table_crash.sql
index d85cacc70be..9f832f02840 100644
--- a/tests/queries/0_stateless/02560_agg_state_deserialization_hash_table_crash.sql
+++ b/tests/queries/0_stateless/02560_agg_state_deserialization_hash_table_crash.sql
@@ -1,4 +1,4 @@
 DROP TABLE IF EXISTS tab;
 create table tab (d Int64, s AggregateFunction(groupUniqArrayArray, Array(UInt64)), c SimpleAggregateFunction(groupUniqArrayArray, Array(UInt64))) engine = SummingMergeTree() order by d;
-INSERT INTO tab VALUES (1, 'このコー'); -- { clientError 128 }
+INSERT INTO tab VALUES (1, 'このコー'); -- { clientError TOO_LARGE_ARRAY_SIZE }
 DROP TABLE tab;
diff --git a/tests/queries/0_stateless/02703_row_policy_for_database.sql b/tests/queries/0_stateless/02703_row_policy_for_database.sql
index 03183a96b98..51ce5f4f870 100644
--- a/tests/queries/0_stateless/02703_row_policy_for_database.sql
+++ b/tests/queries/0_stateless/02703_row_policy_for_database.sql
@@ -22,7 +22,7 @@ SHOW CREATE POLICY ON db1_02703.`*`;
 DROP POLICY db1_02703 ON db1_02703.*;
 DROP POLICY tbl1_02703 ON db1_02703.table;
-CREATE ROW POLICY any_02703 ON *.some_table USING 1 AS PERMISSIVE TO ALL; -- { clientError 62 }
+CREATE ROW POLICY any_02703 ON *.some_table USING 1 AS PERMISSIVE TO ALL; -- { clientError SYNTAX_ERROR }
 CREATE TABLE 02703_rqtable_default (x UInt8) ENGINE = MergeTree ORDER BY x;
diff --git a/tests/queries/0_stateless/02751_parallel_replicas_bug_chunkinfo_not_set.sql b/tests/queries/0_stateless/02751_parallel_replicas_bug_chunkinfo_not_set.sql
index 5ec0a1fcc31..a7112e5484b 100644
--- a/tests/queries/0_stateless/02751_parallel_replicas_bug_chunkinfo_not_set.sql
+++ b/tests/queries/0_stateless/02751_parallel_replicas_bug_chunkinfo_not_set.sql
@@ -18,7 +18,7 @@ INSERT INTO join_inner_table__fuzz_1 SELECT
 FROM generateRandom('number Int64, value1 String, value2 String, time Int64', 1, 10, 2)
 LIMIT 100;
-SET max_parallel_replicas = 3, prefer_localhost_replica = 1, cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost', allow_experimental_parallel_reading_from_replicas = 1;
+SET max_parallel_replicas = 3, cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost', allow_experimental_parallel_reading_from_replicas = 1, parallel_replicas_for_non_replicated_merge_tree=1;
 -- SELECT query will write a Warning to the logs
 SET send_logs_level='error';
diff --git a/tests/queries/0_stateless/02764_parallel_replicas_plain_merge_tree.sql b/tests/queries/0_stateless/02764_parallel_replicas_plain_merge_tree.sql
index 9caa6f76e89..e166ce9b284 100644
--- a/tests/queries/0_stateless/02764_parallel_replicas_plain_merge_tree.sql
+++ b/tests/queries/0_stateless/02764_parallel_replicas_plain_merge_tree.sql
@@ -1,4 +1,5 @@
-CREATE TABLE IF NOT EXISTS parallel_replicas_plain (x String) ENGINE=MergeTree() ORDER BY x;
+DROP TABLE IF EXISTS parallel_replicas_plain;
+CREATE TABLE parallel_replicas_plain (x String) ENGINE=MergeTree() ORDER BY x;
 INSERT INTO parallel_replicas_plain SELECT toString(number) FROM numbers(10);
 SET max_parallel_replicas=3, allow_experimental_parallel_reading_from_replicas=1, cluster_for_parallel_replicas='parallel_replicas';
@@ -13,4 +14,4 @@ SET parallel_replicas_for_non_replicated_merge_tree = 1;
 SELECT x FROM parallel_replicas_plain LIMIT 1 FORMAT Null;
 SELECT max(length(x)) FROM parallel_replicas_plain FORMAT Null;
-DROP TABLE IF EXISTS parallel_replicas_plain;
+DROP TABLE parallel_replicas_plain;
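The two parallel-replicas hunks above drop prefer_localhost_replica and instead opt plain (non-replicated) MergeTree tables in explicitly. The minimal setting block these tests now rely on looks like this; the cluster name comes from the CI configuration and is not defined here:

SET max_parallel_replicas = 3,
    allow_experimental_parallel_reading_from_replicas = 1,
    cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost',
    parallel_replicas_for_non_replicated_merge_tree = 1;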
diff --git a/tests/queries/0_stateless/02811_parallel_replicas_prewhere_count.sql b/tests/queries/0_stateless/02811_parallel_replicas_prewhere_count.sql
index 14edeecf57e..294c1325ba6 100644
--- a/tests/queries/0_stateless/02811_parallel_replicas_prewhere_count.sql
+++ b/tests/queries/0_stateless/02811_parallel_replicas_prewhere_count.sql
@@ -10,7 +10,6 @@ SELECT count() FROM users PREWHERE uid > 2000;
 -- enable parallel replicas but with high rows threshold
 SET
-skip_unavailable_shards=1,
 allow_experimental_parallel_reading_from_replicas=1,
 max_parallel_replicas=3,
 cluster_for_parallel_replicas='parallel_replicas',
@@ -20,4 +19,4 @@ parallel_replicas_min_number_of_rows_per_replica=1000;
 SELECT '-- count() with parallel replicas -------';
 SELECT count() FROM users PREWHERE uid > 2000;
-DROP TABLE IF EXISTS users;
+DROP TABLE users;
diff --git a/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.reference b/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.reference
index 5efe10177dd..1d3b4efa02d 100644
--- a/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.reference
+++ b/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.reference
@@ -10,3 +10,4 @@ tablefunc03 StorageProxy CREATE TABLE default.tablefunc03 (`a` Int32) AS sqlite
 tablefunc04 StorageProxy CREATE TABLE default.tablefunc04 (`a` Int32) AS mongodb(\'127.0.0.1:27017\', \'test\', \'my_collection\', \'test_user\', \'[HIDDEN]\', \'a Int\') [] 1 1
 tablefunc05 StorageProxy CREATE TABLE default.tablefunc05 (`a` Int32) AS redis(\'127.0.0.1:6379\', \'key\', \'key UInt32\') [] 1 1
 tablefunc06 StorageProxy CREATE TABLE default.tablefunc06 (`a` Int32) AS s3(\'http://some_addr:9000/cloud-storage-01/data.tsv\', \'M9O7o0SX5I4udXhWxI12\', \'[HIDDEN]\', \'TSV\') [] 1 1
+StorageProxy
diff --git a/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.sql b/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.sql
index 783a922dfa4..14768a95006 100644
--- a/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.sql
+++ b/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.sql
@@ -21,7 +21,7 @@ SELECT name, engine, engine_full, create_table_query, data_paths, notEmpty([meta
 WHERE name like '%tablefunc%' and database=currentDatabase()
 ORDER BY name;
-DETACH TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc01; 
+DETACH TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc01;
 DETACH TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc02;
 DETACH TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc03;
 DETACH TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc04;
@@ -40,4 +40,7 @@ SELECT name, engine, engine_full, create_table_query, data_paths, notEmpty([meta
 WHERE name like '%tablefunc%' and database=currentDatabase()
 ORDER BY name;
+SELECT count() FROM {CLICKHOUSE_DATABASE:Identifier}.tablefunc01; -- { serverError POSTGRESQL_CONNECTION_FAILURE }
+SELECT engine FROM system.tables WHERE name = 'tablefunc01' and database=currentDatabase();
+
 DROP DATABASE IF EXISTS {CLICKHOUSE_DATABASE:Identifier};
diff --git a/tests/queries/0_stateless/02897_alter_partition_parameters.sql b/tests/queries/0_stateless/02897_alter_partition_parameters.sql
index 0be7308ed1a..6150642f838 100644
--- a/tests/queries/0_stateless/02897_alter_partition_parameters.sql
+++ b/tests/queries/0_stateless/02897_alter_partition_parameters.sql
@@ -43,7 +43,7 @@ SELECT count() FROM test;
 INSERT INTO test VALUES(toDate('2023-10-09'));
 -- for some reason only tuples are allowed as non-string arguments
-ALTER TABLE test DROP PARTITION toMonday({partition:String}); --{clientError 62}
+ALTER TABLE test DROP PARTITION toMonday({partition:String}); --{clientError SYNTAX_ERROR}
 set param_partition_id = '20231009';
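For 02897, only the function-call form is rejected; a bare query parameter is a valid partition expression. A sketch of the contrast, where the accepted variants are inferred from the visible context rather than quoted from the test:

SET param_partition = '2023-10-09';
ALTER TABLE test DROP PARTITION {partition:String};           -- accepted (assumed form)
SET param_partition_id = '20231009';
ALTER TABLE test DROP PARTITION ID {partition_id:String};     -- accepted (assumed form)
ALTER TABLE test DROP PARTITION toMonday({partition:String}); --{clientError SYNTAX_ERROR}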
diff --git a/tests/queries/0_stateless/02950_dictionary_short_circuit.sql b/tests/queries/0_stateless/02950_dictionary_short_circuit.sql
index f4575bcd115..12c934a8d2d 100644
--- a/tests/queries/0_stateless/02950_dictionary_short_circuit.sql
+++ b/tests/queries/0_stateless/02950_dictionary_short_circuit.sql
@@ -79,6 +79,10 @@ SELECT dictGetOrDefault('hashed_array_dictionary', 'v2', id+1, intDiv(NULL, id))
 FROM dictionary_source_table;
 SELECT dictGetOrDefault('hashed_array_dictionary', 'v3', id+1, intDiv(NULL, id)) FROM dictionary_source_table;
+-- Fuzzer
+SELECT dictGetOrDefault('hashed_array_dictionary', ('v1', 'v2'), toUInt128(0), (materialize(toNullable(NULL)), intDiv(1, id), intDiv(1, id))) FROM dictionary_source_table; -- { serverError TYPE_MISMATCH }
+SELECT materialize(materialize(toLowCardinality(15))), dictGetOrDefault('hashed_array_dictionary', ('v1', 'v2'), 0, (intDiv(materialize(NULL), id), intDiv(1, id), intDiv(1, id))) FROM dictionary_source_table; -- { serverError TYPE_MISMATCH }
+SELECT dictGetOrDefault('hashed_array_dictionary', ('v1', 'v2'), 0, (toNullable(NULL), intDiv(1, id), intDiv(1, id))) FROM dictionary_source_table; -- { serverError TYPE_MISMATCH }
 DROP DICTIONARY hashed_array_dictionary;
@@ -189,15 +193,15 @@ LIFETIME(3600);
 SELECT 'IP TRIE dictionary';
 SELECT dictGetOrDefault('ip_dictionary', 'cca2', toIPv4('202.79.32.10'), intDiv(0, id)) FROM ip_dictionary_source_table;
-SELECT dictGetOrDefault('ip_dictionary', ('asn', 'cca2'), IPv6StringToNum('2a02:6b8:1::1'), 
+SELECT dictGetOrDefault('ip_dictionary', ('asn', 'cca2'), IPv6StringToNum('2a02:6b8:1::1'),
 (intDiv(1, id), intDiv(1, id))) FROM ip_dictionary_source_table;
 DROP DICTIONARY ip_dictionary;
 DROP TABLE IF EXISTS polygon_dictionary_source_table;
-CREATE TABLE polygon_dictionary_source_table 
+CREATE TABLE polygon_dictionary_source_table
 (
-    key Array(Array(Array(Tuple(Float64, Float64)))), 
+    key Array(Array(Array(Tuple(Float64, Float64)))),
     name Nullable(String)
 ) ENGINE=TinyLog;
@@ -258,7 +262,9 @@ LIFETIME(0)
 LAYOUT(regexp_tree);
 SELECT 'Regular Expression Tree dictionary';
-SELECT dictGetOrDefault('regexp_dict', 'name', concat(toString(number), '/tclwebkit', toString(number)), 
+SELECT dictGetOrDefault('regexp_dict', 'name', concat(toString(number), '/tclwebkit', toString(number)),
 intDiv(1,number)) FROM numbers(2);
+-- Fuzzer
+SELECT dictGetOrDefault('regexp_dict', 'name', concat('/tclwebkit', toString(number)), intDiv(1, number)) FROM numbers(2); -- { serverError ILLEGAL_DIVISION }
 DROP DICTIONARY regexp_dict;
 DROP TABLE regexp_dictionary_source_table;
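The fuzzer cases added to 02950 lean on the short-circuit contract of dictGetOrDefault: the default expression (deliberately a division that would throw) may only be evaluated for keys that miss the dictionary, which is why dropping the toString(number) prefix from the regexp key flips the query from success to ILLEGAL_DIVISION. The same rule can be demonstrated with plain if(), so it runs without any dictionary:

SET short_circuit_function_evaluation = 'enable';
-- the division branch is never computed for the row where number = 0
SELECT if(number = 0, 0, intDiv(1, number)) FROM numbers(2);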
diff --git a/tests/queries/0_stateless/03033_set_index_in.reference b/tests/queries/0_stateless/03033_set_index_in.reference
new file mode 100644
index 00000000000..3800acc0458
--- /dev/null
+++ b/tests/queries/0_stateless/03033_set_index_in.reference
@@ -0,0 +1,3 @@
+32768
+49152
+32768
diff --git a/tests/queries/0_stateless/03033_set_index_in.sql b/tests/queries/0_stateless/03033_set_index_in.sql
new file mode 100644
index 00000000000..ad42a576444
--- /dev/null
+++ b/tests/queries/0_stateless/03033_set_index_in.sql
@@ -0,0 +1,9 @@
+create table a (k UInt64, v UInt64, index i (v) type set(100) granularity 2) engine MergeTree order by k settings index_granularity=8192, index_granularity_bytes=1000000000, min_index_granularity_bytes=0;
+insert into a select number, intDiv(number, 4096) from numbers(1000000);
+select sum(1+ignore(*)) from a where indexHint(v in (20, 40));
+select sum(1+ignore(*)) from a where indexHint(v in (select 20 union all select 40 union all select 60));
+
+SELECT 1 FROM a PREWHERE v IN (SELECT 1) WHERE v IN (SELECT 2);
+
+select 1 from a where indexHint(indexHint(materialize(0)));
+select sum(1+ignore(*)) from a where indexHint(indexHint(v in (20, 40)));
\ No newline at end of file
diff --git a/tests/queries/0_stateless/03131_hilbert_coding.reference b/tests/queries/0_stateless/03131_hilbert_coding.reference
new file mode 100644
index 00000000000..bdb578483fa
--- /dev/null
+++ b/tests/queries/0_stateless/03131_hilbert_coding.reference
@@ -0,0 +1,8 @@
+----- START -----
+----- CONST -----
+133
+31
+(3,4)
+----- 4294967296, 2 -----
+----- ERRORS -----
+----- END -----
diff --git a/tests/queries/0_stateless/03131_hilbert_coding.sql b/tests/queries/0_stateless/03131_hilbert_coding.sql
new file mode 100644
index 00000000000..ed293dc6910
--- /dev/null
+++ b/tests/queries/0_stateless/03131_hilbert_coding.sql
@@ -0,0 +1,55 @@
+SELECT '----- START -----';
+drop table if exists hilbert_numbers_03131;
+create table hilbert_numbers_03131(
+    n1 UInt32,
+    n2 UInt32
+)
+    Engine=MergeTree()
+    ORDER BY n1 SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi';
+
+SELECT '----- CONST -----';
+select hilbertEncode(133);
+select hilbertEncode(3, 4);
+select hilbertDecode(2, 31);
+
+SELECT '----- 4294967296, 2 -----';
+insert into hilbert_numbers_03131
+select n1.number, n2.number
+from numbers(pow(2, 32)-8,8) n1
+    cross join numbers(pow(2, 32)-8, 8) n2
+;
+
+drop table if exists hilbert_numbers_1_03131;
+create table hilbert_numbers_1_03131(
+    n1 UInt64,
+    n2 UInt64
+)
+    Engine=MergeTree()
+    ORDER BY n1 SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi';
+
+insert into hilbert_numbers_1_03131
+select untuple(hilbertDecode(2, hilbertEncode(n1, n2)))
+from hilbert_numbers_03131;
+
+(
+    select n1, n2 from hilbert_numbers_03131
+    union distinct
+    select n1, n2 from hilbert_numbers_1_03131
+)
+except
+(
+    select n1, n2 from hilbert_numbers_03131
+    intersect
+    select n1, n2 from hilbert_numbers_1_03131
+);
+drop table if exists hilbert_numbers_1_03131;
+
+select '----- ERRORS -----';
+select hilbertEncode(); -- { serverError TOO_FEW_ARGUMENTS_FOR_FUNCTION }
+select hilbertDecode(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
+select hilbertEncode('text'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+select hilbertDecode('text', 'text'); -- { serverError ILLEGAL_COLUMN }
+select hilbertEncode((1, 2), 3); -- { serverError ARGUMENT_OUT_OF_BOUND }
+
+SELECT '----- END -----';
+drop table if exists hilbert_numbers_03131;
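The union/except block in 03131 checks, in bulk, that hilbertDecode inverts hilbertEncode near the top of the UInt32 range. The same invariant on a single pair, matching the constants in the reference file (31 encodes the point (3,4)):

SELECT hilbertDecode(2, hilbertEncode(3, 4)) = (3, 4); -- prints 1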
diff --git a/tests/queries/0_stateless/03135_keeper_client_find_commands.sh b/tests/queries/0_stateless/03135_keeper_client_find_commands.sh
index 0acc4014f1f..0f57694028d 100755
--- a/tests/queries/0_stateless/03135_keeper_client_find_commands.sh
+++ b/tests/queries/0_stateless/03135_keeper_client_find_commands.sh
@@ -21,7 +21,7 @@ $CLICKHOUSE_KEEPER_CLIENT -q "create $path/1/d/c 'foobar'"
 echo 'find_super_nodes'
 $CLICKHOUSE_KEEPER_CLIENT -q "find_super_nodes 1000000000"
-$CLICKHOUSE_KEEPER_CLIENT -q "find_super_nodes 3 $path"
+$CLICKHOUSE_KEEPER_CLIENT -q "find_super_nodes 3 $path" | sort
 echo 'find_big_family'
 $CLICKHOUSE_KEEPER_CLIENT -q "find_big_family $path 3"
diff --git a/tests/queries/0_stateless/03147_parquet_memory_tracking.reference b/tests/queries/0_stateless/03147_parquet_memory_tracking.reference
new file mode 100644
index 00000000000..573541ac970
--- /dev/null
+++ b/tests/queries/0_stateless/03147_parquet_memory_tracking.reference
@@ -0,0 +1 @@
+0
diff --git a/tests/queries/0_stateless/03147_parquet_memory_tracking.sql b/tests/queries/0_stateless/03147_parquet_memory_tracking.sql
new file mode 100644
index 00000000000..aeca04ffb9d
--- /dev/null
+++ b/tests/queries/0_stateless/03147_parquet_memory_tracking.sql
@@ -0,0 +1,13 @@
+-- Tags: no-fasttest, no-parallel
+
+-- Create an ~80 MB parquet file with one row group and one column.
+insert into function file('03147_parquet_memory_tracking.parquet') select number from numbers(10000000) settings output_format_parquet_compression_method='none', output_format_parquet_row_group_size=1000000000000, engine_file_truncate_on_insert=1;
+
+-- Try to read it with 60 MB memory limit. Should fail because we read the 80 MB column all at once.
+select sum(ignore(*)) from file('03147_parquet_memory_tracking.parquet') settings max_memory_usage=60000000; -- { serverError CANNOT_ALLOCATE_MEMORY }
+
+-- Try to read it with 500 MB memory limit, just in case.
+select sum(ignore(*)) from file('03147_parquet_memory_tracking.parquet') settings max_memory_usage=500000000;
+
+-- Truncate the file to avoid leaving too much garbage behind.
+insert into function file('03147_parquet_memory_tracking.parquet') select number from numbers(1) settings engine_file_truncate_on_insert=1;
diff --git a/tests/queries/0_stateless/03147_table_function_loop.sql b/tests/queries/0_stateless/03147_table_function_loop.sql
index af48e4b11e3..aa3c8e2def5 100644
--- a/tests/queries/0_stateless/03147_table_function_loop.sql
+++ b/tests/queries/0_stateless/03147_table_function_loop.sql
@@ -12,3 +12,5 @@ USE 03147_db;
 SELECT * FROM loop(03147_db.t) LIMIT 15;
 SELECT * FROM loop(t) LIMIT 15;
 SELECT * FROM loop(03147_db, t) LIMIT 15;
+
+SELECT * FROM loop('', '') -- { serverError UNKNOWN_TABLE }
diff --git a/tests/queries/0_stateless/03164_analyzer_global_in_alias.reference b/tests/queries/0_stateless/03164_analyzer_global_in_alias.reference
new file mode 100644
index 00000000000..459605fc1db
--- /dev/null
+++ b/tests/queries/0_stateless/03164_analyzer_global_in_alias.reference
@@ -0,0 +1,4 @@
+1 1
+1
+1 1
+1
diff --git a/tests/queries/0_stateless/03164_analyzer_global_in_alias.sql b/tests/queries/0_stateless/03164_analyzer_global_in_alias.sql
new file mode 100644
index 00000000000..00c293334ee
--- /dev/null
+++ b/tests/queries/0_stateless/03164_analyzer_global_in_alias.sql
@@ -0,0 +1,6 @@
+SET allow_experimental_analyzer=1;
+SELECT 1 GLOBAL IN (SELECT 1) AS s, s FROM remote('127.0.0.{2,3}', system.one) GROUP BY 1;
+SELECT 1 GLOBAL IN (SELECT 1) AS s FROM remote('127.0.0.{2,3}', system.one) GROUP BY 1;
+
+SELECT 1 GLOBAL IN (SELECT 1) AS s, s FROM remote('127.0.0.{1,3}', system.one) GROUP BY 1;
+SELECT 1 GLOBAL IN (SELECT 1) AS s FROM remote('127.0.0.{1,3}', system.one) GROUP BY 1;
diff --git a/tests/queries/0_stateless/03164_analyzer_rewrite_aggregate_function_with_if.reference b/tests/queries/0_stateless/03164_analyzer_rewrite_aggregate_function_with_if.reference
new file mode 100644
index 00000000000..d00491fd7e5
--- /dev/null
+++ b/tests/queries/0_stateless/03164_analyzer_rewrite_aggregate_function_with_if.reference
@@ -0,0 +1 @@
+1
diff --git a/tests/queries/0_stateless/03164_analyzer_rewrite_aggregate_function_with_if.sql b/tests/queries/0_stateless/03164_analyzer_rewrite_aggregate_function_with_if.sql
new file mode 100644
index 00000000000..52f767d8aae
--- /dev/null
+++ b/tests/queries/0_stateless/03164_analyzer_rewrite_aggregate_function_with_if.sql
@@ -0,0 +1 @@
+SELECT countIf(multiIf(number < 2, NULL, if(number = 4, 1, 0))) FROM numbers(5);
diff --git a/tests/queries/0_stateless/03164_analyzer_validate_tree_size.reference
b/tests/queries/0_stateless/03164_analyzer_validate_tree_size.reference new file mode 100644 index 00000000000..d00491fd7e5 --- /dev/null +++ b/tests/queries/0_stateless/03164_analyzer_validate_tree_size.reference @@ -0,0 +1 @@ +1 diff --git a/tests/queries/0_stateless/03164_analyzer_validate_tree_size.sql b/tests/queries/0_stateless/03164_analyzer_validate_tree_size.sql new file mode 100644 index 00000000000..0e581592aef --- /dev/null +++ b/tests/queries/0_stateless/03164_analyzer_validate_tree_size.sql @@ -0,0 +1,1007 @@ +CREATE TABLE t +( +c1 Int64 , +c2 Int64 , +c3 Int64 , +c4 Int64 , +c5 Int64 , +c6 Int64 , +c7 Int64 , +c8 Int64 , +c9 Int64 , +c10 Int64 , +c11 Int64 , +c12 Int64 , +c13 Int64 , +c14 Int64 , +c15 Int64 , +c16 Int64 , +c17 Int64 , +c18 Int64 , +c19 Int64 , +c20 Int64 , +c21 Int64 , +c22 Int64 , +c23 Int64 , +c24 Int64 , +c25 Int64 , +c26 Int64 , +c27 Int64 , +c28 Int64 , +c29 Int64 , +c30 Int64 , +c31 Int64 , +c32 Int64 , +c33 Int64 , +c34 Int64 , +c35 Int64 , +c36 Int64 , +c37 Int64 , +c38 Int64 , +c39 Int64 , +c40 Int64 , +c41 Int64 , +c42 Int64 , +c43 Int64 , +c44 Int64 , +c45 Int64 , +c46 Int64 , +c47 Int64 , +c48 Int64 , +c49 Int64 , +c50 Int64 , +c51 Int64 , +c52 Int64 , +c53 Int64 , +c54 Int64 , +c55 Int64 , +c56 Int64 , +c57 Int64 , +c58 Int64 , +c59 Int64 , +c60 Int64 , +c61 Int64 , +c62 Int64 , +c63 Int64 , +c64 Int64 , +c65 Int64 , +c66 Int64 , +c67 Int64 , +c68 Int64 , +c69 Int64 , +c70 Int64 , +c71 Int64 , +c72 Int64 , +c73 Int64 , +c74 Int64 , +c75 Int64 , +c76 Int64 , +c77 Int64 , +c78 Int64 , +c79 Int64 , +c80 Int64 , +c81 Int64 , +c82 Int64 , +c83 Int64 , +c84 Int64 , +c85 Int64 , +c86 Int64 , +c87 Int64 , +c88 Int64 , +c89 Int64 , +c90 Int64 , +c91 Int64 , +c92 Int64 , +c93 Int64 , +c94 Int64 , +c95 Int64 , +c96 Int64 , +c97 Int64 , +c98 Int64 , +c99 Int64 , +c100 Int64 , +c101 Int64 , +c102 Int64 , +c103 Int64 , +c104 Int64 , +c105 Int64 , +c106 Int64 , +c107 Int64 , +c108 Int64 , +c109 Int64 , +c110 Int64 , +c111 Int64 , +c112 Int64 , +c113 Int64 , +c114 Int64 , +c115 Int64 , +c116 Int64 , +c117 Int64 , +c118 Int64 , +c119 Int64 , +c120 Int64 , +c121 Int64 , +c122 Int64 , +c123 Int64 , +c124 Int64 , +c125 Int64 , +c126 Int64 , +c127 Int64 , +c128 Int64 , +c129 Int64 , +c130 Int64 , +c131 Int64 , +c132 Int64 , +c133 Int64 , +c134 Int64 , +c135 Int64 , +c136 Int64 , +c137 Int64 , +c138 Int64 , +c139 Int64 , +c140 Int64 , +c141 Int64 , +c142 Int64 , +c143 Int64 , +c144 Int64 , +c145 Int64 , +c146 Int64 , +c147 Int64 , +c148 Int64 , +c149 Int64 , +c150 Int64 , +c151 Int64 , +c152 Int64 , +c153 Int64 , +c154 Int64 , +c155 Int64 , +c156 Int64 , +c157 Int64 , +c158 Int64 , +c159 Int64 , +c160 Int64 , +c161 Int64 , +c162 Int64 , +c163 Int64 , +c164 Int64 , +c165 Int64 , +c166 Int64 , +c167 Int64 , +c168 Int64 , +c169 Int64 , +c170 Int64 , +c171 Int64 , +c172 Int64 , +c173 Int64 , +c174 Int64 , +c175 Int64 , +c176 Int64 , +c177 Int64 , +c178 Int64 , +c179 Int64 , +c180 Int64 , +c181 Int64 , +c182 Int64 , +c183 Int64 , +c184 Int64 , +c185 Int64 , +c186 Int64 , +c187 Int64 , +c188 Int64 , +c189 Int64 , +c190 Int64 , +c191 Int64 , +c192 Int64 , +c193 Int64 , +c194 Int64 , +c195 Int64 , +c196 Int64 , +c197 Int64 , +c198 Int64 , +c199 Int64 , +c200 Int64 , +c201 Int64 , +c202 Int64 , +c203 Int64 , +c204 Int64 , +c205 Int64 , +c206 Int64 , +c207 Int64 , +c208 Int64 , +c209 Int64 , +c210 Int64 , +c211 Int64 , +c212 Int64 , +c213 Int64 , +c214 Int64 , +c215 Int64 , +c216 Int64 , +c217 Int64 , +c218 Int64 , +c219 Int64 , +c220 Int64 , +c221 Int64 , +c222 Int64 , +c223 
Int64 , +c224 Int64 , +c225 Int64 , +c226 Int64 , +c227 Int64 , +c228 Int64 , +c229 Int64 , +c230 Int64 , +c231 Int64 , +c232 Int64 , +c233 Int64 , +c234 Int64 , +c235 Int64 , +c236 Int64 , +c237 Int64 , +c238 Int64 , +c239 Int64 , +c240 Int64 , +c241 Int64 , +c242 Int64 , +c243 Int64 , +c244 Int64 , +c245 Int64 , +c246 Int64 , +c247 Int64 , +c248 Int64 , +c249 Int64 , +c250 Int64 , +c251 Int64 , +c252 Int64 , +c253 Int64 , +c254 Int64 , +c255 Int64 , +c256 Int64 , +c257 Int64 , +c258 Int64 , +c259 Int64 , +c260 Int64 , +c261 Int64 , +c262 Int64 , +c263 Int64 , +c264 Int64 , +c265 Int64 , +c266 Int64 , +c267 Int64 , +c268 Int64 , +c269 Int64 , +c270 Int64 , +c271 Int64 , +c272 Int64 , +c273 Int64 , +c274 Int64 , +c275 Int64 , +c276 Int64 , +c277 Int64 , +c278 Int64 , +c279 Int64 , +c280 Int64 , +c281 Int64 , +c282 Int64 , +c283 Int64 , +c284 Int64 , +c285 Int64 , +c286 Int64 , +c287 Int64 , +c288 Int64 , +c289 Int64 , +c290 Int64 , +c291 Int64 , +c292 Int64 , +c293 Int64 , +c294 Int64 , +c295 Int64 , +c296 Int64 , +c297 Int64 , +c298 Int64 , +c299 Int64 , +c300 Int64 , +c301 Int64 , +c302 Int64 , +c303 Int64 , +c304 Int64 , +c305 Int64 , +c306 Int64 , +c307 Int64 , +c308 Int64 , +c309 Int64 , +c310 Int64 , +c311 Int64 , +c312 Int64 , +c313 Int64 , +c314 Int64 , +c315 Int64 , +c316 Int64 , +c317 Int64 , +c318 Int64 , +c319 Int64 , +c320 Int64 , +c321 Int64 , +c322 Int64 , +c323 Int64 , +c324 Int64 , +c325 Int64 , +c326 Int64 , +c327 Int64 , +c328 Int64 , +c329 Int64 , +c330 Int64 , +c331 Int64 , +c332 Int64 , +c333 Int64 , +c334 Int64 , +c335 Int64 , +c336 Int64 , +c337 Int64 , +c338 Int64 , +c339 Int64 , +c340 Int64 , +c341 Int64 , +c342 Int64 , +c343 Int64 , +c344 Int64 , +c345 Int64 , +c346 Int64 , +c347 Int64 , +c348 Int64 , +c349 Int64 , +c350 Int64 , +c351 Int64 , +c352 Int64 , +c353 Int64 , +c354 Int64 , +c355 Int64 , +c356 Int64 , +c357 Int64 , +c358 Int64 , +c359 Int64 , +c360 Int64 , +c361 Int64 , +c362 Int64 , +c363 Int64 , +c364 Int64 , +c365 Int64 , +c366 Int64 , +c367 Int64 , +c368 Int64 , +c369 Int64 , +c370 Int64 , +c371 Int64 , +c372 Int64 , +c373 Int64 , +c374 Int64 , +c375 Int64 , +c376 Int64 , +c377 Int64 , +c378 Int64 , +c379 Int64 , +c380 Int64 , +c381 Int64 , +c382 Int64 , +c383 Int64 , +c384 Int64 , +c385 Int64 , +c386 Int64 , +c387 Int64 , +c388 Int64 , +c389 Int64 , +c390 Int64 , +c391 Int64 , +c392 Int64 , +c393 Int64 , +c394 Int64 , +c395 Int64 , +c396 Int64 , +c397 Int64 , +c398 Int64 , +c399 Int64 , +c400 Int64 , +c401 Int64 , +c402 Int64 , +c403 Int64 , +c404 Int64 , +c405 Int64 , +c406 Int64 , +c407 Int64 , +c408 Int64 , +c409 Int64 , +c410 Int64 , +c411 Int64 , +c412 Int64 , +c413 Int64 , +c414 Int64 , +c415 Int64 , +c416 Int64 , +c417 Int64 , +c418 Int64 , +c419 Int64 , +c420 Int64 , +c421 Int64 , +c422 Int64 , +c423 Int64 , +c424 Int64 , +c425 Int64 , +c426 Int64 , +c427 Int64 , +c428 Int64 , +c429 Int64 , +c430 Int64 , +c431 Int64 , +c432 Int64 , +c433 Int64 , +c434 Int64 , +c435 Int64 , +c436 Int64 , +c437 Int64 , +c438 Int64 , +c439 Int64 , +c440 Int64 , +c441 Int64 , +c442 Int64 , +c443 Int64 , +c444 Int64 , +c445 Int64 , +c446 Int64 , +c447 Int64 , +c448 Int64 , +c449 Int64 , +c450 Int64 , +c451 Int64 , +c452 Int64 , +c453 Int64 , +c454 Int64 , +c455 Int64 , +c456 Int64 , +c457 Int64 , +c458 Int64 , +c459 Int64 , +c460 Int64 , +c461 Int64 , +c462 Int64 , +c463 Int64 , +c464 Int64 , +c465 Int64 , +c466 Int64 , +c467 Int64 , +c468 Int64 , +c469 Int64 , +c470 Int64 , +c471 Int64 , +c472 Int64 , +c473 Int64 , +c474 Int64 , +c475 Int64 , +c476 Int64 , 
+c477 Int64 , +c478 Int64 , +c479 Int64 , +c480 Int64 , +c481 Int64 , +c482 Int64 , +c483 Int64 , +c484 Int64 , +c485 Int64 , +c486 Int64 , +c487 Int64 , +c488 Int64 , +c489 Int64 , +c490 Int64 , +c491 Int64 , +c492 Int64 , +c493 Int64 , +c494 Int64 , +c495 Int64 , +c496 Int64 , +c497 Int64 , +c498 Int64 , +c499 Int64 , +c500 Int64 , +b1 Int64 , +b2 Int64 , +b3 Int64 , +b4 Int64 , +b5 Int64 , +b6 Int64 , +b7 Int64 , +b8 Int64 , +b9 Int64 , +b10 Int64 , +b11 Int64 , +b12 Int64 , +b13 Int64 , +b14 Int64 , +b15 Int64 , +b16 Int64 , +b17 Int64 , +b18 Int64 , +b19 Int64 , +b20 Int64 , +b21 Int64 , +b22 Int64 , +b23 Int64 , +b24 Int64 , +b25 Int64 , +b26 Int64 , +b27 Int64 , +b28 Int64 , +b29 Int64 , +b30 Int64 , +b31 Int64 , +b32 Int64 , +b33 Int64 , +b34 Int64 , +b35 Int64 , +b36 Int64 , +b37 Int64 , +b38 Int64 , +b39 Int64 , +b40 Int64 , +b41 Int64 , +b42 Int64 , +b43 Int64 , +b44 Int64 , +b45 Int64 , +b46 Int64 , +b47 Int64 , +b48 Int64 , +b49 Int64 , +b50 Int64 , +b51 Int64 , +b52 Int64 , +b53 Int64 , +b54 Int64 , +b55 Int64 , +b56 Int64 , +b57 Int64 , +b58 Int64 , +b59 Int64 , +b60 Int64 , +b61 Int64 , +b62 Int64 , +b63 Int64 , +b64 Int64 , +b65 Int64 , +b66 Int64 , +b67 Int64 , +b68 Int64 , +b69 Int64 , +b70 Int64 , +b71 Int64 , +b72 Int64 , +b73 Int64 , +b74 Int64 , +b75 Int64 , +b76 Int64 , +b77 Int64 , +b78 Int64 , +b79 Int64 , +b80 Int64 , +b81 Int64 , +b82 Int64 , +b83 Int64 , +b84 Int64 , +b85 Int64 , +b86 Int64 , +b87 Int64 , +b88 Int64 , +b89 Int64 , +b90 Int64 , +b91 Int64 , +b92 Int64 , +b93 Int64 , +b94 Int64 , +b95 Int64 , +b96 Int64 , +b97 Int64 , +b98 Int64 , +b99 Int64 , +b100 Int64 , +b101 Int64 , +b102 Int64 , +b103 Int64 , +b104 Int64 , +b105 Int64 , +b106 Int64 , +b107 Int64 , +b108 Int64 , +b109 Int64 , +b110 Int64 , +b111 Int64 , +b112 Int64 , +b113 Int64 , +b114 Int64 , +b115 Int64 , +b116 Int64 , +b117 Int64 , +b118 Int64 , +b119 Int64 , +b120 Int64 , +b121 Int64 , +b122 Int64 , +b123 Int64 , +b124 Int64 , +b125 Int64 , +b126 Int64 , +b127 Int64 , +b128 Int64 , +b129 Int64 , +b130 Int64 , +b131 Int64 , +b132 Int64 , +b133 Int64 , +b134 Int64 , +b135 Int64 , +b136 Int64 , +b137 Int64 , +b138 Int64 , +b139 Int64 , +b140 Int64 , +b141 Int64 , +b142 Int64 , +b143 Int64 , +b144 Int64 , +b145 Int64 , +b146 Int64 , +b147 Int64 , +b148 Int64 , +b149 Int64 , +b150 Int64 , +b151 Int64 , +b152 Int64 , +b153 Int64 , +b154 Int64 , +b155 Int64 , +b156 Int64 , +b157 Int64 , +b158 Int64 , +b159 Int64 , +b160 Int64 , +b161 Int64 , +b162 Int64 , +b163 Int64 , +b164 Int64 , +b165 Int64 , +b166 Int64 , +b167 Int64 , +b168 Int64 , +b169 Int64 , +b170 Int64 , +b171 Int64 , +b172 Int64 , +b173 Int64 , +b174 Int64 , +b175 Int64 , +b176 Int64 , +b177 Int64 , +b178 Int64 , +b179 Int64 , +b180 Int64 , +b181 Int64 , +b182 Int64 , +b183 Int64 , +b184 Int64 , +b185 Int64 , +b186 Int64 , +b187 Int64 , +b188 Int64 , +b189 Int64 , +b190 Int64 , +b191 Int64 , +b192 Int64 , +b193 Int64 , +b194 Int64 , +b195 Int64 , +b196 Int64 , +b197 Int64 , +b198 Int64 , +b199 Int64 , +b200 Int64 , +b201 Int64 , +b202 Int64 , +b203 Int64 , +b204 Int64 , +b205 Int64 , +b206 Int64 , +b207 Int64 , +b208 Int64 , +b209 Int64 , +b210 Int64 , +b211 Int64 , +b212 Int64 , +b213 Int64 , +b214 Int64 , +b215 Int64 , +b216 Int64 , +b217 Int64 , +b218 Int64 , +b219 Int64 , +b220 Int64 , +b221 Int64 , +b222 Int64 , +b223 Int64 , +b224 Int64 , +b225 Int64 , +b226 Int64 , +b227 Int64 , +b228 Int64 , +b229 Int64 , +b230 Int64 , +b231 Int64 , +b232 Int64 , +b233 Int64 , +b234 Int64 , +b235 Int64 , +b236 Int64 , +b237 Int64 , +b238 
Int64 , +b239 Int64 , +b240 Int64 , +b241 Int64 , +b242 Int64 , +b243 Int64 , +b244 Int64 , +b245 Int64 , +b246 Int64 , +b247 Int64 , +b248 Int64 , +b249 Int64 , +b250 Int64 , +b251 Int64 , +b252 Int64 , +b253 Int64 , +b254 Int64 , +b255 Int64 , +b256 Int64 , +b257 Int64 , +b258 Int64 , +b259 Int64 , +b260 Int64 , +b261 Int64 , +b262 Int64 , +b263 Int64 , +b264 Int64 , +b265 Int64 , +b266 Int64 , +b267 Int64 , +b268 Int64 , +b269 Int64 , +b270 Int64 , +b271 Int64 , +b272 Int64 , +b273 Int64 , +b274 Int64 , +b275 Int64 , +b276 Int64 , +b277 Int64 , +b278 Int64 , +b279 Int64 , +b280 Int64 , +b281 Int64 , +b282 Int64 , +b283 Int64 , +b284 Int64 , +b285 Int64 , +b286 Int64 , +b287 Int64 , +b288 Int64 , +b289 Int64 , +b290 Int64 , +b291 Int64 , +b292 Int64 , +b293 Int64 , +b294 Int64 , +b295 Int64 , +b296 Int64 , +b297 Int64 , +b298 Int64 , +b299 Int64 , +b300 Int64 , +b301 Int64 , +b302 Int64 , +b303 Int64 , +b304 Int64 , +b305 Int64 , +b306 Int64 , +b307 Int64 , +b308 Int64 , +b309 Int64 , +b310 Int64 , +b311 Int64 , +b312 Int64 , +b313 Int64 , +b314 Int64 , +b315 Int64 , +b316 Int64 , +b317 Int64 , +b318 Int64 , +b319 Int64 , +b320 Int64 , +b321 Int64 , +b322 Int64 , +b323 Int64 , +b324 Int64 , +b325 Int64 , +b326 Int64 , +b327 Int64 , +b328 Int64 , +b329 Int64 , +b330 Int64 , +b331 Int64 , +b332 Int64 , +b333 Int64 , +b334 Int64 , +b335 Int64 , +b336 Int64 , +b337 Int64 , +b338 Int64 , +b339 Int64 , +b340 Int64 , +b341 Int64 , +b342 Int64 , +b343 Int64 , +b344 Int64 , +b345 Int64 , +b346 Int64 , +b347 Int64 , +b348 Int64 , +b349 Int64 , +b350 Int64 , +b351 Int64 , +b352 Int64 , +b353 Int64 , +b354 Int64 , +b355 Int64 , +b356 Int64 , +b357 Int64 , +b358 Int64 , +b359 Int64 , +b360 Int64 , +b361 Int64 , +b362 Int64 , +b363 Int64 , +b364 Int64 , +b365 Int64 , +b366 Int64 , +b367 Int64 , +b368 Int64 , +b369 Int64 , +b370 Int64 , +b371 Int64 , +b372 Int64 , +b373 Int64 , +b374 Int64 , +b375 Int64 , +b376 Int64 , +b377 Int64 , +b378 Int64 , +b379 Int64 , +b380 Int64 , +b381 Int64 , +b382 Int64 , +b383 Int64 , +b384 Int64 , +b385 Int64 , +b386 Int64 , +b387 Int64 , +b388 Int64 , +b389 Int64 , +b390 Int64 , +b391 Int64 , +b392 Int64 , +b393 Int64 , +b394 Int64 , +b395 Int64 , +b396 Int64 , +b397 Int64 , +b398 Int64 , +b399 Int64 , +b400 Int64 , +b401 Int64 , +b402 Int64 , +b403 Int64 , +b404 Int64 , +b405 Int64 , +b406 Int64 , +b407 Int64 , +b408 Int64 , +b409 Int64 , +b410 Int64 , +b411 Int64 , +b412 Int64 , +b413 Int64 , +b414 Int64 , +b415 Int64 , +b416 Int64 , +b417 Int64 , +b418 Int64 , +b419 Int64 , +b420 Int64 , +b421 Int64 , +b422 Int64 , +b423 Int64 , +b424 Int64 , +b425 Int64 , +b426 Int64 , +b427 Int64 , +b428 Int64 , +b429 Int64 , +b430 Int64 , +b431 Int64 , +b432 Int64 , +b433 Int64 , +b434 Int64 , +b435 Int64 , +b436 Int64 , +b437 Int64 , +b438 Int64 , +b439 Int64 , +b440 Int64 , +b441 Int64 , +b442 Int64 , +b443 Int64 , +b444 Int64 , +b445 Int64 , +b446 Int64 , +b447 Int64 , +b448 Int64 , +b449 Int64 , +b450 Int64 , +b451 Int64 , +b452 Int64 , +b453 Int64 , +b454 Int64 , +b455 Int64 , +b456 Int64 , +b457 Int64 , +b458 Int64 , +b459 Int64 , +b460 Int64 , +b461 Int64 , +b462 Int64 , +b463 Int64 , +b464 Int64 , +b465 Int64 , +b466 Int64 , +b467 Int64 , +b468 Int64 , +b469 Int64 , +b470 Int64 , +b471 Int64 , +b472 Int64 , +b473 Int64 , +b474 Int64 , +b475 Int64 , +b476 Int64 , +b477 Int64 , +b478 Int64 , +b479 Int64 , +b480 Int64 , +b481 Int64 , +b482 Int64 , +b483 Int64 , +b484 Int64 , +b485 Int64 , +b486 Int64 , +b487 Int64 , +b488 Int64 , +b489 Int64 , +b490 Int64 , +b491 Int64 , 
+b492 Int64 ,
+b493 Int64 ,
+b494 Int64 ,
+b495 Int64 ,
+b496 Int64 ,
+b497 Int64 ,
+b498 Int64 ,
+b499 Int64 ,
+b500 Int64
+) ENGINE = Memory;
+
+insert into t(c1) values(1);
+
+SELECT count() FROM (SELECT tuple(*) FROM t);
diff --git a/tests/queries/0_stateless/03164_create_as_default.reference b/tests/queries/0_stateless/03164_create_as_default.reference
new file mode 100644
index 00000000000..aceba23beaf
--- /dev/null
+++ b/tests/queries/0_stateless/03164_create_as_default.reference
@@ -0,0 +1,5 @@
+CREATE TABLE default.src_table\n(\n    `time` DateTime(\'UTC\') DEFAULT fromUnixTimestamp(sipTimestamp),\n    `sipTimestamp` UInt64\n)\nENGINE = MergeTree\nORDER BY time\nSETTINGS index_granularity = 8192
+sipTimestamp
+time fromUnixTimestamp(sipTimestamp)
+{"time":"2024-05-20 09:00:00","sipTimestamp":"1716195600"}
+{"time":"2024-05-20 09:00:00","sipTimestamp":"1716195600"}
diff --git a/tests/queries/0_stateless/03164_create_as_default.sql b/tests/queries/0_stateless/03164_create_as_default.sql
new file mode 100644
index 00000000000..e9fd7c1e35a
--- /dev/null
+++ b/tests/queries/0_stateless/03164_create_as_default.sql
@@ -0,0 +1,27 @@
+DROP TABLE IF EXISTS src_table;
+DROP TABLE IF EXISTS copied_table;
+
+CREATE TABLE src_table
+(
+    time DateTime('UTC') DEFAULT fromUnixTimestamp(sipTimestamp),
+    sipTimestamp UInt64
+)
+ENGINE = MergeTree
+ORDER BY time;
+
+INSERT INTO src_table(sipTimestamp) VALUES (toUnixTimestamp(toDateTime('2024-05-20 09:00:00', 'UTC')));
+
+CREATE TABLE copied_table AS src_table;
+
+ALTER TABLE copied_table RENAME COLUMN `sipTimestamp` TO `timestamp`;
+
+SHOW CREATE TABLE src_table;
+
+SELECT name, default_expression FROM system.columns WHERE database = currentDatabase() AND table = 'src_table' ORDER BY name;
+INSERT INTO src_table(sipTimestamp) VALUES (toUnixTimestamp(toDateTime('2024-05-20 09:00:00', 'UTC')));
+
+SELECT * FROM src_table ORDER BY time FORMAT JSONEachRow;
+SELECT * FROM copied_table ORDER BY time FORMAT JSONEachRow;
+
+DROP TABLE src_table;
+DROP TABLE copied_table;
diff --git a/tests/queries/0_stateless/03164_materialize_skip_index.reference b/tests/queries/0_stateless/03164_materialize_skip_index.reference
new file mode 100644
index 00000000000..34251101e89
--- /dev/null
+++ b/tests/queries/0_stateless/03164_materialize_skip_index.reference
@@ -0,0 +1,52 @@
+20
+Expression ((Project names + Projection))
+  Aggregating
+    Expression (Before GROUP BY)
+      Expression
+        ReadFromMergeTree (default.t_skip_index_insert)
+        Indexes:
+          Skip
+            Name: idx_a
+            Description: minmax GRANULARITY 1
+            Parts: 2/2
+            Granules: 50/50
+          Skip
+            Name: idx_b
+            Description: set GRANULARITY 1
+            Parts: 2/2
+            Granules: 50/50
+20
+Expression ((Project names + Projection))
+  Aggregating
+    Expression (Before GROUP BY)
+      Expression
+        ReadFromMergeTree (default.t_skip_index_insert)
+        Indexes:
+          Skip
+            Name: idx_a
+            Description: minmax GRANULARITY 1
+            Parts: 1/1
+            Granules: 6/50
+          Skip
+            Name: idx_b
+            Description: set GRANULARITY 1
+            Parts: 1/1
+            Granules: 6/6
+20
+Expression ((Project names + Projection))
+  Aggregating
+    Expression (Before GROUP BY)
+      Expression
+        ReadFromMergeTree (default.t_skip_index_insert)
+        Indexes:
+          Skip
+            Name: idx_a
+            Description: minmax GRANULARITY 1
+            Parts: 1/2
+            Granules: 6/50
+          Skip
+            Name: idx_b
+            Description: set GRANULARITY 1
+            Parts: 1/1
+            Granules: 6/6
+4 0
diff --git a/tests/queries/0_stateless/03164_materialize_skip_index.sql b/tests/queries/0_stateless/03164_materialize_skip_index.sql
new file mode 100644
index 00000000000..4e59ef6b6cd
--- /dev/null
+++
b/tests/queries/0_stateless/03164_materialize_skip_index.sql
@@ -0,0 +1,50 @@
+DROP TABLE IF EXISTS t_skip_index_insert;
+
+CREATE TABLE t_skip_index_insert
+(
+    a UInt64,
+    b UInt64,
+    INDEX idx_a a TYPE minmax,
+    INDEX idx_b b TYPE set(3)
+)
+ENGINE = MergeTree ORDER BY tuple() SETTINGS index_granularity = 4;
+
+SET allow_experimental_analyzer = 1;
+SET materialize_skip_indexes_on_insert = 0;
+
+SYSTEM STOP MERGES t_skip_index_insert;
+
+INSERT INTO t_skip_index_insert SELECT number, number / 50 FROM numbers(100);
+INSERT INTO t_skip_index_insert SELECT number, number / 50 FROM numbers(100, 100);
+
+SELECT count() FROM t_skip_index_insert WHERE a >= 110 AND a < 130 AND b = 2;
+EXPLAIN indexes = 1 SELECT count() FROM t_skip_index_insert WHERE a >= 110 AND a < 130 AND b = 2;
+
+SYSTEM START MERGES t_skip_index_insert;
+OPTIMIZE TABLE t_skip_index_insert FINAL;
+
+SELECT count() FROM t_skip_index_insert WHERE a >= 110 AND a < 130 AND b = 2;
+EXPLAIN indexes = 1 SELECT count() FROM t_skip_index_insert WHERE a >= 110 AND a < 130 AND b = 2;
+
+TRUNCATE TABLE t_skip_index_insert;
+
+INSERT INTO t_skip_index_insert SELECT number, number / 50 FROM numbers(100);
+INSERT INTO t_skip_index_insert SELECT number, number / 50 FROM numbers(100, 100);
+
+SET mutations_sync = 2;
+
+ALTER TABLE t_skip_index_insert MATERIALIZE INDEX idx_a;
+ALTER TABLE t_skip_index_insert MATERIALIZE INDEX idx_b;
+
+SELECT count() FROM t_skip_index_insert WHERE a >= 110 AND a < 130 AND b = 2;
+EXPLAIN indexes = 1 SELECT count() FROM t_skip_index_insert WHERE a >= 110 AND a < 130 AND b = 2;
+
+DROP TABLE IF EXISTS t_skip_index_insert;
+
+SYSTEM FLUSH LOGS;
+
+SELECT count(), sum(ProfileEvents['MergeTreeDataWriterSkipIndicesCalculationMicroseconds'])
+FROM system.query_log
+WHERE current_database = currentDatabase()
+    AND query LIKE 'INSERT INTO t_skip_index_insert SELECT%'
+    AND type = 'QueryFinish';
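03164_materialize_skip_index turns skip-index writing off at insert time and then builds the index either by merge (OPTIMIZE ... FINAL) or explicitly (ALTER ... MATERIALIZE INDEX), checking granule pruning in EXPLAIN each time. A condensed, self-contained version of that flow, with a hypothetical table name:

DROP TABLE IF EXISTS t_skip_demo;
CREATE TABLE t_skip_demo (a UInt64, INDEX idx_a a TYPE minmax)
ENGINE = MergeTree ORDER BY tuple() SETTINGS index_granularity = 4;

SET materialize_skip_indexes_on_insert = 0; -- parts are written without index files
INSERT INTO t_skip_demo SELECT number FROM numbers(100);

SET mutations_sync = 2;
ALTER TABLE t_skip_demo MATERIALIZE INDEX idx_a;          -- build the missing index granules
SELECT count() FROM t_skip_demo WHERE a >= 10 AND a < 20; -- the index can now prune granules
DROP TABLE t_skip_demo;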
diff --git a/tests/queries/0_stateless/03164_materialize_statistics.reference b/tests/queries/0_stateless/03164_materialize_statistics.reference
new file mode 100644
index 00000000000..c209d2e8b63
--- /dev/null
+++ b/tests/queries/0_stateless/03164_materialize_statistics.reference
@@ -0,0 +1,10 @@
+10
+10
+10
+statistic not used Condition less(b, 10_UInt8) moved to PREWHERE
+statistic not used Condition less(a, 10_UInt8) moved to PREWHERE
+statistic used after merge Condition less(a, 10_UInt8) moved to PREWHERE
+statistic used after merge Condition less(b, 10_UInt8) moved to PREWHERE
+statistic used after materialize Condition less(a, 10_UInt8) moved to PREWHERE
+statistic used after materialize Condition less(b, 10_UInt8) moved to PREWHERE
+2 0
diff --git a/tests/queries/0_stateless/03164_materialize_statistics.sql b/tests/queries/0_stateless/03164_materialize_statistics.sql
new file mode 100644
index 00000000000..763644d16ab
--- /dev/null
+++ b/tests/queries/0_stateless/03164_materialize_statistics.sql
@@ -0,0 +1,49 @@
+DROP TABLE IF EXISTS t_statistic_materialize;
+
+SET allow_experimental_analyzer = 1;
+SET allow_experimental_statistic = 1;
+SET allow_statistic_optimize = 1;
+SET materialize_statistics_on_insert = 0;
+
+CREATE TABLE t_statistic_materialize
+(
+    a Int64 STATISTIC(tdigest),
+    b Int16 STATISTIC(tdigest),
+) ENGINE = MergeTree() ORDER BY tuple()
+SETTINGS min_bytes_for_wide_part = 0, enable_vertical_merge_algorithm = 0; -- TODO: there is a bug in vertical merge with statistics.
+
+INSERT INTO t_statistic_materialize SELECT number, -number FROM system.numbers LIMIT 10000;
+
+SELECT count(*) FROM t_statistic_materialize WHERE b < 10 and a < 10 SETTINGS log_comment = 'statistic not used';
+
+OPTIMIZE TABLE t_statistic_materialize FINAL;
+
+SELECT count(*) FROM t_statistic_materialize WHERE b < 10 and a < 10 SETTINGS log_comment = 'statistic used after merge';
+
+TRUNCATE TABLE t_statistic_materialize;
+SET mutations_sync = 2;
+
+INSERT INTO t_statistic_materialize SELECT number, -number FROM system.numbers LIMIT 10000;
+ALTER TABLE t_statistic_materialize MATERIALIZE STATISTIC a, b TYPE tdigest;
+
+SELECT count(*) FROM t_statistic_materialize WHERE b < 10 and a < 10 SETTINGS log_comment = 'statistic used after materialize';
+
+DROP TABLE t_statistic_materialize;
+
+SYSTEM FLUSH LOGS;
+
+SELECT log_comment, message FROM system.text_log JOIN
+(
+    SELECT Settings['log_comment'] AS log_comment, query_id FROM system.query_log
+    WHERE current_database = currentDatabase()
+        AND query LIKE 'SELECT count(*) FROM t_statistic_materialize%'
+        AND type = 'QueryFinish'
+) AS query_log USING (query_id)
+WHERE message LIKE '%moved to PREWHERE%'
+ORDER BY event_time_microseconds;
+
+SELECT count(), sum(ProfileEvents['MergeTreeDataWriterStatisticsCalculationMicroseconds'])
+FROM system.query_log
+WHERE current_database = currentDatabase()
+    AND query LIKE 'INSERT INTO t_statistic_materialize SELECT%'
+    AND type = 'QueryFinish';
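03164_materialize_statistics follows the same pattern for column statistics: inserts skip statistics collection, then a merge or an explicit MATERIALIZE STATISTIC brings the statistics in, observed through the PREWHERE condition reordering logged in text_log. A condensed sketch under the same experimental settings, with a hypothetical table name and a single column:

DROP TABLE IF EXISTS t_stat_demo;
SET allow_experimental_statistic = 1, allow_statistic_optimize = 1;
SET materialize_statistics_on_insert = 0;

CREATE TABLE t_stat_demo (a Int64 STATISTIC(tdigest)) ENGINE = MergeTree ORDER BY tuple();
INSERT INTO t_stat_demo SELECT number FROM numbers(10000);

SET mutations_sync = 2;
ALTER TABLE t_stat_demo MATERIALIZE STATISTIC a TYPE tdigest;
SELECT count(*) FROM t_stat_demo WHERE a < 10; -- statistics can now drive PREWHERE ordering
DROP TABLE t_stat_demo;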
diff --git a/tests/queries/1_stateful/00091_prewhere_two_conditions.sql b/tests/queries/1_stateful/00091_prewhere_two_conditions.sql
index cbfbbaa2662..cd88743160c 100644
--- a/tests/queries/1_stateful/00091_prewhere_two_conditions.sql
+++ b/tests/queries/1_stateful/00091_prewhere_two_conditions.sql
@@ -14,6 +14,6 @@ WITH toTimeZone(EventTime, 'Asia/Dubai') AS xyz SELECT uniq(*) FROM test.hits WH
 SET optimize_move_to_prewhere = 0;
 SET enable_multiple_prewhere_read_steps = 0;
-SELECT uniq(URL) FROM test.hits WHERE toTimeZone(EventTime, 'Asia/Dubai') >= '2014-03-20 00:00:00' AND toTimeZone(EventTime, 'Asia/Dubai') < '2014-03-21 00:00:00'; -- { serverError 307 }
-SELECT uniq(URL) FROM test.hits WHERE toTimeZone(EventTime, 'Asia/Dubai') >= '2014-03-20 00:00:00' AND URL != '' AND toTimeZone(EventTime, 'Asia/Dubai') < '2014-03-21 00:00:00'; -- { serverError 307 }
-SELECT uniq(URL) FROM test.hits PREWHERE toTimeZone(EventTime, 'Asia/Dubai') >= '2014-03-20 00:00:00' AND URL != '' AND toTimeZone(EventTime, 'Asia/Dubai') < '2014-03-21 00:00:00'; -- { serverError 307 }
+SELECT uniq(URL) FROM test.hits WHERE toTimeZone(EventTime, 'Asia/Dubai') >= '2014-03-20 00:00:00' AND toTimeZone(EventTime, 'Asia/Dubai') < '2014-03-21 00:00:00'; -- { serverError TOO_MANY_BYTES }
+SELECT uniq(URL) FROM test.hits WHERE toTimeZone(EventTime, 'Asia/Dubai') >= '2014-03-20 00:00:00' AND URL != '' AND toTimeZone(EventTime, 'Asia/Dubai') < '2014-03-21 00:00:00'; -- { serverError TOO_MANY_BYTES }
+SELECT uniq(URL) FROM test.hits PREWHERE toTimeZone(EventTime, 'Asia/Dubai') >= '2014-03-20 00:00:00' AND URL != '' AND toTimeZone(EventTime, 'Asia/Dubai') < '2014-03-21 00:00:00'; -- { serverError TOO_MANY_BYTES }
diff --git a/tests/queries/1_stateful/00175_counting_resources_in_subqueries.sql b/tests/queries/1_stateful/00175_counting_resources_in_subqueries.sql
index fe7837d7ff1..63eca96414f 100644
--- a/tests/queries/1_stateful/00175_counting_resources_in_subqueries.sql
+++ b/tests/queries/1_stateful/00175_counting_resources_in_subqueries.sql
@@ -1,20 +1,20 @@
 -- the work for scalar subquery is properly accounted:
 SET max_rows_to_read = 1000000;
-SELECT 1 = (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)); -- { serverError 158 }
+SELECT 1 = (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)); -- { serverError TOO_MANY_ROWS }
 -- the work for subquery in IN is properly accounted:
 SET max_rows_to_read = 1000000;
-SELECT 1 IN (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)); -- { serverError 158 }
+SELECT 1 IN (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)); -- { serverError TOO_MANY_ROWS }
 -- this query reads from the table twice:
 SET max_rows_to_read = 15000000;
-SELECT count() IN (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)) FROM test.hits WHERE NOT ignore(AdvEngineID); -- { serverError 158 }
+SELECT count() IN (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)) FROM test.hits WHERE NOT ignore(AdvEngineID); -- { serverError TOO_MANY_ROWS }
 -- the resources are properly accounted even if the subquery is evaluated in advance to facilitate the index analysis.
 -- this query is using index and filter out the second reading pass.
 SET max_rows_to_read = 1000000;
-SELECT count() FROM test.hits WHERE CounterID > (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)); -- { serverError 158 }
+SELECT count() FROM test.hits WHERE CounterID > (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)); -- { serverError TOO_MANY_ROWS }
 -- this query is using index but have to read all the data twice.
 SET max_rows_to_read = 10000000;
-SELECT count() FROM test.hits WHERE CounterID < (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)); -- { serverError 158 }
+SELECT count() FROM test.hits WHERE CounterID < (SELECT count() FROM test.hits WHERE NOT ignore(AdvEngineID)); -- { serverError TOO_MANY_ROWS }
diff --git a/utils/check-style/aspell-ignore/en/aspell-dict.txt b/utils/check-style/aspell-ignore/en/aspell-dict.txt
index 244f2ad98ff..6efbf47da7d 100644
--- a/utils/check-style/aspell-ignore/en/aspell-dict.txt
+++ b/utils/check-style/aspell-ignore/en/aspell-dict.txt
@@ -1735,6 +1735,8 @@ hdfs
 hdfsCluster
 heredoc
 heredocs
+hilbertDecode
+hilbertEncode
 hiveHash
 holistics
 homebrew
diff --git a/utils/list-versions/version_date.tsv b/utils/list-versions/version_date.tsv
index 1f47a999162..f7d84cce4b1 100644
--- a/utils/list-versions/version_date.tsv
+++ b/utils/list-versions/version_date.tsv
@@ -1,3 +1,4 @@
+v24.5.1.1763-stable 2024-06-01
 v24.4.1.2088-stable 2024-05-01
 v24.3.3.102-lts 2024-05-01
 v24.3.2.23-lts 2024-04-03
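The 00175 hunks above are about resource accounting: rows read by a scalar or IN subquery must count against the max_rows_to_read limit of the outer query. The mechanism is easy to reproduce without the test.hits dataset; a minimal sketch using system.numbers:

SET max_rows_to_read = 1000;
SELECT sum(number) FROM numbers(10000); -- { serverError TOO_MANY_ROWS }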